Sample records for visual perceptual learning

  1. Visual Perceptual Learning and Models.

    PubMed

    Dosher, Barbara; Lu, Zhong-Lin

    2017-09-15

    Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and in understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.
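    The reweighting account summarized above can be illustrated with a toy simulation: a fixed bank of noisy, orientation-tuned channels feeds a decision unit, and learning adjusts only the readout weights through feedback-driven delta-rule updates. This is a minimal sketch under assumed tuning, noise, and learning-rate parameters, not the authors' actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
PREFS = np.linspace(-1.0, 1.0, 8)  # channel preferred orientations (arbitrary units)

def channel_responses(stimulus, noise=0.4):
    """Noisy responses of a fixed bank of orientation-tuned channels."""
    tuning = np.exp(-(PREFS - stimulus) ** 2 / 0.5)
    return tuning + noise * rng.standard_normal(PREFS.size)

def accuracy(w, n=1000):
    """Proportion correct for a left/right (-/+) tilt discrimination."""
    correct = 0
    for _ in range(n):
        label = rng.choice([-1.0, 1.0])
        r = channel_responses(0.3 * label)
        correct += ((w @ r) > 0) == (label > 0)
    return correct / n

def train(w, n_trials=2000, lr=0.02):
    """Delta-rule reweighting of the readout; channel tuning never changes."""
    for _ in range(n_trials):
        label = rng.choice([-1.0, 1.0])
        r = channel_responses(0.3 * label)
        w += lr * (label - np.tanh(w @ r)) * r  # feedback-driven update
    return w

w = np.zeros(PREFS.size)
before = accuracy(w)   # untrained readout: near chance
w = train(w)
after = accuracy(w)    # learned readout: well above chance
```

    Because the sensory channels themselves never change, any improvement reflects reweighting of the readout, which is the sense in which such models account for both specificity and transfer of learning.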

  2. Perceptual learning in children with visual impairment improves near visual acuity.

    PubMed

    Huurneman, Bianca; Boonstra, F Nienke; Cox, Ralf F A; van Rens, Ger; Cillessen, Antonius H N

    2013-09-17

    This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. Participants were 45 children with visual impairment and 29 children with normal vision. Children with visual impairment were divided into three groups: a magnifier group (n = 12), a crowded perceptual learning group (n = 18), and an uncrowded perceptual learning group (n = 15). Children with normal vision were also divided into three groups, but were measured only at baseline. Dependent variables were single near visual acuity (NVA), crowded NVA, LH line 50% crowding NVA, number of trials, accuracy, performance time, number of small errors, and number of large errors. Children with visual impairment trained for six weeks, twice per week, in 30-minute sessions (12 training sessions in total). After training, children showed significant improvement of NVA in addition to specific improvements on the training task. The crowded perceptual learning group showed the largest acuity improvements (1.7 logMAR lines on the crowded chart, P < 0.001). Only the children in the crowded perceptual learning group showed improvements on all NVA charts. Children with visual impairment benefit from perceptual training. While task-specific improvements were observed in all training groups, transfer to crowded NVA was largest in the crowded perceptual learning group. To our knowledge, this is the first study to provide evidence for the improvement of NVA by perceptual learning in children with visual impairment. (http://www.trialregister.nl, number NTR2537).
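    For readers unfamiliar with the units: logMAR is the base-10 logarithm of the minimum angle of resolution in arcminutes, each chart line corresponds to 0.1 logMAR, and lower values mean better acuity. The 1.7-line gain reported above can be expressed with two small helpers; the starting acuity below is an arbitrary example, not a value from the study:

```python
def logmar_to_decimal(logmar):
    """Decimal acuity equivalent: MAR = 10**logMAR arcmin, decimal = 1/MAR."""
    return 10.0 ** (-logmar)

def improve_by_lines(logmar, n_lines):
    """Each logMAR chart line is 0.1 log units; improvement lowers the score."""
    return logmar - 0.1 * n_lines

start = 0.4                          # hypothetical crowded NVA (logMAR)
end = improve_by_lines(start, 1.7)   # the reported 1.7-line mean gain -> 0.23
```

    On this scale a 1.7-line gain is a factor of 10**0.17, i.e. roughly a 48% reduction in the minimum resolvable angle.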

  3. The cerebellum and visual perceptual learning: evidence from a motion extrapolation task.

    PubMed

    Deluca, Cristina; Golzar, Ashkan; Santandrea, Elisa; Lo Gerfo, Emanuele; Eštočinová, Jana; Moretto, Giuseppe; Fiaschi, Antonio; Panzeri, Marta; Mariotti, Caterina; Tinazzi, Michele; Chelazzi, Leonardo

    2014-09-01

    Visual perceptual learning is widely assumed to reflect plastic changes occurring along the cerebro-cortical visual pathways, including at the earliest stages of processing, though increasing evidence indicates that higher-level brain areas are also involved. Here we addressed the possibility that the cerebellum plays an important role in visual perceptual learning. Within the realm of motor control, the cerebellum supports learning of new skills and recalibration of motor commands when movement execution is consistently perturbed (adaptation). Growing evidence indicates that the cerebellum is also involved in cognition and mediates forms of cognitive learning. Therefore, the obvious question arises whether the cerebellum might play a similar role in learning and adaptation within the perceptual domain. We explored a possible deficit in visual perceptual learning (and adaptation) in patients with cerebellar damage using variants of a novel motion extrapolation psychophysical paradigm. Compared to their age- and gender-matched controls, patients with focal damage to the posterior (but not the anterior) cerebellum showed strongly diminished learning, in terms of both rate and amount of improvement over time. Consistent with a double-dissociation pattern, patients with focal damage to the anterior cerebellum instead showed more severe clinical motor deficits, indicative of a distinct role of the anterior cerebellum in the motor domain. The collected evidence demonstrates that a pure form of slow, incremental visual perceptual learning is crucially dependent on an intact cerebellum, supporting the notion that the human cerebellum acts as a learning device for motor, cognitive, and perceptual functions. We interpret the deficit in terms of an inability to fine-tune predictive models of the incoming flow of visual perceptual input over time. Moreover, our results suggest a strong dissociation between the roles of different portions of the cerebellum in motor versus non-motor functions, with only the posterior lobe being responsible for learning in the perceptual domain. Copyright © 2014. Published by Elsevier Ltd.

  4. Perceptual learning and adult cortical plasticity.

    PubMed

    Gilbert, Charles D; Li, Wu; Piech, Valentin

    2009-06-15

    The visual cortex retains the capacity for experience-dependent changes, or plasticity, of cortical function and cortical circuitry throughout life. These changes constitute the mechanism of perceptual learning in normal visual experience and in recovery of function after CNS damage. Such plasticity can be seen at multiple stages in the visual pathway, including primary visual cortex. The functional changes associated with perceptual learning involve both long-term modification of cortical circuits during the course of learning and short-term dynamics in the functional properties of cortical neurons. These dynamics are subject to top-down influences of attention, expectation, and perceptual task. As a consequence, each cortical area is an adaptive processor, altering its function in accordance with immediate perceptual demands.

  5. Perceptual grouping enhances visual plasticity.

    PubMed

    Mastropasqua, Tommaso; Turatto, Massimo

    2013-01-01

    Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented at adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity.

  6. Visual Perceptual Echo Reflects Learning of Regularities in Rapid Luminance Sequences.

    PubMed

    Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K

    2017-08-30

    A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic, features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence and examined the effects on the perceptual echo, finding that echo amplitude increased linearly with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of the "perceptual echo" might index such learning. The perceptual echo is a long-lasting reverberation between a rapidly changing visual input and evoked neural activity, apparent in cross-correlations between occipital EEG and stimulus sequences, peaking in the alpha (∼10 Hz) range. We indeed found that the perceptual echo is enhanced by repeatedly presenting the same visual sequence, indicating that the human visual system can rapidly and automatically learn regularities embedded within fast-changing dynamic sequences. These results point to a previously undiscovered regularity learning mechanism, operating at a rate defined by the alpha frequency. Copyright © 2017 the authors.
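    The echo analysis itself is a stimulus–EEG cross-correlation. A minimal simulation (with an assumed sample rate, echo kernel, and noise level, not the authors' parameters): build a synthetic "EEG" by convolving a random luminance sequence with a damped 10 Hz kernel plus noise, then recover the reverberation by cross-correlating at positive lags:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 160                  # assumed sample rate (Hz)
t = np.arange(fs) / fs    # 1 s of lags
kernel = np.exp(-t / 0.4) * np.cos(2 * np.pi * 10 * t)  # damped ~10 Hz "echo"

lum = rng.standard_normal(30 * fs)          # 30 s random luminance sequence
eeg = np.convolve(lum, kernel)[:lum.size]   # stimulus reverberates in the "EEG"
eeg += 2.0 * rng.standard_normal(lum.size)  # measurement noise

# cross-correlation at positive lags estimates the echo kernel
xcorr = np.array([np.dot(lum[: lum.size - lag], eeg[lag:]) / lum.size
                  for lag in range(fs)])
```

    With white-noise input, the expected cross-correlation at each lag equals the kernel value at that lag, so the recovered `xcorr` trace oscillates at the kernel's ~10 Hz rate, mirroring the alpha-band periodicity of the real echo.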

  7. Exogenous attention facilitates location transfer of perceptual learning.

    PubMed

    Donovan, Ian; Szpiro, Sarit; Carrasco, Marisa

    2015-01-01

    Perceptual skills can be improved through practice on a perceptual task, even in adulthood. Visual perceptual learning is known to be mostly specific to the trained retinal location, which is considered as evidence of neural plasticity in retinotopic early visual cortex. Recent findings demonstrate that transfer of learning to untrained locations can occur under some specific training procedures. Here, we evaluated whether exogenous attention facilitates transfer of perceptual learning to untrained locations, both adjacent to the trained locations (Experiment 1) and distant from them (Experiment 2). The results reveal that attention facilitates transfer of perceptual learning to untrained locations in both experiments, and that this transfer occurs both within and across visual hemifields. These findings show that training with exogenous attention is a powerful regime that is able to overcome the major limitation of location specificity.

  8. Exogenous attention facilitates location transfer of perceptual learning

    PubMed Central

    Donovan, Ian; Szpiro, Sarit; Carrasco, Marisa

    2015-01-01

    Perceptual skills can be improved through practice on a perceptual task, even in adulthood. Visual perceptual learning is known to be mostly specific to the trained retinal location, which is considered as evidence of neural plasticity in retinotopic early visual cortex. Recent findings demonstrate that transfer of learning to untrained locations can occur under some specific training procedures. Here, we evaluated whether exogenous attention facilitates transfer of perceptual learning to untrained locations, both adjacent to the trained locations (Experiment 1) and distant from them (Experiment 2). The results reveal that attention facilitates transfer of perceptual learning to untrained locations in both experiments, and that this transfer occurs both within and across visual hemifields. These findings show that training with exogenous attention is a powerful regime that is able to overcome the major limitation of location specificity. PMID:26426818

  9. Neural correlates of context-dependent feature conjunction learning in visual search tasks.

    PubMed

    Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U

    2016-06-01

    Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.

  10. Time course influences transfer of visual perceptual learning across spatial location.

    PubMed

    Larcombe, S J; Kennard, C; Bridge, H

    2017-06-01

    Visual perceptual learning describes the improvement of visual perception with repeated practice. Previous research has established that the learning effects of perceptual training may be transferable to untrained stimulus attributes such as spatial location under certain circumstances. However, the mechanisms involved in transfer have not yet been fully elucidated. Here, we investigated the effect of altering training time course on the transferability of learning effects. Participants were trained on a motion direction discrimination task or a sinusoidal grating orientation discrimination task in a single visual hemifield. The 4000 training trials were either condensed into one day, or spread evenly across five training days. When participants were trained over a five-day period, there was transfer of learning to both the untrained visual hemifield and the untrained task. In contrast, when the same amount of training was condensed into a single day, participants did not show any transfer of learning. Thus, learning time course may influence the transferability of perceptual learning effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Perceptual Grouping Enhances Visual Plasticity

    PubMed Central

    Mastropasqua, Tommaso; Turatto, Massimo

    2013-01-01

    Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented at adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity. PMID:23301100

  12. Effects of regular aerobic exercise on visual perceptual learning.

    PubMed

    Connell, Charlotte J W; Thompson, Benjamin; Green, Hayden; Sullivan, Rachel K; Gant, Nicholas

    2017-12-02

    This study investigated the influence of five days of moderate-intensity aerobic exercise on the acquisition and consolidation of visual perceptual learning using a motion direction discrimination (MDD) task. The timing of exercise relative to learning was manipulated by administering exercise either before or after perceptual training. Within a matched-subjects design, twenty-seven healthy participants (n = 9 per group) completed five consecutive days of perceptual training on a MDD task under one of three interventions: no exercise, exercise before the MDD task, or exercise after the MDD task. MDD task accuracy improved in all groups over the five-day period, but there was a trend for impaired learning when exercise was performed before visual perceptual training. MDD task accuracy (mean ± SD) increased by 4.5 ± 6.5% in the exercise-before group, 11.8 ± 6.4% in the exercise-after group, and 11.3 ± 7.2% in the no-exercise group. All intervention groups displayed similar MDD threshold reductions for the trained and untrained motion axes after training. These findings suggest that moderate daily exercise does not enhance the rate of visual perceptual learning for an MDD task or the transfer of learning to an untrained motion axis. Furthermore, exercise performed immediately prior to a visual perceptual learning task may impair learning. Further research with larger groups is required in order to better understand these effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training.

    PubMed

    Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower-rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).

  14. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training

    PubMed Central

    Eberhardt, Silvio P.; Auer Jr., Edward T.; Bernstein, Lynne E.

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower-rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT). PMID:25400566

  15. Metacognitive Confidence Increases with, but Does Not Determine, Visual Perceptual Learning.

    PubMed

    Zizlsperger, Leopold; Kümmel, Florian; Haarmeier, Thomas

    2016-01-01

    While perceptual learning increases objective sensitivity, its effects on the ongoing interaction between perception and its metacognitive evaluation have rarely been investigated. Visual perception has been described as a process of probabilistic inference featuring metacognitive evaluations of choice certainty. For visual motion perception in healthy, naive human subjects, here we show that perceptual sensitivity, and confidence in it, increased with training. The metacognitive sensitivity, estimated from certainty ratings by a bias-free signal detection theoretic approach, in contrast, did not. Concomitant 3 Hz transcranial alternating current stimulation (tACS) was applied in compliance with previous findings on effective high-low cross-frequency coupling subserving signal detection. While perceptual accuracy and confidence in it improved with training, there were no statistically significant tACS effects. Neither metacognitive sensitivity in distinguishing between their own correct and incorrect stimulus classifications nor decision confidence itself determined the subjects' visual perceptual learning. Improvements of objective performance, and of the metacognitive confidence in it, were instead determined by the perceptual sensitivity at the outset of the experiment. Post-decision certainty in visual perceptual learning was neither independent of objective performance nor requisite for changes in sensitivity, but rather covaried with objective performance. The exact functional role of metacognitive confidence in human visual perception has yet to be determined.
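    The "bias-free signal detection theoretic approach" mentioned here can be illustrated with two standard estimators: type-1 d' for objective sensitivity, and the area under the type-2 ROC for metacognitive sensitivity. This is a generic textbook sketch, not the authors' exact analysis:

```python
from statistics import NormalDist

def dprime(hits, misses, fas, crs):
    """Type-1 sensitivity from a yes/no count table (log-linear correction
    guards against infinite z-scores at rates of 0 or 1)."""
    z = NormalDist().inv_cdf
    h = (hits + 0.5) / (hits + misses + 1)
    f = (fas + 0.5) / (fas + crs + 1)
    return z(h) - z(f)

def type2_auc(conf_correct, conf_error):
    """P(confidence on a correct trial exceeds confidence on an error trial):
    a nonparametric, bias-free index of metacognitive sensitivity."""
    wins = sum((c > e) + 0.5 * (c == e)
               for c in conf_correct for e in conf_error)
    return wins / (len(conf_correct) * len(conf_error))
```

    The key property exploited by the study is that the type-2 measure can stay flat while type-1 d' improves, because the two quantities are computed from different aspects of the same trials (choice accuracy vs. how well confidence tracks accuracy).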

  16. Gains following perceptual learning are closely linked to the initial visual acuity.

    PubMed

    Yehezkel, Oren; Sterkin, Anna; Lev, Maria; Levi, Dennis M; Polat, Uri

    2016-04-28

    The goal of the present study was to evaluate the dependence of perceptual learning gains on initial visual acuity (VA). A large sample of normally sighted and presbyopic subjects (N = 119; aged 40 to 63) with a wide range of uncorrected near VAs (-0.12 to 0.8 logMAR) underwent perceptual learning. Training consisted of detecting briefly presented Gabor stimuli under spatial and temporal masking conditions. Consistent with previous findings, perceptual learning induced a significant improvement in near VA and reading speed under conditions of limited exposure duration. Our results show that the improvements in VA and reading speed observed following perceptual learning are closely linked to the initial VA, with only a minor fraction of the observed improvement attributable to the additional sessions performed by those with worse VA.

  17. Relationships between Visual and Auditory Perceptual Skills and Comprehension in Students with Learning Disabilities.

    ERIC Educational Resources Information Center

    Weaver, Phyllis A.; Rosner, Jerome

    1979-01-01

    Scores of 25 learning disabled students (aged 9 to 13) were compared on five tests: a visual-perceptual test (Coloured Progressive Matrices); an auditory-perceptual test (Auditory Motor Placement); a listening and reading comprehension test (Durrell Listening-Reading Series); and a word recognition test (Word Recognition subtest, Diagnostic…

  18. Perceptual Training Strongly Improves Visual Motion Perception in Schizophrenia

    ERIC Educational Resources Information Center

    Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue

    2011-01-01

    Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…

  19. Perceptual learning modifies the functional specializations of visual cortical areas.

    PubMed

    Chen, Nihong; Cai, Peng; Zhou, Tiangang; Thompson, Benjamin; Fang, Fang

    2016-05-17

    Training can improve performance of perceptual tasks. This phenomenon, known as perceptual learning, is strongest for the trained task and stimulus, leading to a widely accepted assumption that the associated neuronal plasticity is restricted to brain circuits that mediate performance of the trained task. Nevertheless, learning does transfer to other tasks and stimuli, implying the presence of more widespread plasticity. Here, we trained human subjects to discriminate the direction of coherent motion stimuli. The behavioral learning effect substantially transferred to noisy motion stimuli. We used transcranial magnetic stimulation (TMS) and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms underlying this transfer of learning. The TMS experiment revealed dissociable, causal contributions of V3A (one of the visual areas in the extrastriate visual cortex) and MT+ (middle temporal/medial superior temporal cortex) to coherent and noisy motion processing. Surprisingly, the contribution of MT+ to noisy motion processing was replaced by V3A after perceptual training. The fMRI experiment complemented and corroborated the TMS finding. Multivariate pattern analysis showed that, before training, among visual cortical areas, coherent and noisy motion were decoded most accurately in V3A and MT+, respectively. After training, both kinds of motion were decoded most accurately in V3A. Our findings demonstrate that the effects of perceptual learning extend far beyond the retuning of specific neural populations for the trained stimuli. Learning can dramatically modify the inherent functional specializations of visual cortical areas and dynamically reweight their contributions to perceptual decisions based on their representational qualities. These neural changes might serve as the neural substrate for the transfer of perceptual learning.
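    The multivariate pattern analysis used here can be sketched with a toy leave-one-out decoder on simulated voxel patterns. All parameters below (voxel count, signal strength, trial counts) are invented for illustration; real fMRI decoding pipelines are considerably more involved:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_patterns(n_trials, n_voxels=50, signal=0.4):
    """Toy voxel patterns for two motion directions: a fixed direction-
    preference map scaled by +/- signal, plus independent trial noise."""
    effect = rng.standard_normal(n_voxels)
    labels = rng.integers(0, 2, n_trials)
    X = rng.standard_normal((n_trials, n_voxels))
    X += signal * np.outer(2 * labels - 1, effect)
    return X, labels

def decode_accuracy(X, y):
    """Leave-one-out nearest-centroid decoding, a minimal MVPA stand-in."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += pred == y[i]
    return correct / len(y)

X, y = simulate_patterns(80)
acc = decode_accuracy(X, y)                   # informative patterns decode well
X0, y0 = simulate_patterns(80, signal=0.0)
chance = decode_accuracy(X0, y0)              # uninformative patterns do not
```

    Comparing decoding accuracy across areas and training stages, as the study does, amounts to asking where the `signal` term lives before versus after learning.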

  20. Repetitive Transcranial Direct Current Stimulation Induced Excitability Changes of Primary Visual Cortex and Visual Learning Effects: A Pilot Study.

    PubMed

    Sczesny-Kaiser, Matthias; Beckhaus, Katharina; Dinse, Hubert R; Schwenkreis, Peter; Tegenthoff, Martin; Höffken, Oliver

    2016-01-01

    Studies on noninvasive motor cortex stimulation and motor learning have demonstrated cortical excitability as a marker for a learning effect. Transcranial direct current stimulation (tDCS) is a non-invasive tool to modulate cortical excitability. It is as yet unknown how tDCS-induced excitability changes and perceptual learning in visual cortex correlate. Our study aimed to examine the influence of tDCS on visual perceptual learning in healthy humans. Additionally, we measured excitability in primary visual cortex (V1). We hypothesized that anodal tDCS would improve visual learning and that cathodal tDCS would have minor or no effects. Anodal, cathodal, or sham tDCS was applied over V1 in a randomized, double-blinded design over four consecutive days (n = 30). During 20 min of tDCS, subjects had to learn a visual orientation-discrimination task (ODT). Excitability parameters were measured by analyzing paired-stimulation behavior of visual-evoked potentials (ps-VEP) and by measuring phosphene thresholds (PTs) before and after the four-day stimulation period. Compared with sham tDCS, anodal tDCS led to an improvement of visual discrimination learning (p < 0.003). We found reduced PTs and increased ps-VEP ratios, indicating increased cortical excitability after anodal tDCS (PT: p = 0.002; ps-VEP: p = 0.003). Correlation analysis within the anodal tDCS group revealed no significant correlation between PTs and learning effect. For cathodal tDCS, no significant effects on learning or on excitability could be seen. Our results show that anodal tDCS over V1 resulted in improved visual perceptual learning and increased cortical excitability. tDCS is a promising tool to alter V1 excitability and, hence, visual perceptual learning.

  1. Attentional Modulation in Visual Cortex Is Modified during Perceptual Learning

    ERIC Educational Resources Information Center

    Bartolucci, Marco; Smith, Andrew T.

    2011-01-01

    Practicing a visual task commonly results in improved performance. Often the improvement does not transfer well to a new retinal location, suggesting that it is mediated by changes occurring in early visual cortex, and indeed neuroimaging and neurophysiological studies both demonstrate that perceptual learning is associated with altered activity…

  2. Practice makes it better: A psychophysical study of visual perceptual learning and its transfer effects on aging.

    PubMed

    Li, Xuan; Allen, Philip A; Lien, Mei-Ching; Yamamoto, Naohide

    2017-02-01

    Perceptual learning, the acquisition of a new skill through practice, appears to stimulate brain plasticity and enhance performance (Fiorentini & Berardi, 1981). The present study aimed to determine (a) whether perceptual learning can be used to compensate for age-related declines in perceptual abilities, and (b) whether the effect of perceptual learning transfers to untrained stimuli and subsequently improves the capacity of visual working memory (VWM). We tested both healthy younger and older adults in a 3-day training session using an orientation discrimination task. A matching-to-sample psychophysical method was used to measure improvements in orientation discrimination thresholds and reaction times (RTs). Results showed that both younger and older adults improved discrimination thresholds and RTs with similar learning rates and magnitudes. Furthermore, older adults exhibited a generalization of improvements to three untrained orientations close to the training orientation, and benefited more than younger adults from the perceptual learning by transferring learning effects to VWM performance. We conclude that through perceptual learning, older adults can partially counteract age-related perceptual declines, generalize the learning effect to other stimulus conditions, and further overcome the limitation of VWM capacity when performing a perceptual task.
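Threshold improvements across sessions, as measured here, are commonly summarized by fitting a power-law learning curve, T(n) = T1 * n^(-beta). As a minimal illustrative sketch (the function and synthetic data below are hypothetical, not taken from the study), beta can be estimated with an ordinary least-squares fit in log-log space:

```python
import math

def fit_power_law(thresholds):
    """Fit T(n) = t1 * n**(-beta) by least squares in log-log space.

    thresholds: per-session thresholds; session numbers are taken as 1..N.
    Returns (t1, beta); beta > 0 means thresholds fall with practice.
    """
    xs = [math.log(n + 1) for n in range(len(thresholds))]
    ys = [math.log(t) for t in thresholds]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return math.exp(my - slope * mx), -slope

# Synthetic thresholds lying exactly on a power law with beta = 0.3
data = [10.0 * (s ** -0.3) for s in range(1, 7)]
t1, beta = fit_power_law(data)
```

On noiseless synthetic data the fit recovers the generating parameters; with real session data the same fit yields an estimated learning rate.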

  3. Intermittent regime of brain activity at the early, bias-guided stage of perceptual learning.

    PubMed

    Nikolaev, Andrey R; Gepshtein, Sergei; van Leeuwen, Cees

    2016-11-01

    Perceptual learning improves visual performance. Among the plausible mechanisms of learning, reduction of perceptual bias has been studied the least. Perceptual bias may compensate for a lack of stimulus information, but excessive reliance on bias diminishes visual discriminability. We investigated the time course of bias in a perceptual grouping task and studied the associated cortical dynamics in spontaneous and evoked EEG. Participants reported the perceived orientation of dot groupings in ambiguous dot lattices. Performance improved over a 1-hr period as indicated by the proportion of trials in which participants preferred dot groupings favored by dot proximity. The proximity-based responses were compromised by perceptual bias: Vertical groupings were sometimes preferred to horizontal ones, independent of dot proximity. In the evoked EEG activity, greater amplitude of the N1 component for horizontal than vertical responses indicated that the bias was most prominent in conditions of reduced visual discriminability. The prominence of bias decreased in the course of the experiment. While the bias was still prominent, prestimulus activity was characterized by an intermittent regime of alternating modes of low and high alpha power. Responses were more biased in the former mode, indicating that perceptual bias was deployed actively to compensate for stimulus uncertainty. Thus, early stages of perceptual learning were characterized by episodes of greater reliance on prior visual preferences, alternating with episodes of receptivity to stimulus information. In the course of learning, the former episodes disappeared, and biases reappeared only infrequently.

  4. Treatment of amblyopia in the adult: insights from a new rodent model of visual perceptual learning.

    PubMed

    Bonaccorsi, Joyce; Berardi, Nicoletta; Sale, Alessandro

    2014-01-01

    Amblyopia is the most common form of impairment of visual function affecting one eye, with a prevalence of about 1-5% of the total world population. Amblyopia usually derives from conditions of early functional imbalance between the two eyes, owing to anisometropia, strabismus, or congenital cataract, and results in a pronounced reduction of visual acuity and severe deficits in contrast sensitivity and stereopsis. It is widely accepted that, due to a lack of sufficient plasticity in the adult brain, amblyopia becomes untreatable after the closure of the critical period in the primary visual cortex. However, recent results obtained both in animal models and in clinical trials have challenged this view, unmasking a previously unsuspected potential for promoting recovery even in adulthood. In this context, non-invasive procedures based on visual perceptual learning, i.e., the improvement in visual performance on a variety of simple visual tasks following practice, emerge as particularly promising for rescuing discrimination abilities in adult amblyopic subjects. This review will survey recent work regarding the impact of visual perceptual learning on amblyopia, with a special focus on a new experimental model of perceptual learning in the amblyopic rat.

  5. Treatment of amblyopia in the adult: insights from a new rodent model of visual perceptual learning

    PubMed Central

    Bonaccorsi, Joyce; Berardi, Nicoletta; Sale, Alessandro

    2014-01-01

    Amblyopia is the most common form of impairment of visual function affecting one eye, with a prevalence of about 1–5% of the total world population. Amblyopia usually derives from conditions of early functional imbalance between the two eyes, owing to anisometropia, strabismus, or congenital cataract, and results in a pronounced reduction of visual acuity and severe deficits in contrast sensitivity and stereopsis. It is widely accepted that, due to a lack of sufficient plasticity in the adult brain, amblyopia becomes untreatable after the closure of the critical period in the primary visual cortex. However, recent results obtained both in animal models and in clinical trials have challenged this view, unmasking a previously unsuspected potential for promoting recovery even in adulthood. In this context, non-invasive procedures based on visual perceptual learning, i.e., the improvement in visual performance on a variety of simple visual tasks following practice, emerge as particularly promising for rescuing discrimination abilities in adult amblyopic subjects. This review will survey recent work regarding the impact of visual perceptual learning on amblyopia, with a special focus on a new experimental model of perceptual learning in the amblyopic rat. PMID:25076874

  6. The Nature of Experience Determines Object Representations in the Visual System

    ERIC Educational Resources Information Center

    Wong, Yetta K.; Folstein, Jonathan R.; Gauthier, Isabel

    2012-01-01

    Visual perceptual learning (PL) and perceptual expertise (PE) traditionally lead to different training effects and recruit different brain areas, but reasons for these differences are largely unknown. Here, we tested how the learning history influences visual object representations. Two groups were trained with tasks typically used in PL or PE…

  7. Active and Passive Perceptual Learning in the Visually Impaired.

    ERIC Educational Resources Information Center

    Conrod, Beverley E.; And Others

    1986-01-01

    Active and passive perceptual training methods were tested with 30 macular degeneration patients to improve their residual vision. The main conclusion was that perceptual training may contribute to successful visual adjustment and that the effect of training is not limited to a particular level of visual impairment. (Author/CL)

  8. Reading Performance Is Enhanced by Visual Texture Discrimination Training in Chinese-Speaking Children with Developmental Dyslexia

    PubMed Central

    Meng, Xiangzhi; Lin, Ou; Wang, Fang; Jiang, Yuzheng; Song, Yan

    2014-01-01

    Background: High-order cognitive processing and learning, such as reading, interact with lower-level sensory processing and learning. Previous studies have reported that visual perceptual training enlarges the visual span and, consequently, improves reading speed in young and old people with amblyopia. Recently, a visual perceptual training study in Chinese-speaking children with dyslexia found that the visual texture discrimination thresholds of these children significantly correlated with their performance in Chinese character recognition, suggesting that deficits in visual perceptual processing/learning might partly underpin the difficulty in reading Chinese.

    Methodology/Principal Findings: To further clarify whether visual perceptual training improves measures of reading performance, eighteen children with dyslexia and eighteen age- and IQ-matched typically developing readers completed a series of reading measures before and after training on a visual texture discrimination task (TDT). Prior to the TDT training, each group of children was split into training and non-training groups equivalent in all reading measures, IQ, and TDT. The results revealed that the discrimination threshold SOAs of the TDT were significantly higher for the children with dyslexia than for the control children before training. Interestingly, training significantly decreased the discrimination threshold SOAs of the TDT for both the typically developing readers and the children with dyslexia. More importantly, the training group with dyslexia exhibited significant enhancement in reading fluency, while the non-training group with dyslexia did not show this improvement. Additional follow-up tests showed that the improvement in reading fluency is long-lasting and could be maintained for up to two months in the training group with dyslexia.

    Conclusion/Significance: These results suggest that basic visual perceptual processing/learning and reading ability in Chinese might at least partially rely on overlapping mechanisms. PMID:25247602

  9. Transfer of perceptual learning between different visual tasks

    PubMed Central

    McGovern, David P.; Webb, Ben S.; Peirce, Jonathan W.

    2012-01-01

    Practice in most sensory tasks substantially improves perceptual performance. A hallmark of this ‘perceptual learning' is its specificity for the basic attributes of the trained stimulus and task. Recent studies have challenged the specificity of learned improvements, although transfer between substantially different tasks has yet to be demonstrated. Here, we measure the degree of transfer between three distinct perceptual tasks. Participants trained on an orientation discrimination, a curvature discrimination, or a ‘global form' task, all using stimuli comprised of multiple oriented elements. Before and after training they were tested on all three and a contrast discrimination control task. A clear transfer of learning was observed, in a pattern predicted by the relative complexity of the stimuli in the training and test tasks. Our results suggest that sensory improvements derived from perceptual learning can transfer between very different visual tasks. PMID:23048211

  10. Transfer of perceptual learning between different visual tasks.

    PubMed

    McGovern, David P; Webb, Ben S; Peirce, Jonathan W

    2012-10-09

    Practice in most sensory tasks substantially improves perceptual performance. A hallmark of this 'perceptual learning' is its specificity for the basic attributes of the trained stimulus and task. Recent studies have challenged the specificity of learned improvements, although transfer between substantially different tasks has yet to be demonstrated. Here, we measure the degree of transfer between three distinct perceptual tasks. Participants trained on an orientation discrimination, a curvature discrimination, or a 'global form' task, all using stimuli comprised of multiple oriented elements. Before and after training they were tested on all three and a contrast discrimination control task. A clear transfer of learning was observed, in a pattern predicted by the relative complexity of the stimuli in the training and test tasks. Our results suggest that sensory improvements derived from perceptual learning can transfer between very different visual tasks.

  11. Chromatic Perceptual Learning but No Category Effects without Linguistic Input.

    PubMed

    Grandison, Alexandra; Sowden, Paul T; Drivonikou, Vicky G; Notman, Leslie A; Alexander, Iona; Davies, Ian R L

    2016-01-01

    Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks but very little research has explored chromatic perceptual learning. Here, we use two low level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is category specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher level linguistic coding processes in order to manifest.

  12. Magnetic stimulation of visual cortex impairs perceptual learning.

    PubMed

    Baldassarre, Antonello; Capotosto, Paolo; Committeri, Giorgia; Corbetta, Maurizio

    2016-12-01

    The ability to learn and process visual stimuli more efficiently is important for survival. Previous neuroimaging studies have shown that perceptual learning on a shape identification task differentially modulates activity in both frontal-parietal cortical regions and visual cortex (Sigman et al., 2005; Lewis et al., 2009). Specifically, fronto-parietal regions (i.e., the intraparietal sulcus, pIPS) became less activated for trained as compared to untrained stimuli, while visual regions (i.e., V2d/V3 and LO) exhibited higher activation for familiar shapes. Here, after the intensive training, we applied transcranial magnetic stimulation over both the visual occipital and parietal regions previously shown to be modulated, to investigate their causal role in learning the shape identification task. We report that interference with V2d/V3 and LO increased reaction times to learned stimuli as compared to the pIPS and sham control conditions. Moreover, the impairments observed after stimulation of the two visual regions were positively correlated. These results strongly support a causal role of the visual network in perceptual learning.

  13. Perceptual learning modifies untrained pursuit eye movements.

    PubMed

    Szpiro, Sarit F A; Spering, Miriam; Carrasco, Marisa

    2014-07-07

    Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response.

  14. Perceptual learning modifies untrained pursuit eye movements

    PubMed Central

    Szpiro, Sarit F. A.; Spering, Miriam; Carrasco, Marisa

    2014-01-01

    Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response. PMID:25002412

  15. Internet-based perceptual learning in treating amblyopia.

    PubMed

    Zhang, Wenqiu; Yang, Xubo; Liao, Meng; Zhang, Ning; Liu, Longqian

    2013-01-01

    Amblyopia is a common childhood condition, which affects 2%-3% of the population. The efficacy of conventional treatment for amblyopia appears to be limited, and perceptual learning has recently been used to treat the condition. The aim of this study was to assess the efficacy of Internet-based perceptual learning in treating amblyopia. A total of 530 eyes of 341 patients with amblyopia presenting to the outpatient department of West China Hospital of Sichuan University between February 2011 and December 2011 were reviewed. A retrospective cohort study was conducted to compare the efficacy of Internet-based perceptual learning and conventional treatment in amblyopia. Efficacy was evaluated by the change in visual acuity between pretreatment and posttreatment. The change in visual acuity achieved by Internet-based perceptual learning was larger than that achieved by conventional treatment in ametropic and strabismic amblyopia (p<0.05), but smaller in anisometropic and other types of amblyopia (p<0.05). The improvement in visual acuity by Internet-based perceptual learning was larger for patients with amblyopia aged 7 years or older (p<0.05). The mean cure time with Internet-based perceptual learning was 3.06 ± 1.42 months, while conventional treatment required 3.52 ± 1.67 months to reach the same improvement (p<0.05). Internet-based perceptual learning can be considered an alternative to conventional treatment. It is especially suitable for ametropic and strabismic patients with amblyopia who are older than 7 years and can shorten the cure time of amblyopia.

  16. Learning Disabilities and the School Health Worker

    ERIC Educational Resources Information Center

    Freeman, Stephen W.

    1973-01-01

    This article offers three listings of signs and symptoms useful in detection of learning and perceptual deficiencies. The first list presents symptoms of the learning-disabled child; the second gives specific visual perceptual deficits (poor discrimination, figure-ground problems, reversals, etc.); and the third gives auditory perceptual deficits…

  17. Enhancing Academic Performance: Seven Perceptual Styles of Learning.

    ERIC Educational Resources Information Center

    Higbee, Jeanne L.; And Others

    1991-01-01

    Presents Galbraith and James's taxonomy of seven perceptual modalities (i.e., print, aural, interactive, visual, haptic, kinesthetic, and olfactory). Discusses ways educators can demonstrate perceptual modalities in the classroom and help students identify their personal style of learning. Explains how this knowledge can facilitate learning in a…

  18. Reduction in the retinotopic early visual cortex with normal aging and magnitude of perceptual learning.

    PubMed

    Chang, Li-Hung; Yotsumoto, Yuko; Salat, David H; Andersen, George J; Watanabe, Takeo; Sasaki, Yuka

    2015-01-01

    Although normal aging is known to reduce cortical structures globally, the effects of aging on the local structure and function of early visual cortex are less well understood. Here, using standard retinotopic mapping and magnetic resonance imaging morphologic analyses, we investigated whether aging affects the areal size of retinotopically localized early visual cortex, and whether those morphologic measures are associated with individual performance in visual perceptual learning. First, significant age-associated reduction was found in the areal sizes of V1, V2, and V3. Second, individual ability in visual perceptual learning was significantly correlated with the areal size of V3 in older adults. These results demonstrate that aging changes local structures of the early visual cortex, and that the degree of change may be associated with individual visual plasticity.

  19. Perceptual learning effect on decision and confidence thresholds.

    PubMed

    Solovey, Guillermo; Shalom, Diego; Pérez-Schuster, Verónica; Sigman, Mariano

    2016-10-01

    Practice can enhance perceptual sensitivity, a well-known phenomenon called perceptual learning. However, the effect of practice on subjective perception has received little attention. We approach this problem from a visual psychophysics and computational modeling perspective. In a sequence of visual search experiments, subjects significantly increased their ability to detect a "trained target". Before and after training, subjects performed two psychophysical protocols that parametrically vary the visibility of the trained target: an attentional blink task and a visual masking task. We found that confidence increased after learning only in the attentional blink task. Despite large differences in some observables and task settings, we identify common mechanisms for decision-making and confidence. Specifically, our behavioral results and computational model suggest that perceptual ability is independent of processing time, indicating that changes in early cortical representations are effective, and that learning changes decision criteria to convey choice and confidence.
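The decision- and confidence-criterion account described here can be illustrated with a minimal equal-variance signal detection sketch (all parameter values below are hypothetical, chosen only to show the mechanism, and are not taken from the study):

```python
from statistics import NormalDist

def rates(dprime, c_decision, c_confidence):
    """Equal-variance SDT: noise ~ N(0,1), signal ~ N(dprime,1).

    A 'yes' is given when the internal response exceeds c_decision;
    a 'high-confidence yes' when it also exceeds c_confidence
    (c_confidence >= c_decision). Returns (hit rate, false-alarm rate,
    high-confidence hit rate).
    """
    signal, noise = NormalDist(dprime, 1.0), NormalDist(0.0, 1.0)
    hit = 1.0 - signal.cdf(c_decision)
    fa = 1.0 - noise.cdf(c_decision)
    conf_hit = 1.0 - signal.cdf(c_confidence)
    return hit, fa, conf_hit

# Learning that lowers only the confidence criterion raises confident
# 'yes' responses while sensitivity (d' = 2.0) and choices stay fixed.
before = rates(2.0, 1.0, 2.0)
after = rates(2.0, 1.0, 1.5)
```

With the decision criterion unchanged, hit and false-alarm rates are identical before and after, but the high-confidence hit rate rises, which is the dissociation between choice and confidence that criterion shifts produce.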

  20. Perceptual learning in a non-human primate model of artificial vision

    PubMed Central

    Killian, Nathaniel J.; Vurro, Milena; Keith, Sarah B.; Kyada, Margee J.; Pezaris, John S.

    2016-01-01

    Visual perceptual grouping, the process of forming global percepts from discrete elements, is experience-dependent. Here we show that the learning time course in an animal model of artificial vision is predicted primarily from the density of visual elements. Three naïve adult non-human primates were tasked with recognizing the letters of the Roman alphabet presented at variable size and visualized through patterns of discrete visual elements, specifically, simulated phosphenes mimicking a thalamic visual prosthesis. The animals viewed a spatially static letter using a gaze-contingent pattern and then chose, by gaze fixation, between a matching letter and a non-matching distractor. Months of learning were required for the animals to recognize letters using simulated phosphene vision. Learning rates increased in proportion to the mean density of the phosphenes in each pattern. Furthermore, skill acquisition transferred from trained to untrained patterns, not depending on the precise retinal layout of the simulated phosphenes. Taken together, the findings suggest that learning of perceptual grouping in a gaze-contingent visual prosthesis can be described simply by the density of visual activation. PMID:27874058

  1. Chromatic Perceptual Learning but No Category Effects without Linguistic Input

    PubMed Central

    Grandison, Alexandra; Sowden, Paul T.; Drivonikou, Vicky G.; Notman, Leslie A.; Alexander, Iona; Davies, Ian R. L.

    2016-01-01

    Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks but very little research has explored chromatic perceptual learning. Here, we use two low level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is category specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher level linguistic coding processes in order to manifest. PMID:27252669

  2. Enhanced attentional gain as a mechanism for generalized perceptual learning in human visual cortex.

    PubMed

    Byers, Anna; Serences, John T

    2014-09-01

    Learning to better discriminate a specific visual feature (i.e., a specific orientation in a specific region of space) has been associated with plasticity in early visual areas (sensory modulation) and with improvements in the transmission of sensory information from early visual areas to downstream sensorimotor and decision regions (enhanced readout). However, in many real-world scenarios that require perceptual expertise, observers need to efficiently process numerous exemplars from a broad stimulus class as opposed to just a single stimulus feature. Some previous data suggest that perceptual learning leads to highly specific neural modulations that support the discrimination of specific trained features. However, the extent to which perceptual learning acts to improve the discriminability of a broad class of stimuli via the modulation of sensory responses in human visual cortex remains largely unknown. Here, we used functional MRI and a multivariate analysis method to reconstruct orientation-selective response profiles based on activation patterns in the early visual cortex before and after subjects learned to discriminate small offsets in a set of grating stimuli that were rendered in one of nine possible orientations. Behavioral performance improved across 10 training sessions, and there was a training-related increase in the amplitude of orientation-selective response profiles in V1, V2, and V3 when orientation was task relevant compared with when it was task irrelevant. These results suggest that generalized perceptual learning can lead to modified responses in the early visual cortex in a manner that is suitable for supporting improved discriminability of stimuli drawn from a large set of exemplars.

  3. Pretraining Cortical Thickness Predicts Subsequent Perceptual Learning Rate in a Visual Search Task.

    PubMed

    Frank, Sebastian M; Reavis, Eric A; Greenlee, Mark W; Tse, Peter U

    2016-03-01

    We report that preexisting individual differences in the cortical thickness of brain areas involved in a perceptual learning task predict the subsequent perceptual learning rate. Participants trained in a motion-discrimination task involving visual search for a "V"-shaped target motion trajectory among inverted "V"-shaped distractor trajectories. Motion-sensitive area MT+ (V5) was functionally identified as critical to the task: after 3 weeks of training, activity increased in MT+ during task performance, as measured by functional magnetic resonance imaging. We computed the cortical thickness of MT+ from anatomical magnetic resonance imaging volumes collected before training started, and found that it significantly predicted subsequent perceptual learning rates in the visual search task. Participants with thicker neocortex in MT+ before training learned faster than those with thinner neocortex in that area. A similar association between cortical thickness and training success was also found in posterior parietal cortex (PPC).

  4. Perceptual learning as improved probabilistic inference in early sensory areas.

    PubMed

    Bejjanki, Vikranth R; Beck, Jeffrey M; Lu, Zhong-Lin; Pouget, Alexandre

    2011-05-01

    Extensive training on simple tasks such as fine orientation discrimination results in large improvements in performance, a form of learning known as perceptual learning. Previous models have argued that perceptual learning is due to either sharpening and amplification of tuning curves in early visual areas or to improved probabilistic inference in later visual areas (at the decision stage). However, early theories are inconsistent with the conclusions of psychophysical experiments manipulating external noise, whereas late theories cannot explain the changes in neural responses that have been reported in cortical areas V1 and V4. Here we show that we can capture both the neurophysiological and behavioral aspects of perceptual learning by altering only the feedforward connectivity in a recurrent network of spiking neurons so as to improve probabilistic inference in early visual areas. The resulting network shows modest changes in tuning curves, in line with neurophysiological reports, along with a marked reduction in the amplitude of pairwise noise correlations.
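The reweighting idea, in which learning alters only the feedforward readout of fixed sensory channels, can be sketched with a perceptron-style toy model. The two-channel setup and all parameters below are invented for illustration and are far simpler than the paper's recurrent spiking network:

```python
import random

def simulate_reweighting(n_train=500, n_test=500, lr=0.05, seed=0):
    """Perceptron-style reweighting of two noisy orientation channels.

    Class +1 drives channel means (+1, -1); class -1 drives (-1, +1);
    both channels carry additive unit-variance Gaussian noise. Training
    adjusts only the readout weights; the channels themselves are fixed.
    """
    rng = random.Random(seed)

    def trial():
        label = rng.choice((1, -1))
        x = (label + rng.gauss(0, 1), -label + rng.gauss(0, 1))
        return x, label

    def accuracy(w, n):
        hits = 0
        for _ in range(n):
            x, label = trial()
            pred = 1 if w[0] * x[0] + w[1] * x[1] > 0 else -1
            hits += pred == label
        return hits / n

    w = [0.0, 0.0]
    acc_before = accuracy(w, n_test)   # untrained readout: ~chance
    for _ in range(n_train):           # error-driven weight updates
        x, label = trial()
        if label * (w[0] * x[0] + w[1] * x[1]) <= 0:
            w[0] += lr * label * x[0]
            w[1] += lr * label * x[1]
    acc_after = accuracy(w, n_test)    # well above chance after training
    return acc_before, acc_after

acc_before, acc_after = simulate_reweighting()
```

Because only the readout weights change, the improvement here is entirely "late"; the point of the paper is that comparable behavioral gains can instead arise from refined feedforward connectivity within early areas, which this toy model does not capture.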

  5. Perceptual Learning as a potential treatment for amblyopia: a mini-review

    PubMed Central

    Levi, Dennis M.; Li, Roger W.

    2009-01-01

    Amblyopia is a developmental abnormality that results from physiological alterations in the visual cortex and impairs form vision. It is a consequence of abnormal binocular visual experience during the “sensitive period” early in life. While amblyopia can often be reversed when treated early, conventional treatment is generally not undertaken in older children and adults. A number of studies over the last twelve years or so suggest that Perceptual Learning (PL) may provide an important new method for treating amblyopia. The aim of this mini-review is to provide a critical review and “meta-analysis” of perceptual learning in adults and children with amblyopia, with a view to extracting principles that might make PL more effective and efficient. Specifically, we evaluate: (1) What factors influence the outcome of perceptual learning? (2) Specificity and generalization – two sides of the coin. (3) Do the improvements last? (4) How does PL improve visual function? (5) Should PL be part of the treatment armamentarium? A review of the extant studies makes it clear that practicing a visual task results in a long-lasting improvement in performance in an amblyopic eye. The improvement is generally strongest for the trained eye, task, stimulus and orientation, but appears to have a broader spatial frequency bandwidth than in normal vision. Importantly, practicing on a variety of different tasks and stimuli seems to transfer to improved visual acuity. Perceptual learning operates via a reduction of internal neural noise and/or through more efficient use of the stimulus information by retuning the weighting of the information. The success of PL raises the question of whether it should become a standard part of the armamentarium for the clinical treatment of amblyopia, and suggests several important principles for effective perceptual learning in amblyopia. PMID:19250947

  6. Influence of Perceptual Saliency Hierarchy on Learning of Language Structures: An Artificial Language Learning Experiment

    PubMed Central

    Gong, Tao; Lam, Yau W.; Shuai, Lan

    2016-01-01

    Psychological experiments have revealed that in normal human visual perception, color cues are more salient than shape cues, which are more salient than textural patterns. We carried out an artificial language learning experiment to study whether this perceptual saliency hierarchy (color > shape > texture) influences the learning of word orders for adjectives encoding the involved visual features, in a manner either congruent (expressing a salient feature in a salient part of the form) or incongruent (expressing a salient feature in a less salient part of the form) with that hierarchy. Results showed that within a few rounds of learning participants could learn the compositional segments encoding the visual features and the order between them, generalize the learned knowledge to unseen instances with the same or different orders, and show learning biases for orders that are congruent with the perceptual saliency hierarchy. Although the learning performances for the biased and unbiased orders became similar given more learning trials, our study confirms that this type of individual perceptual constraint could contribute to the structural configuration of language, and points out that such a constraint, together with other factors, could collectively affect the structural diversity of languages. PMID:28066281

  7. Influence of Perceptual Saliency Hierarchy on Learning of Language Structures: An Artificial Language Learning Experiment.

    PubMed

    Gong, Tao; Lam, Yau W; Shuai, Lan

    2016-01-01

    Psychological experiments have revealed that in normal human visual perception, color cues are more salient than shape cues, which are more salient than textural patterns. We carried out an artificial language learning experiment to study whether this perceptual saliency hierarchy (color > shape > texture) influences the learning of word orders for adjectives encoding the involved visual features, in a manner either congruent (expressing a salient feature in a salient part of the form) or incongruent (expressing a salient feature in a less salient part of the form) with that hierarchy. Results showed that within a few rounds of learning participants could learn the compositional segments encoding the visual features and the order between them, generalize the learned knowledge to unseen instances with the same or different orders, and show learning biases for orders that are congruent with the perceptual saliency hierarchy. Although the learning performances for the biased and unbiased orders became similar given more learning trials, our study confirms that this type of individual perceptual constraint could contribute to the structural configuration of language, and points out that such a constraint, together with other factors, could collectively affect the structural diversity of languages.

  8. The application of online transcranial random noise stimulation and perceptual learning in the improvement of visual functions in mild myopia.

    PubMed

    Camilleri, Rebecca; Pavan, Andrea; Campana, Gianluca

    2016-08-01

    It has recently been demonstrated that perceptual learning, that is, an improvement in a sensory/perceptual task with practice, can be boosted by concurrent high-frequency transcranial random noise stimulation (hf-tRNS). It has also been shown that perceptual learning can generalize and produce an improvement of visual functions in participants with mild refractive defects. Using three different groups of participants (single-blind study), we tested the efficacy of a short training (8 sessions) on a single Gabor contrast-detection task with concurrent hf-tRNS, compared with the same training with sham stimulation and with hf-tRNS alone (no concurrent training), in improving visual acuity (VA) and contrast sensitivity (CS) of individuals with uncorrected mild myopia. A short training with a contrast-detection task improved VA and CS only when coupled with hf-tRNS, whereas the sole administration of hf-tRNS had no effect on VA and only marginal effects on CS. Our results support the idea that, by boosting the rate of perceptual learning via the modulation of neuronal plasticity, hf-tRNS can be successfully used to reduce the duration of the perceptual training and/or to increase its efficacy in producing perceptual learning and generalization to improved VA and CS in individuals with uncorrected mild myopia.

  9. Perceptual learning through optimization of attentional weighting: human versus optimal Bayesian learner

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Abbey, Craig K.; Pham, Binh T.; Shimozaki, Steven S.

    2004-01-01

    Human performance in visual detection, discrimination, identification, and search tasks typically improves with practice. Psychophysical studies suggest that perceptual learning is mediated by an enhancement in the coding of the signal, and physiological studies suggest that it might be related to plasticity in the weighting or selection of sensory units coding task-relevant information (learning through attention optimization). We propose an experimental paradigm (the optimal perceptual learning paradigm) to systematically study the dynamics of perceptual learning in humans by allowing comparisons to an optimal Bayesian algorithm and a number of suboptimal learning models. We measured improvement in human localization (eight-alternative forced-choice with feedback) performance for a target randomly sampled from four elongated Gaussian targets with different orientations and polarities and kept as the target for a block of four trials. The results suggest that human perceptual learning can occur within a span of four trials (<1 min) but that human learning is slower and incomplete with respect to the optimal algorithm (23.3% reduction in human efficiency from the 1st to the 4th learning trial). The greatest improvement in human performance, occurring from the 1st to the 2nd learning trial, was also present in the optimal observer, and thus reflects a property inherent to the visual task and not a property particular to the human perceptual learning mechanism. One notable source of human inefficiency is that, unlike the ideal observer, human learning relies more heavily on previous decisions than on the provided feedback, resulting in no human learning on trials following an incorrect localization decision. Finally, the proposed theory and paradigm provide a flexible framework for future studies to evaluate the optimality of human learning of other visual cues and/or sensory modalities.
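
    The optimal-learner benchmark can be sketched in a few lines. The toy model below is not the authors' exact observer: the templates, noise level, and number of locations are invented, but the logic is the same — an ideal observer holds a posterior over which of four candidate targets is in play for the current block, localizes by marginalizing location evidence over that posterior, and updates the posterior from trial-by-trial feedback, so accuracy rises across the four trials of a block:

```python
import numpy as np

rng = np.random.default_rng(0)

n_targets, n_locs, noise = 4, 8, 1.0
templates = np.eye(n_targets)        # toy feature templates, one per candidate

def run_block(true_target, n_trials=4):
    """One 4-trial block with a fixed (unknown) target; returns per-trial correctness."""
    prior = np.full(n_targets, 1.0 / n_targets)   # flat prior over candidates
    correct = []
    for _ in range(n_trials):
        loc = int(rng.integers(n_locs))
        # noisy feature observation at every location; target location adds the template
        obs = rng.normal(0.0, noise, size=(n_locs, n_targets))
        obs[loc] += templates[true_target]
        loglik = obs @ templates.T / noise**2     # (location, candidate) evidence
        # posterior over locations, marginalized over candidates (weighted by prior)
        loc_post = (np.exp(loglik) * prior).sum(axis=1)
        correct.append(int(np.argmax(loc_post)) == loc)
        # feedback reveals the true location; update the candidate posterior
        prior = prior * np.exp(loglik[loc])
        prior /= prior.sum()
    return correct

blocks = np.array([run_block(int(rng.integers(n_targets))) for _ in range(3000)])
print("accuracy by trial within a block:", blocks.mean(axis=0).round(3))
```

Averaged over many blocks, trial-4 accuracy exceeds trial-1 accuracy purely because the candidate posterior sharpens, mirroring the within-block learning the paradigm measures.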

  10. Cholinergic enhancement augments magnitude and specificity of visual perceptual learning in healthy humans

    PubMed Central

    Rokem, Ariel; Silver, Michael A.

    2010-01-01

    Learning through experience underlies the ability to adapt to novel tasks and unfamiliar environments. However, learning must be regulated so that relevant aspects of the environment are selectively encoded. Acetylcholine (ACh) has been suggested to regulate learning by enhancing the responses of sensory cortical neurons to behaviorally-relevant stimuli [1]. In this study, we increased synaptic levels of ACh in the brains of healthy human subjects with the cholinesterase inhibitor donepezil (trade name: Aricept) and measured the effects of this cholinergic enhancement on visual perceptual learning. Each subject completed two five-day courses of training on a motion direction discrimination task [2], once while ingesting 5 mg of donepezil before every training session and once while placebo was administered. We found that cholinergic enhancement augmented perceptual learning for stimuli having the same direction of motion and visual field location used during training. In addition, perceptual learning under donepezil was more selective to the trained direction of motion and visual field location. These results, combined with previous studies demonstrating an increase in neuronal selectivity following cholinergic enhancement [3–5], suggest a possible mechanism by which ACh augments neural plasticity by directing activity to populations of neurons that encode behaviorally-relevant stimulus features. PMID:20850321

  11. The Uses and Abuses of Visual Training for Children with Perceptual-Motor Learning Problems.

    ERIC Educational Resources Information Center

    Carlson, Paul V.; Greenspoon, Morton K.

    The role of the optometrist in diagnosing and correcting perceptual-motor learning problems is discussed. One group of optometrists adheres to standard techniques, including the prescription of corrective lenses and the use of orthoptic techniques for the sake of clear, comfortable, and effective visual performance. Others employ diverse…

  12. WISC-R Scatter and Patterns in Three Types of Learning Disabled Children.

    ERIC Educational Resources Information Center

    Tabachnick, Barbara G.; Turbey, Carolyn B.

    Wechsler Intelligence Scale for Children-Revised (WISC-R) subtest scatter and Bannatyne recategorization scores were investigated with three types of learning disabilities in children 6 to 16 years old: visual-motor and visual-perceptual disability (N=66); auditory-perceptual and receptive language deficit (N=18); and memory deficit (N=12). Three…

  13. Perceptual Learning Improves Adult Amblyopic Vision Through Rule-Based Cognitive Compensation

    PubMed Central

    Zhang, Jun-Yun; Cong, Lin-Juan; Klein, Stanley A.; Levi, Dennis M.; Yu, Cong

    2014-01-01

    Purpose. We investigated whether perceptual learning in adults with amblyopia could be enabled to transfer completely to an orthogonal orientation, which would suggest that amblyopic perceptual learning results mainly from high-level cognitive compensation, rather than plasticity in the amblyopic early visual brain. Methods. Nineteen adults (mean age = 22.5 years) with anisometropic and/or strabismic amblyopia were trained following a training-plus-exposure (TPE) protocol. The amblyopic eyes practiced contrast, orientation, or Vernier discrimination at one orientation for six to eight sessions. Then the amblyopic or nonamblyopic eyes were exposed to an orthogonal orientation via practicing an irrelevant task. Training was first performed at a lower spatial frequency (SF), then at a higher SF near the cutoff frequency of the amblyopic eye. Results. Perceptual learning was initially orientation specific. However, after exposure to the orthogonal orientation, learning transferred to an orthogonal orientation completely. Reversing the exposure and training order failed to produce transfer. Initial lower SF training led to broad improvement of contrast sensitivity, and later higher SF training led to more specific improvement at high SFs. Training improved visual acuity by 1.5 to 1.6 lines (P < 0.001) in the amblyopic eyes with computerized tests and a clinical E acuity chart. It also improved stereoacuity by 53% (P < 0.001). Conclusions. The complete transfer of learning suggests that perceptual learning in amblyopia may reflect high-level learning of rules for performing a visual discrimination task. These rules are applicable to new orientations to enable learning transfer. Therefore, perceptual learning may improve amblyopic vision mainly through rule-based cognitive compensation. PMID:24550359

  14. Perceptual learning improves adult amblyopic vision through rule-based cognitive compensation.

    PubMed

    Zhang, Jun-Yun; Cong, Lin-Juan; Klein, Stanley A; Levi, Dennis M; Yu, Cong

    2014-04-01

    We investigated whether perceptual learning in adults with amblyopia could be enabled to transfer completely to an orthogonal orientation, which would suggest that amblyopic perceptual learning results mainly from high-level cognitive compensation, rather than plasticity in the amblyopic early visual brain. Nineteen adults (mean age = 22.5 years) with anisometropic and/or strabismic amblyopia were trained following a training-plus-exposure (TPE) protocol. The amblyopic eyes practiced contrast, orientation, or Vernier discrimination at one orientation for six to eight sessions. Then the amblyopic or nonamblyopic eyes were exposed to an orthogonal orientation via practicing an irrelevant task. Training was first performed at a lower spatial frequency (SF), then at a higher SF near the cutoff frequency of the amblyopic eye. Perceptual learning was initially orientation specific. However, after exposure to the orthogonal orientation, learning transferred to an orthogonal orientation completely. Reversing the exposure and training order failed to produce transfer. Initial lower SF training led to broad improvement of contrast sensitivity, and later higher SF training led to more specific improvement at high SFs. Training improved visual acuity by 1.5 to 1.6 lines (P < 0.001) in the amblyopic eyes with computerized tests and a clinical E acuity chart. It also improved stereoacuity by 53% (P < 0.001). The complete transfer of learning suggests that perceptual learning in amblyopia may reflect high-level learning of rules for performing a visual discrimination task. These rules are applicable to new orientations to enable learning transfer. Therefore, perceptual learning may improve amblyopic vision mainly through rule-based cognitive compensation.

  15. Perceptual learning of basic visual features remains task specific with Training-Plus-Exposure (TPE) training.

    PubMed

    Cong, Lin-Juan; Wang, Ru-Jie; Yu, Cong; Zhang, Jun-Yun

    2016-01-01

    Visual perceptual learning is known to be specific to the trained retinal location, feature, and task. However, location and feature specificity can be eliminated by double-training or TPE training protocols, in which observers receive additional exposure to the transfer location or feature dimension via an irrelevant task besides the primary learning task. Here we tested whether these new training protocols could even make learning transfer across different tasks involving discrimination of basic visual features (e.g., orientation and contrast). Observers practiced a near-threshold orientation (or contrast) discrimination task. Following a TPE training protocol, they also received exposure to the transfer task via performing suprathreshold contrast (or orientation) discrimination in alternating blocks of trials in the same sessions. The results showed no evidence for significant learning transfer to the untrained near-threshold contrast (or orientation) discrimination task after discounting the pretest effects and the suprathreshold practice effects. These results thus do not support a hypothetical task-independent component in perceptual learning of basic visual features. They also set a boundary on the capability of the new training protocols to enable learning transfer.

  16. Improving the performance of the amblyopic visual system

    PubMed Central

    Levi, Dennis M.; Li, Roger W.

    2008-01-01

    Experience-dependent plasticity is closely linked with the development of sensory function; however, there is also growing evidence for plasticity in the adult visual system. This review re-examines the notion of a sensitive period for the treatment of amblyopia in the light of recent experimental and clinical evidence for neural plasticity. One recently proposed method for improving the effectiveness and efficiency of treatment that has received considerable attention is ‘perceptual learning’. Specifically, both children and adults with amblyopia can improve their perceptual performance through extensive practice on a challenging visual task. The results suggest that perceptual learning may be effective in improving a range of visual performance measures and, importantly, that the improvements may transfer to visual acuity. Recent studies have sought to explore the limits and time course of perceptual learning as an adjunct to occlusion and to investigate the neural mechanisms underlying the visual improvement. These findings, along with the results of new clinical trials, suggest that it might be time to reconsider our notions about neural plasticity in amblyopia. PMID:19008199

  17. A perceptual learning deficit in Chinese developmental dyslexia as revealed by visual texture discrimination training.

    PubMed

    Wang, Zhengke; Cheng-Lai, Alice; Song, Yan; Cutting, Laurie; Jiang, Yuzheng; Lin, Ou; Meng, Xiangzhi; Zhou, Xiaolin

    2014-08-01

    Learning to read involves discriminating between different written forms and establishing connections with phonology and semantics. This process may be partially built upon visual perceptual learning, during which the ability to process the attributes of visual stimuli progressively improves with practice. The present study investigated to what extent Chinese children with developmental dyslexia have deficits in perceptual learning by using a texture discrimination task, in which participants were asked to discriminate the orientation of target bars. Experiment 1 demonstrated that, when all of the participants started with the same initial stimulus-to-mask onset asynchrony (SOA) of 300 ms, the threshold SOA, adjusted according to response accuracy to reach 80% accuracy, did not decrease over 5 days of training for children with dyslexia, whereas it steadily decreased over training for the control group. Experiment 2 used an adaptive procedure to determine the threshold SOA for each participant during training. Results showed that both the dyslexia group and the control group attained perceptual learning over the sessions in 5 days, although the threshold SOAs were significantly higher for the dyslexia group than for the control group; moreover, across individual participants, the threshold SOA correlated negatively with performance in Chinese character recognition. These findings suggest that deficits in visual perceptual processing and learning might, in part, underpin difficulty in reading Chinese.
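
    The adaptive procedure used to track an ~80%-correct SOA can be illustrated with a standard staircase. The sketch below is generic: the abstract specifies only "an adaptive procedure," so the 3-down-1-up rule (which converges near 79% correct), the toy psychometric function, and all numbers here are assumptions, not the study's actual parameters. The SOA is shortened after three consecutive correct responses and lengthened after each error, and the threshold is estimated from the last few reversals:

```python
import math
import random

random.seed(1)

def p_correct(soa_ms, threshold_ms=120.0, slope_ms=25.0):
    """Toy psychometric function: longer SOA -> higher accuracy (0.5 to 1.0)."""
    return 0.5 + 0.5 / (1.0 + math.exp(-(soa_ms - threshold_ms) / slope_ms))

def staircase(start_soa=300.0, step=20.0, n_trials=400):
    soa, streak, direction, reversals = start_soa, 0, -1, []
    for _ in range(n_trials):
        correct = random.random() < p_correct(soa)   # simulated observer
        if correct:
            streak += 1
            if streak == 3:                  # 3 correct in a row -> shorten SOA
                streak = 0
                if direction == +1:          # direction change = a reversal
                    reversals.append(soa)
                direction = -1
                soa = max(20.0, soa - step)
        else:                                # any error -> lengthen SOA
            streak = 0
            if direction == -1:
                reversals.append(soa)
            direction = +1
            soa = min(500.0, soa + step)
    last = reversals[-8:]                    # average the final reversals
    return sum(last) / len(last)

print("estimated threshold SOA (ms):", round(staircase(), 1))
```

With the toy observer above, the staircase settles near the SOA yielding roughly 79% correct, which is why such rules are a common way to implement an 80%-accuracy criterion.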

  18. Visual Learning Alters the Spontaneous Activity of the Resting Human Brain: An fNIRS Study

    PubMed Central

    Niu, Haijing; Li, Hao; Sun, Li; Su, Yongming; Huang, Jing; Song, Yan

    2014-01-01

    Resting-state functional connectivity (RSFC) has been widely used to investigate spontaneous brain activity that exhibits correlated fluctuations. RSFC has been found to change over the course of development and after learning. Here, we investigated whether and how visual learning modified resting oxygenated hemoglobin (HbO) functional brain connectivity by using functional near-infrared spectroscopy (fNIRS). We demonstrate that after five days of training on an orientation discrimination task constrained to the right visual field, resting HbO functional connectivity and directed mutual interaction between high-level visual cortex and frontal/central areas involved in top-down control were significantly modified. Moreover, these changes, which correlated with the degree of perceptual learning, were not limited to the trained left visual cortex. We conclude that resting oxygenated hemoglobin functional connectivity could be used as a predictor of visual learning, supporting the involvement of high-level visual cortex and frontal/central cortex during visual perceptual learning. PMID:25243168

  19. Visual learning alters the spontaneous activity of the resting human brain: an fNIRS study.

    PubMed

    Niu, Haijing; Li, Hao; Sun, Li; Su, Yongming; Huang, Jing; Song, Yan

    2014-01-01

    Resting-state functional connectivity (RSFC) has been widely used to investigate spontaneous brain activity that exhibits correlated fluctuations. RSFC has been found to change over the course of development and after learning. Here, we investigated whether and how visual learning modified resting oxygenated hemoglobin (HbO) functional brain connectivity by using functional near-infrared spectroscopy (fNIRS). We demonstrate that after five days of training on an orientation discrimination task constrained to the right visual field, resting HbO functional connectivity and directed mutual interaction between high-level visual cortex and frontal/central areas involved in top-down control were significantly modified. Moreover, these changes, which correlated with the degree of perceptual learning, were not limited to the trained left visual cortex. We conclude that resting oxygenated hemoglobin functional connectivity could be used as a predictor of visual learning, supporting the involvement of high-level visual cortex and frontal/central cortex during visual perceptual learning.

  20. Perceptual learning increases the strength of the earliest signals in visual cortex.

    PubMed

    Bao, Min; Yang, Lin; Rios, Cristina; He, Bin; Engel, Stephen A

    2010-11-10

    Training improves performance on most visual tasks. Such perceptual learning can modify how information is read out from, and represented in, later visual areas, but effects on early visual cortex are controversial. In particular, it remains unknown whether learning can reshape neural response properties in early visual areas independent from feedback arising in later cortical areas. Here, we tested whether learning can modify feedforward signals in early visual cortex as measured by the human electroencephalogram. Fourteen subjects were trained for >24 d to detect a diagonal grating pattern in one quadrant of the visual field. Training improved performance, reducing the contrast needed for reliable detection, and also reliably increased the amplitude of the earliest component of the visual evoked potential, the C1. Control orientations and locations showed smaller effects of training. Because the C1 arises rapidly and has a source in early visual cortex, our results suggest that learning can increase early visual area response through local receptive field changes without feedback from later areas.

  1. Gross Motor Engrams: An Important Spatial Learning Modality for Preschool Visually Handicapped Children. Vol. 1, No. 9.

    ERIC Educational Resources Information Center

    Whitcraft, Carol

    Investigations and theories concerning interrelationships of motoric experiences, perceptual-motor skills, and learning are reviewed, with emphasis on early engramming of form and space concepts. Covered are studies on haptic perception of form, the matching of perceptual data and motor information, Kephart's perceptual-motor theory, and…

  2. Perceptual Organization of Visual Structure Requires a Flexible Learning Mechanism

    ERIC Educational Resources Information Center

    Aslin, Richard N.

    2011-01-01

    Bhatt and Quinn (2011) provide a compelling and comprehensive review of empirical evidence that supports the operation of principles of perceptual organization in young infants. They have also provided a comprehensive list of experiences that could serve to trigger the learning of at least some of these principles of perceptual organization, and…

  3. Implicit visual learning and the expression of learning.

    PubMed

    Haider, Hilde; Eberhardt, Katharina; Kunde, Alexander; Rose, Michael

    2013-03-01

    Although the existence of implicit motor learning is now widely accepted, the findings concerning perceptual implicit learning are ambiguous: some researchers have observed perceptual learning whereas other authors have not. A review of the literature suggests different reasons for this ambiguous picture, such as differences in the underlying learning processes, selective attention, or differences in the difficulty of expressing this knowledge. In three experiments, we investigated implicit visual learning within the original serial reaction time task. We used different response devices (keyboard vs. mouse) in order to manipulate selective attention towards response dimensions. Results showed that visual and motor sequence learning differed in terms of RT benefits, but not in terms of the amount of knowledge assessed after training. Furthermore, visual sequence learning was modulated by selective attention. However, the findings of all three experiments suggest that selective attention did not alter implicit but rather explicit learning processes.

  4. Reading speed in the peripheral visual field of older adults: Does it benefit from perceptual learning?

    PubMed

    Yu, Deyue; Cheung, Sing-Hang; Legge, Gordon E; Chung, Susana T L

    2010-04-21

    Enhancing reading ability in peripheral vision is important for the rehabilitation of people with central-visual-field loss from age-related macular degeneration (AMD). Previous research has shown that perceptual learning, based on a trigram letter-recognition task, improved peripheral reading speed among normally-sighted young adults (Chung, Legge, & Cheung, 2004). Here we ask whether the same happens in older adults in an age range more typical of the onset of AMD. Eighteen normally-sighted subjects, aged 55–76 years, were randomly assigned to training or control groups. Visual-span profiles (plots of letter-recognition accuracy as a function of horizontal letter position) and RSVP reading speeds were measured at 10 degrees above and below fixation during pre- and post-tests for all subjects. Training consisted of repeated measurements of visual-span profiles at 10 degrees below fixation, in four daily sessions. The control subjects did not receive any training. Perceptual learning enlarged the visual spans in both trained (lower) and untrained (upper) visual fields. Reading speed improved in the trained field by 60% when the trained print size was used. The training benefits for these older subjects were weaker than the training benefits for young adults found by Chung et al. Despite the weaker training benefits, perceptual learning remains a potential option for low-vision reading rehabilitation among older adults.

  5. Tactile perceptual learning: learning curves and transfer to the contralateral finger.

    PubMed

    Kaas, Amanda L; van de Ven, Vincent; Reithler, Joel; Goebel, Rainer

    2013-02-01

    Tactile perceptual learning has been shown to improve performance on tactile tasks, but there is no agreement about the extent of transfer to untrained skin locations. The lack of such transfer is often seen as a behavioral index of the contribution of early somatosensory brain regions. Moreover, the time course of improvements has never been described explicitly. Sixteen subjects were trained on the Ludvigh task (a tactile vernier task) on four subsequent days. On the fifth day, transfer of learning to the non-trained contralateral hand was tested. In five subjects, we explored to what extent training effects were retained approximately 1.5 years after the final training session, expecting to find long-term retention of learning effects after training. Results showed that tactile perceptual learning mainly occurred offline, between sessions. Training effects did not transfer initially, but became fully available to the untrained contralateral hand after a few additional training runs. After 1.5 years, training effects were not fully washed out and could be recuperated within a single training session. Interpreted in the light of theories of visual perceptual learning, these results suggest that tactile perceptual learning is not fundamentally different from visual perceptual learning, but might proceed at a slower pace due to procedural and task differences, thus explaining the apparent divergence in the amount of transfer and long-term retention.

  6. Perceptual learning of basic visual features remains task specific with Training-Plus-Exposure (TPE) training

    PubMed Central

    Cong, Lin-Juan; Wang, Ru-Jie; Yu, Cong; Zhang, Jun-Yun

    2016-01-01

    Visual perceptual learning is known to be specific to the trained retinal location, feature, and task. However, location and feature specificity can be eliminated by double-training or TPE training protocols, in which observers receive additional exposure to the transfer location or feature dimension via an irrelevant task besides the primary learning task. Here we tested whether these new training protocols could even make learning transfer across different tasks involving discrimination of basic visual features (e.g., orientation and contrast). Observers practiced a near-threshold orientation (or contrast) discrimination task. Following a TPE training protocol, they also received exposure to the transfer task via performing suprathreshold contrast (or orientation) discrimination in alternating blocks of trials in the same sessions. The results showed no evidence for significant learning transfer to the untrained near-threshold contrast (or orientation) discrimination task after discounting the pretest effects and the suprathreshold practice effects. These results thus do not support a hypothetical task-independent component in perceptual learning of basic visual features. They also set a boundary on the capability of the new training protocols to enable learning transfer. PMID:26873777

  7. How does Learning Impact Development in Infancy? The Case of Perceptual Organization

    PubMed Central

    Bhatt, Ramesh S.; Quinn, Paul C.

    2011-01-01

    Pattern perception and organization are critical functions of the visual cognition system. Many organizational processes are available early in life, such that infants as young as 3 months of age are able to readily utilize a variety of cues to organize visual patterns. However, other processes are not readily evident in young infants, and their development involves perceptual learning. We describe a theoretical framework that addresses perceptual learning in infancy and the manner in which it affects visual organization and development. It identifies five kinds of experiences that induce learning, and suggests that they work via attentional and unitization mechanisms to modify visual organization. In addition, the framework proposes that this kind of learning is abstract, domain general, functional at different ages in a qualitatively similar manner, and has a long-term impact on development through a memory reactivation process. Although most models of development assume that experience is fundamental to development, very little is actually known about the process by which experience affects development. The proposed framework is an attempt to account for this process in the domain of perception. PMID:21572570

  8. Broad-based visual benefits from training with an integrated perceptual-learning video game.

    PubMed

    Deveau, Jenni; Lovcik, Gary; Seitz, Aaron R

    2014-06-01

    Perception is the window through which we understand all information about our environment; deficits in perception due to disease, injury, stroke, or aging can therefore have significant negative impacts on individuals' lives. Research in the field of perceptual learning has demonstrated that vision can be improved in both normally seeing and visually impaired individuals. However, a limitation of most perceptual learning approaches is their emphasis on isolating particular mechanisms. In the current study, we adopted an integrative approach in which the goal is not to achieve highly specific learning but instead to achieve general improvements to vision. We combined multiple perceptual learning approaches that have individually contributed to increasing the speed, magnitude, and generality of learning into a perceptual-learning-based video game. Our results demonstrate broad-based benefits to vision in a healthy adult population. Transfer from the game includes improvements in acuity (measured with self-paced standard eye charts), improvement along the full contrast sensitivity function, and improvements in peripheral acuity and contrast thresholds. This custom video-game framework, built up from psychophysical approaches, takes advantage of the benefits of video-game training while maintaining a tight link to psychophysical designs that enable understanding of the mechanisms of perceptual learning. It has great potential both as a scientific tool and as a therapy to help improve vision. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. A Mouse Model of Visual Perceptual Learning Reveals Alterations in Neuronal Coding and Dendritic Spine Density in the Visual Cortex.

    PubMed

    Wang, Yan; Wu, Wei; Zhang, Xian; Hu, Xu; Li, Yue; Lou, Shihao; Ma, Xiao; An, Xu; Liu, Hui; Peng, Jing; Ma, Danyi; Zhou, Yifeng; Yang, Yupeng

    2016-01-01

    Visual perceptual learning (VPL) can improve spatial vision in normally sighted and visually impaired individuals. Although previous studies of humans and large animals have explored the neural basis of VPL, elucidation of the underlying cellular and molecular mechanisms remains a challenge. Owing to the advantages of molecular genetic and optogenetic manipulations, the mouse is a promising model for providing a mechanistic understanding of VPL. Here, we thoroughly evaluated the effects and properties of VPL on spatial vision in C57BL/6J mice using a two-alternative, forced-choice visual water task. Briefly, the mice underwent prolonged training near their individual thresholds of contrast or spatial frequency (SF) for pattern discrimination or visual detection for 35 consecutive days. Following training, the contrast-threshold trained mice showed an 87% improvement in contrast sensitivity (CS) and a 55% gain in visual acuity (VA). Similarly, the SF-threshold trained mice exhibited comparable and long-lasting improvements in VA and significant gains in CS over a wide range of SFs. Furthermore, learning largely transferred across eyes and stimulus orientations. Interestingly, learning could transfer from a pattern discrimination task to a visual detection task, but not vice versa. We validated that this VPL fully restored VA in adult amblyopic mice and old mice. Taken together, these data indicate that mice, as a species, exhibit reliable VPL. Intrinsic signal optical imaging revealed that mice with perceptual training had higher cut-off SFs in primary visual cortex (V1) than those without perceptual training. Moreover, perceptual training induced an increase in the dendritic spine density in layer 2/3 pyramidal neurons of V1. These results indicate functional and structural alterations in V1 during VPL. Overall, our VPL mouse model will provide a platform for investigating the neurobiological basis of VPL.

  10. Self-motion Perception Training: Thresholds Improve in the Light but not in the Dark

    PubMed Central

    Hartmann, Matthias; Furrer, Sarah; Herzog, Michael H.; Merfeld, Daniel M.; Mast, Fred W.

    2014-01-01

    We investigated perceptual learning in self-motion perception. Blindfolded participants were displaced leftward or rightward by means of a motion platform, and asked to indicate the direction of motion. A total of eleven participants underwent 3360 practice trials, distributed over twelve (Experiment 1) or six days (Experiment 2). We found no improvement in motion discrimination in either experiment. These results are surprising since perceptual learning has been demonstrated for visual, auditory, and somatosensory discrimination. Improvements in the same task were found when visual input was provided (Experiment 3). The multisensory nature of vestibular information is discussed as a possible explanation of the absence of perceptual learning in darkness. PMID:23392475

  11. Visual complexity: a review.

    PubMed

    Donderi, Don C

    2006-01-01

    The idea of visual complexity, the history of its measurement, and its implications for behavior are reviewed, starting with structuralism and Gestalt psychology at the beginning of the 20th century and ending with visual complexity theory, perceptual learning theory, and neural circuit theory at the beginning of the 21st. Evidence is drawn from research on single forms, form and texture arrays, and visual displays. Form complexity and form probability are shown to be linked through their reciprocal relationship in complexity theory, which is in turn shown to be consistent with recent developments in perceptual learning and neural circuit theory. Directions for further research are suggested.

  12. Relationship between perceptual learning in speech and statistical learning in younger and older adults

    PubMed Central

    Neger, Thordis M.; Rietveld, Toni; Janse, Esther

    2014-01-01

    Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with 60 meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory, and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly. PMID:25225475

  14. Visual texture perception via graph-based semi-supervised learning

    NASA Astrophysics Data System (ADS)

    Zhang, Qin; Dong, Junyu; Zhong, Guoqiang

    2018-04-01

    Perceptual features, for example direction, contrast, and repetitiveness, are important visual factors in how humans perceive a texture. However, quantifying the scale of these perceptual features requires psychophysical experiments, which demand a large amount of human labor and time. This paper focuses on obtaining the perceptual-feature scales of textures from a small number of textures whose perceptual scales are known from a rating psychophysical experiment (what we call labeled textures) together with a mass of unlabeled textures. This is a scenario for which semi-supervised learning is naturally suited. It is meaningful for texture perception research and helpful for expanding perceptual texture databases. A graph-based semi-supervised learning method called random multi-graphs (RMG for short) is proposed for this task. We evaluate different kinds of features, including LBP, Gabor, and a kind of unsupervised deep feature extracted by a PCA-based deep network. The experimental results show that our method achieves satisfactory results no matter what kind of texture features are used.
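    The abstract does not specify the random multi-graphs algorithm itself, but the graph-based semi-supervised setting it describes can be sketched with standard label propagation (in the style of Zhou et al.'s method): build a similarity graph over all textures, then iteratively spread the known perceptual scales to the unlabeled nodes. All function and parameter names below are illustrative, not from the paper:

    ```python
    import numpy as np

    def label_propagation(X, y, alpha=0.99, sigma=1.0, iters=200):
        """Propagate known perceptual scales to unlabeled textures.

        X : (n, d) feature vectors (e.g., LBP or Gabor responses)
        y : (n,) perceptual-scale values; np.nan marks unlabeled textures
        """
        # Gaussian affinity graph over all textures
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * sigma ** 2))
        np.fill_diagonal(W, 0.0)
        # Symmetrically normalized smoothing operator
        Dinv = 1.0 / np.sqrt(W.sum(1))
        S = Dinv[:, None] * W * Dinv[None, :]

        labeled = ~np.isnan(y)
        seed = np.where(labeled, y, 0.0)
        f = seed.copy()
        for _ in range(iters):
            # Blend the neighborhood average with the original labels
            f = alpha * S @ f + (1 - alpha) * seed
        return f
    ```

    With two well-separated clusters of textures and one labeled example per cluster, unlabeled textures end up with scores ordered by their cluster membership, which is the behavior a semi-supervised method exploits when labeled textures are scarce.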

  15. Effectiveness of aides in a perceptual motor training program for children with learning disabilities.

    PubMed

    Gersten, J W; Foppe, K B; Gersten, R; Maxwell, S; Mirrett, P; Gipson, M; Houston, H; Grueter, B

    1975-03-01

    A program for children with learning disabilities associated with perceptual deficits was designed that included elements of gross and fine motor coordination, visual and somatosensory perceptual training, dance, art, music and language. The effectiveness of nonprofessional "perceptual-aides," who were trained in this program, was evaluated. Twenty-eight children with learning disabilities associated with perceptual deficits were treated by occupational, physical, recreational and language therapists; and 27 similarly involved children were treated by two aides, under supervision, after training by therapists. Treatment in both groups was for four hours weekly over a four to seven month period. There was significant improvement in motor skills, visual and somatosensory perception, language and educational skills in the two programs. Although there was no significant difference between the two groups, there was a slight advantage to the aide program. The cost of the aide program was 10 percent higher than the therapist program during the first year, but 22 percent lower than the therapist program during the second year.

  16. Visual Complexity in Orthographic Learning: Modeling Learning across Writing System Variations

    ERIC Educational Resources Information Center

    Chang, Li-Yun; Plaut, David C.; Perfetti, Charles A.

    2016-01-01

    The visual complexity of orthographies varies across writing systems. Prior research has shown that complexity strongly influences the initial stage of reading development: the perceptual learning of grapheme forms. This study presents a computational simulation that examines the degree to which visual complexity leads to grapheme learning…

  17. Learning to Link Visual Contours

    PubMed Central

    Li, Wu; Piëch, Valentin; Gilbert, Charles D.

    2008-01-01

    In complex visual scenes, linking related contour elements is important for object recognition. This process, thought to be stimulus driven and hard wired, has substrates in primary visual cortex (V1). Here, however, we find contour integration in V1 to depend strongly on perceptual learning and top-down influences that are specific to contour detection. In naive monkeys the information about contours embedded in complex backgrounds is absent in V1 neuronal responses, and is independent of the locus of spatial attention. Training animals to find embedded contours induces strong contour-related responses specific to the trained retinotopic region. These responses are most robust when animals perform the contour detection task, but disappear under anesthesia. Our findings suggest that top-down influences dynamically adapt neural circuits according to specific perceptual tasks. This may serve as a general neuronal mechanism of perceptual learning, and reflect top-down mediated changes in cortical states. PMID:18255036

  18. Neural mechanisms of human perceptual learning: electrophysiological evidence for a two-stage process.

    PubMed

    Hamamé, Carlos M; Cosmelli, Diego; Henriquez, Rodrigo; Aboitiz, Francisco

    2011-04-26

    Humans and other animals change the way they perceive the world due to experience. This process has been labeled as perceptual learning, and implies that adult nervous systems can adaptively modify the way in which they process sensory stimulation. However, the mechanisms by which the brain modifies this capacity have not been sufficiently analyzed. We studied the neural mechanisms of human perceptual learning by combining electroencephalographic (EEG) recordings of brain activity and the assessment of psychophysical performance during training in a visual search task. All participants improved their perceptual performance as reflected by an increase in sensitivity (d') and a decrease in reaction time. The EEG signal was acquired throughout the entire experiment revealing amplitude increments, specific and unspecific to the trained stimulus, in event-related potential (ERP) components N2pc and P3 respectively. P3 unspecific modification can be related to context or task-based learning, while N2pc may be reflecting a more specific attentional-related boosting of target detection. Moreover, bell and U-shaped profiles of oscillatory brain activity in gamma (30-60 Hz) and alpha (8-14 Hz) frequency bands may suggest the existence of two phases for learning acquisition, which can be understood as distinctive optimization mechanisms in stimulus processing. We conclude that there are reorganizations in several neural processes that contribute differently to perceptual learning in a visual search task. We propose an integrative model of neural activity reorganization, whereby perceptual learning takes place as a two-stage phenomenon including perceptual, attentional and contextual processes.
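    The sensitivity measure d' reported in this study is the standard signal-detection index, d' = z(hit rate) − z(false-alarm rate). A minimal sketch of computing it from trial counts (with the common log-linear correction for extreme rates; this is illustrative, not the authors' analysis code) might look like:

    ```python
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

        Adds 0.5 to each count (log-linear correction) so that
        rates of exactly 0 or 1 do not produce infinite z-scores.
        """
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)
    ```

    A rising d' across training sessions, as reported here, indicates genuinely improved target detectability rather than a mere shift in response bias.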

  19. Vernier perceptual learning transfers to completely untrained retinal locations after double training: A “piggybacking” effect

    PubMed Central

    Wang, Rui; Zhang, Jun-Yun; Klein, Stanley A.; Levi, Dennis M.; Yu, Cong

    2014-01-01

    Perceptual learning, a process in which training improves visual discrimination, is often specific to the trained retinal location, and this location specificity is frequently regarded as an indication of neural plasticity in the retinotopic visual cortex. However, our previous studies have shown that “double training” enables location-specific perceptual learning, such as Vernier learning, to completely transfer to a new location where an irrelevant task is practiced. Here we show that Vernier learning can be actuated by less location-specific orientation or motion-direction learning to transfer to completely untrained retinal locations. This “piggybacking” effect occurs even if both tasks are trained at the same retinal location. However, piggybacking does not occur when the Vernier task is paired with a more location-specific contrast-discrimination task. This previously unknown complexity challenges the current understanding of perceptual learning and its specificity/transfer. Orientation and motion-direction learning, but not contrast and Vernier learning, appears to activate a global process that allows learning transfer to untrained locations. Moreover, when paired with orientation or motion-direction learning, Vernier learning may be “piggybacked” by the activated global process to transfer to other untrained retinal locations. How this task-specific global activation process is achieved is as yet unknown. PMID:25398974

  20. Making perceptual learning practical to improve visual functions.

    PubMed

    Polat, Uri

    2009-10-01

    Task-specific improvement in performance after training is well established. The finding that learning is stimulus-specific and does not transfer well between different stimuli, between stimulus locations in the visual field, or between the two eyes has been used to support the notion that neurons or assemblies of neurons are modified at the earliest stage of cortical processing. However, the mechanism underlying perceptual learning remains a matter of ongoing debate. Nevertheless, generalization of a trained task to other functions is an important key, both for understanding the neural mechanisms and for the practical value of the training. This manuscript describes a structured perceptual learning method that was previously used (for amblyopia and myopia) and a novel technique and results that were applied to presbyopia. In general, subjects were trained for contrast detection of Gabor targets under lateral masking conditions. Training improved contrast sensitivity and diminished lateral suppression when it existed (amblyopia). The improvement transferred to unrelated functions such as visual acuity. The new results on presbyopia show substantial improvement in spatial and temporal contrast sensitivity, leading to improved processing speed of target detection as well as reaction time. Consequently, subjects benefited by eliminating their need for reading glasses. This transfer of functions indicates that the specificity of improvement in the trained task can be generalized by repetitive practice of target detection covering a sufficient range of spatial frequencies and orientations, leading to improvement in unrelated visual functions. Thus, perceptual learning can be a practical method to improve visual functions in people with impaired or blurred vision.

  1. Perceptual context and individual differences in the language proficiency of preschool children.

    PubMed

    Banai, Karen; Yifat, Rachel

    2016-02-01

    Although the contribution of perceptual processes to language skills during infancy is well recognized, the role of perception in linguistic processing beyond infancy is not well understood. In the experiments reported here, we asked whether manipulating the perceptual context in which stimuli are presented across trials influences how preschool children perform visual (shape-size identification; Experiment 1) and auditory (syllable identification; Experiment 2) tasks. Another goal was to determine whether the sensitivity to perceptual context can explain part of the variance in oral language skills in typically developing preschool children. Perceptual context was manipulated by changing the relative frequency with which target visual (Experiment 1) and auditory (Experiment 2) stimuli were presented in arrays of fixed size, and identification of the target stimuli was tested. Oral language skills were assessed using vocabulary, word definition, and phonological awareness tasks. Changes in perceptual context influenced the performance of the majority of children on both identification tasks. Sensitivity to perceptual context accounted for 7% to 15% of the variance in language scores. We suggest that context effects are an outcome of a statistical learning process. Therefore, the current findings demonstrate that statistical learning can facilitate both visual and auditory identification processes in preschool children. Furthermore, consistent with previous findings in infants and in older children and adults, individual differences in statistical learning were found to be associated with individual differences in language skills of preschool children. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    PubMed

    Alais, David; Cass, John

    2010-06-23

    An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism or by multiple modality-specific systems. We used a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual, or a combined audiovisual temporal order judgment (TOJ) task. Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning, so that transfer of any learning from the trained task could be measured by post-testing the other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, nor did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally, the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.

  3. How Temporal and Spatial Aspects of Presenting Visualizations Affect Learning about Locomotion Patterns

    ERIC Educational Resources Information Center

    Imhof, Birgit; Scheiter, Katharina; Edelmann, Jorg; Gerjets, Peter

    2012-01-01

    Two studies investigated the effectiveness of dynamic and static visualizations for a perceptual learning task (locomotion pattern classification). In Study 1, seventy-five students viewed either dynamic, static-sequential, or static-simultaneous visualizations. For tasks of intermediate difficulty, dynamic visualizations led to better…

  4. Predicting perceptual learning from higher-order cortical processing.

    PubMed

    Wang, Fang; Huang, Jing; Lv, Yaping; Ma, Xiaoli; Yang, Bin; Wang, Encong; Du, Boqi; Li, Wu; Song, Yan

    2016-01-01

    Visual perceptual learning has been shown to be highly specific to the retinotopic location and attributes of the trained stimulus. Recent psychophysical studies suggest that these specificities, which have been associated with early retinotopic visual cortex, may in fact not be inherent in perceptual learning and could be related to higher-order brain functions. Here we provide direct electrophysiological evidence in support of this proposition. In a series of event-related potential (ERP) experiments, we recorded high-density electroencephalography (EEG) from human adults over the course of learning in a texture discrimination task (TDT). The results consistently showed that the earliest C1 component (68-84ms), known to reflect V1 activity driven by feedforward inputs, was not modulated by learning regardless of whether the behavioral improvement is location specific or not. In contrast, two later posterior ERP components (posterior P1 and P160-350) over the occipital cortex and one anterior ERP component (anterior P160-350) over the prefrontal cortex were progressively modified day by day. Moreover, the change of the anterior component was closely correlated with improved behavioral performance on a daily basis. Consistent with recent psychophysical and imaging observations, our results indicate that perceptual learning can mainly involve changes in higher-level visual cortex as well as in the neural networks responsible for cognitive functions such as attention and decision making. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Improving the Perceptual Performance of Learning Disabled Second Graders through Computer Assisted Instruction.

    ERIC Educational Resources Information Center

    Burke, James P.

    The practicum designed a perceptual activities program for learning disabled second graders using computer-assisted instruction. The program develops skills involving visual motor coordination, figure-ground differentiation, form constancy, position in space, and spatial relationships. Five behavioral objectives for each developmental area were…

  6. Factors of Predicted Learning Disorders and their Interaction with Attentional and Perceptual Training Procedures.

    ERIC Educational Resources Information Center

    Friar, John T.

    Two factors of predicted learning disorders were investigated: (1) inability to maintain appropriate classroom behavior (BEH), (2) perceptual discrimination deficit (PERC). Three groups of first-graders (BEH, PERC, normal control) were administered measures of impulse control, distractability, auditory discrimination, and visual discrimination.…

  7. Assessing the Neural Basis of Uncertainty in Perceptual Category Learning through Varying Levels of Distortion

    ERIC Educational Resources Information Center

    Daniel, Reka; Wagner, Gerd; Koch, Kathrin; Reichenbach, Jurgen R.; Sauer, Heinrich; Schlosser, Ralf G. M.

    2011-01-01

    The formation of new perceptual categories involves learning to extract that information from a wide range of often noisy sensory inputs, which is critical for selecting between a limited number of responses. To identify brain regions involved in visual classification learning under noisy conditions, we developed a task on the basis of the…

  8. Increase in MST activity correlates with visual motion learning: A functional MRI study of perceptual learning

    PubMed Central

    Larcombe, Stephanie J.; Kennard, Chris

    2017-01-01

    Abstract Repeated practice of a specific task can improve visual performance, but the neural mechanisms underlying this improvement in performance are not yet well understood. Here we trained healthy participants on a visual motion task daily for 5 days in one visual hemifield. Before and after training, we used functional magnetic resonance imaging (fMRI) to measure the change in neural activity. We also imaged a control group of participants on two occasions who did not receive any task training. While in the MRI scanner, all participants completed the motion task in the trained and untrained visual hemifields separately. Following training, participants improved their ability to discriminate motion direction in the trained hemifield and, to a lesser extent, in the untrained hemifield. The amount of task learning correlated positively with the change in activity in the medial superior temporal (MST) area. MST is the anterior portion of the human motion complex (hMT+). MST changes were localized to the hemisphere contralateral to the region of the visual field, where perceptual training was delivered. Visual areas V2 and V3a showed an increase in activity between the first and second scan in the training group, but this was not correlated with performance. The contralateral anterior hippocampus and bilateral dorsolateral prefrontal cortex (DLPFC) and frontal pole showed changes in neural activity that also correlated with the amount of task learning. These findings emphasize the importance of MST in perceptual learning of a visual motion task. Hum Brain Mapp 39:145–156, 2018. © 2017 Wiley Periodicals, Inc. PMID:28963815

  9. Modeling trial by trial and block feedback in perceptual learning

    PubMed Central

    Liu, Jiajuan; Dosher, Barbara; Lu, Zhong-Lin

    2014-01-01

    Feedback has been shown to play a complex role in visual perceptual learning. It is necessary for performance improvement in some conditions while not others. Different forms of feedback, such as trial-by-trial feedback or block feedback, may both facilitate learning, but with different mechanisms. False feedback can abolish learning. We account for all these results with the Augmented Hebbian Reweight Model (AHRM). Specifically, three major factors in the model advance performance improvement: the external trial-by-trial feedback when available, the self-generated output as an internal feedback when no external feedback is available, and the adaptive criterion control based on the block feedback. Through simulating a comprehensive feedback study (Herzog & Fahle 1997, Vision Research, 37 (15), 2133–2141), we show that the model predictions account for the pattern of learning in seven major feedback conditions. The AHRM can fully explain the complex empirical results on the role of feedback in visual perceptual learning. PMID:24423783
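    The full AHRM is specified in the modeling literature the abstract builds on; its core idea — reweighting of sensory-channel evidence onto a decision unit, with the model's own output standing in for a teaching signal when no external feedback is available — can be caricatured as follows. This is a deliberately simplified toy sketch using a delta-style update, not the published model, and all names are illustrative:

    ```python
    import numpy as np

    def hebbian_reweight(channels, labels, lr=0.05, feedback=True, epochs=30):
        """Toy reweighting of sensory channels onto a decision unit.

        channels : (n, k) channel activations per trial
        labels   : (n,) correct responses coded as -1 or +1
        When external feedback is unavailable, the model's own output
        serves as an internal teaching signal, echoing the AHRM idea.
        """
        rng = np.random.default_rng(0)
        w = rng.normal(0.0, 0.01, channels.shape[1])
        for _ in range(epochs):
            for x, t in zip(channels, labels):
                o = np.tanh(w @ x)                     # decision-unit output
                teach = t if feedback else np.sign(o)  # external vs internal signal
                w += lr * x * (teach - o)              # weight update toward teacher
        return w

    def accuracy(channels, labels, w):
        """Fraction of trials where the reweighted decision matches the label."""
        return np.mean(np.sign(channels @ w) == labels)
    ```

    On linearly separable channel data, training with external feedback drives the weights toward the informative channels, which is the reweighting account of performance improvement described in the abstract.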

  10. Development of a vocabulary of object shapes in a child with a very-early-acquired visual agnosia: a unique case.

    PubMed

    Funnell, Elaine; Wilding, John

    2011-02-01

    We report a longitudinal study of an exceptional child (S.R.) whose early-acquired visual agnosia, following encephalitis at 8 weeks of age, did not prevent her from learning to construct an increasing vocabulary of visual object forms (drawn from different categories), albeit slowly. S.R. had problems perceiving subtle differences in shape; she was unable to segment local letters within global displays; and she would bring complex scenes close to her eyes: a symptom suggestive of an attempt to reduce visual crowding. Investigations revealed a robust ability to use the gestalt grouping factors of proximity and collinearity to detect fragmented forms in noisy backgrounds, compared with a very weak ability to segment fragmented forms on the basis of contrasts of shape. When contrasts in spatial grouping and shape were pitted against each other, shape made little contribution, consistent with problems in perceiving complex scenes, but when shape contrast was varied, and spatial grouping was held constant, S.R. showed the same hierarchy of difficulty as the controls, although her responses were slowed. This is the first report of a child's visual-perceptual development following very early neurological impairments to the visual cortex. Her ability to learn to perceive visual shape following damage at a rudimentary stage of perceptual development contrasts starkly with the loss of such ability in childhood cases of acquired visual agnosia that follow damage to the established perceptual system. Clearly, there is a critical period during which neurological damage to the highly active, early developing visual-perceptual system does not prevent but only impairs further learning.

  11. Perceptual category learning and visual processing: An exercise in computational cognitive neuroscience.

    PubMed

    Cantwell, George; Riesenhuber, Maximilian; Roeder, Jessica L; Ashby, F Gregory

    2017-05-01

    The field of computational cognitive neuroscience (CCN) builds and tests neurobiologically detailed computational models that account for both behavioral and neuroscience data. This article leverages a key advantage of CCN, namely that it should be possible to interface different CCN models in a plug-and-play fashion, to produce a new and biologically detailed model of perceptual category learning. The new model was created from two existing CCN models: the HMAX model of visual object processing and the COVIS model of category learning. Using bitmap images as inputs and by adjusting only a couple of learning-rate parameters, the new HMAX/COVIS model provides impressively good fits to human category-learning data from two qualitatively different experiments that used different types of category structures and different types of visual stimuli. Overall, the model provides a comprehensive neural and behavioral account of basal ganglia-mediated learning. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Feature reliability determines specificity and transfer of perceptual learning in orientation search.

    PubMed

    Yashar, Amit; Denison, Rachel N

    2017-12-01

    Training can modify the visual system to produce a substantial improvement on perceptual tasks and therefore has applications for treating visual deficits. Visual perceptual learning (VPL) is often specific to the trained feature, which gives insight into processes underlying brain plasticity, but limits VPL's effectiveness in rehabilitation. Under what circumstances VPL transfers to untrained stimuli is poorly understood. Here we report a qualitatively new phenomenon: intrinsic variation in the representation of features determines the transfer of VPL. Orientations around cardinal are represented more reliably than orientations around oblique in V1, which has been linked to behavioral consequences such as visual search asymmetries. We studied VPL for visual search of near-cardinal or oblique targets among distractors of the other orientation while controlling for other display and task attributes, including task precision, task difficulty, and stimulus exposure. Learning was the same in all training conditions; however, transfer depended on the orientation of the target, with full transfer of learning from near-cardinal to oblique targets but not the reverse. To evaluate the idea that representational reliability was the key difference between the orientations in determining VPL transfer, we created a model that combined orientation-dependent reliability, improvement of reliability with learning, and an optimal search strategy. Modeling suggested that not only search asymmetries but also the asymmetric transfer of VPL depended on preexisting differences between the reliability of near-cardinal and oblique representations. Transfer asymmetries in model behavior also depended on having different learning rates for targets and distractors, such that greater learning for low-reliability distractors facilitated transfer. These findings suggest that training on sensory features with intrinsically low reliability may maximize the generalizability of learning in complex visual environments.
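    The reliability-based search asymmetry at the core of this account can be illustrated with a toy max-rule simulation. This is an invented sketch, not the authors' model, and the noise values are made up for illustration: each item's orientation is encoded with Gaussian noise whose spread depends on orientation (small near cardinal, large near oblique), and the observer reports the item whose estimate deviates most from the distractor orientation.

    ```python
    import random

    # Toy max-rule search with orientation-dependent encoding noise
    # (illustrative sketch; sigma values are invented, not fit to data).
    random.seed(2)

    SIGMA_CARDINAL = 3.0   # near-cardinal orientations: reliable encoding
    SIGMA_OBLIQUE = 8.0    # oblique orientations: unreliable encoding

    def search_accuracy(target_sigma, distractor_sigma, n_items=8,
                        delta=10.0, trials=4000):
        hits = 0
        for _ in range(trials):
            # distractors at 0 deg (relative); target offset by delta deg
            devs = [abs(random.gauss(0.0, distractor_sigma))
                    for _ in range(n_items - 1)]
            target_dev = abs(random.gauss(delta, target_sigma))
            # report the item deviating most from the distractor orientation
            hits += target_dev > max(devs)
        return hits / trials

    # An oblique (low-reliability) target among cardinal distractors is
    # found far more often than a cardinal target among oblique distractors.
    acc_oblique_target = search_accuracy(SIGMA_OBLIQUE, SIGMA_CARDINAL)
    acc_cardinal_target = search_accuracy(SIGMA_CARDINAL, SIGMA_OBLIQUE)
    ```

    The asymmetry falls out of the max rule alone: reliable distractors rarely produce large spurious deviations, whereas unreliable distractors often out-deviate the target.
    
    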

  14. Prolonged Perceptual Learning of Positional Acuity in Adult Amblyopia

    PubMed Central

    Li, Roger W; Klein, Stanley A; Levi, Dennis M

    2009-01-01

    Amblyopia is a developmental abnormality that results in physiological alterations in the visual cortex and impairs form vision. It is often successfully treated by patching the sound eye in infants and young children, but is generally considered to be untreatable in adults. However, a number of recent studies suggest that repetitive practice of a visual task using the amblyopic eye results in improved performance in both children and adults with amblyopia. These perceptual learning studies have used relatively brief periods of practice; however, clinical studies have shown that the time-constant for successful patching is long. The time-constant for perceptual learning in amblyopia is still unknown. Here we show that the time-constant for perceptual learning depends on the degree of amblyopia. Severe amblyopia requires more than 50 hours (≈35,000 trials) to reach plateau, yielding as much as a five-fold improvement in performance at a rate of ≈1.5% per hour. There is significant transfer of learning from the amblyopic to the dominant eye, suggesting that the learning reflects alterations in higher decision stages of processing. Using a reverse correlation technique, we document, for the first time, a dynamic retuning of the amblyopic perceptual decision template and a substantial reduction in internal spatial distortion. These results show that the mature amblyopic brain is surprisingly malleable, and point to more intensive treatment methods for amblyopia. PMID:19109504

  15. Project ME: A Report on the Learning Wall System.

    ERIC Educational Resources Information Center

    Heilig, Morton L.

    The learning wall system, which consists primarily of a special wall used instead of a screen for a variety of projection purposes, is described, shown diagrammatically, and pictured. Designed to provide visual perceptual motor training on a level that would fall between gross and fine motor performance for perceptually handicapped children, the…

  16. Visual-Perceptual Difficulties and the Impact on Children's Learning: Are Teachers Missing the Page?

    ERIC Educational Resources Information Center

    Boyle, Christopher; Jindal-Snape, Divya

    2012-01-01

    This article attempts to bring to the fore of educational practice the importance of considering the visual-perceptual condition of Meares-Irlen syndrome (MIS) when identifying students who have prolonged reading difficulties. Dyslexia is a term frequently used to label children who have specific difficulties with reading and/or…

  17. Exogenous Attention Enables Perceptual Learning

    PubMed Central

    Szpiro, Sarit F. A.; Carrasco, Marisa

    2015-01-01

    Practice can improve visual perception, and these improvements are considered to be a form of brain plasticity. Training-induced learning is time-consuming and requires hundreds of trials across multiple days. The process of learning acquisition is understudied. Can learning acquisition be potentiated by manipulating visual attentional cues? We developed a protocol in which we used task-irrelevant cues for between-groups manipulation of attention during training. We found that training with exogenous attention can enable the acquisition of learning. Remarkably, this learning was maintained even when observers were subsequently tested under neutral conditions, which indicates that a change in perception was involved. Our study is the first to isolate the effects of exogenous attention and to demonstrate its efficacy to enable learning. We propose that exogenous attention boosts perceptual learning by enhancing stimulus encoding. PMID:26502745

  18. Evidence for arousal-biased competition in perceptual learning.

    PubMed

    Lee, Tae-Ho; Itti, Laurent; Mather, Mara

    2012-01-01

    Arousal-biased competition theory predicts that arousal biases competition in favor of perceptually salient stimuli and against non-salient stimuli (Mather and Sutherland, 2011). The current study tested this hypothesis by having observers complete many trials in a visual search task in which the target either always was salient (a 55° tilted line among 80° distractors) or non-salient (a 55° tilted line among 50° distractors). Each participant completed one session in an emotional condition, in which visual search trials were preceded by negative arousing images, and one session in a non-emotional condition, in which the arousing images were replaced with neutral images (with session order counterbalanced). Test trials in which the target line had to be selected from among a set of lines with different tilts revealed that the emotional condition enhanced identification of the salient target line tilt but impaired identification of the non-salient target line tilt. Thus, arousal enhanced perceptual learning of salient stimuli but impaired perceptual learning of non-salient stimuli.

  20. An integrated reweighting theory of perceptual learning

    PubMed Central

    Dosher, Barbara Anne; Jeter, Pamela; Liu, Jiajuan; Lu, Zhong-Lin

    2013-01-01

    Improvements in performance on visual tasks due to practice are often specific to a retinal position or stimulus feature. Many researchers suggest that specific perceptual learning alters selective retinotopic representations in early visual analysis. However, transfer is almost always practically advantageous, and it does occur. If perceptual learning alters location-specific representations, how does it transfer to new locations? An integrated reweighting theory explains transfer over retinal locations by incorporating higher level location-independent representations into a multilevel learning system. Location transfer is mediated through location-independent representations, whereas stimulus feature transfer is determined by stimulus similarity at both location-specific and location-independent levels. Transfer to new locations/positions differs fundamentally from transfer to new stimuli. After substantial initial training on an orientation discrimination task, switches to a new location or position are compared with switches to new orientations in the same position, or switches of both. Position switches led to the highest degree of transfer, whereas orientation switches led to the highest levels of specificity. A computational model of integrated reweighting is developed and tested that incorporates the details of the stimuli and the experiment. Transfer to an identical orientation task in a new position is mediated via more broadly tuned location-invariant representations, whereas changing orientation in the same position invokes interference or independent learning of the new orientations at both levels, reflecting stimulus dissimilarity. Consistent with single-cell recording studies, perceptual learning alters the weighting of both early and midlevel representations of the visual system. PMID:23898204
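    The two-level architecture described here can be illustrated with a toy simulation. The following is an invented sketch, not the published computational model (all channel definitions and parameters are made up): each location has its own sensory channel plus one shared location-invariant channel feeding a common decision, and training at one location tunes both its local weight and the shared weight, which then carries partial transfer to a new location.

    ```python
    import random

    # Toy integrated-reweighting sketch (illustrative, not the published
    # model): location-specific channels plus one noisier location-invariant
    # channel. Transfer to an untrained location is mediated entirely by the
    # shared invariant weight.
    random.seed(1)

    N_LOC = 2

    def trial(loc, label):
        local = [random.gauss(0, 1) for _ in range(N_LOC)]
        local[loc] += label                       # signal at this location
        invariant = label + random.gauss(0, 1.5)  # broadly tuned channel
        return local, invariant

    def train(trials=3000, loc=0, lr=0.02):
        w_local = [0.0] * N_LOC
        w_inv = 0.0
        for _ in range(trials):
            label = random.choice([-1, 1])
            local, inv = trial(loc, label)
            # feedback-driven Hebbian reweighting of every channel
            for i in range(N_LOC):
                w_local[i] += lr * label * local[i]
            w_inv += lr * label * inv
        return w_local, w_inv

    def accuracy(w_local, w_inv, loc, trials=2000):
        hits = 0
        for _ in range(trials):
            label = random.choice([-1, 1])
            local, inv = trial(loc, label)
            out = sum(w * x for w, x in zip(w_local, local)) + w_inv * inv
            hits += ((1 if out >= 0 else -1) == label)
        return hits / trials

    w_local, w_inv = train(loc=0)
    acc_trained = accuracy(w_local, w_inv, loc=0)
    acc_transfer = accuracy(w_local, w_inv, loc=1)  # via invariant channel
    ```

    Performance at the untrained location stays above chance because the shared weight was learned, but below trained-location performance because the new location's specific weight was never tuned, reproducing partial specificity.
    
    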

  1. Accommodating Elementary Students' Learning Styles.

    ERIC Educational Resources Information Center

    Wallace, James

    1995-01-01

    Examines the perceptual learning style preferences of sixth- and seventh-grade students in the Philippines. Finds that the visual modality was the most preferred and the auditory modality was the least preferred. Offers suggestions for accommodating visual, tactile, and kinesthetic preferences. (RS)

  2. Constraints on the Transfer of Perceptual Learning in Accented Speech

    PubMed Central

    Eisner, Frank; Melinger, Alissa; Weber, Andrea

    2013-01-01

    The perception of speech sounds can be re-tuned through a mechanism of lexically driven perceptual learning after exposure to instances of atypical speech production. This study asked whether this re-tuning is sensitive to the position of the atypical sound within the word. We investigated perceptual learning using English voiced stop consonants, which are commonly devoiced in word-final position by Dutch learners of English. After exposure to a Dutch learner’s productions of devoiced stops in word-final position (but not in any other positions), British English (BE) listeners showed evidence of perceptual learning in a subsequent cross-modal priming task, where auditory primes with devoiced final stops (e.g., “seed”, pronounced [siːtʰ]), facilitated recognition of visual targets with voiced final stops (e.g., SEED). In Experiment 1, this learning effect generalized to test pairs where the critical contrast was in word-initial position, e.g., auditory primes such as “town” facilitated recognition of visual targets like DOWN. Control listeners, who had not heard any stops by the speaker during exposure, showed no learning effects. The generalization to word-initial position did not occur when participants had also heard correctly voiced, word-initial stops during exposure (Experiment 2), and when the speaker was a native BE speaker who mimicked the word-final devoicing (Experiment 3). The readiness of the perceptual system to generalize a previously learned adjustment to other positions within the word thus appears to be modulated by distributional properties of the speech input, as well as by the perceived sociophonetic characteristics of the speaker. The results suggest that the transfer of pre-lexical perceptual adjustments that occur through lexically driven learning can be affected by a combination of acoustic, phonological, and sociophonetic factors. PMID:23554598

  3. Geometry of the perceptual space

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Palmer, Stephen; Eghbalnia, Hamid; Carew, John

    1999-09-01

    The concept of space and geometry varies across subjects. Following Poincaré, we consider the construction of the perceptual space as a continuum equipped with a notion of magnitude. The study of the relationships of objects in the perceptual space gives rise to what we may call perceptual geometry. Computational modeling of objects and investigation of their deeper perceptual-geometric properties (beyond qualitative arguments) require a mathematical representation of the perceptual space. Within the realm of such a mathematical/computational representation, visual perception can be studied as in well-understood logic-based geometry. This, however, does not mean that one could reduce all problems of visual perception to their geometric counterparts. Rather, visual perception, as reported by a human observer, has a subjective factor that can be analytically quantified only through statistical reasoning and in the course of repetitive experiments. Thus, the desire to experimentally verify the statements in perceptual geometry leads to an additional probabilistic structure imposed on the perceptual space, whose amplitudes are measured through intervention by human observers. We propose a model for the perceptual space and the case of perception of textured surfaces as a starting point for object recognition. To rigorously present these ideas and propose computational simulations for testing the theory, we present the model of the perceptual geometry of surfaces through an amplification of the theory of Riemannian foliations in differential topology, augmented by statistical learning theory. When we refer to the perceptual geometry of a human observer, the theory takes into account the Bayesian formulation of the prior state of the knowledge of the observer and Hebbian learning. We use a Parallel Distributed Connectionist paradigm for computational modeling and experimental verification of our theory.

  4. Bottom-up and top-down influences at untrained conditions determine perceptual learning specificity and transfer

    PubMed Central

    Xiong, Ying-Zi; Zhang, Jun-Yun; Yu, Cong

    2016-01-01

    Perceptual learning is often orientation and location specific, which may indicate neuronal plasticity in early visual areas. However, learning specificity diminishes with additional exposure of the transfer orientation or location via irrelevant tasks, suggesting that the specificity is related to untrained conditions, likely because neurons representing untrained conditions are neither bottom-up stimulated nor top-down attended during training. To demonstrate these top-down and bottom-up contributions, we applied a “continuous flash suppression” technique to suppress the exposure stimulus into sub-consciousness, and with additional manipulations to achieve pure bottom-up stimulation or top-down attention with the transfer condition. We found that either bottom-up or top-down influences enabled significant transfer of orientation and Vernier discrimination learning. These results suggest that learning specificity may result from under-activations of untrained visual neurons due to insufficient bottom-up stimulation and/or top-down attention during training. High-level perceptual learning thus may not functionally connect to these neurons for learning transfer. DOI: http://dx.doi.org/10.7554/eLife.14614.001 PMID:27377357

  5. Increase in MST activity correlates with visual motion learning: A functional MRI study of perceptual learning.

    PubMed

    Larcombe, Stephanie J; Kennard, Chris; Bridge, Holly

    2018-01-01

    Repeated practice of a specific task can improve visual performance, but the neural mechanisms underlying this improvement in performance are not yet well understood. Here we trained healthy participants on a visual motion task daily for 5 days in one visual hemifield. Before and after training, we used functional magnetic resonance imaging (fMRI) to measure the change in neural activity. We also imaged a control group of participants on two occasions who did not receive any task training. While in the MRI scanner, all participants completed the motion task in the trained and untrained visual hemifields separately. Following training, participants improved their ability to discriminate motion direction in the trained hemifield and, to a lesser extent, in the untrained hemifield. The amount of task learning correlated positively with the change in activity in the medial superior temporal (MST) area. MST is the anterior portion of the human motion complex (hMT+). MST changes were localized to the hemisphere contralateral to the region of the visual field, where perceptual training was delivered. Visual areas V2 and V3a showed an increase in activity between the first and second scan in the training group, but this was not correlated with performance. The contralateral anterior hippocampus and bilateral dorsolateral prefrontal cortex (DLPFC) and frontal pole showed changes in neural activity that also correlated with the amount of task learning. These findings emphasize the importance of MST in perceptual learning of a visual motion task. Hum Brain Mapp 39:145-156, 2018. © 2017 Wiley Periodicals, Inc. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  6. Perceptual Learning via Modification of Cortical Top-Down Signals

    PubMed Central

    Schäfer, Roland; Vasilaki, Eleni; Senn, Walter

    2007-01-01

    The primary visual cortex (V1) is pre-wired to facilitate the extraction of behaviorally important visual features. Collinear edge detectors in V1, for instance, mutually enhance each other to improve the perception of lines against a noisy background. The same pre-wiring that facilitates line extraction, however, is detrimental when subjects have to discriminate the brightness of different line segments. How is it possible to improve in one task by unsupervised practicing, without getting worse in the other task? The classical view of perceptual learning is that practicing modulates the feedforward input stream through synaptic modifications onto or within V1. However, any rewiring of V1 would deteriorate other perceptual abilities different from the trained one. We propose a general neuronal model showing that perceptual learning can modulate top-down input to V1 in a task-specific way while feedforward and lateral pathways remain intact. Consistent with biological data, the model explains how context-dependent brightness discrimination is improved by a top-down recruitment of recurrent inhibition and a top-down induced increase of the neuronal gain within V1. Both the top-down modulation of inhibition and of neuronal gain are suggested to be universal features of cortical microcircuits which enable perceptual learning. PMID:17715996

  7. Thirty-Five Years of Research on Perceptual Strengths: Essential Strategies to Promote Learning

    ERIC Educational Resources Information Center

    Dunn, Rita; Dunn, Kenneth

    2005-01-01

    This article discusses the evolution of teaching approaches in concert with the findings of over three decades of research on students' perceptual strengths. Confusing reports of successes and only limited successes for students with varied perceptual strengths suggest that combined auditory, visual, tactual, and/or kinesthetic instructional…

  8. The Neural Correlates of Hierarchical Predictions for Perceptual Decisions.

    PubMed

    Weilnhammer, Veith A; Stuke, Heiner; Sterzer, Philipp; Schmack, Katharina

    2018-05-23

    Sensory information is inherently noisy, sparse, and ambiguous. In contrast, visual experience is usually clear, detailed, and stable. Bayesian theories of perception resolve this discrepancy by assuming that prior knowledge about the causes underlying sensory stimulation actively shapes perceptual decisions. The CNS is believed to entertain a generative model aligned to dynamic changes in the hierarchical states of our volatile sensory environment. Here, we used model-based fMRI to study the neural correlates of the dynamic updating of hierarchically structured predictions in male and female human observers. We devised a crossmodal associative learning task with covertly interspersed ambiguous trials in which participants engaged in hierarchical learning based on changing contingencies between auditory cues and visual targets. By inverting a Bayesian model of perceptual inference, we estimated individual hierarchical predictions, which significantly biased perceptual decisions under ambiguity. Although "high-level" predictions about the cue-target contingency correlated with activity in supramodal regions such as orbitofrontal cortex and hippocampus, dynamic "low-level" predictions about the conditional target probabilities were associated with activity in retinotopic visual cortex. Our results suggest that our CNS updates distinct representations of hierarchical predictions that continuously affect perceptual decisions in a dynamically changing environment. SIGNIFICANCE STATEMENT Bayesian theories posit that our brain entertains a generative model to provide hierarchical predictions regarding the causes of sensory information. Here, we use behavioral modeling and fMRI to study the neural underpinnings of such hierarchical predictions. We show that "high-level" predictions about the strength of dynamic cue-target contingencies during crossmodal associative learning correlate with activity in orbitofrontal cortex and the hippocampus, whereas "low-level" conditional target probabilities were reflected in retinotopic visual cortex. Our findings empirically corroborate theorizations on the role of hierarchical predictions in visual perception and contribute substantially to a longstanding debate on the link between sensory predictions and orbitofrontal or hippocampal activity. Our work fundamentally advances the mechanistic understanding of perceptual inference in the human brain. Copyright © 2018 the authors 0270-6474/18/385008-14$15.00/0.
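    The core idea, that a learned prediction biases perceptual decisions when the sensory evidence is ambiguous, can be sketched in a few lines. This is a deliberately minimal illustration, not the authors' Bayesian model: the "high-level" contingency estimate is tracked with a simple leaky integrator (a stand-in for full hierarchical inference), and on an ambiguous trial the likelihood is flat, so the decision follows the learned prior.

    ```python
    # Minimal sketch of a learned cue-target prediction biasing perception
    # of an ambiguous stimulus (illustrative; not the published model).

    def update(p, outcome, lr=0.1):
        # leaky-integrator estimate of p(target A | cue)
        return p + lr * (outcome - p)

    p = 0.5                     # start with no contingency knowledge
    for _ in range(50):
        p = update(p, 1.0)      # cue is followed by target A in this block

    # Ambiguous trial: the sensory likelihood favors neither percept,
    # so the posterior reduces to the learned prior.
    lik_a = lik_b = 0.5
    posterior = (lik_a * p) / (lik_a * p + lik_b * (1 - p))
    decision = "A" if posterior > 0.5 else "B"
    ```

    After a stable block of cue-A pairings the prior saturates near 1, and ambiguous trials are resolved toward percept A, mirroring the prediction-driven bias the study measured.
    
    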

  9. Perceptual and academic patterns of learning-disabled/gifted students.

    PubMed

    Waldron, K A; Saphire, D G

    1992-04-01

    This research explored ways gifted children with learning disabilities perceive and recall auditory and visual input and apply this information to reading, mathematics, and spelling. Twenty-four learning-disabled/gifted children and a matched control group of normally achieving gifted students were tested for oral reading, word recognition and analysis, listening comprehension, and spelling. In mathematics, they were tested for numeration, mental and written computation, word problems, and numerical reasoning. To explore perception and memory skills, students were administered formal tests of visual and auditory memory as well as auditory discrimination of sounds. Their responses to reading and to mathematical computations were further considered for evidence of problems in visual discrimination, visual sequencing, and visual spatial areas. Analyses indicated that these learning-disabled/gifted students were significantly weaker than controls in their decoding skills, in spelling, and in most areas of mathematics. They were also significantly weaker in auditory discrimination and memory, and in visual discrimination, sequencing, and spatial abilities. Conclusions are that these underlying perceptual and memory deficits may be related to students' academic problems.

  10. Exogenous Attention Enables Perceptual Learning.

    PubMed

    Szpiro, Sarit F A; Carrasco, Marisa

    2015-12-01

    Practice can improve visual perception, and these improvements are considered to be a form of brain plasticity. Training-induced learning is time-consuming and requires hundreds of trials across multiple days. The process of learning acquisition is understudied. Can learning acquisition be potentiated by manipulating visual attentional cues? We developed a protocol in which we used task-irrelevant cues for between-groups manipulation of attention during training. We found that training with exogenous attention can enable the acquisition of learning. Remarkably, this learning was maintained even when observers were subsequently tested under neutral conditions, which indicates that a change in perception was involved. Our study is the first to isolate the effects of exogenous attention and to demonstrate its efficacy to enable learning. We propose that exogenous attention boosts perceptual learning by enhancing stimulus encoding. © The Author(s) 2015.

  11. Fluoxetine Does Not Enhance Visual Perceptual Learning and Triazolam Specifically Impairs Learning Transfer

    PubMed Central

    Lagas, Alice K.; Black, Joanna M.; Byblow, Winston D.; Fleming, Melanie K.; Goodman, Lucy K.; Kydd, Robert R.; Russell, Bruce R.; Stinear, Cathy M.; Thompson, Benjamin

    2016-01-01

    The selective serotonin reuptake inhibitor fluoxetine significantly enhances adult visual cortex plasticity in the rat. This effect is related to decreased gamma-aminobutyric acid (GABA)-mediated inhibition and identifies fluoxetine as a potential agent for enhancing plasticity in the adult human brain. We tested the hypothesis that fluoxetine would enhance visual perceptual learning of a motion direction discrimination (MDD) task in humans. We also investigated (1) the effect of fluoxetine on visual and motor cortex excitability and (2) the impact of increased GABA-mediated inhibition following a single dose of triazolam on post-training MDD task performance. Within a double-blind, placebo-controlled design, 20 healthy adult participants completed a 19-day course of fluoxetine (n = 10, 20 mg per day) or placebo (n = 10). Participants were trained on the MDD task over the final 5 days of fluoxetine administration. Accuracy for the trained MDD stimulus and an untrained MDD stimulus configuration was assessed before and after training, after triazolam and 1 week after triazolam. Motor and visual cortex excitability were measured using transcranial magnetic stimulation. Fluoxetine did not enhance the magnitude or rate of perceptual learning, and full transfer of learning to the untrained stimulus was observed for both groups. After training was complete, triazolam had no effect on trained task performance but significantly impaired untrained task performance. No consistent effects of fluoxetine on cortical excitability were observed. The results do not support the hypothesis that fluoxetine can enhance learning in humans. However, the specific effect of triazolam on MDD task performance for the untrained stimulus suggests that learning and learning transfer rely on dissociable neural mechanisms. PMID:27807412

  12. Limited transfer of long-term motion perceptual learning with double training.

    PubMed

    Liang, Ju; Zhou, Yifeng; Fahle, Manfred; Liu, Zili

    2015-01-01

    A significant recent development in visual perceptual learning research is the double training technique. With this technique, Xiao, Zhang, Wang, Klein, Levi, and Yu (2008) have found complete transfer in tasks that had previously been shown to be stimulus specific. The significance of this finding is that this technique has since been successful in all tasks tested, including motion direction discrimination. Here, we investigated whether or not this technique could generalize to longer-term learning, using the method of constant stimuli. Our task was learning to discriminate motion directions of random dots. The second leg of training was contrast discrimination along a new average direction of the same moving dots. We found that, although exposure of moving dots along a new direction facilitated motion direction discrimination, this partial transfer was far from complete. We conclude that, although perceptual learning is transferable under certain conditions, stimulus specificity also remains an inherent characteristic of motion perceptual learning.

  13. Age-related declines of stability in visual perceptual learning.

    PubMed

    Chang, Li-Hung; Shibata, Kazuhisa; Andersen, George J; Sasaki, Yuka; Watanabe, Takeo

    2014-12-15

    One of the biggest questions in learning is how a system can resolve the plasticity-stability dilemma. Specifically, a learning system needs not only a high capability for learning new items (plasticity) but also high stability, retaining important items and processes by preventing unimportant or irrelevant information from being learned. This dilemma should hold true for visual perceptual learning (VPL), which is defined as a long-term increase in performance on a visual task as a result of visual experience. Although it is well known that aging influences learning, the effect of aging on the stability and plasticity of the visual system is unclear. To address this question, we asked older and younger adults to perform a task while a task-irrelevant feature was merely exposed. Older individuals learned the task-irrelevant features that younger individuals did not: both features that were strong enough for younger individuals to suppress and features that were too weak for younger individuals to learn. At the same time, there was no reduction of plasticity in older individuals on the task tested. These results suggest that the older visual system is less stable against unimportant information than the younger visual system. Learning problems in older individuals may thus be due to a decrease in stability rather than a decrease in plasticity, at least in VPL. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Improving visual perception through neurofeedback

    PubMed Central

    Scharnowski, Frank; Hutton, Chloe; Josephs, Oliver; Weiskopf, Nikolaus; Rees, Geraint

    2012-01-01

    Perception depends on the interplay of ongoing spontaneous activity and stimulus-evoked activity in sensory cortices. This raises the possibility that training ongoing spontaneous activity alone might be sufficient for enhancing perceptual sensitivity. To test this, we trained human participants to control ongoing spontaneous activity in circumscribed regions of retinotopic visual cortex using real-time functional MRI-based neurofeedback. After training, we tested participants using a new and previously untrained visual detection task that was presented at the visual field location corresponding to the trained region of visual cortex. Perceptual sensitivity was significantly enhanced only when participants who had previously learned control over ongoing activity were now exercising control, and only for that region of visual cortex. Our new approach allows us to non-invasively and non-pharmacologically manipulate regionally specific brain activity, and thus provide ‘brain training’ to deliver particular perceptual enhancements. PMID:23223302
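
    Perceptual sensitivity in detection tasks of this kind is conventionally summarized as d′ from hit and false-alarm rates. Whether this study used a yes/no design is not stated in the record, so the sketch below is a generic signal-detection illustration with hypothetical rates:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical pre- vs. post-neurofeedback detection performance
pre = d_prime(0.70, 0.30)    # ~1.05
post = d_prime(0.85, 0.20)   # ~1.88
print(post > pre)            # sensitivity enhanced at the trained location
```

    A d′ increase with unchanged response bias is the usual criterion for a genuine sensitivity change rather than a shift in willingness to report the target.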

  15. Perceptual learning: toward a comprehensive theory.

    PubMed

    Watanabe, Takeo; Sasaki, Yuka

    2015-01-03

    Visual perceptual learning (VPL) is a long-term performance increase resulting from visual perceptual experience. Task-relevant VPL of a feature results from training on a task for which the feature is relevant. Task-irrelevant VPL arises from mere exposure to a feature irrelevant to the trained task. At least two serious problems exist. First, there is controversy over which stage of information processing is changed in association with task-relevant VPL. Second, no model has explained both task-relevant and task-irrelevant VPL. Here we propose a dual plasticity model in which feature-based plasticity is a change in the representation of the learned feature, and task-based plasticity is a change in the processing of the trained task. Although both types of plasticity underlie task-relevant VPL, only feature-based plasticity underlies task-irrelevant VPL. This model provides a new comprehensive framework in which apparently contradictory results can be explained.

  16. Aversive learning shapes neuronal orientation tuning in human visual cortex.

    PubMed

    McTeague, Lisa M; Gruss, L Forest; Keil, Andreas

    2015-07-28

    The responses of sensory cortical neurons are shaped by experience. As a result, perceptual biases evolve, selectively facilitating the detection and identification of sensory events that are relevant for adaptive behaviour. Here we examine the involvement of human visual cortex in the formation of learned perceptual biases. We use classical aversive conditioning to associate one out of a series of oriented gratings with a noxious sound stimulus. After as few as two grating-sound pairings, visual cortical responses to the sound-paired grating show selective amplification. Furthermore, as learning progresses, responses to the orientations with greatest similarity to the sound-paired grating are increasingly suppressed, suggesting inhibitory interactions between orientation-selective neuronal populations. Changes in cortical connectivity between occipital and fronto-temporal regions mirror the changes in visuo-cortical response amplitudes. These findings suggest that short-term behaviourally driven retuning of human visual cortical neurons involves distal top-down projections as well as local inhibitory interactions.

  17. Structured Activities in Perceptual Training to Aid Retention of Visual and Auditory Images.

    ERIC Educational Resources Information Center

    Graves, James W.; And Others

    The experimental program in structured activities in perceptual training was said to have two main objectives: to train children in retention of visual and auditory images and to increase the children's motivation to learn. Eight boys and girls participated in the program for two hours daily for a 10-week period. The age range was 7.0 to 12.10…

  18. Category learning increases discriminability of relevant object dimensions in visual cortex.

    PubMed

    Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel

    2013-04-01

    Learning to categorize objects can transform how they are perceived, causing relevant perceptual dimensions predictive of object category to become enhanced. For example, an expert mycologist might become attuned to species-specific patterns of spacing between mushroom gills but learn to ignore cap textures attributable to varying environmental conditions. These selective changes in perception can persist beyond the act of categorizing objects and influence our ability to discriminate between them. Using functional magnetic resonance imaging adaptation, we demonstrate that such category-specific perceptual enhancements are associated with changes in the neural discriminability of object representations in visual cortex. Regions within the anterior fusiform gyrus became more sensitive to small variations in shape that were relevant during prior category learning. In addition, extrastriate occipital areas showed heightened sensitivity to small variations in shape that spanned the category boundary. Visual representations in cortex, just like our perception, are sensitive to an object's history of categorization.

  19. Deep neural networks for modeling visual perceptual learning.

    PubMed

    Wenliang, Li; Seitz, Aaron R

    2018-05-23

    Understanding visual perceptual learning (VPL) has become increasingly challenging as new phenomena are discovered with novel stimuli and training paradigms. While existing models aid our knowledge of critical aspects of VPL, the connections shown by these models between behavioral learning and plasticity across different brain areas are typically superficial. Most models explain VPL as readout from simple perceptual representations to decision areas and are not easily adaptable to explain new findings. Here, we show that a well-known instance of deep neural network (DNN), while not designed specifically for VPL, provides a computational model of VPL with enough complexity to be studied at many levels of analysis. After learning a Gabor orientation discrimination task, the DNN model reproduced key behavioral results, including increasing specificity with higher task precision, and also suggested that learning precise discriminations could asymmetrically transfer to coarse discriminations when the stimulus conditions varied. In line with the behavioral findings, the distribution of plasticity moved towards lower layers when task precision increased, and this distribution was also modulated by tasks with different stimulus types. Furthermore, learning in the network units demonstrated close resemblance to extant electrophysiological recordings in monkey visual areas. Altogether, the DNN fulfilled predictions of existing theories regarding specificity and plasticity, and reproduced findings of tuning changes in neurons of the primate visual areas. Although the comparisons were mostly qualitative, the DNN provides a new method of studying VPL and can serve as a testbed for theories and assist in generating predictions for physiological investigations. SIGNIFICANCE STATEMENT Visual perceptual learning (VPL) has been found to cause changes at multiple stages of the visual hierarchy. We found that training a deep neural network (DNN) on an orientation discrimination task produced behavioral and physiological patterns similar to those found in human and monkey experiments. Unlike existing VPL models, the DNN was pre-trained on natural images to reach high performance in object recognition but was not designed specifically for VPL, and yet it fulfilled predictions of existing theories regarding specificity and plasticity, and reproduced findings of tuning changes in neurons of the primate visual areas. When used with care, this unbiased and deep-hierarchical model can provide new ways of studying VPL from behavior to physiology. Copyright © 2018 the authors.
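
    The "readout" baseline that the DNN is contrasted with can be illustrated in a few lines: generate noisy Gabor patches at two orientations and learn a linear reweighting of fixed pixel "channels" onto a decision unit. This toy model (orientations, noise level, and learning rule are all illustrative choices, not the paper's) discriminates well after training:

```python
import numpy as np

def gabor(theta, size=16, sf=0.3):
    """A Gabor patch at orientation theta (radians)."""
    r = np.arange(size) - size / 2
    x, y = np.meshgrid(r, r)
    xt = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * (size / 4) ** 2))
    return envelope * np.cos(2 * np.pi * sf * xt)

rng = np.random.default_rng(0)

def sample(n, noise=0.8):
    """Noisy Gabors at 40 vs. 50 degrees; labels are -1/+1."""
    labels = rng.choice([-1.0, 1.0], n)
    thetas = np.deg2rad(45 + 5 * labels)
    X = np.stack([gabor(t).ravel() for t in thetas])
    return X + noise * rng.standard_normal(X.shape), labels

# Error-driven reweighting of fixed pixel "channels" onto a decision unit:
# sensory representations stay unchanged, only the readout weights learn
w = np.zeros(16 * 16)
for _ in range(200):
    X, y = sample(50)
    w += 0.01 * ((y - np.sign(X @ w)) @ X)

X_test, y_test = sample(500)
acc = float(np.mean(np.sign(X_test @ w) == y_test))
print(round(acc, 2))
```

    The contrast drawn in the abstract is that a deep hierarchy additionally lets plasticity distribute across layers, whereas a pure readout model like this one confines all learning to the final weights.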

  20. Exogenous and endogenous attention during perceptual learning differentially affect post-training target thresholds

    PubMed Central

    Mukai, Ikuko; Bahadur, Kandy; Kesavabhotla, Kartik; Ungerleider, Leslie G.

    2012-01-01

    There is conflicting evidence in the literature regarding the role played by attention in perceptual learning. To further examine this issue, we independently manipulated exogenous and endogenous attention and measured the rate of perceptual learning of oriented Gabor patches presented in different quadrants of the visual field. In this way, we could track learning at attended, divided-attended, and unattended locations. We also measured contrast thresholds of the Gabor patches before and after training. Our results showed that, for both exogenous and endogenous attention, accuracy in performing the orientation discrimination improved to a greater extent at attended than at unattended locations. Importantly, however, only exogenous attention resulted in improved contrast thresholds. These findings suggest that both exogenous and endogenous attention facilitate perceptual learning, but that these two types of attention may be mediated by different neural mechanisms. PMID:21282340
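
    Contrast thresholds such as those measured before and after training here are typically estimated with an adaptive staircase. A minimal 3-down/1-up sketch with a hypothetical simulated observer (the study's actual psychophysical procedure is not specified in this record):

```python
import random

def three_down_one_up(p_correct, start=0.5, step=0.05, trials=300, floor=0.01):
    """3-down/1-up staircase: lower the contrast after 3 consecutive correct
    responses, raise it after any error; converges near 79% correct."""
    contrast, run, levels = start, 0, []
    for _ in range(trials):
        if random.random() < p_correct(contrast):
            run += 1
            if run == 3:
                contrast, run = max(floor, contrast - step), 0
        else:
            contrast, run = contrast + step, 0
        levels.append(contrast)
    return sum(levels[-100:]) / 100   # threshold: mean of late levels

random.seed(1)
# Hypothetical observer whose accuracy rises linearly with contrast
observer = lambda c: 0.5 + 0.5 * min(1.0, c / 0.4)
threshold = three_down_one_up(observer)
print(round(threshold, 2))
```

    A post-training drop in the estimated threshold, as reported here for exogenous attention, would mean the same orientation judgment can be made at lower stimulus contrast.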

  1. Learning viewpoint invariant perceptual representations from cluttered images.

    PubMed

    Spratling, Michael W

    2005-05-01

    In order to perform object recognition, it is necessary to form perceptual representations that are sufficiently specific to distinguish between objects, but that are also sufficiently flexible to generalize across changes in location, rotation, and scale. A standard method for learning perceptual representations that are invariant to viewpoint is to form temporal associations across image sequences showing object transformations. However, this method requires that individual stimuli be presented in isolation and is therefore unlikely to succeed in real-world applications where multiple objects can co-occur in the visual input. This paper proposes a simple modification to the learning method that can overcome this limitation and results in more robust learning of invariant representations.
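
    The temporal-association method this paper builds on can be sketched with a Földiák-style trace rule: a decaying activity trace links temporally adjacent views of an object to the same output unit, so different views come to drive it similarly. A toy illustration with random "views" (all parameters hypothetical, and without the paper's proposed modification for clutter):

```python
import numpy as np

rng = np.random.default_rng(7)

# Two hypothetical objects, each rendered as 4 distinct random "views"
views = {obj: rng.standard_normal((4, 20)) for obj in ("A", "B")}

w = rng.standard_normal(20)
w /= np.linalg.norm(w)
trace, eta, lr = 0.0, 0.8, 0.1

# Training: only object A's views, shown as a temporal sequence, so the
# decaying activity trace binds successive views to one output unit
for _ in range(50):
    for v in views["A"]:
        y = w @ v
        trace = eta * trace + (1 - eta) * y
        w += lr * trace * v              # Hebbian update gated by the trace
        w /= np.linalg.norm(w)           # normalization keeps w bounded

resp = {o: float(np.mean([(w @ v) ** 2 for v in vs])) for o, vs in views.items()}
print(resp["A"] > resp["B"])
```

    The failure mode the paper targets appears when views of several objects are interleaved in the same sequence: the trace then associates views across objects, which is why isolated presentation is normally required.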

  2. Adult Visual Cortical Plasticity

    PubMed Central

    Gilbert, Charles D.; Li, Wu

    2012-01-01

    The visual cortex retains a capacity for experience-dependent change, or cortical plasticity, throughout life. Plasticity is invoked for encoding information during perceptual learning, by internally representing the regularities of the visual environment, which is useful for facilitating intermediate-level vision: contour integration and surface segmentation. The same mechanisms have adaptive value for functional recovery after CNS damage, such as that associated with stroke or neurodegenerative disease. A common feature of plasticity in primary visual cortex (V1) is an association field that links contour elements across the visual field. The circuitry underlying the association field includes a plexus of long-range horizontal connections formed by cortical pyramidal cells. These connections undergo rapid and exuberant sprouting and pruning in response to removal of sensory input, which can account for the topographic reorganization following retinal lesions. Similar alterations in cortical circuitry may be involved in perceptual learning, and the changes observed in V1 may be representative of how learned information is encoded throughout the cerebral cortex. PMID:22841310

  3. Look, Snap, See: Visual Literacy through the Camera.

    ERIC Educational Resources Information Center

    Spoerner, Thomas M.

    1981-01-01

    Activities involving photographs stimulate visual perceptual awareness. Children understand visual stimuli before having verbal capacity to deal with the world. Vision becomes the primary means for learning, understanding, and adjusting to the environment. Photography can provide an effective avenue to visual literacy. (Author)

  4. Generalization of perceptual and motor learning: a causal link with memory encoding and consolidation?

    PubMed

    Censor, N

    2013-10-10

    In both perceptual and motor learning, numerous studies have shown specificity of learning to the trained eye or hand and to the physical features of the task. However, generalization of learning is possible in both perceptual and motor domains. Here, I review evidence for generalization of perceptual and motor learning, suggesting that generalization patterns are affected by the way in which the original memory is encoded and consolidated. Generalization may be facilitated during fast learning, with possible engagement of higher-order brain areas recurrently interacting with the primary visual or motor cortices encoding the stimulus or movement memories. Such generalization may be supported by sleep, involving functional interactions between lower- and higher-order brain areas. Repeated exposure to the task may alter generalization patterns of learning and overall offline learning. Developing unifying frameworks across learning modalities and better understanding the conditions under which learning generalizes may provide insight into the neural mechanisms underlying procedural learning and have useful clinical implications. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  5. Concept cells through associative learning of high-level representations.

    PubMed

    Reddy, Leila; Thorpe, Simon J

    2014-10-22

    In this issue of Neuron, Quian Quiroga et al. (2014) show that neurons in the human medial temporal lobe (MTL) follow subjects' perceptual states rather than the features of the visual input. Patients with MTL damage, however, have intact perceptual abilities but suffer instead from extreme forgetfulness. Thus, the reported MTL neurons could create new memories of the current perceptual state.

  6. Feature saliency and feedback information interactively impact visual category learning

    PubMed Central

    Hammer, Rubi; Sloutsky, Vladimir; Grill-Spector, Kalanit

    2015-01-01

    Visual category learning (VCL) involves detecting which features are most relevant for categorization. VCL relies on attentional learning, which enables attention to be effectively redirected to the object features most relevant for categorization, while ‘filtering out’ irrelevant features. When features relevant for categorization are not salient, VCL also relies on perceptual learning, which enables becoming more sensitive to subtle yet important differences between objects. Little is known about how attentional learning and perceptual learning interact when VCL relies on both processes at the same time. Here we tested this interaction. Participants performed VCL tasks in which they learned to categorize novel stimuli by detecting the feature dimension relevant for categorization. Tasks varied both in feature saliency (low-saliency tasks that required perceptual learning vs. high-saliency tasks) and in feedback information (tasks with mid-information, moderately ambiguous feedback that increased attentional load, vs. tasks with high-information, non-ambiguous feedback). We found that mid-information and high-information feedback were similarly effective for VCL in high-saliency tasks. This suggests that an increased attentional load, associated with the processing of moderately ambiguous feedback, has little effect on VCL when features are salient. In low-saliency tasks, VCL relied on slower perceptual learning; but when the feedback was highly informative, participants were able to ultimately attain the same performance as during the high-saliency VCL tasks. However, VCL was significantly compromised in the low-saliency mid-information feedback task. We suggest that such low-saliency mid-information learning scenarios are characterized by a ‘cognitive loop paradox’ where two interdependent learning processes have to take place simultaneously. PMID:25745404

  7. CYCLOPS-3 System Research.

    ERIC Educational Resources Information Center

    Marill, Thomas; And Others

    The aim of the CYCLOPS Project research is the development of techniques for allowing computers to perform visual scene analysis, pre-processing of visual imagery, and perceptual learning. Work on scene analysis and learning has previously been described. The present report deals with research on pre-processing and with further work on scene…

  8. Perceptual Learning in Children With Infantile Nystagmus: Effects on Visual Performance.

    PubMed

    Huurneman, Bianca; Boonstra, F Nienke; Goossens, Jeroen

    2016-08-01

    To evaluate whether computerized training with a crowded or uncrowded letter-discrimination task reduces visual impairment (VI) in 6- to 11-year-old children with infantile nystagmus (IN) who suffer from increased foveal crowding, reduced visual acuity, and reduced stereopsis. Thirty-six children with IN were included. Eighteen had idiopathic IN and 18 had oculocutaneous albinism. These children were divided into two training groups matched on age and diagnosis: a crowded training group (n = 18) and an uncrowded training group (n = 18). Training occurred twice per week for 5 weeks (3500 trials per training). Eleven age-matched children with normal vision were included to assess baseline differences in task performance and test-retest learning. Main outcome measures were task-specific performance, distance and near visual acuity (DVA and NVA), intensity and extent of (foveal) crowding at 5 m and 40 cm, and stereopsis. Training resulted in task-specific improvements. Both training groups also showed uncrowded and crowded DVA improvements (0.10 ± 0.02 and 0.11 ± 0.02 logMAR) and improved stereopsis (670 ± 249″). Crowded NVA improved only in the crowded training group (0.15 ± 0.02 logMAR), which was also the only group showing a reduction in near crowding intensity (0.08 ± 0.03 logMAR). Effects were not due to test-retest learning. Perceptual learning with or without distractors reduces the extent of crowding and improves visual acuity in children with IN. Training with distractors improves near vision more than training with single optotypes. Perceptual learning also transfers to DVA and NVA under uncrowded and crowded conditions, and even to stereopsis. Learning curves indicated that improvements may be larger after longer training.

  9. Perceptual learning in visual search: fast, enduring, but non-specific.

    PubMed

    Sireteanu, R; Rettenbach, R

    1995-07-01

    Visual search has been suggested as a tool for isolating visual primitives. Elementary "features" were proposed to involve parallel search, while serial search is necessary for items without a "feature" status, or, in some cases, for conjunctions of "features". In this study, we investigated the role of practice in visual search tasks. We found that, under some circumstances, initially serial tasks can become parallel after a few hundred trials. Learning in visual search is far less specific than learning of visual discriminations and hyperacuity, suggesting that it takes place at another level in the central visual pathway, involving different neural circuits.

  10. Relationships between academic performance, SES school type and perceptual-motor skills in first grade South African learners: NW-CHILD study.

    PubMed

    Pienaar, A E; Barhorst, R; Twisk, J W R

    2014-05-01

    Perceptual-motor skills contribute to a variety of basic learning skills associated with normal academic success. This study aimed to determine the relationship between academic performance and perceptual-motor skills in first grade South African learners, and whether low SES (socio-economic status) school type plays a role in this relationship. This cross-sectional study of the baseline measurements of the NW-CHILD longitudinal study included a stratified random sample of first grade learners (n = 812; 418 boys and 394 girls) with a mean age of 6.78 ± 0.49 years, living in the North West Province (NW) of South Africa. The Beery-Buktenica Developmental Test of Visual-Motor Integration-4 (VMI) was used to assess visual-motor integration, visual perception, and hand control, while the Bruininks-Oseretsky Test of Motor Proficiency, short form (BOT2-SF) assessed overall motor proficiency. Academic performance in math, reading, and writing was assessed with the Mastery of Basic Learning Areas Questionnaire. Linear mixed models analysis was performed with SPSS to determine possible differences between the different VMI and BOT2-SF standard scores in different math, reading, and writing mastery categories, ranging from no mastery to outstanding mastery. A multinomial multilevel logistic regression analysis was performed to assess the relationship between a clustered score of academic performance and the different determinants. A strong relationship was established between academic performance and VMI, visual perception, hand control, and motor proficiency, with a significant relationship between a clustered academic performance score, visual-motor integration, and visual perception. A negative association was established between low SES school type and academic performance, with a common perceptual-motor foundation shared by all basic learning areas. Visual-motor integration, visual perception, hand control, and motor proficiency are closely related to the basic academic skills required in the first formal school year, especially among learners in low SES type schools. © 2013 John Wiley & Sons Ltd.

  11. Perceptual advantage for category-relevant perceptual dimensions: the case of shape and motion.

    PubMed

    Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel

    2014-01-01

    Category learning facilitates perception along relevant stimulus dimensions, even when tested in a discrimination task that does not require categorization. While this general phenomenon has been demonstrated previously, perceptual facilitation along dimensions has been documented by measuring different specific phenomena in different studies using different kinds of objects. Across several object domains, there is support for acquired distinctiveness, the stretching of a perceptual dimension relevant to learned categories. Studies using faces and studies using simple separable visual dimensions have also found evidence of acquired equivalence, the shrinking of a perceptual dimension irrelevant to learned categories, and categorical perception, the local stretching across the category boundary. These latter two effects are rarely observed with complex non-face objects. Failures to find these effects with complex non-face objects may have been because the dimensions tested previously were perceptually integrated. Here we tested effects of category learning with non-face objects categorized along dimensions that have been found to be processed by different areas of the brain: shape and motion. While we replicated acquired distinctiveness, we found no evidence for acquired equivalence or categorical perception.

  12. Sharpening coarse-to-fine stereo vision by perceptual learning: asymmetric transfer across the spatial frequency spectrum

    PubMed Central

    Tran, Truyet T.; Craven, Ashley P.; Leung, Tsz-Wing; Chat, Sandy W.; Levi, Dennis M.

    2016-01-01

    Neurons in the early visual cortex are finely tuned to different low-level visual features, forming a multi-channel system analysing the visual image formed on the retina in a parallel manner. However, little is known about the potential ‘cross-talk’ among these channels. Here, we systematically investigated whether stereoacuity, over a large range of target spatial frequencies, can be enhanced by perceptual learning. Using narrow-band visual stimuli, we found that practice with coarse (low spatial frequency) targets substantially improves performance, and that the improvement spreads from coarse to fine (high spatial frequency) three-dimensional perception, generalizing broadly across untrained spatial frequencies and orientations. Notably, we observed an asymmetric transfer of learning across the spatial frequency spectrum. The bandwidth of transfer was broader when training was at a high spatial frequency than at a low spatial frequency. Stereoacuity training is most beneficial when trained with fine targets. This broad transfer of stereoacuity learning contrasts with the highly specific learning reported for other basic visual functions. We also revealed strategies to boost learning outcomes ‘beyond-the-plateau’. Our investigations contribute to understanding the functional properties of the network subserving stereovision. The ability to generalize may provide a key principle for restoring impaired binocular vision in clinical situations. PMID:26909178

  13. Improvement of uncorrected visual acuity and contrast sensitivity with perceptual learning and transcranial random noise stimulation in individuals with mild myopia

    PubMed Central

    Camilleri, Rebecca; Pavan, Andrea; Ghin, Filippo; Battaglini, Luca; Campana, Gianluca

    2014-01-01

    Perceptual learning has been shown to produce an improvement of visual acuity (VA) and contrast sensitivity (CS) both in subjects with amblyopia and in those with refractive defects such as myopia or presbyopia. Transcranial random noise stimulation (tRNS) has proven to be efficacious in accelerating neural plasticity and boosting perceptual learning in healthy participants. In this study, we investigated whether a short behavioral training regime using a contrast detection task combined with online tRNS was as effective in improving visual functions in participants with mild myopia as a 2-month behavioral training regime without tRNS (Camilleri et al., 2014). After 2 weeks of perceptual training in combination with tRNS, participants showed an improvement of 0.15 LogMAR in uncorrected VA (UCVA) that was comparable with that obtained after 8 weeks of training with no tRNS, and an improvement in uncorrected CS (UCCS) at various spatial frequencies (whereas no UCCS improvement was seen after 8 weeks of training with no tRNS). On the other hand, a control group that trained for 2 weeks without stimulation did not show any significant UCVA or UCCS improvement. These results suggest that the combination of behavioral and neuromodulatory techniques can be fast and efficacious in improving sight in individuals with mild myopia. PMID:25400610

  14. Frequent video game players resist perceptual interference.

    PubMed

    Berard, Aaron V; Cain, Matthew S; Watanabe, Takeo; Sasaki, Yuka

    2015-01-01

    Playing certain types of video games for a long time can improve a wide range of mental processes, from visual acuity to cognitive control. Frequent gamers have also displayed generalized improvements in perceptual learning. In the Texture Discrimination Task (TDT), a widely used perceptual learning paradigm, participants report the orientation of a target embedded in a field of lines and demonstrate robust overnight improvement. However, changing the orientation of the background lines midway through TDT training interferes with overnight improvements in overall performance on TDT. Interestingly, prior research has suggested that this effect does not occur if a one-hour break is allowed between the changes. These results suggest that, after training is over, it may take some time for learning to become stabilized and resilient against interference. Here, we tested whether frequent gamers show faster stabilization of perceptual learning than non-gamers, examining whether training on TDT with one background orientation interferes with perceptual learning of TDT with a different background orientation in daily video game players. We found that non-gamers showed overnight performance improvement only on one background orientation, replicating previous results on interference in TDT. In contrast, frequent gamers demonstrated overnight improvements in performance with both background orientations, suggesting that they are better able to overcome interference in perceptual learning. This resistance to interference suggests that video game playing not only enhances the amplitude and speed of perceptual learning but also leads to faster and/or more robust stabilization of perceptual learning.

  15. Topographic generalization of tactile perceptual learning.

    PubMed

    Harrar, Vanessa; Spence, Charles; Makin, Tamar R

    2014-02-01

    Perceptual learning can improve our sensory abilities. Understanding its underlying mechanisms, in particular, when perceptual learning generalizes, has become a focus of research and controversy. Specifically, there is little consensus regarding the extent to which tactile perceptual learning generalizes across fingers. We measured tactile orientation discrimination abilities on 4 fingers (index and middle fingers of both hands), using psychophysical measures, before and after 4 training sessions on 1 finger. Given the somatotopic organization of the hand representation in the somatosensory cortex, the topography of the cortical areas underlying tactile perceptual learning can be inferred from the pattern of generalization across fingers; only fingers sharing cortical representation with the trained finger ought to improve with it. Following training, performance improved not only for the trained finger but also for its adjacent and homologous fingers. Although these fingers were not exposed to training, they nevertheless demonstrated similar levels of learning as the trained finger. Conversely, the performance of the finger that was neither adjacent nor homologous to the trained finger was unaffected by training, despite the fact that our procedure was designed to enhance generalization, as described in recent visual perceptual learning research. This pattern of improved performance is compatible with previous reports of neuronal receptive fields (RFs) in the primary somatosensory cortex (SI) spanning adjacent and homologous digits. We conclude that perceptual learning rooted in low-level cortex can still generalize, and suggest potential applications for the neurorehabilitation of syndromes associated with maladaptive plasticity in SI. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  16. How the baby learns to see: Donald O. Hebb Award Lecture, Canadian Society for Brain, Behaviour, and Cognitive Science, Ottawa, June 2015.

    PubMed

    Maurer, Daphne

    2016-09-01

    Hebb's (1949) book The Organization of Behavior presented a novel hypothesis about how the baby learns to see. This article summarizes the results of my research program that evaluated Hebb's hypothesis: first, by studying infants' eye movements and initial perceptual abilities and second, by studying the effect of visual deprivation (e.g., congenital cataracts) on later perceptual development. Collectively, the results support Hebb's hypothesis that the baby does indeed learn to see. Early visual experience not only drives the baby's initial scanning of objects, but also sets up the neural architecture that will come to underlie adults' perception. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  17. Learning what to expect (in visual perception)

    PubMed Central

    Seriès, Peggy; Seitz, Aaron R.

    2013-01-01

    Expectations are known to greatly affect our experience of the world. A growing theory in computational neuroscience is that perception can be successfully described using Bayesian inference models and that the brain is “Bayes-optimal” under some constraints. In this context, expectations are particularly interesting, because they can be viewed as prior beliefs in the statistical inference process. A number of questions remain unresolved, however, for example: How fast do priors change over time? Are there limits in the complexity of the priors that can be learned? How do an individual’s priors compare to the true scene statistics? Can we unlearn priors that are thought to correspond to natural scene statistics? Where and what are the neural substrates of priors? Focusing on the perception of visual motion, here we review recent studies from our laboratories and others addressing these issues. We discuss how these data on motion perception fit within the broader literature on perceptual Bayesian priors, perceptual expectations, and statistical and perceptual learning and review the possible neural basis of priors. PMID:24187536
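    The idea that an expectation acts as a Bayesian prior can be made concrete with a toy Gaussian model. The sketch below is purely illustrative (the "slow-speed" prior and all parameter values are assumptions, not taken from the study): with conjugate Gaussians, the posterior estimate is a precision-weighted average of prior and measurement, so a noisy measurement is pulled strongly toward the prior while a reliable one is barely biased.

    ```python
    # Toy Bayesian combination of a Gaussian "slow-speed" prior
    # (mean 0) with a noisy Gaussian likelihood from the stimulus.
    # With conjugate Gaussians the posterior is Gaussian, with a
    # precision-weighted mean and summed precision.

    def posterior(measurement, sigma_like, prior_mean=0.0, prior_sigma=1.0):
        """Return (mean, sigma) of the Gaussian posterior."""
        w_prior = 1.0 / prior_sigma ** 2   # prior precision
        w_like = 1.0 / sigma_like ** 2     # likelihood precision
        mean = (w_prior * prior_mean + w_like * measurement) / (w_prior + w_like)
        sigma = (w_prior + w_like) ** -0.5
        return mean, sigma

    # An unreliable (high-sigma) measurement is pulled toward the
    # slow-speed prior; a reliable one is barely biased.
    noisy = posterior(10.0, sigma_like=3.0)     # e.g., low-contrast stimulus
    reliable = posterior(10.0, sigma_like=0.3)  # e.g., high-contrast stimulus
    print(noisy[0], reliable[0])  # noisy estimate is far below the reliable one
    ```

    This mirrors the classic account of why low-contrast stimuli appear to move more slowly: the less reliable the sensory evidence, the more the prior dominates the percept.
    
    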

  18. Sharpened cortical tuning and enhanced cortico-cortical communication contribute to the long-term neural mechanisms of visual motion perceptual learning.

    PubMed

    Chen, Nihong; Bi, Taiyong; Zhou, Tiangang; Li, Sheng; Liu, Zili; Fang, Fang

    2015-07-15

    Much has been debated about whether the neural plasticity mediating perceptual learning takes place at the sensory or decision-making stage in the brain. To investigate this, we trained human subjects in a visual motion direction discrimination task. Behavioral performance and BOLD signals were measured before, immediately after, and two weeks after training. Parallel to subjects' long-lasting behavioral improvement, the neural selectivity in V3A and the effective connectivity from V3A to IPS (intraparietal sulcus, a motion decision-making area) exhibited a persistent increase for the trained direction. Moreover, the improvement was well explained by a linear combination of the selectivity and connectivity increases. These findings suggest that the long-term neural mechanisms of motion perceptual learning are implemented by sharpening cortical tuning to trained stimuli at the sensory processing stage, as well as by optimizing the connections between sensory and decision-making areas in the brain. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Learning Style Preferences of Southeast Asian Students.

    ERIC Educational Resources Information Center

    Park, Clara C.

    2000-01-01

    Investigated the perceptual learning style preferences (auditory, visual, kinesthetic, and tactile) and preferences for group and individual learning of Southeast Asian students compared to white students. Surveys indicated significant differences in learning style preferences between Southeast Asian and white students and between the diverse…

  20. Adaptation, perceptual learning, and plasticity of brain functions.

    PubMed

    Horton, Jonathan C; Fahle, Manfred; Mulder, Theo; Trauzettel-Klosinski, Susanne

    2017-03-01

    The capacity for functional restitution after brain damage is quite different in the sensory and motor systems. This series of presentations highlights the potential for adaptation, plasticity, and perceptual learning from an interdisciplinary perspective. The chances for restitution in the primary visual cortex are limited. Some patterns of visual field loss and recovery after stroke are common, whereas others are impossible, which can be explained by the arrangement and plasticity of the cortical map. On the other hand, compensatory mechanisms are effective, can occur spontaneously, and can be enhanced by training. In contrast to the human visual system, the motor system is highly flexible. This is based on special relationships between perception and action and between cognition and action. In addition, the healthy adult brain can learn new functions, e.g., increasing resolution beyond that of the retina. The significance of these studies for rehabilitation after brain damage will be discussed.

  1. The Role of Visual Speech Information in Supporting Perceptual Learning of Degraded Speech

    ERIC Educational Resources Information Center

    Wayne, Rachel V.; Johnsrude, Ingrid S.

    2012-01-01

    Following cochlear implantation, hearing-impaired listeners must adapt to speech as heard through their prosthesis. Visual speech information (VSI; the lip and facial movements of speech) is typically available in everyday conversation. Here, we investigate whether learning to understand a popular auditory simulation of speech as transduced by a…

  2. Nicotine facilitates memory consolidation in perceptual learning.

    PubMed

    Beer, Anton L; Vartak, Devavrat; Greenlee, Mark W

    2013-01-01

    Perceptual learning is a special type of non-declarative learning that involves experience-dependent plasticity in sensory cortices. The cholinergic system is known to modulate declarative learning. In particular, reduced levels or efficacy of the neurotransmitter acetylcholine were found to facilitate declarative memory consolidation. However, little is known about the role of the cholinergic system in memory consolidation of non-declarative learning. Here we compared two groups of non-smoking men who learned a visual texture discrimination task (TDT). One group received chewing tobacco containing nicotine for 1 h directly following the TDT training. The other group received a similar tasting control substance without nicotine. Electroencephalographic recordings during substance consumption showed reduced alpha activity and P300 latencies in the nicotine group compared to the control group. When re-tested on the TDT the following day, both groups responded more accurately and more rapidly than during training. These improvements were specific to the retinal location and orientation of the texture elements of the TDT suggesting that learning involved early visual cortex. A group comparison showed that learning effects were more pronounced in the nicotine group than in the control group. These findings suggest that oral consumption of nicotine enhances the efficacy of nicotinic acetylcholine receptors. Our findings further suggest that enhanced efficacy of the cholinergic system facilitates memory consolidation in perceptual learning (and possibly other types of non-declarative learning). In that regard acetylcholine seems to affect consolidation processes in perceptual learning in a different manner than in declarative learning. Alternatively, our findings might reflect dose-dependent cholinergic modulation of memory consolidation. This article is part of a Special Issue entitled 'Cognitive Enhancers'. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Suggested Activities to Use With Children Who Present Symptoms of Visual Perception Problems, Elementary Level.

    ERIC Educational Resources Information Center

    Washington County Public Schools, Washington, PA.

    Symptoms displayed by primary age children with learning disabilities are listed; perceptual handicaps are explained. Activities are suggested for developing visual perception and perception involving motor activities. Also suggested are activities to develop body concept, visual discrimination and attentiveness, visual memory, and figure ground…

  4. Does perceptual learning require consciousness or attention?

    PubMed

    Meuwese, Julia D I; Post, Ruben A G; Scholte, H Steven; Lamme, Victor A F

    2013-10-01

    It has been proposed that visual attention and consciousness are separate [Koch, C., & Tsuchiya, N. Attention and consciousness: Two distinct brain processes. Trends in Cognitive Sciences, 11, 16-22, 2007] and possibly even orthogonal processes [Lamme, V. A. F. Why visual attention and awareness are different. Trends in Cognitive Sciences, 7, 12-18, 2003]. Attention and consciousness converge when conscious visual percepts are attended and hence become available for conscious report. In such a view, a lack of reportability can have two causes: the absence of attention or the absence of a conscious percept. This raises an important question in the field of perceptual learning. It is known that learning can occur in the absence of reportability [Gutnisky, D. A., Hansen, B. J., Iliescu, B. F., & Dragoi, V. Attention alters visual plasticity during exposure-based learning. Current Biology, 19, 555-560, 2009; Seitz, A. R., Kim, D., & Watanabe, T. Rewards evoke learning of unconsciously processed visual stimuli in adult humans. Neuron, 61, 700-707, 2009; Seitz, A. R., & Watanabe, T. Is subliminal learning really passive? Nature, 422, 36, 2003; Watanabe, T., Náñez, J. E., & Sasaki, Y. Perceptual learning without perception. Nature, 413, 844-848, 2001], but it is unclear which of the two ingredients, consciousness or attention, is not necessary for learning. We presented textured figure-ground stimuli and manipulated reportability either by masking (which only interferes with consciousness) or with an inattention paradigm (which only interferes with attention). During the second session (24 hr later), learning was assessed neurally and behaviorally, via differences in figure-ground ERPs and via a detection task. Behavioral and neural learning effects were found for stimuli presented in the inattention paradigm and not for masked stimuli. Interestingly, the behavioral learning effect only became apparent when performance feedback was given on the task to measure learning, suggesting that the memory trace that is formed during inattention is latent until accessed. The results suggest that learning requires consciousness, and not attention, and further strengthen the idea that consciousness is separate from attention.

  5. Handwriting generates variable visual output to facilitate symbol learning.

    PubMed

    Li, Julia X; James, Karin H

    2016-03-01

    Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing 2 hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then changes neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: 3 involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and 3 involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception both of variable and similar forms. Comparisons across the 6 conditions (N = 72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change through brain-body-environment interactions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Handwriting generates variable visual input to facilitate symbol learning

    PubMed Central

    Li, Julia X.; James, Karin H.

    2015-01-01

    Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing two hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then changes neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: three involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and three involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception both of variable and similar forms. Comparisons across the six conditions (N = 72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change through brain-body-environment interactions. PMID:26726913

  7. Perceptual learning improves contrast sensitivity, visual acuity, and foveal crowding in amblyopia.

    PubMed

    Barollo, Michele; Contemori, Giulio; Battaglini, Luca; Pavan, Andrea; Casco, Clara

    2017-01-01

    Amblyopic observers present abnormal spatial interactions between a low-contrast sinusoidal target and high-contrast collinear flankers. It has been demonstrated that perceptual learning (PL) can modulate these low-level lateral interactions, resulting in improved visual acuity and contrast sensitivity. We measured the extent and duration of generalization effects to various spatial tasks (i.e., visual acuity, Vernier acuity, and foveal crowding) through PL on the target's contrast detection. Amblyopic observers were trained on a contrast-detection task for a central target (i.e., a Gabor patch) flanked above and below by two high-contrast Gabor patches. The pre- and post-learning tasks comprised lateral interactions at different target-to-flanker separations (i.e., 2, 3, 4, and 8λ), a range of spatial frequencies and stimulus durations, as well as visual acuity, Vernier acuity, contrast-sensitivity function, and foveal crowding. The results showed that perceptual training reduced the target's contrast-detection thresholds more for the longest target-to-flanker separation (i.e., 8λ). We also found generalization of PL to different stimuli and tasks: contrast sensitivity for both trained and untrained spatial frequencies, visual acuity for Sloan letters, foveal crowding, and partially Vernier acuity. Follow-ups after 5-7 months showed not only complete maintenance of PL effects on visual acuity and contrast sensitivity function but also further improvement in these tasks. These results suggest that PL improves facilitatory lateral interactions in amblyopic observers, which usually extend over larger separations than in typical foveal vision. The improvement in these basic visual spatial operations leads to a more efficient capability of performing spatial tasks involving high levels of visual processing, possibly due to the refinement of bottom-up and top-down networks of visual areas.

  8. Action video game play facilitates the development of better perceptual templates.

    PubMed

    Bejjanki, Vikranth R; Zhang, Ruyuan; Li, Renjie; Pouget, Alexandre; Green, C Shawn; Lu, Zhong-Lin; Bavelier, Daphne

    2014-11-25

    The field of perceptual learning has identified changes in perceptual templates as a powerful mechanism mediating the learning of statistical regularities in our environment. By measuring threshold-vs.-contrast curves using an orientation identification task under varying levels of external noise, the perceptual template model (PTM) allows one to disentangle various sources of signal-to-noise changes that can alter performance. We use the PTM approach to elucidate the mechanism that underlies the wide range of improvements noted after action video game play. We show that action video game players make use of improved perceptual templates compared with nonvideo game players, and we confirm a causal role for action video game play in inducing such improvements through a 50-h training study. Then, by adapting a recent neural model to this task, we demonstrate how such improved perceptual templates can arise from reweighting the connectivity between visual areas. Finally, we establish that action gamers do not enter the perceptual task with improved perceptual templates. Instead, although performance in action gamers is initially indistinguishable from that of nongamers, action gamers more rapidly learn the proper template as they experience the task. Taken together, our results establish for the first time to our knowledge the development of enhanced perceptual templates following action game play. Because such an improvement can facilitate the inference of the proper generative model for the task at hand, unlike perceptual learning that is quite specific, it thus elucidates a general learning mechanism that can account for the various behavioral benefits noted after action game play.
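    The perceptual template model's threshold-vs.-contrast (TvC) logic can be sketched in a few lines. The formulation below is the standard PTM expression (d' driven by template gain, a nonlinearity, and external, multiplicative, and additive noise sources); all parameter values are illustrative assumptions, not the fitted values from this study.

    ```python
    # Schematic perceptual template model (PTM). Discriminability is
    #   d' = (beta*c)^gamma / sqrt(N_ext^(2*gamma) + N_mul^2*(beta*c)^(2*gamma) + N_add^2)
    # where c is signal contrast, beta the template gain, gamma the
    # transducer nonlinearity, N_ext the external noise contrast, and
    # N_mul / N_add the multiplicative and additive internal noise.
    # Solving for the contrast that reaches a criterion d' gives a
    # closed-form threshold (valid while d'^2 * N_mul^2 < 1).

    def threshold_contrast(n_ext, d=1.5, beta=1.2, gamma=2.0,
                           n_mul=0.2, n_add=0.1):
        """Contrast needed to reach criterion d' in the PTM (illustrative params)."""
        s2 = d ** 2 * (n_ext ** (2 * gamma) + n_add ** 2) / (1 - d ** 2 * n_mul ** 2)
        return s2 ** (1 / (2 * gamma)) / beta

    # Thresholds rise on the high-external-noise limb of the TvC curve,
    # and a larger template gain beta (a "better template") lowers them.
    low = threshold_contrast(n_ext=0.01)
    high = threshold_contrast(n_ext=0.33)
    better_template = threshold_contrast(n_ext=0.33, beta=1.8)
    ```

    Measuring such TvC curves at several external-noise levels is what lets the PTM attribute a performance gain to external-noise exclusion, additive-noise reduction, or multiplicative-noise change.
    
    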

  9. Action video game play facilitates the development of better perceptual templates

    PubMed Central

    Bejjanki, Vikranth R.; Zhang, Ruyuan; Li, Renjie; Pouget, Alexandre; Green, C. Shawn; Lu, Zhong-Lin; Bavelier, Daphne

    2014-01-01

    The field of perceptual learning has identified changes in perceptual templates as a powerful mechanism mediating the learning of statistical regularities in our environment. By measuring threshold-vs.-contrast curves using an orientation identification task under varying levels of external noise, the perceptual template model (PTM) allows one to disentangle various sources of signal-to-noise changes that can alter performance. We use the PTM approach to elucidate the mechanism that underlies the wide range of improvements noted after action video game play. We show that action video game players make use of improved perceptual templates compared with nonvideo game players, and we confirm a causal role for action video game play in inducing such improvements through a 50-h training study. Then, by adapting a recent neural model to this task, we demonstrate how such improved perceptual templates can arise from reweighting the connectivity between visual areas. Finally, we establish that action gamers do not enter the perceptual task with improved perceptual templates. Instead, although performance in action gamers is initially indistinguishable from that of nongamers, action gamers more rapidly learn the proper template as they experience the task. Taken together, our results establish for the first time to our knowledge the development of enhanced perceptual templates following action game play. Because such an improvement can facilitate the inference of the proper generative model for the task at hand, unlike perceptual learning that is quite specific, it thus elucidates a general learning mechanism that can account for the various behavioral benefits noted after action game play. PMID:25385590

  10. Socio-cognitive profiles for visual learning in young and older adults

    PubMed Central

    Christian, Julie; Goldstone, Aimee; Kuai, Shu-Guang; Chin, Wynne; Abrams, Dominic; Kourtzi, Zoe

    2015-01-01

    It is common wisdom that practice makes perfect; but why do some adults learn better than others? Here, we investigate individuals’ cognitive and social profiles to test which variables account for variability in learning ability across the lifespan. In particular, we focused on visual learning using tasks that test the ability to inhibit distractors and select task-relevant features. We tested the ability of young and older adults to improve through training in the discrimination of visual global forms embedded in a cluttered background. Further, we used a battery of cognitive tasks and psycho-social measures to examine which of these variables predict training-induced improvement in perceptual tasks and may account for individual variability in learning ability. Using partial least squares regression modeling, we show that visual learning is influenced by cognitive (i.e., cognitive inhibition, attention) and social (strategic and deep learning) factors rather than an individual’s age alone. Further, our results show that independent of age, strong learners rely on cognitive factors such as attention, while weaker learners use more general cognitive strategies. Our findings suggest an important role for higher-cognitive circuits involving executive functions that contribute to our ability to improve in perceptual tasks after training across the lifespan. PMID:26113820

  11. Incidental orthographic learning during a color detection task.

    PubMed

    Protopapas, Athanassios; Mitsi, Anna; Koustoumbardis, Miltiadis; Tsitsopoulou, Sofia M; Leventi, Marianna; Seitz, Aaron R

    2017-09-01

    Orthographic learning refers to the acquisition of knowledge about specific spelling patterns forming words and about general biases and constraints on letter sequences. It is thought to occur by strengthening simultaneously activated visual and phonological representations during reading. Here we demonstrate that a visual perceptual learning procedure that leaves no time for articulation can result in orthographic learning evidenced in improved reading and spelling performance. We employed task-irrelevant perceptual learning (TIPL), in which the stimuli to be learned are paired with an easy task target. Assorted line drawings and difficult-to-spell words were presented in red color among sequences of other black-colored words and images presented in rapid succession, constituting a fast-TIPL procedure with color detection being the explicit task. In five experiments, Greek children in Grades 4-5 showed increased recognition of words and images that had appeared in red, both during and after the training procedure, regardless of within-training testing, and also when targets appeared in blue instead of red. Significant transfer to reading and spelling emerged only after increased training intensity. In a sixth experiment, children in Grades 2-3 showed generalization to words not presented during training that carried the same derivational affixes as in the training set. We suggest that reinforcement signals related to detection of the target stimuli contribute to the strengthening of orthography-phonology connections beyond earlier levels of visually-based orthographic representation learning. These results highlight the potential of perceptual learning procedures for the reinforcement of higher-level orthographic representations. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  12. Monocular perceptual learning of contrast detection facilitates binocular combination in adults with anisometropic amblyopia.

    PubMed

    Chen, Zidong; Li, Jinrong; Liu, Jing; Cai, Xiaoxiao; Yuan, Junpeng; Deng, Daming; Yu, Minbin

    2016-02-01

    Perceptual learning in contrast detection improves monocular visual function in adults with anisometropic amblyopia; however, its effect on binocular combination remains unknown. Given that the amblyopic visual system suffers from pronounced binocular functional loss, it is important to address how the amblyopic visual system responds to such training strategies under binocular viewing conditions. Anisometropic amblyopes (n = 13) were asked to complete two psychophysical supra-threshold binocular summation tasks: (1) binocular phase combination and (2) dichoptic global motion coherence before and after monocular training to investigate this question. We showed that these participants benefited from monocular training in terms of binocular combination. More importantly, the improvements observed with the area under log CSF (AULCSF) were found to be correlated with the improvements in binocular phase combination.

  13. Monocular perceptual learning of contrast detection facilitates binocular combination in adults with anisometropic amblyopia

    PubMed Central

    Chen, Zidong; Li, Jinrong; Liu, Jing; Cai, Xiaoxiao; Yuan, Junpeng; Deng, Daming; Yu, Minbin

    2016-01-01

    Perceptual learning in contrast detection improves monocular visual function in adults with anisometropic amblyopia; however, its effect on binocular combination remains unknown. Given that the amblyopic visual system suffers from pronounced binocular functional loss, it is important to address how the amblyopic visual system responds to such training strategies under binocular viewing conditions. Anisometropic amblyopes (n = 13) were asked to complete two psychophysical supra-threshold binocular summation tasks: (1) binocular phase combination and (2) dichoptic global motion coherence before and after monocular training to investigate this question. We showed that these participants benefited from monocular training in terms of binocular combination. More importantly, the improvements observed with the area under log CSF (AULCSF) were found to be correlated with the improvements in binocular phase combination. PMID:26829898

  14. Computational model for perception of objects and motions.

    PubMed

    Yang, WenLu; Zhang, LiQing; Ma, LiBo

    2008-06-01

    Perception of objects and motions in the visual scene is one of the basic problems for the visual system. 'What' and 'Where' pathways exist in higher visual cortex, starting from the simple cells in the primary visual cortex. The former perceives object properties such as form, color, and texture; the latter perceives 'where', for example, the velocity and direction of objects' spatial movement. This paper explores brain-like computational architectures for visual information processing. We propose a visual perceptual model and a computational mechanism for training it. The model is a three-layer network. The first layer is the input layer, which receives stimuli from natural environments. The second layer represents the internal neural information. The connections between the first and second layers, called the receptive fields of neurons, are learned self-adaptively based on the principle of sparse neural representation. To this end, we introduce the Kullback-Leibler divergence as a measure of independence between neural responses and derive the learning algorithm by minimizing the resulting cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and bandpass. The resulting receptive fields of neurons in the second layer resemble those of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in higher visual cortex. The proposed model perceives objects and their motions with high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and the high efficiency of the learning algorithm.
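    The independence measure at the heart of such sparse-coding schemes can be illustrated concretely. The snippet below is a generic sketch, not the paper's algorithm: it computes the KL divergence between the joint distribution of two discretized neural responses and the product of their marginals (i.e., their mutual information), which is zero exactly when the responses are independent; a sparse-coding learner would adjust receptive fields to drive this quantity down.

    ```python
    import math

    # KL(p(x,y) || p(x)p(y)) for a 2-D probability table: the mutual
    # information between two responses. Zero iff the responses are
    # statistically independent.

    def kl_independence(joint):
        """Mutual information of a discrete joint distribution (nats)."""
        px = [sum(row) for row in joint]               # marginal over x
        py = [sum(col) for col in zip(*joint)]         # marginal over y
        kl = 0.0
        for i, row in enumerate(joint):
            for j, p in enumerate(row):
                if p > 0:
                    kl += p * math.log(p / (px[i] * py[j]))
        return kl

    independent = [[0.25, 0.25], [0.25, 0.25]]  # responses share no information
    correlated = [[0.45, 0.05], [0.05, 0.45]]   # strongly dependent responses
    print(kl_independence(independent))  # exactly 0.0
    print(kl_independence(correlated))   # positive
    ```

    Minimizing this quantity over a set of response pairs is one way to express the "independence between neural responses" objective described in the abstract.
    
    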

  15. Adaptive optics vision simulation and perceptual learning system based on a 35-element bimorph deformable mirror.

    PubMed

    Dai, Yun; Zhao, Lina; Xiao, Fei; Zhao, Haoxin; Bao, Hua; Zhou, Hong; Zhou, Yifeng; Zhang, Yudong

    2015-02-10

    An adaptive optics visual simulation system combined with perceptual learning (PL), based on a 35-element bimorph deformable mirror (DM), was established. The larger stroke and smaller size of the bimorph DM gave the system greater aberration correction and superposition capability in a more compact form. By simply modifying the control matrix or the reference matrix, selective correction or superposition of aberrations was realized in real time, similar to a conventional adaptive optics closed-loop correction. A PL function was integrated into conventional adaptive optics visual simulation for the first time. PL training undertaken with high-order aberration correction clearly improved the visual function of adults with anisometropic amblyopia. The preliminary application of high-order aberration correction with PL training to amblyopia treatment is being validated in a large-scale population, and may have great potential for amblyopia treatment and the maintenance of visual performance.

  16. Out of sight, out of mind: Categorization learning and normal aging.

    PubMed

    Schenk, Sabrina; Minda, John P; Lech, Robert K; Suchan, Boris

    2016-10-01

    The present combined EEG and eye-tracking study examined the process of categorization learning at different ages and investigated to what degree categorization learning is mediated by visual attention and perceptual strategies. Seventeen young subjects and ten elderly subjects performed a visual categorization task with two abstract categories. Each category consisted of prototypical stimuli and an exception. The categorization of prototypical stimuli was learned very early during the experiment, while the learning of exceptions was delayed. The categorization of exceptions was accompanied by higher P150, P250, and P300 amplitudes. In contrast to younger subjects, elderly subjects had problems with the categorization of exceptions but showed intact categorization performance for prototypical stimuli. Moreover, elderly subjects showed higher fixation rates for important stimulus features and higher P150 amplitudes, which were positively correlated with categorization performance. These results indicate that elderly subjects compensate for cognitive decline through enhanced perceptual and attentional processing of individual stimulus features. Additionally, a computational approach was applied and showed a transition away from purely abstraction-based learning toward exemplar-based learning in the middle block for both groups. However, the calculated models provide a better fit for younger subjects than for elderly subjects. The current study demonstrates that human categorization learning is based on early abstraction-based processing followed by an exemplar-memorization stage. This strategy combination facilitates the learning of real-world categories with a nuanced category structure. In addition, the present study suggests that categorization learning is affected by normal aging and modulated by perceptual processing and visual attention. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Making Connections among Multiple Visual Representations: How Do Sense-Making Skills and Perceptual Fluency Relate to Learning of Chemistry Knowledge?

    ERIC Educational Resources Information Center

    Rau, Martina A.

    2018-01-01

    To learn content knowledge in science, technology, engineering, and math domains, students need to make connections among visual representations. This article considers two kinds of connection-making skills: (1) "sense-making skills" that allow students to verbally explain mappings among representations and (2) "perceptual…

  18. The Relationship Between Selected Subtests of the Detroit Tests of Learning Aptitude and Second Grade Reading Achievement.

    ERIC Educational Resources Information Center

    Sherwood, Charles; Chambless, Martha

    Relationships between reading achievement and perceptual skills as measured by selected subtests of the Detroit Tests of Learning Aptitude were investigated in a sample of 73 second graders. Verbal opposites, visual memory for designs, and visual attention span for letters were significantly correlated with both word meaning and vocabulary…

  19. Visual perceptual learning by operant conditioning training follows rules of contingency.

    PubMed

    Kim, Dongho; Seitz, Aaron R; Watanabe, Takeo

    2015-01-01

Visual perceptual learning (VPL) can occur as a result of repetitive stimulus-reward pairing in the absence of any task. This suggests that the rules that guide conditioning, such as stimulus-reward contingency (e.g., that a stimulus predicts the likelihood of reward), may also guide the formation of VPL. To address this question, we trained subjects on an operant conditioning task in which there were contingencies between the response to one of three orientations and the presence of reward. Results showed that VPL occurred only for positive contingencies, not for neutral or negative contingencies. These results suggest that the formation of VPL is influenced by rules similar to those that guide conditioning.

  20. Visual perceptual learning by operant conditioning training follows rules of contingency

    PubMed Central

    Kim, Dongho; Seitz, Aaron R; Watanabe, Takeo

    2015-01-01

Visual perceptual learning (VPL) can occur as a result of repetitive stimulus-reward pairing in the absence of any task. This suggests that the rules that guide conditioning, such as stimulus-reward contingency (e.g., that a stimulus predicts the likelihood of reward), may also guide the formation of VPL. To address this question, we trained subjects on an operant conditioning task in which there were contingencies between the response to one of three orientations and the presence of reward. Results showed that VPL occurred only for positive contingencies, not for neutral or negative contingencies. These results suggest that the formation of VPL is influenced by rules similar to those that guide conditioning. PMID:26028984

  1. Perceptual Learning Style and Learning Proficiency: A Test of the Hypothesis

    ERIC Educational Resources Information Center

    Kratzig, Gregory P.; Arbuthnott, Katherine D.

    2006-01-01

    Given the potential importance of using modality preference with instruction, the authors tested whether learning style preference correlated with memory performance in each of 3 sensory modalities: visual, auditory, and kinesthetic. In Study 1, participants completed objective measures of pictorial, auditory, and tactile learning and learning…

  2. Visual Complexity: A Review

    ERIC Educational Resources Information Center

    Donderi, Don C.

    2006-01-01

    The idea of visual complexity, the history of its measurement, and its implications for behavior are reviewed, starting with structuralism and Gestalt psychology at the beginning of the 20th century and ending with visual complexity theory, perceptual learning theory, and neural circuit theory at the beginning of the 21st. Evidence is drawn from…

  3. Predictors of Sensitivity to Perceptual Learning in Children With Infantile Nystagmus.

    PubMed

    Huurneman, Bianca; Boonstra, F Nienke; Goossens, Jeroen

    2017-08-01

To identify predictors of sensitivity to perceptual learning on a computerized, near-threshold letter discrimination task in children with infantile nystagmus (idiopathic IN: n = 18; oculocutaneous albinism accompanied by IN: n = 18). Children were divided into two age-, acuity-, and diagnosis-matched training groups: a crowded (n = 18) and an uncrowded training group (n = 18). Training consisted of 10 sessions spread out over 5 weeks (a grand total of 3500 trials). Baseline performance, age, diagnosis, training condition, and perceived pleasantness of training (training joy) were entered as linear regression predictors of training-induced changes on a single- and a crowded-letter task. An impressive 57% of the variability in improvements of single-letter visual acuity was explained by age, training condition, and training joy. Being older and training with uncrowded letters were associated with larger single-letter visual acuity improvements. More training joy was associated with a larger gain from the uncrowded training and a smaller gain from the crowded training. Fifty-six percent of the variability in crowded-letter task improvements was explained by baseline performance, age, diagnosis, and training condition. After regressing out the variability induced by training condition, baseline performance, and age, perceptual learning proved more effective for children with idiopathic IN than for children with albinism accompanied by IN. Training gains increased with poorer baseline performance in children with idiopathic IN, but not in children with albinism accompanied by IN. Age and baseline performance, but not training joy, are important prognostic factors for the effect of perceptual learning in children with IN. However, their predictive value for achieving improvements in single-letter acuity and crowded-letter acuity, respectively, differs between diagnostic subgroups and training conditions. These findings may help with personalized treatment of individuals likely to benefit from perceptual learning.
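
    The "percent of variability explained" figures in this record come from a linear regression R². A minimal sketch of that computation follows; the predictor values and acuity gains are fabricated for illustration, and ordinary least squares stands in for whatever exact procedure the authors used:

    ```python
    import numpy as np

    # Hypothetical data: rows are children; columns are predictors
    # (age in years, training condition coded 0/1, training-joy rating).
    X = np.array([
        [6.0, 0, 3.0],
        [7.5, 1, 4.0],
        [8.0, 0, 2.5],
        [9.0, 1, 5.0],
        [6.5, 1, 3.5],
        [8.5, 0, 4.5],
    ])
    y = np.array([0.10, 0.25, 0.05, 0.30, 0.20, 0.15])  # acuity gain (logMAR)

    # Ordinary least squares with an intercept column.
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)

    # R^2: proportion of variance in the gains explained by the predictors.
    residuals = y - A @ beta
    r2 = 1 - residuals.var() / y.var()
    print(round(r2, 2))
    ```

    A reported value of 0.57 would mean the predictors jointly account for 57% of the between-child variance in training gains.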

  4. Cross-modal prediction changes the timing of conscious access during the motion-induced blindness.

    PubMed

    Chang, Acer Y C; Kanai, Ryota; Seth, Anil K

    2015-01-01

    Despite accumulating evidence that perceptual predictions influence perceptual content, the relations between these predictions and conscious contents remain unclear, especially for cross-modal predictions. We examined whether predictions of visual events by auditory cues can facilitate conscious access to the visual stimuli. We trained participants to learn associations between auditory cues and colour changes. We then asked whether congruency between auditory cues and target colours would speed access to consciousness. We did this by rendering a visual target subjectively invisible using motion-induced blindness and then gradually changing its colour while presenting congruent or incongruent auditory cues. Results showed that the visual target gained access to consciousness faster in congruent than in incongruent trials; control experiments excluded potentially confounding effects of attention and motor response. The expectation effect was gradually established over blocks suggesting a role for extensive training. Overall, our findings show that predictions learned through cross-modal training can facilitate conscious access to visual stimuli. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Perceptual Learning Selectively Refines Orientation Representations in Early Visual Cortex

    PubMed Central

    Jehee, Janneke F.M.; Ling, Sam; Swisher, Jascha D.; van Bergen, Ruben S.; Tong, Frank

    2013-01-01

    Although practice has long been known to improve perceptual performance, the neural basis of this improvement in humans remains unclear. Using fMRI in conjunction with a novel signal detection-based analysis, we show that extensive practice selectively enhances the neural representation of trained orientations in the human visual cortex. Twelve observers practiced discriminating small changes in the orientation of a laterally presented grating over 20 or more daily one-hour training sessions. Training on average led to a two-fold improvement in discrimination sensitivity, specific to the trained orientation and the trained location, with minimal improvement found for untrained orthogonal orientations or for orientations presented in the untrained hemifield. We measured the strength of orientation-selective responses in individual voxels in early visual areas (V1–V4) using signal detection measures, both pre- and post-training. Although the overall amplitude of the BOLD response was no greater after training, practice nonetheless specifically enhanced the neural representation of the trained orientation at the trained location. This training-specific enhancement of orientation-selective responses was observed in the primary visual cortex (V1) as well as higher extrastriate visual areas V2–V4, and moreover, reliably predicted individual differences in the behavioral effects of perceptual learning. These results demonstrate that extensive training can lead to targeted functional reorganization of the human visual cortex, refining the cortical representation of behaviorally relevant information. PMID:23175828

  6. Perceptual learning selectively refines orientation representations in early visual cortex.

    PubMed

    Jehee, Janneke F M; Ling, Sam; Swisher, Jascha D; van Bergen, Ruben S; Tong, Frank

    2012-11-21

    Although practice has long been known to improve perceptual performance, the neural basis of this improvement in humans remains unclear. Using fMRI in conjunction with a novel signal detection-based analysis, we show that extensive practice selectively enhances the neural representation of trained orientations in the human visual cortex. Twelve observers practiced discriminating small changes in the orientation of a laterally presented grating over 20 or more daily 1 h training sessions. Training on average led to a twofold improvement in discrimination sensitivity, specific to the trained orientation and the trained location, with minimal improvement found for untrained orthogonal orientations or for orientations presented in the untrained hemifield. We measured the strength of orientation-selective responses in individual voxels in early visual areas (V1-V4) using signal detection measures, both before and after training. Although the overall amplitude of the BOLD response was no greater after training, practice nonetheless specifically enhanced the neural representation of the trained orientation at the trained location. This training-specific enhancement of orientation-selective responses was observed in the primary visual cortex (V1) as well as higher extrastriate visual areas V2-V4, and moreover, reliably predicted individual differences in the behavioral effects of perceptual learning. These results demonstrate that extensive training can lead to targeted functional reorganization of the human visual cortex, refining the cortical representation of behaviorally relevant information.
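
    The "discrimination sensitivity" in this record is a signal-detection quantity. A minimal sketch of the standard sensitivity index d′, with hypothetical pre- and post-training hit and false-alarm rates chosen (not taken from the study) so that sensitivity doubles:

    ```python
    from statistics import NormalDist

    def d_prime(hit_rate, fa_rate):
        """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    # Hypothetical rates illustrating a two-fold sensitivity improvement.
    pre = d_prime(0.69, 0.31)   # roughly 1.0
    post = d_prime(0.84, 0.16)  # roughly 2.0
    print(round(post / pre, 1))
    ```

    The same index can be applied per voxel to quantify how well a voxel's response separates two orientations, which is the spirit of the signal-detection analysis the abstract describes.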

  7. Implicit recognition based on lateralized perceptual fluency.

    PubMed

    Vargas, Iliana M; Voss, Joel L; Paller, Ken A

    2012-02-06

    In some circumstances, accurate recognition of repeated images in an explicit memory test is driven by implicit memory. We propose that this "implicit recognition" results from perceptual fluency that influences responding without awareness of memory retrieval. Here we examined whether recognition would vary if images appeared in the same or different visual hemifield during learning and testing. Kaleidoscope images were briefly presented left or right of fixation during divided-attention encoding. Presentation in the same visual hemifield at test produced higher recognition accuracy than presentation in the opposite visual hemifield, but only for guess responses. These correct guesses likely reflect a contribution from implicit recognition, given that when the stimulated visual hemifield was the same at study and test, recognition accuracy was higher for guess responses than for responses with any level of confidence. The dramatic difference in guessing accuracy as a function of lateralized perceptual overlap between study and test suggests that implicit recognition arises from memory storage in visual cortical networks that mediate repetition-induced fluency increments.

  8. Intact Visual Discrimination of Complex and Feature-Ambiguous Stimuli in the Absence of Perirhinal Cortex

    ERIC Educational Resources Information Center

    Squire, Larry R.; Levy, Daniel A.; Shrager, Yael

    2005-01-01

    The perirhinal cortex is known to be important for memory, but there has recently been interest in the possibility that it might also be involved in visual perceptual functions. In four experiments, we assessed visual discrimination ability and visual discrimination learning in severely amnesic patients with large medial temporal lobe lesions that…

  9. Brief daily exposures to Asian females reverses perceptual narrowing for Asian faces in Caucasian infants

    PubMed Central

    Anzures, Gizelle; Wheeler, Andrea; Quinn, Paul C.; Pascalis, Olivier; Slater, Alan M.; Heron-Delaney, Michelle; Tanaka, James W.; Lee, Kang

    2012-01-01

    Perceptual narrowing in the visual, auditory, and multisensory domains has its developmental origins in infancy. The present study shows that experimentally induced experience can reverse the effects of perceptual narrowing on infants’ visual recognition memory of other-race faces. Caucasian 8- to 10-month-olds who could not discriminate between novel and familiarized Asian faces at the beginning of testing were given brief daily experience with Asian female faces in the experimental condition and Caucasian female faces in the control condition. At the end of three weeks, only infants who received daily experience with Asian females showed above-chance recognition of novel Asian female and male faces. Further, infants in the experimental condition showed greater efficiency in learning novel Asian females compared to infants in the control condition. Thus, visual experience with a novel stimulus category can reverse the effects of perceptual narrowing in infancy via improved stimulus recognition and encoding. PMID:22625845

  10. Perceptual Learning Improves Stereoacuity in Amblyopia

    PubMed Central

    Xi, Jie; Jia, Wu-Li; Feng, Li-Xia; Lu, Zhong-Lin; Huang, Chang-Bing

    2014-01-01

Purpose. Amblyopia is a developmental disorder that results in both monocular and binocular deficits. Although traditional treatment in clinical practice (i.e., refractive correction, or occlusion by patching and penalization of the fellow eye) is effective in restoring monocular visual acuity, there is little information on how binocular function, especially stereopsis, responds to traditional amblyopia treatment. We aim to evaluate the effects of perceptual learning on stereopsis in observers with amblyopia in the current study. Methods. Eleven observers (21.1 ± 5.1 years, six females) with anisometropic or ametropic amblyopia were trained to judge depth in 10 to 13 sessions. Red–green glasses were used to present three different texture anaglyphs with different disparities but a fixed exposure duration. Stereoacuity was assessed with the Fly Stereo Acuity Test and visual acuity was assessed with the Chinese Tumbling E Chart before and after training. Results. Averaged across observers, training significantly reduced disparity threshold from 776.7″ to 490.4″ (P < 0.01) and improved stereoacuity from 200.3″ to 81.6″ (P < 0.01). Interestingly, visual acuity also significantly improved from 0.44 to 0.35 logMAR (approximately 0.9 lines, P < 0.05) in the amblyopic eye after training. Moreover, the learning effects in two of the three retested observers were largely retained over a 5-month period. Conclusions. Perceptual learning is effective in improving stereo vision in observers with amblyopia. These results, together with previous evidence, suggest that structured monocular and binocular training might be necessary to fully recover degraded visual functions in amblyopia. PMID:24508791

  11. Perceptual learning improves stereoacuity in amblyopia.

    PubMed

    Xi, Jie; Jia, Wu-Li; Feng, Li-Xia; Lu, Zhong-Lin; Huang, Chang-Bing

    2014-04-15

Amblyopia is a developmental disorder that results in both monocular and binocular deficits. Although traditional treatment in clinical practice (i.e., refractive correction, or occlusion by patching and penalization of the fellow eye) is effective in restoring monocular visual acuity, there is little information on how binocular function, especially stereopsis, responds to traditional amblyopia treatment. We aim to evaluate the effects of perceptual learning on stereopsis in observers with amblyopia in the current study. Eleven observers (21.1 ± 5.1 years, six females) with anisometropic or ametropic amblyopia were trained to judge depth in 10 to 13 sessions. Red-green glasses were used to present three different texture anaglyphs with different disparities but a fixed exposure duration. Stereoacuity was assessed with the Fly Stereo Acuity Test and visual acuity was assessed with the Chinese Tumbling E Chart before and after training. Averaged across observers, training significantly reduced disparity threshold from 776.7″ to 490.4″ (P < 0.01) and improved stereoacuity from 200.3″ to 81.6″ (P < 0.01). Interestingly, visual acuity also significantly improved from 0.44 to 0.35 logMAR (approximately 0.9 lines, P < 0.05) in the amblyopic eye after training. Moreover, the learning effects in two of the three retested observers were largely retained over a 5-month period. Perceptual learning is effective in improving stereo vision in observers with amblyopia. These results, together with previous evidence, suggest that structured monocular and binocular training might be necessary to fully recover degraded visual functions in amblyopia.
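
    The "approximately 0.9 lines" figure in this record follows directly from the logMAR scale, on which one chart line corresponds to 0.1 log units. A one-line sketch of the conversion:

    ```python
    def logmar_change_in_lines(pre_logmar, post_logmar):
        # Each logMAR chart line spans 0.1 log units, so the gain in lines
        # is the logMAR improvement divided by 0.1.
        return (pre_logmar - post_logmar) / 0.1

    # The reported acuity change: 0.44 to 0.35 logMAR.
    print(round(logmar_change_in_lines(0.44, 0.35), 1))  # -> 0.9
    ```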

  12. Visual memory and learning in extremely low-birth-weight/extremely preterm adolescents compared with controls: a geographic study.

    PubMed

    Molloy, Carly S; Wilson-Ching, Michelle; Doyle, Lex W; Anderson, Vicki A; Anderson, Peter J

    2014-04-01

    Contemporary data on visual memory and learning in survivors born extremely preterm (EP; <28 weeks gestation) or with extremely low birth weight (ELBW; <1,000 g) are lacking. Geographically determined cohort study of 298 consecutive EP/ELBW survivors born in 1991 and 1992, and 262 randomly selected normal-birth-weight controls. Visual learning and memory data were available for 221 (74.2%) EP/ELBW subjects and 159 (60.7%) controls. EP/ELBW adolescents exhibited significantly poorer performance across visual memory and learning variables compared with controls. Visual learning and delayed visual memory were particularly problematic and remained so after controlling for visual-motor integration and visual perception and excluding adolescents with neurosensory disability, and/or IQ <70. Male EP/ELBW adolescents or those treated with corticosteroids had poorer outcomes. EP/ELBW adolescents have poorer visual memory and learning outcomes compared with controls, which cannot be entirely explained by poor visual perceptual or visual constructional skills or intellectual impairment.

  13. Object-based implicit learning in visual search: perceptual segmentation constrains contextual cueing.

    PubMed

    Conci, Markus; Müller, Hermann J; von Mühlenen, Adrian

    2013-07-09

In visual search, detection of a target is faster when it is presented within a spatial layout of repeatedly encountered nontarget items, indicating that contextual invariances can guide selective attention (contextual cueing; Chun & Jiang, 1998). However, perceptual regularities may interfere with contextual learning; for instance, no contextual facilitation occurs when four nontargets form a square-shaped grouping, even though the square location predicts the target location (Conci & von Mühlenen, 2009). Here, we further investigated potential causes for this interference effect: we show that contextual cueing can reliably occur for targets located within the region of a segmented object, but not for targets presented outside of the object's boundaries. Four experiments demonstrate an object-based facilitation in contextual cueing, with a modulation of context-based learning by relatively subtle grouping cues including closure, symmetry, and spatial regularity. Moreover, the lack of contextual cueing for targets located outside the segmented region was due to an absence of (latent) learning of contextual layouts, rather than to an attentional bias towards the grouped region. Taken together, these results indicate that perceptual segmentation provides a basic structure within which contextual scene regularities are acquired. This in turn argues that contextual learning is constrained by object-based selection.

  14. Decoding Reveals Plasticity in V3A as a Result of Motion Perceptual Learning

    PubMed Central

    Shibata, Kazuhisa; Chang, Li-Hung; Kim, Dongho; Náñez, José E.; Kamitani, Yukiyasu; Watanabe, Takeo; Sasaki, Yuka

    2012-01-01

Visual perceptual learning (VPL) is defined as visual performance improvement after visual experience. VPL is often highly specific for a visual feature presented during training. Such specificity is observed in behavioral tuning function changes, with the highest improvement centered on the trained feature, and was originally thought to be evidence for changes in the early visual system associated with VPL. However, results of neurophysiological studies have been highly controversial concerning whether the plasticity underlying VPL occurs within the visual cortex. The controversy may be partially due to the lack of observation of neural tuning function changes in multiple visual areas in association with VPL. Here, using human subjects, we systematically compared behavioral tuning function changes after global motion detection training with decoded tuning function changes for 8 visual areas, using pattern classification analysis on functional magnetic resonance imaging (fMRI) signals. We found that the behavioral tuning function changes were highly correlated with decoded tuning function changes only in V3A, an area known to be highly responsive to global motion in humans. We conclude that VPL of a global motion detection task involves plasticity in a specific visual cortical area. PMID:22952849
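
    The comparison this record describes amounts to correlating two tuning-change curves sampled at the same direction offsets. A minimal sketch with fabricated curves (the values are illustrative, not the study's data):

    ```python
    import numpy as np

    # Hypothetical tuning-change curves sampled at several direction offsets
    # from the trained global-motion direction (degrees).
    offsets = np.array([0, 10, 20, 40, 80])
    behavioral_change = np.array([0.90, 0.60, 0.30, 0.10, 0.00])
    decoded_change_v3a = np.array([0.80, 0.55, 0.25, 0.05, 0.02])

    # Pearson correlation between the behavioral and decoded curves;
    # a high r indicates matching tuning of the learning effect.
    r = np.corrcoef(behavioral_change, decoded_change_v3a)[0, 1]
    print(round(r, 2))
    ```

    Repeating this for each of the 8 visual areas and finding a high correlation only for the V3A curve is the logic behind the paper's conclusion.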

  15. Supramodal Enhancement of Auditory Perceptual and Cognitive Learning by Video Game Playing.

    PubMed

    Zhang, Yu-Xuan; Tang, Ding-Lan; Moore, David R; Amitay, Sygal

    2017-01-01

    Medical rehabilitation involving behavioral training can produce highly successful outcomes, but those successes are obtained at the cost of long periods of often tedious training, reducing compliance. By contrast, arcade-style video games can be entertaining and highly motivating. We examine here the impact of video game play on contiguous perceptual training. We alternated several periods of auditory pure-tone frequency discrimination (FD) with the popular spatial visual-motor game Tetris played in silence. Tetris play alone did not produce any auditory or cognitive benefits. However, when alternated with FD training it enhanced learning of FD and auditory working memory. The learning-enhancing effects of Tetris play cannot be explained simply by the visual-spatial training involved, as the effects were gone when Tetris play was replaced with another visual-spatial task using Tetris-like stimuli but not incorporated into a game environment. The results indicate that game play enhances learning and transfer of the contiguous auditory experiences, pointing to a promising approach for increasing the efficiency and applicability of rehabilitative training.

  16. The Interplay between Perceptual Organization and Categorization in the Representation of Complex Visual Patterns by Young Infants

    ERIC Educational Resources Information Center

    Quinn, Paul C.; Schyns, Philippe G.; Goldstone, Robert L.

    2006-01-01

    The relation between perceptual organization and categorization processes in 3- and 4-month-olds was explored. The question was whether an invariant part abstracted during category learning could interfere with Gestalt organizational processes. A 2003 study by Quinn and Schyns had reported that an initial category familiarization experience in…

  17. A Learning Print Approach Toward Perceptual Training and Reading in Kindergarten.

    ERIC Educational Resources Information Center

    D'Annunzio, Anthony

    The purpose of this research study was to compare two kinds of perceptual training for kindergarteners. A control group was grouped for instruction in visual or auditory perception. The children whose weaker modality was auditory received an "Open Court" program which stressed the acquisition of phonetic skills. The Frostig-Horne program was given…

  18. Visual training improves perceptual grouping based on basic stimulus features.

    PubMed

    Kurylo, Daniel D; Waxman, Richard; Kidron, Rachel; Silverstein, Steven M

    2017-10-01

    Training on visual tasks improves performance on basic and higher order visual capacities. Such improvement has been linked to changes in connectivity among mediating neurons. We investigated whether training effects occur for perceptual grouping. It was hypothesized that repeated engagement of integration mechanisms would enhance grouping processes. Thirty-six participants underwent 15 sessions of training on a visual discrimination task that required perceptual grouping. Participants viewed 20 × 20 arrays of dots or Gabor patches and indicated whether the array appeared grouped as vertical or horizontal lines. Across trials stimuli became progressively disorganized, contingent upon successful discrimination. Four visual dimensions were examined, in which grouping was based on similarity in luminance, color, orientation, and motion. Psychophysical thresholds of grouping were assessed before and after training. Results indicate that performance in all four dimensions improved with training. Training on a control condition, which paralleled the discrimination task but without a grouping component, produced no improvement. In addition, training on only the luminance and orientation dimensions improved performance for those conditions as well as for grouping by color, on which training had not occurred. However, improvement from partial training did not generalize to motion. Results demonstrate that a training protocol emphasizing stimulus integration enhanced perceptual grouping. Results suggest that neural mechanisms mediating grouping by common luminance and/or orientation contribute to those mediating grouping by color but do not share resources for grouping by common motion. Results are consistent with theories of perceptual learning emphasizing plasticity in early visual processing regions.
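
    The adaptive procedure this record describes (stimuli become progressively disorganized contingent on successful discrimination) is a classic up-down staircase. A minimal 1-up/1-down sketch with a made-up observer model, not the authors' exact protocol:

    ```python
    import random

    def run_staircase(p_correct_at, level=0.0, step=1.0, n_trials=40, seed=1):
        """1-up/1-down staircase: the disorganization level rises after a
        correct response and falls after an error, converging near the
        level where the observer is correct about half the time."""
        rng = random.Random(seed)
        levels = []
        for _ in range(n_trials):
            levels.append(level)
            correct = rng.random() < p_correct_at(level)
            level = level + step if correct else max(0.0, level - step)
        return levels

    # Hypothetical observer whose accuracy falls as disorganization grows.
    observer = lambda level: max(0.05, 1.0 - level / 20.0)
    track = run_staircase(observer)

    # Averaging the last trials gives a crude grouping-threshold estimate.
    print(round(sum(track[-10:]) / 10, 1))
    ```

    Comparing such threshold estimates before and after the 15 training sessions is how a grouping improvement would be quantified.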

  19. The Perceptual Basis of the Modality Effect in Multimedia Learning

    ERIC Educational Resources Information Center

    Rummer, Ralf; Schweppe, Judith; Furstenberg, Anne; Scheiter, Katharina; Zindler, Antje

    2011-01-01

    Various studies have demonstrated an advantage of auditory over visual text modality when learning with texts and pictures. To explain this modality effect, two complementary assumptions are proposed by cognitive theories of multimedia learning: first, the visuospatial load hypothesis, which explains the modality effect in terms of visuospatial…

  20. Perceptual Learning Style Matching and L2 Vocabulary Acquisition

    ERIC Educational Resources Information Center

    Tight, Daniel G.

    2010-01-01

    This study explored learning and retention of concrete nouns in second language Spanish by first language English undergraduates (N = 128). Each completed a learning style (visual, auditory, tactile/kinesthetic, mixed) assessment, took a vocabulary pretest, and then studied 12 words each through three conditions (matching, mismatching, mixed…

  1. Learning Style Preferences of Asian American (Chinese, Filipino, Korean, and Vietnamese) Students in Secondary Schools.

    ERIC Educational Resources Information Center

    Park, Clara C.

    1997-01-01

Investigates perceptual learning style preferences (auditory, visual, kinesthetic, and tactile) and preferences for group and individual learning among Chinese, Filipino, Korean, and Vietnamese secondary education students. Comparison analysis reveals diverse learning style preferences between Anglo and Asian American students and also between…

  2. Visual Aversive Learning Compromises Sensory Discrimination.

    PubMed

    Shalev, Lee; Paz, Rony; Avidan, Galia

    2018-03-14

Aversive learning is thought to modulate perceptual thresholds, which can lead to overgeneralization. However, it remains undetermined whether this modulation is domain specific or a general effect. Moreover, despite the unique role of the visual modality in human perception, it is unclear whether this aspect of aversive learning exists in this modality. The current study was designed to examine the effect of visual aversive outcomes on the perception of basic visual and auditory features. We tested the ability of healthy participants, both males and females, to discriminate between neutral stimuli before and after visual learning. In each experiment, neutral stimuli were associated with aversive images in an experimental group and with neutral images in a control group. Participants demonstrated a deterioration in discrimination (higher discrimination thresholds) only after aversive learning. This deterioration was measured for both auditory (tone frequency) and visual (orientation and contrast) features. The effect was replicated in five different experiments and lasted for at least 24 h. fMRI neural responses and pupil size were also measured during learning. We showed an increase in neural activations in the anterior cingulate cortex, insula, and amygdala during aversive compared with neutral learning. Interestingly, the early visual cortex showed increased brain activity during aversive compared with neutral context trials, with identical visual information. Our findings imply the existence of a central multimodal mechanism that modulates early perceptual properties following exposure to negative situations. Such a mechanism could contribute to abnormal responses that underlie anxiety states, even in new and safe environments. SIGNIFICANCE STATEMENT Using a visual aversive-learning paradigm, we found deteriorated discrimination abilities for visual and auditory stimuli that were associated with visual aversive stimuli. We showed increased neural activations in the anterior cingulate cortex, insula, and amygdala during aversive learning compared with neutral learning. Importantly, similar findings were also evident in the early visual cortex during trials with aversive/neutral context but with identical visual information. The demonstration of this phenomenon in the visual modality is important, as it supports the notion that aversive learning can influence perception via a central mechanism, independent of input modality. Given the dominance of the visual system in human perception, our findings hold relevance to daily life, as well as implying a potential etiology for anxiety disorders. Copyright © 2018 the authors.

  3. Interactions between attention, context and learning in primary visual cortex.

    PubMed

    Gilbert, C; Ito, M; Kapadia, M; Westheimer, G

    2000-01-01

Attention in early visual processing engages the higher-order, context-dependent properties of neurons. Even at the earliest stages of visual cortical processing, neurons play a role in intermediate-level vision: contour integration and surface segmentation. The contextual influences mediating this process may be derived from long-range connections within primary visual cortex (V1). These influences are subject to perceptual learning, and are strongly modulated by visuospatial attention, which is itself a learning-dependent process. The attentional influences may involve interactions between feedback and horizontal connections in V1. V1 is therefore a dynamic and active processor, subject to top-down influences.

  4. Can perceptual learning be used to treat amblyopia beyond the critical period of visual development?

    PubMed

    Astle, Andrew T; Webb, Ben S; McGraw, Paul V

    2011-11-01

    Amblyopia presents early in childhood and affects approximately 3% of western populations. The monocular visual acuity loss is conventionally treated during the 'critical periods' of visual development by occluding or penalising the fellow eye to encourage use of the amblyopic eye. Despite the measurable success of this approach in many children, substantial numbers of people still suffer with amblyopia later in life because either they were never diagnosed in childhood, did not respond to the original treatment, the amblyopia was only partially remediated, or their acuity loss returned after cessation of treatment. In this review, we consider whether the visual deficits of this largely overlooked amblyopic group are amenable to conventional and innovative therapeutic interventions later in life, well beyond the age at which treatment is thought to be effective. There is a considerable body of evidence that residual plasticity is present in the adult visual brain and this can be harnessed to improve function in adults with amblyopia. Perceptual training protocols have been developed to optimise visual gains in this clinical population. Results thus far are extremely encouraging; marked visual improvements have been demonstrated, the perceptual benefits transfer to new visual tasks and appear to be relatively enduring. The essential ingredients of perceptual training protocols are being incorporated into video game formats, facilitating home-based interventions. Many studies support perceptual training as a tool for improving vision in amblyopes beyond the critical period. Should this novel form of treatment stand up to the scrutiny of a randomised controlled trial, clinicians may need to re-evaluate their therapeutic approach to adults with amblyopia. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.

  6. Perceptual Learning Improves Contrast Sensitivity of V1 Neurons in Cats

    PubMed Central

    Hua, Tianmiao; Bao, Pinglei; Huang, Chang-Bing; Wang, Zhenhua; Xu, Jinwang

    2010-01-01

    Summary Background Perceptual learning has been documented in adult humans over a wide range of tasks. Although the often observed specificity of learning is generally interpreted as evidence for training-induced plasticity in early cortical areas, physiological evidence for training-induced changes in early visual cortical areas is modest, despite reports of learning-induced changes of cortical activities in fMRI studies. To reveal the physiological bases of perceptual learning, we combined psychophysical measurements with extracellular single-unit recording under anesthetized preparations, and examined the effects of training in grating orientation identification on both perceptual and neuronal contrast sensitivity functions of cats. Results We have found that training significantly improved perceptual contrast sensitivity of the cats to gratings with the spatial frequencies near the ‘trained’ spatial frequency, with stronger effects in the trained eye. Consistent with behavioral assessments, the mean contrast sensitivity of neurons recorded from V1 of the trained cats was significantly higher than that of neurons recorded from the untrained cats. Furthermore, in the trained cats, the contrast sensitivity of V1 neurons responding preferentially to stimuli presented via the trained eyes was significantly greater than that of neurons responding preferentially to stimuli presented via the ‘untrained’ eyes. The effect was confined to the trained spatial frequencies. In both trained and untrained cats, the neuronal contrast sensitivity functions derived from the contrast sensitivity of the individual neurons were highly correlated with behaviorally determined perceptual contrast sensitivity functions. Conclusions We suggest that training-induced neuronal contrast-gain in area V1 underlies behaviorally determined perceptual contrast sensitivity improvements. PMID:20451388

  7. Using Learning Preferences to Improve Coaching and Athletic Performance

    ERIC Educational Resources Information Center

    Dunn, Julia L.

    2009-01-01

    Each individual learns in a different manner, depending on his or her perceptual or learning preferences (visual, auditory, read/write, or kinesthetic). In sport, coaches and athletes must speak a common language of instructions, verbal cues, and appropriate motor responses. Thus, developing a clear understanding of how to use students' learning…

  8. Learning to identify crowded letters: Does the learning depend on the frequency of training?

    PubMed Central

    Chung, Susana T. L.; Truong, Sandy R.

    2012-01-01

Performance for many visual tasks improves with training. The magnitude of improvement following training depends on the training task, the number of trials per training session and the total amount of training. Does the magnitude of improvement also depend on the frequency of training sessions? In this study, we compared the learning effect for three groups of normally sighted observers who repeatedly practiced the task of identifying crowded letters in the periphery for six sessions (1000 trials per session), according to three different training schedules: one group received one session of training every day, the second group received a training session once a week and the third group once every two weeks. Following six sessions of training, all observers improved in their performance of identifying crowded letters in the periphery. Most importantly, the magnitudes of improvement were similar across the three training groups. The improvement was accompanied by a reduction in the spatial extent of crowding, an increase in the size of the visual span and a reduction in letter-size threshold. The magnitudes of these accompanying improvements were also similar across the three training groups. Our finding that the effectiveness of visual perceptual learning is similar for daily, weekly and biweekly training has significant implications for adopting perceptual learning as an option to improve visual functions in clinical patients. PMID:23206551
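    The schedule invariance found here is what one would expect if improvement depends only on the cumulative number of practice trials. A toy power-law learning curve makes the point (my illustration; the parameters a and b are arbitrary, not fitted to the study):

```python
def threshold_after(total_trials, a=1.0, b=0.3):
    """Toy power-law learning curve: performance threshold falls with
    cumulative practice trials (a and b are arbitrary illustrative values)."""
    return a * max(total_trials, 1) ** (-b)

# Six sessions of 1000 trials predict the same endpoint whether they are
# spread over days, weeks or fortnights: only the total count enters.
daily = threshold_after(6 * 1000)
biweekly = threshold_after(6 * 1000)
```

    Distinguishing such a trials-only account from spacing-sensitive models is exactly what comparing the three training schedules tests.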

  9. [Visual perceptual abilities of children with low motor abilities--a pilot study].

    PubMed

    Werpup-Stüwe, Lina; Petermann, Franz

    2015-01-01

The results of many studies show visual perceptual deficits in children with low motor abilities. This study aims to examine the correlation between visual-perceptual and motor abilities. The correlation of visual-perceptual and motor abilities of 41 children is measured using the German versions of the Developmental Test of Visual Perception--Adolescent and Adult (DTVP-A) and the Movement Assessment Battery for Children--Second Edition (M-ABC-2). The visual-perceptual abilities of children with low motor abilities (n=21) are also compared to those of children with normal motor abilities (the control group, n=20). High correlations between visual-perceptual and motor abilities are found. The perceptual abilities of the two groups differ significantly. Nearly half of the children with low motor abilities show visual-perceptual deficits. The visual-perceptual abilities of children with coordination disorders should therefore always be assessed. The DTVP-A is useful because it provides the possibility to compare motor-reduced visual-perceptual abilities with visual-motor integration abilities and to estimate the degree of the deficit.

  10. Visual field differences in visual word recognition can emerge purely from perceptual learning: evidence from modeling Chinese character pronunciation.

    PubMed

    Hsiao, Janet Hui-Wen

    2011-11-01

In Chinese orthography, a dominant character structure exists in which a semantic radical appears on the left and a phonetic radical on the right (SP characters); a minority opposite arrangement also exists (PS characters). As the number of phonetic radical types is much greater than that of semantic radical types, in SP characters the information is skewed to the right, whereas in PS characters it is skewed to the left. By training a computational model for SP and PS character recognition that takes into account the locations at which the characters appear in the visual field during learning, but does not assume any fundamental hemispheric processing difference, we show that visual field differences can emerge as a consequence of the fundamental structural differences in information between SP and PS characters, as opposed to fundamental processing differences between the two hemispheres. This modeling result is also consistent with behavioral naming performance. This work provides strong evidence that perceptual learning, i.e., the information structure of word stimuli to which readers have long been exposed, is one of the factors that account for hemispheric asymmetry effects in visual word recognition. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. Mildly Handicapped Students in the Social Studies Class: Facilitating Learning.

    ERIC Educational Resources Information Center

    Simms, Rochelle B.

    1984-01-01

    Problems experienced by mildly handicapped students include visual perceptual and visual motor problems, inability to use and organize time, poor notetaking and outlining skills, and deficient reading vocabulary and writing skills. What the social studies teacher can do to alleviate each of these problems is discussed. (RM)

  12. Two Dozen-Plus Ideas That Will Help Special Needs Kids.

    ERIC Educational Resources Information Center

    Boyle, Martha; Korn-Rothschild, Sarah

    1994-01-01

    Contains 27 specific suggestions for teachers with special needs children mainstreamed in their classroom, particularly children with visual and auditory perceptual difficulties and poor motor skills. Notes that teachers need to make sure that directions, visual and verbal cues, learning materials, and computers are appropriate for children with…

  13. Perceptual Learning in Children With Infantile Nystagmus: Effects on Reading Performance.

    PubMed

    Huurneman, Bianca; Boonstra, F Nienke; Goossens, Jeroen

    2016-08-01

Perceptual learning improves visual acuity and reduces crowding in children with infantile nystagmus (IN). Here, we compare the reading performance of 6- to 11-year-old children with IN with that of normal controls, and evaluate whether perceptual learning improves their reading. Children with IN were divided into two training groups: a crowded training group (n = 18; albinism: n = 8; idiopathic IN: n = 10) and an uncrowded training group (n = 17; albinism: n = 9; idiopathic IN: n = 8). Eleven children with normal vision also participated. Outcome measures were: reading acuity (the smallest readable font size), maximum reading speed, critical print size (font size below which reading is suboptimal), and acuity reserve (difference between reading acuity and critical print size). We used multiple regression analyses to test whether these reading parameters were related to the children's uncrowded distance acuity and/or crowding scores. Reading acuity and critical print size were 0.65 ± 0.04 and 0.69 ± 0.08 log units larger for children with IN than for children with normal vision. Maximum reading speed and acuity reserve did not differ between these groups. After training, reading acuity improved by 0.12 ± 0.02 logMAR and critical print size improved by 0.11 ± 0.04 logMAR in both IN training groups. The changes in reading acuity, critical print size, and acuity reserve of children with IN were tightly related to changes in their uncrowded distance acuity and the changes in magnitude and extent of crowding. Our findings are the first to show that visual acuity is not the only factor that restricts reading in children with IN, but that crowding also limits their reading performance. By targeting both of these spatial bottlenecks in children with IN, our perceptual learning paradigms significantly improved their reading acuity and critical print size. This shows that perceptual learning can effectively transfer to reading.
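    For readers unfamiliar with the units: logMAR is the base-10 logarithm of the minimum angle of resolution, so additive logMAR gains correspond to multiplicative changes in resolvable detail. A short arithmetic check (the 0.12 value is quoted from the abstract; the baseline acuity is hypothetical):

```python
def logmar_to_mar(logmar):
    """Minimum angle of resolution (in arcmin) from a logMAR value."""
    return 10.0 ** logmar

# A 0.12 logMAR improvement, as reported after training, shrinks the
# resolvable angle by a factor of 10**0.12, i.e. roughly 32%.
before, after = 0.30, 0.30 - 0.12      # hypothetical baseline acuity
gain_factor = logmar_to_mar(before) / logmar_to_mar(after)
```

    The same factor applies whatever the starting acuity, which is why logMAR differences are the natural unit for reporting training effects.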

  14. The Relationship Between the Learning Style Perceptual Preferences of Urban Fourth Grade Children and the Acquisition of Selected Physical Science Concepts Through Learning Cycle Instructional Methodology.

    NASA Astrophysics Data System (ADS)

    Adams, Kenneth Mark

The purpose of this research was to investigate the relationship between the learning style perceptual preferences of fourth grade urban students and the attainment of selected physical science concepts for three simple machines as taught using learning cycle methodology. The sample included all fourth grade children from one urban elementary school (N = 91). The research design followed a quasi-experimental format with a single group, equivalent teacher demonstration and student investigation materials, and identical learning cycle instructional treatment. All subjects completed the Understanding Simple Machines Test (USMT) prior to instructional treatment, and at the conclusion of treatment, to measure student concept attainment related to the pendulum, the lever and fulcrum, and the inclined plane. USMT pre- and post-test scores, California Achievement Test (CAT-5) percentile scores, and Learning Style Inventory (LSI) standard scores for four perceptual elements for each subject were held in a double blind until completion of the USMT post-test. The hypothesis tested in this study was: Learning style perceptual preferences of fourth grade students as measured by the Dunn, Dunn, and Price Learning Style Inventory (LSI) are significant predictors of success in the acquisition of physical science concepts taught through use of the learning cycle. Analysis of pre- and post-test USMT scores, 18.18 and 30.20 respectively, yielded a significant mean gain of +12.02. A controlled stepwise regression was employed to identify significant predictors of success on the USMT post-test from among the USMT pre-test, four CAT-5 percentile scores, and four LSI perceptual standard scores. The CAT-5 Total Math and Total Reading accounted for 64.06% of the variance in the USMT post-test score. The only perceptual element to act as a significant predictor was the Kinesthetic standard score, accounting for 1.72% of the variance. 
The study revealed that learning cycle instruction does not appear to be sensitive to different perceptual preferences. Students with different preferences for auditory, visual, and tactile modalities, when learning, seem to benefit equally from learning cycle exposure. Increased use of a double blind for future learning styles research was recommended.

  15. Neural Evidence of Statistical Learning: Efficient Detection of Visual Regularities without Awareness

    ERIC Educational Resources Information Center

    Turk-Browne, Nicholas B.; Scholl, Brian J.; Chun, Marvin M.; Johnson, Marcia K.

    2009-01-01

    Our environment contains regularities distributed in space and time that can be detected by way of statistical learning. This unsupervised learning occurs without intent or awareness, but little is known about how it relates to other types of learning, how it affects perceptual processing, and how quickly it can occur. Here we use fMRI during…

  16. Perceptual Visual Distortions in Adult Amblyopia and Their Relationship to Clinical Features

    PubMed Central

    Piano, Marianne E. F.; Bex, Peter J.; Simmers, Anita J.

    2015-01-01

    Purpose Develop a paradigm to map binocular perceptual visual distortions in adult amblyopes and visually normal controls, measure their stability over time, and determine the relationship between strength of binocular single vision and distortion magnitude. Methods Perceptual visual distortions were measured in 24 strabismic, anisometropic, or microtropic amblyopes (interocular acuity difference ≥ 0.200 logMAR or history of amblyopia treatment) and 10 controls (mean age 27.13 ± 10.20 years). The task was mouse-based target alignment on a stereoscopic liquid crystal display monitor, measured binocularly five times during viewing dichoptically through active shutter glasses, amblyopic eye viewing cross-hairs, fellow eye viewing single target dots (16 locations within central 5°), and five times nondichoptically, with all stimuli visible to either eye. Measurements were repeated over time (1 week, 1 month) in eight amblyopic subjects, evaluating test–retest reliability. Measurements were also correlated against logMAR visual acuity, horizontal prism motor fusion range, Frisby/Preschool Randot stereoacuity, and heterophoria/heterotropia prism cover test measurement. Results Sixty-seven percent (16/24) of amblyopes had significant perceptual visual distortions under dichoptic viewing conditions compared to nondichoptic viewing conditions and dichoptic control group performance. Distortions correlated with the strength of motor fusion (r = −0.417, P = 0.043) and log stereoacuity (r = 0.492, P = 0.015), as well as near angle of heterotropic/heterophoric deviation (r = 0.740, P < 0.001), and, marginally, amblyopia depth (r = 0.405, P = 0.049). Global distortion index (GDI, mean displacement) remained, overall, consistent over time (median change in GDI between baseline and 1 week = −0.03°, 1 month = −0.08°; x-axis Z = 4.4256, P < 0.001; y-axis Z = 5.0547, P < 0.001). 
Conclusions Perceptual visual distortions are stable over time and associated with poorer binocular function, greater amblyopia depth, and larger angles of ocular deviation. Assessment of distortions may be relevant for recent perceptual learning paradigms specifically targeting binocular vision. PMID:26284559
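    The global distortion index (GDI) used above is described as mean displacement; under that reading, a minimal sketch (the probe coordinates are hypothetical, and the real task used 16 locations within the central 5 degrees):

```python
import math

def global_distortion_index(perceived, actual):
    """Mean displacement (in deg) between perceived and actual target
    positions across probe locations; one reading of the GDI above."""
    return sum(math.dist(p, a) for p, a in zip(perceived, actual)) / len(actual)
```

    Tracking this scalar across sessions is a compact way to test the stability over time reported in the Results.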

  17. Multisensory perceptual learning is dependent upon task difficulty.

    PubMed

    De Niear, Matthew A; Koo, Bonhwang; Wallace, Mark T

    2016-11-01

There has been a growing interest in developing behavioral tasks to enhance temporal acuity, as recent findings have demonstrated changes in temporal processing in a number of clinical conditions. Prior research has demonstrated that perceptual training can enhance temporal acuity both within and across different sensory modalities. Although certain forms of unisensory perceptual learning have been shown to be dependent upon task difficulty, this relationship has not been explored for multisensory learning. The present study sought to determine the effects of task difficulty on multisensory perceptual learning. Prior to and following a single training session, participants completed a simultaneity judgment (SJ) task, which required them to judge whether a visual stimulus (flash) and an auditory stimulus (beep), presented in synchrony or at various stimulus onset asynchronies (SOAs), occurred synchronously or asynchronously. During the training session, participants completed the same SJ task but received feedback regarding the accuracy of their responses. Participants were randomly assigned to one of three levels of difficulty during training: easy, moderate, and hard, distinguished by the SOAs used during training. We report that only the most difficult (i.e., hard) training protocol enhanced temporal acuity. We conclude that perceptual training protocols for enhancing multisensory temporal acuity may be optimized by employing audiovisual stimuli for which it is difficult to discriminate temporal synchrony from asynchrony.
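    One common way to model SJ data is a temporal binding window: the probability of reporting "synchronous" falls off with absolute SOA. The toy Gaussian model below (my illustration, with arbitrary window widths) shows why near-threshold ("hard") SOAs are the most informative training stimuli: that is where a narrowed window changes responses most.

```python
import math

def p_synchronous(soa_ms, window_sd=120.0):
    """Toy model of a simultaneity judgment: probability of reporting
    'synchronous' falls off with SOA as a Gaussian whose SD plays the
    role of the temporal binding window (values are illustrative)."""
    return math.exp(-(soa_ms ** 2) / (2 * window_sd ** 2))

# Training that narrows the window sharpens temporal acuity: at a
# near-threshold SOA the two curves differ substantially.
before = p_synchronous(150, window_sd=120.0)
after = p_synchronous(150, window_sd=80.0)
```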

  18. Two-stage perceptual learning to break visual crowding.

    PubMed

    Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang

    2016-01-01

    When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).

  19. Guiding attention aids the acquisition of anticipatory skill in novice soccer goalkeepers.

    PubMed

    Ryu, Donghyun; Kim, Seonjin; Abernethy, Bruce; Mann, David L

    2013-06-01

    The ability to anticipate the actions of opponents can be enhanced through perceptual-skill training, though there is doubt regarding the most effective form of doing so. We sought to evaluate whether perceptual-skill learning would be enhanced when supplemented with guiding visual information. Twenty-eight participants without soccer-playing experience were assigned to a guided perceptual-training group (n = 9), an unguided perceptual-training group (n = 10), or a control group (n = 9). The guided perceptual-training group received half of their trials with color cueing that highlighted either the key kinematic changes in the kicker's action or the known visual search strategy of expert goalkeepers. The unguided perceptual-training group undertook an equal number of trials of practice, but all trials were without guidance. The control group undertook no training intervention. All participants completed an anticipation test immediately before and after the 7-day training intervention, as well as a 24-hr retention test. The guided perceptual-training group significantly improved their response accuracy for anticipating the direction of soccer penalty kicks from preintervention to postintervention, whereas no change in performance was evident at posttest for either the unguided perceptual-training group or the control group. The superior performance of the guided perceptual-training group was preserved in the retention test and was confirmed when relative changes in response time were controlled using a covariate analysis. Perceptual training supplemented with guiding information provides a level of improvement in perceptual anticipatory skill that is not seen without guidance.

  20. Movement Sonification: Effects on Motor Learning beyond Rhythmic Adjustments.

    PubMed

    Effenberg, Alfred O; Fehse, Ursula; Schmitz, Gerd; Krueger, Bjoern; Mechling, Heinz

    2016-01-01

Motor learning is based on motor perception and emergent perceptual-motor representations. Much behavioral research has addressed single perceptual modalities, but over the last two decades the contribution of multimodal perception to motor behavior has received increasing attention. A growing number of studies indicates that multimodal stimuli enhance motor perception, motor control and motor learning, in terms of better precision and higher reliability of the related actions. Behavioral research is supported by neurophysiological data revealing that multisensory integration supports motor control and learning. The overwhelming part of both lines of research, however, is dedicated to basic research. Beyond the domains of music, dance and motor rehabilitation, there is almost no evidence for enhanced effectiveness of multisensory information on the learning of gross motor skills. To reduce this gap, movement sonification is used here in applied research on motor learning in sports. Based on current knowledge of the multimodal organization of the perceptual system, we generate additional real-time movement information suitable for integration with perceptual feedback streams of the visual and proprioceptive modalities. With ongoing training, synchronously processed auditory information should become integrated into the emerging internal models, enhancing the efficacy of motor learning. This is achieved by a direct mapping of kinematic and dynamic motion parameters to electronic sounds, resulting in continuous auditory and convergent audiovisual or audio-proprioceptive stimulus arrays. In sharp contrast to approaches that use acoustic information as error feedback in motor learning settings, we aim to generate additional movement information suitable for accelerating and enhancing adequate sensorimotor representations, processable below the level of consciousness. 
In the experimental setting, participants were asked to learn a closed motor skill (technique acquisition of indoor rowing). One group was trained with visual information and two groups with audiovisual information (sonification vs. natural sounds). Learning became evident and remained stable in all three groups. Participants who received additional movement sonification performed better than both other groups. These results indicate that movement sonification enhances motor learning of a complex gross motor skill, even beyond the rhythmic effects usually expected of acoustic information.
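    The "direct mapping of kinematic and dynamic motion parameters to electronic sounds" can be illustrated with a minimal parameter-mapping sketch (the ranges and variable names are hypothetical, not the authors' implementation):

```python
def sonify(velocity, v_max=3.0, f_min=220.0, f_max=880.0):
    """Direct parameter-mapping sonification (illustrative): scale a
    kinematic value, e.g. handle velocity in m/s, linearly onto an
    audible frequency range."""
    v = min(max(velocity, 0.0), v_max)          # clamp to the mapped range
    return f_min + (f_max - f_min) * v / v_max

stroke = [0.0, 1.5, 3.0]                        # hypothetical velocity samples
tones = [sonify(v) for v in stroke]             # rising pitch with velocity
```

    Real-time variants of such mappings produce the continuous audiovisual or audio-proprioceptive stimulus arrays described above.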

  2. The effectiveness of multimedia visual perceptual training groups for the preschool children with developmental delay.

    PubMed

    Chen, Yi-Nan; Lin, Chin-Kai; Wei, Ta-Sen; Liu, Chi-Hsin; Wuang, Yee-Pay

    2013-12-01

    This study compared the effectiveness of three approaches to improving visual perception among preschool children 4-6 years old with developmental delays: multimedia visual perceptual group training, multimedia visual perceptual individual training, and paper visual perceptual group training. A control group received no special training. This study employed a pretest-posttest control-group true experimental design. A total of 64 children 4-6 years old with developmental delays were randomized into four groups: (1) a multimedia visual perceptual group training group (15 subjects); (2) a multimedia visual perceptual individual training group (15 subjects); (3) a paper visual perceptual group training group (19 subjects); and (4) a control group (15 subjects) with no visual perceptual training. Forty-minute training sessions were conducted once a week for 14 weeks. The Test of Visual Perception Skills, third edition, was used to evaluate the effectiveness of the intervention. Paired-samples t-tests showed significant pre- to post-test differences in each of the three training groups, but no significant difference between pre-test and post-test scores in the control group. ANOVA results showed significant differences in improvement levels among the four study groups. Scheffe post hoc test results showed significant differences between: group 1 and group 2; group 1 and group 3; group 1 and the control group; and group 2 and the control group. No significant differences were found between group 2 and group 3, or between group 3 and the control group. All three therapeutic programs produced significant differences between pretest and posttest scores, and the training effect of the multimedia group and individual programs was greater than the developmental effect. Both the multimedia visual perceptual group training program and the multimedia visual perceptual individual training program produced significant effects on visual perception.
The multimedia visual perceptual group training program was more effective for improving visual perception than was the multimedia visual perceptual individual training program, and it was also more effective than the paper visual perceptual group training program. Copyright © 2013 Elsevier Ltd. All rights reserved.
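The analysis pipeline described in this record (within-group paired-samples t-tests, then a one-way ANOVA on improvement scores, followed by post hoc comparisons) can be sketched as follows. The group names, sample sizes, and scores below are hypothetical illustrations matching the study's design, not its data; the Scheffe post hoc step is noted but omitted, as SciPy does not provide it directly.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical group sizes and mean improvements (illustration only):
# three training groups improve, the control group does not.
design = {
    "multimedia_group":      (15, 8.0),
    "multimedia_individual": (15, 5.0),
    "paper_group":           (19, 3.0),
    "control":               (15, 0.0),
}

p_within = {}
gain_scores = []
for name, (n, gain) in design.items():
    pre = rng.normal(50, 5, n)
    post = pre + gain + rng.normal(0, 2, n)
    # Within-group pre/post comparison (paired-samples t-test).
    _, p = stats.ttest_rel(post, pre)
    p_within[name] = p
    gain_scores.append(post - pre)

# Between-group comparison of improvement (one-way ANOVA on gain scores).
# A post hoc test (e.g. Scheffe) would then locate pairwise differences.
_, p_anova = stats.f_oneway(*gain_scores)
```

Running ANOVA on gain scores rather than raw post-test scores is one common way to compare improvement levels across groups when a pretest is available.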

  3. Learning from vision-to-touch is different than learning from touch-to-vision.

    PubMed

    Wismeijer, Dagmar A; Gegenfurtner, Karl R; Drewing, Knut

    2012-01-01

    We studied whether vision can teach touch to the same extent as touch seems to teach vision. In a 2 × 2 between-participants learning study, we artificially correlated visual gloss cues with haptic compliance cues. In two "natural" tasks, we tested whether visual gloss estimations have an influence on haptic estimations of softness and vice versa. In two "novel" tasks, in which participants were either asked to haptically judge glossiness or to visually judge softness, we investigated how perceptual estimates transfer from one sense to the other. Our results showed that vision does not teach touch as efficiently as touch seems to teach vision.

  4. Learning foreign sounds in an alien world: videogame training improves non-native speech categorization.

    PubMed

    Lim, Sung-joo; Holt, Lori L

    2011-01-01

    Although speech categories are defined by multiple acoustic dimensions, some are perceptually weighted more than others and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distribution characteristics of experienced sounds influence perceptual cue weights: Increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this among native Japanese learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information, and players' responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5h across 5 days exhibited improvements in /r/-/l/ perception on par with 2-4 weeks of explicit categorization training in previous research and exhibited a shift toward more native-like perceptual cue weights. Copyright © 2011 Cognitive Science Society, Inc.

  5. Learning foreign sounds in an alien world: Videogame training improves non-native speech categorization

    PubMed Central

    Lim, Sung-joo; Holt, Lori L.

    2011-01-01

    Although speech categories are defined by multiple acoustic dimensions, some are perceptually-weighted more than others and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distribution characteristics of experienced sounds influence perceptual cue weights: increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this among native Japanese learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information and players’ responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5 hours across 5 days exhibited improvements in /r/-/l/ perception on par with 2–4 weeks of explicit categorization training in previous research and exhibited a shift toward more native-like perceptual cue weights. PMID:21827533

  6. Perceptual Training in Beach Volleyball Defence: Different Effects of Gaze-Path Cueing on Gaze and Decision-Making

    PubMed Central

    Klostermann, André; Vater, Christian; Kredel, Ralf; Hossner, Ernst-Joachim

    2015-01-01

    For perceptual-cognitive skill training, a variety of intervention methods have been proposed, including the so-called “color-cueing method”, which aims at superior gaze-path learning by applying visual markers. However, recent findings challenge this method, especially with regard to its actual effects on gaze behavior. Consequently, after a preparatory study on the identification of appropriate visual cues for life-size displays, a perceptual-training experiment on decision-making in beach volleyball was conducted, contrasting two cueing interventions (functional vs. dysfunctional gaze path) with a conservative control condition (anticipation-related instructions). Gaze analyses revealed learning effects for the dysfunctional group only. Regarding decision-making, all groups showed enhanced performance, with the largest improvements for the control group followed by the functional and the dysfunctional group. Hence, the results confirm cueing effects on gaze behavior, but they also question its benefit for enhancing decision-making. However, before completely denying the method’s value, optimisations should be checked regarding, for instance, cueing-pattern characteristics and gaze-related feedback. PMID:26648894

  7. Acquisition of Visual Perceptual Skills from Worked Examples: Learning to Interpret Electrocardiograms (ECGs)

    ERIC Educational Resources Information Center

    van den Berge, Kees; van Gog, Tamara; Mamede, Silvia; Schmidt, Henk G.; van Saase, Jan L. C. M.; Rikers, Remy M. J. P.

    2013-01-01

    Research has shown that for acquiring problem-solving skills, instruction consisting of studying worked examples is more effective and efficient for novice learners than instruction consisting of problem-solving. This study investigated whether worked examples would also be a useful instructional format for the acquisition of visual perceptual…

  8. Effects of Online Augmented Kinematic and Perceptual Feedback on Treatment of Speech Movements in Apraxia of Speech

    PubMed Central

    McNeil, M.R.; Katz, W.F.; Fossett, T.R.D.; Garst, D.M.; Szuminsky, N.J.; Carter, G.; Lim, K.Y.

    2010-01-01

    Apraxia of speech (AOS) is a motor speech disorder characterized by disturbed spatial and temporal parameters of movement. Research on motor learning suggests that augmented feedback may provide a beneficial effect for training movement. This study examined the effects of the presence and frequency of online augmented visual kinematic feedback (AVKF) and clinician-provided perceptual feedback on speech accuracy in 2 adults with acquired AOS. Within a single-subject multiple-baseline design, AVKF was provided using electromagnetic midsagittal articulography (EMA) in 2 feedback conditions (50 or 100%). Articulator placement was specified for speech motor targets (SMTs). Treated and baselined SMTs were in the initial or final position of single-syllable words, in varying consonant-vowel or vowel-consonant contexts. SMTs were selected based on each participant's pre-assessed erred productions. Productions were digitally recorded and online perceptual judgments of accuracy (including segment and intersegment distortions) were made. Inter- and intra-judge reliability for perceptual accuracy was high. Results measured by visual inspection and effect size revealed positive acquisition and generalization effects for both participants. Generalization occurred across vowel contexts and to untreated probes. Results of the frequency manipulation were confounded by presentation order. Maintenance of learned and generalized effects were demonstrated for 1 participant. These data provide support for the role of augmented feedback in treating speech movements that result in perceptually accurate speech production. Future investigations will explore the independent contributions of each feedback type (i.e. kinematic and perceptual) in producing efficient and effective training of SMTs in persons with AOS. PMID:20424468

  9. Effect of tDCS on task relevant and irrelevant perceptual learning of complex objects.

    PubMed

    Van Meel, Chayenne; Daniels, Nicky; de Beeck, Hans Op; Baeck, Annelies

    2016-01-01

    During perceptual learning the visual representations in the brain are altered, but the causal role of these changes has not yet been fully characterized. We used transcranial direct current stimulation (tDCS) to investigate the role of higher visual regions in lateral occipital cortex (LO) in perceptual learning with complex objects. We also investigated whether object learning depends on the relevance of the objects for the learning task. Participants were trained in two tasks: object recognition using a backward masking paradigm and an orientation judgment task. During both tasks, an object with a red line on top of it was presented on each trial. The crucial difference between the tasks was the relevance of the object: the object was relevant for the object recognition task, but not for the orientation judgment task. During training, half of the participants received anodal tDCS stimulation targeted at LO. Afterwards, participants were tested on how well they recognized the trained objects, the irrelevant objects presented during the orientation judgment task, and a set of completely new objects. Participants stimulated with tDCS during training showed larger improvements in performance than participants in the sham condition. No learning effect was found for the objects presented during the orientation judgment task. To conclude, this study suggests a causal role of LO in relevant object learning, but given the rather low spatial resolution of tDCS, more research on the specificity of this effect is needed. Further, mere exposure is not sufficient to train object recognition in our paradigm.

  10. A Behavioral Treatment for Traumatic Brain Injury-associated Visual Dysfunction Based on Adult Cortical Plasticity

    DTIC Science & Technology

    2011-10-01

    protocol for training the TBI patients. References: Bonneh, Y.S., Sagi, D., & Polat, U. (2007). Spatial and temporal crowding in amblyopia... evidence from treatment of adult amblyopia. Restor Neurol Neurosci, 26(4-5), 413-424. Polat, U. (2009). Making perceptual learning practical to improve visual functions. Vision Res... Polat, U., Ma-Naim, T., Belkin, M., & Sagi, D. (2004). Improving vision in adult amblyopia by perceptual learning

  11. Perceptual learning improves visual performance in juvenile amblyopia.

    PubMed

    Li, Roger W; Young, Karen G; Hoenig, Pia; Levi, Dennis M

    2005-09-01

    To determine whether practicing a position-discrimination task improves visual performance in children with amblyopia and to determine the mechanism(s) of improvement. Five children (age range, 7-10 years) with amblyopia practiced a positional acuity task in which they had to judge which of three pairs of lines was misaligned. Positional noise was produced by distributing the individual patches of each line segment according to a Gaussian probability function. Observers were trained at three noise levels (including 0), with each observer performing between 3000 and 4000 responses in 7 to 10 sessions. Trial-by-trial feedback was provided. Four of the five observers showed significant improvement in positional acuity. In those four observers, on average, positional acuity with no noise improved by approximately 32% and with high noise by approximately 26%. A position-averaging model was used to parse the improvement into an increase in efficiency or a decrease in equivalent input noise. Two observers showed increased efficiency (51% and 117% improvements) with no significant change in equivalent input noise across sessions. The other two observers showed both a decrease in equivalent input noise (18% and 29%) and an increase in efficiency (17% and 71%). All five observers showed substantial improvement in Snellen acuity (approximately 26%) after practice. Perceptual learning can improve visual performance in amblyopic children. The improvement can be parsed into two important factors: decreased equivalent input noise and increased efficiency. Perceptual learning techniques may add an effective new method to the armamentarium of amblyopia treatments.
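The parsing of improvement into decreased equivalent input noise versus increased efficiency described above is commonly expressed with an equivalent-noise decomposition of the observed threshold. The form below is a generic version of such a model, written here as an illustration; the paper's position-averaging model may parameterize it differently:

```latex
\sigma_{\mathrm{obs}}^{2} \;=\; \frac{\sigma_{\mathrm{eq}}^{2} + \sigma_{\mathrm{ext}}^{2}}{N_{\mathrm{eff}}}
```

Here \(\sigma_{\mathrm{obs}}\) is the measured positional threshold, \(\sigma_{\mathrm{ext}}\) the external (stimulus) noise, \(\sigma_{\mathrm{eq}}\) the equivalent input noise, and \(N_{\mathrm{eff}}\) the effective number of samples (efficiency). Under this decomposition, learning that lowers \(\sigma_{\mathrm{eq}}\) mainly improves thresholds at low external noise, whereas learning that raises \(N_{\mathrm{eff}}\) improves thresholds at all noise levels; this is consistent with the two observer profiles reported in the record.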

  12. Transfer and scaffolding of perceptual grouping occurs across organizing principles in 3- to 7-month-old infants.

    PubMed

    Quinn, Paul C; Bhatt, Ramesh S

    2009-08-01

    Previous research has demonstrated that organizational principles become functional over different time courses of development: Lightness similarity is available at 3 months of age, but form similarity is not readily in evidence until 6 months of age. We investigated whether organization would transfer across principles and whether perceptual scaffolding can occur from an already functional principle to a not-yet-operational principle. Six- to 7-month-old infants (Experiment 1) and 3- to 4-month-old infants (Experiment 2) who were familiarized with arrays of elements organized by lightness similarity displayed a subsequent visual preference for a novel organization defined by form similarity. Results with the older infants demonstrate transfer in perceptual grouping: The organization defined by one grouping principle can direct a visual preference for a novel organization defined by a different grouping principle. Findings with the younger infants suggest that learning based on an already functional organizational process enables an organizational process that is not yet functional through perceptual scaffolding.

  13. Perceptual learning in Williams syndrome: looking beyond averages.

    PubMed

    Gervan, Patricia; Gombos, Ferenc; Kovacs, Ilona

    2012-01-01

    Williams Syndrome is a genetically determined neurodevelopmental disorder characterized by an uneven cognitive profile and surprisingly large neurobehavioral differences among individuals. Previous studies have already shown different forms of memory deficiencies and learning difficulties in WS. Here we studied the capacity of WS subjects to improve their performance in a basic visual task. We employed a contour integration paradigm that addresses occipital visual function, and analyzed the initial (i.e. baseline) and after-learning performance of WS individuals. Instead of pooling the very inhomogeneous results of WS subjects together, we evaluated individual performance by expressing it in terms of the deviation from the average performance of the group of typically developing subjects of similar age. This approach helped us to reveal information about the possible origins of poor performance of WS subjects in contour integration. Although the majority of WS individuals showed both reduced baseline and reduced learning performance, individual analysis also revealed a dissociation between baseline and learning capacity in several WS subjects. In spite of impaired initial contour integration performance, some WS individuals presented learning capacity comparable to learning in the typically developing population, and vice versa, poor learning was also observed in subjects with high initial performance levels. These data indicate a dissociation between factors determining initial performance and perceptual learning.

  14. Reading faces: investigating the use of a novel face-based orthography in acquired alexia.

    PubMed

    Moore, Michelle W; Brendel, Paul C; Fiez, Julie A

    2014-02-01

    Skilled visual word recognition is thought to rely upon a particular region within the left fusiform gyrus, the visual word form area (VWFA). We investigated whether an individual (AA1) with pure alexia resulting from acquired damage to the VWFA territory could learn an alphabetic "FaceFont" orthography, in which faces rather than typical letter-like units are used to represent phonemes. FaceFont was designed to distinguish between perceptual versus phonological influences on the VWFA. AA1 was unable to learn more than five face-phoneme mappings, performing well below that of controls. AA1 succeeded, however, in learning and using a proto-syllabary comprising 15 face-syllable mappings. These results suggest that the VWFA provides a "linguistic bridge" into left hemisphere speech and language regions, irrespective of the perceptual characteristics of a written language. They also suggest that some individuals may be able to acquire a non-alphabetic writing system more readily than an alphabetic writing system. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Reading faces: Investigating the use of a novel face-based orthography in acquired alexia

    PubMed Central

    Moore, Michelle W.; Brendel, Paul C.; Fiez, Julie A.

    2014-01-01

    Skilled visual word recognition is thought to rely upon a particular region within the left fusiform gyrus, the visual word form area (VWFA). We investigated whether an individual (AA1) with pure alexia resulting from acquired damage to the VWFA territory could learn an alphabetic “FaceFont” orthography, in which faces rather than typical letter-like units are used to represent phonemes. FaceFont was designed to distinguish between perceptual versus phonological influences on the VWFA. AA1 was unable to learn more than five face-phoneme mappings, performing well below that of controls. AA1 succeeded, however, in learning and using a proto-syllabary comprising 15 face-syllable mappings. These results suggest that the VWFA provides a “linguistic bridge” into left hemisphere speech and language regions, irrespective of the perceptual characteristics of a written language. They also suggest that some individuals may be able to acquire a non-alphabetic writing system more readily than an alphabetic writing system. PMID:24463310

  16. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding

    PubMed Central

    Desantis, Andrea; Haggard, Patrick

    2016-01-01

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063

  17. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-12-16

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events.

  18. Short-term perceptual learning in visual conjunction search.

    PubMed

    Su, Yuling; Lai, Yunpeng; Huang, Wanyi; Tan, Wei; Qu, Zhe; Ding, Yulong

    2014-08-01

    Although some studies have shown that training can improve cross-dimension conjunction search, less is known about the underlying mechanism. Specifically, it remains unclear whether training of visual conjunction search can successfully bind different features of separated dimensions into a new functional unit at early stages of visual processing. In the present study, we utilized stimulus specificity and generalization to provide a new approach to investigate the mechanisms underlying perceptual learning (PL) in visual conjunction search. Five experiments consistently showed that after 40 to 50 min of training of color-shape/orientation conjunction search, the ability to search for a certain conjunction target improved significantly and the learning effects did not transfer to a new target that differed from the trained target in both color and shape/orientation features. However, the learning effects were not strictly specific. In color-shape conjunction search, although the learning effect could not transfer to a same-shape different-color target, it almost completely transferred to a same-color different-shape target. In color-orientation conjunction search, the learning effect partly transferred to a new target that shared the same color or same orientation with the trained target. Moreover, the sum of transfer effects for the same-color target and the same-orientation target in color-orientation conjunction search was algebraically equivalent to the learning effect for the trained target, showing an additive transfer effect. The different transfer patterns in color-shape and color-orientation conjunction search learning might reflect the different complexity and discriminability between feature dimensions. These results suggested a feature-based attention enhancement mechanism rather than a unitization mechanism underlying the short-term PL of color-shape/orientation conjunction search.

  19. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training.

    PubMed

    Bernstein, Lynne E; Auer, Edward T; Eberhardt, Silvio P; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.

  20. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training

    PubMed Central

    Bernstein, Lynne E.; Auer, Edward T.; Eberhardt, Silvio P.; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called “reverse hierarchy theory” of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning. PMID:23515520

  1. How Does Learning Impact Development in Infancy? The Case of Perceptual Organization

    ERIC Educational Resources Information Center

    Bhatt, Ramesh S.; Quinn, Paul C.

    2011-01-01

    Pattern perception and organization are critical functions of the visual cognition system. Many organizational processes are available early in life, such that infants as young as 3 months of age are able to readily utilize a variety of cues to organize visual patterns. However, other processes are not readily evident in young infants, and their…

  2. Learning from vision-to-touch is different than learning from touch-to-vision

    PubMed Central

    Wismeijer, Dagmar A.; Gegenfurtner, Karl R.; Drewing, Knut

    2012-01-01

    We studied whether vision can teach touch to the same extent as touch seems to teach vision. In a 2 × 2 between-participants learning study, we artificially correlated visual gloss cues with haptic compliance cues. In two “natural” tasks, we tested whether visual gloss estimations have an influence on haptic estimations of softness and vice versa. In two “novel” tasks, in which participants were either asked to haptically judge glossiness or to visually judge softness, we investigated how perceptual estimates transfer from one sense to the other. Our results showed that vision does not teach touch as efficiently as touch seems to teach vision. PMID:23181012

  3. Visual perceptual training reconfigures post-task resting-state functional connectivity with a feature-representation region.

    PubMed

    Sarabi, Mitra Taghizadeh; Aoki, Ryuta; Tsumura, Kaho; Keerativittayayut, Ruedeerat; Jimura, Koji; Nakahara, Kiyoshi

    2018-01-01

    The neural mechanisms underlying visual perceptual learning (VPL) have typically been studied by examining changes in task-related brain activation after training. However, the relationship between post-task "offline" processes and VPL remains unclear. The present study examined this question by obtaining resting-state functional magnetic resonance imaging (fMRI) scans of human brains before and after a task-fMRI session involving visual perceptual training. During the task-fMRI session, participants performed a motion coherence discrimination task in which they judged the direction of moving dots with a coherence level that varied between trials (20, 40, and 80%). We found that stimulus-induced activation increased with motion coherence in the middle temporal cortex (MT+), a feature-specific region representing visual motion. On the other hand, stimulus-induced activation decreased with motion coherence in the dorsal anterior cingulate cortex (dACC) and bilateral insula, regions involved in decision making under perceptual ambiguity. Moreover, by comparing pre-task and post-task rest periods, we revealed that resting-state functional connectivity (rs-FC) with the MT+ was significantly increased after training in widespread cortical regions including the bilateral sensorimotor and temporal cortices. In contrast, rs-FC with the MT+ was significantly decreased in subcortical regions including the thalamus and putamen. Importantly, the training-induced change in rs-FC was observed only with the MT+, but not with the dACC or insula. Thus, our findings suggest that perceptual training induces plastic changes in offline functional connectivity specifically in brain regions representing the trained visual feature, emphasising the distinct roles of feature-representation regions and decision-related regions in VPL.

  4. Differentiating aversive conditioning in bistable perception: Avoidance of a percept vs. salience of a stimulus.

    PubMed

    Wilbertz, Gregor; Sterzer, Philipp

    2018-05-01

    Alternating conscious visual perception of bistable stimuli is influenced by several factors. In order to understand the effect of negative valence, we tested the effect of two types of aversive conditioning on dominance durations in binocular rivalry. Participants received either aversive classical conditioning of the stimuli shown alone between rivalry blocks, or aversive percept conditioning of one of the two possible perceptual choices during rivalry. Both groups showed successful aversive conditioning according to skin conductance responses and affective valence ratings. However, while classical conditioning led to an immediate but short-lived increase in dominance durations of the conditioned stimulus, percept conditioning yielded no significant immediate effect but tended to decrease durations of the conditioned percept during extinction. These results show dissociable effects of value learning on perceptual inference in situations of perceptual conflict, depending on whether learning relates to the decision between conflicting perceptual choices or the sensory stimuli per se. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. NMF-Based Image Quality Assessment Using Extreme Learning Machine.

    PubMed

    Wang, Shuigen; Deng, Chenwei; Lin, Weisi; Huang, Guang-Bin; Zhao, Baojun

    2017-01-01

    Numerous state-of-the-art perceptual image quality assessment (IQA) algorithms share a common two-stage process: distortion description followed by distortion effects pooling. In the first stage, the distortion descriptors or measurements are expected to be effective representatives of human visual variations, while the second stage should capture the relationship between the quality descriptors and perceived visual quality. However, most existing quality descriptors (e.g., luminance, contrast, and gradient) do not seem to be consistent with human perception, and the effects pooling is often done in ad hoc ways. In this paper, we propose a novel full-reference IQA metric. It applies non-negative matrix factorization (NMF) to measure image degradations by making use of the parts-based representation of NMF. In addition, a machine learning technique, the extreme learning machine (ELM), is employed to address the limitations of the existing pooling techniques. Compared with neural networks and support vector regression, ELM can achieve higher learning accuracy with faster learning speed. Extensive experimental results demonstrate that the proposed metric has better performance and lower computational complexity in comparison with the relevant state-of-the-art approaches.
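The two-stage pipeline described above can be sketched with scikit-learn's NMF for the parts-based descriptors and a hand-rolled ELM, i.e. a fixed random hidden layer with a closed-form least-squares readout, for the pooling stage. The data, shapes, and scores below are synthetic placeholders, not the authors' implementation:

```python
import numpy as np
from numpy.linalg import pinv
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Stage 1: parts-based distortion descriptors via NMF on non-negative data
# (here: 100 synthetic image patches of 64 pixels each).
patches = rng.random((100, 64))
nmf = NMF(n_components=8, init="random", random_state=1, max_iter=500)
parts_act = nmf.fit_transform(patches)     # per-patch activation of 8 "parts"

# Stage 2: ELM pooling -- fixed random hidden layer, closed-form readout.
scores = parts_act @ rng.random(8)         # synthetic "quality scores"
w_in = rng.standard_normal((8, 32))        # random input weights (never trained)
hidden = np.tanh(parts_act @ w_in)         # hidden-layer activations
beta = pinv(hidden) @ scores               # least-squares readout: the only fit
mse = float(np.mean((hidden @ beta - scores) ** 2))
```

The appeal of the ELM stage is that training reduces to one pseudo-inverse, which is where its speed advantage over iteratively trained networks comes from.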

  6. Visual prediction and perceptual expertise

    PubMed Central

    Cheung, Olivia S.; Bar, Moshe

    2012-01-01

    Making accurate predictions about what may happen in the environment requires analogies between perceptual input and associations in memory. These elements of predictions are based on cortical representations, but little is known about how these processes can be enhanced by experience and training. On the other hand, studies on perceptual expertise have revealed that the acquisition of expertise leads to strengthened associative processing among features or objects, suggesting that predictions and expertise may be tightly connected. Here we review the behavioral and neural findings regarding the mechanisms involving prediction and expert processing, and highlight important possible overlaps between them. Future investigation should examine the relations among perception, memory and prediction skills as a function of expertise. The knowledge gained by this line of research will have implications for visual cognition research, and will advance our understanding of how the human brain can improve its ability to predict by learning from experience. PMID:22123523

  7. A dichoptic custom-made action video game as a treatment for adult amblyopia.

    PubMed

    Vedamurthy, Indu; Nahum, Mor; Huang, Samuel J; Zheng, Frank; Bayliss, Jessica; Bavelier, Daphne; Levi, Dennis M

    2015-09-01

    Previous studies have employed different experimental approaches to enhance visual function in adults with amblyopia including perceptual learning, videogame play, and dichoptic training. Here, we evaluated the efficacy of a novel dichoptic action videogame combining all three approaches. This experimental intervention was compared to a conventional, yet unstudied method of supervised occlusion while watching movies. Adults with unilateral amblyopia were assigned to either play the dichoptic action game (n=23; 'game' group), or to watch movies monocularly while the fellow eye was patched (n=15; 'movies' group) for a total of 40 hours. Following training, visual acuity (VA) improved on average by ≈0.14 logMAR (≈28%) in the game group, with improvements noted in both anisometropic and strabismic patients. This improvement is similar to that obtained following perceptual learning, video game play or dichoptic training. Surprisingly, patients with anisometropic amblyopia in the movies group showed similar improvement, revealing a greater impact of supervised occlusion in adults than typically thought. Stereoacuity, reading speed, and contrast sensitivity improved more for game group participants compared with movies group participants. Most improvements were largely retained following a 2-month no-contact period. This novel video game, which combines action gaming, perceptual learning and dichoptic presentation, results in VA improvements equivalent to those previously documented with each of these techniques alone. Our game intervention led to greater improvement than control training in a variety of visual functions, thus suggesting that this approach has promise for the treatment of adult amblyopia. Copyright © 2015 Elsevier Ltd. All rights reserved.
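The quoted percentage follows from the definition of logMAR as the base-10 logarithm of the minimum angle of resolution (MAR): a 0.14 logMAR gain shrinks the MAR to 10^-0.14 ≈ 72% of its pre-training value, i.e. an improvement of roughly 27-28%. The arithmetic:

```python
# logMAR improvement -> fractional reduction in minimum angle of resolution (MAR)
delta_logmar = 0.14
mar_ratio = 10 ** (-delta_logmar)   # post-training MAR / pre-training MAR, ~0.72
improvement = 1 - mar_ratio         # ~0.276, i.e. roughly 27-28%
```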

  8. A dichoptic custom-made action video game as a treatment for adult amblyopia

    PubMed Central

    Vedamurthy, Indu; Nahum, Mor; Huang, Samuel J.; Zheng, Frank; Bayliss, Jessica; Bavelier, Daphne; Levi, Dennis M.

    2015-01-01

    Previous studies have employed different experimental approaches to enhance visual function in adults with amblyopia including perceptual learning, videogame play, and dichoptic training. Here, we evaluated the efficacy of a novel dichoptic action videogame combining all three approaches. This experimental intervention was compared to a conventional, yet unstudied method of supervised occlusion while watching movies. Adults with unilateral amblyopia were assigned either to play the dichoptic action game (n = 23; ‘game’ group), or to watch movies monocularly while the fellow eye was patched (n = 15; ‘movies’ group) for a total of 40 h. Following training, visual acuity (VA) improved on average by ≈0.14 logMAR (≈27%) in the game group, with improvements noted in both anisometropic and strabismic patients. This improvement is similar to that described after perceptual learning, video game play or dichoptic training. Surprisingly, patients with anisometropic amblyopia in the movies group showed similar improvement, revealing a greater impact of supervised occlusion in adults than typically thought. Stereoacuity, reading speed, and contrast sensitivity improved more for game group participants compared with movies group participants. Most improvements were largely retained following a 2-month no-contact period. This novel video game, which combines action gaming, perceptual learning and dichoptic presentation, results in VA improvements equivalent to those previously documented with each of these techniques alone. Interestingly, however, our game intervention led to greater improvement than control training in a variety of visual functions, thus suggesting that this approach has promise for the treatment of adult amblyopia. PMID:25917239

  9. Learning to perceive differences in solid shape through vision and touch.

    PubMed

    Norman, J Farley; Clayton, Anna Marie; Norman, Hideko F; Crabtree, Charles E

    2008-01-01

    A single experiment was designed to investigate perceptual learning and the discrimination of 3-D object shape. Ninety-six observers were presented with naturally shaped solid objects either visually, haptically, or across the modalities of vision and touch. The observers' task was to judge whether the two sequentially presented objects on any given trial possessed the same or different 3-D shapes. The results of the experiment revealed that significant perceptual learning occurred in all modality conditions, both unimodal and cross-modal. The amount of the observers' perceptual learning, as indexed by increases in hit rate and d', was similar for all of the modality conditions. The observers' hit rates were highest for the unimodal conditions and lowest in the cross-modal conditions. Lengthening the inter-stimulus interval from 3 to 15 s led to increases in hit rates and decreases in response bias. The results also revealed the existence of an asymmetry between two otherwise equivalent cross-modal conditions: in particular, the observers' perceptual sensitivity was higher for the vision-haptic condition and lower for the haptic-vision condition. In general, the results indicate that effective cross-modal shape comparisons can be made between the modalities of vision and active touch, but that complete information transfer does not occur.
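The hit-rate and d′ measures used here come from signal detection theory: d′ is the difference between the z-transformed hit and false-alarm rates. A minimal computation with hypothetical trial counts (the 1/(2N) correction is one common convention, not taken from the paper):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a 1/(2N) correction
    so that rates of exactly 0 or 1 remain invertible."""
    z = NormalDist().inv_cdf
    n_sig = hits + misses
    n_noise = false_alarms + correct_rejections
    hr = min(max(hits / n_sig, 1 / (2 * n_sig)), 1 - 1 / (2 * n_sig))
    far = min(max(false_alarms / n_noise, 1 / (2 * n_noise)), 1 - 1 / (2 * n_noise))
    return z(hr) - z(far)

# Hypothetical same/different counts: 80% hits, 30% false alarms.
dp = d_prime(hits=40, misses=10, false_alarms=15, correct_rejections=35)
```

Unlike raw hit rate, d′ separates perceptual sensitivity from response bias, which is why the study reports both.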

  10. Taking Attention Away from the Auditory Modality: Context-dependent Effects on Early Sensory Encoding of Speech.

    PubMed

    Xie, Zilong; Reetzke, Rachel; Chandrasekaran, Bharath

    2018-05-24

    Increasing visual perceptual load can reduce pre-attentive auditory cortical activity to sounds, a reflection of the limited and shared attentional resources for sensory processing across modalities. Here, we demonstrate that modulating visual perceptual load can impact the early sensory encoding of speech sounds, and that the impact of visual load is highly dependent on the predictability of the incoming speech stream. Participants (n = 20, 9 females) performed a visual search task of high (target similar to distractors) and low (target dissimilar to distractors) perceptual load, while early auditory electrophysiological responses were recorded to native speech sounds. Speech sounds were presented either in a 'repetitive context', or a less predictable 'variable context'. Independent of auditory stimulus context, pre-attentive auditory cortical activity was reduced during high visual load, relative to low visual load. We applied a data-driven machine learning approach to decode speech sounds from the early auditory electrophysiological responses. Decoding performance was found to be poorer under conditions of high (relative to low) visual load, when the incoming acoustic stream was predictable. When the auditory stimulus context was less predictable, decoding performance was substantially greater for the high (relative to low) visual load conditions. Our results provide support for shared attentional resources between visual and auditory modalities that substantially influence the early sensory encoding of speech signals in a context-dependent manner. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
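Decoding speech sounds from multichannel neural responses, as described above, usually means cross-validated classification. A sketch with synthetic data standing in for the electrophysiological responses (the classifier choice here is illustrative; the study's exact decoder is not specified in the abstract):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Synthetic "early auditory responses": 120 trials x 32 channel features,
# with the two speech-sound classes separated along one random direction.
n_trials, n_features = 120, 32
labels = np.repeat([0, 1], n_trials // 2)
direction = rng.standard_normal(n_features)
responses = rng.standard_normal((n_trials, n_features)) + np.outer(labels, direction)

decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, responses, labels, cv=5).mean()
```

Condition effects like those reported (high vs. low visual load) would then show up as differences in this cross-validated accuracy.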

  11. Perceptual learning during action video game playing.

    PubMed

    Green, C Shawn; Li, Renjie; Bavelier, Daphne

    2010-04-01

    Action video games have been shown to enhance behavioral performance on a wide variety of perceptual tasks, from those that require effective allocation of attentional resources across the visual scene, to those that demand the successful identification of fleetingly presented stimuli. Importantly, these effects have not only been shown in expert action video game players, but a causative link has been established between action video game play and enhanced processing through training studies. Although an account based solely on attention fails to capture the variety of enhancements observed after action game playing, a number of models of perceptual learning are consistent with the observed results, with behavioral modeling favoring the hypothesis that avid video game players are better able to form templates for, or extract the relevant statistics of, the task at hand. This may suggest that the neural site of learning is in areas where information is integrated and actions are selected; yet changes in low-level sensory areas cannot be ruled out. Copyright © 2009 Cognitive Science Society, Inc.

  12. Reading Habits, Perceptual Learning, and Recognition of Printed Words

    ERIC Educational Resources Information Center

    Nazir, Tatjana A.; Ben-Boutayab, Nadia; Decoppet, Nathalie; Deutsch, Avital; Frost, Ram

    2004-01-01

    The present work aims at demonstrating that visual training associated with the act of reading modifies the way we perceive printed words. As reading does not train all parts of the retina in the same way but favors regions on the side in the direction of scanning, visual word recognition should be better at retinal locations that are frequently…

  13. STDP in lateral connections creates category-based perceptual cycles for invariance learning with multiple stimuli.

    PubMed

    Evans, Benjamin D; Stringer, Simon M

    2015-04-01

    Learning to recognise objects and faces is an important and challenging problem tackled by the primate ventral visual system. One major difficulty lies in recognising an object despite profound differences in the retinal images it projects, due to changes in view, scale, position and other identity-preserving transformations. Several models of the ventral visual system have been successful in coping with these issues, but have typically been privileged by exposure to only one object at a time. In natural scenes, however, the challenges of object recognition are typically further compounded by the presence of several objects which should be perceived as distinct entities. In the present work, we explore one possible mechanism by which the visual system may overcome these two difficulties simultaneously, through segmenting unseen (artificial) stimuli using information about their category encoded in plastic lateral connections. We demonstrate that these experience-guided lateral interactions robustly organise input representations into perceptual cycles, allowing feed-forward connections trained with spike-timing-dependent plasticity to form independent, translation-invariant output representations. We present these simulations as a functional explanation for the role of plasticity in the lateral connectivity of visual cortex.
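The spike-timing-dependent plasticity invoked above is commonly modelled with an exponential pair-based window: pre-before-post spike pairs potentiate a synapse, post-before-pre pairs depress it. A minimal sketch (the amplitudes and time constant are generic textbook values, not the paper's parameters):

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).
    Pre-before-post (dt >= 0) potentiates; post-before-pre depresses."""
    if dt_ms >= 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

ltp = stdp_dw(+10.0)   # potentiation for a causal pairing
ltd = stdp_dw(-10.0)   # depression for an anti-causal pairing
```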

  14. View-Invariant Visuomotor Processing in Computational Mirror Neuron System for Humanoid

    PubMed Central

    Dawood, Farhan; Loo, Chu Kiong

    2016-01-01

    Mirror neurons are visuo-motor neurons found in primates and thought to be significant for imitation learning. The proposition that mirror neurons result from associative learning while the neonate observes his own actions has received noteworthy empirical support. Self-exploration is regarded as a procedure by which infants become perceptually observant to their own body and engage in a perceptual communication with themselves. We assume that a crude sense of self is the prerequisite for social interaction. However, the contribution of mirror neurons to encoding the perspective from which the motor acts of others are seen has not been addressed in relation to humanoid robots. In this paper we present a computational model for the development of a mirror neuron system for a humanoid, based on the hypothesis that infants acquire an MNS by sensorimotor associative learning through self-exploration capable of sustaining early imitation skills. The purpose of our proposed model is to take into account the view-dependency of neurons as a probable outcome of the associative connectivity between motor and visual information. In our experiment, a humanoid robot stands in front of a mirror (represented through a self-image using a camera) in order to obtain the associative relationship between its own motor-generated actions and its own visual body-image. In the learning process the network first forms a mapping from each motor representation onto a visual representation from the self-exploratory perspective. Afterwards, the representation of the motor commands is learned to be associated with all possible visual perspectives. The complete architecture was evaluated by simulation experiments performed on the DARwIn-OP humanoid robot. PMID:26998923

  15. View-Invariant Visuomotor Processing in Computational Mirror Neuron System for Humanoid.

    PubMed

    Dawood, Farhan; Loo, Chu Kiong

    2016-01-01

    Mirror neurons are visuo-motor neurons found in primates and thought to be significant for imitation learning. The proposition that mirror neurons result from associative learning while the neonate observes his own actions has received noteworthy empirical support. Self-exploration is regarded as a procedure by which infants become perceptually observant to their own body and engage in a perceptual communication with themselves. We assume that a crude sense of self is the prerequisite for social interaction. However, the contribution of mirror neurons to encoding the perspective from which the motor acts of others are seen has not been addressed in relation to humanoid robots. In this paper we present a computational model for the development of a mirror neuron system for a humanoid, based on the hypothesis that infants acquire an MNS by sensorimotor associative learning through self-exploration capable of sustaining early imitation skills. The purpose of our proposed model is to take into account the view-dependency of neurons as a probable outcome of the associative connectivity between motor and visual information. In our experiment, a humanoid robot stands in front of a mirror (represented through a self-image using a camera) in order to obtain the associative relationship between its own motor-generated actions and its own visual body-image. In the learning process the network first forms a mapping from each motor representation onto a visual representation from the self-exploratory perspective. Afterwards, the representation of the motor commands is learned to be associated with all possible visual perspectives. The complete architecture was evaluated by simulation experiments performed on the DARwIn-OP humanoid robot.

  16. Supramodal processing optimizes visual perceptual learning and plasticity.

    PubMed

    Zilber, Nicolas; Ciuciu, Philippe; Gramfort, Alexandre; Azizi, Leila; van Wassenhove, Virginie

    2014-06-01

    Multisensory interactions are ubiquitous in cortex and it has been suggested that sensory cortices may be supramodal i.e. capable of functional selectivity irrespective of the sensory modality of inputs (Pascual-Leone and Hamilton, 2001; Renier et al., 2013; Ricciardi and Pietrini, 2011; Voss and Zatorre, 2012). Here, we asked whether learning to discriminate visual coherence could benefit from supramodal processing. To this end, three groups of participants were briefly trained to discriminate which of a red or green intermixed population of random-dot-kinematograms (RDKs) was most coherent in a visual display while being recorded with magnetoencephalography (MEG). During training, participants heard no sound (V), congruent acoustic textures (AV) or auditory noise (AVn); importantly, congruent acoustic textures shared the temporal statistics - i.e. coherence - of visual RDKs. After training, the AV group significantly outperformed participants trained in V and AVn although they were not aware of their progress. In pre- and post-training blocks, all participants were tested without sound and with the same set of RDKs. When contrasting MEG data collected in these experimental blocks, selective differences were observed in the dynamic pattern and the cortical loci responsive to visual RDKs. First and common to all three groups, vlPFC showed selectivity to the learned coherence levels whereas selectivity in visual motion area hMT+ was only seen for the AV group. Second and solely for the AV group, activity in multisensory cortices (mSTS, pSTS) correlated with post-training performances; additionally, the latencies of these effects suggested feedback from vlPFC to hMT+ possibly mediated by temporal cortices in AV and AVn groups. 
Altogether, we interpret our results in the context of the Reverse Hierarchy Theory of learning (Ahissar and Hochstein, 2004) in which supramodal processing optimizes visual perceptual learning by capitalizing on sensory-invariant representations - here, global coherence levels across sensory modalities. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Visual perceptual abilities of Chinese-speaking and English-speaking children.

    PubMed

    Lai, Mun Yee; Leung, Frederick Koon Shing

    2012-04-01

    This paper reports an investigation of Chinese-speaking and English-speaking children's general visual perceptual abilities. The Developmental Test of Visual Perception was administered to 41 native Chinese-speaking children of mean age 5 yr. 4 mo. in Hong Kong and 35 English-speaking children of mean age 5 yr. 2 mo. in Melbourne. Of interest were the two interrelated components of visual perceptual abilities, namely, motor-reduced visual perceptual and visual-motor integration perceptual abilities, which require either verbal or motoric responses in completing visual tasks. Chinese-speaking children significantly outperformed the English-speaking children on general visual perceptual abilities. When comparing the results of each of the two different components, the Chinese-speaking students' performance on visual-motor integration was far better than that of their counterparts (ES = 2.70), while the two groups of students performed similarly on motor-reduced visual perceptual abilities. Cultural factors such as written language format may be contributing to the enhanced performance of Chinese-speaking children's visual-motor integration abilities, but there may be validity questions in the Chinese version.

  18. Learning Building Layouts with Non-geometric Visual Information: The Effects of Visual Impairment and Age

    PubMed Central

    Kalia, Amy A.; Legge, Gordon E.; Giudice, Nicholas A.

    2009-01-01

    Previous studies suggest that humans rely on geometric visual information (hallway structure) rather than non-geometric visual information (e.g., doors, signs and lighting) for acquiring cognitive maps of novel indoor layouts. This study asked whether visual impairment and age affect reliance on non-geometric visual information for layout learning. We tested three groups of participants—younger (< 50 years) normally sighted, older (50–70 years) normally sighted, and low vision (people with heterogeneous forms of visual impairment ranging in age from 18–67). Participants learned target locations in building layouts using four presentation modes: a desktop virtual environment (VE) displaying only geometric cues (Sparse VE), a VE displaying both geometric and non-geometric cues (Photorealistic VE), a Map, and a Real building. Layout knowledge was assessed by map drawing and by asking participants to walk to specified targets in the real space. Results indicate that low-vision and older normally-sighted participants relied on additional non-geometric information to accurately learn layouts. In conclusion, visual impairment and age may result in reduced perceptual and/or memory processing that makes it difficult to learn layouts without non-geometric visual information. PMID:19189732

  19. Short-term plasticity in auditory cognition.

    PubMed

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  20. Learning during processing: Word learning doesn’t wait for word recognition to finish

    PubMed Central

    Apfelbaum, Keith S.; McMurray, Bob

    2017-01-01

    Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete. PMID:27471082

  1. Real-Time Strategy Video Game Experience and Visual Perceptual Learning.

    PubMed

    Kim, Yong-Hwan; Kang, Dong-Wha; Kim, Dongho; Kim, Hye-Jin; Sasaki, Yuka; Watanabe, Takeo

    2015-07-22

    Visual perceptual learning (VPL) is defined as long-term improvement in performance on a visual-perception task after visual experiences or training. Early studies have found that VPL is highly specific for the trained feature and location, suggesting that VPL is associated with changes in the early visual cortex. However, the generality of visual skills enhancement attributable to action video-game experience suggests that VPL can result from improvement in higher cognitive skills. If so, experience in real-time strategy (RTS) video-game play, which may heavily involve cognitive skills, may also facilitate VPL. To test this hypothesis, we compared VPL between RTS video-game players (VGPs) and non-VGPs (NVGPs) and elucidated the underlying structural and functional neural mechanisms. Healthy young human subjects underwent six training sessions on a texture discrimination task. Diffusion-tensor and functional magnetic resonance imaging were performed before and after training. VGPs performed better than NVGPs in the early phase of training. White-matter connectivity between the right external capsule and visual cortex and neuronal activity in the right inferior frontal gyrus (IFG) and anterior cingulate cortex (ACC) were greater in VGPs than NVGPs and were significantly correlated with RTS video-game experience. In both VGPs and NVGPs, there was task-related neuronal activity in the right IFG, ACC, and striatum, which was strengthened after training. These results indicate that RTS video-game experience, associated with changes in higher-order cognitive functions and connectivity between visual and cognitive areas, facilitates VPL in early phases of training. The results support the hypothesis that VPL involves not only visual areas but also higher-order cognitive areas.
    Significance statement: Although early studies found that visual perceptual learning (VPL) is associated with involvement of the visual cortex, generality of visual skills enhancement by action video-game experience suggests that higher-order cognition may be involved in VPL. If so, real-time strategy (RTS) video-game experience may facilitate VPL as a result of heavy involvement of cognitive skills. Here, we compared VPL between RTS video-game players (VGPs) and non-VGPs (NVGPs) and investigated the underlying neural mechanisms. VGPs showed better performance in the early phase of training on the texture discrimination task and greater level of neuronal activity in cognitive areas and structural connectivity between visual and cognitive areas than NVGPs. These results support the hypothesis that VPL can occur beyond the visual cortex. Copyright © 2015 the authors.

  2. Vision improvement in pilots with presbyopia following perceptual learning.

    PubMed

    Sterkin, Anna; Levy, Yuval; Pokroy, Russell; Lev, Maria; Levian, Liora; Doron, Ravid; Yehezkel, Oren; Fried, Moshe; Frenkel-Nir, Yael; Gordon, Barak; Polat, Uri

    2017-11-24

    Israeli Air Force (IAF) pilots continue flying combat missions after the symptoms of natural near-vision deterioration, termed presbyopia, begin to be noticeable. Because modern pilots rely on the displays of the aircraft control and performance instruments, near visual acuity (VA) is essential in the cockpit. We aimed to apply a method previously shown to improve visual performance in presbyopes, and to test whether presbyopic IAF pilots can overcome the limitation imposed by presbyopia. Participants were selected by the IAF aeromedical unit as having at least initial presbyopia and trained using a structured personalized perceptual learning method (GlassesOff application), based on detecting briefly presented low-contrast Gabor stimuli, under the conditions of spatial and temporal constraints, from a distance of 40 cm. Our results show that despite their initial visual advantage over age-matched peers, training resulted in robust improvements in various basic visual functions, including static and temporal VA, stereoacuity, spatial crowding, contrast sensitivity and contrast discrimination. Moreover, improvements generalized to higher-level tasks, such as sentence reading and aerial photography interpretation (specifically designed to reflect IAF pilots' expertise in analyzing noisy low-contrast input). In concert with earlier suggestions, gains in visual processing speed plausibly account, at least in part, for the observed training-induced improvements. Copyright © 2017 Elsevier Ltd. All rights reserved.
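The training stimuli described above are Gabor patches: an oriented sinusoidal grating under a Gaussian envelope. A minimal generator (the size, wavelength, and contrast values are illustrative, not those of the GlassesOff protocol):

```python
import numpy as np

def gabor_patch(size=64, wavelength=8.0, sigma=10.0, theta=0.0, contrast=0.2):
    """Low-contrast Gabor: an oriented sinusoidal grating under a Gaussian
    envelope, in the 0..1 luminance range around a 0.5 mid-grey background."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    x_t = x * np.cos(theta) + y * np.sin(theta)   # rotate the grating axis
    grating = np.cos(2 * np.pi * x_t / wavelength)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return 0.5 + 0.5 * contrast * grating * envelope

patch = gabor_patch()
```

Lowering `contrast` and shortening the presentation time are the usual knobs for making such a detection task harder during training.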

  3. Learning and disrupting invariance in visual recognition with a temporal association rule

    PubMed Central

    Isik, Leyla; Leibo, Joel Z.; Poggio, Tomaso

    2012-01-01

    Learning by temporal association rules such as Foldiak's trace rule is an attractive hypothesis that explains the development of invariance in visual recognition. Consistent with these rules, several recent experiments have shown that invariance can be broken at both the psychophysical and single cell levels. We show (1) that temporal association learning provides appropriate invariance in models of object recognition inspired by the visual cortex, (2) that we can replicate the “invariance disruption” experiments using these models with a temporal association learning rule to develop and maintain invariance, and (3) that despite dramatic single cell effects, a population of cells is very robust to these disruptions. We argue that these models account for the stability of perceptual invariance despite the underlying plasticity of the system, the variability of the visual world and expected noise in the biological mechanisms. PMID:22754523
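Foldiak's trace rule, named above, augments Hebbian learning with a leaky temporal average of the postsynaptic response, so that temporally adjacent views of an object strengthen the same weights. A minimal sketch (dimensions, rates, and the random "views" are illustrative, not the authors' model):

```python
import numpy as np

def trace_rule_training(views, w, eta=0.6, lr=0.1):
    """Foldiak-style trace rule: w grows with x * y_trace, where y_trace is a
    leaky temporal average of the postsynaptic response y = w . x."""
    y_trace = 0.0
    for x in views:                       # consecutive views of one object
        y = float(w @ x)
        y_trace = (1 - eta) * y_trace + eta * y
        w = w + lr * y_trace * x          # Hebbian step gated by the trace
    return w

rng = np.random.default_rng(3)
views = [rng.random(16) for _ in range(5)]  # temporally adjacent transforms
w0 = rng.random(16) * 0.1
w1 = trace_rule_training(views, w0.copy())
```

Because the trace carries response history across frames, features seen in close temporal succession become bound to one output unit, which is the invariance-learning mechanism the abstract refers to.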

  4. Effect of Implicit Perceptual-Motor Training on Decision-Making Skills and Underpinning Gaze Behavior in Combat Athletes.

    PubMed

    Milazzo, Nicolas; Farrow, Damian; Fournier, Jean F

    2016-08-01

    This study investigated the effect of a 12-session, implicit perceptual-motor training program on decision-making skills and visual search behavior of highly skilled junior female karate fighters (M age = 15.7 years, SD = 1.2). Eighteen participants were required to make (physical or verbal) reaction decisions to various attacks within different fighting scenarios. Fighters' performance and eye movements were assessed before and after the intervention, and during acquisition through the use of video-based and on-mat decision-making tests. The video-based test revealed that following training, only the implicit perceptual-motor group (n = 6) improved their decision-making accuracy significantly compared to a matched motor training (placebo, n = 6) group and a control group (n = 6). Further, the implicit training group significantly changed their visual search behavior by focusing on fewer locations for longer durations. In addition, the session-by-session analysis showed no significant improvement in decision accuracy between training session 1 and all the other sessions, except the last one. Coaches should devote more practice time to implicit learning approaches during perceptual-motor training programs to achieve significant decision-making improvements and a more efficient visual search strategy with elite athletes. © The Author(s) 2016.

  5. Optimization of visual training for full recovery from severe amblyopia in adults

    PubMed Central

    Eaton, Nicolette C.; Sheehan, Hanna Marie

    2016-01-01

    The severe amblyopia induced by chronic monocular deprivation is highly resistant to reversal in adulthood. Here we use a rodent model to show that recovery from deprivation amblyopia can be achieved in adults by a two-step sequence, involving enhancement of synaptic plasticity in the visual cortex by dark exposure followed immediately by visual training. The perceptual learning induced by visual training contributes to the recovery of vision and can be optimized to drive full recovery of visual acuity in severely amblyopic adults. PMID:26787781

  6. Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks

    PubMed Central

    Brosch, Tobias; Neumann, Heiko; Roelfsema, Pieter R.

    2015-01-01

The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate the rule's efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies. PMID:26496502
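
    The reward-gated plasticity described above can be illustrated with a generic three-factor update (presynaptic activity × postsynaptic activity × reward-prediction error). This is a simplified sketch of the general idea, not the rule derived in the paper; the network sizes, learning rates, and toy task below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generic reward-modulated Hebbian learning: weight changes are the product
# of presynaptic activity, postsynaptic activity, and a global
# reward-prediction error broadcast to all synapses.
n_pre, n_post = 8, 4
W = rng.normal(scale=0.1, size=(n_post, n_pre))
value = 0.0               # running estimate of expected reward
alpha, beta = 0.1, 0.05   # value and weight learning rates (assumed)

def reward_modulated_step(W, value, x, reward):
    y = np.tanh(W @ x)                  # postsynaptic activity
    delta = reward - value              # reward-prediction error
    value += alpha * delta              # update expected reward
    W += beta * delta * np.outer(y, x)  # three-factor Hebbian update
    return W, value

for _ in range(50):
    x = rng.random(n_pre)
    r = 1.0 if x.sum() > n_pre / 2 else 0.0   # toy task, for illustration only
    W, value = reward_modulated_step(W, value, x, r)
```

    The key property, shared with the paper's rule, is that a single scalar reward signal can shape distributed recurrent connections without per-synapse supervision.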

  7. Perceptual memory drives learning of retinotopic biases for bistable stimuli.

    PubMed

    Murphy, Aidan P; Leopold, David A; Welchman, Andrew E

    2014-01-01

The visual system exploits past experience at multiple timescales to resolve perceptual ambiguity in the retinal image. For example, perception of a bistable stimulus can be biased toward one interpretation over another when preceded by a brief presentation of a disambiguated version of the stimulus (positive priming) or through intermittent presentations of the ambiguous stimulus (stabilization). Similarly, prior presentations of unambiguous stimuli can be used to explicitly "train" a long-lasting association between a percept and a retinal location (perceptual association). These phenomena have typically been regarded as independent processes, with short-term biases attributed to perceptual memory and longer-term biases described as associative learning. Here we tested for interactions between these two forms of experience-dependent perceptual bias and demonstrate that short-term processes strongly influence long-term outcomes. We first demonstrate that the establishment of long-term perceptual contingencies does not require explicit training by unambiguous stimuli, but can arise spontaneously during the periodic presentation of brief, ambiguous stimuli. Using rotating Necker cube stimuli, we observed enduring, retinotopically specific perceptual biases that were expressed from the outset and remained stable for up to 40 min, consistent with the known phenomenon of perceptual stabilization. Further, bias was undiminished after a break period of 5 min, but was readily reset by interposed periods of continuous, as opposed to periodic, ambiguous presentation. Taken together, the results demonstrate that perceptual biases can arise naturally and may principally reflect the brain's tendency to favor recent perceptual interpretation at a given retinal location. Further, they suggest that an association between retinal location and perceptual state, rather than a physical stimulus, is sufficient to generate long-term biases in perceptual organization.

  8. Mesolimbic confidence signals guide perceptual learning in the absence of external feedback

    PubMed Central

    Guggenmos, Matthias; Wilbertz, Gregor; Hebart, Martin N; Sterzer, Philipp

    2016-01-01

    It is well established that learning can occur without external feedback, yet normative reinforcement learning theories have difficulties explaining such instances of learning. Here, we propose that human observers are capable of generating their own feedback signals by monitoring internal decision variables. We investigated this hypothesis in a visual perceptual learning task using fMRI and confidence reports as a measure for this monitoring process. Employing a novel computational model in which learning is guided by confidence-based reinforcement signals, we found that mesolimbic brain areas encoded both anticipation and prediction error of confidence—in remarkable similarity to previous findings for external reward-based feedback. We demonstrate that the model accounts for choice and confidence reports and show that the mesolimbic confidence prediction error modulation derived through the model predicts individual learning success. These results provide a mechanistic neurobiological explanation for learning without external feedback by augmenting reinforcement models with confidence-based feedback. DOI: http://dx.doi.org/10.7554/eLife.13388.001 PMID:27021283
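
    The confidence-as-internal-reward mechanism can be sketched with a delta rule in which confidence substitutes for external reward. Below, confidence is taken as the absolute value of the decision variable, an assumed simplification of the paper's computational model; the parameters and noise level are likewise assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

w = 0.1                        # readout weight for one stimulus feature
v = 0.5                        # expected confidence
alpha_v, alpha_w = 0.1, 0.05   # learning rates (assumed values)

for _ in range(200):
    s = rng.choice([-1.0, 1.0])            # stimulus category
    evidence = w * s + 0.3 * rng.normal()  # noisy internal decision variable
    choice = np.sign(evidence)
    confidence = abs(evidence)             # confidence = |decision variable|
    delta = confidence - v                 # confidence prediction error
    v += alpha_v * delta                   # update expected confidence
    # Reinforce the sensory-to-decision mapping in proportion to the
    # confidence prediction error -- no external feedback is ever given
    w += alpha_w * delta * choice * s
```

    Here `delta` plays the role the paper assigns to the mesolimbic confidence prediction error: trials that turn out more confident than expected strengthen the choice just made, which is enough to drive learning without feedback.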

  9. [Visual perception abilities in children with reading disabilities].

    PubMed

    Werpup-Stüwe, Lina; Petermann, Franz

    2015-05-01

Visual perceptual abilities are increasingly being neglected in research concerning reading disabilities. This study measures the visual perceptual abilities of children with reading disabilities. The visual perceptual abilities of 35 children with specific reading disorder and 30 controls were compared using the German version of the Developmental Test of Visual Perception – Adolescent and Adult (DTVP-A). 11% of the children with specific reading disorder show clinically relevant performance on the DTVP-A. The perceptual abilities of both groups differ significantly. No significant group differences exist after controlling for general IQ or the Perceptual Reasoning Index, but they do remain after controlling for the Verbal Comprehension, Working Memory, and Processing Speed Indices. The number of children with reading difficulties suffering from visual perceptual disorders has been underestimated. For this reason, visual perceptual abilities should always be tested when making a reading disorder diagnosis. Profiles of IQ-test results of children suffering from reading and visual perceptual disorders should be interpreted carefully.

  10. Multisensory brand search: How the meaning of sounds guides consumers' visual attention.

    PubMed

    Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles

    2016-06-01

    Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. Caudate nucleus reactivity predicts perceptual learning rate for visual feature conjunctions.

    PubMed

    Reavis, Eric A; Frank, Sebastian M; Tse, Peter U

    2015-04-15

    Useful information in the visual environment is often contained in specific conjunctions of visual features (e.g., color and shape). The ability to quickly and accurately process such conjunctions can be learned. However, the neural mechanisms responsible for such learning remain largely unknown. It has been suggested that some forms of visual learning might involve the dopaminergic neuromodulatory system (Roelfsema et al., 2010; Seitz and Watanabe, 2005), but this hypothesis has not yet been directly tested. Here we test the hypothesis that learning visual feature conjunctions involves the dopaminergic system, using functional neuroimaging, genetic assays, and behavioral testing techniques. We use a correlative approach to evaluate potential associations between individual differences in visual feature conjunction learning rate and individual differences in dopaminergic function as indexed by neuroimaging and genetic markers. We find a significant correlation between activity in the caudate nucleus (a component of the dopaminergic system connected to visual areas of the brain) and visual feature conjunction learning rate. Specifically, individuals who showed a larger difference in activity between positive and negative feedback on an unrelated cognitive task, indicative of a more reactive dopaminergic system, learned visual feature conjunctions more quickly than those who showed a smaller activity difference. This finding supports the hypothesis that the dopaminergic system is involved in visual learning, and suggests that visual feature conjunction learning could be closely related to associative learning. However, no significant, reliable correlations were found between feature conjunction learning and genotype or dopaminergic activity in any other regions of interest. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Fine-grained temporal coding of visually-similar categories in the ventral visual pathway and prefrontal cortex

    PubMed Central

    Xu, Yang; D'Lauro, Christopher; Pyles, John A.; Kass, Robert E.; Tarr, Michael J.

    2013-01-01

    Humans are remarkably proficient at categorizing visually-similar objects. To better understand the cortical basis of this categorization process, we used magnetoencephalography (MEG) to record neural activity while participants learned–with feedback–to discriminate two highly-similar, novel visual categories. We hypothesized that although prefrontal regions would mediate early category learning, this role would diminish with increasing category familiarity and that regions within the ventral visual pathway would come to play a more prominent role in encoding category-relevant information as learning progressed. Early in learning we observed some degree of categorical discriminability and predictability in both prefrontal cortex and the ventral visual pathway. Predictability improved significantly above chance in the ventral visual pathway over the course of learning with the left inferior temporal and fusiform gyri showing the greatest improvement in predictability between 150 and 250 ms (M200) during category learning. In contrast, there was no comparable increase in discriminability in prefrontal cortex with the only significant post-learning effect being a decrease in predictability in the inferior frontal gyrus between 250 and 350 ms (M300). Thus, the ventral visual pathway appears to encode learned visual categories over the long term. At the same time these results add to our understanding of the cortical origins of previously reported signature temporal components associated with perceptual learning. PMID:24146656

  13. Learning to See by Learning to Draw: Probing the Perceptual Bases and Consequences of Highly Skilled Artistic Drawing

    ERIC Educational Resources Information Center

    Kozbelt, Aaron

    2017-01-01

    In this paper, I review the empirical evidence for advantages in visual perception and attention that may be associated with high levels of drawing skill. Particularly in the last few decades, some substantial progress on these issues has been made, although frequently with inconsistent or even contradictory results across studies, some…

  14. Attempted Validation of the Scores of the VARK: Learning Styles Inventory with Multitrait-Multimethod Confirmatory Factor Analysis Models

    ERIC Educational Resources Information Center

    Leite, Walter L.; Svinicki, Marilla; Shi, Yuying

    2010-01-01

    The authors examined the dimensionality of the VARK learning styles inventory. The VARK measures four perceptual preferences: visual (V), aural (A), read/write (R), and kinesthetic (K). VARK questions can be viewed as testlets because respondents can select multiple items within a question. The correlations between items within testlets are a type…

  15. Perceptual Drawing as a Learning Tool in a College Biology Laboratory

    NASA Astrophysics Data System (ADS)

    Landin, Jennifer

    2011-12-01

The use of drawing in the classroom has a contentious history in the U.S. education system. While most instructors and students agree that the activity helps students focus and observe more details, there is a lack of empirical data to support these positions. This study examines the use of three treatments (writing a description, drawing a perceptual image, or drawing a perceptual image after participating in a short instructional lesson on perceptual drawing) each week over the course of a semester. The students in the "Drawing with Instruction" group exhibit a small but significantly higher level of content knowledge by the end of the semester. Comparisons of Attitude Toward Biology and Observational Skills among the three groups were inconclusive. Student perceptions of the task are positive, although not as strong as indicated by other studies. A student behavior observed during the first study led to another question regarding student cognitive processes, and demonstrated cognitive change in student-rendered drawings. The data from the second study indicate that neither hemispheric dominance nor visual/verbal learning style impacts learning from perceptual drawing activities. However, conservatism and need for closure are inversely proportional to the change seen in student drawings over the course of a lesson. Further research is needed to verify these conclusions, as the second study has a small number of participants.

  16. Flexible Visual Processing in Young Adults with Autism: The Effects of Implicit Learning on a Global-Local Task

    ERIC Educational Resources Information Center

    Hayward, Dana A.; Shore, David I.; Ristic, Jelena; Kovshoff, Hanna; Iarocci, Grace; Mottron, Laurent; Burack, Jacob A.

    2012-01-01

We utilized a hierarchical figures task to determine the default level of perceptual processing and the flexibility of visual processing in a group of high-functioning young adults with autism (n = 12) and a group of typically developing young adults, matched by chronological age and IQ (n = 12). In one task, participants attended to one level of the…

  17. Visual Field Differences in Visual Word Recognition Can Emerge Purely from Perceptual Learning: Evidence from Modeling Chinese Character Pronunciation

    ERIC Educational Resources Information Center

    Hsiao, Janet Hui-wen

    2011-01-01

    In Chinese orthography, a dominant character structure exists in which a semantic radical appears on the left and a phonetic radical on the right (SP characters); a minority opposite arrangement also exists (PS characters). As the number of phonetic radical types is much greater than semantic radical types, in SP characters the information is…

  18. Investigation of Perceptual-Motor Behavior Across the Expert Athlete to Disabled Patient Skill Continuum can Advance Theory and Practical Application.

    PubMed

    Müller, Sean; Vallence, Ann-Maree; Winstein, Carolee

    2017-12-14

A framework is presented of how theoretical predictions can be tested across the expert athlete to disabled patient skill continuum. Common-coding theory is used as the exemplar to discuss sensory and motor system contributions to perceptual-motor behavior. Behavioral and neural studies investigating expert athletes and patients recovering from cerebral stroke are reviewed. They provide evidence of bi-directional contributions of visual and motor systems to perceptual-motor behavior. The majority of this research focuses on perceptual-motor performance or learning, with less attention to transfer. The field is ripe for research designed to test theoretical predictions across the expert athlete to disabled patient skill continuum. Our view has implications for theory and practice in sports science, physical education, and rehabilitation.

  19. SPLASH Down to Reading.

    ERIC Educational Resources Information Center

    Hoopes, Amy T.

    Research into visual, perceptual, and motor coordination suggests that the kind of physical activity and coordination involved in swimming might prevent some cases of dyslexia and improve the academic performance of many learning disabled children. Early neurological development shows a relationship among the creeping period, later communication…

  20. Development of visual-motor perception in pupils with expressive writing disorder and pupils without expressive writing disorder: a comparative statistical analysis.

    PubMed

    Mesrahi, Tahereh; Sedighi, Mohammadreza

    2013-08-01

Learning disability is one of the most studied subjects among behavioral specialists. Most learning difficulties are caused by sensorimotor development and neurological organization. The main purpose of the present research is to examine the role of delayed perceptual-motor development and brain damage in the origination of expressive writing disorder (EWD). The studied sample comprised 89 second- and third-grade elementary school pupils divided into two groups: pupils with expressive writing disorder (n = 43) and pupils without expressive writing disorder (n = 46). Students with EWD were first selected through a dictation test and an intelligence test; then both groups took the Bender Gestalt test. The average scores of perceptual visual-motor development and brain damage in the two groups were compared using a t test for independent groups and a χ2 test. Results show that there is a significant difference in perceptual visual-motor development between students with EWD and students without EWD (p < 0.01). Based on the results, the perceptual-motor development of students with EWD is lower than that of students without EWD. There is no significant difference in brain damage between those with EWD and healthy controls (p > 0.05). Based on our findings, it can be concluded that pupils who are relatively more developed than their peers, in terms of visual-motor perception, are more successful in education, especially in expressive writing.
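
    The group comparison reported above rests on an independent-samples t test. A self-contained sketch with hypothetical Bender-Gestalt scores (the study's actual data are not reproduced here) illustrates the computation:

```python
import math
from statistics import mean, variance

# Hypothetical scores for illustration only: EWD group vs. comparison group.
ewd = [12.1, 10.4, 11.8, 9.9, 10.7, 11.2, 12.5, 10.1]
ctl = [14.3, 15.1, 13.8, 14.9, 15.6, 14.2, 13.9, 15.0]

def independent_t(a, b):
    """Student's t statistic for two independent groups (pooled variance)."""
    na, nb = len(a), len(b)
    # Pooled sample variance; statistics.variance uses the n-1 denominator
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

t = independent_t(ewd, ctl)
df = len(ewd) + len(ctl) - 2
print(f"t({df}) = {t:.2f}")
```

    The resulting t statistic is compared against the t distribution with n1 + n2 − 2 degrees of freedom to obtain the p value reported in the abstract.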

  1. Visual discrimination transfer and modulation by biogenic amines in honeybees.

    PubMed

    Vieira, Amanda Rodrigues; Salles, Nayara; Borges, Marco; Mota, Theo

    2018-05-10

    For more than a century, visual learning and memory have been studied in the honeybee Apis mellifera using operant appetitive conditioning. Although honeybees show impressive visual learning capacities in this well-established protocol, operant training of free-flying animals cannot be combined with invasive protocols for studying the neurobiological basis of visual learning. In view of this, different attempts have been made to develop new classical conditioning protocols for studying visual learning in harnessed honeybees, though learning performance remains considerably poorer than that for free-flying animals. Here, we investigated the ability of honeybees to use visual information acquired during classical conditioning in a new operant context. We performed differential visual conditioning of the proboscis extension reflex (PER) followed by visual orientation tests in a Y-maze. Classical conditioning and Y-maze retention tests were performed using the same pair of perceptually isoluminant chromatic stimuli, to avoid the influence of phototaxis during free-flying orientation. Visual discrimination transfer was clearly observed, with pre-trained honeybees significantly orienting their flights towards the former positive conditioned stimulus (CS+), thus showing that visual memories acquired by honeybees are resistant to context changes between conditioning and the retention test. We combined this visual discrimination approach with selective pharmacological injections to evaluate the effect of dopamine and octopamine in appetitive visual learning. Both octopaminergic and dopaminergic antagonists impaired visual discrimination performance, suggesting that both these biogenic amines modulate appetitive visual learning in honeybees. Our study brings new insight into cognitive and neurobiological mechanisms underlying visual learning in honeybees. © 2018. Published by The Company of Biologists Ltd.

  2. Mastering algebra retrains the visual system to perceive hierarchical structure in equations.

    PubMed

    Marghetis, Tyler; Landy, David; Goldstone, Robert L

    2016-01-01

    Formal mathematics is a paragon of abstractness. It thus seems natural to assume that the mathematical expert should rely more on symbolic or conceptual processes, and less on perception and action. We argue instead that mathematical proficiency relies on perceptual systems that have been retrained to implement mathematical skills. Specifically, we investigated whether the visual system-in particular, object-based attention-is retrained so that parsing algebraic expressions and evaluating algebraic validity are accomplished by visual processing. Object-based attention occurs when the visual system organizes the world into discrete objects, which then guide the deployment of attention. One classic signature of object-based attention is better perceptual discrimination within, rather than between, visual objects. The current study reports that object-based attention occurs not only for simple shapes but also for symbolic mathematical elements within algebraic expressions-but only among individuals who have mastered the hierarchical syntax of algebra. Moreover, among these individuals, increased object-based attention within algebraic expressions is associated with a better ability to evaluate algebraic validity. These results suggest that, in mastering the rules of algebra, people retrain their visual system to represent and evaluate abstract mathematical structure. We thus argue that algebraic expertise involves the regimentation and reuse of evolutionarily ancient perceptual processes. Our findings implicate the visual system as central to learning and reasoning in mathematics, leading us to favor educational approaches to mathematics and related STEM fields that encourage students to adapt, not abandon, their use of perception.

  3. Outlining face processing skills of portrait artists: Perceptual experience with faces predicts performance.

    PubMed

    Devue, Christel; Barsics, Catherine

    2016-10-01

    Most humans seem to demonstrate astonishingly high levels of skill in face processing if one considers the sophisticated level of fine-tuned discrimination that face recognition requires. However, numerous studies now indicate that the ability to process faces is not as fundamental as once thought and that performance can range from despairingly poor to extraordinarily high across people. Here we studied people who are super specialists of faces, namely portrait artists, to examine how their specific visual experience with faces relates to a range of face processing skills (perceptual discrimination, short- and longer term recognition). Artists show better perceptual discrimination and, to some extent, recognition of newly learned faces than controls. They are also more accurate on other perceptual tasks (i.e., involving non-face stimuli or mental rotation). By contrast, artists do not display an advantage compared to controls on longer term face recognition (i.e., famous faces) nor on person recognition from other sensorial modalities (i.e., voices). Finally, the face inversion effect exists in artists and controls and is not modulated by artistic practice. Advantages in face processing for artists thus seem to closely mirror perceptual and visual short term memory skills involved in portraiture. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Perceptual organization and visual attention.

    PubMed

    Kimchi, Ruth

    2009-01-01

    Perceptual organization--the processes structuring visual information into coherent units--and visual attention--the processes by which some visual information in a scene is selected--are crucial for the perception of our visual environment and to visuomotor behavior. Recent research points to important relations between attentional and organizational processes. Several studies demonstrated that perceptual organization constrains attentional selectivity, and other studies suggest that attention can also constrain perceptual organization. In this chapter I focus on two aspects of the relationship between perceptual organization and attention. The first addresses the question of whether or not perceptual organization can take place without attention. I present findings demonstrating that some forms of grouping and figure-ground segmentation can occur without attention, whereas others require controlled attentional processing, depending on the processes involved and the conditions prevailing for each process. These findings challenge the traditional view, which assumes that perceptual organization is a unitary entity that operates preattentively. The second issue addresses the question of whether perceptual organization can affect the automatic deployment of attention. I present findings showing that the mere organization of some elements in the visual field by Gestalt factors into a coherent perceptual unit (an "object"), with no abrupt onset or any other unique transient, can capture attention automatically in a stimulus-driven manner. Taken together, the findings discussed in this chapter demonstrate the multifaceted, interactive relations between perceptual organization and visual attention.

  5. Implicit and Explicit Contributions to Object Recognition: Evidence from Rapid Perceptual Learning

    PubMed Central

    Hassler, Uwe; Friese, Uwe; Gruber, Thomas

    2012-01-01

    The present study investigated implicit and explicit recognition processes of rapidly perceptually learned objects by means of steady-state visual evoked potentials (SSVEP). Participants were initially exposed to object pictures within an incidental learning task (living/non-living categorization). Subsequently, degraded versions of some of these learned pictures were presented together with degraded versions of unlearned pictures and participants had to judge, whether they recognized an object or not. During this test phase, stimuli were presented at 15 Hz eliciting an SSVEP at the same frequency. Source localizations of SSVEP effects revealed for implicit and explicit processes overlapping activations in orbito-frontal and temporal regions. Correlates of explicit object recognition were additionally found in the superior parietal lobe. These findings are discussed to reflect facilitation of object-specific processing areas within the temporal lobe by an orbito-frontal top-down signal as proposed by bi-directional accounts of object recognition. PMID:23056558

  6. Learning to associate orientation with color in early visual areas by associative decoded fMRI neurofeedback

    PubMed Central

    Amano, Kaoru; Shibata, Kazuhisa; Kawato, Mitsuo; Sasaki, Yuka; Watanabe, Takeo

    2016-01-01

Associative learning is an essential brain process where the contingency of different items increases after training. Associative learning has been found to occur in many brain regions [1-4]. However, there is no clear evidence that associative learning of visual features occurs in early visual areas, although a number of studies have indicated that learning of a single visual feature (perceptual learning) involves early visual areas [5-8]. Here, via decoded functional magnetic resonance imaging (fMRI) neurofeedback, termed “DecNef” [9], we tested whether associative learning of color and orientation can be created in early visual areas. During three days' training, DecNef induced fMRI signal patterns that corresponded to a specific target color (red) mostly in early visual areas while a vertical achromatic grating was physically presented to participants. As a result, participants came to perceive “red” significantly more frequently than “green” in an achromatic vertical grating. This effect was also observed 3 to 5 months after the training. These results suggest that long-term associative learning of two different visual features such as color and orientation was created most likely in early visual areas. This newly extended technique that induces associative learning is called “A(ssociative)-DecNef” and may be used as an important tool for understanding and modifying brain functions, since associations are fundamental and ubiquitous functions in the brain. PMID:27374335

  7. Learning to Associate Orientation with Color in Early Visual Areas by Associative Decoded fMRI Neurofeedback.

    PubMed

    Amano, Kaoru; Shibata, Kazuhisa; Kawato, Mitsuo; Sasaki, Yuka; Watanabe, Takeo

    2016-07-25

    Associative learning is an essential brain process where the contingency of different items increases after training. Associative learning has been found to occur in many brain regions [1-4]. However, there is no clear evidence that associative learning of visual features occurs in early visual areas, although a number of studies have indicated that learning of a single visual feature (perceptual learning) involves early visual areas [5-8]. Here, via decoded fMRI neurofeedback termed "DecNef" [9], we tested whether associative learning of orientation and color can be created in early visual areas. During 3 days of training, DecNef induced fMRI signal patterns that corresponded to a specific target color (red) mostly in early visual areas while a vertical achromatic grating was physically presented to participants. As a result, participants came to perceive "red" significantly more frequently than "green" in an achromatic vertical grating. This effect was also observed 3-5 months after the training. These results suggest that long-term associative learning of two different visual features such as orientation and color was created, most likely in early visual areas. This newly extended technique that induces associative learning is called "A-DecNef," and it may be used as an important tool for understanding and modifying brain functions because associations are fundamental and ubiquitous functions in the brain. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Learning optimal eye movements to unusual faces

    PubMed Central

    Peterson, Matthew F.; Eckstein, Miguel P.

    2014-01-01

    Eye movements, which guide the fovea’s high resolution and computational power to relevant areas of the visual scene, are integral to efficient, successful completion of many visual tasks. How humans modify their eye movements through experience with their perceptual environments, and its functional role in learning new tasks, has not been fully investigated. Here, we used a face identification task where only the mouth discriminated exemplars to assess if, how, and when eye movement modulation may mediate learning. By interleaving trials of unconstrained eye movements with trials of forced fixation, we attempted to separate the contributions of eye movements and covert mechanisms to performance improvements. Without instruction, a majority of observers substantially increased accuracy and learned to direct their initial eye movements towards the optimal fixation point. The proximity of an observer’s default face identification eye movement behavior to the new optimal fixation point and the observer’s peripheral processing ability were predictive of performance gains and eye movement learning. After practice in a subsequent condition in which observers were directed to fixate different locations along the face, including the relevant mouth region, all observers learned to make eye movements to the optimal fixation point. In this fully learned state, augmented fixation strategy accounted for 43% of total efficiency improvements while covert mechanisms accounted for the remaining 57%. The findings suggest a critical role for eye movement planning to perceptual learning, and elucidate factors that can predict when and how well an observer can learn a new task with unusual exemplars. PMID:24291712

  9. Individual Differences in Learning and Cognitive Abilities

    DTIC Science & Technology

    1989-09-15

…conducted by Sir Francis Galton. Galton's view of intelligence was that it distinguished those individuals who had genius (e.g., demonstrated by making… genius must have more refined sensory and motor faculties. Thus, Galton argued, intelligence could be measured by assessing constructs such as visual… Keywords: learning, individual differences, cognitive abilities, intelligence, skill acquisition, perceptual speed.

  10. Evidence for Feature and Location Learning in Human Visual Perceptual Learning

    ERIC Educational Resources Information Center

    Moreno-Fernández, María Manuela; Salleh, Nurizzati Mohd; Prados, Jose

    2015-01-01

    In Experiment 1, human participants were pre-exposed to two similar checkerboard grids (AX and X) in alternation, and to a third grid (BX) in a separate block of trials. In a subsequent test, the unique feature A was better detected than the feature B when they were presented in the same location during the pre-exposure and test phases. However,…

  11. Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli.

    PubMed

    Kumar, Neeraj; Mutha, Pratik K

    2016-03-01

The prediction of the sensory outcomes of action is thought to be useful for distinguishing self- vs. externally generated sensations, correcting movements when sensory feedback is delayed, and learning predictive models for motor behavior. Here, we show that aspects of another fundamental function, perception, are enhanced when they entail the contribution of predicted sensory outcomes and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the "natural" sensory predictions present when any action is performed and separately examine the influence of these two sources on perceptual feature extraction. We then show that if the new predictions induced via motor learning are unreliable, subjects, rather than just relying on sensory information for perceptual judgments as is conventionally thought, adaptively transition to using other stable sensory predictions to maintain greater accuracy in their perceptual judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and also suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions. Copyright © 2016 the American Physiological Society.

  12. Perceptual Learning Induces Persistent Attentional Capture by Nonsalient Shapes.

    PubMed

    Qu, Zhe; Hillyard, Steven A; Ding, Yulong

    2017-02-01

    Visual attention can be attracted automatically by salient simple features, but whether and how nonsalient complex stimuli such as shapes may capture attention in humans remains unclear. Here, we present strong electrophysiological evidence that a nonsalient shape presented among similar shapes can provoke a robust and persistent capture of attention as a consequence of extensive training in visual search (VS) for that shape. Strikingly, this attentional capture that followed perceptual learning (PL) was evident even when the trained shape was task-irrelevant, was presented outside the focus of top-down spatial attention, and was undetected by the observer. Moreover, this attentional capture persisted for at least 3-5 months after training had been terminated. This involuntary capture of attention was indexed by electrophysiological recordings of the N2pc component of the event-related brain potential, which was localized to ventral extrastriate visual cortex, and was highly predictive of stimulus-specific improvement in VS ability following PL. These findings provide the first evidence that nonsalient shapes can capture visual attention automatically following PL and challenge the prominent view that detection of feature conjunctions requires top-down focal attention. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Local and global processing in block design tasks in children with dyslexia or nonverbal learning disability.

    PubMed

    Cardillo, Ramona; Mammarella, Irene C; Garcia, Ricardo Basso; Cornoldi, Cesare

    2017-05-01

Visuo-constructive and perceptual abilities have been poorly investigated in children with learning disabilities. The present study focused on local or global visuospatial processing in children with nonverbal learning disability (NLD) and dyslexia compared with typically developing (TD) controls. Participants were presented with a modified block design task (BDT), in both a typical visuo-constructive version that involves reconstructing figures from blocks, and a perceptual version in which respondents must rapidly match unfragmented figures with a corresponding fragmented target figure. The figures used in the tasks were devised by manipulating two variables, perceptual cohesiveness and task uncertainty, stimulating global or local processes. Our results confirmed that children with NLD had more problems with the visuo-constructive version of the task, whereas those with dyslexia showed only a slight difficulty with the visuo-constructive version but had greater difficulty with the perceptual version, especially in terms of response times. These findings are interpreted in relation to the slower visual processing speed of children with dyslexia, and to the visuo-constructive problems and difficulty in flexibly using experienced global vs. local processes of children with NLD. The clinical and educational implications of these findings are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Neuroimaging Evidence for 2 Types of Plasticity in Association with Visual Perceptual Learning.

    PubMed

    Shibata, Kazuhisa; Sasaki, Yuka; Kawato, Mitsuo; Watanabe, Takeo

    2016-09-01

    Visual perceptual learning (VPL) is long-term performance improvement as a result of perceptual experience. It is unclear whether VPL is associated with refinement in representations of the trained feature (feature-based plasticity), improvement in processing of the trained task (task-based plasticity), or both. Here, we provide empirical evidence that VPL of motion detection is associated with both types of plasticity which occur predominantly in different brain areas. Before and after training on a motion detection task, subjects' neural responses to the trained motion stimuli were measured using functional magnetic resonance imaging. In V3A, significant response changes after training were observed specifically to the trained motion stimulus but independently of whether subjects performed the trained task. This suggests that the response changes in V3A represent feature-based plasticity in VPL of motion detection. In V1 and the intraparietal sulcus, significant response changes were found only when subjects performed the trained task on the trained motion stimulus. This suggests that the response changes in these areas reflect task-based plasticity. These results collectively suggest that VPL of motion detection is associated with the 2 types of plasticity, which occur in different areas and therefore have separate mechanisms at least to some degree. © The Author 2016. Published by Oxford University Press.

  15. Visual perceptual skills in children born with very low birth weights.

    PubMed

    Davis, Deborah Winders; Burns, Barbara M; Wilkerson, Shirley A; Steichen, Jean J

    2005-01-01

A disproportionate number of very low birth weight (VLBW; ≤1500 g) children require special education services and have school-related problems even when they are free from major disabilities and have average intelligence quotient scores. Visual-perceptual problems have been suggested as contributors to deficits in academic performance, but few data are available describing specific visual-perceptual problems. This study was designed to identify specific visual-perceptual skills in VLBW children. Participants were 92 VLBW children aged 4 through 5 years who were free from major disability and appropriate for gestational age at birth. The Test of Visual-Perceptual Skills (non-motor)-Revised was used. Despite intelligence quotient scores in the average range, the majority (63% to 78.3%) of the children performed below age level on all seven subscales of a normed assessment of visual perceptual skills. Results suggest that visual perceptual screening should be considered as a part of routine evaluations of preschool-aged children born prematurely. Early identification of specific deficits could lead to interventions to improve achievement trajectories for these high-risk children.

  16. Stimulus homogeneity enhances implicit learning: evidence from contextual cueing.

    PubMed

    Feldmann-Wüstefeld, Tobias; Schubö, Anna

    2014-04-01

Visual search for a target object is faster if the target is embedded in a repeatedly presented invariant configuration of distractors ('contextual cueing'). It has also been shown that the homogeneity of a context affects the efficiency of visual search: targets receive prioritized processing when presented in a homogeneous context compared to a heterogeneous context, presumably due to grouping processes at early stages of visual processing. The present study investigated in three Experiments whether context homogeneity also affects contextual cueing. In Experiment 1, context homogeneity varied on three levels of the task-relevant dimension (orientation) and contextual cueing was most pronounced for context configurations with high orientation homogeneity. When context homogeneity varied on three levels of the task-irrelevant dimension (color) and orientation homogeneity was fixed, no modulation of contextual cueing was observed: high orientation homogeneity led to large contextual cueing effects (Experiment 2) and low orientation homogeneity led to low contextual cueing effects (Experiment 3), irrespective of color homogeneity. Enhanced contextual cueing for homogeneous context configurations suggests that grouping processes affect not only visual search but also implicit learning. We conclude that memory representations of context configurations are more easily acquired when context configurations can be processed as larger, grouped perceptual units. However, this form of implicit perceptual learning is only improved by stimulus homogeneity when stimulus homogeneity facilitates grouping processes on a dimension that is currently relevant in the task. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. A crossmodal crossover: opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli.

    PubMed

    Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B

    2012-07-16

    Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Sensorimotor Learning in a Computerized Athletic Training Battery.

    PubMed

Krasich, Kristina; Ramger, Ben; Holton, Laura; Wang, Lingling; Mitroff, Stephen R; Appelbaum, L. Gregory

    2016-01-01

    Sensorimotor abilities are crucial for performance in athletic, military, and other occupational activities, and there is great interest in understanding learning in these skills. Here, behavioral performance was measured over three days as twenty-seven participants practiced multiple sessions on the Nike SPARQ Sensory Station (Nike, Inc., Beaverton, Oregon), a computerized visual and motor assessment battery. Wrist-worn actigraphy was recorded to monitor sleep-wake cycles. Significant learning was observed in tasks with high visuomotor control demands but not in tasks of visual sensitivity. Learning was primarily linear, with up to 60% improvement, but did not relate to sleep quality in this normal-sleeping population. These results demonstrate differences in the rate and capacity for learning across perceptual and motor domains, indicating potential targets for sensorimotor training interventions.

  19. Perceptual quality prediction on authentically distorted images using a bag of features approach

    PubMed Central

    Ghadiyaram, Deepti; Bovik, Alan C.

    2017-01-01

    Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores on synthetically distorted images. Therefore, they learn image features that effectively predict human visual quality judgments of inauthentic and usually isolated (single) distortions. However, real-world images usually contain complex composite mixtures of multiple distortions. We study the perceptually relevant natural scene statistics of such authentically distorted images in different color spaces and transform domains. We propose a “bag of feature maps” approach that avoids assumptions about the type of distortion(s) contained in an image and instead focuses on capturing consistencies—or departures therefrom—of the statistics of real-world images. Using a large database of authentically distorted images, human opinions of them, and bags of features computed on them, we train a regressor to conduct image quality prediction. We demonstrate the competence of the features toward improving automatic perceptual quality prediction by testing a learned algorithm using them on a benchmark legacy database as well as on a newly introduced distortion-realistic resource called the LIVE In the Wild Image Quality Challenge Database. We extensively evaluate the perceptual quality prediction model and algorithm and show that it is able to achieve good-quality prediction power that is better than other leading models. PMID:28129417
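
The final step described above, training a regressor from per-image features to human opinion scores, can be sketched in miniature. The features and scores below are random stand-ins (not the paper's scene-statistic feature maps or the LIVE database), and ridge regression stands in for whatever regressor one prefers:

```python
import numpy as np

# Illustrative sketch: map per-image feature vectors ("bag of features")
# to quality scores with closed-form ridge regression. All data here are
# synthetic stand-ins for the paper's features and opinion scores.
rng = np.random.default_rng(2)
n_images, n_features = 300, 20
X = rng.normal(size=(n_images, n_features))          # feature vectors
w_true = rng.normal(size=n_features)
y = X @ w_true + rng.normal(0, 0.1, n_images)        # "opinion scores"

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge: w = (X'X + lam*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w = ridge_fit(X, y)
pred = X @ w
corr = np.corrcoef(pred, y)[0, 1]   # agreement with the scores
print(f"correlation: {corr:.3f}")
```

In practice the regressor is evaluated on held-out images; the closed-form solve above is simply the smallest self-contained choice for illustration.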

  20. Profiling Perceptual Learning Styles of Chinese as a Second Language Learners in University Settings.

    PubMed

    Sun, Peijian Paul; Teng, Lin Sophie

    2017-12-01

    This study revisited Reid's (1987) perceptual learning style preference questionnaire (PLSPQ) in an attempt to answer whether the PLSPQ fits in the Chinese-as-a-second-language (CSL) context. If not, what are CSL learners' learning styles drawing on the PLSPQ? The PLSPQ was first re-examined through reliability analysis and confirmatory factor analysis (CFA) with 224 CSL learners. The results showed that Reid's six-factor PLSPQ could not satisfactorily explain the CSL learners' learning styles. Exploratory factor analyses were, therefore, performed to explore the dimensionality of the PLSPQ in the CSL context. A four-factor PLSPQ was successfully constructed including auditory/visual, kinaesthetic/tactile, group, and individual styles. Such a measurement model was cross-validated through CFAs with 118 CSL learners. The study not only lends evidence to the literature that Reid's PLSPQ lacks construct validity, but also provides CSL teachers and learners with insightful and practical guidance concerning learning styles. Implications and limitations of the present study are discussed.
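
The reliability analysis mentioned above is conventionally done with Cronbach's alpha. A minimal sketch on synthetic Likert-style responses (the 5-item subscale and simulated respondents are invented for illustration, not Reid's PLSPQ data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return k / (k - 1) * (1 - item_vars / total_var)

# Synthetic 5-item subscale: each respondent has a latent style strength
# plus per-item noise, yielding internally consistent 1-5 responses.
rng = np.random.default_rng(1)
latent = rng.normal(3, 1, size=(200, 1))
scores = np.clip(np.round(latent + rng.normal(0, 0.7, (200, 5))), 1, 5)

alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.2f}")
```

Because the five items share a common latent component, alpha comes out high; shuffling each item's column independently would drive it toward zero.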

  1. Perceptual learning and human expertise

    NASA Astrophysics Data System (ADS)

    Kellman, Philip J.; Garrigan, Patrick

    2009-06-01

    We consider perceptual learning: experience-induced changes in the way perceivers extract information. Often neglected in scientific accounts of learning and in instruction, perceptual learning is a fundamental contributor to human expertise and is crucial in domains where humans show remarkable levels of attainment, such as language, chess, music, and mathematics. In Section 2, we give a brief history and discuss the relation of perceptual learning to other forms of learning. We consider in Section 3 several specific phenomena, illustrating the scope and characteristics of perceptual learning, including both discovery and fluency effects. We describe abstract perceptual learning, in which structural relationships are discovered and recognized in novel instances that do not share constituent elements or basic features. In Section 4, we consider primary concepts that have been used to explain and model perceptual learning, including receptive field change, selection, and relational recoding. In Section 5, we consider the scope of perceptual learning, contrasting recent research, focused on simple sensory discriminations, with earlier work that emphasized extraction of invariance from varied instances in more complex tasks. Contrary to some recent views, we argue that perceptual learning should not be confined to changes in early sensory analyzers. Phenomena at various levels, we suggest, can be unified by models that emphasize discovery and selection of relevant information. In a final section, we consider the potential role of perceptual learning in educational settings. Most instruction emphasizes facts and procedures that can be verbalized, whereas expertise depends heavily on implicit pattern recognition and selective extraction skills acquired through perceptual learning. 
We consider reasons why perceptual learning has not been systematically addressed in traditional instruction, and we describe recent successful efforts to create a technology of perceptual learning in areas such as aviation, mathematics, and medicine. Research in perceptual learning promises to advance scientific accounts of learning, and perceptual learning technology may offer similar promise in improving education.

  2. The Development of a Visual-Perceptual Chemistry Specific (VPCS) Assessment Tool

    ERIC Educational Resources Information Center

    Oliver-Hoyo, Maria; Sloan, Caroline

    2014-01-01

    The development of the Visual-Perceptual Chemistry Specific (VPCS) assessment tool is based on items that align to eight visual-perceptual skills considered as needed by chemistry students. This tool includes a comprehensive range of visual operations and presents items within a chemistry context without requiring content knowledge to solve…

  3. Rapid learning in visual cortical networks.

    PubMed

    Wang, Ye; Dragoi, Valentin

    2015-08-26

Although changes in brain activity during learning have been extensively examined at the single neuron level, the coding strategies employed by cell populations remain mysterious. We examined cell populations in macaque area V4 during a rapid form of perceptual learning that emerges within tens of minutes. Multiple single units and LFP responses were recorded as monkeys improved their performance in an image discrimination task. We show that the increase in behavioral performance during learning is predicted by a tight coordination of spike timing with local population activity. Greater spike-LFP theta synchronization correlated with higher learning performance, whereas high-frequency synchronization was unrelated to changes in performance; these effects were absent once learning had stabilized and stimuli became familiar, and in the absence of learning. These findings reveal a novel mechanism of plasticity in visual cortex by which elevated low-frequency synchronization between individual neurons and local population activity accompanies the improvement in performance during learning.
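
A common index of the spike-LFP synchronization measured in studies like this is the phase-locking value (PLV): the length of the mean resultant vector of LFP phases at spike times. The sketch below uses synthetic spikes and an idealized theta oscillation (all parameters are assumptions, not the study's recordings):

```python
import numpy as np

# Illustrative PLV sketch: spikes locked near a fixed phase of a 6 Hz
# "theta" oscillation give a high PLV; uniformly timed spikes do not.
fs, theta = 1000.0, 6.0            # sample rate and theta freq (Hz), assumed
t = np.arange(0, 20, 1 / fs)       # 20 s of "recording"
lfp_phase = (2 * np.pi * theta * t) % (2 * np.pi)

rng = np.random.default_rng(3)
n_spikes = int(20 * theta)         # one spike per theta cycle

# Locked spikes: one per cycle, jittered by ~10 ms around phase zero.
locked_times = np.arange(n_spikes) / theta + rng.normal(0, 0.01, n_spikes)
locked_idx = np.clip((locked_times * fs).astype(int), 0, t.size - 1)
plv_locked = np.abs(np.mean(np.exp(1j * lfp_phase[locked_idx])))

# Unlocked spikes: uniform random times.
unlocked_idx = rng.integers(0, t.size, n_spikes)
plv_unlocked = np.abs(np.mean(np.exp(1j * lfp_phase[unlocked_idx])))

print(f"locked PLV: {plv_locked:.2f}, unlocked PLV: {plv_unlocked:.2f}")
```

With real data the phase would come from a band-pass filtered LFP (e.g. via a Hilbert transform) rather than an analytic sinusoid; the readout step is the same.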

  4. Evolutionary relevance facilitates visual information processing.

    PubMed

    Jackson, Russell E; Calvillo, Dusti P

    2013-11-03

Visual search of the environment is a fundamental human behavior that is powerfully affected by perceptual load. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component to evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.

  5. Perceptually Guided Photo Retargeting.

    PubMed

    Xia, Yingjie; Zhang, Luming; Hong, Richang; Nie, Liqiang; Yan, Yan; Shao, Ling

    2016-04-22

We propose perceptually guided photo retargeting, which shrinks a photo by simulating a human's process of sequentially perceiving visually/semantically important regions in a photo. In particular, we first project the local features (graphlets in this paper) onto a semantic space, wherein visual cues such as global spatial layout and rough geometric context are exploited. Thereafter, a sparsity-constrained learning algorithm is derived to select semantically representative graphlets of a photo, and the selection process can be interpreted as a path that simulates how a human actively perceives semantics in a photo. Furthermore, we learn the prior distribution of such active graphlet paths (AGPs) from training photos that are marked as esthetically pleasing by multiple users. The learned priors enforce the corresponding AGP of a retargeted photo to be maximally similar to those from the training photos. On top of the retargeting model, we further design an online learning scheme to incrementally update the model with new photos that are esthetically pleasing. The online update module makes the algorithm less dependent on the number and contents of the initial training data. Experimental results show that: 1) the proposed AGP is over 90% consistent with human gaze shifting path, as verified by the eye-tracking data, and 2) the retargeting algorithm outperforms its competitors significantly, as AGP is more indicative of photo esthetics than conventional saliency maps.

  6. Design of Training Systems, Phase II-A Report. An Educational Technology Assessment Model (ETAM)

    DTIC Science & Technology

    1975-07-01

…"format" for the perceptual tasks. This is applicable to auditory as well as visual tasks. Student Participation in Learning Route: when a student enters… skill formats; skill training; 05.05 Vehicle properties; instructional functions: type of stimulus presented to student (visual, auditory)… Subtask 05.05: for example, a trainer to identify and interpret auditory signals would not be represented in the above list. Trainers in the vehicle…

  7. The Effect of Auditory and Visual Motion Picture Descriptive Modalities in Teaching Perceptual-Motor Skills Used in the Grading of Cereal Grains.

    ERIC Educational Resources Information Center

    Hannemann, James William

    This study was designed to discover whether a student learns to imitate the skills demonstrated in a motion picture more accurately when the supportive descriptive terminology is presented in an auditory (spoken) form or in a visual (captions) form. A six-minute color 16mm film was produced--"Determining the Test Weight per Bushel of Yellow Corn".…

  8. Perceptual learning.

    PubMed

    Seitz, Aaron R

    2017-07-10

Perceptual learning refers to how experience can change the way we perceive sights, sounds, smells, tastes, and touch. Examples abound: music training improves our ability to discern tones; experience with food and wines can refine our palate (and unfortunately more quickly empty our wallet), and with years of training radiologists learn to save lives by discerning subtle details of images that escape the notice of untrained viewers. We often take perceptual learning for granted, but it has a profound impact on how we perceive the world. In this Primer, I will explain how perceptual learning is transformative in guiding our perceptual processes, how research into perceptual learning provides insight into fundamental mechanisms of learning and brain processes, and how knowledge of perceptual learning can be used to develop more effective training approaches for those requiring expert perceptual skills or those in need of perceptual rehabilitation (such as individuals with poor vision). I will make a case that perceptual learning is ubiquitous, scientifically interesting, and has substantial practical utility to us all. Copyright © 2017. Published by Elsevier Ltd.

  9. Diagnosis and Treatment of Reading Difficulties in Puerto Rican and Negro Communities.

    ERIC Educational Resources Information Center

    Cohen, S. Alan

    Reading disabilities are divided into three categories: those caused by perceptual factors, those caused by psychosocial factors, and those caused by psychoeducational factors. Poor development of visual perception constitutes a disproportionate percentage of learning disability among Negroes and Puerto Ricans in central cities. Early childhood…

  10. Aspects of Motor Performance and Preacademic Learning.

    ERIC Educational Resources Information Center

    Feder, Katya; Kerr, Robert

    1996-01-01

    The Miller Assessment for Preschoolers (MAP) and a number/counting test were given to 50 4- and 5-year-olds. Low performance on counting was related to significantly slower average response time, overshoot movement time, and reaction time, indicating perceptual-motor difficulty. Low MAP scores indicated difficulty processing visual spatial…

  11. Teaching Suggestions: Exceptional Child Program.

    ERIC Educational Resources Information Center

    Burrows, Patricia G., Ed.

    A variety of activities to improve auditory, visual, motor, and academic skills of learning disabled children are presented for teachers' use. Activities are grouped under perceptual skills and color coded for easy access. Given for each activity are the names (such as Milkman Mixup), idea or purpose (one example is improvement of fine motor…

  12. Steady-state signatures of visual perceptual load, multimodal distractor filtering, and neural competition.

    PubMed

    Parks, Nathan A; Hilimire, Matthew R; Corballis, Paul M

    2011-05-01

    The perceptual load theory of attention posits that attentional selection occurs early in processing when a task is perceptually demanding but occurs late in processing otherwise. We used a frequency-tagged steady-state evoked potential paradigm to investigate the modality specificity of perceptual load-induced distractor filtering and the nature of neural-competitive interactions between task and distractor stimuli. EEG data were recorded while participants monitored a stream of stimuli occurring in rapid serial visual presentation (RSVP) for the appearance of previously assigned targets. Perceptual load was manipulated by assigning targets that were identifiable by color alone (low load) or by the conjunction of color and orientation (high load). The RSVP task was performed alone and in the presence of task-irrelevant visual and auditory distractors. The RSVP stimuli, visual distractors, and auditory distractors were "tagged" by modulating each at a unique frequency (2.5, 8.5, and 40.0 Hz, respectively), which allowed each to be analyzed separately in the frequency domain. We report three important findings regarding the neural mechanisms of perceptual load. First, we replicated previous findings of within-modality distractor filtering and demonstrated a reduction in visual distractor signals with high perceptual load. Second, auditory steady-state distractor signals were unaffected by manipulations of visual perceptual load, consistent with the idea that perceptual load-induced distractor filtering is modality specific. Third, analysis of task-related signals revealed that visual distractors competed with task stimuli for representation and that increased perceptual load appeared to resolve this competition in favor of the task stimulus.
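
The frequency-tagging logic described here, modulating each stream at its own frequency and reading out each response in the frequency domain, can be sketched with synthetic data (the sampling rate and amplitudes are invented; only the tag frequencies come from the study):

```python
import numpy as np

# Illustrative sketch (synthetic data, not the study's EEG): streams
# tagged at 2.5, 8.5, and 40.0 Hz are recovered independently from one
# trace as the Fourier amplitude at each tag frequency.
fs = 500.0                         # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)       # 10 s of data
tags = {"rsvp": 2.5, "visual_distractor": 8.5, "auditory_distractor": 40.0}
true_amp = {"rsvp": 2.0, "visual_distractor": 1.0, "auditory_distractor": 0.5}

rng = np.random.default_rng(0)
eeg = sum(true_amp[k] * np.sin(2 * np.pi * f * t) for k, f in tags.items())
eeg = eeg + rng.normal(0, 0.5, t.size)        # broadband noise

freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectrum = np.abs(np.fft.rfft(eeg)) * 2 / t.size   # single-sided amplitude

# Amplitude at each tag frequency isolates that stream's response.
amps = {k: spectrum[np.argmin(np.abs(freqs - f))] for k, f in tags.items()}
print(amps)
```

Choosing a window length that holds an integer number of cycles of every tag frequency (10 s here) keeps each tag on an exact FFT bin, so the three responses do not leak into one another.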

  13. Spatial frequency discrimination learning in normal and developmentally impaired human vision

    PubMed Central

    Astle, Andrew T.; Webb, Ben S.; McGraw, Paul V.

    2010-01-01

    Perceptual learning effects demonstrate that the adult visual system retains neural plasticity. If perceptual learning holds any value as a treatment tool for amblyopia, trained improvements in performance must generalise. Here we investigate whether spatial frequency discrimination learning generalises within task to other spatial frequencies, and across task to contrast sensitivity. Before and after training, we measured contrast sensitivity and spatial frequency discrimination (at a range of reference frequencies 1, 2, 4, 8, 16 c/deg). During training, normal and amblyopic observers were divided into three groups. Each group trained on a spatial frequency discrimination task at one reference frequency (2, 4, or 8 c/deg). Normal and amblyopic observers who trained at lower frequencies showed a greater rate of within task learning (at their reference frequency) compared to those trained at higher frequencies. Compared to normals, amblyopic observers showed greater within task learning, at the trained reference frequency. Normal and amblyopic observers showed asymmetrical transfer of learning from high to low spatial frequencies. Both normal and amblyopic subjects showed transfer to contrast sensitivity. The direction of transfer for contrast sensitivity measurements was from the trained spatial frequency to higher frequencies, with the bandwidth and magnitude of transfer greater in the amblyopic observers compared to normals. The findings provide further support for the therapeutic efficacy of this approach and establish general principles that may help develop more effective protocols for the treatment of developmental visual deficits. PMID:20832416

  14. Anodal tDCS to V1 blocks visual perceptual learning consolidation.

    PubMed

    Peters, Megan A K; Thompson, Benjamin; Merabet, Lotfi B; Wu, Allan D; Shams, Ladan

    2013-06-01

    This study examined the effects of visual cortex transcranial direct current stimulation (tDCS) on visual processing and learning. Participants performed a contrast detection task on two consecutive days. Each session consisted of a baseline measurement followed by measurements made during active or sham stimulation. On the first day, one group received anodal stimulation to primary visual cortex (V1), while another received cathodal stimulation. Stimulation polarity was reversed for these groups on the second day. The third (control) group of subjects received sham stimulation on both days. No improvements or decrements in contrast sensitivity relative to the same-day baseline were observed during real tDCS, nor was any within-session learning trend observed. However, task performance improved significantly from Day 1 to Day 2 for the participants who received cathodal tDCS on Day 1 and for the sham group. No such improvement was found for the participants who received anodal stimulation on Day 1, indicating that anodal tDCS blocked overnight consolidation of visual learning, perhaps through engagement of inhibitory homeostatic plasticity mechanisms or alteration of the signal-to-noise ratio within stimulated cortex. These results show that applying tDCS to the visual cortex can modify consolidation of visual learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Anomalous visual experiences, negative symptoms, perceptual organization and the magnocellular pathway in schizophrenia: a shared construct?

    PubMed

    Kéri, Szabolcs; Kiss, Imre; Kelemen, Oguz; Benedek, György; Janka, Zoltán

    2005-10-01

    Schizophrenia is associated with impaired visual information processing. The aim of this study was to investigate the relationship between anomalous perceptual experiences, positive and negative symptoms, perceptual organization, rapid categorization of natural images, and magnocellular (M) and parvocellular (P) visual pathway functioning. Thirty-five unmedicated patients with schizophrenia and 20 matched healthy control volunteers participated. Anomalous perceptual experiences were assessed with the Bonn Scale for the Assessment of Basic Symptoms (BSABS). General intellectual functions were evaluated with the revised version of the Wechsler Adult Intelligence Scale. The 1-9 version of the Continuous Performance Test (CPT) was used to investigate sustained attention. The following psychophysical tests were used: detection of Gabor patches with collinear and orthogonal flankers (perceptual organization), categorization of briefly presented natural scenes (rapid visual processing), low-contrast and frequency-doubling vernier threshold (M pathway functioning), isoluminant colour vernier threshold and high spatial frequency discrimination (P pathway functioning). The patients with schizophrenia were impaired on tests of perceptual organization, rapid visual processing, and M pathway functioning. There was a significant correlation between BSABS scores, negative symptoms, perceptual organization, rapid visual processing, and M pathway functioning. Positive symptoms, IQ, CPT, and P pathway measures did not correlate with these parameters. The best predictor of the BSABS score was the perceptual organization deficit. These results raise the possibility that multiple facets of visual information processing deficits can be explained by M pathway dysfunction in schizophrenia, resulting in impaired attentional modulation of perceptual organization and of natural image categorization.

  16. Android application for handwriting segmentation using PerTOHS theory

    NASA Astrophysics Data System (ADS)

    Akouaydi, Hanen; Njah, Sourour; Alimi, Adel M.

    2017-03-01

    The paper handles the problem of handwriting segmentation on mobile devices. Many applications have been developed to facilitate handwriting recognition, bypassing the limited number of keys on keyboards by offering a drawing space for writing instead. Here we present a mobile application for handwriting segmentation that uses PerTOHS theory (Perceptual Theory of On-line Handwriting Segmentation), in which handwriting is defined as a sequence of elementary and global perceptual codes. The theory analyzes the written script and learns the visual features of handwriting codes in order to generate new ones from the generated perceptual sequences. To obtain this classification, we apply the Beta-elliptic model, a fuzzy detector, and genetic algorithms to extract the EPCs (Elementary Perceptual Codes) and GPCs (Global Perceptual Codes) that compose the script. Finally, we present our Android application M-PerTOHS for handwriting segmentation.

  17. Examining Chemistry Students' Visual-Perceptual Skills Using the VSCS Tool and Interview Data

    ERIC Educational Resources Information Center

    Christian, Caroline

    2010-01-01

    The Visual-Spatial Chemistry Specific (VSCS) assessment tool was developed to test students' visual-perceptual skills, which are required to form a mental image of an object. The VSCS was designed around the theoretical framework of Rochford and Archer that provides eight distinct and well-defined visual-perceptual skills with identified problems…

  18. Perceptual category learning of photographic and painterly stimuli in rhesus macaques (Macaca mulatta) and humans

    PubMed Central

    Jensen, Greg; Terrace, Herbert

    2017-01-01

    Humans are highly adept at categorizing visual stimuli, but studies of human categorization are typically validated by verbal reports. This makes it difficult to perform comparative studies of categorization using non-human animals. Interpretation of comparative studies is further complicated by the possibility that animal performance may merely reflect reinforcement learning, whereby discrete features act as discriminative cues for categorization. To assess and compare how humans and monkeys classified visual stimuli, we trained 7 rhesus macaques and 41 human volunteers to respond, in a specific order, to four simultaneously presented stimuli at a time, each belonging to a different perceptual category. These exemplars were drawn at random from large banks of images, such that the stimuli presented changed on every trial. Subjects nevertheless identified and ordered these changing stimuli correctly. Three monkeys learned to order naturalistic photographs; four others, close-up sections of paintings with distinctive styles. Humans learned to order both types of stimuli. All subjects classified stimuli at levels substantially greater than that predicted by chance or by feature-driven learning alone, even when stimuli changed on every trial. However, humans more closely resembled monkeys when classifying the more abstract painting stimuli than the photographic stimuli. This points to a common classification strategy in both species, one that humans can rely on in the absence of linguistic labels for categories. PMID:28961270

  19. Perceptual Load Alters Visual Excitability

    ERIC Educational Resources Information Center

    Carmel, David; Thorne, Jeremy D.; Rees, Geraint; Lavie, Nilli

    2011-01-01

    Increasing perceptual load reduces the processing of visual stimuli outside the focus of attention, but the mechanism underlying these effects remains unclear. Here we tested an account attributing the effects of perceptual load to modulations of visual cortex excitability. In contrast to stimulus competition accounts, which propose that load…

  20. Dissociation between perceptual processing and priming in long-term lorazepam users.

    PubMed

    Giersch, Anne; Vidailhet, Pierre

    2006-12-01

    Acute effects of lorazepam on visual information processing, perceptual priming and explicit memory are well established. However, visual processing and perceptual priming have rarely been explored in long-term lorazepam users. By exploring these functions it was possible to test the hypothesis that difficulty in processing visual information may lead to deficiencies in perceptual priming. Using a simple blind procedure, we tested explicit memory, perceptual priming and visual perception in 15 long-term lorazepam users and 15 control subjects individually matched according to sex, age and education level. Explicit memory, perceptual priming, and the identification of fragmented pictures were found to be preserved in long-term lorazepam users, contrary to what is usually observed after an acute drug intake. The processing of visual contour, on the other hand, was still significantly impaired. These results suggest that the effects observed on low-level visual perception are independent of the acute deleterious effects of lorazepam on perceptual priming. A comparison of perceptual priming in subjects with low- vs. high-level identification of new fragmented pictures further suggests that the ability to identify fragmented pictures has no influence on priming. Despite the fact that they were treated with relatively low doses and far from peak plasma concentration, it is noteworthy that in long-term users memory was preserved.

  1. "The Mask Who Wasn't There": Visual Masking Effect with the Perceptual Absence of the Mask

    ERIC Educational Resources Information Center

    Rey, Amandine Eve; Riou, Benoit; Muller, Dominique; Dabic, Stéphanie; Versace, Rémy

    2015-01-01

    Does a visual mask need to be perceptually present to disrupt processing? In the present research, we proposed to explore the link between perceptual and memory mechanisms by demonstrating that a typical sensory phenomenon (visual masking) can be replicated at a memory level. Experiment 1 highlighted an interference effect of a visual mask on the…

  2. ViA: a perceptual visualization assistant

    NASA Astrophysics Data System (ADS)

    Healey, Chris G.; St. Amant, Robert; Elhaddad, Mahmoud S.

    2000-05-01

    This paper describes an automated visualization assistant called ViA. ViA is designed to help users construct perceptually optimal visualizations to represent, explore, and analyze large, complex, multidimensional datasets. We have approached this problem by studying what is known about the control of human visual attention. By harnessing the low-level human visual system, we can support our dual goals of rapid and accurate visualization. Perceptual guidelines that we have built using psychophysical experiments form the basis for ViA. ViA uses modified mixed-initiative planning algorithms from artificial intelligence to search for perceptually optimal data-attribute-to-visual-feature mappings. Our perceptual guidelines are integrated into evaluation engines that provide evaluation weights for a given data-feature mapping, and hints on how that mapping might be improved. ViA begins by asking users a set of simple questions about their dataset and the analysis tasks they want to perform. Answers to these questions are used in combination with the evaluation engines to identify and intelligently pursue promising data-feature mappings. The result is an automatically generated set of mappings that are perceptually salient, but that also respect the context of the dataset and users' preferences about how they want to visualize their data.
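The search ViA performs can be caricatured as scoring candidate data-attribute-to-visual-feature mappings and keeping the best one. The sketch below is illustrative only: the attribute names, features, and suitability scores are invented, and ViA's real evaluation engines and mixed-initiative search are far richer than this exhaustive scan.

```python
from itertools import permutations

# Hypothetical data attributes and visual features (not from the paper)
attributes = ["temperature", "pressure", "wind_speed"]
features = ["hue", "luminance", "size"]

# Toy "evaluation engine": per-pair perceptual suitability scores in [0, 1]
suitability = {
    ("temperature", "hue"): 0.9, ("temperature", "luminance"): 0.6, ("temperature", "size"): 0.3,
    ("pressure", "hue"): 0.4, ("pressure", "luminance"): 0.8, ("pressure", "size"): 0.5,
    ("wind_speed", "hue"): 0.2, ("wind_speed", "luminance"): 0.4, ("wind_speed", "size"): 0.7,
}

def evaluate(mapping):
    """Overall evaluation weight for one attribute -> feature mapping."""
    return sum(suitability[(a, f)] for a, f in mapping)

# Enumerate all one-to-one mappings and keep the highest-scoring one
best = max((tuple(zip(attributes, p)) for p in permutations(features)), key=evaluate)
print(best)
```

With these invented scores the best mapping assigns temperature to hue, pressure to luminance, and wind speed to size.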

  3. Perceptual issues in scientific visualization

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Proffitt, Dennis R.

    1989-01-01

    In order to develop effective tools for scientific visualization, consideration must be given to the perceptual competencies, limitations, and biases of the human operator. Perceptual psychology has amassed a rich body of research on these issues and can lend insight to the development of visualization techniques. Within a perceptual psychological framework, the computer display screen can best be thought of as a special kind of impoverished visual environment. Guidelines can be gleaned from the psychological literature to help visualization tool designers avoid ambiguities and/or illusions in the resulting data displays.

  4. Neural correlates of face gender discrimination learning.

    PubMed

    Su, Junzhu; Tan, Qingleng; Fang, Fang

    2013-04-01

    Using combined psychophysics and event-related potentials (ERPs), we investigated the effect of perceptual learning on face gender discrimination and probed the neural correlates of the learning effect. Human subjects were trained to perform a gender discrimination task with male or female faces. Before and after training, they were tested with the trained faces and other faces of the same and opposite genders. ERPs responding to these faces were recorded. Psychophysical results showed that training significantly improved subjects' discrimination performance and that the improvement was specific to the trained gender, as well as to the trained identities. The training effect indicates that learning occurs at two levels: the category level (gender) and the exemplar level (identity). ERP analyses showed that the gender and identity learning were associated with an N170 latency reduction at the left occipital-temporal area and an N170 amplitude reduction at the right occipital-temporal area, respectively. These findings provide evidence for the facilitation model and the sharpening model of neuronal plasticity from visual experience, suggesting a faster processing speed and a sparser representation of faces induced by perceptual learning.

  5. “Global” visual training and extent of transfer in amblyopic macaque monkeys

    PubMed Central

    Kiorpes, Lynne; Mangal, Paul

    2015-01-01

    Perceptual learning is gaining acceptance as a potential treatment for amblyopia in adults and children beyond the critical period. Many perceptual learning paradigms result in very specific improvement that does not generalize beyond the training stimulus, closely related stimuli, or visual field location. To be of use in amblyopia, a less specific effect is needed. To address this problem, we designed a more general training paradigm intended to effect improvement in visual sensitivity across tasks and domains. We used a “global” visual stimulus, random dot motion direction discrimination with 6 training conditions, and tested for posttraining improvement on a motion detection task and 3 spatial domain tasks (contrast sensitivity, Vernier acuity, Glass pattern detection). Four amblyopic macaques practiced the motion discrimination with their amblyopic eye for at least 20,000 trials. All showed improvement, defined as a change of at least a factor of 2, on the trained task. In addition, all animals showed improvements in sensitivity on at least some of the transfer test conditions, mainly the motion detection task; transfer to the spatial domain was inconsistent but best at fine spatial scales. However, the improvement on the transfer tasks was largely not retained at long-term follow-up. Our generalized training approach is promising for amblyopia treatment, but sustaining improved performance may require additional intervention. PMID:26505868

  6. Training directionally selective motion pathways can significantly improve reading efficiency

    NASA Astrophysics Data System (ADS)

    Lawton, Teri

    2004-06-01

    This study examined whether perceptual learning at early levels of visual processing would facilitate learning at higher levels of processing. This was examined by determining whether training the motion pathways by practicing left-right movement discrimination, as found previously, would improve the reading skills of inefficient readers significantly more than another computer game, a word discrimination game, or the reading program offered by the school. This controlled validation study found that practicing left-right movement discrimination rapidly for 5-10 minutes twice a week over 15 weeks doubled reading fluency and significantly improved all reading skills by more than one grade level, whereas inefficient readers in the control groups barely improved on these reading skills. In contrast to previous studies of perceptual learning, these experiments show that perceptual learning of direction discrimination significantly improved reading skills determined at higher levels of cognitive processing, thereby generalizing to a new task. The deficits in reading performance and attentional focus experienced by people who struggle when reading are suggested to result from an information overload caused by timing deficits in the direction-selectivity network proposed by De Valois et al. (2000), deficits that resolve following practice on direction discrimination. This study found that practicing direction discrimination rapidly transitions the inefficient 7-year-old reader to an efficient reader.

  7. The Glenn A. Fry Award Lecture 2012: Plasticity of the visual system following central vision loss.

    PubMed

    Chung, Susana T L

    2013-06-01

    Following the onset of central vision loss, most patients develop an eccentric retinal location outside the affected macular region, the preferred retinal locus (PRL), as their new reference for visual tasks. The first goal of this article is to present behavioral evidence showing the presence of experience-dependent plasticity in people with central vision loss. The evidence includes the presence of oculomotor re-referencing of fixational saccades to the PRL; the characteristics of the shape of the crowding zone (spatial region within which the presence of other objects affects the recognition of a target) at the PRL are more "foveal-like" instead of resembling those of the normal periphery; and the change in the shape of the crowding zone at a para-PRL location that includes a component referenced to the PRL. These findings suggest that there is a shift in the referencing locus of the oculomotor and the sensory visual system from the fovea to the PRL for people with central vision loss, implying that the visual system for these individuals is still plastic and can be modified through experiences. The second goal of the article is to demonstrate the feasibility of applying perceptual learning, which capitalizes on the presence of plasticity, as a tool to improve functional vision for people with central vision loss. Our finding that visual function could improve with perceptual learning presents an exciting possibility for the development of an alternative rehabilitative strategy for people with central vision loss.

  8. A connectionist model of category learning by individuals with high-functioning autism spectrum disorder.

    PubMed

    Dovgopoly, Alexander; Mercado, Eduardo

    2013-06-01

    Individuals with autism spectrum disorder (ASD) show atypical patterns of learning and generalization. We explored the possible impacts of autism-related neural abnormalities on perceptual category learning using a neural network model of visual cortical processing. When applied to experiments in which children or adults were trained to classify complex two-dimensional images, the model can account for atypical patterns of perceptual generalization. This is only possible, however, when individual differences in learning are taken into account. In particular, analyses performed with a self-organizing map suggested that individuals with high-functioning ASD show two distinct generalization patterns: one that is comparable to typical patterns, and a second in which there is almost no generalization. The model leads to novel predictions about how individuals will generalize when trained with simplified input sets and can explain why some researchers have failed to detect learning or generalization deficits in prior studies of category learning by individuals with autism. On the basis of these simulations, we propose that deficits in basic neural plasticity mechanisms may be sufficient to account for the atypical patterns of perceptual category learning and generalization associated with autism, but they do not account for why only a subset of individuals with autism would show such deficits. If variations in performance across subgroups reflect heterogeneous neural abnormalities, then future behavioral and neuroimaging studies of individuals with ASD will need to account for such disparities.
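As a rough illustration of the self-organizing-map analysis mentioned above, the sketch below trains a minimal one-dimensional SOM on scalar inputs, the kind of tool that can separate data into distinct clusters of learning profiles. All sizes, rates, and data here are invented, not the study's model or data.

```python
import random

random.seed(1)

# Minimal 1-D self-organizing map (illustrative only)
n_units = 5
weights = [random.random() for _ in range(n_units)]  # random initial unit weights

def train(data, epochs=50, lr=0.3, radius=1):
    for _ in range(epochs):
        for x in data:
            # Best-matching unit: the unit whose weight is closest to the input
            bmu = min(range(n_units), key=lambda i: abs(weights[i] - x))
            # Pull the BMU and its grid neighbors toward the input
            for i in range(n_units):
                if abs(i - bmu) <= radius:
                    weights[i] += lr * (x - weights[i])

data = [random.random() for _ in range(100)]  # invented scalar "profiles"
train(data)
print([round(w, 2) for w in weights])
```

After training, the unit weights spread out to cover the input distribution; a real analysis would use higher-dimensional inputs, a 2-D grid, and decaying learning rate and radius.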

  9. A model of color vision with a robot system

    NASA Astrophysics Data System (ADS)

    Wang, Haihui

    2006-01-01

    In this paper, we propose to generalize the saccade target method and state that perceptual stability in general arises by learning the effects one's actions have on sensor responses. The apparent stability of color percepts across saccadic eye movements can be explained by positing that perception involves observing how sensory input changes in response to motor activities. The changes related to self-motion can be learned and, once learned, used to form stable percepts. The variation of sensor data in response to a motor act is therefore a requirement for stable perception rather than something that has to be compensated for in order to perceive a stable world. In this paper, we provide a simple implementation of this sensory-motor contingency view of perceptual stability. We show how a straightforward application of a temporal-difference reinforcement learning technique yields color percepts that are stable across saccadic eye movements, even though the raw sensor input may change radically.
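The sensory-motor contingency idea, learning what sensor change each motor command predicts, can be sketched with a delta-rule update in the spirit of the temporal-difference technique the abstract mentions. The commands, values, and noise levels below are invented for the example.

```python
import random

random.seed(0)

alpha = 0.1                                  # learning rate
prediction = {"left": 0.0, "right": 0.0}     # learned expected sensor change per saccade
true_change = {"left": -1.0, "right": 1.0}   # world-determined, unknown to the learner

for _ in range(500):
    cmd = random.choice(["left", "right"])
    observed = true_change[cmd] + random.gauss(0, 0.05)  # noisy sensor change
    # Delta rule: move the prediction toward the observed change
    prediction[cmd] += alpha * (observed - prediction[cmd])

print({k: round(v, 2) for k, v in prediction.items()})
```

Once the self-motion-induced change is predicted accurately, any residual deviation between observed and predicted input can be attributed to the world, supporting a stable percept.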

  10. Are neural correlates of visual consciousness retinotopic?

    PubMed

    ffytche, Dominic H; Pins, Delphine

    2003-11-14

    Some visual neurons code what we see, their defining characteristic being a response profile which mirrors conscious percepts rather than veridical sensory attributes. One issue yet to be resolved is whether, within a given cortical area, conscious visual perception relates to diffuse activity across the entire population of such cells or focal activity within the sub-population mapping the location of the perceived stimulus. Here we investigate the issue in the human brain with fMRI, using a threshold stimulation technique to dissociate perceptual from non-perceptual activity. Our results point to a retinotopic organisation of perceptual activity in early visual areas, with independent perceptual activations for different regions of visual space.

  11. Expertise facilitates the transfer of anticipation skill across domains.

    PubMed

    Rosalie, Simon M; Müller, Sean

    2014-02-01

    It is unclear whether perceptual-motor skill transfer is based upon similarity between the learning and transfer domains per identical elements theory, or facilitated by an understanding of underlying principles in accordance with general principle theory. Here, the predictions of identical elements theory, general principle theory, and aspects of a recently proposed model for the transfer of perceptual-motor skill with respect to expertise in the learning and transfer domains are examined. The capabilities of expert karate athletes, near-expert karate athletes, and novices to anticipate and respond to stimulus skills derived from taekwondo and Australian football were investigated in ecologically valid contexts using an in situ temporal occlusion paradigm and complex whole-body perceptual-motor skills. Results indicated that the karate experts and near-experts are as capable of using visual information to anticipate and guide motor skill responses as domain experts and near-experts in the taekwondo transfer domain, but only karate experts could perform like domain experts in the Australian football transfer domain. Findings suggest that transfer of anticipation skill is based upon expertise and an understanding of principles but may be supplemented by similarities that exist between the stimulus and response elements of the learning and transfer domains.

  12. Audiovisual speech perception development at varying levels of perceptual processing

    PubMed Central

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-01-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children. PMID:27106318

  14. The malleability of emotional perception: Short-term plasticity in retinotopic neurons accompanies the formation of perceptual biases to threat.

    PubMed

    Thigpen, Nina N; Bartsch, Felix; Keil, Andreas

    2017-04-01

    Emotional experience changes visual perception, leading to the prioritization of sensory information associated with threats and opportunities. These emotional biases have been extensively studied by basic and clinical scientists, but their underlying mechanism is not known. The present study combined measures of brain-electric activity and autonomic physiology to establish how threat biases emerge in human observers. Participants viewed stimuli designed to differentially challenge known properties of different neuronal populations along the visual pathway: location, eye, and orientation specificity. Biases were induced using aversive conditioning with only 1 combination of eye, orientation, and location predicting a noxious loud noise and replicated in a separate group of participants. Selective heart rate-orienting responses for the conditioned threat stimulus indicated bias formation. Retinotopic visual brain responses were persistently and selectively enhanced after massive aversive learning for only the threat stimulus and dissipated after extinction training. These changes were location-, eye-, and orientation-specific, supporting the hypothesis that short-term plasticity in primary visual neurons mediates the formation of perceptual biases to threat. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Understanding Perceptual Differences; An Exploration of Neurological-Perceptual Roots of Learning Disabilities with Suggestions for Diagnosis and Treatment.

    ERIC Educational Resources Information Center

    Monroe, George E.

    In exploring the bases of learning disabilities, the following areas are considered: a working definition of perceptual handicaps; the relationship of perceptual handicaps to IQ; diagnosing perceptual handicaps; effective learning experiences for the perceptually handicapped child; and recommendations for developing new curricula. The appendixes…

  16. Adjective semantics, world knowledge and visual context: comprehension of size terms by 2- to 7-year-old Dutch-speaking children.

    PubMed

    Tribushinina, Elena

    2013-06-01

    The interpretation of size terms involves constructing contextually-relevant reference points by combining visual cues with knowledge of typical object sizes. This study aims to establish at what age children learn to integrate these two sources of information in the interpretation process and tests comprehension of the Dutch adjectives groot 'big' and klein 'small' by 2- to 7-year-old children. The results demonstrate that there is a gradual increase in the ability to inhibit visual cues and to use world knowledge for interpreting size terms. 2- and 3-year-old children only used the extremes of the perceptual range as reference points. From age four onwards, children, like adults, used a cut-off point in the mid-zone of a series. From age five on, children were able to integrate world knowledge and perceptual context. Although 7-year-olds could make subtle distinctions between sizes of various object classes, their performance on incongruent items was not yet adult-like.

  17. Prediction of HDR quality by combining perceptually transformed display measurements with machine learning

    NASA Astrophysics Data System (ADS)

    Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott

    2017-09-01

    We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high-quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit depth, and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model based solely on physically measured display characteristics, and a perceptual model that transforms physical parameters using human visual system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICtCp), which consists of the PQ luminance non-linearity (ST2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF, and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model, and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter were investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated, and we found that models based on the PQ non-linearity performed better.
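The performance metrics named above, RMSE and Pearson correlation between predicted and subjective quality, are straightforward to compute. A minimal sketch with invented ratings (not the study's data):

```python
import math

def rmse(pred, actual):
    """Root-mean-square error between predictions and subjective ratings."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred))

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical subjective quality ratings vs. model predictions
actual = [3.1, 4.0, 2.2, 4.8, 3.5]
pred = [3.0, 4.2, 2.5, 4.6, 3.4]
print(round(rmse(pred, actual), 3), round(pearson(pred, actual), 3))
```

A model comparison like the one in the abstract reduces to computing these two numbers (plus Spearman's rank correlation) for each candidate model on held-out ratings.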

  18. Can reading-specific training stimuli improve the effect of perceptual learning on peripheral reading speed?

    PubMed

    Bernard, Jean-Baptiste; Arunkumar, Amit; Chung, Susana T L

    2012-08-01

    In a previous study, Chung, Legge, and Cheung (2004) showed that training using repeated presentation of trigrams (sequences of three random letters) resulted in an increase in the size of the visual span (the number of letters recognized in a glance) and reading speed in the normal periphery. In this study, we asked whether we could optimize the benefit of trigram training on reading speed by using trigrams more specific to the reading task (i.e., trigrams frequently used in the English language) and presenting them according to their frequencies of occurrence in normal English usage and observers' performance. Averaged across seven observers, our training paradigm (4 days of training) increased the size of the visual span by 6.44 bits, with an accompanying 63.6% increase in the maximum reading speed, compared with the values before training. However, these benefits were not statistically different from those of Chung, Legge, and Cheung (2004) using a random-trigram training paradigm. Our findings confirm the possibility of increasing the size of the visual span and reading speed in the normal periphery with perceptual learning, and suggest that the benefits of training on letter recognition and maximum reading speed may not be linked to the types of letter strings presented during training. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Which visual functions depend on intermediate visual regions? Insights from a case of developmental visual form agnosia.

    PubMed

    Gilaie-Dotan, Sharon

    2016-03-01

    A key question in visual neuroscience is the causal link between specific brain areas and perceptual functions: which regions are necessary for which visual functions? While the contributions of primary visual cortex and high-level visual regions to visual perception have been extensively investigated, the contribution of intermediate visual areas (e.g., V2/V3) to visual processes remains unclear. Here I review more than 20 visual functions (early, mid-, and high-level) of LG, a developmental visual agnosic and prosopagnosic young adult, whose intermediate visual regions function in a significantly abnormal fashion, as revealed through extensive fMRI and ERP investigations. While, expectedly, some of LG's visual functions are significantly impaired, some are surprisingly normal (e.g., stereopsis, color, reading, biological motion). During the eight-year testing period described here, LG trained on a perceptual learning paradigm that succeeded in improving some but not all of his visual functions. Following LG's visual performance, and taking into account additional findings in the field, I propose a framework for how different visual areas contribute to different visual functions, with an emphasis on intermediate visual regions. Thus, although rewiring and plasticity in the brain can occur during development to overcome and compensate for hindering developmental factors, LG's case seems to indicate that some visual functions are much less dependent on a strict hierarchical flow than others and can develop normally in spite of abnormal mid-level visual areas, suggesting that they are less dependent on intermediate visual regions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Prototype learning and dissociable categorization systems in Alzheimer's disease.

    PubMed

    Heindel, William C; Festa, Elena K; Ott, Brian R; Landy, Kelly M; Salmon, David P

    2013-08-01

    Recent neuroimaging studies suggest that prototype learning may be mediated by at least two dissociable memory systems depending on the mode of acquisition, with A/Not-A prototype learning dependent upon a perceptual representation system located within posterior visual cortex and A/B prototype learning dependent upon a declarative memory system associated with medial temporal and frontal regions. The degree to which patients with Alzheimer's disease (AD) can acquire new categorical information may therefore critically depend upon the mode of acquisition. The present study examined A/Not-A and A/B prototype learning in AD patients using procedures that allowed direct comparison of learning across tasks. Despite impaired explicit recall of category features in all tasks, patients showed differential patterns of category acquisition across tasks. First, AD patients demonstrated impaired prototype induction along with intact exemplar classification under incidental A/Not-A conditions, suggesting that the loss of functional connectivity within visual cortical areas disrupted the integration processes supporting prototype induction within the perceptual representation system. Second, AD patients demonstrated intact prototype induction but impaired exemplar classification during A/B learning under observational conditions, suggesting that this form of prototype learning is dependent on a declarative memory system that is disrupted in AD. Third, the surprisingly intact classification of both prototypes and exemplars during A/B learning under trial-and-error feedback conditions suggests that AD patients shifted control from their deficient declarative memory system to a feedback-dependent procedural memory system when training conditions allowed. Taken together, these findings serve not only to increase our understanding of category learning in AD, but also to provide new insights into the ways in which different memory systems interact to support the acquisition of categorical knowledge. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Transfer of learning between unimanual and bimanual rhythmic movement coordination: transfer is a function of the task dynamic.

    PubMed

    Snapp-Childs, Winona; Wilson, Andrew D; Bingham, Geoffrey P

    2015-07-01

    Under certain conditions, learning can transfer from a trained task to an untrained version of that same task. However, it is as yet unclear what those conditions are or why learning transfers when it does. Coordinated rhythmic movement is a valuable model system for investigating transfer because we have a model of the underlying task dynamic that includes perceptual coupling between the limbs being coordinated. The model predicts that (1) coordinated rhythmic movements, both bimanual and unimanual, are organised with respect to relative motion information for relative phase in the coupling function; (2) unimanual coordination is less stable than bimanual coordination because the coupling is unidirectional rather than bidirectional; and (3) learning a new coordination is primarily about learning to perceive and use the relevant information, which, with equal perceptual improvement due to training, yields equal transfer of learning from bimanual to unimanual coordination and vice versa [but, given prediction (2), the resulting performance is also conditioned by the intrinsic stability of each task]. In the present study, two groups were trained to produce 90° either unimanually or bimanually and were tested with respect to learning (namely, improved performance in the trained 90° coordination task and improved visual discrimination of 90°) and transfer of learning (to the other, untrained 90° coordination task). Both groups improved in the task condition in which they were trained and in their ability to visually discriminate 90°, and this learning transferred to the untrained condition. When scaled by the relative intrinsic stability of each task, transfer levels were found to be equal. The results are discussed in the context of the perception-action approach to learning and performance.

  3. Learning to recognize face shapes through serial exploration.

    PubMed

    Wallraven, Christian; Whittingstall, Lisa; Bülthoff, Heinrich H

    2013-05-01

    Human observers are experts at visual face recognition due to specialized visual mechanisms for face processing that evolve with perceptual expertise. Such expertise has long been attributed to the use of configural processing, enabled by fast, parallel encoding of the visual information in the face. Here we tested whether participants can learn to efficiently recognize faces that are serially encoded, that is, when only partial visual information about the face is available at any given time. For this, ten participants were trained in gaze-restricted face recognition, in which face masks were viewed through a small aperture controlled by the participant. Tests comparing trained with untrained performance revealed (1) a marked improvement in terms of speed and accuracy, (2) a gradual development of configural processing strategies, and (3) participants' ability to rapidly learn and accurately recognize novel exemplars. This performance pattern demonstrates that participants were able to learn new strategies to compensate for the serial nature of information encoding. The results are discussed in terms of expertise acquisition and relevance for other sensory modalities relying on serial encoding.

  4. Learning, retention, and generalization of haptic categories

    NASA Astrophysics Data System (ADS)

    Do, Phuong T.

    This dissertation explored how haptic concepts are learned, retained, and generalized to the same or a different modality. Participants learned to classify objects into three categories either visually or haptically via different training procedures, followed by an immediate or delayed transfer test. Experiment I involved visual versus haptic learning and transfer. Intermodal matching between vision and haptics was investigated in Experiment II. Experiments III and IV examined intersensory conflict in within- and between-category bimodal situations to determine the degree of perceptual dominance between sight and touch. Experiment V explored the intramodal relationship between similarity and categorization in a psychological space, as revealed by MDS analysis of similarity judgments. Major findings were: (1) visual examination resulted in relatively higher performance accuracy than haptic learning; (2) systematic training produced better category learning of haptic concepts across all modality conditions; (3) the category prototypes were rated as newer than any transfer stimulus following learning, both immediately and after a week's delay; and (4) although they converged at the apex of two transformational trajectories, the category prototypes became more central to their respective categories and increasingly structured as a function of learning. Implications for theories of multimodal similarity and categorization behavior are discussed in terms of discrimination learning, sensory integration, and dominance relations.

  5. A neural mechanism of dynamic gating of task-relevant information by top-down influence in primary visual cortex.

    PubMed

    Kamiyama, Akikazu; Fujita, Kazuhisa; Kashimori, Yoshiki

    2016-12-01

    Visual recognition involves bidirectional information flow, consisting of bottom-up information coding from the retina and top-down information coding from higher visual areas. Recent studies have demonstrated the involvement of early visual areas such as the primary visual area (V1) in recognition and memory formation. V1 neurons are not passive transformers of sensory inputs but work as adaptive processors, changing their function according to behavioral context. Top-down signals affect the tuning properties of V1 neurons and contribute to the gating of sensory information relevant to behavior. However, little is known about the neuronal mechanism underlying the gating of task-relevant information in V1. To address this issue, we focus on task-dependent tuning modulations of V1 neurons in two perceptual learning tasks. We develop a model of V1 that receives feedforward input from the lateral geniculate nucleus and top-down input from a higher visual area. We show here that a change in the balance between excitation and inhibition in V1 connectivity is necessary for gating task-relevant information in V1. The balance change accounts well for the modulations of the tuning characteristics and temporal properties of V1 neuronal responses. We also show that the balance change in V1 connectivity is shaped by top-down signals with temporal correlations reflecting the perceptual strategies of the two tasks. We propose a learning mechanism by which the synaptic balance is modulated. To conclude, top-down signals change the synaptic balance between excitation and inhibition in V1 connectivity, enabling an early visual area such as V1 to gate context-dependent information across multiple task performances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Carving nature at its joints or cutting its effective loops? On the dangers of trying to disentangle intertwined mental processes.

    PubMed

    Goldstone, Robert L; de Leeuw, Joshua R; Landy, David H

    2016-01-01

    Attention is often inextricably intertwined with perception, and it is deployed not only to spatial regions, but also to sensory dimensions, learned dimensions, and learned complex configurations. Firestone & Scholl's (F&S) tactic of isolating visual perceptual processes from attention and action has the negative consequence of neglecting interactions that are critically important for allowing people to perceive their world in efficient and useful ways.

  7. The Effect of Visual Perceptual Load on Auditory Awareness in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Tillmann, Julian; Olguin, Andrea; Tuomainen, Jyrki; Swettenham, John

    2015-01-01

    Recent work on visual selective attention has shown that individuals with Autism Spectrum Disorder (ASD) demonstrate an increased perceptual capacity. The current study examined whether increasing visual perceptual load also has less of an effect on auditory awareness in children with ASD. Participants performed either a high- or low load version…

  8. Processing reafferent and exafferent visual information for action and perception.

    PubMed

    Reichenbach, Alexandra; Diedrichsen, Jörn

    2015-01-01

    A recent study suggests that reafferent hand-related visual information utilizes a privileged, attention-independent processing channel for motor control. This process was termed visuomotor binding to reflect its proposed function: linking visual reafferences to the corresponding motor control centers. Here, we ask whether the advantage of processing reafferent over exafferent visual information is a specific feature of the motor processing stream or whether the improved processing also benefits the perceptual processing stream. Human participants performed a bimanual reaching task in a cluttered visual display, and one of the visual hand cursors could be displaced laterally during the movement. We measured the rapid feedback responses of the motor system as well as matched perceptual judgments of which cursor was displaced. Perceptual judgments were either made by watching the visual scene without moving or made simultaneously with the reaching task, such that the perceptual processing stream could also profit from the specialized processing of reafferent information in the latter case. Our results demonstrate that perceptual judgments in the heavily cluttered visual environment were improved when performed based on reafferent information. Even in this case, however, the filtering capability of the perceptual processing stream suffered more from the increasing complexity of the visual scene than that of the motor processing stream. These findings suggest partly shared and partly segregated processing of reafferent information for vision for motor control versus vision for perception.

  9. Learning to see again: biological constraints on cortical plasticity and the implications for sight restoration technologies

    NASA Astrophysics Data System (ADS)

    Beyeler, Michael; Rokem, Ariel; Boynton, Geoffrey M.; Fine, Ione

    2017-10-01

    The ‘bionic eye’—so long a dream of the future—is finally becoming a reality with retinal prostheses available to patients in both the US and Europe. However, clinical experience with these implants has made it apparent that the visual information provided by these devices differs substantially from normal sight. Consequently, the ability of patients to learn to make use of this abnormal retinal input plays a critical role in whether or not some functional vision is successfully regained. The goal of the present review is to summarize the vast basic science literature on developmental and adult cortical plasticity, with an emphasis on how this literature might relate to the field of prosthetic vision. We begin by describing the distortion and information loss likely to be experienced by visual prosthesis users. We then define cortical plasticity and perceptual learning, and describe what is known, and what is unknown, about visual plasticity across the hierarchy of brain regions involved in visual processing, and across different stages of life. We close by discussing what is known about brain plasticity in sight restoration patients and discuss biological mechanisms that might eventually be harnessed to improve visual learning in these patients.

  10. Improved probabilistic inference as a general learning mechanism with action video games.

    PubMed

    Green, C Shawn; Pouget, Alexandre; Bavelier, Daphne

    2010-09-14

    Action video game play benefits performance in an array of sensory, perceptual, and attentional tasks that go well beyond the specifics of game play [1-9]. That a training regimen may induce improvements in so many different skills is notable because the majority of studies on training-induced learning report improvements on the trained task but limited transfer to other, even closely related, tasks ([10], but see also [11-13]). Here we ask whether improved probabilistic inference may explain such broad transfer. Using a visual perceptual decision-making task [14, 15], the present study shows for the first time that action video game experience does indeed improve probabilistic inference. A neural model of this task [16] establishes how changing a single parameter, namely the strength of the connections between the neural layer providing the momentary evidence and the layer integrating the evidence over time, captures the improvements in action gamers' behavior. These results were established in a visual task and also in a novel auditory task, indicating generalization across modalities. Thus, improved probabilistic inference provides a general mechanism for why action video game playing enhances performance in a wide variety of tasks. In addition, this mechanism may serve as a signature of training regimens that are likely to produce transfer of learning. Copyright © 2010 Elsevier Ltd. All rights reserved.
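    The single-parameter account described in this record (a stronger connection between the layer providing momentary evidence and the layer integrating it over time) can be caricatured with a toy evidence-accumulation sketch. The `decide` function and every parameter value below are illustrative assumptions, not the published neural model:

```python
import random

def decide(gain, coherence=0.1, threshold=30.0, seed=1, max_steps=100000):
    """Toy evidence-accumulation decision model (hypothetical parameters).

    Noisy momentary evidence favoring the correct choice by `coherence`
    is integrated over time; `gain` plays the role of the single
    connection strength between the evidence layer and the integrator.
    Returns (chose_positive, number_of_steps_to_threshold)."""
    rng = random.Random(seed)
    acc, steps = 0.0, 0
    while abs(acc) < threshold and steps < max_steps:
        evidence = coherence + rng.gauss(0.0, 1.0)  # momentary evidence
        acc += gain * evidence                      # integration layer
        steps += 1
    return acc > 0, steps

# With an identical noise sequence, strengthening the evidence-to-
# integrator connection reaches the decision threshold sooner.
_, slow = decide(gain=1.0)
_, fast = decide(gain=2.0)
print(fast < slow)  # True
```

    In this caricature a larger gain scales both signal and noise, so it acts like a lower decision threshold: decisions come faster for the same input stream, which is one simple way a single connection-strength change can alter decision behavior across tasks.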

  11. Acetylcholine and Olfactory Perceptual Learning

    ERIC Educational Resources Information Center

    Wilson, Donald A.; Fletcher, Max L.; Sullivan, Regina M.

    2004-01-01

    Olfactory perceptual learning is a relatively long-term, learned increase in perceptual acuity, and has been described in both humans and animals. Data from recent electrophysiological studies have indicated that olfactory perceptual learning may be correlated with changes in odorant receptive fields of neurons in the olfactory bulb and piriform…

  12. Accurate expectancies diminish perceptual distraction during visual search

    PubMed Central

    Sy, Jocelyn L.; Guerin, Scott A.; Stegman, Anna; Giesbrecht, Barry

    2014-01-01

    The load theory of visual attention proposes that efficient selective perceptual processing of task-relevant information during search is determined automatically by the perceptual demands of the display. If the perceptual demands required to process task-relevant information are not enough to consume all available capacity, then the remaining capacity automatically and exhaustively “spills over” to task-irrelevant information. The spill-over of perceptual processing capacity increases the likelihood that task-irrelevant information will impair performance. In two visual search experiments, we tested the automaticity of the allocation of perceptual processing resources by measuring the extent to which the processing of task-irrelevant distracting stimuli was modulated by both perceptual load and top-down expectations, using behavior, functional magnetic resonance imaging, and electrophysiology. Expectations were generated using a trial-by-trial cue that provided information about the likely load of the upcoming visual search task. When the cues were valid, behavioral interference was eliminated and the influence of load on frontoparietal and visual cortical responses was attenuated relative to when the cues were invalid. In conditions in which task-irrelevant information interfered with performance and modulated visual activity, individual differences in mean blood oxygenation level dependent responses measured from the left intraparietal sulcus were negatively correlated with individual differences in the severity of distraction. These results are consistent with the interpretation that a top-down biasing mechanism interacts with perceptual load to support filtering of task-irrelevant information. PMID:24904374

  13. Effects of Experimentally Imposed Noise on Task Performance of Black Children Attending Day Care Centers Near Elevated Subway Trains.

    ERIC Educational Resources Information Center

    Hambrick-Dixon, Priscilla Janet

    1986-01-01

    Investigates whether an experimentally imposed 80dB (A) noise affected psychomotor, serial memory words and pictures, incidental memory, visual recall, paired associates, perceptual learning, and coding performance of five-year-old Black children attending day care centers near and far from elevated subways. (HOD)

  14. Enhancing Digital Access to Learning Materials for Canadians with Perceptual Disabilities: A Pilot Study. Research Report

    ERIC Educational Resources Information Center

    Lockerby, Christina; Breau, Rachel; Zuvela, Biljana

    2006-01-01

    By exploring the experiences of participants with DAISY (Digital Accessible Information System) Talking Books, the study reported in this article not only discovered how people who are blind, visually impaired, and/or print-disabled read DAISY books, but also identified participants' perceptions of DAISY as being particularly useful in their…

  15. Network model of top-down influences on local gain and contextual interactions in visual cortex.

    PubMed

    Piëch, Valentin; Li, Wu; Reeke, George N; Gilbert, Charles D

    2013-10-22

    The visual system uses continuity as a cue for grouping oriented line segments that define object boundaries in complex visual scenes. Many studies support the idea that long-range intrinsic horizontal connections in early visual cortex contribute to this grouping. Top-down influences in primary visual cortex (V1) play an important role in the processes of contour integration and perceptual saliency, with contour-related responses being task dependent. This suggests an interaction between recurrent inputs to V1 and intrinsic connections within V1 that enables V1 neurons to respond differently under different conditions. We created a network model that simulates parametrically the control of local gain by hypothetical top-down modification of local recurrence. These local gain changes, as a consequence of network dynamics in our model, enable modulation of contextual interactions in a task-dependent manner. Our model displays contour-related facilitation of neuronal responses and differential foreground vs. background responses over the neuronal ensemble, accounting for the perceptual pop-out of salient contours. It quantitatively reproduces the results of single-unit recording experiments in V1, highlighting salient contours and replicating the time course of contextual influences. We show by means of phase-plane analysis that the model operates stably even in the presence of large inputs. Our model shows how a simple form of top-down modulation of the effective connectivity of intrinsic cortical connections among biophysically realistic neurons can account for some of the response changes seen in perceptual learning and task switching.

  16. Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review

    PubMed Central

    Hidaka, Souta; Teramoto, Wataru; Sugita, Yoichi

    2015-01-01

    Research regarding crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that multisensory information (vision, audition, tactile sensation, and so on) can perceptually interact in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view regarding crossmodal interactions holds that vision is superior to audition in spatial processing, but audition is dominant over vision in temporal processing. Similarly, vision is considered to have dominant effects over the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound can have a driving effect on visual motion perception. Moreover, studies regarding perceptual associative learning have reported that, after association is established between a sound sequence without spatial information and visual motion information, the sound sequence can trigger visual motion perception. Other sensory information, such as motor action or smell, has also exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that similar activation patterns can be observed in several brain areas, including the motion processing areas, between spatiotemporal information from different sensory modalities. Based on these findings, we suggest that multimodal information may mutually interact in spatiotemporal processing in the perception of the external world and that common underlying perceptual and neural mechanisms may exist for spatiotemporal processing. PMID:26733827

  17. Perceptual Biases in Relation to Paranormal and Conspiracy Beliefs

    PubMed Central

    van Elk, Michiel

    2015-01-01

    Previous studies have shown that one’s prior beliefs have a strong effect on perceptual decision-making and attentional processing. The present study extends these findings by investigating how individual differences in paranormal and conspiracy beliefs are related to perceptual and attentional biases. Two field studies were conducted in which visitors of a paranormal fair completed a perceptual decision-making task (i.e., the face/house categorization task; Experiment 1) or a visual attention task (i.e., the global/local processing task; Experiment 2). In the first experiment, it was found that skeptics, compared to believers, more often incorrectly categorized ambiguous face stimuli as representing a house, indicating that disbelief rather than belief in the paranormal is driving the bias observed for the categorization of ambiguous stimuli. In the second experiment, it was found that skeptics showed a classical ‘global-to-local’ interference effect, whereas believers in conspiracy theories were characterized by a stronger ‘local-to-global’ interference effect. The present study shows that individual differences in paranormal and conspiracy beliefs are associated with perceptual and attentional biases, thereby extending the growing body of work in this field indicating effects of cultural learning on basic perceptual processes. PMID:26114604

  19. Cross-sensory reference frame transfer in spatial memory: the case of proprioceptive learning.

    PubMed

    Avraamides, Marios N; Sarrou, Mikaella; Kelly, Jonathan W

    2014-04-01

    In three experiments, we investigated whether the information available to visual perception prior to encoding the locations of objects in a path through proprioception would influence the reference direction from which the spatial memory was formed. Participants walked a path whose orientation was misaligned to the walls of the enclosing room and to the square sheet that covered the path prior to learning (Exp. 1) and, in addition, to the intrinsic structure of a layout studied visually prior to walking the path and to the orientation of stripes drawn on the floor (Exps. 2 and 3). Despite the availability of prior visual information, participants constructed spatial memories that were aligned with the canonical axes of the path, as opposed to the reference directions primed by visual experience. The results are discussed in the context of previous studies documenting transfer of reference frames within and across perceptual modalities.

  20. The dissociation of perception and cognition in children with early brain damage.

    PubMed

    Stiers, Peter; Vandenbussche, Erik

    2004-03-01

    Reduced non-verbal compared to verbal intelligence is used in many outcome studies of perinatal complications as an indication of visual perceptual impairment. To investigate whether this is justified, we re-examined data sets from two previous studies, both of which used the visual perceptual battery L94. The first study comprised 47 children at risk for cerebral visual impairment due to prematurity or birth asphyxia, who had been administered the McCarthy Scales of Children's abilities. The second study evaluated visual perceptual abilities in 82 children with a physical disability. These children's intellectual ability had been assessed with the Wechsler Intelligence Scale for Children-Revised and/or Wechsler Pre-school and Primary Scale of Intelligence-Revised. No significant association was found between visual perceptual impairment and (1) reduced non-verbal to verbal intelligence; (2) increased non-verbal subtest scatter; or (3) non-verbal subtest profile deviation, for any of the intelligence scales. This result suggests that non-verbal intelligence subtests assess a complex of cognitive skills that are distinct from visual perceptual abilities, and that this assessment is not hampered by deficits in perceptual abilities as manifested in these children.

  1. Perceptual uncertainty facilitates creative discovery

    NASA Astrophysics Data System (ADS)

    Tseng, Winger Sei-Wo

    2018-06-01

    In this study, unstructured and ambiguous figures used as visual stimuli were classified as having high, moderate, or low ambiguity and presented to participants. The experiment was designed to explore how the perceptual ambiguity inherent in the presented visual cues affects novice and expert designers' visual discovery during design development. A total of 42 participants took part: half were recruited from non-design departments as novices, and the remaining half were recruited from design companies and regarded as experts. The participants were tasked with discovering a sub-shape in the presented sketch and using this shape as a cue to design a concept. To this end, two types of sub-shapes were defined: known feature sub-shapes and innovative feature sub-shapes (IFSs). The experimental results provide strong evidence that with an increase in the ambiguity of the visual stimuli, expert designers produce more ideas and IFSs, whereas novice designers produce fewer. The capability of expert designers to exploit visual ambiguity is interesting, and its absence in novice designers suggests that this capability is likely a unique skill gained, at least in part, through professional practice. Our results can be applied in design learning and education to generalize the principles and strategies of visual discovery used by expert designers during concept sketching, in order to train novice designers in addressing design problems.

  2. The roles of perceptual and conceptual information in face recognition.

    PubMed

    Schwartz, Linoy; Yovel, Galit

    2016-11-01

    The representation of familiar objects comprises perceptual information about their visual properties as well as the conceptual knowledge that we have about them. What is the relative contribution of perceptual and conceptual information to object recognition? Here, we examined this question by designing a face familiarization protocol during which participants were either exposed to rich perceptual information (viewing each face from different angles and illuminations) or provided with conceptual information (associating each face with a different name). Both conditions were compared with single-view faces presented with no labels. Recognition was tested on new images of the same identities to assess whether learning generated a view-invariant representation. Results showed better recognition of novel images of the learned identities following association of a face with a name label, but no enhancement following exposure to multiple face views. Whereas these findings may be consistent with the role of category learning in object recognition, face recognition was better for labeled faces only when faces were associated with person-related labels (name, occupation), but not with person-unrelated labels (object names or symbols). These findings suggest that association of meaningful conceptual information with an image shifts its representation from an image-based percept to a view-invariant concept. They further indicate that the role of conceptual information should be considered to account for the superior recognition that we have for familiar faces and objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. The effect of normal aging and age-related macular degeneration on perceptual learning.

    PubMed

    Astle, Andrew T; Blighe, Alan J; Webb, Ben S; McGraw, Paul V

    2015-01-01

    We investigated whether perceptual learning could be used to improve peripheral word identification speed. The relationship between the magnitude of learning and age was established in normal participants to determine whether perceptual learning effects are age invariant. We then investigated whether training could lead to improvements in patients with age-related macular degeneration (AMD). Twenty-eight participants with normal vision and five participants with AMD trained on a word identification task. They were required to identify three-letter words, presented 10° from fixation. To standardize crowding across each of the letters that made up the word, words were flanked laterally by randomly chosen letters. Word identification performance was measured psychophysically using a staircase procedure. Significant improvements in peripheral word identification speed were demonstrated following training (71% ± 18%). Initial task performance was correlated with age, with older participants having poorer performance. However, older adults learned more rapidly such that, following training, they reached the same level of performance as their younger counterparts. As a function of number of trials completed, patients with AMD learned at an equivalent rate as age-matched participants with normal vision. Improvements in word identification speed were maintained at least 6 months after training. We have demonstrated that temporal aspects of word recognition can be improved in peripheral vision with training across a range of ages and these learned improvements are relatively enduring. However, training targeted at other bottlenecks to peripheral reading ability, such as visual crowding, may need to be incorporated to optimize this approach.

  4. The effect of normal aging and age-related macular degeneration on perceptual learning

    PubMed Central

    Astle, Andrew T.; Blighe, Alan J.; Webb, Ben S.; McGraw, Paul V.

    2015-01-01

    We investigated whether perceptual learning could be used to improve peripheral word identification speed. The relationship between the magnitude of learning and age was established in normal participants to determine whether perceptual learning effects are age invariant. We then investigated whether training could lead to improvements in patients with age-related macular degeneration (AMD). Twenty-eight participants with normal vision and five participants with AMD trained on a word identification task. They were required to identify three-letter words, presented 10° from fixation. To standardize crowding across each of the letters that made up the word, words were flanked laterally by randomly chosen letters. Word identification performance was measured psychophysically using a staircase procedure. Significant improvements in peripheral word identification speed were demonstrated following training (71% ± 18%). Initial task performance was correlated with age, with older participants having poorer performance. However, older adults learned more rapidly such that, following training, they reached the same level of performance as their younger counterparts. As a function of number of trials completed, patients with AMD learned at an equivalent rate as age-matched participants with normal vision. Improvements in word identification speed were maintained at least 6 months after training. We have demonstrated that temporal aspects of word recognition can be improved in peripheral vision with training across a range of ages and these learned improvements are relatively enduring. However, training targeted at other bottlenecks to peripheral reading ability, such as visual crowding, may need to be incorporated to optimize this approach. PMID:26605694
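
    The staircase procedure mentioned in the two records above adapts stimulus difficulty to the observer's responses. A minimal sketch of a generic 2-down/1-up rule (the studies' exact rule, parameters, and step sizes are not given, so these are illustrative assumptions):

```python
def staircase_step(level, correct, streak, step_size=1):
    """Generic 2-down/1-up adaptive staircase step.

    Returns the next stimulus level and correct-response streak:
    any error makes the task easier (level goes up); two consecutive
    correct responses make it harder (level goes down). This rule
    converges near the ~70.7%-correct threshold.
    """
    if not correct:
        return level + step_size, 0   # error: easier, reset streak
    streak += 1
    if streak == 2:
        return level - step_size, 0   # two correct: harder, reset streak
    return level, streak

# illustrative run: one error, then two correct responses
level, streak = staircase_step(10, False, 1)          # -> (11, 0)
level, streak = staircase_step(level, True, streak)   # -> (11, 1)
level, streak = staircase_step(level, True, streak)   # -> (10, 0)
print(level)   # 10
```

    Repeated application of this rule makes the presentation level hover around the observer's threshold, which is how adaptive psychophysical measurements of this kind typically track performance.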

  5. An object-based visual attention model for robotic applications.

    PubMed

    Yu, Yuanlong; Mann, George K I; Gosine, Raymond G

    2010-10-01

    By extending the integrated competition hypothesis, this paper presents an object-based visual attention model, which selects one object of interest using low-dimensional features, so that visual perception starts with a fast attentional selection procedure. The proposed attention model involves seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between top-down and bottom-up ways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically when one object is attended. A dual-coding object representation consisting of local and global codings is proposed. Intensity, color, and orientation features are used to build the local coding, and a contour feature is employed to constitute the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. The mediation between automatic bottom-up competition and conscious top-down biasing is then performed to yield a location-based saliency map. By combining location-based saliency within each proto-object, the proto-object-based saliency is evaluated. The most salient proto-object is selected for attention, and it is finally put into the perceptual completion processing module to yield a complete object region. This model has been applied to distinct robotic tasks: detection of task-specific stationary and moving objects. Experimental results under different conditions are shown to validate this model.
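
    The proto-object-based saliency step described above can be sketched as pooling location-based saliency within each candidate region and attending to the highest-scoring one. This is a simplified illustration (mean pooling over hypothetical region masks), not the authors' implementation:

```python
def most_salient_object(saliency, proto_objects):
    """Pool location-based saliency within each proto-object region
    (mean over its pixel coordinates) and return the index of the
    most salient region."""
    def score(region):
        values = [saliency[y][x] for (y, x) in region]
        return sum(values) / len(values)
    return max(range(len(proto_objects)), key=lambda i: score(proto_objects[i]))

# toy 2x2 saliency map with two hypothetical proto-object masks
saliency_map = [[0.1, 0.2],
                [0.3, 0.9]]
regions = [[(0, 0), (0, 1)],   # top row: mean saliency 0.15
           [(1, 0), (1, 1)]]   # bottom row: mean saliency 0.60
print(most_salient_object(saliency_map, regions))   # 1
```

    The selected index would then feed a completion stage that recovers the full object region, as the model description outlines.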

  6. Visual perceptual load reduces auditory detection in typically developing individuals but not in individuals with autism spectrum disorders.

    PubMed

    Tillmann, Julian; Swettenham, John

    2017-02-01

    Previous studies examining selective attention in individuals with autism spectrum disorder (ASD) have yielded conflicting results, some suggesting superior focused attention (e.g., on visual search tasks), others demonstrating greater distractibility. This pattern could be accounted for by the proposal (derived by applying the Load theory of attention, e.g., Lavie, 2005) that ASD is characterized by an increased perceptual capacity (Remington, Swettenham, Campbell, & Coleman, 2009). Recent studies in the visual domain support this proposal. Here we hypothesize that ASD involves an enhanced perceptual capacity that also operates across sensory modalities, and test this prediction, for the first time using a signal detection paradigm. Seventeen neurotypical (NT) and 15 ASD adolescents performed a visual search task under varying levels of visual perceptual load while simultaneously detecting presence/absence of an auditory tone embedded in noise. Detection sensitivity (d') for the auditory stimulus was similarly high for both groups in the low visual perceptual load condition (e.g., 2 items: p = .391, d = 0.31, 95% confidence interval [CI] [-0.39, 1.00]). However, at a higher level of visual load, auditory d' reduced for the NT group but not the ASD group, leading to a group difference (p = .002, d = 1.2, 95% CI [0.44, 1.96]). As predicted, when visual perceptual load was highest, both groups then showed a similarly low auditory d' (p = .9, d = 0.05, 95% CI [-0.65, 0.74]). These findings demonstrate that increased perceptual capacity in ASD operates across modalities. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
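
    Detection sensitivity d′, the measure reported above, is conventionally computed as the difference between the z-transformed hit and false-alarm rates. A minimal sketch with hypothetical rates (not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# hypothetical rates for illustration only
print(round(d_prime(0.85, 0.20), 2))   # ~1.88
```

    On this scale, equal hit and false-alarm rates give d′ = 0 (no sensitivity), which is why the matched group values in the high-load condition indicate that both groups' capacity was exhausted.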

  7. The Glenn A. Fry Award Lecture 2012: Plasticity of the Visual System Following Central Vision Loss

    PubMed Central

    Chung, Susana T. L.

    2013-01-01

    Following the onset of central vision loss, most patients develop an eccentric retinal location outside the affected macular region, the preferred retinal locus (PRL), as their new reference for visual tasks. The first goal of this paper is to present behavioral evidence showing the presence of experience-dependent plasticity in people with central vision loss. The evidence includes (1) the presence of oculomotor re-referencing of fixational saccades to the PRL; (2) the characteristics of the shape of the crowding zone (spatial region within which the presence of other objects affects the recognition of a target) at the PRL are more “foveal-like” instead of resembling those of the normal periphery; and (3) the change in the shape of the crowding zone at a para-PRL location that includes a component referenced to the PRL. These findings suggest that there is a shift in the referencing locus of the oculomotor and the sensory visual system from the fovea to the PRL for people with central vision loss, implying that the visual system for these individuals is still plastic and can be modified through experiences. The second goal of the paper is to demonstrate the feasibility of applying perceptual learning, which capitalizes on the presence of plasticity, as a tool to improve functional vision for people with central vision loss. Our finding that visual function could improve with perceptual learning presents an exciting possibility for the development of an alternative rehabilitative strategy for people with central vision loss. PMID:23670125

  8. Learning to see, seeing to learn: visual aspects of sensemaking

    NASA Astrophysics Data System (ADS)

    Russell, Daniel M.

    2003-06-01

    When one says "I see," what is usually meant is "I understand." But what does it mean to create a sense of understanding of a large, complex problem, one with many interlocking pieces, sometimes ill-fitting data, and the occasional bit of contradictory information? The traditional computer science perspective on helping people towards understanding is to provide an armamentarium of tools and techniques - databases, query tools, and a variety of graphing methods. As a field, we have an overly simple perspective on what it means to grapple with real information. In practice, people who try to make sense of some thing (say, the life sciences, the Middle East, the large-scale structure of the universe, their taxes) are faced with a complex collection of information, some in easy-to-digest structured forms, but with many relevant parts scattered hither and yon, in forms and shapes too difficult to manage. To create an understanding, we find that people create representations of complex information. Yet using representations relies on fairly sophisticated perceptual practices. These practices are in no way preordained, but subject to the kinds of perceptual and cognitive phenomena we see in everyday life. In order to understand our information environments, we need to learn to perceive these perceptual elements, and understand when they do, and do not, work to our advantage. A more powerful approach to the problem of supporting realistic sensemaking practice is to design information environments that accommodate both the world's information realities and people's cognitive characteristics. This paper argues that visual aspects of representation use often dominate sensemaking behavior, and illustrates this by showing three sensemaking tools we have built that take advantage of this property.

  9. Efficacy of a perceptual and visual-motor skill intervention program for students with dyslexia.

    PubMed

    Fusco, Natália; Germano, Giseli Donadon; Capellini, Simone Aparecida

    2015-01-01

    To verify the efficacy of a perceptual and visual-motor skill intervention program for students with dyslexia. The participants were 20 students from the third to fifth grade of a public elementary school in Marília, São Paulo, aged from 8 years to 11 years and 11 months, distributed into the following groups: Group I (GI; 10 students with developmental dyslexia) and Group II (GII; 10 students with good academic performance). A perceptual and visual-motor intervention program was applied, which comprised exercises for visual-motor coordination, visual discrimination, visual memory, visual-spatial relationship, shape constancy, sequential memory, visual figure-ground coordination, and visual closure. In pre- and post-testing situations, both groups were submitted to the Test of Visual-Perceptual Skills (TVPS-3), and the quality of handwriting was analyzed using the Dysgraphia Scale. Statistical analysis showed that both groups of students had dysgraphia in the pretesting situation. In visual perceptual skills, GI presented a lower performance compared to GII, as well as in the quality of writing. After undergoing the intervention program, GI increased its average of correct answers on the TVPS-3 and improved the quality of handwriting. The developed intervention program proved appropriate for students with dyslexia and showed positive effects, because it improved visual perception skills and quality of writing for students with developmental dyslexia.

  10. Perceptual Learning: Use-Dependent Cortical Plasticity.

    PubMed

    Li, Wu

    2016-10-14

    Our perceptual abilities significantly improve with practice. This phenomenon, known as perceptual learning, offers an ideal window for understanding use-dependent changes in the adult brain. Different experimental approaches have revealed a diversity of behavioral and cortical changes associated with perceptual learning, and different interpretations have been given with respect to the cortical loci and neural processes responsible for the learning. Accumulated evidence has begun to put together a coherent picture of the neural substrates underlying perceptual learning. The emerging view is that perceptual learning results from a complex interplay between bottom-up and top-down processes, causing a global reorganization across cortical areas specialized for sensory processing, engaged in top-down attentional control, and involved in perceptual decision making. Future studies should focus on the interactions among cortical areas for a better understanding of the general rules and mechanisms underlying various forms of skill learning.

  11. Explaining the Timing of Natural Scene Understanding with a Computational Model of Perceptual Categorization

    PubMed Central

    Sofer, Imri; Crouzet, Sébastien M.; Serre, Thomas

    2015-01-01

    Observers can rapidly perform a variety of visual tasks such as categorizing a scene as open, as outdoor, or as a beach. Although we know that different tasks are typically associated with systematic differences in behavioral responses, to date, little is known about the underlying mechanisms. Here, we implemented a single integrated paradigm that links perceptual processes with categorization processes. Using a large image database of natural scenes, we trained machine-learning classifiers to derive quantitative measures of task-specific perceptual discriminability based on the distance between individual images and different categorization boundaries. We showed that the resulting discriminability measure accurately predicts variations in behavioral responses across categorization tasks and stimulus sets. We further used the model to design an experiment, which challenged previous interpretations of the so-called “superordinate advantage.” Overall, our study suggests that observed differences in behavioral responses across rapid categorization tasks reflect natural variations in perceptual discriminability. PMID:26335683
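
    The discriminability measure described above rests on the distance between an image and a categorization boundary. It can be sketched as the signed, norm-scaled margin of a feature vector under a linear classifier, where larger absolute distances correspond to easier categorizations. The weights and features below are hypothetical, not the authors' trained classifiers:

```python
import math

def discriminability(features, weights, bias):
    """Signed distance from a feature vector to the linear category
    boundary w.x + b = 0, scaled by ||w||; larger absolute values
    mean the image sits farther from the boundary."""
    margin = sum(w * x for w, x in zip(weights, features)) + bias
    return margin / math.sqrt(sum(w * w for w in weights))

# hypothetical 2-D feature vector and classifier weights
print(discriminability([1.0, 1.0], [3.0, 4.0], 0.0))   # 1.4
```

    In the study's framework, such per-image distances are what predict the behavioral differences observed across categorization tasks.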

  12. Beta oscillations define discrete perceptual cycles in the somatosensory domain.

    PubMed

    Baumgarten, Thomas J; Schnitzler, Alfons; Lange, Joachim

    2015-09-29

    Whether seeing a movie, listening to a song, or feeling a breeze on the skin, we coherently experience these stimuli as continuous, seamless percepts. However, there are rare perceptual phenomena that argue against continuous perception but, instead, suggest discrete processing of sensory input. Empirical evidence supporting such a discrete mechanism, however, remains scarce and comes entirely from the visual domain. Here, we demonstrate compelling evidence for discrete perceptual sampling in the somatosensory domain. Using magnetoencephalography (MEG) and a tactile temporal discrimination task in humans, we find that oscillatory alpha- and low beta-band (8-20 Hz) cycles in primary somatosensory cortex represent neurophysiological correlates of discrete perceptual cycles. Our results agree with several theoretical concepts of discrete perceptual sampling and empirical evidence of perceptual cycles in the visual domain. Critically, these results show that discrete perceptual cycles are not domain-specific, and thus restricted to the visual domain, but extend to the somatosensory domain.

  13. High perceptual load leads to both reduced gain and broader orientation tuning

    PubMed Central

    Stolte, Moritz; Bahrami, Bahador; Lavie, Nilli

    2014-01-01

    Due to its limited capacity, visual perception depends on the allocation of attention. The resultant phenomena of inattentional blindness, accompanied by reduced sensory visual cortex response to unattended stimuli in conditions of high perceptual load in the attended task, are now well established (Lavie, 2005; Lavie, 2010, for reviews). However, the underlying mechanisms for these effects remain to be elucidated. Specifically, is reduced perceptual processing under high perceptual load a result of reduced sensory signal gain, broader tuning, or both? We examined this question with psychophysical measures of orientation tuning under different levels of perceptual load in the task performed. Our results show that increased perceptual load leads to both reduced sensory signal and broadening of tuning. These results clarify the effects of attention on elementary visual perception and suggest that high perceptual load is critical for attentional effects on sensory tuning. PMID:24610952
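
    The two effects reported above, reduced gain and broader tuning, can be illustrated with a Gaussian orientation-tuning curve whose peak height (gain) and width (bandwidth) are free parameters. The parameterization and values below are illustrative, not the paper's fitted model:

```python
import math

def tuning_response(theta, preferred, gain, bandwidth):
    """Gaussian orientation-tuning curve: response to orientation theta
    (degrees) for a channel preferring `preferred`, with peak `gain`
    and width `bandwidth`. On the account above, high perceptual load
    both lowers the gain and widens the bandwidth."""
    return gain * math.exp(-((theta - preferred) ** 2) / (2.0 * bandwidth ** 2))

# hypothetical channels: low load (high gain, narrow) vs high load
print(tuning_response(0.0, 0.0, 1.0, 10.0))   # peak response under low load: 1.0
print(tuning_response(0.0, 0.0, 0.6, 20.0))   # peak response under high load: 0.6
```

    A channel with lower gain and larger bandwidth responds more weakly at its preferred orientation while discriminating nearby orientations less sharply, which is the combined signature the psychophysical measurements point to.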

  14. Training haptic stiffness discrimination: time course of learning with or without visual information and knowledge of results.

    PubMed

    Teodorescu, Kinneret; Bouchigny, Sylvain; Korman, Maria

    2013-08-01

    In this study, we explored the time course of haptic stiffness discrimination learning and how it was affected by two experimental factors, the addition of visual information and/or knowledge of results (KR) during training. Stiffness perception may integrate both haptic and visual modalities. However, in many tasks, the visual field is typically occluded, forcing stiffness perception to depend exclusively on haptic information. No studies to date have addressed the time course of haptic stiffness perceptual learning. Using a virtual environment (VE) haptic interface and a two-alternative forced-choice discrimination task, the haptic stiffness discrimination ability of 48 participants was tested across 2 days. Each day included two haptic test blocks separated by a training block. Additional visual information and/or KR were manipulated between participants during training blocks. Practice repetitions alone induced significant improvement in haptic stiffness discrimination. Between days, accuracy improved slightly, but decision time performance deteriorated. The addition of visual information and/or KR had only temporary effects on decision time, without affecting the time course of haptic discrimination learning. Learning in haptic stiffness discrimination appears to evolve through at least two distinctive phases: a single training session resulted in both immediate and latent learning. This learning was not affected by the training manipulations inspected. Training skills in VE in spaced sessions can be beneficial for tasks in which haptic perception is critical, such as surgery procedures, when the visual field is occluded. However, training protocols for such tasks should account for the low impact of multisensory information and KR.

  15. Hierarchical acquisition of visual specificity in spatial contextual cueing.

    PubMed

    Lie, Kin-Pou

    2015-01-01

    Spatial contextual cueing refers to the improvement of visual search performance when invariant associations between target locations and distractor spatial configurations are learned incidentally. Using the instance theory of automatization and the reverse hierarchy theory of visual perceptual learning, this study explores the acquisition of visual specificity in spatial contextual cueing. Two experiments in which detailed visual features were irrelevant for distinguishing between spatial contexts found that spatial contextual cueing was visually generic in difficult trials when the trials were not preceded by easy trials (Experiment 1) but that spatial contextual cueing progressed to visual specificity when difficult trials were preceded by easy trials (Experiment 2). These findings support reverse hierarchy theory, which predicts that even when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing can progress to visual specificity if the stimuli remain constant, the task is difficult, and difficult trials are preceded by easy trials. However, these findings are inconsistent with instance theory, which predicts that when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing will not progress to visual specificity. This study concludes that the acquisition of visual specificity in spatial contextual cueing is more plausibly hierarchical, rather than instance-based.

  16. Effects of kinesthetic and cutaneous stimulation during the learning of a viscous force field.

    PubMed

    Rosati, Giulio; Oscari, Fabio; Pacchierotti, Claudio; Prattichizzo, Domenico

    2014-01-01

    Haptic stimulation can help humans learn perceptual motor skills, but the precise way in which it influences the learning process has not yet been clarified. This study investigates the role of the kinesthetic and cutaneous components of haptic feedback during the learning of a viscous curl field, also taking into account the influence of visual feedback. We present the results of an experiment in which 17 subjects were asked to make reaching movements while grasping a joystick and wearing a pair of cutaneous devices. Each device was able to provide cutaneous contact forces through a moving platform. The subjects received visual feedback about the joystick's position. During the experiment, the system delivered a perturbation through (1) full haptic stimulation, (2) kinesthetic stimulation alone, (3) cutaneous stimulation alone, (4) altered visual feedback, or (5) altered visual feedback plus cutaneous stimulation. Conditions 1, 2, and 3 were also tested with the cancellation of the visual feedback of position error. Results indicate that kinesthetic stimuli played a primary role during motor adaptation to the viscous field, which is a fundamental premise to motor learning and rehabilitation. On the other hand, cutaneous stimulation alone appeared not to bring significant direct or adaptation effects, although it helped in reducing direct effects when used in addition to kinesthetic stimulation. The experimental conditions with visual cancellation of position error showed slower adaptation rates, indicating that visual feedback actively contributes to the formation of internal models. However, modest learning effects were detected when the visual information was used to render the viscous field.

  17. Perceptual learning in sensorimotor adaptation.

    PubMed

    Darainy, Mohammad; Vahdat, Shahabeddin; Ostry, David J

    2013-11-01

    Motor learning often involves situations in which the somatosensory targets of movement are, at least initially, poorly defined, as for example, in learning to speak or learning the feel of a proper tennis serve. Under these conditions, motor skill acquisition presumably requires perceptual as well as motor learning. That is, it engages both the progressive shaping of sensory targets and associated changes in motor performance. In the present study, we test the idea that perceptual learning alters somatosensory function and in so doing produces changes to human motor performance and sensorimotor adaptation. Subjects in these experiments undergo perceptual training in which a robotic device passively moves the subject's arm on one of a set of fan-shaped trajectories. Subjects are required to indicate whether the robot moved the limb to the right or the left and feedback is provided. Over the course of training both the perceptual boundary and acuity are altered. The perceptual learning is observed to improve both the rate and extent of learning in a subsequent sensorimotor adaptation task and the benefits persist for at least 24 h. The improvement in the present studies varies systematically with changes in perceptual acuity and is obtained regardless of whether the perceptual boundary shift serves to systematically increase or decrease error on subsequent movements. The beneficial effects of perceptual training are found to be substantially dependent on reinforced decision-making in the sensory domain. Passive-movement training on its own is less able to alter subsequent learning in the motor system. Overall, this study suggests perceptual learning plays an integral role in motor learning.
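
    The perceptual boundary and acuity measures described above are commonly read off a cumulative-Gaussian psychometric function: the boundary is the 50% point and acuity corresponds to the slope. A sketch under that common convention (not necessarily the authors' exact fit):

```python
from statistics import NormalDist

def p_rightward(x, boundary, acuity):
    """Cumulative-Gaussian psychometric function: probability of a
    'rightward' judgment for a displacement x, with the perceptual
    boundary at the 50% point and acuity as the slope parameter
    (smaller sigma = sharper discrimination)."""
    return NormalDist(mu=boundary, sigma=acuity).cdf(x)

# hypothetical values: at the boundary itself, judgments are 50/50
print(round(p_rightward(0.5, 0.5, 1.0), 2))   # 0.5
```

    In such a framework, training-induced shifts in the boundary parameter and reductions in sigma capture the boundary and acuity changes the study reports.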

  18. Time-resolved neuroimaging of visual short term memory consolidation by post-perceptual attention shifts.

    PubMed

    Hecht, Marcus; Thiemann, Ulf; Freitag, Christine M; Bender, Stephan

    2016-01-15

    Post-perceptual cues can enhance visual short term memory encoding even after the offset of the visual stimulus. However, both the mechanisms by which the sensory stimulus characteristics are buffered as well as the mechanisms by which post-perceptual selective attention enhances short term memory encoding remain unclear. We analyzed late post-perceptual event-related potentials (ERPs) in visual change detection tasks (100ms stimulus duration) by high-resolution ERP analysis to elucidate these mechanisms. The effects of early and late auditory post-cues (300ms or 850ms after visual stimulus onset) as well as the effects of a visual interference stimulus were examined in 27 healthy right-handed adults. Focusing attention with post-perceptual cues at both latencies significantly improved memory performance, i.e. sensory stimulus characteristics were available for up to 850ms after stimulus presentation. Passive watching of the visual stimuli without auditory cue presentation evoked a slow negative wave (N700) over occipito-temporal visual areas. N700 was strongly reduced by a visual interference stimulus which impeded memory maintenance. In contrast, contralateral delay activity (CDA) still developed in this condition after the application of auditory post-cues and was thereby dissociated from N700. CDA and N700 seem to represent two different processes involved in short term memory encoding. While N700 could reflect visual post processing by automatic attention attraction, CDA may reflect the top-down process of searching selectively for the required information through post-perceptual attention. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Perceptual load influences selective attention across development.

    PubMed

    Couperus, Jane W

    2011-09-01

    Research suggests that visual selective attention develops across childhood. However, there is relatively little understanding of the neurological changes that accompany this development, particularly in the context of adult theories of selective attention, such as N. Lavie's (1995) perceptual load theory of attention. This study examined visual selective attention across development from 7 years of age to adulthood. Specifically, the author examined whether changes in processing as a function of selective attention are similarly influenced by perceptual load across development. Participants were asked to complete a task at either low or high perceptual load while processing of an unattended probe stimulus was examined using event-related potentials. Similar to adults, children and teens showed reduced processing of the unattended stimulus at the P1 visual component as perceptual load increased. However, although there were no qualitative differences in processing, there were quantitative differences, with shorter P1 latencies in teens and adults than in children, suggesting increases in the speed of processing across development. In addition, younger children did not need as high a perceptual load as adults to achieve the same difference in performance between low and high perceptual load. Thus, this study demonstrates that although there are developmental changes in visual selective attention, the mechanisms by which visual selective attention is achieved in children may share similarities with those of adults.

  20. Advanced Computer Image Generation Techniques Exploiting Perceptual Characteristics

    DTIC Science & Technology

    1981-08-01

    the capabilities/limitations of the human visual perceptual processing system and improve the training effectiveness of visual simulation systems...Myron Braunstein of the University of California at Irvine performed all the work in the perceptual area. Mr. Timothy A. Zimmerlin contributed the... work . Thus, while some areas are related, each is resolved independently in order to focus on the basic perceptual limitation. In addition, the

  1. Thalamocortical dynamics of the McCollough effect: boundary-surface alignment through perceptual learning.

    PubMed

    Grossberg, Stephen; Hwang, Seungwoo; Mingolla, Ennio

    2002-05-01

    This article further develops the FACADE neural model of 3-D vision and figure-ground perception to quantitatively explain properties of the McCollough effect (ME). The model proposes that many ME data result from visual system mechanisms whose primary function is to adaptively align, through learning, boundary and surface representations that are positionally shifted due to the process of binocular fusion. For example, binocular boundary representations are shifted by binocular fusion relative to monocular surface representations, yet the boundaries must become positionally aligned with the surfaces to control binocular surface capture and filling-in. The model also includes perceptual reset mechanisms that use habituative transmitters in opponent processing circuits. Thus the model shows how ME data may arise from a combination of mechanisms that have a clear functional role in biological vision. Simulation results with a single set of parameters quantitatively fit data from 13 experiments that probe the nature of achromatic/chromatic and monocular/binocular interactions during induction of the ME. The model proposes how perceptual learning, opponent processing, and habituation at both monocular and binocular surface representations are involved, including early thalamocortical sites. In particular, it explains the anomalous ME utilizing these multiple processing sites. Alternative models of the ME are also summarized and compared with the present model.

  2. Finding an emotional face in a crowd: emotional and perceptual stimulus factors influence visual search efficiency.

    PubMed

    Lundqvist, Daniel; Bruce, Neil; Öhman, Arne

    2015-01-01

    In this article, we examine how emotional and perceptual stimulus factors influence visual search efficiency. In an initial task, we ran a visual search task using a large number of target/distractor emotion combinations. In two subsequent tasks, we then assessed measures of perceptual (rated and computational distances) and emotional (rated valence, arousal, and potency) stimulus properties. In a series of regression analyses, we then explored the degree to which target salience (the size of target/distractor dissimilarities) on these emotional and perceptual measures predicts the outcome on search efficiency measures (response times and accuracy) from the visual search task. The results show that both emotional and perceptual stimulus salience contribute to visual search efficiency. Among the emotional measures, salience on arousal measures was more influential than valence salience. The importance of the arousal factor may help explain the contradictory history of results within this field.

  3. Complete scanpaths analysis toolbox.

    PubMed

    Augustyniak, Piotr; Mikrut, Zbigniew

    2006-01-01

    This paper presents a complete open software environment for the control, data processing, and assessment of visual experiments. Visual experiments are widely used in research on human perception physiology, and the results are applicable to various visual information-based man-machine interfaces, human-emulated automatic visual systems, and scanpath-based learning of perceptual habits. The toolbox is designed for the Matlab platform and supports an infra-red reflection-based eyetracker in calibration and scanpath analysis modes. Toolbox procedures are organized in three layers: the lower one, communicating with the eyetracker output file; the middle one, detecting scanpath events on a physiological basis; and the upper one, consisting of experiment schedule scripts, statistics, and summaries. Several examples of visual experiments carried out using the presented toolbox complete the paper.

  4. The Perceptual Root of Object-Based Storage: An Interactive Model of Perception and Visual Working Memory

    ERIC Educational Resources Information Center

    Gao, Tao; Gao, Zaifeng; Li, Jie; Sun, Zhongqiang; Shen, Mowei

    2011-01-01

    Mainstream theories of visual perception assume that visual working memory (VWM) is critical for integrating online perceptual information and constructing coherent visual experiences in changing environments. Given the dynamic interaction between online perception and VWM, we propose that how visual information is processed during visual…

  5. Is Statistical Learning Constrained by Lower Level Perceptual Organization?

    PubMed Central

    Emberson, Lauren L.; Liu, Ran; Zevin, Jason D.

    2013-01-01

    In order for statistical information to aid in complex developmental processes such as language acquisition, learning from higher-order statistics (e.g. across successive syllables in a speech stream to support segmentation) must be possible while perceptual abilities (e.g. speech categorization) are still developing. The current study examines how perceptual organization interacts with statistical learning. Adult participants were presented with multiple exemplars from novel, complex sound categories designed to reflect some of the spectral complexity and variability of speech. These categories were organized into sequential pairs and presented such that higher-order statistics, defined based on sound categories, could support stream segmentation. Perceptual similarity judgments and multi-dimensional scaling revealed that participants only perceived three perceptual clusters of sounds and thus did not distinguish the four experimenter-defined categories, creating a tension between lower level perceptual organization and higher-order statistical information. We examined whether the resulting pattern of learning is more consistent with statistical learning being “bottom-up,” constrained by the lower levels of organization, or “top-down,” such that higher-order statistical information of the stimulus stream takes priority over the perceptual organization, and perhaps influences perceptual organization. We consistently find evidence that learning is constrained by perceptual organization. Moreover, participants generalize their learning to novel sounds that occupy a similar perceptual space, suggesting that statistical learning occurs based on regions of or clusters in perceptual space. Overall, these results reveal a constraint on learning of sound sequences, such that statistical information is determined based on lower level organization. These findings have important implications for the role of statistical learning in language acquisition. PMID:23618755
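    The higher-order statistic at issue in such segmentation studies is the forward transitional probability between successive categories, which is high within an experimenter-defined pair and low across pair boundaries. A minimal sketch using a hypothetical category stream (the labels and pairings are illustrative, not the study's stimuli):

```python
from collections import Counter

def transitional_probabilities(stream):
    """Estimate forward transitional probabilities P(B | A) for each
    adjacent pair (A, B) in a sequence of category labels."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

# Hypothetical stream built from the pairs (A,B) and (C,D) concatenated in
# varying order: within-pair transitional probabilities are high and
# between-pair probabilities are low -- the statistic that supports
# stream segmentation.
stream = list("ABCDABABCDCDABCD")
tps = transitional_probabilities(stream)
```

In the study's terms, whether segmentation tracks these category-level probabilities or the (fewer) perceptual clusters is exactly the bottom-up versus top-down question.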

  6. Eye movements and attention: The role of pre-saccadic shifts of attention in perception, memory and the control of saccades

    PubMed Central

    Gersch, Timothy M.; Schnitzer, Brian S.; Dosher, Barbara A.; Kowler, Eileen

    2012-01-01

    Saccadic eye movements and perceptual attention work in a coordinated fashion to allow selection of the objects, features or regions with the greatest momentary need for limited visual processing resources. This study investigates perceptual characteristics of pre-saccadic shifts of attention during a sequence of saccades using the visual manipulations employed to study mechanisms of attention during maintained fixation. The first part of this paper reviews studies of the connections between saccades and attention, and their significance for both saccadic control and perception. The second part presents three experiments that examine the effects of pre-saccadic shifts of attention on vision during sequences of saccades. Perceptual enhancements at the saccadic goal location relative to non-goal locations were found across a range of stimulus contrasts, with either perceptual discrimination or detection tasks, with either single or multiple perceptual targets, and regardless of the presence of external noise. The results show that the preparation of saccades can evoke a variety of attentional effects, including attentionally-mediated changes in the strength of perceptual representations, selection of targets for encoding in visual memory, exclusion of external noise, or changes in the levels of internal visual noise. The visual changes evoked by saccadic planning make it possible for the visual system to effectively use saccadic eye movements to explore the visual environment. PMID:22809798

  7. Distinct Contributions of the Magnocellular and Parvocellular Visual Streams to Perceptual Selection

    PubMed Central

    Denison, Rachel N.; Silver, Michael A.

    2014-01-01

    During binocular rivalry, conflicting images presented to the two eyes compete for perceptual dominance, but the neural basis of this competition is disputed. In interocular switch (IOS) rivalry, rival images periodically exchanged between the two eyes generate one of two types of perceptual alternation: 1) a fast, regular alternation between the images that is time-locked to the stimulus switches and has been proposed to arise from competition at lower levels of the visual processing hierarchy, or 2) a slow, irregular alternation spanning multiple stimulus switches that has been associated with higher levels of the visual system. The existence of these two types of perceptual alternation has been influential in establishing the view that rivalry may be resolved at multiple hierarchical levels of the visual system. We varied the spatial, temporal, and luminance properties of IOS rivalry gratings and found, instead, an association between fast, regular perceptual alternations and processing by the magnocellular stream and between slow, irregular alternations and processing by the parvocellular stream. The magnocellular and parvocellular streams are two early visual pathways that are specialized for the processing of motion and form, respectively. These results provide a new framework for understanding the neural substrates of binocular rivalry that emphasizes the importance of parallel visual processing streams, and not only hierarchical organization, in the perceptual resolution of ambiguities in the visual environment. PMID:21861685

  8. Rehabilitation of Visual and Perceptual Dysfunction after Severe Traumatic Brain Injury

    DTIC Science & Technology

    2013-03-01

    Award Number: W81XWH-11-2-0082. Title: Rehabilitation of Visual and Perceptual Dysfunction after Severe Traumatic Brain Injury. Reporting period: March 2012 – 28 February 2013.

  9. Integrating mechanisms of visual guidance in naturalistic language production.

    PubMed

    Coco, Moreno I; Keller, Frank

    2015-05-01

    Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study demonstrating that three types of guidance, perceptual, conceptual, and structural, interact to control visual attention. In a cued language production experiment, we manipulate perceptual (scene clutter) and conceptual (cue animacy) guidance and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan pattern complexity. Furthermore, the eye-voice spans of the cued object and its perceptual competitor are similar, with latency mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention.

  10. Learning effects of dynamic postural control by auditory biofeedback versus visual biofeedback training.

    PubMed

    Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi

    2017-10-01

    Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the most effective sensory information for BF-based motor learning of postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training on dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23 Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and of a sound, respectively, according to the distance between the COP and the target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory, but not the visual, BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
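    Equalizing the two feedback signals "according to Stevens' power law" amounts to inverting the perceived-magnitude relation psi = k * phi^a separately for each modality, so that a given COP-target error produces the same perceived change in circle size and in loudness. A sketch under assumed, illustrative exponents (the abstract does not report the values used):

```python
def physical_for_percept(target_psi, exponent, k=1.0):
    """Invert Stevens' power law (psi = k * phi**exponent) to find the
    physical intensity phi that yields a desired perceived magnitude."""
    return (target_psi / k) ** (1.0 / exponent)

# Illustrative exponents only; published values vary across stimulus
# conditions and studies.
VISUAL_EXP = 0.7       # assumed exponent for the visual circle display
LOUDNESS_EXP = 0.67    # classic exponent for loudness vs. sound pressure

def equalized_feedback(error, modality):
    """Scale a COP-target error so the *perceived* feedback change is the
    same for the visual-circle and auditory-volume displays."""
    exp = VISUAL_EXP if modality == "visual" else LOUDNESS_EXP
    return physical_for_percept(error, exp)
```

Because the two exponents differ, the same error maps to different physical intensities per modality, but to the same perceived magnitude once each display's power law is applied.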

  11. Effects of lorazepam on visual perceptual abilities.

    PubMed

    Pompéia, S; Pradella-Hallinan, M; Manzano, G M; Bueno, O F A

    2008-04-01

    To evaluate the effects of an acute dose of the benzodiazepine (BZ) lorazepam in young healthy volunteers on five distinguishable visual perception abilities determined by previous factor-analytic studies. This was a double-blind, cross-over study of acute oral doses of lorazepam (2 mg) and placebo in young healthy volunteers. We focused on a set of paper-and-pencil tests of visual perceptual abilities that load on five correlated but distinguishable factors (Spatial Visualization, Spatial Relations, Perceptual Speed, Closure Speed, and Closure Flexibility). Some other tests (DSST, immediate and delayed recall of prose; measures of subjective mood alterations) were used to control for the classic BZ-induced effects. Lorazepam impaired performance in the DSST and delayed recall of prose, increased subjective sedation, and impaired performance on tasks of all abilities except Spatial Visualization and Closure Speed. Only the impairments in Perceptual Speed (Identical Pictures task) and delayed recall of prose were not explained by sedation. Acute administration of lorazepam, in a dose that impaired episodic memory, selectively affected different visual perceptual abilities both before and after controlling for sedation. Central executive demands and sedation did not account for the results, so impairment in the Identical Pictures task may be attributed to lorazepam-induced alterations of visual processing. 2008 John Wiley & Sons, Ltd.

  12. An Examination of Undergraduate Student's Perceptions and Predilections of the Use of YouTube in the Teaching and Learning Process

    ERIC Educational Resources Information Center

    Buzzetto-More, Nicole A.

    2014-01-01

    Pervasive social networking and media sharing technologies have augmented perceptual understanding and information gathering and, while text-based resources have remained the standard for centuries, they do not appeal to the hyper-stimulated visual learners of today. In particular, the research suggests that targeted YouTube videos enhance student…

  13. Visuomotor Processing, Induced Stress and Perceptual Learning

    DTIC Science & Technology

    2006-11-01

    the performance of expert video game players with non-experienced video game players on multiple assessments of attention, Green & Bavelier (2003...concluded that experience and proficiency playing video games alters human visual attention beneficially in terms of numerical capacity, and both...person perspective video game play. We propose that psychological stress, though not addressed as a main factor in their study, may be an

  14. Using Reinforcement Learning to Understand the Emergence of "Intelligent" Eye-Movement Behavior during Reading

    ERIC Educational Resources Information Center

    Reichle, Erik D.; Laurent, Patryk A.

    2006-01-01

    The eye movements of skilled readers are typically very regular (K. Rayner, 1998). This regularity may arise as a result of the perceptual, cognitive, and motor limitations of the reader (e.g., limited visual acuity) and the inherent constraints of the task (e.g., identifying the words in their correct order). To examine this hypothesis,…

  15. Developing the Own-Race Advantage in 4-, 6-, and 9-Month-Old Taiwanese Infants: A Perceptual Learning Perspective

    PubMed Central

    Chien, Sarina Hui-Lin; Wang, Jing-Fong; Huang, Tsung-Ren

    2016-01-01

    Previous infant studies on the other-race effect have favored the perceptual narrowing view, or declining sensitivity to rarely exposed other-race faces. Here we wish to provide an alternative possibility, perceptual learning, manifested by improved sensitivity to frequently exposed own-race faces in the first year of life. Using the familiarization/visual-paired comparison paradigm, we presented 4-, 6-, and 9-month-old Taiwanese infants with oval-cropped Taiwanese, Caucasian, and Filipino faces, each with three different manipulations of increasing task difficulty (i.e., change identity, change eyes, and widen eye spacing). An adult experiment was first conducted to verify the task difficulty. Our results showed that, with oval-cropped faces, the 4-month-old infants could discriminate only the Taiwanese “change identity” condition and no others, suggesting an early own-race advantage at 4 months. The 6-month-old infants demonstrated novelty preferences in both the Taiwanese and Caucasian “change identity” conditions, and proceeded to the Taiwanese “change eyes” condition. The 9-month-old infants demonstrated novelty preferences in the “change identity” condition for all three ethnic faces. They also passed the Taiwanese “change eyes” condition but could not extend this refined ability to detect a change in the eyes to the Caucasian or Filipino faces. Taken together, we interpret the pattern of results as evidence supporting perceptual learning during the first year: the ability to discriminate own-race faces emerges at 4 months and continues to refine, while the ability to discriminate other-race faces emerges between 6 and 9 months and is retained at 9 months. Additionally, the discrepancies in the face stimuli and methods between studies advocating the narrowing view and those supporting the learning view are discussed. PMID:27807427

  16. Enhanced Perceptual Functioning in Autism: An Update, and Eight Principles of Autistic Perception

    ERIC Educational Resources Information Center

    Mottron, Laurent; Dawson, Michelle; Soulieres, Isabelle; Hubert, Benedicte; Burack, Jake

    2006-01-01

    We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception…

  17. Field Dependence, Perceptual Instability, and Sex Differences.

    ERIC Educational Resources Information Center

    Bergum, Judith E.; Bergum, Bruce O.

    Recent studies have shown perceptual instability to be related to visual creativity as reflected in career choice. In general, those who display greater perceptual instability perceive themselves to be more creative and tend to choose careers related to visual creativity, regardless of their gender. To test the hypothesis that field independents…

  18. Auditory-visual stimulus pairing enhances perceptual learning in a songbird.

    PubMed

    Hultsch; Schleuss; Todt

    1999-07-01

    In many oscine birds, song learning is affected by social variables, for example the behaviour of a tutor. This implies that both auditory and visual perceptual systems should be involved in the acquisition process. To examine whether and how particular visual stimuli can affect song acquisition, we tested the impact of a tutoring design in which the presentation of auditory stimuli (i.e. species-specific master songs) was paired with a well-defined nonauditory stimulus (i.e. stroboscope light flashes: Strobe regime). The subjects were male hand-reared nightingales, Luscinia megarhynchos. For controls, males were exposed to tutoring without a light stimulus (Control regime). The males' singing recorded 9 months later showed that the Strobe regime had enhanced the acquisition of song patterns. During this treatment birds had acquired more songs than during the Control regime; the observed increase in repertoire size was from 20 to 30% in most cases. Furthermore, the copy quality of imitations acquired during the Strobe regime was better than that of imitations developed from the Control regime, and this was due to a significant increase in the number of 'perfect' song copies. We conclude that these effects were mediated by an intrinsic component (e.g. attention or arousal) which specifically responded to the Strobe regime. Our findings also show that mechanisms of song learning are well prepared to process information from cross-modal perception. Thus, more detailed enquiries into stimulus complexes that are usually referred to as social variables are promising. Copyright 1999 The Association for the Study of Animal Behaviour.

  19. Learning to make collective decisions: the impact of confidence escalation.

    PubMed

    Mahmoodi, Ali; Bang, Dan; Ahmadabadi, Majid Nili; Bahrami, Bahador

    2013-01-01

    Little is known about how people learn to take into account others' opinions in joint decisions. To address this question, we combined computational and empirical approaches. Human dyads made individual and joint visual perceptual decisions and rated their confidence in those decisions (data previously published). We trained a reinforcement (temporal difference) learning agent that received the participants' confidence levels and learned to arrive at a dyadic decision by finding the policy that either maximized the accuracy of the model decisions or maximally conformed to the empirical dyadic decisions. When confidences were shared visually without verbal interaction, the RL agent successfully captured social learning. When participants exchanged confidences visually and interacted verbally, no collective benefit was achieved and the model failed to predict the dyadic behaviour. Behaviourally, dyad members' confidence increased progressively, and verbal interaction accelerated this escalation. The success of the model in drawing collective benefit from dyad members was inversely related to the confidence escalation rate. The findings show that an automated learning agent can, in principle, combine individual opinions and achieve collective benefit, but the same agent cannot discount the escalation, suggesting that one cognitive component of collective decision making in humans may involve discounting of overconfidence arising from interactions.
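    A temporal-difference agent of this general kind can be sketched as a simple Q-learner over discretized confidence pairs. This is a toy reconstruction, not the authors' model: the confidence coding, reward scheme, and data below are all illustrative assumptions.

```python
import random

def train_confidence_agent(trials, alpha=0.1, epsilon=0.1, seed=0):
    """Each trial is (conf1, conf2, correct1, correct2). The agent sees the
    confidence pair and learns, from accuracy feedback (reward 1 if the
    adopted decision was correct, else 0), whether to adopt member 1's
    (action 0) or member 2's (action 1) individual decision."""
    rng = random.Random(seed)
    Q = {}
    for conf1, conf2, correct1, correct2 in trials:
        state = (conf1, conf2)
        q = Q.setdefault(state, [0.0, 0.0])
        if rng.random() < epsilon:          # epsilon-greedy exploration
            action = rng.randrange(2)
        else:
            action = 0 if q[0] >= q[1] else 1
        reward = 1.0 if (correct1, correct2)[action] else 0.0
        q[action] += alpha * (reward - q[action])   # one-step TD update
    return Q

# Toy trials: member 2 is always correct when reporting top confidence (3),
# while member 1 is correct at chance. The agent should learn to follow
# member 2 in state (2, 3).
rng = random.Random(1)
trials = [(2, c2, rng.random() < 0.5, c2 == 3)
          for c2 in (rng.choice([1, 2, 3]) for _ in range(2000))]
Q = train_confidence_agent(trials)
```

The escalation problem in the abstract corresponds to the state itself drifting over trials as confidence inflates, which a fixed confidence-to-accuracy mapping like this one cannot discount.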

  20. Learning to see again: Biological constraints on cortical plasticity and the implications for sight restoration technologies

    PubMed Central

    Beyeler, Michael; Rokem, Ariel; Boynton, Geoffrey M.; Fine, Ione

    2018-01-01

    The “bionic eye” – so long a dream of the future – is finally becoming a reality with retinal prostheses available to patients in both the US and Europe. However, clinical experience with these implants has made it apparent that the vision provided by these devices differs substantially from normal sight. Consequently, the ability to learn to make use of this abnormal retinal input plays a critical role in whether or not some functional vision is successfully regained. The goal of the present review is to summarize the vast basic science literature on developmental and adult cortical plasticity, with an emphasis on how this literature might relate to the field of prosthetic vision. We begin by describing the distortion and information loss likely to be experienced by visual prosthesis users. We then define cortical plasticity and perceptual learning, and describe what is known, and what is unknown, about visual plasticity across the hierarchy of brain regions involved in visual processing and across different stages of life. We close by discussing what is known about brain plasticity in sight restoration patients and by considering biological mechanisms that might eventually be harnessed to improve visual learning in these patients. PMID:28612755

  1. Learning to read an alphabet of human faces produces left-lateralized training effects in the fusiform gyrus.

    PubMed

    Moore, Michelle W; Durisko, Corrine; Perfetti, Charles A; Fiez, Julie A

    2014-04-01

    Numerous functional neuroimaging studies have shown that most orthographic stimuli, such as printed English words, produce a left-lateralized response within the fusiform gyrus (FG) at a characteristic location termed the visual word form area (VWFA). We developed an experimental alphabet (FaceFont) comprising 35 face-phoneme pairs to disentangle phonological and perceptual influences on the lateralization of orthographic processing within the FG. Using functional imaging, we found that a region in the vicinity of the VWFA responded to FaceFont words more strongly in trained than in untrained participants, whereas no differences were observed in the right FG. The trained response magnitudes in the left FG region correlated with behavioral reading performance, providing strong evidence that the neural tissue recruited by training supported the newly acquired reading skill. These results indicate that the left lateralization of orthographic processing is not restricted to stimuli with particular visual-perceptual features. Instead, lateralization may occur because the anatomical projections in the vicinity of the VWFA provide a unique interconnection between the visual system and left-lateralized language areas involved in the representation of speech.

  2. Expertise for upright faces improves the precision but not the capacity of visual working memory.

    PubMed

    Lorenc, Elizabeth S; Pratte, Michael S; Angeloni, Christopher F; Tong, Frank

    2014-10-01

    Considerable research has focused on how basic visual features are maintained in working memory, but little is currently known about the precision or capacity of visual working memory for complex objects. How precisely can an object be remembered, and to what extent might familiarity or perceptual expertise contribute to working memory performance? To address these questions, we developed a set of computer-generated face stimuli that varied continuously along the dimensions of age and gender, and we probed participants' memories using a method-of-adjustment reporting procedure. This paradigm allowed us to separately estimate the precision and capacity of working memory for individual faces, on the basis of the assumptions of a discrete capacity model, and to assess the impact of face inversion on memory performance. We found that observers could maintain up to four to five items on average, with equally good memory capacity for upright and upside-down faces. In contrast, memory precision was significantly impaired by face inversion at every set size tested. Our results demonstrate that the precision of visual working memory for a complex stimulus is not strictly fixed but, instead, can be modified by learning and experience. We find that perceptual expertise for upright faces leads to significant improvements in visual precision, without modifying the capacity of working memory.

  3. Attention affects visual perceptual processing near the hand.

    PubMed

    Cosman, Joshua D; Vecera, Shaun P

    2010-09-01

    Specialized, bimodal neural systems integrate visual and tactile information in the space near the hand. Here, we show that visuo-tactile representations allow attention to influence early perceptual processing, namely, figure-ground assignment. Regions that were reached toward were more likely than other regions to be assigned as foreground figures, and hand position competed with image-based information to bias figure-ground assignment. Our findings suggest that hand position allows attention to influence visual perceptual processing and that visual processes typically viewed as unimodal can be influenced by bimodal visuo-tactile representations.

  4. Visual Learning Induces Changes in Resting-State fMRI Multivariate Pattern of Information.

    PubMed

    Guidotti, Roberto; Del Gratta, Cosimo; Baldassarre, Antonello; Romani, Gian Luca; Corbetta, Maurizio

    2015-07-08

    When measured with functional magnetic resonance imaging (fMRI) in the resting state (R-fMRI), spontaneous activity is correlated between brain regions that are anatomically and functionally related. Learning and/or task performance can induce modulation of the resting synchronization between brain regions. Moreover, at the neuronal level, spontaneous brain activity can replay patterns evoked by a previously presented stimulus. Here we test whether visual learning/task performance can induce a change in the patterns of coded information in R-fMRI signals consistent with a role of spontaneous activity in representing task-relevant information. Human subjects underwent R-fMRI before and after perceptual learning on a novel visual shape orientation discrimination task. Task-evoked fMRI patterns to trained versus novel stimuli were recorded after learning was completed, and before the second R-fMRI session. Using multivariate pattern analysis on task-evoked signals, we found patterns in several cortical regions, as follows: visual cortex, V3/V3A/V7; within the default mode network, precuneus, and inferior parietal lobule; and, within the dorsal attention network, intraparietal sulcus, which discriminated between trained and novel visual stimuli. The accuracy of classification was strongly correlated with behavioral performance. Next, we measured multivariate patterns in R-fMRI signals before and after learning. The frequency and similarity of resting states representing the task/visual stimuli states increased post-learning in the same cortical regions recruited by the task. These findings support a representational role of spontaneous brain activity.

  5. Musical learning in children and adults with Williams syndrome.

    PubMed

    Lense, M; Dykens, E

    2013-09-01

    There is recent interest in using music making as an empirically supported intervention for various neurodevelopmental disorders due to music's engagement of perceptual-motor mapping processes. However, little is known about music learning in populations with developmental disabilities. Williams syndrome (WS) is a neurodevelopmental genetic disorder whose characteristic auditory strengths and visual-spatial weaknesses map onto the processes used to learn to play a musical instrument. We identified correlates of novel musical instrument learning in WS by teaching 46 children and adults (7-49 years) with WS to play the Appalachian dulcimer. Obtained dulcimer skill was associated with prior musical abilities (r = 0.634, P < 0.001) and visual-motor integration abilities (r = 0.487, P = 0.001), but not age, gender, IQ, handedness, auditory sensitivities or musical interest/emotionality. Use of auditory learning strategies, but not visual or instructional strategies, predicted greater dulcimer skill beyond individual musical and visual-motor integration abilities (β = 0.285, sr(2) = 0.06, P = 0.019). These findings map onto behavioural and emerging neural evidence for greater auditory-motor mapping processes in WS. Results suggest that explicit awareness of task-specific learning approaches is important when learning a new skill. Implications for using music with populations with syndrome-specific strengths and weaknesses will be discussed.

  6. Differences in perceptual learning transfer as a function of training task.

    PubMed

    Green, C Shawn; Kattner, Florian; Siegel, Max H; Kersten, Daniel; Schrater, Paul R

    2015-01-01

    A growing body of research--including results from behavioral psychology, human structural and functional imaging, single-cell recordings in nonhuman primates, and computational modeling--suggests that perceptual learning effects are best understood as a change in the ability of higher-level integration or association areas to read out sensory information in the service of particular decisions. Work in this vein has argued that, depending on the training experience, the "rules" for this read-out can either be applicable to new contexts (thus engendering learning generalization) or can apply only to the exact training context (thus resulting in learning specificity). Here we contrast learning tasks designed to promote either stimulus-specific or stimulus-general rules. Specifically, we compare learning transfer across visual orientation following training on three different tasks: an orientation categorization task (which permits an orientation-specific learning solution), an orientation estimation task (which requires an orientation-general learning solution), and an orientation categorization task in which the relevant category boundary shifts on every trial (which lies somewhere between the two tasks above). While the simple orientation-categorization training task resulted in orientation-specific learning, the estimation and moving categorization tasks resulted in significant orientation learning generalization. The general framework tested here--that task specificity or generality can be predicted via an examination of the optimal learning solution--may be useful in building future training paradigms with certain desired outcomes.

  7. Perceptual Learning and Attention: Reduction of Object Attention Limitations with Practice

    PubMed Central

    Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin

    2012-01-01

    Perceptual learning has widely been claimed to be attention driven; attention assists in choosing the relevant sensory information and attention may be necessary in many cases for learning. In this paper, we focus on the interaction of perceptual learning and attention – that perceptual learning can reduce or eliminate the limitations of attention, or, correspondingly, that perceptual learning depends on the attention condition. Object attention is a robust limit on performance. Two attributes of a single attended object may be reported without loss, while the same two attributes of different objects can exhibit a substantial dual-report deficit due to the sharing of attention between objects. The current experiments document that this fundamental dual-object report deficit can be reduced, or eliminated, through perceptual learning that is partially specific to retinal location. This suggests that alternative routes established by practice may reduce the competition between objects for processing resources. PMID:19796653

  8. Characterizing Perceptual Learning with External Noise

    ERIC Educational Resources Information Center

    Gold, Jason M.; Sekuler, Allison B.; Bennett, Patrick J.

    2004-01-01

    Performance in perceptual tasks often improves with practice. This effect is known as "perceptual learning," and it has been the source of a great deal of interest and debate over the course of the last century. Here, we consider the effects of perceptual learning within the context of signal detection theory. According to signal detection theory,…
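
Within signal detection theory, the practice-driven improvement the abstract refers to is usually summarized by the sensitivity index d'. A minimal sketch (the hit and false-alarm rates below are hypothetical, not data from the paper):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index from signal detection theory:
    d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical pre/post-practice rates: learning shows up as a rise in d'.
print(round(d_prime(0.69, 0.31), 2))  # before practice, ~0.99
print(round(d_prime(0.84, 0.16), 2))  # after practice, ~1.99
```

External-noise methods then ask whether such a d' gain reflects reduced internal noise or a better perceptual template.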

  9. Multisensory Cues Capture Spatial Attention Regardless of Perceptual Load

    ERIC Educational Resources Information Center

    Santangelo, Valerio; Spence, Charles

    2007-01-01

    We compared the ability of auditory, visual, and audiovisual (bimodal) exogenous cues to capture visuo-spatial attention under conditions of no load versus high perceptual load. Participants had to discriminate the elevation (up vs. down) of visual targets preceded by either unimodal or bimodal cues under conditions of high perceptual load (in…

  10. Timing the impact of literacy on visual processing

    PubMed Central

    Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W.; Cohen, Laurent; Dehaene, Stanislas

    2014-01-01

    Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼100–150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing. PMID:25422460

  11. Timing the impact of literacy on visual processing.

    PubMed

    Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W; Cohen, Laurent; Dehaene, Stanislas

    2014-12-09

    Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼ 100-150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing.

  12. Mild Perceptual Categorization Deficits Follow Bilateral Removal of Anterior Inferior Temporal Cortex in Rhesus Monkeys.

    PubMed

    Matsumoto, Narihisa; Eldridge, Mark A G; Saunders, Richard C; Reoli, Rachel; Richmond, Barry J

    2016-01-06

    In primates, visual recognition of complex objects depends on the inferior temporal lobe. By extension, categorizing visual stimuli based on similarity ought to depend on the integrity of the same area. We tested three monkeys before and after bilateral anterior inferior temporal cortex (area TE) removal. Although mildly impaired after the removals, they retained the ability to assign stimuli to previously learned categories, e.g., cats versus dogs, and human versus monkey faces, even with trial-unique exemplars. After the TE removals, they learned in one session to classify members from a new pair of categories, cars versus trucks, as quickly as they had learned the cats versus dogs before the removals. As with the dogs and cats, they generalized across trial-unique exemplars of cars and trucks. However, as seen in earlier studies, these monkeys with TE removals had difficulty learning to discriminate between two simple black and white stimuli. These results raise the possibility that TE is needed for memory of simple conjunctions of basic features, but that it plays only a small role in generalizing overall configural similarity across a large set of stimuli, such as would be needed for perceptual categorical assignment. The process of seeing and recognizing objects is attributed to a set of sequentially connected brain regions stretching forward from the primary visual cortex through the temporal lobe to the anterior inferior temporal cortex, a region designated area TE. Area TE is considered the final stage for recognizing complex visual objects, e.g., faces. It has been assumed, but not tested directly, that this area would be critical for visual generalization, i.e., the ability to place objects such as cats and dogs into their correct categories. 
    Here, we demonstrate that monkeys rapidly and seemingly effortlessly categorize large sets of complex images (cats vs dogs, cars vs trucks), surprisingly, even after removal of area TE, leaving a puzzle about how this generalization is done.

  13. Auditory temporal perceptual learning and transfer in Chinese-speaking children with developmental dyslexia.

    PubMed

    Zhang, Manli; Xie, Weiyi; Xu, Yanzhi; Meng, Xiangzhi

    2018-03-01

    Perceptual learning refers to the improvement of perceptual performance as a function of training. Recent studies have found that auditory perceptual learning may improve phonological skills in individuals with developmental dyslexia in alphabetic writing systems. However, whether auditory perceptual learning could also benefit the reading skills of those learning the Chinese logographic writing system is, as yet, unknown. The current study aimed to investigate the remediation effect of auditory temporal perceptual learning on Mandarin-speaking school children with developmental dyslexia. Thirty children with dyslexia were screened from a large pool of students in 3rd-5th grades. They completed a series of pretests and then were assigned to either a non-training control group or a training group. The training group worked on a pure tone duration discrimination task for 7 sessions over 2 weeks, with 30 minutes per session. Post-tests immediately after training and a follow-up test 2 months later were conducted. Analyses revealed a significant training effect in the training group relative to the non-training group, as well as near transfer to a temporal interval discrimination task and far transfer to phonological awareness, character recognition, and reading fluency. Importantly, the training effect and all the transfer effects were stable at the 2-month follow-up session. Further analyses found that a significant correlation between character recognition performance and learning rate existed mainly in the slow learning phase, the consolidation stage of perceptual learning, and that this effect was modulated by an individual's executive function. These findings indicate that adaptive auditory temporal perceptual learning can lead to learning and transfer effects on reading performance, and shed further light on the potential role of basic perceptual learning in the remediation and prevention of developmental dyslexia.
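
Adaptive discrimination training of this kind is commonly implemented as a transformed staircase. A hedged sketch (the 2-down/1-up rule, step size, and floor here are assumptions for illustration; the study's exact adaptive procedure may differ):

```python
def staircase_update(diff_ms, correct, streak, step_ms=5, floor_ms=5):
    """One update of a 2-down/1-up staircase (converges near 70.7% correct):
    the duration difference shrinks after two consecutive correct responses
    and grows after any error. Returns (new difference, new correct streak)."""
    if correct:
        streak += 1
        if streak == 2:
            return max(floor_ms, diff_ms - step_ms), 0
        return diff_ms, streak
    return diff_ms + step_ms, 0

# Illustrative run starting from a 50 ms duration difference.
diff, streak = 50, 0
for resp in [True, True, False, True, True]:
    diff, streak = staircase_update(diff, resp, streak)
print(diff)  # 45
```

Tracking the difference over sessions yields the learning-rate measure that the abstract correlates with character recognition.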

  14. Brief Report: Autism-like Traits are Associated With Enhanced Ability to Disembed Visual Forms.

    PubMed

    Sabatino DiCriscio, Antoinette; Troiani, Vanessa

    2017-05-01

    Atypical visual perceptual skills are thought to underlie unusual visual attention in autism spectrum disorders. We assessed whether individual differences in visual processing skills scaled with quantitative traits associated with the broader autism phenotype (BAP). Visual perception was assessed using the Figure-ground subtest of the Test of visual perceptual skills-3rd Edition (TVPS). In a large adult cohort (n = 209), TVPS-Figure Ground scores were positively correlated with autistic-like social features as assessed by the Broader autism phenotype questionnaire. This relationship was gender-specific, with males showing a correspondence between visual perceptual skills and autistic-like traits. This work supports the link between atypical visual perception and autism and highlights the importance in characterizing meaningful individual differences in clinically relevant behavioral phenotypes.

  15. Visual and Haptic Shape Processing in the Human Brain: Unisensory Processing, Multisensory Convergence, and Top-Down Influences.

    PubMed

    Lee Masson, Haemy; Bulthé, Jessica; Op de Beeck, Hans P; Wallraven, Christian

    2016-08-01

    Humans are highly adept at multisensory processing of object shape in both vision and touch. Previous studies have mostly focused on where visually perceived object-shape information can be decoded, with haptic shape processing receiving less attention. Here, we investigate visuo-haptic shape processing in the human brain using multivoxel correlation analyses. Importantly, we use tangible, parametrically defined novel objects as stimuli. Two groups of participants first performed either a visual or haptic similarity-judgment task. The resulting perceptual object-shape spaces were highly similar and matched the physical parameter space. In a subsequent fMRI experiment, objects were first compared within the learned modality and then in the other modality in a one-back task. When correlating neural similarity spaces with perceptual spaces, visually perceived shape was decoded well in the occipital lobe along with the ventral pathway, whereas haptically perceived shape information was mainly found in the parietal lobe, including frontal cortex. Interestingly, ventrolateral occipito-temporal cortex decoded shape in both modalities, highlighting this as an area capable of detailed visuo-haptic shape processing. Finally, we found haptic shape representations in early visual cortex (in the absence of visual input), when participants switched from visual to haptic exploration, suggesting top-down involvement of visual imagery on haptic shape processing.

  16. Which neuropsychological functions predict various processing speed components in children with and without attention-deficit/hyperactivity disorder?

    PubMed

    Vadnais, Sarah A; Kibby, Michelle Y; Jagger-Rickels, Audreyana C

    2018-01-01

    We identified statistical predictors of four processing speed (PS) components in a sample of 151 children with and without attention-deficit/hyperactivity disorder (ADHD). Performance on perceptual speed was predicted by visual attention/short-term memory, whereas incidental learning/psychomotor speed was predicted by verbal working memory. Rapid naming was predictive of each PS component assessed, and inhibition predicted all but one task, suggesting a shared need to identify/retrieve stimuli rapidly and inhibit incorrect responding across PS components. Hence, we found both shared and unique predictors of perceptual, cognitive, and output speed, suggesting more specific terminology should be used in future research on PS in ADHD.

  17. Learned value and object perception: Accelerated perception or biased decisions?

    PubMed

    Rajsic, Jason; Perera, Harendri; Pratt, Jay

    2017-02-01

    Learned value is known to bias visual search toward valued stimuli. However, some uncertainty exists regarding the stage of visual processing that is modulated by learned value. Here, we directly tested the effect of learned value on preattentive processing using temporal order judgments. Across four experiments, we imbued some stimuli with high value and some with low value, using a nonmonetary reward task. In Experiment 1, we replicated the value-driven distraction effect, validating our nonmonetary reward task. Experiment 2 showed that high-value stimuli, but not low-value stimuli, exhibit a prior-entry effect. Experiment 3, which reversed the temporal order judgment task (i.e., reporting which stimulus came second), showed no prior-entry effect, indicating that although a response bias may be present for high-value stimuli, they are still reported as appearing earlier. However, Experiment 4, using a simultaneity judgment task, showed no shift in temporal perception. Overall, our results support the conclusion that learned value biases perceptual decisions about valued stimuli without speeding preattentive stimulus processing.
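
Prior entry in a temporal order judgment task is quantified by the point of subjective simultaneity (PSS). A minimal sketch of estimating a PSS by linear interpolation (the SOAs, proportions, and sign convention below are hypothetical; studies typically fit a full psychometric function instead):

```python
def pss_by_interpolation(soas_ms, prop_valued_first):
    """Estimate the point of subjective simultaneity (PSS): the SOA at which
    'valued stimulus first' reports cross 50%, found by linear interpolation
    between the two bracketing SOAs. Returns None if 0.5 is never crossed."""
    points = list(zip(soas_ms, prop_valued_first))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if y0 <= 0.5 <= y1:
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    return None

# Hypothetical data; negative SOA = valued stimulus shown second. A negative
# PSS would mean the valued item is judged "first" even when it came later.
soas = [-60, -30, 0, 30, 60]
props = [0.10, 0.35, 0.60, 0.85, 0.95]
print(pss_by_interpolation(soas, props))  # ~ -12.0 ms
```

Comparing PSS shifts across the standard, reversed, and simultaneity-judgment tasks is what lets the authors separate perceptual prior entry from response bias.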

  18. Selective visual attention and motivation: the consequences of value learning in an attentional blink task.

    PubMed

    Raymond, Jane E; O'Brien, Jennifer L

    2009-08-01

    Learning to associate the probability and value of behavioral outcomes with specific stimuli (value learning) is essential for rational decision making. However, in demanding cognitive conditions, access to learned values might be constrained by limited attentional capacity. We measured recognition of briefly presented faces seen previously in a value-learning task involving monetary wins and losses; the recognition task was performed both with and without constraints on available attention. Regardless of available attention, recognition was substantially enhanced for motivationally salient stimuli (i.e., stimuli highly predictive of outcomes), compared with equally familiar stimuli that had weak or no motivational salience, and this effect was found regardless of valence (win or loss). However, when attention was constrained (because stimuli were presented during an attentional blink, AB), valence determined recognition; win-associated faces showed no AB, but all other faces showed large ABs. Motivational salience acts independently of attention to modulate simple perceptual decisions, but when attention is limited, visual processing is biased in favor of reward-associated stimuli.

  19. The working memory Ponzo illusion: Involuntary integration of visuospatial information stored in visual working memory.

    PubMed

    Shen, Mowei; Xu, Haokui; Zhang, Haihang; Shui, Rende; Zhang, Meng; Zhou, Jifan

    2015-08-01

    Visual working memory (VWM) has been traditionally viewed as a mental structure subsequent to visual perception that stores the final output of perceptual processing. However, VWM has recently been emphasized as a critical component of online perception, providing storage for the intermediate perceptual representations produced during visual processing. This interactive view holds the core assumption that VWM is not the terminus of perceptual processing; the stored visual information rather continues to undergo perceptual processing if necessary. The current study tests this assumption, demonstrating an example of involuntary integration of VWM content, by creating the Ponzo illusion in VWM: when the Ponzo illusion figure was divided into its individual components and sequentially encoded into VWM, the temporally separated components were involuntarily integrated, leading to distorted length perception of the two horizontal lines. This VWM Ponzo illusion was replicated when the figure components were presented in different combinations and presentation orders. The magnitude of the illusion was significantly correlated between the VWM and perceptual versions of the Ponzo illusion. These results suggest that the information integration underlying the VWM Ponzo illusion is constrained by the laws of visual perception and similarly affected by the common individual factors that govern its perception. Thus, our findings provide compelling evidence that VWM functions as a buffer serving perceptual processes at early stages.

  20. Perceptual load corresponds with factors known to influence visual search

    PubMed Central

    Roper, Zachary J. J.; Cosman, Joshua D.; Vecera, Shaun P.

    2014-01-01

    One account of the early versus late selection debate in attention proposes that perceptual load determines the locus of selection. Attention selects stimuli at a late processing level under low-load conditions but selects stimuli at an early level under high-load conditions. Despite the successes of perceptual load theory, a non-circular definition of perceptual load remains elusive. We investigated the factors that influence perceptual load by using manipulations that have been studied extensively in visual search, namely target-distractor similarity and distractor-distractor similarity. Consistent with previous work, search was most efficient when targets and distractors were dissimilar and the displays contained homogeneous distractors; search became less efficient when target-distractor similarity increased irrespective of display heterogeneity. Importantly, we used these same stimuli in a typical perceptual load task that measured attentional spill-over to a task-irrelevant flanker. We found a strong correspondence between search efficiency and perceptual load; stimuli that generated efficient searches produced flanker interference effects, suggesting that such displays involved low perceptual load. Flanker interference effects were reduced in displays that produced less efficient searches. Furthermore, our results demonstrate that search difficulty, as measured by search intercept, has little bearing on perceptual load. These results suggest that perceptual load might be defined in part by well-characterized, continuous factors that influence visual search. PMID:23398258
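
Search efficiency in studies like this one is indexed by the slope of the function relating response time to set size. A small illustrative sketch (the RT values are invented):

```python
def search_slope(set_sizes, mean_rts_ms):
    """Least-squares slope of mean response time against display set size,
    in ms/item: the standard index of visual search efficiency (a shallower
    slope means a more efficient search)."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts_ms) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts_ms))
    return num / sum((x - mx) ** 2 for x in set_sizes)

# Invented RTs contrasting an efficient and an inefficient search.
print(search_slope([4, 8, 12], [520, 530, 540]))  # 2.5 ms/item
print(search_slope([4, 8, 12], [520, 680, 840]))  # 40.0 ms/item
```

On the account above, displays producing shallow slopes like the first example would yield flanker interference (low load), while steep-slope displays would not.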

  1. A Critical Review of the "Motor-Free Visual Perception Test-Fourth Edition" (MVPT-4)

    ERIC Educational Resources Information Center

    Brown, Ted; Peres, Lisa

    2018-01-01

    The "Motor-Free Visual Perception Test-fourth edition" (MVPT-4) is a revised version of the "Motor-Free Visual Perception Test-third edition." The MVPT-4 is used to assess the visual-perceptual ability of individuals aged 4.0 through 80+ years via a series of visual-perceptual tasks that do not require a motor response. Test…

  2. The time-course of activation in the dorsal and ventral visual streams during landmark cueing and perceptual discrimination tasks.

    PubMed

    Lambert, Anthony J; Wootton, Adrienne

    2017-08-01

    Different patterns of high-density EEG activity were elicited by the same peripheral stimuli in the context of Landmark Cueing and Perceptual Discrimination tasks. The C1 component of the visual event-related potential (ERP) at parietal-occipital electrode sites was larger in the Landmark Cueing task, and source localisation suggested greater activation in the superior parietal lobule (SPL) in this task, compared to the Perceptual Discrimination task, indicating stronger early recruitment of the dorsal visual stream. In the Perceptual Discrimination task, source localisation suggested widespread activation of the inferior temporal gyrus (ITG) and fusiform gyrus (FFG), structures associated with the ventral visual stream, during the early phase of the P1 ERP component. Moreover, during a later epoch (171-270 ms after stimulus onset), increased temporal-occipital negativity and stronger recruitment of ITG and FFG were observed in the Perceptual Discrimination task. These findings illuminate the contrasting functions of the dorsal and ventral visual streams, supporting rapid shifts of attention in response to contextual landmarks and conscious discrimination, respectively.

  3. Common mechanisms of human perceptual and motor learning

    PubMed Central

    Censor, Nitzan; Sagi, Dov; Cohen, Leonardo G.

    2016-01-01

    The adult mammalian brain has a remarkable capacity to learn in both the perceptual and motor domains through the formation and consolidation of memories. Such practice-enabled procedural learning results in perceptual and motor skill improvements. Here, we examine evidence supporting the notion that perceptual and motor learning in humans exhibit analogous properties, including similarities in temporal dynamics and the interactions between primary cortical and higher-order brain areas. These similarities may point to the existence of a common general mechanism for learning in humans. PMID:22903222

  4. Neural Mechanisms of Recognizing Camouflaged Objects: A Human fMRI Study

    DTIC Science & Technology

    2015-07-30

    Final Report: Neural Mechanisms of Recognizing Camouflaged Objects: A Human fMRI Study. Keywords: visual search, camouflage, functional magnetic resonance imaging (fMRI), perceptual learning.

  5. New Directions in Resources for Special Needs Hearing Impaired Students: Outreach '88. Proceedings of the Annual Southeast Regional Summer Conference (8th, Cave Spring, Georgia, June 14-17, 1988).

    ERIC Educational Resources Information Center

    Kemp, Faye, Ed.; And Others

    The proceedings include, after the keynote address by E.M. Childers and the conference agenda, the following papers: "An Additional Handicap: Visual Perceptual Learning Disabilities of Deaf Children" (Vivienne Ratner); "Minimum Competency Testing" (Carl Williams); "Transitional Planning for Hearing Impaired Students in the Mainstream" (Helen…

  6. Prevalence, Gender Ratio and Gender Differences in Reading-Related Cognitive Abilities among Chinese Children with Dyslexia in Hong Kong

    ERIC Educational Resources Information Center

    Chan, David W.; Ho, Connie Suk-han; Tsang, Suk-man; Lee, Suk-han; Chung, Kevin K. H.

    2007-01-01

    Based on the data of the normative study of the "Hong Kong test of specific learning difficulties in reading and writing," and the "Test of visual-perceptual skills (non-motor)-Revised," 99 children aged between 6 and 10 1/2 years were identified as children with dyslexia out of the normative sample of 690 children. By…

  7. The influence of sleep deprivation and oscillating motion on sleepiness, motion sickness, and cognitive and motor performance.

    PubMed

    Kaplan, Janna; Ventura, Joel; Bakshi, Avijit; Pierobon, Alberto; Lackner, James R; DiZio, Paul

    2017-01-01

    Our goal was to determine how sleep deprivation, nauseogenic motion, and a combination of motion and sleep deprivation affect cognitive vigilance, visual-spatial perception, motor learning and retention, and balance. We exposed four groups of subjects to different combinations of normal 8-h sleep or 4-h sleep for two nights, combined with testing under stationary conditions or during 0.28 Hz horizontal linear oscillation. On the two days following controlled sleep, all subjects underwent four test sessions per day that included evaluations of fatigue, motion sickness, vigilance, perceptual discrimination, perceptual learning, motor performance and learning, and balance. Sleep loss and exposure to linear oscillation had additive or multiplicative effects on sleepiness, motion sickness severity, and decreases in vigilance, perceptual discrimination, and learning. Sleep loss also decelerated the rate of adaptation to motion sickness over repeated sessions. Sleep loss degraded the capacity to compensate for novel robotically induced perturbations of reaching movements but did not adversely affect adaptive recovery of accurate reaching. Overall, tasks requiring substantial attention to cognitive and motor demands were degraded more than tasks that were more automatic. Our findings indicate that predicting performance needs to take into account, in addition to sleep loss, the attentional demands and novelty of tasks, the motion environment in which individuals will be performing, and their prior susceptibility to motion sickness during exposure to provocative motion stimulation.

  8. Sex is not everything: the role of gender in early performance of a fundamental laparoscopic skill.

    PubMed

    Kolozsvari, Nicoleta O; Andalib, Amin; Kaneva, Pepa; Cao, Jiguo; Vassiliou, Melina C; Fried, Gerald M; Feldman, Liane S

    2011-04-01

    Existing literature on the acquisition of surgical skills suggests that women generally perform worse than men. This literature is limited by looking at an arbitrary number of trials and not adjusting for potential confounders. The objective of this study was to evaluate the impact of gender on the learning curve for a fundamental laparoscopic task. Thirty-two medical students performed the FLS peg transfer task and their scores were plotted to generate a learning curve. Nonlinear regression was used to estimate learning plateau and learning rate. Variables that may affect performance were assessed using a questionnaire. Innate visual-spatial abilities were evaluated using tests for spatial orientation, spatial scanning, and perceptual abilities. Score on first peg transfer attempt, learning plateau, and learning rate were compared for men and women using Student's t test. Innate abilities were correlated to simulator performance using Pearson's coefficient. Multivariate linear regression was used to investigate the effect of gender on early laparoscopic performance after adjusting for factors found significant on univariate analysis. Statistical significance was defined as P < 0.05. Nineteen men and 13 women participated in the study; 30 were right-handed, 12 reported high interest in surgery, and 26 had video game experience. There were no differences between men and women in initial peg transfer score, learning plateau, or learning rate. Initial peg transfer score and learning rate were higher in subjects who reported having a high interest in surgery (P = 0.02, P = 0.03). Initial score also correlated with perceptual ability score (P = 0.03). In multivariate analysis, only surgical interest remained a significant predictor of score on first peg transfer (P = 0.03) and learning rate (P = 0.02), while gender had no significant relationship to early performance. 
Gender did not affect the learning curve for a fundamental laparoscopic task, while interest in surgery and perceptual abilities did influence early performance.
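The plateau-and-rate estimation described in this record can be sketched with a simple exponential learning-curve model fit by least squares. The model form, the grid-search fitting routine, and the synthetic scores below are illustrative assumptions, not the authors' actual nonlinear regression.

```python
import math

# Assumed exponential learning-curve model (illustrative, not the paper's exact form):
# score(t) = plateau - (plateau - start) * exp(-rate * t)
def model(t, start, plateau, rate):
    return plateau - (plateau - start) * math.exp(-rate * t)

def fit_learning_curve(scores):
    """Coarse grid-search least-squares fit; returns (start, plateau, rate)."""
    lo, hi = min(scores), max(scores)
    best, best_err = None, float("inf")
    for start in (lo + (hi - lo) * i / 10 for i in range(11)):
        for plateau in (lo + (hi - lo) * i / 10 for i in range(11)):
            for rate in (0.05 * k for k in range(1, 41)):
                err = sum((s - model(t, start, plateau, rate)) ** 2
                          for t, s in enumerate(scores))
                if err < best_err:
                    best, best_err = (start, plateau, rate), err
    return best

# Synthetic trial scores rising toward a plateau
scores = [model(t, 40, 95, 0.3) for t in range(15)]
start, plateau, rate = fit_learning_curve(scores)
```

The initial score, the learning plateau, and the learning rate recovered this way are the three per-subject quantities the study compares across groups; in practice a dedicated nonlinear optimizer would replace the grid search.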

  9. Why Color Matters: The Effect of Visual Cues on Learner's Interpretation of Dark Matter in a Cosmology Visualization

    NASA Astrophysics Data System (ADS)

    Buck, Z.

    2013-04-01

    As we turn more and more to high-end computing to understand the Universe at cosmological scales, visualizations of simulations will take on a vital role as perceptual and cognitive tools. In collaboration with the Adler Planetarium and University of California High-Performance AstroComputing Center (UC-HiPACC), I am interested in better understanding the use of visualizations to mediate astronomy learning across formal and informal settings. The aspect of my research that I present here uses quantitative methods to investigate how learners are relying on color to interpret dark matter in a cosmology visualization. The concept of dark matter is vital to our current understanding of the Universe, and yet we do not know how to effectively present dark matter visually to support learning. I employ an alternative-treatment, post-test-only experimental design, in which members of an equivalent sample are randomly assigned to one of three treatment groups, followed by treatment and a post-test. Results indicate a significant correlation (p < .05) between the color of dark matter in the visualization and survey responses, implying that aesthetic variations like color can have a profound effect on audience interpretation of a cosmology visualization.

  10. Long-lasting perceptual priming and semantic learning in amnesia: a case experiment.

    PubMed

    Tulving, E; Hayman, C A; Macdonald, C A

    1991-07-01

    An investigation of perceptual priming and semantic learning in the severely amnesic subject K.C. is reported. He was taught 64 three-word sentences and tested for his ability to produce the final word of each sentence. Despite a total lack of episodic memory, he exhibited (a) strong perceptual priming effects in word-fragment completion, which were retained essentially in full strength for 12 months, and (b) independent of perceptual priming, learning of new semantic facts, many of which were also retained for 12 months. K.C.'s semantic learning may be at least partly attributable to repeated study trials and minimal interference during learning. The findings suggest that perceptual priming and semantic learning are subserved by two memory systems different from episodic memory and that both systems (perceptual representation and semantic memory) are at least partially preserved in some amnesic subjects.

  11. Integrated approaches to perceptual learning.

    PubMed

    Jacobs, Robert A

    2010-04-01

    New technologies and new ways of thinking have recently led to rapid expansions in the study of perceptual learning. We describe three themes shared by many of the nine articles included in this topic on Integrated Approaches to Perceptual Learning. First, perceptual learning cannot be studied on its own because it is closely linked to other aspects of cognition, such as attention, working memory, decision making, and conceptual knowledge. Second, perceptual learning is sensitive to both the stimulus properties of the environment in which an observer exists and to the properties of the tasks that the observer needs to perform. Moreover, the environmental and task properties can be characterized through their statistical regularities. Finally, the study of perceptual learning has important implications for society, including implications for science education and medical rehabilitation. Contributed articles relevant to each theme are summarized. Copyright © 2010 Cognitive Science Society, Inc.

  12. Perceptually lossless fractal image compression

    NASA Astrophysics Data System (ADS)

    Lin, Huawu; Venetsanopoulos, Anastasios N.

    1996-02-01

    According to the collage theorem, the encoding distortion for fractal image compression is directly related to the metric used in the encoding process. In this paper, we introduce a perceptually meaningful distortion measure based on the human visual system's nonlinear response to luminance and the visual masking effects. Blackwell's psychophysical raw data on contrast threshold are first interpolated as a function of background luminance and visual angle, and are then used as an error upper bound for perceptually lossless image compression. For a variety of images, experimental results show that the algorithm produces a compression ratio of 8:1 to 10:1 without introducing visual artifacts.
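The core idea of this record, using a luminance-dependent contrast threshold as an error upper bound when matching blocks, can be sketched as follows. The threshold curve and the flat-list block representation are invented placeholders, not Blackwell's interpolated psychophysical data or the authors' encoder.

```python
# Hypothetical visibility-threshold curve: brighter backgrounds tolerate
# larger luminance errors (a stand-in for Blackwell's interpolated data).
def contrast_threshold(background_luminance):
    return 2.0 + 0.02 * background_luminance

# A domain-to-range block match is "perceptually lossless" if every pixel's
# error stays below the visibility threshold at the block's mean luminance.
def perceptually_lossless(range_block, mapped_domain_block):
    mean_lum = sum(range_block) / len(range_block)
    limit = contrast_threshold(mean_lum)
    return all(abs(r - d) <= limit
               for r, d in zip(range_block, mapped_domain_block))
```

In a fractal encoder, a candidate domain block would be accepted only when this predicate holds, replacing a plain mean-squared-error cutoff with a perceptual one.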

  13. Using Japanese Onomatopoeias to Explore Perceptual Dimensions in Visual Material Perception.

    PubMed

    Hanada, Mitsuhiko

    2016-01-28

    This study examined the perceptual dimensions of visual material properties. Photographs of 50 objects were presented to the participants, and they reported a suitable onomatopoeia (mimetic word) for describing the material of the object in each photograph, based on visual appearance. The participants' responses were collated into a contingency table of photographs × onomatopoeias. After removing some items from the table, correspondence analysis was applied to the contingency table, and a six-dimensional biplot was obtained. By rotating the axes to maximize sparseness of the coordinates for the items in the biplot, three meaningful perceptual dimensions were derived: wetness/stickiness, fluffiness/softness, and smoothness-roughness/gloss-dullness. Two additional possible dimensions were obtained: crumbliness and coldness. These dimensions, except gloss-dullness, have received little attention in vision science, though they have been proposed as perceptual dimensions of tactile texture. This suggests that the perceptual dimensions that are considered to be primarily related to haptics are also important in visual material perception. © The Author(s) 2016.
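The first step of the correspondence analysis described above, converting the photographs × onomatopoeias contingency table into the standardized-residual matrix that is then decomposed by SVD to yield the biplot axes, can be sketched on toy data. The counts below are invented, not the study's table.

```python
# Invented toy contingency table: rows = photographs, columns = onomatopoeias
table = [[8, 2, 0],
         [1, 6, 3],
         [0, 2, 7]]

n = sum(sum(row) for row in table)
row_mass = [sum(row) / n for row in table]
col_mass = [sum(table[i][j] for i in range(len(table))) / n
            for j in range(len(table[0]))]

# Standardized residuals: (observed proportion - expected) / sqrt(expected).
# Correspondence analysis applies an SVD to this matrix to obtain the
# coordinates plotted in the biplot.
residuals = [[(table[i][j] / n - row_mass[i] * col_mass[j])
              / (row_mass[i] * col_mass[j]) ** 0.5
              for j in range(len(table[0]))]
             for i in range(len(table))]
```

A positive residual marks a photograph-onomatopoeia pairing that occurs more often than independence predicts; those associations are what the derived perceptual dimensions summarize.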

  14. Disentangling perceptual from motor implicit sequence learning with a serial color-matching task.

    PubMed

    Gheysen, Freja; Gevers, Wim; De Schutter, Erik; Van Waelvelde, Hilde; Fias, Wim

    2009-08-01

    This paper contributes to the domain of implicit sequence learning by presenting a new version of the serial reaction time (SRT) task that allows unambiguously separating perceptual from motor learning. Participants matched the colors of three small squares with the color of a subsequently presented large target square. An identical sequential structure was tied to the colors of the target square (perceptual version, Experiment 1) or to the manual responses (motor version, Experiment 2). Short blocks of sequenced and randomized trials alternated and hence provided a continuous monitoring of the learning process. Reaction time measurements demonstrated clear evidence of independently learning perceptual and motor serial information, though they revealed different time courses for the two learning processes. No explicit awareness of the serial structure was needed for either of the two types of learning to occur. The paradigm introduced in this paper demonstrates that perceptual learning can be captured with SRT measurements and opens important perspectives for future imaging studies addressing the ongoing question of which brain areas are involved in the implicit learning of modality-specific (motor vs. perceptual) or general serial order.

  15. Ambiguity Tolerance and Perceptual Learning Styles of Chinese EFL Learners

    ERIC Educational Resources Information Center

    Li, Haishan; He, Qingshun

    2016-01-01

    Ambiguity tolerance and perceptual learning styles are the two influential elements showing individual differences in EFL learning. This research is intended to explore the relationship between Chinese EFL learners' ambiguity tolerance and their preferred perceptual learning styles. The findings include (1) the learners are sensitive to English…

  16. Effect of perceptual load on conceptual processing: an extension of Vermeulen's theory.

    PubMed

    Xie, Jiushu; Wang, Ruiming; Sun, Xun; Chang, Song

    2013-10-01

    The effect of color and shape load on conceptual processing was studied. Perceptual load effects have been found in visual and auditory conceptual processing, supporting the theory of embodied cognition. However, whether different types of visual concepts, such as color and shape, share the same perceptual load effects is unknown. In the current experiment, 32 participants were administered simultaneous perceptual and conceptual tasks to assess the relation between perceptual load and conceptual processing. Holding a color load in mind obstructed color conceptual processing. Hence, perceptual load and conceptual processing shared the same resources, suggesting embodied cognition. Color conceptual processing was not affected by shape pictures, indicating that different types of properties within vision are processed separately.

  17. Tuned by experience: How orientation probability modulates early perceptual processing.

    PubMed

    Jabar, Syaheed B; Filipowicz, Alex; Anderson, Britt

    2017-09-01

    Probable stimuli are more often and more quickly detected. While stimulus probability is known to affect decision-making, it can also be explained as a perceptual phenomenon. Using spatial gratings, we have previously shown that probable orientations are also more precisely estimated, even while participants remained naive to the manipulation. We conducted an electrophysiological study to investigate the effect that probability has on perception and visual-evoked potentials. In line with previous studies on oddballs and stimulus prevalence, low-probability orientations were associated with a greater late positive 'P300' component which might be related to either surprise or decision-making. However, the early 'C1' component, thought to reflect V1 processing, was dampened for high-probability orientations while later P1 and N1 components were unaffected. Exploratory analyses revealed a participant-level correlation between C1 and P300 amplitudes, suggesting a link between perceptual processing and decision-making. We discuss how these probability effects could be indicative of sharpening of neurons preferring the probable orientations, due either to perceptual learning, or to feature-based attention. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Neurofeedback training of gamma band oscillations improves perceptual processing.

    PubMed

    Salari, Neda; Büchel, Christian; Rose, Michael

    2014-10-01

    In this study, a noninvasive electroencephalography-based neurofeedback method is applied to train volunteers to deliberately increase gamma band oscillations (40 Hz) in the visual cortex. Gamma band oscillations in the visual cortex play a functional role in perceptual processing. In a previous study, we were able to demonstrate that gamma band oscillations prior to stimulus presentation have a significant influence on perceptual processing of visual stimuli. In the present study, we aimed to investigate longer lasting effects of gamma band neurofeedback training on perceptual processing. For this purpose, a feedback group was trained to modulate oscillations in the gamma band, while a control group participated in a task with an identical design setting but without gamma band feedback. Before and after training, both groups participated in a perceptual object detection task and a spatial attention task. Our results clearly revealed that only the feedback group but not the control group exhibited a visual processing advantage and an increase in oscillatory gamma band activity in the pre-stimulus period of the processing of the visual object stimuli after the neurofeedback training. Results of the spatial attention task showed no difference between the groups, which underlines the specific role of gamma band oscillations for perceptual processing. In summary, our results show that modulation of gamma band activity selectively affects perceptual processing and therefore supports the relevant role of gamma band activity for this specific process. Furthermore, our results demonstrate the eligibility of gamma band oscillations as a valuable tool for neurofeedback applications.

  19. Is sequence awareness mandatory for perceptual sequence learning: An assessment using a pure perceptual sequence learning design.

    PubMed

    Deroost, Natacha; Coomans, Daphné

    2018-02-01

    We examined the role of sequence awareness in a pure perceptual sequence learning design. Participants had to react to the target's colour that changed according to a perceptual sequence. By varying the mapping of the target's colour onto the response keys, motor responses changed randomly. The effect of sequence awareness on perceptual sequence learning was determined by manipulating the learning instructions (explicit versus implicit) and assessing the amount of sequence awareness after the experiment. In the explicit instruction condition (n = 15), participants were instructed to intentionally search for the colour sequence, whereas in the implicit instruction condition (n = 15), they were left uninformed about the sequenced nature of the task. Sequence awareness after the sequence learning task was tested by means of a questionnaire and the process-dissociation procedure. The results showed that the instruction manipulation had no effect on the amount of perceptual sequence learning. Based on their self-reports of having actively applied their sequence knowledge during the experiment, participants were subsequently regrouped into a sequence strategy group (n = 14, of which 4 participants from the implicit instruction condition and 10 participants from the explicit instruction condition) and a no-sequence strategy group (n = 16, of which 11 participants from the implicit instruction condition and 5 participants from the explicit instruction condition). Only participants of the sequence strategy group showed reliable perceptual sequence learning and sequence awareness. These results indicate that perceptual sequence learning depends upon the continuous employment of strategic cognitive control processes on sequence knowledge. Sequence awareness is suggested to be a necessary but not sufficient condition for perceptual learning to take place. Copyright © 2018 Elsevier B.V. All rights reserved.
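The process-dissociation procedure mentioned above separates controlled (aware) from automatic influences by contrasting performance under inclusion and exclusion instructions. The standard Jacoby-style estimates can be sketched as follows; the example probabilities are invented, and how the paper applied the procedure to sequence knowledge is not detailed here.

```python
def process_dissociation(p_inclusion, p_exclusion):
    """Standard process-dissociation estimates: controlled (intentional)
    contribution C = I - E, automatic contribution A = E / (1 - C)."""
    controlled = p_inclusion - p_exclusion
    automatic = (p_exclusion / (1 - controlled)
                 if controlled < 1 else float("nan"))
    return controlled, automatic

# Invented example: 80% correct under inclusion, 30% under exclusion
controlled, automatic = process_dissociation(0.8, 0.3)
```

A controlled estimate reliably above zero is the usual signature of sequence awareness, which is what distinguishes the sequence strategy group here.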

  20. Guiding Visual Attention in Decision Making--Verbal Instructions versus Flicker Cueing

    ERIC Educational Resources Information Center

    Canal-Bruland, Rouwen

    2009-01-01

    Perceptual-cognitive processes play an important role in open, fast-paced, interceptive sports such as tennis, basketball, and soccer. Visual information processing has been shown to distinguish skilled from less skilled athletes. Research on the perceptual demands of sports performance has raised questions regarding athletes' visual information…

  1. Importance of perceptual representation in the visual control of action

    NASA Astrophysics Data System (ADS)

    Loomis, Jack M.; Beall, Andrew C.; Kelly, Jonathan W.; Macuga, Kristen L.

    2005-03-01

    In recent years, many experiments have demonstrated that optic flow is sufficient for visually controlled action, with the suggestion that perceptual representations of 3-D space are superfluous. In contrast, recent research in our lab indicates that some visually controlled actions, including some thought to be based on optic flow, are indeed mediated by perceptual representations. For example, we have demonstrated that people are able to perform complex spatial behaviors, like walking, driving, and object interception, in virtual environments which are rendered visible solely by cyclopean stimulation (random-dot cinematograms). In such situations, the absence of any retinal optic flow that is correlated with the objects and surfaces within the virtual environment means that people are using stereo-based perceptual representations to perform the behavior. The fact that people can perform such behaviors without training suggests that the perceptual representations are likely the same as those used when retinal optic flow is present. Other research indicates that optic flow, whether retinal or a more abstract property of the perceptual representation, is not the basis for postural control, because postural instability is related to perceived relative motion between self and the visual surroundings rather than to optic flow, even in the abstract sense.

  2. Perceptual load corresponds with factors known to influence visual search.

    PubMed

    Roper, Zachary J J; Cosman, Joshua D; Vecera, Shaun P

    2013-10-01

    One account of the early versus late selection debate in attention proposes that perceptual load determines the locus of selection. Attention selects stimuli at a late processing level under low-load conditions but selects stimuli at an early level under high-load conditions. Despite the successes of perceptual load theory, a noncircular definition of perceptual load remains elusive. We investigated the factors that influence perceptual load by using manipulations that have been studied extensively in visual search, namely target-distractor similarity and distractor-distractor similarity. Consistent with previous work, search was most efficient when targets and distractors were dissimilar and the displays contained homogeneous distractors; search became less efficient when target-distractor similarity increased irrespective of display heterogeneity. Importantly, we used these same stimuli in a typical perceptual load task that measured attentional spillover to a task-irrelevant flanker. We found a strong correspondence between search efficiency and perceptual load; stimuli that generated efficient searches produced flanker interference effects, suggesting that such displays involved low perceptual load. Flanker interference effects were reduced in displays that produced less efficient searches. Furthermore, our results demonstrate that search difficulty, as measured by search intercept, has little bearing on perceptual load. We conclude that rather than be arbitrarily defined, perceptual load might be defined by well-characterized, continuous factors that influence visual search. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  3. [The Impact of Visual Perceptual Abilities on the Performance on the Wechsler Nonverbal Scale of Ability (WNV)].

    PubMed

    Werpup-Stüwe, L; Petermann, F; Daseking, M

    2015-10-01

    The use of psychometric tests with children and adolescents is especially important in psychological diagnostics. Nonverbal intelligence tests are very often used to diagnose psychological abnormalities and generate developmental prognoses independent of the child's verbal abilities. The correlation of the German version of the Developmental Test of Visual Perception - Adolescents and Adults (DTVP-A) with the Wechsler Nonverbal Scale of Ability (WNV) was calculated based on the results of 172 children, adolescents and young adults aged 9-21 years. Furthermore, it was examined whether individuals with poor visual perceptual abilities scored lower on the WNV than healthy subjects. The correlations of the results scored on the DTVP-A and WNV ranged from moderate to strong. The group with poor visual perceptual abilities scored significantly lower on the WNV than the control group. Nonverbal intelligence tests like the WNV are not reliable for estimating the intelligence of individuals with low visual perceptual abilities. Therefore, the intelligence of these subjects should be tested with a test that also contains verbal subtests. If poor visual perceptual abilities are suspected, then they should be tested. The DTVP-A seems to be the right instrument for achieving this goal. © Georg Thieme Verlag KG Stuttgart · New York.

  4. Size Constancy in Bat Biosonar? Perceptual Interaction of Object Aperture and Distance

    PubMed Central

    Heinrich, Melina; Wiegrebe, Lutz

    2013-01-01

    Perception and encoding of object size is an important feature of sensory systems. In the visual system object size is encoded by the visual angle (visual aperture) on the retina, but the aperture depends on the distance of the object. As object distance is not unambiguously encoded in the visual system, higher computational mechanisms are needed. This phenomenon is termed “size constancy”. It is assumed to reflect an automatic re-scaling of visual aperture with perceived object distance. Recently, it was found that in echolocating bats, the ‘sonar aperture’, i.e., the range of angles from which sound is reflected from an object back to the bat, is unambiguously perceived and neurally encoded. Moreover, it is well known that object distance is accurately perceived and explicitly encoded in bat sonar. Here, we addressed size constancy in bat biosonar, recruiting virtual-object techniques. Bats of the species Phyllostomus discolor learned to discriminate two simple virtual objects that only differed in sonar aperture. Upon successful discrimination, test trials were randomly interspersed using virtual objects that differed in both aperture and distance. It was tested whether the bats spontaneously assigned absolute width information to these objects by combining distance and aperture. The results showed that while the isolated perceptual cues encoding object width, aperture, and distance were all perceptually well resolved by the bats, the animals did not assign absolute width information to the test objects. This lack of sonar size constancy may result from the bats relying on different modalities to extract size information at different distances. Alternatively, it is conceivable that familiarity with a behaviorally relevant, conspicuous object is required for sonar size constancy, as it has been argued for visual size constancy. Based on the current data, it appears that size constancy is not necessarily an essential feature of sonar perception in bats. 
PMID:23630598

  6. Increases in the autistic trait of attention to detail are associated with decreased multisensory temporal adaptation.

    PubMed

    Stevenson, Ryan A; Toulmin, Jennifer K; Youm, Ariana; Besney, Richard M A; Schulz, Samantha E; Barense, Morgan D; Ferber, Susanne

    2017-10-30

    Recent empirical evidence suggests that autistic individuals perceive the world differently than their typically-developed peers. One theoretical account, the predictive coding hypothesis, posits that autistic individuals show a decreased reliance on previous perceptual experiences, which may relate to autism symptomatology. We tested this through a well-characterized, audiovisual statistical-learning paradigm in which typically-developed participants were first adapted to consistent temporal relationships between audiovisual stimulus pairs (audio-leading, synchronous, visual-leading) and then performed a simultaneity judgement task with audiovisual stimulus pairs varying in temporal offset from auditory-leading to visual-leading. Following exposure to the visual-leading adaptation phase, participants' perception of synchrony was biased towards visual-leading presentations, reflecting the statistical regularities of their previously experienced environment. Importantly, the strength of adaptation was significantly related to the level of autistic traits that the participant exhibited, measured by the Autism Quotient (AQ). This was specific to the Attention to Detail subscale of the AQ that assesses the perceptual propensity to focus on fine-grain aspects of sensory input at the expense of more integrative perceptions. More severe Attention to Detail was related to weaker adaptation. These results support the predictive coding framework, and suggest that changes in sensory perception commonly reported in autism may contribute to autistic symptomatology.

  7. Task-relevant perceptual features can define categories in visual memory too.

    PubMed

    Antonelli, Karla B; Williams, Carrick C

    2017-11-01

    Although Konkle, Brady, Alvarez, and Oliva (2010, Journal of Experimental Psychology: General, 139(3), 558) claim that visual long-term memory (VLTM) is organized on underlying conceptual, not perceptual, information, visual memory results from visual search tasks are not well explained by this theory. We hypothesized that when viewing an object, any task-relevant visual information is critical to the organizational structure of VLTM. In two experiments, we examined the organization of VLTM by measuring the amount of retroactive interference created by objects possessing different combinations of task-relevant features. Based on task instructions, only the conceptual category was task relevant or both the conceptual category and a perceptual object feature were task relevant. Findings indicated that when made task relevant, perceptual object feature information, along with conceptual category information, could affect memory organization for objects in VLTM. However, when perceptual object feature information was task irrelevant, it did not contribute to memory organization; instead, memory defaulted to being organized around conceptual category information. These findings support the theory that a task-defined organizational structure is created in VLTM based on the relevance of particular object features and information.

  8. An exploratory study: prolonged periods of binocular stimulation can provide an effective treatment for childhood amblyopia.

    PubMed

    Knox, Pamela J; Simmers, Anita J; Gray, Lyle S; Cleary, Marie

    2012-02-21

    The purpose of the present study was to explore the potential for treating childhood amblyopia with a binocular stimulus designed to correlate the visual input from both eyes. Eight strabismic, two anisometropic, and four strabismic and anisometropic amblyopes (mean age, 8.5 ± 2.6 years) undertook a dichoptic perceptual learning task for five sessions (each lasting 1 hour) over the course of a week. The training paradigm involved a simple computer game, which required the subject to use both eyes to perform the task. A statistically significant improvement (t(₁₃) = 5.46; P = 0.0001) in the mean visual acuity (VA) of the amblyopic eye (AE) was demonstrated, from 0.51 ± 0.27 logMAR before training to 0.42 ± 0.28 logMAR after training, with six subjects gaining 0.1 logMAR or more of improvement. Measurable stereofunction was established for the first time in three subjects, with an overall significant mean improvement in stereoacuity after training (t(₁₃) = 2.64; P = 0.02). The dichoptic-based perceptual learning therapy employed in the present study improved both the monocular VA of the AE and stereofunction, verifying the feasibility of a binocular approach in the treatment of childhood amblyopia.
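As context for the logMAR values reported above: logMAR and decimal acuity are related by acuity = 10^(−logMAR), so lower logMAR means better vision. The helper below is not from the study; it simply makes the reported mean gain concrete.

```python
def logmar_to_decimal(logmar):
    """Decimal visual acuity corresponding to a logMAR score
    (logMAR 0.0 = decimal 1.0, i.e. 6/6 or 20/20)."""
    return 10 ** (-logmar)

# The reported mean change, 0.51 -> 0.42 logMAR, in decimal acuity:
before = logmar_to_decimal(0.51)
after = logmar_to_decimal(0.42)
```

On this scale the 0.09 logMAR mean gain corresponds to roughly a 23% relative improvement in decimal acuity, and one logMAR line (0.1) is a factor of 10^0.1 ≈ 1.26.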

  9. Cognitive load effects on early visual perceptual processing.

    PubMed

    Liu, Ping; Forte, Jason; Sewell, David; Carter, Olivia

    2018-05-01

    Contrast-based early visual processing has largely been considered to involve autonomous processes that do not need the support of cognitive resources. However, as spatial attention is known to modulate early visual perceptual processing, we explored whether cognitive load could similarly impact contrast-based perception. We used a dual-task paradigm to assess the impact of a concurrent working memory task on the performance of three different early visual tasks. The results from Experiment 1 suggest that cognitive load can modulate early visual processing. No effects of cognitive load were seen in Experiments 2 or 3. Together, the findings provide evidence that under some circumstances cognitive load effects can penetrate the early stages of visual processing and that higher cognitive function and early perceptual processing may not be as independent as was once thought.

  10. A Tangent Bundle Theory for Visual Curve Completion.

    PubMed

    Ben-Yosef, Guy; Ben-Shahar, Ohad

    2012-07-01

    Visual curve completion is a fundamental perceptual mechanism that completes the missing parts (e.g., due to occlusion) between observed contour fragments. Previous research into the shape of completed curves has generally followed an "axiomatic" approach, where desired perceptual/geometrical properties are first defined as axioms, followed by mathematical investigation into curves that satisfy them. However, determining such desired properties psychophysically is difficult and researchers still debate what they should be in the first place. Instead, here we exploit the observation that curve completion is an early visual process to formalize the problem in the unit tangent bundle R² × S¹, which abstracts the primary visual cortex (V1) and facilitates exploration of basic principles from which perceptual properties are later derived rather than imposed. Exploring here the elementary principle of least action in V1, we show how the problem becomes one of finding minimum-length admissible curves in R² × S¹. We formalize the problem in variational terms, we analyze it theoretically, and we formulate practical algorithms for the reconstruction of these completed curves. We then explore their induced visual properties vis-à-vis popular perceptual axioms and show how our theory predicts many perceptual properties reported in the corresponding perceptual literature. Finally, we demonstrate a variety of curve completions and report comparisons to psychophysical data and other completion models.
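
The minimum-length formulation in the unit tangent bundle can be written in a common sub-Riemannian form used in related V1 models; this is a sketch, with β a weighting constant not specified in the abstract, and the exact functional of Ben-Yosef and Ben-Shahar may differ:

```latex
% A planar curve (x(t), y(t)) lifts to R^2 x S^1 through its tangent
% direction theta(t); completion then minimizes length in the bundle:
\min_{\gamma}\; \int_0^1 \sqrt{\dot{x}(t)^2 + \dot{y}(t)^2 + \beta^2\,\dot{\theta}(t)^2}\; dt
\qquad \text{subject to}\quad \dot{x}\sin\theta - \dot{y}\cos\theta = 0,
```

where the admissibility constraint forces θ to remain the curve's direction of motion, and boundary conditions fix position and orientation at the two inducing contour endpoints.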

  11. Perceptual Averaging in Individuals with Autism Spectrum Disorder.

    PubMed

    Corbett, Jennifer E; Venuti, Paola; Melcher, David

    2016-01-01

    There is mounting evidence that observers rely on statistical summaries of visual information to maintain stable and coherent perception. Sensitivity to the mean (or other prototypical value) of a visual feature (e.g., mean size) appears to be a pervasive process in human visual perception. Previous studies in individuals diagnosed with Autism Spectrum Disorder (ASD) have uncovered characteristic patterns of visual processing that suggest they may rely more on enhanced local representations of individual objects instead of computing such perceptual averages. To further explore the fundamental nature of abstract statistical representation in visual perception, we investigated perceptual averaging of mean size in a group of 12 high-functioning individuals diagnosed with ASD using simplified versions of two identification and adaptation tasks that elicited characteristic perceptual averaging effects in a control group of neurotypical participants. In Experiment 1, participants performed with above chance accuracy in recalling the mean size of a set of circles (mean task) despite poor accuracy in recalling individual circle sizes (member task). In Experiment 2, their judgments of single circle size were biased by mean size adaptation. Overall, these results suggest that individuals with ASD perceptually average information about sets of objects in the surrounding environment. Our results underscore the fundamental nature of perceptual averaging in vision, and further our understanding of how autistic individuals make sense of the external environment.

  12. Age-related changes in selective attention and perceptual load during visual search.

    PubMed

    Madden, David J; Langley, Linda K

    2003-03-01

    Three visual search experiments were conducted to test the hypothesis that age differences in selective attention vary as a function of perceptual load (E. A. Maylor & N. Lavie, 1998). Under resource-limited conditions (Experiments 1 and 2), the distraction from irrelevant display items generally decreased as display size (perceptual load) increased. This perceptual load effect was similar for younger and older adults, contrary to the findings of Maylor and Lavie. Distraction at low perceptual loads appeared to reflect both general and specific inhibitory mechanisms. Under more data-limited conditions (Experiment 3), an age-related decline in selective attention was evident, but the age difference was not attributable to capacity limitations as predicted by the perceptual load theory.

  13. Explaining neural signals in human visual cortex with an associative learning model.

    PubMed

    Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias

    2012-08-01

    "Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.

  14. Priming for performance: valence of emotional primes interact with dissociable prototype learning systems.

    PubMed

    Gorlick, Marissa A; Maddox, W Todd

    2013-01-01

    Arousal Biased Competition theory suggests that arousal enhances competitive attentional processes, but makes no strong claims about valence effects. Research suggests that the scope of enhanced attention depends on valence with negative arousal narrowing and positive arousal broadening attention. Attentional scope likely affects declarative-memory-mediated and perceptual-representation-mediated learning systems differently, with declarative-memory-mediated learning depending on narrow attention to develop targeted verbalizable rules, and perceptual-representation-mediated learning depending on broad attention to develop a perceptual representation. We hypothesize that negative arousal accentuates declarative-memory-mediated learning and attenuates perceptual-representation-mediated learning, while positive arousal reverses this pattern. Prototype learning provides an ideal test bed as dissociable declarative-memory and perceptual-representation systems mediate two-prototype (AB) and one-prototype (AN) prototype learning, respectively, and computational models are available that provide powerful insights on cognitive processing. As predicted, we found that negative arousal narrows attentional focus facilitating AB learning and impairing AN learning, while positive arousal broadens attentional focus facilitating AN learning and impairing AB learning.

  15. Priming for Performance: Valence of Emotional Primes Interact with Dissociable Prototype Learning Systems

    PubMed Central

    Gorlick, Marissa A.; Maddox, W. Todd

    2013-01-01

    Arousal Biased Competition theory suggests that arousal enhances competitive attentional processes, but makes no strong claims about valence effects. Research suggests that the scope of enhanced attention depends on valence with negative arousal narrowing and positive arousal broadening attention. Attentional scope likely affects declarative-memory-mediated and perceptual-representation-mediated learning systems differently, with declarative-memory-mediated learning depending on narrow attention to develop targeted verbalizable rules, and perceptual-representation-mediated learning depending on broad attention to develop a perceptual representation. We hypothesize that negative arousal accentuates declarative-memory-mediated learning and attenuates perceptual-representation-mediated learning, while positive arousal reverses this pattern. Prototype learning provides an ideal test bed as dissociable declarative-memory and perceptual-representation systems mediate two-prototype (AB) and one-prototype (AN) prototype learning, respectively, and computational models are available that provide powerful insights on cognitive processing. As predicted, we found that negative arousal narrows attentional focus facilitating AB learning and impairing AN learning, while positive arousal broadens attentional focus facilitating AN learning and impairing AB learning. PMID:23646101

  16. Relationship of Perceptual Learning Styles and Academic Achievement among High School Students

    ERIC Educational Resources Information Center

    Rani, K. V.

    2016-01-01

    Perceptual learning styles are the different ways in which people process information in the course of learning, and are intimately involved in producing more effective responses to stimuli. The objective of the study was to examine the correlation of perceptual learning style, both overall and across its dimensions, with academic achievement.…

  17. Auditory-Perceptual Learning Improves Speech Motor Adaptation in Children

    PubMed Central

    Shiller, Douglas M.; Rochon, Marie-Lyne

    2015-01-01

    Auditory feedback plays an important role in children’s speech development by providing the child with information about speech outcomes that is used to learn and fine-tune speech motor plans. The use of auditory feedback in speech motor learning has been extensively studied in adults by examining oral motor responses to manipulations of auditory feedback during speech production. Children are also capable of adapting speech motor patterns to perceived changes in auditory feedback; however, it is not known whether their capacity for motor learning is limited by immature auditory-perceptual abilities. Here, the link between speech perceptual ability and the capacity for motor learning was explored in two groups of 5–7-year-old children who underwent a period of auditory perceptual training followed by tests of speech motor adaptation to altered auditory feedback. One group received perceptual training on a speech acoustic property relevant to the motor task while a control group received perceptual training on an irrelevant speech contrast. Learned perceptual improvements led to an enhancement in speech motor adaptation (proportional to the perceptual change) only for the experimental group. The results indicate that children’s ability to perceive relevant speech acoustic properties has a direct influence on their capacity for sensory-based speech motor adaptation. PMID:24842067

  18. Region-Specific Slowing of Alpha Oscillations is Associated with Visual-Perceptual Abilities in Children Born Very Preterm

    PubMed Central

    Doesburg, Sam M.; Moiseev, Alexander; Herdman, Anthony T.; Ribary, Urs; Grunau, Ruth E.

    2013-01-01

    Children born very preterm (≤32 weeks gestational age) without major intellectual or neurological impairments often express selective deficits in visual-perceptual abilities. The alterations in neurophysiological development underlying these problems, however, remain poorly understood. Recent research has indicated that spontaneous alpha oscillations are slowed in children born very preterm, and that atypical alpha-mediated functional network connectivity may underlie selective developmental difficulties in visual-perceptual ability in this group. The present study provides the first source-resolved analysis of slowing of spontaneous alpha oscillations in very preterm children, indicating alterations in a distributed set of brain regions concentrated in posterior parietal and inferior temporal areas associated with visual perception, as well as prefrontal cortical regions and thalamus. We also uniquely demonstrate that slowing of alpha oscillations is associated with selective difficulties in visual-perceptual ability in very preterm children. These results indicate that region-specific slowing of alpha oscillations contributes to selective developmental difficulties prevalent in this population. PMID:24298250

  19. Seeing Fluid Physics via Visual Expertise Training

    NASA Astrophysics Data System (ADS)

    Hertzberg, Jean; Goodman, Katherine; Curran, Tim

    2016-11-01

    In a course on Flow Visualization, students often reported that their perception of fluid flows had improved, implying the acquisition of a type of visual expertise akin to that of radiologists or dog show judges. In the first steps towards measuring this expertise, we emulated an experimental design from psychology. The study had two groups of participants: "novices" with no formal fluids education, and "experts" who had passed at least one fluid mechanics course. All participants were trained to place static images of fluid flows into two categories (laminar and turbulent). Half the participants were trained on flow images with a specific format (von Kármán vortex streets), and the other half on a broader group. Novices' results were in line with past perceptual expertise studies, showing that it is easier to transfer learning from a broad category to a new specific format than vice versa. In contrast, experts did not have a significant difference between training conditions, suggesting the experts did not undergo the same learning process as the novices. We theorize that expert subjects were able to access their conceptual knowledge about fluids to perform this new, visual task. This finding supports new ways of understanding conceptual learning.

  20. Perceptual adaptation in the use of night vision goggles

    NASA Technical Reports Server (NTRS)

    Durgin, Frank H.; Proffitt, Dennis R.

    1992-01-01

    The image intensification (I²) systems studied for this report were the biocular AN/PVS-7 (NVG) and the binocular AN/AVS-6 (ANVIS). Both are quite impressive for purposes of revealing the structure of the environment in a fairly straightforward way in extremely low-light conditions. But these systems represent an unusual viewing medium. The perceptual information available through I² systems differs in a variety of ways from the typical input of everyday vision, and extensive training and practice are required for optimal use. Using this sort of system involves a kind of perceptual skill learning, but it may also involve visual adaptations that are not simply an extension of normal vision. For example, the visual noise evident in the goggles in very low-light conditions results in unusual statistical properties in the visual input. Because we had recently discovered a strong and enduring aftereffect of perceived texture density which seemed to be sensitive to precisely the sorts of statistical distortions introduced by I² systems, it occurred to us that visual noise of this sort might be a strongly adapting stimulus for texture density and produce an aftereffect that extended into normal vision once the goggles were removed. We have not found any experimental evidence that I² systems produce texture density aftereffects. The nature of the texture density aftereffect is briefly explained, followed by an accounting of our studies of I² systems and our most recent work on the texture density aftereffect. A test for spatial frequency adaptation after exposure to NVGs is also reported, as is a study of perceived depth from motion (motion parallax) while wearing the biocular goggles. We conclude with a summary of our findings.

  1. Opposite Influence of Perceptual Memory on Initial and Prolonged Perception of Sensory Ambiguity

    PubMed Central

    de Jong, Maartje Cathelijne; Knapen, Tomas; van Ee, Raymond

    2012-01-01

    Observers continually make unconscious inferences about the state of the world based on ambiguous sensory information. This process of perceptual decision-making may be optimized by learning from experience. We investigated the influence of previous perceptual experience on the interpretation of ambiguous visual information. Observers were pre-exposed to a perceptually stabilized sequence of an ambiguous structure-from-motion stimulus by means of intermittent presentation. At the subsequent re-appearance of the same ambiguous stimulus perception was initially biased toward the previously stabilized perceptual interpretation. However, prolonged viewing revealed a bias toward the alternative perceptual interpretation. The prevalence of the alternative percept during ongoing viewing was largely due to increased durations of this percept, as there was no reliable decrease in the durations of the pre-exposed percept. Moreover, the duration of the alternative percept was modulated by the specific characteristics of the pre-exposure, whereas the durations of the pre-exposed percept were not. The increase in duration of the alternative percept was larger when the pre-exposure had lasted longer and was larger after ambiguous pre-exposure than after unambiguous pre-exposure. Using a binocular rivalry stimulus we found analogous perceptual biases, while pre-exposure did not affect eye-bias. We conclude that previously perceived interpretations dominate at the onset of ambiguous sensory information, whereas alternative interpretations dominate prolonged viewing. Thus, at first instance ambiguous information seems to be judged using familiar percepts, while re-evaluation later on allows for alternative interpretations. PMID:22295095

  2. Handwriting Error Patterns of Children with Mild Motor Difficulties.

    ERIC Educational Resources Information Center

    Malloy-Miller, Theresa; And Others

    1995-01-01

    A test of handwriting legibility and 6 perceptual-motor tests were completed by 66 children ages 7-12. Among handwriting error patterns, execution was associated with visual-motor skill and sensory discrimination, aiming with visual-motor and fine-motor skills. The visual-spatial factor had no significant association with perceptual-motor…

  3. Perceptual load in different regions of the visual scene and its relevance for driving.

    PubMed

    Marciano, Hadas; Yeshurun, Yaffa

    2015-06-01

    The aim of this study was to better understand the role played by perceptual load, at both central and peripheral regions of the visual scene, in driving safety. Attention is a crucial factor in driving safety, and previous laboratory studies suggest that perceptual load is an important factor determining the efficiency of attentional selectivity. Yet, the effects of perceptual load on driving were never studied systematically. Using a driving simulator, we orthogonally manipulated the load levels at the road (central load) and its sides (peripheral load), while occasionally introducing critical events at one of these regions. Perceptual load affected driving performance at both regions of the visual scene. Critically, the effect was different for central versus peripheral load: Whereas load levels on the road mainly affected driving speed, load levels on its sides mainly affected the ability to detect critical events initiating from the roadsides. Moreover, higher levels of peripheral load impaired performance but mainly with low levels of central load, replicating findings with simple letter stimuli. Perceptual load has a considerable effect on driving, but the nature of this effect depends on the region of the visual scene at which the load is introduced. Given the observed importance of perceptual load, authors of future studies of driving safety should take it into account. Specifically, these findings suggest that our understanding of factors that may be relevant for driving safety would benefit from studying these factors under different levels of load at different regions of the visual scene. © 2014, Human Factors and Ergonomics Society.

  4. Visual-perceptual impairment in children with cerebral palsy: a systematic review.

    PubMed

    Ego, Anne; Lidzba, Karen; Brovedani, Paola; Belmonti, Vittorio; Gonzalez-Monge, Sibylle; Boudia, Baya; Ritz, Annie; Cans, Christine

    2015-04-01

    Visual perception is one of the cognitive functions often impaired in children with cerebral palsy (CP). The aim of this systematic literature review was to assess the frequency of visual-perceptual impairment (VPI) and its relationship with patient characteristics. Eligible studies were relevant papers assessing visual perception with five common standardized assessment instruments in children with CP published from January 1990 to August 2011. Of the 84 studies selected, 15 were retained. In children with CP, the proportion of VPI ranged from 40% to 50% and the mean visual perception quotient from 70 to 90. None of the studies reported a significant influence of CP subtype, IQ level, side of motor impairment, neuro-ophthalmological outcomes, or seizures. The severity of neuroradiological lesions seemed associated with VPI. The influence of prematurity was controversial, but a lower gestational age was more often associated with lower visual motor skills than with decreased visual-perceptual abilities. The impairment of visual perception in children with CP should be considered a core disorder within the CP syndrome. Further research, including a more systematic approach to neuropsychological testing, is needed to explore the specific impact of CP subgroups and of neuroradiological features on visual-perceptual development. © 2015 The Authors. Developmental Medicine & Child Neurology © 2015 Mac Keith Press.

  5. Spatial integration and cortical dynamics.

    PubMed

    Gilbert, C D; Das, A; Ito, M; Kapadia, M; Westheimer, G

    1996-01-23

    Cells in adult primary visual cortex are capable of integrating information over much larger portions of the visual field than was originally thought. Moreover, their receptive field properties can be altered by the context within which local features are presented and by changes in visual experience. The substrate for both spatial integration and cortical plasticity is likely to be found in a plexus of long-range horizontal connections, formed by cortical pyramidal cells, which link cells within each cortical area over distances of 6-8 mm. The relationship between horizontal connections and cortical functional architecture suggests a role in visual segmentation and spatial integration. The distribution of lateral interactions within striate cortex was visualized with optical recording, and their functional consequences were explored by using comparable stimuli in human psychophysical experiments and in recordings from alert monkeys. They may represent the substrate for perceptual phenomena such as illusory contours, surface fill-in, and contour saliency. The dynamic nature of receptive field properties and cortical architecture has been seen over time scales ranging from seconds to months. One can induce a remapping of the topography of visual cortex by making focal binocular retinal lesions. Shorter-term plasticity of cortical receptive fields was observed following brief periods of visual stimulation. The mechanisms involved entailed, for the short-term changes, altering the effectiveness of existing cortical connections, and for the long-term changes, sprouting of axon collaterals and synaptogenesis. The mutability of cortical function implies a continual process of calibration and normalization of the perception of visual attributes that is dependent on sensory experience throughout adulthood and might further represent the mechanism of perceptual learning.

  6. Learning to Read an Alphabet of Human Faces Produces Left-lateralized Training Effects in the Fusiform Gyrus

    PubMed Central

    Moore, Michelle W.; Durisko, Corrine; Perfetti, Charles A.; Fiez, Julie A.

    2014-01-01

    Numerous functional neuroimaging studies have shown that most orthographic stimuli, such as printed English words, produce a left-lateralized response within the fusiform gyrus (FG) at a characteristic location termed the visual word form area (VWFA). We developed an experimental alphabet (FaceFont) comprising 35 face–phoneme pairs to disentangle phonological and perceptual influences on the lateralization of orthographic processing within the FG. Using functional imaging, we found that a region in the vicinity of the VWFA responded to FaceFont words more strongly in trained versus untrained participants, whereas no differences were observed in the right FG. The trained response magnitudes in the left FG region correlated with behavioral reading performance, providing strong evidence that the neural tissue recruited by training supported the newly acquired reading skill. These results indicate that the left lateralization of the orthographic processing is not restricted to stimuli with particular visual-perceptual features. Instead, lateralization may occur because the anatomical projections in the vicinity of the VWFA provide a unique interconnection between the visual system and left-lateralized language areas involved in the representation of speech. PMID:24168219

  7. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking

    PubMed Central

    Peel, Hayden J.; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A.

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features. PMID:29725292

  8. Parafoveal magnification: visual acuity does not modulate the perceptual span in reading.

    PubMed

    Miellet, Sébastien; O'Donnell, Patrick J; Sereno, Sara C

    2009-06-01

    Models of eye guidance in reading rely on the concept of the perceptual span: the amount of information perceived during a single eye fixation, which is considered to be a consequence of visual and attentional constraints. To directly investigate attentional mechanisms underlying the perceptual span, we implemented a new reading paradigm, parafoveal magnification (PM), that compensates for how visual acuity drops off as a function of retinal eccentricity. On each fixation and in real time, parafoveal text is magnified to equalize its perceptual impact with that of concurrent foveal text. Experiment 1 demonstrated that PM does not increase the amount of text that is processed, supporting an attentional-based account of eye movements in reading. Experiment 2 explored a contentious issue that differentiates competing models of eye movement control and showed that, even when parafoveal information is enlarged, visual attention in reading is allocated in a serial fashion from word to word.
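
The acuity compensation behind PM is conventionally modeled with eccentricity scaling, where stimulus size grows linearly with distance from fixation. A sketch assuming the common linear form F = 1 + e/e₂ follows; the e₂ value is hypothetical, and the abstract does not specify the paper's actual scaling function.

```python
def magnification_scale(eccentricity_deg, e2=2.0):
    """Size multiplier intended to equate peripheral and foveal visibility,
    assuming the common linear scaling F = 1 + e/e2 (e2 is hypothetical here)."""
    return 1.0 + eccentricity_deg / e2

# With e2 = 2 deg, text at 4 deg eccentricity is rendered at 3x foveal size.
scale = magnification_scale(4.0)
```

In a PM display, each letter's rendered size would be multiplied by this factor according to its current eccentricity, recomputed on every fixation.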

  9. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking.

    PubMed

    Peel, Hayden J; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features.

  10. Young Skilled Deaf Readers Have an Enhanced Perceptual Span in Reading.

    PubMed

    Bélanger, Nathalie N; Lee, Michelle; Schotter, Elizabeth R

    2017-04-27

    Recently, Bélanger, Slattery, Mayberry and Rayner (2012) showed, using the moving window paradigm, that profoundly deaf adults have a wider perceptual span during reading relative to hearing adults matched on reading level. This difference might be related to the fact that deaf adults allocate more visual attention to simple stimuli in the parafovea (Bavelier, Dye & Hauser, 2006). Importantly, this reorganization of visual attention in deaf individuals is already manifesting in deaf children (Dye, Hauser & Bavelier, 2009). This leads to questions about the time course of the emergence of an enhanced perceptual span (which is under attentional control; Rayner, 2014; Miellet, O'Donnell, & Sereno, 2009) in young deaf readers. The present research addressed this question by comparing the perceptual spans of young deaf readers (age 7-15) and young hearing children (age 7-15). Young deaf readers, like deaf adults, were found to have a wider perceptual span relative to their hearing peers matched on reading level, suggesting that strong and early reorganization of visual attention in deaf individuals goes beyond the processing of simple visual stimuli and emerges into more cognitively complex tasks, such as reading.

  11. The neural response in short-term visual recognition memory for perceptual conjunctions.

    PubMed

    Elliott, R; Dolan, R J

    1998-01-01

    Short-term visual memory has been widely studied in humans and animals using delayed matching paradigms. The present study used positron emission tomography (PET) to determine the neural substrates of delayed matching to sample for complex abstract patterns over a 5-s delay. More specifically, the study assessed any differential neural response associated with remembering individual perceptual properties (color only and shape only) compared to conjunction between these properties. Significant activations associated with short-term visual memory (all memory conditions compared to perceptuomotor control) were observed in extrastriate cortex, medial and lateral parietal cortex, anterior cingulate, inferior frontal gyrus, and the thalamus. Significant deactivations were observed throughout the temporal cortex. Although the requirement to remember color compared to shape was associated with subtly different patterns of blood flow, the requirement to remember perceptual conjunctions between these features was not associated with additional specific activations. These data suggest that visual memory over a delay of the order of 5 s is mainly dependent on posterior perceptual regions of the cortex, with the exact regions depending on the perceptual aspect of the stimuli to be remembered.

  12. Some effects of alcohol and eye movements on cross-race face learning.

    PubMed

    Harvey, Alistair J

    2014-01-01

    This study examines the impact of acute alcohol intoxication on visual scanning in cross-race face learning. The eye movements of a group of white British participants were recorded as they encoded a series of own- and different-race faces, under alcohol and placebo conditions. Intoxication reduced the rate and extent of visual scanning during face encoding, reorienting the focus of foveal attention away from the eyes and towards the nose. Differences in encoding eye movements also varied between own- and different-race face conditions as a function of alcohol. Fixations to both face types were less frequent and more lingering following intoxication, but in the placebo condition this was only the case for different-race faces. While reducing visual scanning, however, alcohol had no adverse effect on memory; only encoding restrictions associated with sober different-race face processing led to poorer recognition. These results support perceptual expertise accounts of own-race face processing, but suggest the previously published adverse effects of alcohol on face learning are not caused by foveal encoding restrictions. The implications of these findings for alcohol myopia theory are discussed.

  13. Cortical visual prostheses: from microstimulation to functional percept

    NASA Astrophysics Data System (ADS)

    Najarpour Foroushani, Armin; Pack, Christopher C.; Sawan, Mohamad

    2018-04-01

    Cortical visual prostheses are intended to restore vision by targeted electrical stimulation of the visual cortex. The perception of spots of light, called phosphenes, resulting from microstimulation of the visual pathway, suggests the possibility of creating meaningful percepts made of phosphenes. However, to date, electrical stimulation of V1 has not resulted in the perception of phosphenated images that go beyond punctate spots of light. In this review, we summarize the clinical and experimental progress that has been made in generating phosphenes and modulating their associated perceptual characteristics in human and macaque primary visual cortex (V1). We focus specifically on the effects of different microstimulation parameters on perception and we analyse key challenges facing the generation of meaningful artificial percepts. Finally, we propose solutions to these challenges based on the application of supervised learning of population codes for spatial stimulation of visual cortex.

  14. The Psychophysics of Algebra Expertise: Mathematics Perceptual Learning Interventions Produce Durable Encoding Changes

    ERIC Educational Resources Information Center

    Bufford, Carolyn A.; Mettler, Everett; Geller, Emma H.; Kellman, Philip J.

    2014-01-01

    Mathematics requires thinking but also pattern recognition. Recent research indicates that perceptual learning (PL) interventions facilitate discovery of structure and recognition of patterns in mathematical domains, as assessed by tests of mathematical competence. Here we sought direct evidence that a brief perceptual learning module (PLM)…

  15. An assessment of domain-general metacognitive responding in rhesus monkeys.

    PubMed

    Brown, Emily Kathryn; Templer, Victoria L; Hampton, Robert R

    2017-02-01

    Metacognition is the ability to monitor and control one's cognition. Monitoring may involve either public cues or introspection of private cognitive states. We tested rhesus monkeys (Macaca mulatta) in a series of generalization tests to determine which type of cues control metacognition. In Experiment 1, monkeys learned a perceptual discrimination in which a "decline-test" response allowed them to avoid tests and receive a guaranteed small reward. Monkeys declined difficult tests more often than easy tests. In Experiments 2-4, we evaluated whether monkeys generalized this metacognitive responding to new perceptual tests. Monkeys showed a trend toward generalization in Experiments 2 & 3, and reliable generalization in Experiment 4. In Experiments 5 & 6, we presented the decline-test response in a delayed matching-to-sample task. Memory tests differed from perceptual tests in that the appearance of the test display could not control metacognitive responding. In Experiment 6, monkeys made prospective metamemory judgments before seeing the tests. Generalization across perceptual tests with different visual properties and mixed generalization from perceptual to memory tests provide provisional evidence that domain-general, private cues controlled metacognition in some monkeys. We observed individual differences in generalization, suggesting that monkeys differ in use of public and private metacognitive cues. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements

    PubMed Central

    Grossberg, Stephen; Srinivasan, Karthik; Yazdanbakhsh, Arash

    2015-01-01

    How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations. PMID:25642198

  18. The cognitive capabilities of farm animals: categorisation learning in dwarf goats (Capra hircus).

    PubMed

    Meyer, Susann; Nürnberg, Gerd; Puppe, Birger; Langbein, Jan

    2012-07-01

    The ability to establish categories enables organisms to classify stimuli, objects and events by assessing perceptual, associative or rational similarities and provides the basis for higher cognitive processing. The cognitive capabilities of farm animals are receiving increasing attention in applied ethology, a development driven primarily by scientifically based efforts to improve animal welfare. The present study investigated the learning of perceptual categories in Nigerian dwarf goats (Capra hircus) by using an automated learning device installed in the animals' pen. Thirteen group-housed goats were trained in a closed-economy approach to discriminate artificial two-dimensional symbols presented in a four-choice design. The symbols belonged to two categories: category I, black symbols with an open centre (rewarded) and category II, the same symbols but filled black (unrewarded). One symbol from category I and three different symbols from category II were used to define a discrimination problem. After the training of eight problems, the animals were presented with a transfer series containing the training problems interspersed with completely new problems made from new symbols belonging to the same categories. The results clearly demonstrate that dwarf goats are able to form categories based on similarities in the visual appearance of artificial symbols and to generalise across new symbols. However, the goats had difficulties in discriminating specific symbols. It is probable that perceptual problems caused these difficulties. Nevertheless, the present study suggests that goats housed under farming conditions have well-developed cognitive abilities, including learning of open-ended categories. This result could prove beneficial by facilitating animals' adaptation to housing environments that favour their cognitive capabilities.

  19. Transfer in motion perceptual learning depends on the difficulty of the training task.

    PubMed

    Wang, Xiaoxiao; Zhou, Yifeng; Liu, Zili

    2013-06-07

    One hypothesis in visual perceptual learning is that the amount of transfer depends on the difficulty of the training and transfer tasks (Ahissar & Hochstein, 1997; Liu, 1995, 1999). Jeter, Dosher, Petrov, and Lu (2009), using an orientation discrimination task, challenged this hypothesis by arguing that the amount of transfer depends only on the transfer task but not on the training task. Here we show in a motion direction discrimination task that the amount of transfer indeed depends on the difficulty of the training task. Specifically, participants were first trained with either 4° or 8° direction discrimination along one average direction. Their transfer performance was then tested along an average direction 90° away from the trained direction. A variety of transfer measures consistently demonstrated that transfer performance depended on whether the participants were trained on 4° or 8° directional difference. The results contradicted the prediction that transfer was independent of the training task difficulty.

  20. Visual short-term memory load strengthens selective attention.

    PubMed

    Roper, Zachary J J; Vecera, Shaun P

    2014-04-01

    Perceptual load theory accounts for many attentional phenomena; however, its mechanism remains elusive because it invokes underspecified attentional resources. Recent dual-task evidence has revealed that a concurrent visual short-term memory (VSTM) load slows visual search and reduces contrast sensitivity, but it is unknown whether a VSTM load also constricts attention in a canonical perceptual load task. If attentional selection draws upon VSTM resources, then distraction effects, which measure attentional "spill-over," will be reduced as competition for resources increases. Observers performed a low perceptual load flanker task during the delay period of a VSTM change detection task. We observed a reduction of the flanker effect in the perceptual load task as a function of increasing concurrent VSTM load. These findings were not due to perceptual-level interactions between the physical displays of the two tasks. Our findings suggest that perceptual representations of distractor stimuli compete with the maintenance of visual representations held in memory. We conclude that access to VSTM determines the degree of attentional selectivity; when VSTM is not completely taxed, it is more likely for task-irrelevant items to be consolidated and, consequently, affect responses. The "resources" hypothesized by load theory are at least partly mnemonic in nature, due to the strong correspondence they share with VSTM capacity.

  1. Enhanced perceptual functioning in autism: an update, and eight principles of autistic perception.

    PubMed

    Mottron, Laurent; Dawson, Michelle; Soulières, Isabelle; Hubert, Benedicte; Burack, Jake

    2006-01-01

    We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception of first order static stimuli, diminished perception of complex movement, autonomy of low-level information processing toward higher-order operations, and differential relation between perception and general intelligence. Increased perceptual expertise may be implicated in the choice of special ability in savant autistics, and in the variability of apparent presentations within PDD (autism with and without typical speech, Asperger syndrome) in non-savant autistics. The overfunctioning of brain regions typically involved in primary perceptual functions may explain the autistic perceptual endophenotype.

  2. Anatomical Substrates of Visual and Auditory Miniature Second-language Learning

    PubMed Central

    Newman-Norlund, Roger D.; Frey, Scott H.; Petitto, Laura-Ann; Grafton, Scott T.

    2007-01-01

    Longitudinal changes in brain activity during second language (L2) acquisition of a miniature finite-state grammar, named Wernickese, were identified with functional magnetic resonance imaging (fMRI). Participants learned either a visual sign language form or an auditory-verbal form to equivalent proficiency levels. Brain activity during sentence comprehension while hearing/viewing stimuli was assessed at low, medium, and high levels of proficiency in three separate fMRI sessions. Activation in the left inferior frontal gyrus (Broca's area) correlated positively with improving L2 proficiency, whereas activity in the right-hemisphere (RH) homologue was negatively correlated for both auditory and visual forms of the language. Activity in sequence learning areas including the premotor cortex and putamen also correlated with L2 proficiency. Modality-specific differences in the blood oxygenation level-dependent signal accompanying L2 acquisition were localized to the planum temporale (PT). Participants learning the auditory form exhibited decreasing reliance on bilateral PT sites across sessions. In the visual form, bilateral PT sites increased in activity between Session 1 and Session 2, then decreased in left PT activity from Session 2 to Session 3. Comparison of L2 laterality (as compared to L1 laterality) in auditory and visual groups failed to demonstrate greater RH lateralization for the visual versus auditory L2. These data establish a common role for Broca's area in language acquisition irrespective of the perceptual form of the language and suggest that L2s are processed similarly to first languages even when learned after the "critical period." The right frontal cortex was not preferentially recruited by visual language after accounting for phonetic/structural complexity and performance. PMID:17129186

  3. The use of head/eye-centered, hand-centered and allocentric representations for visually guided hand movements and perceptual judgments.

    PubMed

    Thaler, Lore; Todd, James T

    2009-04-01

    Two experiments are reported that were designed to measure the accuracy and reliability of both visually guided hand movements (Exp. 1) and perceptual matching judgments (Exp. 2). The specific procedure for informing subjects of the required response on each trial was manipulated so that some tasks could only be performed using an allocentric representation of the visual target; others could be performed using either an allocentric or hand-centered representation; still others could be performed based on an allocentric, hand-centered or head/eye-centered representation. Both head/eye-centered and hand-centered representations are egocentric because they specify visual coordinates with respect to the subject. The results reveal that accuracy and reliability of both motor and perceptual responses are highest when subjects direct their response towards a visible target location, which allows them to rely on a representation of the target in head/eye-centered coordinates. Systematic changes in averages and standard deviations of responses are observed when subjects cannot direct their response towards a visible target location, but have to represent target distance and direction in either hand-centered or allocentric visual coordinates instead. Subjects' motor and perceptual performance agree quantitatively well. These results strongly suggest that subjects process head/eye-centered representations differently from hand-centered or allocentric representations, but that they process visual information for motor actions and perceptual judgments together.

  4. Neural Correlates of Visual Perceptual Expertise: Evidence from Cognitive Neuroscience Using Functional Neuroimaging

    ERIC Educational Resources Information Center

    Gegenfurtner, Andreas; Kok, Ellen M.; van Geel, Koos; de Bruin, Anique B. H.; Sorger, Bettina

    2017-01-01

    Functional neuroimaging is a useful approach to study the neural correlates of visual perceptual expertise. The purpose of this paper is to review the functional-neuroimaging methods that have been implemented in previous research in this context. First, we will discuss research questions typically addressed in visual expertise research. Second,…

  5. Unconscious learning processes: mental integration of verbal and pictorial instructional materials.

    PubMed

    Kuldas, Seffetullah; Ismail, Hairul Nizam; Hashim, Shahabuddin; Bakar, Zainudin Abu

    2013-12-01

    This review aims to provide an insight into human learning processes by examining the role of cognitive and emotional unconscious processing in mentally integrating visual and verbal instructional materials. Reviewed literature shows that conscious mental integration does not happen all the time, nor does it necessarily result in optimal learning. Students of all ages and levels of experience cannot always have conscious awareness, control, and the intention to learn or promptly and continually organize perceptual, cognitive, and emotional processes of learning. This review suggests considering the role of unconscious learning processes to enhance the understanding of how students form or activate mental associations between verbal and pictorial information. The understanding would assist in presenting students with spatially-integrated verbal and pictorial instructional materials as a way of facilitating mental integration and improving teaching and learning performance.

  6. Visual perceptual and handwriting skills in children with Developmental Coordination Disorder.

    PubMed

    Prunty, Mellissa; Barnett, Anna L; Wilmut, Kate; Plumb, Mandy

    2016-10-01

    Children with Developmental Coordination Disorder demonstrate a lack of automaticity in handwriting as measured by pauses during writing. Deficits in visual perception have been proposed in the literature as underlying mechanisms of handwriting difficulties in children with DCD. The aim of this study was to examine whether correlations exist between measures of visual perception and visual motor integration with measures of the handwriting product and process in children with DCD. The performance of twenty-eight 8- to 14-year-old children who met the DSM-5 criteria for DCD was compared with 28 typically developing (TD) age- and gender-matched controls. The children completed the Developmental Test of Visual Motor Integration (VMI) and the Test of Visual Perceptual Skills (TVPS). Group comparisons were made, correlations were conducted between the visual perceptual measures and handwriting measures, and the sensitivity and specificity were examined. The DCD group performed below the TD group on the VMI and TVPS. There were no significant correlations between the VMI or TVPS and any of the handwriting measures in the DCD group. In addition, both tests demonstrated low sensitivity. Clinicians should exercise caution in using visual perceptual measures to inform them about handwriting skill in children with DCD. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  7. How to make a good animation: A grounded cognition model of how visual representation design affects the construction of abstract physics knowledge

    NASA Astrophysics Data System (ADS)

    Chen, Zhongzhou; Gladding, Gary

    2014-06-01

    Visual representations play a critical role in teaching physics. However, since we do not have a satisfactory understanding of how visual perception impacts the construction of abstract knowledge, most visual representations used in instructions are either created based on existing conventions or designed according to the instructor's intuition, which leads to a significant variance in their effectiveness. In this paper we propose a cognitive mechanism based on grounded cognition, suggesting that visual perception affects understanding by activating "perceptual symbols": the basic cognitive unit used by the brain to construct a concept. A good visual representation activates perceptual symbols that are essential for the construction of the represented concept, whereas a bad representation does the opposite. As a proof of concept, we conducted a clinical experiment in which participants received three different versions of a multimedia tutorial teaching the integral expression of electric potential. The three versions were only different by the details of the visual representation design, only one of which contained perceptual features that activate perceptual symbols essential for constructing the idea of "accumulation." On a following post-test, participants receiving this version of tutorial significantly outperformed those who received the other two versions of tutorials designed to mimic conventional visual representations used in classrooms.

  8. Size-Sensitive Perceptual Representations Underlie Visual and Haptic Object Recognition

    PubMed Central

    Craddock, Matt; Lawson, Rebecca

    2009-01-01

    A variety of similarities between visual and haptic object recognition suggests that the two modalities may share common representations. However, it is unclear whether such common representations preserve low-level perceptual features or whether transfer between vision and haptics is mediated by high-level, abstract representations. Two experiments used a sequential shape-matching task to examine the effects of size changes on unimodal and crossmodal visual and haptic object recognition. Participants felt or saw 3D plastic models of familiar objects. The two objects presented on a trial were either the same size or different sizes and were the same shape or different but similar shapes. Participants were told to ignore size changes and to match on shape alone. In Experiment 1, size changes on same-shape trials impaired performance similarly for both visual-to-visual and haptic-to-haptic shape matching. In Experiment 2, size changes impaired performance on both visual-to-haptic and haptic-to-visual shape matching and there was no interaction between the cost of size changes and direction of transfer. Together the unimodal and crossmodal matching results suggest that the same, size-specific perceptual representations underlie both visual and haptic object recognition, and indicate that crossmodal memory for objects must be at least partly based on common perceptual representations. PMID:19956685

  9. Neural representation of form-contingent color filling-in in the early visual cortex.

    PubMed

    Hong, Sang Wook; Tong, Frank

    2017-11-01

    Perceptual filling-in exemplifies the constructive nature of visual processing. Color, a prominent surface property of visual objects, can appear to spread to neighboring areas that lack any color. We investigated cortical responses to a color filling-in illusion that effectively dissociates perceived color from the retinal input (van Lier, Vergeer, & Anstis, 2009). Observers adapted to a star-shaped stimulus with alternating red- and cyan-colored points to elicit a complementary afterimage. By presenting an achromatic outline that enclosed one of the two afterimage colors, perceptual filling-in of that color was induced in the unadapted central region. Visual cortical activity was monitored with fMRI, and analyzed using multivariate pattern analysis. Activity patterns in early visual areas (V1-V4) reliably distinguished between the two color-induced filled-in conditions, but only higher extrastriate visual areas showed the predicted correspondence with color perception. Activity patterns allowed for reliable generalization between filled-in colors and physical presentations of perceptually matched colors in areas V3 and V4, but not in earlier visual areas. These findings suggest that the perception of filled-in surface color likely requires more extensive processing by extrastriate visual areas, in order for the neural representation of surface color to become aligned with perceptually matched real colors.
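
    The cross-condition generalization test described in this record (training a pattern classifier on responses to physically presented colors, then testing it on filled-in-color trials) can be sketched with synthetic data. The nearest-centroid decoder below is a hypothetical stand-in for the authors' actual multivariate pattern analysis, and every number is made up:

```python
import numpy as np

# Hypothetical sketch of cross-condition decoding: a shared underlying
# voxel pattern for each "color" generates both physical-color (training)
# and filled-in-color (test) trials. Data are synthetic.
rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 40

pattern_red = rng.normal(0, 1, n_voxels)
pattern_cyan = rng.normal(0, 1, n_voxels)

def simulate(pattern, n, noise=1.0):
    """Generate n noisy trials around a condition's mean pattern."""
    return pattern + rng.normal(0, noise, (n, n_voxels))

train_red = simulate(pattern_red, n_trials)    # physical red
train_cyan = simulate(pattern_cyan, n_trials)  # physical cyan
test_red = simulate(pattern_red, n_trials)     # filled-in red
test_cyan = simulate(pattern_cyan, n_trials)   # filled-in cyan

# Nearest-centroid decoding: label each test trial by the closer
# training-condition mean.
c_red, c_cyan = train_red.mean(axis=0), train_cyan.mean(axis=0)

def decode(trials):
    d_red = np.linalg.norm(trials - c_red, axis=1)
    d_cyan = np.linalg.norm(trials - c_cyan, axis=1)
    return d_red < d_cyan  # True -> classified as "red"

accuracy = 0.5 * (decode(test_red).mean() + (~decode(test_cyan)).mean())
print(accuracy)  # well above the 0.5 chance level when patterns generalize
```

    Above-chance accuracy on the held-out filled-in trials is the signature of a shared neural representation of perceived color; in the study this generalization held in V3 and V4 but not in earlier visual areas.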

  10. Uncovering Camouflage: Amygdala Activation Predicts Long-Term Memory of Induced Perceptual Insight

    PubMed Central

    Ludmer, Rachel; Dudai, Yadin; Rubin, Nava

    2012-01-01

    What brain mechanisms underlie learning of new knowledge from single events? We studied encoding in long-term memory of a unique type of one-shot experience, induced perceptual insight. While undergoing an fMRI brain scan, participants viewed degraded images of real-world pictures where the underlying objects were hard to recognize ('camouflage'), followed by brief exposures to the original images ('solution'), which led to induced insight ("Aha!"). A week later, participants' memory was tested; a solution image was classified as 'remembered' if detailed perceptual knowledge was elicited from the camouflage image alone. During encoding, subsequently remembered images enjoyed higher activity in mid-level visual cortex and medial frontal cortex, but most prominently in the amygdala, whose activity could be used to predict which solutions would remain in long-term memory. Our findings extend the known roles of the amygdala in memory to include promoting long-term memory of the sudden reorganization of internal representations. PMID:21382558

  11. Visual generalization in honeybees: evidence of peak shift in color discrimination.

    PubMed

    Martínez-Harms, J; Márquez, N; Menzel, R; Vorobyev, M

    2014-04-01

    In the present study, we investigated color generalization in the honeybee Apis mellifera after differential conditioning. In particular, we evaluated the effect of varying the position of a novel color along a perceptual continuum relative to familiar colors on response biases. Honeybee foragers were differentially trained to discriminate between rewarded (S+) and unrewarded (S-) colors and tested on responses toward the former S+ when presented against a novel color. A color space based on the receptor noise-limited model was used to evaluate the relationship between colors and to characterize a perceptual continuum. When S+ was tested against a novel color occupying a locus in the color space located in the same direction from S- as S+, but further away, the bees shifted their stronger response away from S- toward the novel color. These results reveal the occurrence of peak shift in the color vision of honeybees and indicate that honeybees can learn color stimuli in relational terms based on chromatic perceptual differences.
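
    The peak shift reported in this record follows the classic gradient-interaction account: an excitatory generalization gradient around S+ combined with an inhibitory gradient around S- yields a net response curve whose maximum is displaced away from S-. A minimal sketch, with all positions, widths, and weights chosen purely for illustration (none are taken from the study):

```python
import numpy as np

def net_gradient(x, s_plus=0.0, s_minus=-1.0,
                 exc_w=1.0, inh_w=0.6, sigma=1.0):
    """Net response = excitatory Gaussian around S+ minus inhibitory
    Gaussian around S- (all parameters hypothetical)."""
    excitation = exc_w * np.exp(-(x - s_plus) ** 2 / (2 * sigma ** 2))
    inhibition = inh_w * np.exp(-(x - s_minus) ** 2 / (2 * sigma ** 2))
    return excitation - inhibition

x = np.linspace(-4, 4, 801)  # perceptual continuum (e.g. a chromatic axis)
peak = x[np.argmax(net_gradient(x))]
print(peak > 0.0)  # True: peak responding is shifted beyond S+, away from S-
```

    With stimuli laid out along a chromatic axis of a receptor noise-limited color space, the same logic predicts the observed response bias toward a novel color lying further from S- than the trained S+.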

  12. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning.

    PubMed

    Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.

  14. To hear or not to hear: Voice processing under visual load.

    PubMed

    Zäske, Romi; Perlich, Marie-Christin; Schweinberger, Stefan R

    2016-07-01

    Adaptation to female voices causes subsequent voices to be perceived as more male, and vice versa. This contrastive aftereffect disappears under spatial inattention to adaptors, suggesting that voices are not encoded automatically. According to Lavie, Hirst, de Fockert, and Viding (2004), the processing of task-irrelevant stimuli during selective attention depends on perceptual resources and working memory. Possibly due to their social significance, faces may be an exceptional domain: That is, task-irrelevant faces can escape perceptual load effects. Here we tested voice processing, to study whether voice gender aftereffects (VGAEs) depend on low or high perceptual (Exp. 1) or working memory (Exp. 2) load in a relevant visual task. Participants adapted to irrelevant voices while either searching digit displays for a target (Exp. 1) or recognizing studied digits (Exp. 2). We found that the VGAE was unaffected by perceptual load, indicating that task-irrelevant voices, like faces, can also escape perceptual-load effects. Intriguingly, the VGAE was increased under high memory load. Therefore, visual working memory load, but not general perceptual load, determines the processing of task-irrelevant voices.

  15. Linking Cognitive and Visual Perceptual Decline in Healthy Aging: The Information Degradation Hypothesis

    PubMed Central

    Monge, Zachary A.; Madden, David J.

    2016-01-01

    Several hypotheses attempt to explain the relation between cognitive and perceptual decline in aging (e.g., common-cause, sensory deprivation, cognitive load on perception, information degradation). Unfortunately, the majority of past studies examining this association have used correlational analyses, not allowing for these hypotheses to be tested sufficiently. This correlational issue is especially relevant for the information degradation hypothesis, which states that degraded perceptual signal inputs, resulting from either age-related neurobiological processes (e.g., retinal degeneration) or experimental manipulations (e.g., reduced visual contrast), lead to errors in perceptual processing, which in turn may affect non-perceptual, higher-order cognitive processes. Even though the majority of studies examining the relation between age-related cognitive and perceptual decline have been correlational, we reviewed several studies demonstrating that visual manipulations affect both younger and older adults’ cognitive performance, supporting the information degradation hypothesis and contradicting implications of other hypotheses (e.g., common-cause, sensory deprivation, cognitive load on perception). The reviewed evidence indicates the necessity to further examine the information degradation hypothesis in order to identify mechanisms underlying age-related cognitive decline. PMID:27484869

  16. Perceptual Contrast Enhancement with Dynamic Range Adjustment

    PubMed Central

    Zhang, Hong; Li, Yuecheng; Chen, Hao; Yuan, Ding; Sun, Mingui

    2013-01-01

In recent years, although great efforts have been made to improve the performance of histogram equalization (HE), few HE methods take human visual perception (HVP) into account explicitly. The human visual system (HVS) is more sensitive to edges than to brightness. This paper makes use of this property and develops a perceptual contrast enhancement approach with dynamic range adjustment through histogram modification. The use of perceptual contrast connects the image enhancement problem with the HVS. To pre-condition the input image before the HE procedure is applied, a perceptual contrast map (PCM) is constructed based on a modified Difference of Gaussians (DoG) algorithm. As a result, the contrast of the image is sharpened and high-frequency noise is suppressed. A modified Clipped Histogram Equalization (CHE) is also developed, which improves visual quality by automatically detecting the dynamic range of the image with improved perceptual contrast. Experimental results show that the new HE algorithm outperforms several state-of-the-art algorithms in improving perceptual contrast and enhancing details. In addition, the new algorithm is simple to implement, making it suitable for real-time applications. PMID:24339452
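The two ingredients named in the abstract, a DoG-based contrast map and clipped histogram equalization, can be sketched as follows. This is a minimal grayscale illustration under assumed defaults (`clip_frac`, the Gaussian sigmas, and both function names are hypothetical); it does not reproduce the paper's PCM construction or dynamic-range detection:

```python
import numpy as np

def _gaussian_blur(img, sigma):
    """Separable Gaussian blur using a truncated 1-D kernel."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    blurred = np.apply_along_axis(np.convolve, 1, img.astype(float), kernel, mode='same')
    return np.apply_along_axis(np.convolve, 0, blurred, kernel, mode='same')

def dog_contrast_map(img, sigma_center=1.0, sigma_surround=3.0):
    """Difference-of-Gaussians magnitude: large at edges, small in flat areas."""
    return np.abs(_gaussian_blur(img, sigma_center) - _gaussian_blur(img, sigma_surround))

def clipped_hist_eq(img, clip_frac=0.02, bins=256):
    """Histogram equalization with a clipped histogram (CHE-style).

    Bin counts above clip_frac * img.size are clipped and the excess is
    redistributed uniformly across bins, limiting the over-enhancement
    plain HE produces in large flat regions. img must be uint8 grayscale.
    """
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    clip = int(clip_frac * img.size)
    excess = int(np.maximum(hist - clip, 0).sum())
    hist = np.minimum(hist, clip) + excess // bins
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = np.round(255 * cdf).astype(np.uint8)  # intensity lookup table
    return lut[img]

# demo: a low-contrast image gains dynamic range after equalization
rng = np.random.default_rng(0)
low_contrast = np.clip(rng.normal(128, 10, (64, 64)), 0, 255).astype(np.uint8)
enhanced = clipped_hist_eq(low_contrast)
edges = dog_contrast_map(low_contrast)
```

A full implementation in the spirit of the paper would use the contrast map to pre-condition the image before equalizing; the sketch keeps the two stages separate for clarity.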

  17. I see/hear what you mean: semantic activation in visual word recognition depends on perceptual attention.

    PubMed

    Connell, Louise; Lynott, Dermot

    2014-04-01

How does the meaning of a word affect how quickly we can recognize it? Accounts of visual word recognition allow semantic information to facilitate performance but have neglected the role of modality-specific perceptual attention in activating meaning. We predicted that modality-specific semantic information would differentially facilitate lexical decision and reading aloud, depending on how perceptual attention is implicitly directed by each task. Large-scale regression analyses showed that the perceptual modalities involved in representing a word's referent concept influence how easily that word is recognized. Both lexical decision and reading-aloud tasks direct attention toward vision, and both are faster and more accurate for strongly visual words. Reading aloud additionally directs attention toward audition and is faster and more accurate for strongly auditory words. Furthermore, the overall semantic effects are as large for reading aloud as for lexical decision and are separable from age-of-acquisition effects. These findings suggest that implicitly directing perceptual attention toward a particular modality facilitates representing modality-specific perceptual information in the meaning of a word, which in turn contributes to the lexical decision or reading-aloud response.

  18. Acting without seeing: eye movements reveal visual processing without awareness.

    PubMed

    Spering, Miriam; Carrasco, Marisa

    2015-04-01

Visual perception and eye movements are considered to be tightly linked. Diverse fields, ranging from developmental psychology to computer science, utilize eye tracking to measure visual perception. However, this prevailing view has been challenged by recent behavioral studies. Here, we review converging evidence revealing dissociations between the contents of perceptual awareness and different types of eye movement. Such dissociations reveal situations in which eye movements are sensitive to particular visual features that fail to modulate perceptual reports. We also discuss neurophysiological, neuroimaging, and clinical studies supporting the role of subcortical pathways for visual processing without awareness. Our review links awareness to perceptual-eye movement dissociations and furthers our understanding of the brain pathways underlying vision and movement with and without awareness.


  20. Task Versus Component Consistency in the Development of Automatic Processes: Consistent Attending Versus Consistent Responding.

    DTIC Science & Technology

    1982-03-01

are two qualitatively different forms of human information processing (James, 1890; Hasher & Zacks, 1979; LaBerge, 1973, 1975; Logan, 1978, 1979...Kristofferson, M. W. When item recognition and visual search functions are similar. Perception & Psychophysics, 1972, 12, 379-384. LaBerge, D. Attention and...the measurement of perceptual learning. Memory and Cognition, 1973, 1, 263-276. LaBerge, D. Acquisition of automatic processing in perceptual and

  1. Influence of audio triggered emotional attention on video perception

    NASA Astrophysics Data System (ADS)

    Torres, Freddy; Kalva, Hari

    2014-02-01

Perceptual video coding methods attempt to improve compression efficiency by discarding visual information not perceived by end users. Most current approaches to perceptual video coding use only visual features, ignoring the auditory component. Many psychophysical studies have demonstrated that auditory stimuli affect our visual perception. In this paper we present our study of audio-triggered emotional attention and its applicability to perceptual video coding. Experiments with movie clips show that the reaction time to detect video compression artifacts was longer when video was presented with the audio information. The results reported are statistically significant with p=0.024.

  2. Plasticity in the Human Speech Motor System Drives Changes in Speech Perception

    PubMed Central

    Lametti, Daniel R.; Rochet-Capellan, Amélie; Neufeld, Emily; Shiller, Douglas M.

    2014-01-01

Recent studies of human speech motor learning suggest that learning is accompanied by changes in auditory perception. But what drives the perceptual change? Is it a consequence of changes in the motor system? Or is it a result of sensory inflow during learning? Here, subjects participated in a speech motor-learning task involving adaptation to altered auditory feedback, and they were subsequently tested for perceptual change. In two separate experiments, involving two different auditory perceptual continua, we show that changes in the speech motor system that accompany learning drive changes in auditory speech perception. Specifically, we obtained changes in speech perception when adaptation to altered auditory feedback led to speech production that fell into the phonetic range of the speech perceptual tests. However, a similar change in perception was not observed when the auditory feedback that subjects received during learning fell into the phonetic range of the perceptual tests. This indicates that the central motor outflow associated with vocal sensorimotor adaptation drives changes to the perceptual classification of speech sounds. PMID:25080594

  3. Irrelevant reward and selection histories have different influences on task-relevant attentional selection.

    PubMed

    MacLean, Mary H; Giesbrecht, Barry

    2015-07-01

    Task-relevant and physically salient features influence visual selective attention. In the present study, we investigated the influence of task-irrelevant and physically nonsalient reward-associated features on visual selective attention. Two hypotheses were tested: One predicts that the effects of target-defining task-relevant and task-irrelevant features interact to modulate visual selection; the other predicts that visual selection is determined by the independent combination of relevant and irrelevant feature effects. These alternatives were tested using a visual search task that contained multiple targets, placing a high demand on the need for selectivity, and that was data-limited and required unspeeded responses, emphasizing early perceptual selection processes. One week prior to the visual search task, participants completed a training task in which they learned to associate particular colors with a specific reward value. In the search task, the reward-associated colors were presented surrounding targets and distractors, but were neither physically salient nor task-relevant. In two experiments, the irrelevant reward-associated features influenced performance, but only when they were presented in a task-relevant location. The costs induced by the irrelevant reward-associated features were greater when they oriented attention to a target than to a distractor. In a third experiment, we examined the effects of selection history in the absence of reward history and found that the interaction between task relevance and selection history differed, relative to when the features had previously been associated with reward. The results indicate that under conditions that demand highly efficient perceptual selection, physically nonsalient task-irrelevant and task-relevant factors interact to influence visual selective attention.

  4. Associative fear learning and perceptual discrimination: a perceptual pathway in the development of chronic pain.

    PubMed

    Zaman, Jonas; Vlaeyen, Johan W S; Van Oudenhove, Lukas; Wiech, Katja; Van Diest, Ilse

    2015-04-01

Recent neuropsychological theories emphasize the influence of maladaptive learning and memory processes on pain perception. However, the precise relationship between these processes as well as the underlying mechanisms remain poorly understood; especially the role of perceptual discrimination and its modulation by associative fear learning has received little attention so far. Experimental work with exteroceptive stimuli consistently points to effects of fear learning on perceptual discrimination acuity. In addition, clinical observations have revealed that in individuals with chronic pain perceptual discrimination is impaired, and that tactile discrimination training reduces pain. Based on these findings, we present a theoretical model of which the central tenet is that associative fear learning contributes to the development of chronic pain through impaired interoceptive and proprioceptive discrimination acuity.

  5. Can Attention be Divided Between Perceptual Groups?

    NASA Technical Reports Server (NTRS)

    McCann, Robert S.; Foyle, David C.; Johnston, James C.; Hart, Sandra G. (Technical Monitor)

    1994-01-01

    Previous work using Head-Up Displays (HUDs) suggests that the visual system parses the HUD and the outside world into distinct perceptual groups, with attention deployed sequentially to first one group and then the other. New experiments show that both groups can be processed in parallel in a divided attention search task, even though subjects have just processed a stimulus in one perceptual group or the other. Implications for models of visual attention will be discussed.

  6. Differential effect of visual masking in perceptual categorization.

    PubMed

    Hélie, Sébastien; Cousineau, Denis

    2015-06-01

This article explores the visual information used to categorize stimuli drawn from a common stimulus space into verbal and nonverbal categories, using two experiments. Experiment 1 explores the effect of target duration on verbal and nonverbal categorization, using backward masking to interrupt visual processing. With categories equated for difficulty at long and short target durations, intermediate target durations show an advantage for verbal categorization over nonverbal categorization. Experiment 2 tests whether the results of Experiment 1 can be explained by shorter target durations resulting in a smaller signal-to-noise ratio of the categorization stimulus. To test for this possibility, Experiment 2 used integration masking with the same stimuli, categories, and masks as Experiment 1, with a varying level of mask opacity. As predicted, low mask opacity yielded results similar to long target durations, while high mask opacity yielded results similar to short target durations. Importantly, intermediate mask opacity produced an advantage for verbal categorization over nonverbal categorization, similar to intermediate target durations. These results suggest that verbal and nonverbal categorization are affected differently by manipulations affecting the signal-to-noise ratio of the stimulus, consistent with multiple-system theories of categorization. The results further suggest that verbal categorization may be more digital (and more robust to a low signal-to-noise ratio) while the information used in nonverbal categorization may be more analog (and less robust to a low signal-to-noise ratio). This article concludes with a discussion of how these new results affect the use of masking in perceptual categorization and multiple-system theories of perceptual category learning.
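Experiment 2 treats mask opacity as a knob on the stimulus's signal-to-noise ratio. Idealizing integration masking as an alpha blend makes the claimed monotonic relationship explicit. This is a back-of-the-envelope sketch only; `blend_snr` and the blending model are assumptions, not the authors' stimuli:

```python
import numpy as np

def blend_snr(target, mask, opacity):
    """Signal-to-noise ratio of an alpha-blended composite.

    Integration masking is idealized here as
        composite = (1 - opacity) * target + opacity * mask,
    so SNR is the target's retained variance over the mask's
    contributed variance (target and mask independent, zero-mean).
    """
    signal_var = ((1 - opacity) ** 2) * np.var(target)
    noise_var = (opacity ** 2) * np.var(mask)
    return signal_var / noise_var

rng = np.random.default_rng(1)
target = rng.normal(size=10_000)
mask = rng.normal(size=10_000)
# higher mask opacity -> lower SNR, mirroring shorter target durations
snr_low, snr_mid, snr_high = (blend_snr(target, mask, a) for a in (0.2, 0.5, 0.8))
```

With equal target and mask variance the SNR is simply ((1 - opacity) / opacity)^2, which falls steeply as opacity rises.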

  7. Route Learning Impairment in Temporal Lobe Epilepsy

    PubMed Central

    Bell, Brian D.

    2012-01-01

    Memory impairment on neuropsychological tests is relatively common in temporal lobe epilepsy (TLE) patients. But memory rarely has been evaluated in more naturalistic settings. This study assessed TLE (n = 19) and control (n = 32) groups on a real-world route learning (RL) test. Compared to the controls, the TLE group committed significantly more total errors across the three RL test trials. RL errors correlated significantly with standardized auditory and visual memory and visual-perceptual test scores in the TLE group. In the TLE subset for whom hippocampal data were available (n = 14), RL errors also correlated significantly with left hippocampal volume. This is one of the first studies to demonstrate real-world memory impairment in TLE patients and its association with both mesial temporal lobe integrity and standardized memory test performance. The results support the ecological validity of clinical neuropsychological assessment. PMID:23041173

  8. The Construct Validity of Scores on a Japanese Version of the Perceptual Component of the Style Analysis Survey

    ERIC Educational Resources Information Center

    Isemonger, Ian; Watanabe, Kaoru

    2007-01-01

    This study examines the psychometrics of the perceptual component of the Style Analysis Survey (SAS) [Oxford, R.L., 1993a. "Style Analysis Survey (SAS)." University of Alabama, Tuscaloosa, AL]. The study is conducted in the context of questions over another perceptual learning-styles instrument, the "Perceptual Learning Styles Preferences…

  9. Conceptual Distinctiveness Supports Detailed Visual Long-Term Memory for Real-World Objects

    PubMed Central

    Konkle, Talia; Brady, Timothy F.; Alvarez, George A.; Oliva, Aude

    2012-01-01

    Humans have a massive capacity to store detailed information in visual long-term memory. The present studies explored the fidelity of these visual long-term memory representations and examined how conceptual and perceptual features of object categories support this capacity. Observers viewed 2,800 object images with a different number of exemplars presented from each category. At test, observers indicated which of 2 exemplars they had previously studied. Memory performance was high and remained quite high (82% accuracy) with 16 exemplars from a category in memory, demonstrating a large memory capacity for object exemplars. However, memory performance decreased as more exemplars were held in memory, implying systematic categorical interference. Object categories with conceptually distinctive exemplars showed less interference in memory as the number of exemplars increased. Interference in memory was not predicted by the perceptual distinctiveness of exemplars from an object category, though these perceptual measures predicted visual search rates for an object target among exemplars. These data provide evidence that observers’ capacity to remember visual information in long-term memory depends more on conceptual structure than perceptual distinctiveness. PMID:20677899

  10. The involvement of central attention in visual search is determined by task demands.

    PubMed

    Han, Suk Won

    2017-04-01

Attention, the mechanism by which a subset of sensory inputs is prioritized over others, operates at multiple processing stages. Specifically, attention enhances weak sensory signals at the perceptual stage, while it serves to select appropriate responses or consolidate sensory representations into short-term memory at the central stage. This study investigated the independence of and interaction between perceptual and central attention. To do so, I used a dual-task paradigm, pairing a four-alternative choice task with a visual search task. The results showed that central attention for response selection was engaged in perceptual processing for visual search when the number of search items increased, thereby increasing the demand for serial allocation of focal attention. By contrast, central attention and perceptual attention remained independent as long as the demand for serial shifting of focal attention remained constant; decreasing stimulus contrast or increasing the set size of a parallel search did not evoke the involvement of central attention in visual search. These results suggest that the nature of the concurrent visual search process plays a crucial role in the functional interaction between the two different types of attention.

  11. Training-Induced Recovery of Low-Level Vision Followed by Mid-Level Perceptual Improvements in Developmental Object and Face Agnosia

    ERIC Educational Resources Information Center

    Lev, Maria; Gilaie-Dotan, Sharon; Gotthilf-Nezri, Dana; Yehezkel, Oren; Brooks, Joseph L.; Perry, Anat; Bentin, Shlomo; Bonneh, Yoram; Polat, Uri

    2015-01-01

    Long-term deprivation of normal visual inputs can cause perceptual impairments at various levels of visual function, from basic visual acuity deficits, through mid-level deficits such as contour integration and motion coherence, to high-level face and object agnosia. Yet it is unclear whether training during adulthood, at a post-developmental…

  12. Perceptual training yields rapid improvements in visually impaired youth.

    PubMed

    Nyquist, Jeffrey B; Lappin, Joseph S; Zhang, Ruyuan; Tadin, Duje

    2016-11-30

    Visual function demands coordinated responses to information over a wide field of view, involving both central and peripheral vision. Visually impaired individuals often seem to underutilize peripheral vision, even in absence of obvious peripheral deficits. Motivated by perceptual training studies with typically sighted adults, we examined the effectiveness of perceptual training in improving peripheral perception of visually impaired youth. Here, we evaluated the effectiveness of three training regimens: (1) an action video game, (2) a psychophysical task that combined attentional tracking with a spatially and temporally unpredictable motion discrimination task, and (3) a control video game. Training with both the action video game and modified attentional tracking yielded improvements in visual performance. Training effects were generally larger in the far periphery and appear to be stable 12 months after training. These results indicate that peripheral perception might be under-utilized by visually impaired youth and that this underutilization can be improved with only ~8 hours of perceptual training. Moreover, the similarity of improvements following attentional tracking and action video-game training suggest that well-documented effects of action video-game training might be due to the sustained deployment of attention to multiple dynamic targets while concurrently requiring rapid attending and perception of unpredictable events.

  13. Dichoptic training in adults with amblyopia: Additional stereoacuity gains over monocular training.

    PubMed

    Liu, Xiang-Yun; Zhang, Jun-Yun

    2017-08-04

Dichoptic training is a recent focus of research on perceptual learning in adults with amblyopia, but whether and how dichoptic training is superior to traditional monocular training is unclear. Here we investigated whether dichoptic training could further boost visual acuity and stereoacuity in monocularly well-trained adult amblyopic participants. During dichoptic training the participants used the amblyopic eye to practice a contrast discrimination task while a band-filtered noise masker was simultaneously presented to the non-amblyopic fellow eye. Dichoptic learning was indexed by the increase of the maximal tolerable noise contrast for successful contrast discrimination in the amblyopic eye. The results showed that practice tripled the maximal tolerable noise contrast in 13 monocularly well-trained amblyopic participants. Moreover, the training further improved stereoacuity by 27% beyond the 55% gain from previous monocular training, but left visual acuity of the amblyopic eyes unchanged. Therefore our dichoptic training method may produce extra gains in stereoacuity, but not visual acuity, in adults with amblyopia after monocular training.

  14. Indexing sensory plasticity: Evidence for distinct Predictive Coding and Hebbian learning mechanisms in the cerebral cortex.

    PubMed

    Spriggs, M J; Sumner, R L; McMillan, R L; Moran, R J; Kirk, I J; Muthukumaraswamy, S D

    2018-04-30

The Roving Mismatch Negativity (MMN) and Visual LTP paradigms are widely used as independent measures of sensory plasticity. However, the paradigms are built upon fundamentally different (and seemingly opposing) models of perceptual learning; namely, Predictive Coding (MMN) and Hebbian plasticity (LTP). The aim of the current study was to compare the generative mechanisms of the MMN and visual LTP, therefore assessing whether Predictive Coding and Hebbian mechanisms co-occur in the brain. Forty participants were presented with both paradigms during EEG recording. Consistent with Predictive Coding and Hebbian predictions, Dynamic Causal Modelling revealed that the generation of the MMN modulates forward and backward connections in the underlying network, while visual LTP only modulates forward connections. These results suggest that both Predictive Coding and Hebbian mechanisms are utilized by the brain under different task demands. This therefore indicates that both tasks provide unique insight into plasticity mechanisms, which has important implications for future studies of aberrant plasticity in clinical populations.

  15. Influence of semantic consistency and perceptual features on visual attention during scene viewing in toddlers.

    PubMed

    Helo, Andrea; van Ommen, Sandrien; Pannasch, Sebastian; Danteny-Dordoigne, Lucile; Rämä, Pia

    2017-11-01

Conceptual representations of everyday scenes are built in interaction with the visual environment, and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract our attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e., saliency, centre distance, clutter and object size) and linguistic properties (i.e., object label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while they explored everyday scenes which contained either an inconsistent (e.g., soap on a breakfast table) or a consistent (e.g., soap in a bathroom) object. Perceptual features such as saliency, centre distance and clutter of the scene affected looking times in the toddler group during the whole viewing time, whereas looking times in adults were affected only by centre distance during the early viewing time. Adults looked longer at inconsistent than at consistent objects whether the objects had high or low saliency. In contrast, toddlers showed a semantic consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer at inconsistent objects, while toddlers with higher vocabulary skills looked equally long at both consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence on eye movement guidance in toddlers than in adults. Our results also indicate that language skills influence cognitive but not perceptual guidance of eye movements during scene perception in toddlers.

  16. Perceived state of self during motion can differentially modulate numerical magnitude allocation.

    PubMed

    Arshad, Q; Nigmatullina, Y; Roberts, R E; Goga, U; Pikovsky, M; Khan, S; Lobo, R; Flury, A-S; Pettorossi, V E; Cohen-Kadosh, R; Malhotra, P A; Bronstein, A M

    2016-09-01

Although a direct relationship between numerical allocation and spatial attention has been proposed, recent research suggests that these processes are not directly coupled. In keeping with this, spatial attention shifts induced via visual or vestibular motion can modulate numerical allocation in some circumstances but not in others. In addition to shifting spatial attention, visual or vestibular motion paradigms also (i) elicit compensatory eye movements, which themselves can influence numerical processing, and (ii) alter the perceptual state of 'self', inducing changes in bodily self-consciousness that impact upon cognitive mechanisms. Thus, the precise mechanism by which motion modulates numerical allocation remains unknown. We sought to investigate the influence that different perceptual experiences of motion have upon numerical magnitude allocation while controlling for both eye movements and task-related effects. We first used optokinetic visual motion stimulation (OKS) to elicit the perceptual experience of either 'visual world' or 'self' motion, during which eye movements were identical. In a second experiment, we used a vestibular protocol examining the effects of perceived and subliminal angular rotations in darkness, which also provoked identical eye movements. We observed that during the perceptual experience of 'visual world' motion, rightward OKS biased judgments towards smaller numbers, whereas leftward OKS biased judgments towards larger numbers. During the perceptual experience of 'self-motion', judgments were biased towards larger numbers irrespective of the OKS direction. In contrast, vestibular motion perception did not modulate numerical magnitude allocation, nor was there any differential modulation when comparing 'perceived' vs. 'subliminal' rotations. We provide a novel demonstration that numerical magnitude allocation can be differentially modulated by the perceptual state of self during visual but not vestibular mediated motion.

  17. Involvement of the Parietal Cortex in Perceptual Learning (Eureka Effect): An Interference Approach Using rTMS

    ERIC Educational Resources Information Center

    Giovannelli, Fabio; Silingardi, Davide; Borgheresi, Alessandra; Feurra, Matteo; Amati, Gianluca; Pizzorusso, Tommaso; Viggiano, Maria Pia; Zaccara, Gaetano; Berardi, Nicoletta; Cincotta, Massimo

    2010-01-01

    The neural mechanisms underlying perceptual learning are still under investigation. Eureka effect is a form of rapid, long-lasting perceptual learning by which a degraded image, which appears meaningless when first seen, becomes recognizable after a single exposure to its undegraded version. We used online interference by focal 10-Hz repetitive…

  18. Structural salience and the nonaccidentality of a Gestalt.

    PubMed

    Strother, Lars; Kubovy, Michael

    2012-08-01

    We perceive structure through a process of perceptual organization. Here we report a new perceptual organization phenomenon: the facilitation of visual grouping by global curvature. Observers viewed patterns that they perceived as organized into collections of curves. The patterns were perceptually ambiguous such that the perceived orientation of the patterns varied from trial to trial. When patterns were sufficiently dense and proximity was equated for the predominant perceptual alternatives, observers tended to perceive the organization with the greatest curvature. This effect is tantamount to visual grouping by maximal curvature and thus demonstrates an unprecedented effect of global structure on perceptual organization. We account for this result with a model that predicts the perceived organization of a pattern as a function of its nonaccidentality, which we define as the probability that it could have occurred by chance. Our findings demonstrate a novel relationship between the geometry of a pattern and the visual salience of global structure. © 2012 APA, all rights reserved.
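    The grouping-by-maximal-curvature finding above can be sketched as a toy model: given candidate organizations of a pattern into curves, with proximity equated, score each candidate by its summed discrete curvature (absolute turning angle along each polyline) and prefer the highest-scoring one. The scoring function and data layout here are illustrative assumptions for exposition, not the authors' nonaccidentality model.

    ```python
    import math

    def total_turning(points):
        """Sum of absolute turning angles along a polyline: a simple
        discrete proxy for the global curvature of a perceived curve."""
        total = 0.0
        for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
            a1 = math.atan2(y1 - y0, x1 - x0)
            a2 = math.atan2(y2 - y1, x2 - x1)
            d = a2 - a1
            d = (d + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
            total += abs(d)
        return total

    def preferred_grouping(groupings):
        """Given candidate groupings (each a list of curves, each curve a
        list of (x, y) points) with proximity equated, return the index of
        the grouping with the greatest summed curvature."""
        return max(range(len(groupings)),
                   key=lambda i: sum(total_turning(c) for c in groupings[i]))
    ```

    On this toy scoring, a grouping whose curves bend more is preferred over one whose curves are straight, mirroring the reported bias toward the maximally curved organization.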

  19. Reward modulates the effect of visual cortical microstimulation on perceptual decisions

    PubMed Central

    Cicmil, Nela; Cumming, Bruce G; Parker, Andrew J; Krug, Kristine

    2015-01-01

    Effective perceptual decisions rely upon combining sensory information with knowledge of the rewards available for different choices. However, it is not known where reward signals interact with the multiple stages of the perceptual decision-making pathway and by what mechanisms this may occur. We combined electrical microstimulation of functionally specific groups of neurons in visual area V5/MT with performance-contingent reward manipulation, while monkeys performed a visual discrimination task. Microstimulation was less effective in shifting perceptual choices towards the stimulus preferences of the stimulated neurons when available reward was larger. Psychophysical control experiments showed this result was not explained by a selective change in response strategy on microstimulated trials. A bounded accumulation decision model, applied to analyse behavioural performance, revealed that the interaction of expected reward with microstimulation can be explained if expected reward modulates a sensory representation stage of perceptual decision-making, in addition to the better-known effects at the integration stage. DOI: http://dx.doi.org/10.7554/eLife.07832.001 PMID:26402458
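    The bounded accumulation account can be illustrated with a minimal simulation. Here we assume, purely for exposition, that expected reward scales the gain on the visual stimulus signal at the sensory stage while microstimulation injects a fixed evidence offset; all parameter names and values are hypothetical, not taken from the paper's fitted model.

    ```python
    import random

    def simulate_trial(coherence, stim_offset=0.0, reward_gain=1.0,
                       bound=1.0, noise_sd=0.3, max_steps=1000):
        """One bounded-accumulation trial: accumulate noisy evidence until
        it hits the +bound (preferred choice, +1) or -bound (-1).
        reward_gain scales only the stimulus-driven signal, so a fixed
        microstimulation offset becomes relatively weaker as reward grows."""
        evidence = 0.0
        for _ in range(max_steps):
            evidence += (reward_gain * coherence + stim_offset
                         + random.gauss(0.0, noise_sd))
            if evidence >= bound:
                return +1
            if evidence <= -bound:
                return -1
        return +1 if evidence > 0 else -1

    def choice_rate(n_trials=2000, **kwargs):
        """Proportion of trials ending in the preferred choice."""
        random.seed(1)  # paired noise across conditions
        return sum(simulate_trial(**kwargs) == +1 for _ in range(n_trials)) / n_trials
    ```

    Comparing the microstimulation-induced shift in choice rate at low versus high reward gain reproduces the qualitative result: the same injected offset moves choices less when the reward-scaled sensory signal is stronger.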

  20. Can human amblyopia be treated in adulthood?

    PubMed

    Astle, Andrew T; McGraw, Paul V; Webb, Ben S

    2011-09-01

    Amblyopia is a common visual disorder that results in a spatial acuity deficit in the affected eye. Orthodox treatment is to occlude the unaffected eye for lengthy periods, largely determined by the severity of the visual deficit at diagnosis. Although this treatment is not without its problems (poor compliance, potential to reduce binocular function, etc.), it is effective in many children with moderate to severe amblyopia. Diagnosis and initiation of treatment early in life are thought to be critical to the success of this form of therapy. Occlusion is rarely undertaken in older children (more than 10 years old) as the visual benefits are considered to be marginal. Therefore, in subjects where occlusion is not effective or those missed by mass screening programs, there is no alternative therapy available later in life. More recently, burgeoning evidence has begun to reveal previously unrecognized levels of residual neural plasticity in the adult brain, and scientists have developed new genetic, pharmacological, and behavioral interventions to activate these latent mechanisms in order to harness their potential for visual recovery. Prominent amongst these is the concept of perceptual learning: the fact that repeatedly practicing a challenging visual task leads to substantial and enduring improvements in visual performance over time. In the normal visual system the improvements are highly specific to the attributes of the trained stimulus. However, in the amblyopic visual system, learned improvements have been shown to generalize to novel tasks. In this paper we ask whether amblyopic deficits can be reduced in adulthood and explore the pattern of transfer of learned improvements. We also show that developing training protocols that target the deficit in stereo acuity allows the recovery of normal stereo function even in adulthood. This information will help guide further development of learning-based interventions in this clinical group.
