Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue
2009-06-15
Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.
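Thresholds in two-interval forced-choice tasks like those above are conventionally estimated by fitting a psychometric function to proportion-correct data and reading off the signal strength at a criterion performance level. The sketch below illustrates the idea with a logistic function and a 75%-correct criterion; the functional form, parameter values, and names are illustrative assumptions, not taken from the study.

```python
import math

def logistic_2afc(x, threshold, slope):
    """Proportion correct in a 2AFC task: ranges from 0.5 (chance) to 1.0."""
    return 0.5 + 0.5 / (1.0 + math.exp(-slope * (x - threshold)))

def threshold_at(p_correct, threshold, slope):
    """Invert the psychometric function to find the stimulus level
    (e.g. percent emotional signal) giving a target proportion correct."""
    # Solve 0.5 + 0.5 / (1 + exp(-slope * (x - threshold))) = p for x
    q = 0.5 / (p_correct - 0.5) - 1.0
    return threshold - math.log(q) / slope

# By construction, the logistic reaches 75% correct exactly at its
# nominal threshold parameter (hypothetical values, for illustration).
print(threshold_at(0.75, threshold=20.0, slope=0.3))  # → 20.0
```

A higher fitted threshold means more emotional signal is needed to reach criterion performance, which is the sense in which patients "require greater emotional signal strength."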
Visual discrimination training improves Humphrey perimetry in chronic cortically induced blindness.
Cavanaugh, Matthew R; Huxlin, Krystel R
2017-05-09
To assess if visual discrimination training improves performance on visual perimetry tests in chronic stroke patients with visual cortex involvement. 24-2 and 10-2 Humphrey visual fields were analyzed for 17 chronic cortically blind stroke patients prior to and following visual discrimination training, as well as in 5 untrained, cortically blind controls. Trained patients practiced direction discrimination, orientation discrimination, or both, at nonoverlapping, blind field locations. All pretraining and posttraining discrimination performance and Humphrey fields were collected with online eye tracking, ensuring gaze-contingent stimulus presentation. Trained patients recovered ∼108 deg² of vision on average, while untrained patients spontaneously improved over an area of ∼16 deg². Improvement was not affected by patient age, time since lesion, size of initial deficit, or training type, but was proportional to the amount of training performed. In untrained patients, these gains were offset by worsening of sensitivity over ∼9 deg² of the visual field; worsening was minimal in trained patients. Finally, although discrimination performance improved at all trained locations, changes in Humphrey sensitivity occurred both within trained regions and beyond, extending over a larger area along the blind field border. In adults with chronic cortical visual impairment, the blind field border appears to have enhanced plastic potential, which can be recruited by gaze-controlled visual discrimination training to expand the visible field. Our findings underscore a critical need for future studies to measure the effects of vision restoration approaches on perimetry in larger cohorts of patients.
Petruno, Sarah K; Clark, Robert E; Reinagel, Pamela
2013-01-01
The pigmented Long-Evans rat has proven to be an excellent subject for studying visually guided behavior including quantitative visual psychophysics. This observation, together with its experimental accessibility and its close homology to the mouse, has made it an attractive model system in which to dissect the thalamic and cortical circuits underlying visual perception. Given that visually guided behavior in the absence of primary visual cortex has been described in the literature, however, it is an empirical question whether specific visual behaviors will depend on primary visual cortex in the rat. Here we tested the effects of cortical lesions on performance of two-alternative forced-choice visual discriminations by Long-Evans rats. We present data from one highly informative subject that learned several visual tasks and then received a bilateral lesion ablating >90% of primary visual cortex. After the lesion, this subject had a profound and persistent deficit in complex image discrimination, orientation discrimination, and full-field optic flow motion discrimination, compared with both pre-lesion performance and sham-lesion controls. Performance was intact, however, on another visual two-alternative forced-choice task that required approaching a salient visual target. A second highly informative subject learned several visual tasks prior to receiving a lesion ablating >90% of medial extrastriate cortex. This subject showed no impairment on any of the four task categories. Taken together, our data provide evidence that these image, orientation, and motion discrimination tasks require primary visual cortex in the Long-Evans rat, whereas approaching a salient visual target does not.
Using Prosopagnosia to Test and Modify Visual Recognition Theory.
O'Brien, Alexander M
2018-02-01
Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) that people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions, i.e., categorical object differences, in visual discrimination tasks, whereas control participants could additionally use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli in eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.
Renfroe, Jenna B; Turner, Travis H; Hinson, Vanessa K
2017-02-01
The Judgment of Line Orientation (JOLO) test is widely used in assessing visuospatial deficits in Parkinson's disease (PD). The Neuropsychological Assessment Battery (NAB) offers the Visual Discrimination test, with age and education correction, parallel forms, and a co-normed standardization sample for comparisons within and between domains. However, NAB Visual Discrimination has not been validated in PD, and may not measure the same construct as JOLO. A heterogeneous sample of 47 PD patients completed the JOLO and NAB Visual Discrimination within a broader neuropsychological evaluation. Pearson correlations assessed relationships between JOLO and NAB Visual Discrimination performances. Raw and demographically corrected scores from JOLO and Visual Discrimination were only weakly correlated. The NAB Visual Discrimination subtest was moderately correlated with overall cognitive functioning, whereas the JOLO was not. Despite apparent virtues, the results do not support NAB Visual Discrimination as an alternative to JOLO in assessing visuospatial functioning in PD.
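A Pearson product-moment correlation of the kind used above to compare JOLO and NAB Visual Discrimination scores can be computed directly from paired scores. The sketch below uses fabricated score pairs purely for illustration; none of the values are from the study.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated score pairs illustrating a weak correlation
jolo = [22, 18, 25, 20, 27, 15, 24, 19]
nab = [48, 52, 50, 45, 55, 47, 44, 53]
print(round(pearson_r(jolo, nab), 2))  # → 0.21
```

A value this close to zero is what "only weakly correlated" means in practice: the two tests rank patients quite differently, which is the paper's argument against treating them as interchangeable.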
Performance, physiological, and oculometer evaluation of VTOL landing displays
NASA Technical Reports Server (NTRS)
North, R. A.; Stackhouse, S. P.; Graffunder, K.
1979-01-01
A methodological approach to measuring workload was investigated for evaluation of new concepts in VTOL aircraft displays. Physiological, visual response, and conventional flight performance measures were recorded for landing approaches performed in the NASA Visual Motion Simulator (VMS). Three displays (two computer graphic and a conventional flight director), three crosswind amplitudes, and two motion base conditions (fixed vs. moving base) were tested in a factorial design. Multivariate discriminant functions were formed from flight performance and/or visual response variables. The flight performance variable discriminant showed maximum differentiation between crosswind conditions. The visual response measure discriminant maximized differences between fixed vs. motion base conditions and experimental displays. Physiological variables were used to attempt to predict the discriminant function values for each subject/condition trial. The weights of the physiological variables in these equations showed agreement with previous studies. High muscle tension, light but irregular breathing patterns, and higher heart rate with low amplitude all produced higher scores on this scale and thus represent higher workload levels.
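The multivariate discriminant functions described above combine several measures into a single score that maximally separates experimental conditions. A minimal two-class Fisher linear discriminant in two dimensions, on invented toy data (the variable names and values are assumptions, not the study's measures), might look like:

```python
def mean_vec(rows):
    """Column-wise mean of a list of equal-length tuples."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def fisher_lda_2d(class_a, class_b):
    """Two-class Fisher linear discriminant in 2D:
    w = Sw^-1 (mean_b - mean_a), with Sw the pooled within-class scatter."""
    ma, mb = mean_vec(class_a), mean_vec(class_b)
    # Pooled within-class scatter matrix (2x2)
    s = [[0.0, 0.0], [0.0, 0.0]]
    for rows, m in ((class_a, ma), (class_b, mb)):
        for r in rows:
            d = [r[0] - m[0], r[1] - m[1]]
            s[0][0] += d[0] * d[0]
            s[0][1] += d[0] * d[1]
            s[1][0] += d[1] * d[0]
            s[1][1] += d[1] * d[1]
    # Invert the 2x2 scatter matrix and project the mean difference
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    diff = [mb[0] - ma[0], mb[1] - ma[1]]
    return [inv[0][0] * diff[0] + inv[0][1] * diff[1],
            inv[1][0] * diff[0] + inv[1][1] * diff[1]]

def score(w, x):
    """Project one observation onto the discriminant axis."""
    return w[0] * x[0] + w[1] * x[1]

# Hypothetical trials: (tracking error, fixation rate) under two conditions
fixed_base = [(1.0, 2.0), (1.2, 1.8), (0.9, 2.2)]
moving_base = [(2.0, 3.0), (2.2, 3.1), (1.9, 2.8)]
w = fisher_lda_2d(fixed_base, moving_base)
# Scores along w separate the two conditions, as the study's
# visual-response discriminant separated fixed vs. moving base.
```

Once fitted, the discriminant scores themselves become a dependent variable, which is how the study could then ask whether physiological measures predict them.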
Kodak, Tiffany; Clements, Andrea; Paden, Amber R; LeBlanc, Brittany; Mintz, Joslyn; Toussaint, Karen A
2015-01-01
The current investigation evaluated repertoires that may be related to performance on auditory-to-visual conditional discrimination training with 9 students who had been diagnosed with autism spectrum disorder. The skills included in the assessment were matching, imitation, scanning, an auditory discrimination, and a visual discrimination. The results of the skills assessment showed that 4 participants failed to demonstrate mastery of at least 1 of the skills. We compared the outcomes of the assessment to the results of auditory-visual conditional discrimination training and found that training outcomes were related to the assessment outcomes for 7 of the 9 participants. One participant who did not demonstrate mastery of all assessment skills subsequently learned several conditional discriminations when blocked training trials were conducted. Another participant who did not demonstrate mastery of the auditory discrimination skill subsequently acquired conditional discriminations in 1 of the training conditions. We discuss the implications of the assessment for practice and suggest additional areas of research on this topic.
Improved Discrimination of Visual Stimuli Following Repetitive Transcranial Magnetic Stimulation
Waterston, Michael L.; Pack, Christopher C.
2010-01-01
Background: Repetitive transcranial magnetic stimulation (rTMS) at certain frequencies increases thresholds for motor-evoked potentials and phosphenes following stimulation of cortex. Consequently, rTMS is often assumed to introduce a "virtual lesion" in stimulated brain regions, with correspondingly diminished behavioral performance. Methodology/Principal Findings: Here we investigated the effects of rTMS to visual cortex on subjects' ability to perform visual psychophysical tasks. Contrary to expectations of a visual deficit, we find that rTMS often improves the discrimination of visual features. For coarse orientation tasks, discrimination of a static stimulus improved consistently following theta-burst stimulation of the occipital lobe. Using a reaction-time task, we found that these improvements occurred throughout the visual field and lasted beyond one hour post-rTMS. Low-frequency (1 Hz) stimulation yielded similar improvements. In contrast, we did not find consistent effects of rTMS on performance in a fine orientation discrimination task. Conclusions/Significance: Overall our results suggest that rTMS generally improves or has no effect on visual acuity, with the nature of the effect depending on the type of stimulation and the task. We interpret our results in the context of an ideal-observer model of visual perception. PMID:20442776
Visual discrimination transfer and modulation by biogenic amines in honeybees.
Vieira, Amanda Rodrigues; Salles, Nayara; Borges, Marco; Mota, Theo
2018-05-10
For more than a century, visual learning and memory have been studied in the honeybee Apis mellifera using operant appetitive conditioning. Although honeybees show impressive visual learning capacities in this well-established protocol, operant training of free-flying animals cannot be combined with invasive protocols for studying the neurobiological basis of visual learning. In view of this, different attempts have been made to develop new classical conditioning protocols for studying visual learning in harnessed honeybees, though learning performance remains considerably poorer than that for free-flying animals. Here, we investigated the ability of honeybees to use visual information acquired during classical conditioning in a new operant context. We performed differential visual conditioning of the proboscis extension reflex (PER) followed by visual orientation tests in a Y-maze. Classical conditioning and Y-maze retention tests were performed using the same pair of perceptually isoluminant chromatic stimuli, to avoid the influence of phototaxis during free-flying orientation. Visual discrimination transfer was clearly observed, with pre-trained honeybees significantly orienting their flights towards the former positive conditioned stimulus (CS+), thus showing that visual memories acquired by honeybees are resistant to context changes between conditioning and the retention test. We combined this visual discrimination approach with selective pharmacological injections to evaluate the effect of dopamine and octopamine in appetitive visual learning. Both octopaminergic and dopaminergic antagonists impaired visual discrimination performance, suggesting that both these biogenic amines modulate appetitive visual learning in honeybees. Our study brings new insight into cognitive and neurobiological mechanisms underlying visual learning in honeybees.
Dynamic and predictive links between touch and vision.
Gray, Rob; Tan, Hong Z
2002-07-01
We investigated crossmodal links between vision and touch for moving objects. In experiment 1, observers discriminated visual targets presented randomly at one of five locations on their forearm. Tactile pulses simulating motion along the forearm preceded visual targets. At short tactile-visual ISIs, discriminations were more rapid when the final tactile pulse and visual target were at the same location. At longer ISIs, discriminations were more rapid when the visual target was offset in the motion direction and were slower for offsets opposite to the motion direction. In experiment 2, speeded tactile discriminations at one of three random locations on the forearm were preceded by a visually simulated approaching object. Discriminations were more rapid when the object approached the location of the tactile stimulation and discrimination performance was dependent on the approaching object's time to contact. These results demonstrate dynamic links in the spatial mapping between vision and touch.
Vermaercke, Ben; Van den Bergh, Gert; Gerich, Florian; Op de Beeck, Hans
2015-01-01
Recent studies have revealed a surprising degree of functional specialization in rodent visual cortex. It is unknown to what degree this functional organization is related to the well-known hierarchical organization of the visual system in primates. We designed a study in rats that targets one of the hallmarks of the hierarchical object vision pathway in primates: selectivity for behaviorally relevant dimensions. We compared behavioral performance in a visual water maze with neural discriminability in five visual cortical areas. We tested behavioral discrimination in two independent batches of six rats using six pairs of shapes used previously to probe shape selectivity in monkey cortex (Lehky and Sereno, 2007). The relative difficulty (error rate) of shape pairs was strongly correlated between the two batches, indicating that some shape pairs were more difficult to discriminate than others. Then, we recorded in naive rats from five visual areas, from primary visual cortex (V1) through areas LM, LI, and LL, up to lateral occipito-temporal cortex (TO). Shape selectivity in the upper layers of V1, where the information enters cortex, correlated mostly with physical stimulus dissimilarity and not with behavioral performance. In contrast, neural discriminability in lower layers of all areas was strongly correlated with behavioral performance. These findings, in combination with the results from Vermaercke et al. (2014b), suggest that the functional specialization in rodent lateral visual cortex reflects a processing hierarchy resulting in the emergence of complex selectivity that is related to behaviorally relevant stimulus differences.
Nawroth, Christian; Prentice, Pamela M; McElligott, Alan G
2017-01-01
Variation in common personality traits, such as boldness or exploration, is often associated with risk-reward trade-offs and behavioural flexibility. To date, only a few studies have examined the effects of consistent behavioural traits on both learning and cognition. We investigated whether certain personality traits ('exploration' and 'sociability') of individuals were related to cognitive performance, learning flexibility and learning style in a social ungulate species, the goat (Capra hircus). We also investigated whether a preference for feature cues, rather than impaired learning abilities, can explain performance variation in a visual discrimination task. We found that personality scores were consistent across time and context. Less explorative goats performed better in a non-associative cognitive task, in which subjects had to follow the trajectory of a hidden object (i.e. testing their ability for object permanence). We also found that less sociable subjects performed better compared to more sociable goats in a visual discrimination task. Good visual learning performance was associated with a preference for feature cues, indicating personality-dependent learning strategies in goats. Our results suggest that personality traits predict the outcome in visual discrimination and non-associative cognitive tasks in goats and that impaired performance in a visual discrimination task does not necessarily imply impaired learning capacities, but rather can be explained by a varying preference for feature cues.
Kibby, Michelle Y.; Dyer, Sarah M.; Vadnais, Sarah A.; Jagger, Audreyana C.; Casher, Gabriel A.; Stacy, Maria
2015-01-01
Whether visual processing deficits are common in reading disorders (RD), and related to reading ability in general, has been debated for decades. The type of visual processing affected also is debated, although visual discrimination and short-term memory (STM) may be more commonly related to reading ability. Reading disorders are frequently comorbid with ADHD, and children with ADHD often have subclinical reading problems. Hence, children with ADHD were used as a comparison group in this study. ADHD and RD may be dissociated in terms of visual processing. Whereas RD may be associated with deficits in visual discrimination and STM for order, ADHD is associated with deficits in visual-spatial processing. Thus, we hypothesized that children with RD would perform worse than controls and children with ADHD only on a measure of visual discrimination and a measure of visual STM that requires memory for order. We expected all groups would perform comparably on the measure of visual STM that does not require sequential processing. We found children with RD or ADHD were commensurate to controls on measures of visual discrimination and visual STM that do not require sequential processing. In contrast, both RD groups (RD, RD/ADHD) performed worse than controls on the measure of visual STM that requires memory for order, and children with comorbid RD/ADHD performed worse than those with ADHD. In addition, of the three visual measures, only sequential visual STM predicted reading ability. Hence, our findings suggest there is a deficit in visual sequential STM that is specific to RD and is related to basic reading ability. The source of this deficit is worthy of further research, but it may include both reduced memory for order and poorer verbal mediation. PMID:26579020
Meng, Xiangzhi; Lin, Ou; Wang, Fang; Jiang, Yuzheng; Song, Yan
2014-01-01
Background: High order cognitive processing and learning, such as reading, interact with lower-level sensory processing and learning. Previous studies have reported that visual perceptual training enlarges visual span and, consequently, improves reading speed in young and old people with amblyopia. Recently, a visual perceptual training study in Chinese-speaking children with dyslexia found that the visual texture discrimination thresholds of these children in visual perceptual training significantly correlated with their performance in Chinese character recognition, suggesting that deficits in visual perceptual processing/learning might partly underpin the difficulty in reading Chinese. Methodology/Principal Findings: To further clarify whether visual perceptual training improves the measures of reading performance, eighteen children with dyslexia and eighteen typically developed readers that were age- and IQ-matched completed a series of reading measures before and after visual texture discrimination task (TDT) training. Prior to the TDT training, each group of children was split into two equivalent training and non-training groups in terms of all reading measures, IQ, and TDT. The results revealed that the discrimination threshold SOAs of TDT were significantly higher for the children with dyslexia than for the control children before training. Interestingly, training significantly decreased the discrimination threshold SOAs of TDT for both the typically developed readers and the children with dyslexia. More importantly, the training group with dyslexia exhibited significant enhancement in reading fluency, while the non-training group with dyslexia did not show this improvement. Additional follow-up tests showed that the improvement in reading fluency is a long-lasting effect and could be maintained for up to two months in the training group with dyslexia. Conclusion/Significance: These results suggest that basic visual perceptual processing/learning and reading ability in Chinese might at least partially rely on overlapping mechanisms. PMID:25247602
ERIC Educational Resources Information Center
Vause, Tricia; Martin, Garry L.; Yu, C.T.; Marion, Carole; Sakko, Gina
2005-01-01
The relationship between language, performance on the Assessment of Basic Learning Abilities (ABLA) test, and stimulus equivalence was examined. Five participants with minimal verbal repertoires were studied: 3 who passed up to ABLA Level 4, a visual quasi-identity discrimination, and 2 who passed ABLA Level 6, an auditory-visual nonidentity…
Color discrimination performance in patients with Alzheimer's disease.
Salamone, Giovanna; Di Lorenzo, Concetta; Mosti, Serena; Lupo, Federica; Cravello, Luca; Palmer, Katie; Musicco, Massimo; Caltagirone, Carlo
2009-01-01
Visual deficits are frequent in Alzheimer's disease (AD), yet little is known about the nature of these disturbances. The aim of the present study was to investigate color discrimination in patients with AD to determine whether impairment of this visual function is a cognitive or perceptive/sensory disturbance. A cross-sectional clinical study was conducted in a specialized dementia unit on 20 patients with mild/moderate AD and 21 age-matched normal controls. Color discrimination was measured by the Farnsworth-Munsell 100 hue test. Cognitive functioning was measured with the Mini-Mental State Examination (MMSE) and a comprehensive battery of neuropsychological tests. The scores obtained on the color discrimination test were compared between AD patients and controls adjusting for global and domain-specific cognitive performance. Color discrimination performance was inversely related to MMSE score. AD patients had a higher number of errors in color discrimination than controls (mean ± SD total error score: 442.4 ± 84.5 vs. 304.1 ± 45.9). This trend persisted even after adjustment for MMSE score and cognitive performance on specific cognitive domains. A specific reduction of color discrimination capacity is present in AD patients. This deficit does not solely depend upon cognitive impairment, and involvement of the primary visual cortex and/or retinal ganglion cells may be contributory.
Visual Discrimination and Motor Reproduction of Movement by Individuals with Mental Retardation.
ERIC Educational Resources Information Center
Shinkfield, Alison J.; Sparrow, W. A.; Day, R. H.
1997-01-01
Visual discrimination and motor reproduction tasks involving computer-simulated arm movements were administered to 12 adults with mental retardation and a gender-matched control group. The purpose was to examine whether inadequacies in visual perception account for the poorer motor performance of this population. Results indicate both perceptual…
Fengler, Ineke; Nava, Elena; Röder, Brigitte
2015-01-01
Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at 4 week follow-up) on an audio–visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio–visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination may emerge as a consequence of the exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks unrelated to the later tested abilities (i.e., tactile tasks). The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination and in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, which appears to persist over longer durations. PMID:25954166
Thomson, Eric E.; Zea, Ivan; França, Wendy
2017-01-01
Adult rats equipped with a sensory prosthesis, which transduced infrared (IR) signals into electrical signals delivered to somatosensory cortex (S1), took approximately 4 d to learn a four-choice IR discrimination task. Here, we show that when such IR signals are projected to the primary visual cortex (V1), rats that are pretrained in a visual-discrimination task typically learn the same IR discrimination task on their first day of training. However, without prior training on a visual discrimination task, the learning rates for S1- and V1-implanted animals converged, suggesting there is no intrinsic difference in learning rate between the two areas. We also discovered that animals were able to integrate IR information into the ongoing visual processing stream in V1, performing a visual-IR integration task in which they had to combine IR and visual information. Furthermore, when the IR prosthesis was implanted in S1, rats showed no impairment in their ability to use their whiskers to perform a tactile discrimination task. Instead, in some rats, this ability was actually enhanced. Cumulatively, these findings suggest that cortical sensory neuroprostheses can rapidly augment the representational scope of primary sensory areas, integrating novel sources of information into ongoing processing while incurring minimal loss of native function. PMID:29279860
Visual body recognition in a prosopagnosic patient.
Moro, V; Pernigo, S; Avesani, R; Bulgarelli, C; Urgesi, C; Candidi, M; Aglioti, S M
2012-01-01
Conspicuous deficits in face recognition characterize prosopagnosia. Information on whether agnosic deficits may extend to non-facial body parts is lacking. Here we report the neuropsychological description of FM, a patient affected by a complete deficit in face recognition in the presence of mild clinical signs of visual object agnosia. His deficit involves both overt and covert recognition of faces (i.e. recognition of familiar faces, but also categorization of faces for gender or age) as well as the visual mental imagery of faces. By means of a series of matching-to-sample tasks we investigated: (i) a possible association between prosopagnosia and disorders in visual body perception; (ii) the effect of the emotional content of stimuli on the visual discrimination of faces, bodies and objects; (iii) the existence of a dissociation between identity recognition and the emotional discrimination of faces and bodies. Our results document, for the first time, the co-occurrence of body agnosia, i.e. the visual inability to discriminate body forms and body actions, and prosopagnosia. Moreover, the results show better performance in the discrimination of emotional face and body expressions with respect to body identity and neutral actions. Since FM's lesions involve bilateral fusiform areas, it is unlikely that the amygdala-temporal projections explain the relative sparing of emotion discrimination performance. Indeed, the emotional content of the stimuli did not improve the discrimination of their identity. The results hint at the existence of two segregated brain networks involved in identity and emotional discrimination that are at least partially shared by face and body processing.
Effects of Hand Proximity and Movement Direction in Spatial and Temporal Gap Discrimination.
Wiemers, Michael; Fischer, Martin H
2016-01-01
Previous research on the interplay between static manual postures and visual attention revealed enhanced visual selection near the hands (near-hand effect). During active movements there is also superior visual performance when moving toward compared to away from the stimulus (direction effect). The "modulated visual pathways" hypothesis argues that differential involvement of magno- and parvocellular visual processing streams causes the near-hand effect. The key finding supporting this hypothesis is an increase in temporal and a reduction in spatial processing in near-hand space (Gozli et al., 2012). Since this hypothesis has, so far, only been tested with static hand postures, we provide a conceptual replication of Gozli et al.'s (2012) result with moving hands, thus also probing the generality of the direction effect. Participants performed temporal or spatial gap discriminations while their right hand was moving below the display. In contrast to Gozli et al. (2012), temporal gap discrimination was superior at intermediate and not near hand proximity. In spatial gap discrimination, a direction effect without hand proximity effect suggests that pragmatic attentional maps overshadowed temporal/spatial processing biases for far/near-hand space.
Visual discrimination predicts naming and semantic association accuracy in Alzheimer disease.
Harnish, Stacy M; Neils-Strunjas, Jean; Eliassen, James; Reilly, Jamie; Meinzer, Marcus; Clark, John Greer; Joseph, Jane
2010-12-01
Language impairment is a common symptom of Alzheimer disease (AD), and is thought to be related to semantic processing. This study examines the contribution of another process, namely visual perception, on measures of confrontation naming and semantic association abilities in persons with probable AD. Twenty individuals with probable mild-moderate Alzheimer disease and 20 age-matched controls completed a battery of neuropsychologic measures assessing visual perception, naming, and semantic association ability. Visual discrimination tasks that varied in the degree to which they likely accessed stored structural representations were used to gauge whether structural processing deficits could account for deficits in naming and in semantic association in AD. Visual discrimination abilities of nameable objects in AD strongly predicted performance on both picture naming and semantic association ability, but lacked the same predictive value for controls. Although impaired, performance on visual discrimination tests of abstract shapes and novel faces showed no significant relationship with picture naming and semantic association. These results provide additional evidence to support that structural processing deficits exist in AD, and may contribute to object recognition and naming deficits. Our findings suggest that there is a common deficit in discrimination of pictures using nameable objects, picture naming, and semantic association of pictures in AD. Disturbances in structural processing of pictured items may be associated with lexical-semantic impairment in AD, owing to degraded internal storage of structural knowledge.
Scully, Erin N; Acerbo, Martin J; Lazareva, Olga F
2014-01-01
Earlier, we reported that nucleus rotundus (Rt) together with its inhibitory complex, nucleus subpretectalis/interstitio-pretecto-subpretectalis (SP/IPS), had significantly higher activity in pigeons performing figure-ground discrimination than in the control group that did not perform any visual discriminations. In contrast, color discrimination produced significantly higher activity than control in the Rt but not in the SP/IPS. Finally, shape discrimination produced significantly lower activity than control in both the Rt and the SP/IPS. In this study, we trained pigeons to simultaneously perform three visual discriminations (figure-ground, color, and shape) using the same stimulus displays. When birds learned to perform all three tasks concurrently at high levels of accuracy, we conducted bilateral chemical lesions of the SP/IPS. After a period of recovery, the birds were retrained on the same tasks to evaluate the effect of lesions on maintenance of these discriminations. We found that the lesions of the SP/IPS had no effect on color or shape discrimination and that they significantly impaired figure-ground discrimination. Together with our earlier data, these results suggest that the nucleus Rt and the SP/IPS are the key structures involved in figure-ground discrimination. These results also imply that thalamic processing is critical for figure-ground segregation in avian brain.
Multi-class ERP-based BCI data analysis using a discriminant space self-organizing map.
Onishi, Akinari; Natsume, Kiyohisa
2014-01-01
Emotional and non-emotional image stimuli have recently been applied to event-related potential (ERP)-based brain computer interfaces (BCI). Although single-trial classification performance exceeds 80%, discrimination between those ERPs has not been considered. In this research we tried to clarify the discriminability of four-class ERP-based BCI target data elicited by desk, seal, and spider images and letter intensifications. A conventional self-organizing map (SOM) and a newly proposed discriminant space SOM (ds-SOM) were applied, and the discriminabilities were visualized. We also classified all pairs of those ERPs by stepwise linear discriminant analysis (SWLDA) to verify the visualization of discriminabilities. As a result, the ds-SOM showed understandable visualization of the data with a shorter computational time than the traditional SOM. We also confirmed a clear boundary between the letter cluster and the other clusters. The result was coherent with the classification performance of SWLDA. The method might be helpful not only for developing new BCI paradigms, but also for big data analysis.
Sadato, Norihiro; Okada, Tomohisa; Kubota, Kiyokazu; Yonekura, Yoshiharu
2004-04-08
The occipital cortex of blind subjects is known to be activated during tactile discrimination tasks such as Braille reading. To investigate whether this is due to long-term learning of Braille or to sensory deafferentation, we used fMRI to study tactile discrimination tasks in subjects who had recently lost their sight and never learned Braille. The occipital cortex of the blind subjects without Braille training was activated during the tactile discrimination task, whereas that of control sighted subjects was not. This finding suggests that the activation of the visual cortex of the blind during performance of a tactile discrimination task may be due to sensory deafferentation, wherein a competitive imbalance favors the tactile over the visual modality.
Scopolamine effects on visual discrimination: modifications related to stimulus control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, H.L.
1975-01-01
Stumptail monkeys (Macaca arctoides) performed a discrete-trial, three-choice visual discrimination. The discrimination behavior was controlled by the shape of the visual stimuli. Strength of the stimuli in controlling behavior was systematically related to a physical property of the stimuli, luminance. Low luminance provided weak control, resulting in a low accuracy of discrimination, a low response probability, and maximal sensitivity to scopolamine (7.5-60 μg/kg). In contrast, high luminance provided strong control of behavior and attenuated the effects of scopolamine. Methylscopolamine had no effect in doses of 30 to 90 μg/kg. Scopolamine effects resembled the effects of reducing stimulus control in undrugged monkeys. Since behavior under weak control seems to be especially sensitive to drugs, manipulations of stimulus control may be particularly useful whenever determination of the minimally effective dose is important, as in behavioral toxicology. Present results are interpreted as specific visual effects of the drug, since nonsensory factors such as baseline response rate, reinforcement schedule, training history, motor performance, and motivation were controlled. Implications for state-dependent effects of drugs are discussed.
Visual speech discrimination and identification of natural and synthetic consonant stimuli
Files, Benjamin T.; Tjan, Bosco S.; Jiang, Jintao; Bernstein, Lynne E.
2015-01-01
From phonetic features to connected discourse, every level of psycholinguistic structure including prosody can be perceived through viewing the talking face. Yet a longstanding notion in the literature is that visual speech perceptual categories comprise groups of phonemes (referred to as visemes), such as /p, b, m/ and /f, v/, whose internal structure is not informative to the visual speech perceiver. This conclusion has not to our knowledge been evaluated using a psychophysical discrimination paradigm. We hypothesized that perceivers can discriminate the phonemes within typical viseme groups, and that discrimination measured with d-prime (d’) and response latency is related to visual stimulus dissimilarities between consonant segments. In Experiment 1, participants performed speeded discrimination for pairs of consonant-vowel spoken nonsense syllables that were predicted to be same, near, or far in their perceptual distances, and that were presented as natural or synthesized video. Near pairs were within-viseme consonants. Natural within-viseme stimulus pairs were discriminated significantly above chance (except for /k/-/h/). Sensitivity (d’) increased and response times decreased with distance. Discrimination and identification were superior with natural stimuli, which comprised more phonetic information. We suggest that the notion of the viseme as a unitary perceptual category is incorrect. Experiment 2 probed the perceptual basis for visual speech discrimination by inverting the stimuli. Overall reductions in d’ with inverted stimuli but a persistent pattern of larger d’ for far than for near stimulus pairs are interpreted as evidence that visual speech is represented by both its motion and configural attributes. 
The methods and results of this investigation open up avenues for understanding the neural and perceptual bases for visual and audiovisual speech perception and for development of practical applications such as visual lipreading/speechreading speech synthesis. PMID:26217249
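Several of the records above report sensitivity as d′ from signal detection theory. As a minimal illustrative sketch (not code from any of the studies; the function name and the log-linear correction are assumptions), d′ can be computed from trial counts as z(hit rate) − z(false-alarm rate):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction (add 0.5 per cell) keeps the z-scores
    finite when a rate would otherwise be exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# 45 hits / 5 misses vs. 10 false alarms / 40 correct rejections
print(d_prime(45, 5, 10, 40))
```

A d′ of 0 corresponds to chance discrimination; larger values indicate better separation of the two stimulus classes independent of response bias.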
Evaluation of a pilot workload metric for simulated VTOL landing tasks
NASA Technical Reports Server (NTRS)
North, R. A.; Graffunder, K.
1979-01-01
A methodological approach to measuring workload was investigated for evaluation of new concepts in VTOL aircraft displays. Multivariate discriminant functions were formed from conventional flight performance and/or visual response variables to maximize detection of experimental differences. The flight performance variable discriminant showed maximum differentiation between crosswind conditions. The visual response measure discriminant maximized differences between fixed- vs. motion-base conditions and experimental displays. Physiological variables were used to attempt to predict the discriminant function values for each subject/condition/trial. The weights of the physiological variables in these equations showed agreement with previous studies. High muscle tension, light but irregular breathing patterns, and higher heart rate with low amplitude all produced higher scores on this scale and thus represented higher workload levels.
Short-term visual deprivation, tactile acuity, and haptic solid shape discrimination.
Crabtree, Charles E; Norman, J Farley
2014-01-01
Previous psychophysical studies have reported conflicting results concerning the effects of short-term visual deprivation upon tactile acuity. Some studies have found that 45 to 90 minutes of total light deprivation produce significant improvements in participants' tactile acuity as measured with a grating orientation discrimination task. In contrast, a single 2011 study found no such improvement while attempting to replicate these earlier findings. A primary goal of the current experiment was to resolve this discrepancy in the literature by evaluating the effects of a 90-minute period of total light deprivation upon tactile grating orientation discrimination. We also evaluated the potential effect of short-term deprivation upon haptic 3-D shape discrimination using a set of naturally-shaped solid objects. According to previous research, short-term deprivation enhances performance in a tactile 2-D shape discrimination task - perhaps a similar improvement also occurs for haptic 3-D shape discrimination. The results of the current investigation demonstrate that not only does short-term visual deprivation not enhance tactile acuity, it additionally has no effect upon haptic 3-D shape discrimination. While visual deprivation had no effect in our study, there was a significant effect of experience and learning for the grating orientation task - the participants' tactile acuity improved over time, independent of whether they had, or had not, experienced visual deprivation.
Møller, Cecilie; Højlund, Andreas; Bærentsen, Klaus B; Hansen, Niels Chr; Skewes, Joshua C; Vuust, Peter
2018-05-01
Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states that multisensory gain is maximal when responses to the unisensory constituents of the stimuli are weak. It is one of the basic principles underlying multisensory processing of spatiotemporally corresponding crossmodal stimuli, and it is well established at both behavioral and neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues to pitch discrimination follows the PoIE at the interindividual level (i.e., varies with varying levels of auditory-only pitch discrimination abilities). Using an oddball pitch discrimination task, we measured the effect of varying visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than incongruent condition. The magnitude of gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.
Two Methods for Teaching Simple Visual Discriminations to Learners with Severe Disabilities
ERIC Educational Resources Information Center
Graff, Richard B.; Green, Gina
2004-01-01
Simple discriminations are involved in many functional skills; additionally, they are components of conditional discriminations (identity and arbitrary matching-to-sample), which are involved in a wide array of other important performances. Many individuals with severe disabilities have difficulty acquiring simple discriminations with standard…
Mental workload while driving: effects on visual search, discrimination, and decision making.
Recarte, Miguel A; Nunes, Luis M
2003-06-01
The effects of mental workload on visual search and decision making were studied in real traffic conditions with 12 participants who drove an instrumented car. Mental workload was manipulated by having participants perform several mental tasks while driving. A simultaneous visual-detection and discrimination test was used as the performance criterion. Mental tasks produced spatial gaze concentration and visual-detection impairment, although no tunnel vision occurred. According to ocular behavior analysis, this impairment was due to late detection and poor identification more than to response selection. Verbal acquisition tasks were innocuous compared with production tasks, and complex conversations, whether by phone or with a passenger, are dangerous for road safety.
Takemoto, Atsushi; Miwa, Miki; Koba, Reiko; Yamaguchi, Chieko; Suzuki, Hiromi; Nakamura, Katsuki
2015-04-01
Detailed information about the characteristics of learning behavior in marmosets is useful for future marmoset research. We trained 42 marmosets in visual discrimination and reversal learning. All marmosets could learn visual discrimination, and all but one could complete reversal learning, though some marmosets failed to touch the visual stimuli and were screened out. In 87% of measurements, the final percentage of correct responses was over 95%. We quantified performance with two measures: onset trial and dynamic interval. Onset trial represents the number of trials that elapsed before the marmoset started to learn. Dynamic interval represents the number of trials from the start before reaching the final percentage of correct responses. Both measures decreased drastically as a result of the formation of discrimination learning sets. In reversal learning, both measures worsened, but the effect on onset trial was far greater. The effects of age and sex were not significant, at least among the adolescent and young adult marmosets used here. Unexpectedly, experimental circumstance (in the colony or isolator) had only a subtle effect on performance. However, we found that marmosets from different families exhibited different learning process characteristics, suggesting some family effect on learning. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
Beneficial effects of verbalization and visual distinctiveness on remembering and knowing faces.
Brown, Charity; Lloyd-Jones, Toby J
2006-03-01
We examined the effect of verbally describing faces upon visual memory. In particular, we examined the locus of the facilitative effects of verbalization by manipulating the visual distinctiveness of the to-be-remembered faces and using the remember/know procedure as a measure of recognition performance (i.e., remember vs. know judgments). Participants were exposed to distinctive faces intermixed with typical faces and described (or not, in the control condition) each face following its presentation. Subsequently, the participants discriminated the original faces from distinctive and typical distractors in a yes/no recognition decision and made remember/know judgments. Distinctive faces elicited better discrimination performance than did typical faces. Furthermore, for both typical and distinctive faces, better discrimination performance was obtained in the description than in the control condition. Finally, these effects were evident for both recollection- and familiarity-based recognition decisions. We argue that verbalization and visual distinctiveness independently benefit face recognition, and we discuss these findings in terms of the nature of verbalization and the role of recollective and familiarity-based processes in recognition.
Preserved Discrimination Performance and Neural Processing during Crossmodal Attention in Aging
Mishra, Jyoti; Gazzaley, Adam
2013-01-01
In a recent study in younger adults (19-29 year olds) we showed evidence that distributed audiovisual attention resulted in improved discrimination performance for audiovisual stimuli compared to focused visual attention. Here, we extend our findings to healthy older adults (60-90 year olds), showing that performance benefits of distributed audiovisual attention in this population match those of younger adults. Specifically, improved performance was revealed in faster response times for semantically congruent audiovisual stimuli during distributed relative to focused visual attention, without any differences in accuracy. For semantically incongruent stimuli, discrimination accuracy was significantly improved during distributed relative to focused attention. Furthermore, event-related neural processing showed intact crossmodal integration in higher performing older adults similar to younger adults. Thus, there was insufficient evidence to support an age-related deficit in crossmodal attention. PMID:24278464
Examining the relationship between skilled music training and attention.
Wang, Xiao; Ossher, Lynn; Reuter-Lorenz, Patricia A
2015-11-01
While many aspects of cognition have been investigated in relation to skilled music training, surprisingly little work has examined the connection between music training and attentional abilities. The present study investigated the performance of skilled musicians on cognitively demanding sustained attention tasks, measuring both temporal and visual discrimination over a prolonged duration. Participants with extensive formal music training were found to have superior performance on a temporal discrimination task, but not a visual discrimination task, compared to participants with no music training. In addition, no differences were found between groups in vigilance decrement in either type of task. Although no differences were evident in vigilance per se, the results indicate that performance in an attention-demanding temporal discrimination task was superior in individuals with extensive music training. We speculate that this basic cognitive ability may contribute to advantages that musicians show in other cognitive measures. Copyright © 2015 Elsevier Inc. All rights reserved.
Lack of power enhances visual perceptual discrimination.
Weick, Mario; Guinote, Ana; Wilkinson, David
2011-09-01
Powerless individuals face much challenge and uncertainty. As a consequence, they are highly vigilant and closely scrutinize their social environments. The aim of the present research was to determine whether these qualities enhance performance in more basic cognitive tasks involving simple visual feature discrimination. To test this hypothesis, participants performed a series of perceptual matching and search tasks involving colour, texture, and size discrimination. As predicted, those primed with powerlessness generated shorter reaction times and made fewer eye movements than either powerful or control participants. The results indicate that the heightened vigilance shown by powerless individuals is associated with an advantage in performing simple types of psychophysical discrimination. These findings highlight, for the first time, an underlying competency in perceptual cognition that sets powerless individuals above their powerful counterparts, an advantage that may reflect functional adaptation to the environmental challenge and uncertainty that they face. © 2011 Canadian Psychological Association
Teodorescu, Kinneret; Bouchigny, Sylvain; Korman, Maria
2013-08-01
In this study, we explored the time course of haptic stiffness discrimination learning and how it was affected by two experimental factors, the addition of visual information and/or knowledge of results (KR) during training. Stiffness perception may integrate both haptic and visual modalities. However, in many tasks, the visual field is typically occluded, forcing stiffness perception to be dependent exclusively on haptic information. No studies to date addressed the time course of haptic stiffness perceptual learning. Using a virtual environment (VE) haptic interface and a two-alternative forced-choice discrimination task, the haptic stiffness discrimination ability of 48 participants was tested across 2 days. Each day included two haptic test blocks separated by a training block. Additional visual information and/or KR were manipulated between participants during training blocks. Practice repetitions alone induced significant improvement in haptic stiffness discrimination. Between days, accuracy improved slightly, but decision-time performance deteriorated. The addition of visual information and/or KR had only temporary effects on decision time, without affecting the time course of haptic discrimination learning. Learning in haptic stiffness discrimination appears to evolve through at least two distinctive phases: a single training session resulted in both immediate and latent learning. This learning was not affected by the training manipulations inspected. Training skills in VE in spaced sessions can be beneficial for tasks in which haptic perception is critical, such as surgery procedures, when the visual field is occluded. However, training protocols for such tasks should account for the low impact of multisensory information and KR.
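Two-alternative forced-choice discrimination tasks like the one above are commonly driven by an adaptive staircase that converges on a threshold. A minimal one-up/two-down sketch (illustrative only; the class and parameter names are not from the study — this rule targets roughly 70.7% correct):

```python
class Staircase:
    """One-up/two-down adaptive staircase: the stimulus level (e.g. a
    stiffness difference) drops after two consecutive correct responses
    and rises after any incorrect response."""

    def __init__(self, start_level, step):
        self.level = start_level
        self.step = step
        self.correct_streak = 0
        self.reversals = []          # levels at which direction flipped
        self._last_direction = 0     # +1 up, -1 down, 0 not yet moved

    def update(self, correct):
        if correct:
            self.correct_streak += 1
            if self.correct_streak == 2:   # two in a row: make it harder
                self.correct_streak = 0
                self._move(-1)
        else:                              # any error: make it easier
            self.correct_streak = 0
            self._move(+1)
        return self.level

    def _move(self, direction):
        if self._last_direction and direction != self._last_direction:
            self.reversals.append(self.level)
        self._last_direction = direction
        self.level = max(0.0, self.level + direction * self.step)

# Scripted run: correct, correct (step down), wrong (step up = 1st reversal)
s = Staircase(start_level=1.0, step=0.1)
for response in [True, True, False]:
    s.update(response)
print(s.level, len(s.reversals))
```

In practice the threshold estimate is taken as the mean level over the last several reversals.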
Zhao, Henan; Bryant, Garnett W.; Griffin, Wesley; Terrill, Judith E.; Chen, Jian
2017-01-01
We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks. PMID:28113469
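The central idea of SplitVectors, encoding a magnitude's mantissa and exponent separately rather than as a single hard-to-read glyph length, can be sketched as follows (an illustrative reconstruction, not the authors' code):

```python
import math

def split_magnitude(value):
    """Decompose a positive magnitude into scientific-notation parts:
    a mantissa in [1, 10) and an integer exponent, so a glyph can
    encode each part with its own visual channel."""
    if value <= 0:
        raise ValueError("magnitude must be positive")
    exponent = math.floor(math.log10(value))
    mantissa = value / 10 ** exponent
    return mantissa, exponent

# A large-magnitude-range field: values spanning several orders
for v in (0.00042, 3.7, 51200.0):
    m, e = split_magnitude(v)
    print(f"{v} = {m:.2f} x 10^{e}")
```

Because the mantissa always falls in a narrow range, its visual encoding stays legible regardless of how many orders of magnitude the field spans, which is the legibility gain the abstract describes.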
Prestimulus EEG Power Predicts Conscious Awareness But Not Objective Visual Performance
Veniero, Domenica
2017-01-01
Abstract Prestimulus oscillatory neural activity has been linked to perceptual outcomes during performance of psychophysical detection and discrimination tasks. Specifically, the power and phase of low frequency oscillations have been found to predict whether an upcoming weak visual target will be detected or not. However, the mechanisms by which baseline oscillatory activity influences perception remain unclear. Recent studies suggest that the frequently reported negative relationship between α power and stimulus detection may be explained by changes in detection criterion (i.e., increased target present responses regardless of whether the target was present/absent) driven by the state of neural excitability, rather than changes in visual sensitivity (i.e., more veridical percepts). Here, we recorded EEG while human participants performed a luminance discrimination task on perithreshold stimuli in combination with single-trial ratings of perceptual awareness. Our aim was to investigate whether the power and/or phase of prestimulus oscillatory activity predict discrimination accuracy and/or perceptual awareness on a trial-by-trial basis. Prestimulus power (3–28 Hz) was inversely related to perceptual awareness ratings (i.e., higher ratings in states of low prestimulus power/high excitability) but did not predict discrimination accuracy. In contrast, prestimulus oscillatory phase did not predict awareness ratings or accuracy in any frequency band. These results provide evidence that prestimulus α power influences the level of subjective awareness of threshold visual stimuli but does not influence visual sensitivity when a decision has to be made regarding stimulus features. Hence, we find a clear dissociation between the influence of ongoing neural activity on conscious awareness and objective performance. PMID:29255794
Both hand position and movement direction modulate visual attention
Festman, Yariv; Adam, Jos J.; Pratt, Jay; Fischer, Martin H.
2013-01-01
The current study explored effects of continuous hand motion on the allocation of visual attention. A concurrent paradigm was used to combine visually concealed continuous hand movements with an attentionally demanding letter discrimination task. The letter probe appeared contingent upon the moving right hand passing through one of six positions. Discrimination responses were then collected via a keyboard press with the static left hand. Both the right hand's position and its movement direction systematically contributed to participants' visual sensitivity. Discrimination performance increased substantially when the right hand was distant from, but moving toward the visual probe location (replicating the far-hand effect, Festman et al., 2013). However, this effect disappeared when the probe appeared close to the static left hand, supporting the view that static and dynamic features of both hands combine in modulating pragmatic maps of attention. PMID:24098288
LaRoche, Ronee B; Morgan, Russell E
2007-01-01
Over the past two decades the use of selective serotonin reuptake inhibitors (SSRIs) to treat behavioral disorders in children has grown rapidly, despite little evidence regarding the safety and efficacy of these drugs for use in children. Utilizing a rat model, this study investigated whether post-weaning exposure to a prototype SSRI, fluoxetine (FLX), influenced performance on visual tasks designed to measure discrimination learning, sustained attention, inhibitory control, and reaction time. Additionally, sex differences in response to varying doses of fluoxetine were examined. In Experiment 1, female rats were administered (P.O.) fluoxetine (10 mg/kg) or vehicle (apple juice) from PND 25 through PND 49. After a 14-day washout period, subjects were trained to perform a simultaneous visual discrimination task. Subjects were then tested for 20 sessions on a visual attention task that consisted of varied stimulus delays (0, 3, 6, or 9 s) and cue durations (200, 400, or 700 ms). In Experiment 2, both male and female Long-Evans rats (24 F, 24 M) were administered fluoxetine (0, 5, 10, or 15 mg/kg) then tested in the same visual tasks used in Experiment 1, with the addition of open-field and elevated plus-maze testing. Few FLX-related differences were seen in the visual discrimination, open field, or plus-maze tasks. However, results from the visual attention task indicated a dose-dependent reduction in the performance of fluoxetine-treated males, whereas fluoxetine-treated females tended to improve over baseline. These findings indicate that enduring, behaviorally-relevant alterations of the CNS can occur following pharmacological manipulation of the serotonin system during postnatal development.
Effective 3-D shape discrimination survives retinal blur.
Norman, J Farley; Beers, Amanda M; Holmin, Jessica S; Boswell, Alexandria M
2010-08-01
A single experiment evaluated observers' ability to visually discriminate 3-D object shape, where the 3-D structure was defined by motion, texture, Lambertian shading, and occluding contours. The observers' vision was degraded to varying degrees by blurring the experimental stimuli, using 2.0-, 2.5-, and 3.0-diopter convex lenses. The lenses reduced the observers' acuity from -0.091 LogMAR (in the no-blur conditions) to 0.924 LogMAR (in the conditions with the most blur; 3.0-diopter lenses). This visual degradation, although producing severe reductions in visual acuity, had only small (but significant) effects on the observers' ability to discriminate 3-D shape. The observers' shape discrimination performance was facilitated by the objects' rotation in depth, regardless of the presence or absence of blur. Our results indicate that accurate global shape discrimination survives a considerable amount of retinal blur.
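The LogMAR values reported above convert directly to the minimum angle of resolution and to decimal (Snellen-fraction) acuity, since LogMAR is just log10 of the MAR in arcminutes. A minimal sketch of the conversion (function names are illustrative, not from the study):

```python
def logmar_to_mar(logmar: float) -> float:
    """Minimum angle of resolution (MAR) in arcminutes; LogMAR = log10(MAR)."""
    return 10 ** logmar

def logmar_to_decimal(logmar: float) -> float:
    """Decimal acuity (1.0 corresponds to 20/20, 0.5 to 20/40)."""
    return 1.0 / logmar_to_mar(logmar)

# Acuity values reported in the abstract:
no_blur = logmar_to_decimal(-0.091)   # ~1.23, slightly better than 20/20
max_blur = logmar_to_decimal(0.924)   # ~0.12, roughly 20/168
```

By this conversion, the 3.0-diopter lenses degraded acuity by roughly a factor of ten, which underscores how robust the reported 3-D shape discrimination was.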
Short-Term Visual Deprivation, Tactile Acuity, and Haptic Solid Shape Discrimination
Crabtree, Charles E.; Norman, J. Farley
2014-01-01
Previous psychophysical studies have reported conflicting results concerning the effects of short-term visual deprivation upon tactile acuity. Some studies have found that 45 to 90 minutes of total light deprivation produce significant improvements in participants' tactile acuity as measured with a grating orientation discrimination task. In contrast, a single 2011 study found no such improvement while attempting to replicate these earlier findings. A primary goal of the current experiment was to resolve this discrepancy in the literature by evaluating the effects of a 90-minute period of total light deprivation upon tactile grating orientation discrimination. We also evaluated the potential effect of short-term deprivation upon haptic 3-D shape discrimination using a set of naturally-shaped solid objects. According to previous research, short-term deprivation enhances performance in a tactile 2-D shape discrimination task – perhaps a similar improvement also occurs for haptic 3-D shape discrimination. The results of the current investigation demonstrate that not only does short-term visual deprivation not enhance tactile acuity, it additionally has no effect upon haptic 3-D shape discrimination. While visual deprivation had no effect in our study, there was a significant effect of experience and learning for the grating orientation task – the participants' tactile acuity improved over time, independent of whether they had, or had not, experienced visual deprivation. PMID:25397327
Oetjen, Sophie; Ziefle, Martina
2009-01-01
An increasing demand to work with electronic displays and to use mobile computers emphasises the need to compare visual performance while working with different screen types. In the present study, a cathode ray tube (CRT) was compared to an external liquid crystal display (LCD) and a Notebook-LCD. The influence of screen type and viewing angle on discrimination performance was studied. Physical measurements revealed that luminance and contrast values change with varying viewing angles (anisotropy). This is most pronounced in Notebook-LCDs, followed by external LCDs and CRTs. Performance data showed that LCD anisotropy has negative impacts on completing time-critical visual tasks. The best results were achieved when a CRT was used. The largest deterioration of performance resulted when participants worked with a Notebook-LCD. When it is necessary to react quickly and accurately, LCD screens have disadvantages. The anisotropy of LCD-TFTs is therefore considered a limiting factor that deteriorates visual performance.
Is improved contrast sensitivity a natural consequence of visual training?
Levi, Aaron; Shaked, Danielle; Tadin, Duje; Huxlin, Krystel R.
2015-01-01
Many studies have shown that training and testing conditions modulate specificity of visual learning to trained stimuli and tasks. In visually impaired populations, generalizability of visual learning to untrained stimuli/tasks is almost always reported, with contrast sensitivity (CS) featuring prominently among these collaterally-improved functions. To understand factors underlying this difference, we measured CS for direction and orientation discrimination in the visual periphery of three groups of visually-intact subjects. Group 1 trained on an orientation discrimination task with static Gabors whose luminance contrast was decreased as performance improved. Group 2 trained on a global direction discrimination task using high-contrast random dot stimuli previously used to recover motion perception in cortically blind patients. Group 3 underwent no training. Both forms of training improved CS with some degree of specificity for basic attributes of the trained stimulus/task. Group 1's largest enhancement was in CS around the trained spatial/temporal frequencies; similarly, Group 2's largest improvements occurred in CS for discriminating moving and flickering stimuli. Group 3 saw no significant CS changes. These results indicate that CS improvements may be a natural consequence of multiple forms of visual training in visually intact humans, albeit with some specificity to the trained visual domain(s). PMID:26305736
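Group 1's procedure (decreasing Gabor contrast as performance improves) is typically implemented with an adaptive staircase. The abstract does not specify the rule used, so as an assumption the sketch below uses the classic 2-down/1-up rule, which converges on the contrast yielding about 70.7% correct; the class name and step sizes are illustrative:

```python
class TwoDownOneUpStaircase:
    """2-down/1-up adaptive staircase: contrast decreases after two
    consecutive correct responses and increases after any error,
    converging on ~70.7%-correct contrast."""

    def __init__(self, start=0.5, step=0.05, floor=0.01, ceiling=1.0):
        self.contrast = start
        self.step = step
        self.floor = floor
        self.ceiling = ceiling
        self._streak = 0  # consecutive correct responses

    def update(self, correct: bool) -> float:
        if correct:
            self._streak += 1
            if self._streak == 2:          # two in a row -> make it harder
                self.contrast = max(self.floor, self.contrast - self.step)
                self._streak = 0
        else:                              # any error -> make it easier
            self.contrast = min(self.ceiling, self.contrast + self.step)
            self._streak = 0
        return self.contrast
```

Each trial, the task presents a Gabor at `staircase.contrast` and calls `update()` with the observer's response, so the contrast tracks the observer's threshold over the session.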
Olfactory discrimination: when vision matters?
Demattè, M Luisa; Sanabria, Daniel; Spence, Charles
2009-02-01
Many previous studies have attempted to investigate the effect of visual cues on olfactory perception in humans. The majority of this research has only looked at the modulatory effect of color, which has typically been explained in terms of multisensory perceptual interactions. However, such crossmodal effects may equally well reflect interactions taking place at a higher level of information processing. In fact, it is well-known that semantic knowledge can have a substantial effect on people's olfactory perception. In the present study, we therefore investigated the influence of visual cues, consisting of color patches and/or shapes, on people's olfactory discrimination performance. Participants had to make speeded odor discrimination responses (lemon vs. strawberry) while viewing a red or yellow color patch, an outline drawing of a strawberry or lemon, or a combination of these color and shape cues. Even though participants were instructed to ignore the visual stimuli, our results demonstrate that the accuracy of their odor discrimination responses was influenced by the visual distractors. This result shows that both color and shape information are taken into account during speeded olfactory discrimination, even when such information is completely task irrelevant, hinting at the automaticity of such higher-level visual-olfactory crossmodal interactions.
Braille character discrimination in blindfolded human subjects.
Kauffman, Thomas; Théoret, Hugo; Pascual-Leone, Alvaro
2002-04-16
Visual deprivation may lead to enhanced performance in other sensory modalities. Whether this is the case in the tactile modality is controversial and may depend upon specific training and experience. We compared the performance of normally sighted subjects on a Braille character discrimination task to that of subjects who were blindfolded for a period of five days. Some participants in each group (blindfolded and sighted) received intensive Braille training to offset the effects of experience. Blindfolded subjects performed better than sighted subjects in the Braille discrimination task, irrespective of tactile training. For the left index finger, which had not been used in the formal Braille classes, blindfolding had no effect on performance, while subjects who underwent tactile training outperformed non-stimulated participants. These results suggest that visual deprivation speeds up Braille learning and may be associated with behaviorally relevant neuroplastic changes.
Empiric determination of corrected visual acuity standards for train crews.
Schwartz, Steven H; Swanson, William H
2005-08-01
Probably the most common visual standard for employment in the transportation industry is best-corrected, high-contrast visual acuity. Because such standards were often established absent empiric linkage to job performance, it is possible that a job applicant or employee who has visual acuity less than the standard may be able to satisfactorily perform the required job activities. For the transportation system that we examined, the train crew is required to inspect visually the length of the train before and during the time it leaves the station. The purpose of the inspection is to determine if an individual is in a hazardous position with respect to the train. In this article, we determine the extent to which high-contrast visual acuity can predict performance on a simulated task. Performance at discriminating hazardous from safe conditions, as depicted in projected photographic slides, was determined as a function of visual acuity. For different levels of visual acuity, which was varied through the use of optical defocus, a subject was required to label scenes as hazardous or safe. Task performance was highly correlated with visual acuity as measured under conditions normally used for vision screenings (high-illumination and high-contrast): as the acuity decreases, performance at discriminating hazardous from safe scenes worsens. This empirically based methodology can be used to establish a corrected high-contrast visual acuity standard for safety-sensitive work in transportation that is linked to the performance of a job-critical task.
Real-time detection and discrimination of visual perception using electrocorticographic signals
NASA Astrophysics Data System (ADS)
Kapeller, C.; Ogawa, H.; Schalk, G.; Kunii, N.; Coon, W. G.; Scharinger, J.; Guger, C.; Kamada, K.
2018-06-01
Objective. Several neuroimaging studies have demonstrated that the ventral temporal cortex contains specialized regions that process visual stimuli. This study investigated the spatial and temporal dynamics of electrocorticographic (ECoG) responses to different types and colors of visual stimulation that were presented to four human participants, and demonstrated a real-time decoder that detects and discriminates responses to untrained natural images. Approach. ECoG signals from the participants were recorded while they were shown colored and greyscale versions of seven types of visual stimuli (images of faces, objects, bodies, line drawings, digits, and kanji and hiragana characters), resulting in 14 classes for discrimination (experiment I). Additionally, a real-time system asynchronously classified ECoG responses to faces, kanji and black screens presented via a monitor (experiment II), or to natural scenes (i.e. the face of an experimenter, natural images of faces and kanji, and a mirror) (experiment III). Outcome measures in all experiments included the discrimination performance across types based on broadband γ activity. Main results. Experiment I demonstrated an offline classification accuracy of 72.9% when discriminating among the seven types (without color separation). Further discrimination of grey versus colored images reached an accuracy of 67.1%. Discriminating all colors and types (14 classes) yielded an accuracy of 52.1%. In experiments II and III, the real-time decoder correctly detected 73.7% of responses to face, kanji and black computer stimuli and 74.8% of responses to presented natural scenes. Significance. Seven different types and their color information (either grey or color) could be detected and discriminated using broadband γ activity. Discrimination performance was maximized for combined spatial-temporal information.
The discrimination of stimulus color information provided the first ECoG-based evidence for color-related population-level cortical broadband γ responses in humans. Stimulus categories can be detected by their ECoG responses in real time within 500 ms with respect to stimulus onset.
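The 14-class discrimination in experiment I can be approximated by any standard multi-class decoder over trial-wise broadband-γ feature vectors; the study's own pipeline is not reproduced here. A minimal nearest-centroid sketch on synthetic features (trial counts, dimensionality, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_train, n_test, n_features = 14, 40, 20, 32

# Synthetic broadband-gamma features: one mean activation pattern per class.
centroids = rng.normal(0.0, 1.0, (n_classes, n_features))

def make_trials(n_per_class, noise=1.0):
    """Draw noisy trials around each class pattern, with class labels."""
    X = np.concatenate([c + rng.normal(0.0, noise, (n_per_class, n_features))
                        for c in centroids])
    y = np.repeat(np.arange(n_classes), n_per_class)
    return X, y

X_train, y_train = make_trials(n_train)
X_test, y_test = make_trials(n_test)

# Nearest-centroid decoder: assign each test trial to the closest class mean.
means = np.stack([X_train[y_train == k].mean(axis=0) for k in range(n_classes)])
dists = np.linalg.norm(X_test[:, None, :] - means[None, :, :], axis=2)
pred = dists.argmin(axis=1)

accuracy = (pred == y_test).mean()
chance = 1 / n_classes  # ~7.1% for 14 classes, the baseline the 52.1% beats
```

The reported 52.1% for 14 classes is best judged against this ~7.1% chance level rather than against 50%.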
Hogarth, Lee; Dickinson, Anthony; Duka, Theodora
2003-08-01
Incentive salience theory states that acquired bias in selective attention for stimuli associated with tobacco-smoke reinforcement controls the selective performance of tobacco-seeking and tobacco-taking behaviour. To support this theory, we assessed whether a stimulus that had acquired control of a tobacco-seeking response in a discrimination procedure would command the focus of visual attention in a subsequent test phase. Smokers received discrimination training in which an instrumental key-press response was followed by tobacco-smoke reinforcement when one visual discriminative stimulus (S+) was present, but not when another stimulus (S-) was present. The skin conductance response to the S+ and S- assessed whether Pavlovian conditioning to the S+ had taken place. In a subsequent test phase, the S+ and S- were presented in the dot-probe task and the allocation of the focus of visual attention to these stimuli was measured. Participants learned to perform the instrumental tobacco-seeking response selectively in the presence of the S+ relative to the S-, and showed a greater skin conductance response to the S+ than the S-. In the subsequent test phase, participants allocated the focus of visual attention to the S+ in preference to the S-. Correlation analysis revealed that the visual attentional bias for the S+ was positively associated with the number of times the S+ had been paired with tobacco-smoke in training, the skin conductance response to the S+ and with subjective craving to smoke. Furthermore, increased exposure to tobacco-smoke in the natural environment was associated with reduced discrimination learning. These data demonstrate that discriminative stimuli that signal that tobacco-smoke reinforcement is available acquire the capacity to command selective attention and to elicit instrumental tobacco-seeking behaviour.
Object localization, discrimination, and grasping with the optic nerve visual prosthesis.
Duret, Florence; Brelén, Måten E; Lambert, Valerie; Gérard, Benoît; Delbeke, Jean; Veraart, Claude
2006-01-01
This study involved a volunteer, completely blind from retinitis pigmentosa, who had previously been implanted with an optic nerve visual prosthesis. The aim of this two-year study was to train the volunteer to localize a given object in nine different positions, to discriminate the object within a choice of six, and then to grasp it. In a closed-loop protocol including a head-worn video camera, the nerve was stimulated whenever a part of the processed image of the object being scrutinized matched the center of an elicitable phosphene. The accessible visual field included 109 phosphenes in a 14 degrees x 41 degrees area. Results showed that training was required to succeed in the localization and discrimination tasks, but practically no training was required for grasping the object. The volunteer was able to successfully complete all tasks after training. The volunteer systematically performed several left-right and bottom-up scanning movements during the discrimination task. Discrimination strategies included stimulation phases and no-stimulation phases of roughly similar duration. This study provides a step towards the practical use of the optic nerve visual prosthesis in current daily life.
McKean, Danielle L.; Tsao, Jack W.; Chan, Annie W.-Y.
2017-01-01
The Body Inversion Effect (BIE; reduced visual discrimination performance for inverted compared to upright bodies) suggests that bodies are visually processed configurally; however, the specific importance of head posture information in the BIE has been indicated in reports of BIE reduction for whole bodies with fixed head position and for headless bodies. Through measurement of gaze patterns and investigation of the causal relation of fixation location to visual body discrimination performance, the present study reveals joint contributions of feature and configuration processing to visual body discrimination. Participants predominantly gazed at the (body-centric) upper body for upright bodies and the lower body for inverted bodies in the context of an experimental paradigm directly comparable to that of prior studies of the BIE. Subsequent manipulation of fixation location indicates that these preferential gaze locations causally contributed to the BIE for whole bodies largely due to the informative nature of gazing at or near the head. Also, a BIE was detected for both whole and headless bodies even when fixation location on the body was held constant, indicating a role of configural processing in body discrimination, though inclusion of the head posture information was still highly discriminative in the context of such processing. Interestingly, the impact of configuration (upright and inverted) to the BIE appears greater than that of differential preferred gaze locations. PMID:28085894
Castillo-Padilla, Diana V; Funke, Klaus
2016-01-01
The early cortical critical period is a state of enhanced neuronal plasticity enabling the establishment of specific neuronal connections during first sensory experience. Visual performance with regard to pattern discrimination is impaired if the cortex is deprived of visual input during the critical period. We wondered how unspecific activation of the visual cortex before closure of the critical period using repetitive transcranial magnetic stimulation (rTMS) could affect the critical period and the visual performance of the experimental animals. Would it cause premature closure of the plastic state and thus worsen experience-dependent visual performance, or would it be able to preserve plasticity? Effects of intermittent theta-burst stimulation (iTBS) were compared with those of an enriched environment (EE) during dark-rearing (DR) from birth. Rats dark-reared in a standard cage showed poor improvement in a visual pattern discrimination task, while rats housed in EE or treated with iTBS showed a performance indistinguishable from rats reared in a normal light/dark cycle. The behavioral effects were accompanied by correlated changes in the expression of brain-derived neurotrophic factor (BDNF) and atypical PKC (PKCζ/PKMζ), two factors controlling stabilization of synaptic potentiation. It appears that not only nonvisual sensory activity and exercise but also cortical activation induced by rTMS has the potential to alleviate the effects of DR on cortical development, most likely due to stimulation of BDNF synthesis and release. As we showed previously, iTBS reduced the expression of parvalbumin in inhibitory cortical interneurons, indicating that modulation of the activity of fast-spiking interneurons contributes to the observed effects of iTBS. © 2015 Wiley Periodicals, Inc.
Norman, J Farley; Phillips, Flip; Holmin, Jessica S; Norman, Hideko F; Beers, Amanda M; Boswell, Alexandria M; Cheeseman, Jacob R; Stethen, Angela G; Ronning, Cecilia
2012-10-01
A set of three experiments evaluated 96 participants' ability to visually and haptically discriminate solid object shape. In the past, some researchers have found haptic shape discrimination to be substantially inferior to visual shape discrimination, while other researchers have found haptics and vision to be essentially equivalent. A primary goal of the present study was to understand these discrepant past findings and to determine the true capabilities of the haptic system. All experiments used the same task (same vs. different shape discrimination) and stimulus objects (James Gibson's "feelies" and a set of naturally shaped objects--bell peppers). However, the methodology varied across experiments. Experiment 1 used random 3-dimensional (3-D) orientations of the stimulus objects, and the conditions were full-cue (active manipulation of objects and rotation of the visual objects in depth). Experiment 2 restricted the 3-D orientations of the stimulus objects and limited the haptic and visual information available to the participants. Experiment 3 compared restricted and full-cue conditions using random 3-D orientations. We replicated both previous findings in the current study. When we restricted visual and haptic information (and placed the stimulus objects in the same orientation on every trial), the participants' visual performance was superior to that obtained for haptics (replicating the earlier findings of Davidson et al. in Percept Psychophys 15(3):539-543, 1974). When the circumstances resembled those of ordinary life (e.g., participants able to actively manipulate objects and see them from a variety of perspectives), we found no significant difference between visual and haptic solid shape discrimination.
Moehler, Tobias; Fiehler, Katja
2014-12-01
The present study investigated the coupling of selection-for-perception and selection-for-action during saccadic eye movement planning in three dual-task experiments. We focused on the effects of spatial congruency of saccade target (ST) location and discrimination target (DT) location and the time between ST-cue and Go-signal (SOA) on saccadic eye movement performance. In two experiments, participants performed a visual discrimination task at a cued location while programming a saccadic eye movement to a cued location. In the third experiment, the discrimination task was not cued and appeared at a random location. Spatial congruency of ST-location and DT-location resulted in enhanced perceptual performance irrespective of SOA. Perceptual performance in spatially incongruent trials was above chance, but only when the DT-location was cued. Saccade accuracy and precision were also affected by spatial congruency showing superior performance when the ST- and DT-location coincided. Saccade latency was only affected by spatial congruency when the DT-cue was predictive of the ST-location. Moreover, saccades consistently curved away from the incongruent DT-locations. Importantly, the effects of spatial congruency on saccade parameters only occurred when the DT-location was cued; therefore, results from experiments 1 and 2 are due to the endogenous allocation of attention to the DT-location and not caused by the salience of the probe. The SOA affected saccade latency showing decreasing latencies with increasing SOA. In conclusion, our results demonstrate that visuospatial attention can be voluntarily distributed upon spatially distinct perceptual and motor goals in dual-task situations, resulting in a decline of visual discrimination and saccade performance.
Prestimulus alpha-band power biases visual discrimination confidence, but not accuracy.
Samaha, Jason; Iemi, Luca; Postle, Bradley R
2017-09-01
The magnitude of power in the alpha-band (8-13 Hz) of the electroencephalogram (EEG) prior to the onset of a near-threshold visual stimulus predicts performance. Together with other findings, this has been interpreted as evidence that alpha-band dynamics reflect cortical excitability. We reasoned, however, that non-specific changes in excitability would be expected to influence signal and noise in the same way, leaving actual discriminability unchanged. Indeed, using a two-choice orientation discrimination task, we found that discrimination accuracy was unaffected by fluctuations in prestimulus alpha power. Decision confidence, on the other hand, was strongly negatively correlated with prestimulus alpha power. This finding constitutes a clear dissociation between objective and subjective measures of visual perception as a function of prestimulus cortical excitability. This dissociation is predicted by a model where the balance of evidence supporting each choice drives objective performance but only the magnitude of evidence supporting the selected choice drives subjective reports, suggesting that human perceptual confidence can be suboptimal with respect to tracking objective accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
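The model described above can be sketched numerically: a multiplicative excitability gain applied to both signal and noise leaves choice accuracy (driven by the balance of evidence) unchanged, while confidence (read out as the magnitude of evidence for the selected choice) scales with the gain. The parameterization below is an illustrative assumption, not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, signal = 100_000, 1.0

def simulate(gain):
    """One block of 2AFC trials at a given cortical-excitability gain."""
    # Evidence for each alternative; the true orientation carries the signal.
    # The gain multiplies signal and noise alike (non-specific excitability).
    e_true = gain * (signal + rng.normal(0.0, 1.0, n_trials))
    e_false = gain * rng.normal(0.0, 1.0, n_trials)
    correct = e_true > e_false
    # Confidence = magnitude of evidence for whichever choice was selected.
    confidence = np.maximum(e_true, e_false)
    return correct.mean(), confidence.mean()

acc_low, conf_low = simulate(gain=0.5)    # low excitability (high alpha power)
acc_high, conf_high = simulate(gain=2.0)  # high excitability (low alpha power)
# acc_low ~ acc_high, but conf_high > conf_low: accuracy is gain-invariant
# while confidence tracks excitability, matching the reported dissociation.
```

The simulation reproduces the qualitative pattern in the abstract: objective accuracy is flat across excitability states, while mean confidence rises as (simulated) prestimulus alpha power falls.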
Investigating the role of the superior colliculus in active vision with the visual search paradigm.
Shen, Kelly; Valero, Jerome; Day, Gregory S; Paré, Martin
2011-06-01
We review here both the evidence that the functional visuomotor organization of the optic tectum is conserved in the primate superior colliculus (SC) and the evidence for the linking proposition that SC discriminating activity instantiates saccade target selection. We also present new data in response to questions that arose from recent SC visual search studies. First, we observed that SC discriminating activity predicts saccade initiation when monkeys perform an unconstrained search for a target defined by either a single visual feature or a conjunction of two features. Quantitative differences between the results in these two search tasks suggest, however, that SC discriminating activity does not only reflect saccade programming. This finding concurs with visual search studies conducted in posterior parietal cortex and the idea that, during natural active vision, visual attention is shifted concomitantly with saccade programming. Second, the analysis of a large neuronal sample recorded during feature search revealed that visual neurons in the superficial layers do possess discriminating activity. In addition, the hypotheses that there are distinct types of SC neurons in the deeper layers and that they are differently involved in saccade target selection were not substantiated. Third, we found that the discriminating quality of single-neuron activity substantially surpasses the ability of the monkeys to discriminate the target from distracters, raising the possibility that saccade target selection is a noisy process. We discuss these new findings in light of the visual search literature and the view that the SC is a visual salience map for orienting eye movements. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Neural mechanisms of coarse-to-fine discrimination in the visual cortex.
Purushothaman, Gopathy; Chen, Xin; Yampolsky, Dmitry; Casagrande, Vivien A
2014-12-01
Vision is a dynamic process that refines the spatial scale of analysis over time, as evidenced by a progressive improvement in the ability to detect and discriminate finer details. To understand coarse-to-fine discrimination, we studied the dynamics of spatial frequency (SF) response using reverse correlation in the primary visual cortex (V1) of the primate. In a majority of V1 cells studied, preferred SF either increased monotonically with time (group 1) or changed nonmonotonically, with an initial increase followed by a decrease (group 2). Monotonic shift in preferred SF occurred with or without an early suppression at low SFs. Late suppression at high SFs always accompanied nonmonotonic SF dynamics. Bayesian analysis showed that SF discrimination performance and best discriminable SF frequencies changed with time in different ways in the two groups of neurons. In group 1 neurons, SF discrimination performance peaked on both left and right flanks of the SF tuning curve at about the same time. In group 2 neurons, peak discrimination occurred on the right flank (high SFs) later than on the left flank (low SFs). Group 2 neurons were also better discriminators of high SFs. We examined the relationship between the time at which SF discrimination performance peaked on either flank of the SF tuning curve and the corresponding best discriminable SFs in both neuronal groups. This analysis showed that the population best discriminable SF increased with time in V1. These results suggest neural mechanisms for coarse-to-fine discrimination behavior and that this process originates in V1 or earlier. Copyright © 2014 the American Physiological Society.
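The finding that discrimination peaks on the flanks of the SF tuning curve follows from standard signal-detection reasoning (a simplification, not the paper's Bayesian analysis): for a tuning curve f(s) with response noise σ, local discriminability scales with |f'(s)|/σ, which is maximal where the curve is steepest rather than at its peak. A sketch under an assumed Gaussian tuning curve:

```python
import numpy as np

sf = np.linspace(0.1, 8.0, 1000)        # spatial frequency axis (cycles/deg)
peak, width, noise_sd = 2.0, 1.0, 0.1   # illustrative tuning parameters

tuning = np.exp(-0.5 * ((sf - peak) / width) ** 2)  # Gaussian tuning curve
slope = np.gradient(tuning, sf)
d_prime = np.abs(slope) / noise_sd      # local discriminability ~ |f'(s)|/sigma

# Best discriminable SFs: the steepest points on the low- and high-SF flanks,
# which for a Gaussian lie one tuning width on either side of the peak.
best_low = sf[np.argmax(np.where(sf < peak, d_prime, 0.0))]
best_high = sf[np.argmax(np.where(sf > peak, d_prime, 0.0))]
```

Under this view, the dynamic shifts in preferred SF described above would drag the steep flanks (and hence the best discriminable SFs) to higher frequencies over time, consistent with coarse-to-fine processing.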
Face-gender discrimination is possible in the near-absence of attention.
Reddy, Leila; Wilken, Patrick; Koch, Christof
2004-03-02
The attentional cost associated with the visual discrimination of the gender of a face was investigated. Participants performed a face-gender discrimination task either alone (single-task) or concurrently (dual-task) with a known attentionally demanding task (5-letter T/L discrimination). Overall performance on face-gender discrimination suffered remarkably little under the dual-task condition compared to the single-task condition. Similar results were obtained in experiments that controlled for potential training effects or the use of low-level cues in this discrimination task. Our results provide further evidence against the notion that only low-level representations can be accessed outside the focus of attention.
Poltavski, Dmitri; Biberdorf, David
2015-01-01
In the growing field of sports vision, little is known about the unique attributes of visual processing in ice hockey or what role visual processing plays in an athlete's overall performance. In the present study, we evaluated whether visual, perceptual and cognitive/motor variables collected using the Nike SPARQ Sensory Training Station have significant relevance to the real game statistics of 38 Division I collegiate male and female hockey players. The results demonstrated that 69% of variance in the goals made by forwards in 2011-2013 could be predicted by their faster reaction time to a visual stimulus, better visual memory, better visual discrimination and a faster ability to shift focus between near and far objects. Approximately 33% of variance in game points was significantly related to better discrimination among competing visual stimuli. In addition, reaction time to a visual stimulus as well as stereoptic quickness significantly accounted for 24% of variance in the mean duration of the player's penalty time. This is one of the first studies to show that some of the visual skills that state-of-the-art generalised sports vision programmes are purported to target may indeed be important for hockey players' actual performance on the ice.
Graeber, R C; Schroeder, D M; Jane, J A; Ebbesson, S O
1978-07-15
An instrumental conditioning task was used to examine the role of the nurse shark telencephalon in black-white (BW) and horizontal-vertical stripes (HV) discrimination performance. In the first experiment, subjects initially received either bilateral anterior telencephalic control lesions or bilateral posterior telencephalic lesions aimed at destroying the central telencephalic nuclei (CN), which are known to receive direct input from the thalamic visual area. Postoperatively, the sharks were trained first on BW and then on HV. Those with anterior lesions learned both tasks as rapidly as unoperated subjects. Those with posterior lesions exhibited visual discrimination deficits related to the amount of damage to the CN and its connecting pathways. Severe damage resulted in an inability to learn either task but caused no impairments in motivation or general learning ability. In the second experiment, the sharks were first trained on BW and HV and then operated. Suction ablations were used to remove various portions of the CN. Sharks with 10% or less damage to the CN retained the preoperatively acquired discriminations almost perfectly. Those with 11-50% damage had to be retrained on both tasks. Almost total removal of the CN produced behavioral indications of blindness along with an inability to perform above the chance level on BW despite excellent retention of both discriminations over a 28-day period before surgery. It appears, however, that such sharks can still detect light. These results implicate the central telencephalic nuclei in the control of visually guided behavior in sharks.
Face adaptation improves gender discrimination.
Yang, Hua; Shen, Jianhong; Chen, Juan; Fang, Fang
2011-01-01
Adaptation to a visual pattern can alter the sensitivities of neuronal populations encoding the pattern. However, the functional roles of adaptation, especially in high-level vision, are still equivocal. In the present study, we performed three experiments to investigate if face gender adaptation could affect gender discrimination. Experiments 1 and 2 revealed that adapting to a male/female face could selectively enhance discrimination for male/female faces. Experiment 3 showed that the discrimination enhancement induced by face adaptation could transfer across a substantial change in three-dimensional face viewpoint. These results provide further evidence suggesting that, similar to low-level vision, adaptation in high-level vision could calibrate the visual system to current inputs of complex shapes (i.e. face) and improve discrimination at the adapted characteristic. Copyright © 2010 Elsevier Ltd. All rights reserved.
Tapper, Anthony; Gonzalez, Dave; Roy, Eric; Niechwiej-Szwedo, Ewa
2017-02-01
The purpose of this study was to examine executive functions in team sport athletes with and without a history of concussion. Executive functions comprise many cognitive processes, including working memory, attention, and multi-tasking. Past research has shown that concussions cause difficulties in vestibular-visual and vestibular-auditory dual-tasking; however, visual-auditory tasks have rarely been examined. Twenty-nine intercollegiate varsity ice hockey athletes (age = 19.13, SD = 1.56; 15 females) performed an experimental dual-task paradigm that required simultaneously processing visual and auditory information. A brief interview, event description, and self-report questionnaires were used to assign participants to the concussion and no-concussion groups. Eighteen athletes had a history of concussion and 11 had no concussion history. The two tests involved visuospatial working memory (i.e., the Corsi block test) and auditory tone discrimination. Participants completed both tasks individually, then simultaneously. Two outcome variables were measured: Corsi block memory span and auditory tone discrimination accuracy. No differences were found when each task was performed alone; however, athletes with a history of concussion performed significantly worse on the tone discrimination task in the dual-task condition. In conclusion, long-term deficits in executive functions were associated with a prior history of concussion when cognitive resources were stressed. Evaluations of executive functions and divided attention appear to be helpful in discriminating participants with and without a history of concussion.
Li, Li; MaBouDi, HaDi; Egertová, Michaela; Elphick, Maurice R; Chittka, Lars; Perry, Clint J
2017-10-11
Synaptic plasticity is considered to be a basis for learning and memory. However, the relationship between synaptic arrangements and individual differences in learning and memory is poorly understood. Here, we explored how the density of microglomeruli (synaptic complexes) within specific regions of the bumblebee (Bombus terrestris) brain relates to both visual learning and inter-individual differences in learning and memory performance on a visual discrimination task. Using whole-brain immunolabelling, we measured the density of microglomeruli in the collar region (visual association areas) of the mushroom bodies of the bumblebee brain. We found that bumblebees which made fewer errors during training in a visual discrimination task had higher microglomerular density. Similarly, bumblebees that had better retention of the learned colour-reward associations two days after training had higher microglomerular density. Further experiments indicated experience-dependent changes in neural circuitry: learning a colour-reward contingency with 10 colours (but not with two colours) results in changes to microglomerular density in the collar region of the mushroom bodies, and exposure to many different colours may do so as well. These results reveal the varying roles that visual experience, visual learning and foraging activity have on neural structure. Although our study does not provide a causal link between microglomerular density and performance, the observed positive correlations provide new insights for future studies into how neural structure may relate to inter-individual differences in learning and memory. © 2017 The Authors.
Visual training improves perceptual grouping based on basic stimulus features.
Kurylo, Daniel D; Waxman, Richard; Kidron, Rachel; Silverstein, Steven M
2017-10-01
Training on visual tasks improves performance on basic and higher order visual capacities. Such improvement has been linked to changes in connectivity among mediating neurons. We investigated whether training effects occur for perceptual grouping. It was hypothesized that repeated engagement of integration mechanisms would enhance grouping processes. Thirty-six participants underwent 15 sessions of training on a visual discrimination task that required perceptual grouping. Participants viewed 20 × 20 arrays of dots or Gabor patches and indicated whether the array appeared grouped as vertical or horizontal lines. Across trials stimuli became progressively disorganized, contingent upon successful discrimination. Four visual dimensions were examined, in which grouping was based on similarity in luminance, color, orientation, and motion. Psychophysical thresholds of grouping were assessed before and after training. Results indicate that performance in all four dimensions improved with training. Training on a control condition, which paralleled the discrimination task but without a grouping component, produced no improvement. In addition, training on only the luminance and orientation dimensions improved performance for those conditions as well as for grouping by color, on which training had not occurred. However, improvement from partial training did not generalize to motion. Results demonstrate that a training protocol emphasizing stimulus integration enhanced perceptual grouping. Results suggest that neural mechanisms mediating grouping by common luminance and/or orientation contribute to those mediating grouping by color but do not share resources for grouping by common motion. Results are consistent with theories of perceptual learning emphasizing plasticity in early visual processing regions.
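The progressive-disorganization procedure described above is a form of adaptive threshold estimation. A minimal sketch of a related, generic approach (a 2-down/1-up staircase, not the authors' exact contingency rule; the function names, step size, and psychometric function are all hypothetical) in Python:

```python
import random

def staircase_threshold(p_correct_at, start=1.0, step=0.05,
                        n_trials=200, rng=random):
    """Hypothetical 2-down/1-up staircase: the stimulus level drops
    after two consecutive correct responses and rises after each
    error, converging near the 70.7%-correct point of the observer's
    psychometric function p_correct_at(level)."""
    level, streak, reversals, last_dir = start, 0, [], 0
    for _ in range(n_trials):
        correct = rng.random() < p_correct_at(level)
        if correct:
            streak += 1
            if streak < 2:
                continue        # need two in a row before stepping down
            streak, direction = 0, -1
        else:
            streak, direction = 0, +1
        if last_dir and direction != last_dir:
            reversals.append(level)  # record level at each reversal
        last_dir = direction
        level = max(step, level + direction * step)
    # Threshold estimate: mean level over the last few reversals
    tail = reversals[-6:]
    return sum(tail) / max(len(tail), 1)
```

With a simulated observer whose accuracy rises linearly with stimulus level, the estimate settles near the level supporting roughly 71% correct.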
Visual Deficit in Albino Rats Following Fetal X Irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
VAN DER ELST, DIRK H.; PORTER, PAUL B.; SHARP, JOSEPH C.
1963-02-01
To investigate the effect of radiation on visual ability, five groups of rats on the 15th day of gestation received x irradiation in doses of 0, 50, 75, 100, or 150 r at 50 r/min. Two-thirds of the newborn rats died or were killed and eaten during the first postnatal week. The 75- and 50-r groups were lost entirely. The cannibalism occurred in all groups, so that its cause was uncertain. The remaining rats, which as fetuses had received 0, 100, and 150 r, were tested for visual discrimination in a water-flooded T-maze. All 3 groups discriminated a lighted escape ladder from the unlighted arm of the T with near-equal facility. Thereafter, as the light was dimmed progressively, performance declined in relation to dose. With the light turned off, but the bulb and ladder visible in ambient illumination, the 150-r group performed at chance, the 100-r group reliably better, and the control group better still. Thus, in the more precise task the irradiated animals failed. Since irradiation on the 15th day primarily damages the cortex, central blindness seems the most likely explanation. All animals had previously demonstrated their ability to solve the problem conceptually; hence a conclusion of visual deficiency seems justified. The similar performances of all groups during the easiest light discrimination test showed that the heavily irradiated and severely injured animals of the 150-r group were nonetheless able to learn readily. Finally, contrary to earlier studies in which irradiated rats were retarded in discriminating a light in a Skinner box, present tests reveal impairment neither in learning rate nor light discrimination.
Fornix and medial temporal lobe lesions lead to comparable deficits in complex visual perception.
Lech, Robert K; Koch, Benno; Schwarz, Michael; Suchan, Boris
2016-05-04
Recent research dealing with the structures of the medial temporal lobe (MTL) has shifted away from exclusively investigating memory-related processes and has repeatedly incorporated the investigation of complex visual perception. Several studies have demonstrated that higher level visual tasks can recruit structures like the hippocampus and perirhinal cortex in order to successfully perform complex visual discriminations, leading to a perceptual-mnemonic or representational view of the medial temporal lobe. The current study employed a complex visual discrimination paradigm in two patients suffering from brain lesions with differing locations and origin. Both patients, one with extensive medial temporal lobe lesions (VG) and one with a small lesion of the anterior fornix (HJK), were impaired in complex discriminations while showing otherwise mostly intact cognitive functions. The current data confirmed previous results while also extending the perceptual-mnemonic theory of the MTL to the main output structure of the hippocampus, the fornix. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Timing of target discrimination in human frontal eye fields.
O'Shea, Jacinta; Muggleton, Neil G; Cowey, Alan; Walsh, Vincent
2004-01-01
Frontal eye field (FEF) neurons discharge in response to behaviorally relevant stimuli that are potential targets for saccades. Distinct visual and motor processes have been dissociated in the FEF of macaque monkeys, but little is known about the visual processing capacity of FEF in humans. We used double-pulse transcranial magnetic stimulation [(d)TMS] to investigate the timing of target discrimination during visual conjunction search. We applied dual TMS pulses separated by 40 msec over the right FEF and vertex. These were applied in five timing conditions to sample separate time windows within the first 200 msec of visual processing. (d)TMS impaired search performance, reflected in reduced d' scores. This effect was limited to a time window between 40 and 80 msec after search array onset. These parameters correspond with single-cell activity in FEF that predicts monkeys' behavioral reports on hit, miss, false alarm, and correct rejection trials. Our findings demonstrate a crucial early role for human FEF in visual target discrimination that is independent of saccade programming.
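Search performance in this study is summarized with d', the standard signal-detection sensitivity index computed from hit and false-alarm rates. A minimal sketch of that computation (the numbers below are illustrative, not data from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, n=None):
    """d' = z(H) - z(FA). Rates of exactly 0 or 1 are clipped
    (a common correction) so the z-transform stays finite;
    n is the trial count per condition, if known."""
    def clip(p):
        eps = 0.5 / n if n else 1e-3
        return min(max(p, eps), 1 - eps)
    z = NormalDist().inv_cdf
    return z(clip(hit_rate)) - z(clip(fa_rate))

# Example: 80% hits, 20% false alarms over 50 trials each
print(round(d_prime(0.8, 0.2, n=50), 3))  # → 1.683
```

Lower d' under TMS relative to vertex stimulation would indicate reduced perceptual sensitivity rather than a shift in response bias.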
The role of Broca's area in speech perception: evidence from aphasia revisited.
Hickok, Gregory; Costanzo, Maddalena; Capasso, Rita; Miceli, Gabriele
2011-12-01
Motor theories of speech perception have been revitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that individuals with Broca's aphasia, and therefore inferred damage to Broca's area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca's area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables, one in which both CVs were presented auditorily, and the other in which one syllable was auditorily presented and the other visually presented as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d', and was unrelated to the degree of non-fluency in the patients' speech production. Performance on the auditory-visual task, however, was worse than, and not correlated with, the all-auditory task. The auditory-visual task was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured both by discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory-visual matching of phonological forms. Copyright © 2011 Elsevier Inc. All rights reserved.
Invariant recognition drives neural representations of action sequences
Poggio, Tomaso
2017-01-01
Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864
Do Visually Impaired People Develop Superior Smell Ability?
Majchrzak, Dorota; Eberhard, Julia; Kalaus, Barbara; Wagner, Karl-Heinz
2017-10-01
It is well known that visually impaired people perform better in orientation by sound than sighted individuals, but it is not clear whether this enhanced awareness also extends to other senses. Therefore, the aim of this study was to observe whether visually impaired subjects develop superior abilities in olfactory perception to compensate for their lack of vision. We investigated the odor perception of visually impaired individuals aged 7 to 89 (n = 99; 52 women, 47 men) and compared them with subjects of a control group aged 8 to 82 years (n = 100; 45 women, 55 men) without any visual impairment. The participants were evaluated by Sniffin' Sticks odor identification and discrimination test. Identification ability was assessed for 16 common odors presented in felt-tip pens. In the odor discrimination task, subjects had to determine which of three pens in 16 triplets had a different odor. The median number of correctly identified odorant pens in both groups was the same, 13 of the offered 16. In the discrimination test, there was also no significant difference observed. Gender did not influence results. Age-related changes were observed in both groups with olfactory perception decreasing after the age of 51. We could not confirm that visually impaired people were better in smell identification and discrimination ability than sighted individuals.
Cong, Lin-Juan; Wang, Ru-Jie; Yu, Cong; Zhang, Jun-Yun
2016-01-01
Visual perceptual learning is known to be specific to the trained retinal location, feature, and task. However, location and feature specificity can be eliminated by double-training or TPE training protocols, in which observers receive additional exposure to the transfer location or feature dimension via an irrelevant task besides the primary learning task. Here we tested whether these new training protocols could even make learning transfer across different tasks involving discrimination of basic visual features (e.g., orientation and contrast). Observers practiced a near-threshold orientation (or contrast) discrimination task. Following a TPE training protocol, they also received exposure to the transfer task via performing suprathreshold contrast (or orientation) discrimination in alternating blocks of trials in the same sessions. The results showed no evidence for significant learning transfer to the untrained near-threshold contrast (or orientation) discrimination task after discounting the pretest effects and the suprathreshold practice effects. These results thus do not support a hypothetical task-independent component in perceptual learning of basic visual features. They also set the boundary of the new training protocols in their capability to enable learning transfer.
Verhaeghe, Pieter-Paul; Van der Bracht, Koen; Van de Putte, Bart
2016-04-01
According to the social model of disability, physical 'impairments' become disabilities through exclusion in social relations. An obvious form of social exclusion might be discrimination, for instance on the rental housing market. Although discrimination has detrimental health effects, very few studies have examined discrimination against people with a visual impairment. We aim to study (1) the extent of discrimination against individuals with a visual impairment on the rental housing market and (2) differences in rates of discrimination between landowners and real estate agents. We conducted correspondence tests among 268 properties on the Belgian rental housing market. Using matched tests, we compared reactions by realtors and landowners to tenants with and tenants without a visual impairment. The results show that individuals with a visual impairment are substantially discriminated against in the rental housing market: at least one in three lessors discriminate against individuals with a visual impairment. We further discern differences in the propensity toward discrimination according to the type of lessor. Private landlords are at least twice as likely to discriminate against tenants with a visual impairment as real estate agents. At the same time, realtors still discriminate against one in five tenants with a visual impairment. This study shows the substantial discrimination against people with a visual impairment. Given the important consequences discrimination might have for physical and mental health, further research into this topic is needed. Copyright © 2016 Elsevier Inc. All rights reserved.
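Matched correspondence tests of this kind are typically summarized as a net discrimination rate: the share of tests in which only the control applicant received a positive response, minus the share in which only the applicant with an impairment did. A hedged sketch under assumed, illustrative counts (not the study's actual data):

```python
def net_discrimination(trials):
    """trials: list of (control_invited, impaired_invited) booleans
    from matched correspondence tests. Returns the net discrimination
    rate: share of tests favoring the control applicant minus the
    share favoring the applicant with an impairment."""
    favor_control = sum(c and not i for c, i in trials)
    favor_impaired = sum(i and not c for c, i in trials)
    return (favor_control - favor_impaired) / len(trials)

# Hypothetical outcomes for 268 tests: 100 favored the control,
# 10 favored the impaired tenant, 158 treated both the same
trials = ([(True, False)] * 100 + [(False, True)] * 10
          + [(True, True)] * 158)
print(round(net_discrimination(trials), 3))  # → 0.336
```

Subtracting tests that favor the impaired applicant guards against counting random variation in lessor responsiveness as discrimination.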
Image jitter enhances visual performance when spatial resolution is impaired.
Watson, Lynne M; Strang, Niall C; Scobie, Fraser; Love, Gordon D; Seidel, Dirk; Manahilov, Velitchko
2012-09-06
Visibility of low-spatial frequency stimuli improves when their contrast is modulated at 5 to 10 Hz compared with stationary stimuli. Therefore, temporal modulations of visual objects could enhance the performance of low vision patients who primarily perceive images of low-spatial frequency content. We investigated the effect of retinal-image jitter on word recognition speed and facial emotion recognition in subjects with central visual impairment. Word recognition speed and accuracy of facial emotion discrimination were measured in volunteers with AMD (age-related macular degeneration) under stationary and jittering conditions. Computer-driven and optoelectronic approaches were used to induce retinal-image jitter with duration of 100 or 166 ms and amplitude within the range of 0.5 to 2.6° visual angle. Word recognition speed was also measured for participants with simulated (Bangerter filters) visual impairment. Text jittering markedly enhanced word recognition speed for people with severe visual loss (101 ± 25%), while for those with moderate visual impairment, this effect was weaker (19 ± 9%). The ability of low vision patients to discriminate the facial emotions of jittering images improved by a factor of 2. A prototype of optoelectronic jitter goggles produced similar improvement in facial emotion discrimination. Word recognition speed in participants with simulated visual impairment was enhanced for interjitter intervals over 100 ms and reduced for shorter intervals. Results suggest that retinal-image jitter with optimal frequency and amplitude is an effective strategy for enhancing visual information processing in the absence of spatial detail. These findings will enable the development of novel tools to improve the quality of life of low vision patients.
Simple and conditional visual discrimination with wheel running as reinforcement in rats.
Iversen, I H
1998-09-01
Three experiments explored whether access to wheel running is sufficient as reinforcement to establish and maintain simple and conditional visual discriminations in nondeprived rats. In Experiment 1, 2 rats learned to press a lit key to produce access to running; responding was virtually absent when the key was dark, but latencies to respond were longer than for customary food and water reinforcers. Increases in the intertrial interval did not improve the discrimination performance. In Experiment 2, 3 rats acquired a go-left/go-right discrimination with a trial-initiating response and reached an accuracy that exceeded 80%; when two keys showed a steady light, pressing the left key produced access to running whereas pressing the right key produced access to running when both keys showed blinking light. Latencies to respond to the lights shortened when the trial-initiation response was introduced and became much shorter than in Experiment 1. In Experiment 3, 1 rat acquired a conditional discrimination task (matching to sample) with steady versus blinking lights at an accuracy exceeding 80%. A trial-initiation response allowed self-paced trials as in Experiment 2. When the rat was exposed to the task for 19 successive 24-hr periods with access to food and water, the discrimination performance settled in a typical circadian pattern and peak accuracy exceeded 90%. When the trial-initiation response was under extinction, without access to running, the circadian activity pattern determined the time of spontaneous recovery. The experiments demonstrate that wheel-running reinforcement can be used to establish and maintain simple and conditional visual discriminations in nondeprived rats.
Pavan, Andrea; Boyce, Matthew; Ghin, Filippo
2016-10-01
Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global moving patterns presented in the fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in terms of distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field, rather than response speed. © The Author(s) 2016.
The surprisingly high human efficiency at learning to recognize faces
Peterson, Matthew F.; Abbey, Craig K.; Eckstein, Miguel P.
2009-01-01
We investigated the ability of humans to optimize face recognition performance through rapid learning of individual relevant features. We created artificial faces with discriminating visual information heavily concentrated in single features (nose, eyes, chin or mouth). In each of 2500 learning blocks a feature was randomly selected and retained over the course of four trials, during which observers identified randomly sampled, noisy face images. Observers learned the discriminating feature through indirect feedback, leading to large performance gains. Performance was compared to a learning Bayesian ideal observer, resulting in unexpectedly high learning compared to previous studies with simpler stimuli. We explore various explanations and conclude that the higher learning measured with faces cannot be driven by adaptive eye movement strategies but can be mostly accounted for by suboptimalities in human face discrimination when observers are uncertain about the discriminating feature. We show that an initial bias of humans to use specific features to perform the task even though they are informed that each of four features is equally likely to be the discriminatory feature would lead to seemingly supra-optimal learning. We also examine the possibility of inefficient human integration of visual information across the spatially distributed facial features. Together, the results suggest that humans can show large performance improvement effects in discriminating faces as they learn to identify the feature containing the discriminatory information. PMID:19000918
Do rats use shape to solve “shape discriminations”?
Minini, Loredana; Jeffery, Kathryn J.
2006-01-01
Visual discrimination tasks are increasingly used to explore the neurobiology of vision in rodents, but it remains unclear how the animals solve these tasks: Do they process shapes holistically, or by using low-level features such as luminance and angle acuity? In the present study we found that when discriminating triangles from squares, rats did not use shape but instead relied on local luminance differences in the lower hemifield. A second experiment prevented this strategy by using stimuli—squares and rectangles—that varied in size and location, and for which the only constant predictor of reward was aspect ratio (ratio of height to width: a simple descriptor of “shape”). Rats eventually learned to use aspect ratio but only when no other discriminand was available, and performance remained very poor even at asymptote. These results suggest that although rats can process both dimensions simultaneously, they do not naturally solve shape discrimination tasks this way. This may reflect either a failure to visually process global shape information or a failure to discover shape as the discriminative stimulus in a simultaneous discrimination. Either way, our results suggest that simultaneous shape discrimination is not a good task for studies of visual perception in rodents. PMID:16705141
Distraction and Facilitation--Two Faces of the Same Coin?
ERIC Educational Resources Information Center
Wetzel, Nicole; Widmann, Andreas; Schroger, Erich
2012-01-01
Unexpected and task-irrelevant sounds can capture our attention and may cause distraction effects reflected by impaired performance in a primary task unrelated to the perturbing sound. The present auditory-visual oddball study examines the effect of the informational content of a sound on the performance in a visual discrimination task. The…
Exploring What’s Missing: What Do Target Absent Trials Reveal About Autism Search Superiority?
Keehn, Brandon; Joseph, Robert M.
2016-01-01
We used eye-tracking to investigate the roles of enhanced discrimination and peripheral selection in superior visual search in autism spectrum disorder (ASD). Children with ASD were faster at visual search than their typically developing peers. However, group differences in performance and eye-movements did not vary with the level of difficulty of discrimination or selection. Rather, consistent with prior ASD research, group differences were mainly the effect of faster performance on target-absent trials. Eye-tracking revealed a lack of left-visual-field search asymmetry in ASD, which may confer an additional advantage when the target is absent. Lastly, ASD symptomatology was positively associated with search superiority, the mechanisms of which may shed light on the atypical brain organization that underlies social-communicative impairment in ASD. PMID:26762114
Kim, Jahae; Cho, Sang-Geon; Song, Minchul; Kang, Sae-Ryung; Kwon, Seong Young; Choi, Kang-Ho; Choi, Seong-Min; Kim, Byeong-Chae; Song, Ho-Chun
2016-01-01
To compare diagnostic performance and confidence of a standard visual reading and combined 3-dimensional stereotactic surface projection (3D-SSP) results to discriminate between Alzheimer disease (AD)/mild cognitive impairment (MCI), dementia with Lewy bodies (DLB), and frontotemporal dementia (FTD). [18F]fluorodeoxyglucose (FDG) PET brain images were obtained from 120 patients (64 AD/MCI, 38 DLB, and 18 FTD) whose diagnoses were clinically confirmed over 2 years of follow-up. Three nuclear medicine physicians performed the diagnosis and rated diagnostic confidence twice; once by standard visual methods, and once by adding 3D-SSP. Diagnostic performance and confidence were compared between the 2 methods. 3D-SSP showed higher sensitivity, specificity, accuracy, positive, and negative predictive values to discriminate different types of dementia compared with the visual method alone, except for AD/MCI specificity and FTD sensitivity. Correction of misdiagnosis after adding 3D-SSP images was greatest for AD/MCI (56%), followed by DLB (13%) and FTD (11%). Diagnostic confidence also increased in DLB (visual: 3.2; 3D-SSP: 4.1; P < 0.001), followed by AD/MCI (visual: 3.1; 3D-SSP: 3.8; P = 0.002) and FTD (visual: 3.5; 3D-SSP: 4.2; P = 0.022). Overall, 154/360 (43%) cases had a corrected misdiagnosis or improved diagnostic confidence for the correct diagnosis. The addition of 3D-SSP images to visual analysis helped to discriminate different types of dementia in FDG PET scans, by correcting misdiagnoses and enhancing diagnostic confidence in the correct diagnosis. Improvement of diagnostic accuracy and confidence by 3D-SSP images might help to determine the cause of dementia and appropriate treatment. PMID:27930593
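The diagnostic measures reported above (sensitivity, specificity, and predictive values) all derive from a one-vs-rest confusion table per dementia type. A minimal sketch with hypothetical counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, and accuracy from a
    one-vs-rest confusion table (tp/fp/fn/tn are raw counts)."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical one-vs-rest table for DLB (38 of 120 patients)
m = diagnostic_metrics(tp=32, fp=8, fn=6, tn=74)
print({k: round(v, 3) for k, v in m.items()})
```

Unlike accuracy, sensitivity and specificity are unaffected by how common each dementia type is in the sample, which is why abstracts of this kind report them separately.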
Acquisition of a visual discrimination and reversal learning task by Labrador retrievers.
Lazarowski, Lucia; Foster, Melanie L; Gruen, Margaret E; Sherman, Barbara L; Case, Beth C; Fish, Richard E; Milgram, Norton W; Dorman, David C
2014-05-01
Optimal cognitive ability is likely important for military working dogs (MWD) trained to detect explosives. An assessment of a dog's ability to rapidly learn discriminations might be useful in the MWD selection process. In this study, visual discrimination and reversal tasks were used to assess cognitive performance in Labrador retrievers selected for an explosives detection program using a modified version of the Toronto General Testing Apparatus (TGTA), a system developed for assessing performance in a battery of neuropsychological tests in canines. The results of the current study revealed that, as previously found with beagles tested using the TGTA, Labrador retrievers (N = 16) readily acquired both tasks and learned the discrimination task significantly faster than the reversal task. The present study confirmed that the modified TGTA system is suitable for cognitive evaluations in Labrador retriever MWDs and can be used to further explore effects of sex, phenotype, age, and other factors in relation to canine cognition and learning, and may provide an additional screening tool for MWD selection.
Visual perceptual load induces inattentional deafness.
Macdonald, James S P; Lavie, Nilli
2011-08-01
In this article, we establish a new phenomenon of "inattentional deafness" and highlight the level of load on visual attention as a critical determinant of this phenomenon. In three experiments, we modified an inattentional blindness paradigm to assess inattentional deafness. Participants made either a low- or high-load visual discrimination concerning a cross shape (respectively, a discrimination of line color or of line length with a subtle length difference). A brief pure tone was presented simultaneously with the visual task display on a final trial. Failures to notice the presence of this tone (i.e., inattentional deafness) reached a rate of 79% in the high-visual-load condition, significantly more than in the low-load condition. These findings establish the phenomenon of inattentional deafness under visual load, thereby extending the load theory of attention (e.g., Lavie, Journal of Experimental Psychology: Human Perception and Performance, 25, 596-616, 1995) to address the cross-modal effects of visual perceptual load.
Attentional Capture of Objects Referred to by Spoken Language
ERIC Educational Resources Information Center
Salverda, Anne Pier; Altmann, Gerry T. M.
2011-01-01
Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…
Global Image Dissimilarity in Macaque Inferotemporal Cortex Predicts Human Visual Search Efficiency
Sripati, Arun P.; Olson, Carl R.
2010-01-01
Finding a target in a visual scene can be easy or difficult depending on the nature of the distractors. Research in humans has suggested that search is more difficult the more similar the target and distractors are to each other. However, it has not yielded an objective definition of similarity. We hypothesized that visual search performance depends on similarity as determined by the degree to which two images elicit overlapping patterns of neuronal activity in visual cortex. To test this idea, we recorded from neurons in monkey inferotemporal cortex (IT) and assessed visual search performance in humans using pairs of images formed from the same local features in different global arrangements. The ability of IT neurons to discriminate between two images was strongly predictive of the ability of humans to discriminate between them during visual search, accounting overall for 90% of the variance in human performance. A simple physical measure of global similarity – the degree of overlap between the coarse footprints of a pair of images – largely explains both the neuronal and the behavioral results. To explain the relation between population activity and search behavior, we propose a model in which the efficiency of global oddball search depends on contrast-enhancing lateral interactions in high-order visual cortex. PMID:20107054
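The "90% of the variance" figure corresponds to the squared Pearson correlation between neuronal discriminability and human search performance across image pairs. A minimal sketch of that computation, with invented paired values (not the study's data):

```python
# Squared Pearson correlation (variance explained) between a neural
# predictor and a behavioral measure. Paired values are invented.

def r_squared(x, y):
    """r^2 of a simple linear fit of y on x (squared Pearson correlation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# Hypothetical image pairs: IT discriminability vs. human search rate.
neural = [0.2, 0.5, 0.9, 1.4, 1.8, 2.3]
behavior = [0.25, 0.6, 0.8, 1.5, 1.7, 2.4]
print(round(r_squared(neural, behavior), 3))
```

An r² near 0.9, as reported, would mean the rank and spacing of the neural values track the behavioral values almost linearly across pairs.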
Visual discrimination in an orangutan (Pongo pygmaeus): measuring visual preference.
Hanazuka, Yuki; Kurotori, Hidetoshi; Shimizu, Mika; Midorikawa, Akira
2012-04-01
Although previous studies have confirmed that trained orangutans visually discriminate between mammals and artificial objects, whether orangutans without operant conditioning can discriminate remains unknown. The visual discrimination ability in an orangutan (Pongo pygmaeus) with no experience in operant learning was examined using measures of visual preference. Sixteen color photographs of inanimate objects and of mammals with four legs were randomly presented to an orangutan. The results showed that the mean looking time at photographs of mammals with four legs was longer than that for inanimate objects, suggesting that the orangutan discriminated mammals with four legs from inanimate objects. The results implied that orangutans who have not experienced operant conditioning may possess the ability to discriminate visually.
Color-dependent learning in restrained Africanized honey bees.
Jernigan, C M; Roubik, D W; Wcislo, W T; Riveros, A J
2014-02-01
Associative color learning has been demonstrated to be very poor using restrained European honey bees unless the antennae are amputated. Consequently, our understanding of proximate mechanisms in visual information processing is handicapped. Here we test learning performance of Africanized honey bees under restrained conditions with visual and olfactory stimulation using the proboscis extension response (PER) protocol. Restrained individuals were trained to learn an association between a color stimulus and a sugar-water reward. We evaluated performance for 'absolute' learning (learned association between a stimulus and a reward) and 'discriminant' learning (discrimination between two stimuli). Restrained Africanized honey bees (AHBs) readily learned the association of color stimulus for both blue and green LED stimuli in absolute and discriminatory learning tasks within seven presentations, but not with violet as the rewarded color. Additionally, 24-h memory improved considerably during the discrimination task, compared with absolute association (15-55%). We found that antennal amputation was unnecessary and reduced performance in AHBs. Thus color learning can now be studied using the PER protocol with intact AHBs. This finding opens the way towards investigating visual and multimodal learning with application of neural techniques commonly used in restrained honey bees.
Lomber, S G; Payne, B R; Cornwell, P
1996-01-01
Extrastriate visual cortex of the ventral-posterior suprasylvian gyrus (vPS cortex) of freely behaving cats was reversibly deactivated by cooling to determine its role in performance on a battery of simple or masked two-dimensional pattern discriminations and three-dimensional object discriminations. Deactivation of vPS cortex by cooling profoundly impaired the ability of the cats to recall the difference between all previously learned pattern and object discriminations. However, the cats' ability to learn or relearn pattern and object discriminations while vPS was deactivated depended upon the nature of the pattern or object and the cats' prior level of exposure to them. During cooling of vPS cortex, the cats could neither learn the novel object discriminations nor relearn a highly familiar masked or partially occluded pattern discrimination, although they could relearn both the highly familiar object and simple pattern discriminations. These cooling-induced deficits resemble those induced by cooling of the topologically equivalent inferotemporal cortex of monkeys and provide evidence that the equivalent regions contribute to visual processing in similar ways. PMID:8643686
Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru
2014-01-01
Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403
Jerger, Susan; Damian, Markus F; McAlpine, Rachel P; Abdi, Hervé
2017-03-01
Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes, yet we have little evidence about the role of early auditory experience and visual speech in the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets); for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to a non-intact onset/rhyme in the auditory track (/-B/aa or /-B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/-B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same, as opposed to different, responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /-B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz, as opposed to az, responses in the audiovisual than auditory mode.
Performance in the audiovisual mode showed more same responses for the intact vs. non-intact different pairs (e.g., Baa:/-B/aa) and more intact onset responses for nonword repetition (Baz for /-B/az). Thus visual speech altered both discrimination and identification in the CHL, to a large extent for the /B/ onsets but only minimally for the /G/ onsets. The CHL identified the stimuli similarly to the CNH but did not discriminate the stimuli similarly. A bias-free measure of the children's discrimination skills (i.e., d' analysis) revealed that the CHL had greater difficulty discriminating intact from non-intact speech in both modes. As the degree of HL worsened, the ability to discriminate the intact vs. non-intact onsets in the auditory mode worsened. Discrimination ability in CHL significantly predicted their identification of the onsets, even after variation due to the other variables was controlled. These results clearly established that visual speech can fill in non-intact auditory speech; this effect, in turn, made the non-intact onsets more difficult to discriminate from intact speech and more likely to be perceived as intact. Such results 1) demonstrate the value of visual speech at multiple levels of linguistic processing and 2) support intervention programs that view visual speech as a powerful asset for developing spoken language in CHL. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
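The d' analysis mentioned above separates sensitivity from response bias by z-transforming hit and false-alarm rates. A minimal sketch, using a standard log-linear correction for extreme rates and invented response counts (not the study's data):

```python
# Bias-free sensitivity (d') from hit and false-alarm counts.
# Counts are illustrative placeholders, not the study's data.
from statistics import NormalDist

def d_prime(hits, misses, fas, crs):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction so rates of exactly 0 or 1 stay finite."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (fas + 0.5) / (fas + crs + 1.0)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

# Hypothetical counts: 'different' responses to non-intact vs. intact pairs.
print(round(d_prime(hits=38, misses=10, fas=12, crs=36), 2))
```

Because d' is bias-free, two groups can produce similar raw response proportions (as in identification here) while still differing in underlying discrimination sensitivity.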
Enhanced attentional gain as a mechanism for generalized perceptual learning in human visual cortex.
Byers, Anna; Serences, John T
2014-09-01
Learning to better discriminate a specific visual feature (i.e., a specific orientation in a specific region of space) has been associated with plasticity in early visual areas (sensory modulation) and with improvements in the transmission of sensory information from early visual areas to downstream sensorimotor and decision regions (enhanced readout). However, in many real-world scenarios that require perceptual expertise, observers need to efficiently process numerous exemplars from a broad stimulus class as opposed to just a single stimulus feature. Some previous data suggest that perceptual learning leads to highly specific neural modulations that support the discrimination of specific trained features. However, the extent to which perceptual learning acts to improve the discriminability of a broad class of stimuli via the modulation of sensory responses in human visual cortex remains largely unknown. Here, we used functional MRI and a multivariate analysis method to reconstruct orientation-selective response profiles based on activation patterns in the early visual cortex before and after subjects learned to discriminate small offsets in a set of grating stimuli that were rendered in one of nine possible orientations. Behavioral performance improved across 10 training sessions, and there was a training-related increase in the amplitude of orientation-selective response profiles in V1, V2, and V3 when orientation was task relevant compared with when it was task irrelevant. These results suggest that generalized perceptual learning can lead to modified responses in the early visual cortex in a manner that is suitable for supporting improved discriminability of stimuli drawn from a large set of exemplars. Copyright © 2014 the American Physiological Society.
Behavioral evaluation of visual function of rats using a visual discrimination apparatus.
Thomas, Biju B; Samant, Deedar M; Seiler, Magdalene J; Aramant, Robert B; Sheikholeslami, Sharzad; Zhang, Kevin; Chen, Zhenhai; Sadda, SriniVas R
2007-05-15
A visual discrimination apparatus was developed to evaluate the visual sensitivity of normal pigmented rats (n=13) and S334ter-line-3 retinal degenerate (RD) rats (n=15). The apparatus is a modified Y maze consisting of two chambers leading to the rats' home cage. Rats were trained to find a one-way exit door leading into their home cage, based on distinguishing between two different visual alternatives (either a dark background or black and white stripes at varying luminance levels) which were randomly displayed on the back of each chamber. Within 2 weeks of training, all rats were able to distinguish between these two visual patterns. The discrimination threshold of normal pigmented rats was a luminance level of -5.37 ± 0.05 log cd/m², whereas the threshold level of 100-day-old RD rats was -1.14 ± 0.09 log cd/m² with considerable variability in performance. When tested at a later age (about 150 days), the threshold level of RD rats was significantly increased (-0.82 ± 0.09 log cd/m², p < 0.03, paired t-test). This apparatus could be useful to train rats at a very early age to distinguish between two different visual stimuli and may be effective for visual functional evaluations following therapeutic interventions.
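Luminance discrimination thresholds like these are commonly estimated with an adaptive staircase. Whether this apparatus used one is not stated, so the following is only a generic sketch: a 2-down/1-up staircase run against a simulated observer, with every parameter invented. The 2-down/1-up rule converges on the stimulus level yielding roughly 70.7% correct.

```python
# Generic 2-down/1-up staircase estimating a discrimination threshold
# from a simulated observer. All parameters are invented for illustration.
import random

def simulated_observer(level, threshold=-2.0, slope=2.0, rng=random):
    """P(correct) rises from ~0.5 (chance) toward 1.0 above `threshold`."""
    p = 0.5 + 0.5 / (1.0 + 10 ** (-slope * (level - threshold)))
    return rng.random() < p

def staircase(start=0.0, step=0.25, trials=200, rng=None):
    """Return the mean of the last reversal levels (the threshold estimate)."""
    rng = rng or random.Random(0)
    level, correct_run, reversals, last_dir = start, 0, [], None
    for _ in range(trials):
        if simulated_observer(level, rng=rng):
            correct_run += 1
            if correct_run == 2:            # 2 correct in a row -> harder (dimmer)
                correct_run = 0
                if last_dir == +1:
                    reversals.append(level)
                level -= step
                last_dir = -1
        else:                               # 1 error -> easier (brighter)
            correct_run = 0
            if last_dir == -1:
                reversals.append(level)
            level += step
            last_dir = +1
    return sum(reversals[-6:]) / len(reversals[-6:])

print(round(staircase(), 2))  # settles near the simulated ~70.7%-correct level
```

Averaging only the late reversals discards the initial descent from the easy starting level, which is why the estimate tracks the convergence region rather than the starting point.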
Humans do not have direct access to retinal flow during walking
Souman, Jan L.; Freeman, Tom C.A.; Eikmeier, Verena; Ernst, Marc O.
2013-01-01
Perceived visual speed has been reported to be reduced during walking. This reduction has been attributed to a partial subtraction of walking speed from visual speed (Durgin & Gigone, 2007; Durgin, Gigone, & Scott, 2005). We tested whether observers still have access to the retinal flow before subtraction takes place. Observers performed a 2IFC visual speed discrimination task while walking on a treadmill. In one condition, walking speed was identical in the two intervals, while in a second condition walking speed differed between intervals. If observers have access to the retinal flow before subtraction, any changes in walking speed across intervals should not affect their ability to discriminate retinal flow speed. Contrary to this “direct-access hypothesis”, we found that observers were worse at discrimination when walking speed differed between intervals. The results therefore suggest that observers do not have access to retinal flow before subtraction. We also found that the amount of subtraction depended on the visual speed presented, suggesting that the interaction between the processing of visual input and of self-motion is more complex than previously proposed. PMID:20884509
Dynamic crossmodal links revealed by steady-state responses in auditory-visual divided attention.
de Jong, Ritske; Toffanin, Paolo; Harbers, Marten
2010-01-01
Frequency tagging has been often used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were smaller, but still significant, in detection than in discrimination. The auditory steady-state response (SSR) showed no effects of attention at frontocentral locations, but did so at occipital locations where it was evident only when attention was divided between audition and vision. Similarly, the visual SSR at occipital locations was substantially enhanced when attention was divided across modalities. Both effects were equally present in detection and discrimination. We suggest that both effects reflect a common cause: An attention-dependent influence of auditory information processing on early cortical stages of visual information processing, mediated by enhanced effective connectivity between the two modalities under conditions of divided attention. Copyright (c) 2009 Elsevier B.V. All rights reserved.
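Frequency tagging works by modulating each sensory stream at a distinct rate and reading out the response amplitude at each tag frequency. A minimal sketch of that readout, using a single-frequency discrete Fourier transform on a simulated signal; the sampling rate, tag frequencies, and amplitudes are invented, not the study's EEG parameters:

```python
# Steady-state response amplitude at a tagging frequency, via a
# single-frequency DFT. Signal parameters are invented for illustration.
import math

def dft_amplitude(signal, freq_hz, fs_hz):
    """Amplitude of the sinusoidal component at freq_hz in `signal`."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq_hz * i / fs_hz)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * i / fs_hz)
             for i, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

fs = 500.0                             # sampling rate, Hz
t = [i / fs for i in range(2000)]      # 4 s of simulated recording
# Hypothetical tags: auditory stream at 40 Hz, visual stream at 15 Hz.
eeg = [1.0 * math.sin(2 * math.pi * 40 * x)
       + 0.5 * math.sin(2 * math.pi * 15 * x) for x in t]

print(round(dft_amplitude(eeg, 40, fs), 2))  # -> 1.0 (auditory tag)
print(round(dft_amplitude(eeg, 15, fs), 2))  # -> 0.5 (visual tag)
```

Because each modality is tagged at its own frequency, attention effects on the two streams can be separated even though both responses are recorded in the same EEG channels.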
Cong, Lin-Juan; Wang, Ru-Jie; Yu, Cong; Zhang, Jun-Yun
2016-01-01
Visual perceptual learning is known to be specific to the trained retinal location, feature, and task. However, location and feature specificity can be eliminated by double-training or TPE training protocols, in which observers receive additional exposure to the transfer location or feature dimension via an irrelevant task besides the primary learning task. Here we tested whether these new training protocols could even make learning transfer across different tasks involving discrimination of basic visual features (e.g., orientation and contrast). Observers practiced a near-threshold orientation (or contrast) discrimination task. Following a TPE training protocol, they also received exposure to the transfer task via performing suprathreshold contrast (or orientation) discrimination in alternating blocks of trials in the same sessions. The results showed no evidence for significant learning transfer to the untrained near-threshold contrast (or orientation) discrimination task after discounting the pretest effects and the suprathreshold practice effects. These results thus do not support a hypothetical task-independent component in perceptual learning of basic visual features. They also set the boundary of the new training protocols in their capability to enable learning transfer. PMID:26873777
Gennari, Silvia P; Millman, Rebecca E; Hymers, Mark; Mattys, Sven L
2018-06-12
Perceiving speech while performing another task is a common challenge in everyday life. How the brain controls resource allocation during speech perception remains poorly understood. Using functional magnetic resonance imaging (fMRI), we investigated the effect of cognitive load on speech perception by examining brain responses of participants performing a phoneme discrimination task and a visual working memory task simultaneously. The visual task involved holding either a single meaningless image in working memory (low cognitive load) or four different images (high cognitive load). Performing the speech task under high load, compared to low load, resulted in decreased activity in pSTG/pMTG and increased activity in visual occipital cortex and two regions known to contribute to visual attention regulation: the superior parietal lobule (SPL) and the paracingulate and anterior cingulate gyrus (PaCG, ACG). Critically, activity in PaCG/ACG was correlated with performance in the visual task and with activity in pSTG/pMTG: Increased activity in PaCG/ACG was observed for individuals with poorer visual performance and with decreased activity in pSTG/pMTG. Moreover, activity in a pSTG/pMTG seed region showed psychophysiological interactions with areas of the PaCG/ACG, with stronger interaction in the high-load than the low-load condition. These findings show that the acoustic analysis of speech is affected by the demands of a concurrent visual task and that the PaCG/ACG plays a role in allocating cognitive resources to concurrent auditory and visual information. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Roth, Zvi N
2016-01-01
Neural responses in visual cortex are governed by a topographic mapping from retinal locations to cortical responses. Moreover, at the voxel population level early visual cortex (EVC) activity enables accurate decoding of stimuli locations. However, in many cases information enabling one to discriminate between locations (i.e., discriminative information) may be less relevant than information regarding the relative location of two objects (i.e., relative information). For example, when planning to grab a cup, determining whether the cup is located at the same retinal location as the hand is hardly relevant, whereas the location of the cup relative to the hand is crucial for performing the action. We have previously used multivariate pattern analysis techniques to measure discriminative location information, and found the highest levels in EVC, in line with other studies. Here we show, using representational similarity analysis, that availability of discriminative information in fMRI activation patterns does not entail availability of relative information. Specifically, we find that relative location information can be reliably extracted from activity patterns in posterior intraparietal sulcus (pIPS), but not from EVC, where we find the spatial representation to be warped. We further show that this variability in relative information levels between regions can be explained by a computational model based on an array of receptive fields. Moreover, when the model's receptive fields are extended to include inhibitory surround regions, the model can account for the spatial warping in EVC. These results demonstrate how size and shape properties of receptive fields in human visual cortex contribute to the transformation of discriminative spatial representations into relative spatial representations along the visual stream.
Turchi, Janita; Devan, Bryan; Yin, Pingbo; Sigrist, Emmalynn; Mishkin, Mortimer
2010-01-01
The monkey's ability to learn a set of visual discriminations presented concurrently just once a day on successive days (24-hr ITI task) is based on habit formation, which is known to rely on a visuo-striatal circuit and to be independent of visuo-rhinal circuits that support one-trial memory. Consistent with this dissociation, we recently reported that performance on the 24-hr ITI task is impaired by a striatal-function blocking agent, the dopaminergic antagonist haloperidol, and not by a rhinal-function blocking agent, the muscarinic cholinergic antagonist scopolamine. In the present study, monkeys were trained on a short-ITI form of concurrent visual discrimination learning, one in which a set of stimulus pairs is repeated not only across daily sessions but also several times within each session (in this case, at about 4-min ITIs). Asymptotic discrimination learning rates in the non-drug condition were reduced by half, from ~11 trials/pair on the 24-hr ITI task to ~5 trials/pair on the 4-min ITI task, and this faster learning was impaired by systemic injections of either haloperidol or scopolamine. The results suggest that in the version of concurrent discrimination learning used here, the short ITIs within a session recruit both visuo-rhinal and visuo-striatal circuits, and that the final performance level is driven by both cognitive memory and habit formation working in concert. PMID:20144631
Neural networks for Braille reading by the blind.
Sadato, N; Pascual-Leone, A; Grafman, J; Deiber, M P; Ibañez, V; Hallett, M
1998-07-01
To explore the neural networks used for Braille reading, we measured regional cerebral blood flow with PET during tactile tasks performed both by Braille readers blinded early in life and by sighted subjects. Eight proficient Braille readers were studied during Braille reading with both right and left index fingers. Eight-character, non-contracted Braille-letter strings were used, and subjects were asked to discriminate between words and non-words. To compare the behaviour of the brain of the blind and the sighted directly, non-Braille tactile tasks were performed by six different blind subjects and 10 sighted control subjects using the right index finger. The tasks included a non-discrimination task and three discrimination tasks (angle, width and character). Irrespective of reading finger (right or left), Braille reading by the blind activated the inferior parietal lobule, primary visual cortex, superior occipital gyri, fusiform gyri, ventral premotor area, superior parietal lobule, cerebellum and primary sensorimotor area bilaterally, as well as the right dorsal premotor cortex, right middle occipital gyrus and right prefrontal area. During non-Braille discrimination tasks, in blind subjects, the ventral occipital regions, including the primary visual cortex and fusiform gyri bilaterally, were activated while the secondary somatosensory area was deactivated. The reverse pattern was found in sighted subjects, where the secondary somatosensory area was activated while the ventral occipital regions were suppressed. These findings suggest that the tactile processing pathways usually linked in the secondary somatosensory area are rerouted in blind subjects to the ventral occipital cortical regions originally reserved for visual shape discrimination.
de la Rosa, Stephan; Ekramnia, Mina; Bülthoff, Heinrich H.
2016-01-01
The ability to discriminate between different actions is essential for action recognition and social interactions. Surprisingly, previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree the visual processes engaged by direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target visual processes specific to action and direction discrimination. In separate conditions, participants visually adapted to forward- and backward-moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action, but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms. PMID:26941633
Comparative effect of lens care solutions on blink rate, ocular discomfort and visual performance.
Yang, Shun-nan; Tai, Yu-chi; Sheedy, James E; Kinoshita, Beth; Lampa, Matthew; Kern, Jami R
2012-09-01
To help maintain clear vision and ocular surface health, eye blinks occur to distribute natural tears over the ocular surface, especially the corneal surface. Contact lens wearers may suffer from poor vision and dry eye symptoms due to difficulty in lens surface wetting and reduced tear production. Sustained viewing of a computer screen reduces eye blinks and exacerbates such difficulties. The present study evaluated the wetting effect of lens care solutions (LCSs) on blink rate, dry eye symptoms, and visual performance. Sixty-five adult habitual soft contact lens wearers were recruited to adapt to different LCSs (Opti-Free, ReNu, and ClearCare) in a cross-over design. Blink rate in pictorial viewing and reading (measured with an eyetracker), dry eye symptoms (measured with the Ocular Surface Disease Index questionnaire), and visual discrimination (identifying a tumbling E) immediately before and after eye blinks were measured after 2 weeks of adaptation to each LCS. Repeated-measures ANOVA and mixed-model ANCOVA were conducted to evaluate the effects of LCS on blink rate, symptom score, and discrimination accuracy. Opti-Free resulted in lower dry eye symptoms (p = 0.018) than ClearCare, and a lower spontaneous blink rate (measured in picture viewing) than ClearCare (p = 0.014) and ReNu (p = 0.041). In reading, blink rate was higher for ClearCare compared to ReNu (p = 0.026) and control (p = 0.024). Visual discrimination time was longer for the control (daily disposable lens) than for Opti-Free (p = 0.007), ReNu (p = 0.009), and ClearCare (p = 0.013) immediately before the blink. LCSs differentially affected blink rate, subjective dry eye symptoms, and visual discrimination speed. Solutions with wetting agents led to significantly fewer eye blinks while affording better ocular comfort for contact lens wearers, compared with the solution without them. LCSs with wetting agents also resulted in better visual performance compared to wearing daily disposable contact lenses.
These effects are presumably due to improved tear film quality. © 2012 The College of Optometrists.
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2008-09-16
Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.
Comparing visual search and eye movements in bilinguals and monolinguals
Hout, Michael C.; Walenchok, Stephen C.; Azuma, Tamiko; Goldinger, Stephen D.
2017-01-01
Recent research has suggested that bilinguals show advantages over monolinguals in visual search tasks, although these findings have been derived from global behavioral measures of accuracy and response times. In the present study we sought to explore the bilingual advantage by using more sensitive eyetracking techniques across three visual search experiments. These spatially and temporally fine-grained measures allowed us to carefully investigate any nuanced attentional differences between bilinguals and monolinguals. Bilingual and monolingual participants completed visual search tasks that varied in difficulty. The experiments required participants to make careful discriminations in order to detect target Landolt Cs among similar distractors. In Experiment 1, participants performed both feature and conjunction search. In Experiments 2 and 3, participants performed visual search while making different types of speeded discriminations, after either locating the target or mentally updating a constantly changing target. The results across all experiments revealed that bilinguals and monolinguals were equally efficient at guiding attention and generating responses. These findings suggest that the bilingual advantage does not reflect a general benefit in attentional guidance, but could reflect more efficient guidance only under specific task demands. PMID:28508116
Nagai, Takehiro; Matsushima, Toshiki; Koida, Kowa; Tani, Yusuke; Kitazaki, Michiteru; Nakauchi, Shigeki
2015-10-01
Humans can visually recognize material categories of objects, such as glass, stone, and plastic, easily. However, little is known about the kinds of surface quality features that contribute to such material class recognition. In this paper, we examine the relationship between perceptual surface features and material category discrimination performance for pictures of materials, focusing on temporal aspects, including reaction time and effects of stimulus duration. The stimuli were pictures of objects with an identical shape but made of different materials that could be categorized into seven classes (glass, plastic, metal, stone, wood, leather, and fabric). In a pre-experiment, observers rated the pictures on nine surface features, including visual (e.g., glossiness and transparency) and non-visual features (e.g., heaviness and warmness), on a 7-point scale. In the main experiments, observers judged whether two simultaneously presented pictures were classified as the same or different material category. Reaction times and effects of stimulus duration were measured. The results showed that visual feature ratings were correlated with material discrimination performance for short reaction times or short stimulus durations, while non-visual feature ratings were correlated only with performance for long reaction times or long stimulus durations. These results suggest that the mechanisms underlying visual and non-visual feature processing may differ in terms of processing time, although the cause is unclear. Visual surface features may mainly contribute to material recognition in daily life, while non-visual features may contribute only weakly, if at all. Copyright © 2014 Elsevier Ltd. All rights reserved.
Detection and recognition of simple spatial forms
NASA Technical Reports Server (NTRS)
Watson, A. B.
1983-01-01
A model of human visual sensitivity to spatial patterns is constructed. The model predicts the visibility and discriminability of arbitrary two-dimensional monochrome images. The image is analyzed by a large array of linear feature sensors, which differ in spatial frequency, phase, orientation, and position in the visual field. All sensors have one octave frequency bandwidths, and increase in size linearly with eccentricity. Sensor responses are processed by an ideal Bayesian classifier, subject to uncertainty. The performance of the model is compared to that of the human observer in detecting and discriminating some simple images.
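The model's core pipeline, linear feature sensors followed by an ideal Bayesian classifier, can be sketched in miniature. This is a toy illustration under strong simplifying assumptions (a handful of pixels, two hand-made sensors, equal-variance Gaussian noise with equal priors), not Watson's actual array of frequency- and orientation-tuned sensors:

```python
import random

def sensor_responses(image, sensors):
    """Linear feature sensors: each sensor is a weight vector (its receptive
    field); the response is the inner product with the image."""
    return [sum(w * p for w, p in zip(s, image)) for s in sensors]

def ideal_classify(observed, candidates, sensors):
    """Ideal Bayesian classifier with equal priors and equal-variance Gaussian
    sensor noise: maximum likelihood reduces to choosing the candidate whose
    noise-free sensor responses are closest to the observed responses."""
    def sq_dist(img):
        return sum((o - r) ** 2
                   for o, r in zip(observed, sensor_responses(img, sensors)))
    return min(range(len(candidates)), key=lambda i: sq_dist(candidates[i]))

# Toy 4-pixel "images" and two orthogonal sensors (hypothetical stand-ins).
edge = [1, 1, -1, -1]   # edge-like pattern
flat = [1, 1, 1, 1]     # uniform patch
sensors = [edge[:], flat[:]]

random.seed(1)
observed = [r + random.gauss(0, 0.5)
            for r in sensor_responses(edge, sensors)]
print(ideal_classify(observed, [edge, flat], sensors))  # index of "edge"
```

Detection is the special case where one candidate is a blank image; discrimination compares two non-blank candidates, as in the model's comparison with human observers.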
Herrera-Guzmán, I; Peña-Casanova, J; Lara, J P; Gudayol-Ferré, E; Böhm, P
2004-08-01
The assessment of visual perception and cognition forms an important part of any general cognitive evaluation. We studied the possible influence of age, sex, and education on the performance of a normal elderly Spanish population (90 healthy subjects) in visual perception tasks. To evaluate visual perception and cognition, we used the subjects' performance on the Visual Object and Space Perception Battery (VOSP). The test consists of 8 subtests: 4 measure visual object perception (Incomplete Letters, Silhouettes, Object Decision, and Progressive Silhouettes) while the other 4 measure visual space perception (Dot Counting, Position Discrimination, Number Location, and Cube Analysis). The statistical procedures employed were either simple or multiple linear regression analyses (subtests with normal distribution) or Mann-Whitney tests followed by ANOVA with Scheffé correction (subtests without normal distribution). Age and sex were found to be significant modifying factors in the Silhouettes, Object Decision, Progressive Silhouettes, Position Discrimination, and Cube Analysis subtests. Educational level was found to be a significant predictor of performance on the Silhouettes and Object Decision subtests. The results of the sample were adjusted in line with the differences observed. Our study also offers preliminary normative data for the administration of the VOSP to an elderly Spanish population. The results are discussed and compared with similar studies performed in different cultural backgrounds.
Investigation of Neural Strategies of Visual Search
NASA Technical Reports Server (NTRS)
Krauzlis, Richard J.
2003-01-01
The goal of this project was to measure how neurons in the superior colliculus (SC) change their activity during a visual search task. Specifically, we proposed to measure how the activity of these neurons was altered by the discriminability of visual targets and to test how these changes might predict changes in the subject's performance. The primary rationale for this study was that understanding how the information encoded by these neurons constrains overall search performance would foster the development of better models of human performance. Work performed during the period supported by this grant has achieved these aims. First, we have recorded from neurons in the superior colliculus (SC) during a visual search task in which the difficulty of the task and the performance of the subject were systematically varied. The results from these single-neuron physiology experiments show that, prior to eye movement onset, the difference in activity across the ensemble of neurons reaches a fixed threshold value, reflecting the operation of a winner-take-all mechanism. Second, we have developed a model of eye movement decisions based on the principle of winner-take-all. The model incorporates the idea that the overt saccade choice reflects only one of multiple saccades prepared during visual discrimination, consistent with our physiological data. The value of the model is that, unlike previous models, it is able to account for both the latency and the percent correct of saccade choices.
Stereoscopic processing of crossed and uncrossed disparities in the human visual cortex.
Li, Yuan; Zhang, Chuncheng; Hou, Chunping; Yao, Li; Zhang, Jiacai; Long, Zhiying
2017-12-21
Binocular disparity provides a powerful cue for depth perception in a stereoscopic environment. Despite increasing knowledge of the cortical areas that process disparity from neuroimaging studies, the neural mechanism underlying disparity sign processing [crossed disparity (CD)/uncrossed disparity (UD)] is still poorly understood. In the present study, functional magnetic resonance imaging (fMRI) was used to explore different neural features that are relevant to disparity-sign processing. We performed an fMRI experiment on 27 right-handed healthy human volunteers using both general linear model (GLM) and multi-voxel pattern analysis (MVPA) methods. First, GLM was used to determine the cortical areas that displayed different responses to different disparity signs. Second, MVPA was used to determine how the cortical areas discriminate different disparity signs. The GLM analysis indicated that shapes with UD induced significantly stronger activity in the sub-region (LO) of the lateral occipital cortex (LOC) than those with CD. Region-of-interest MVPA indicated that areas V3d and V3A displayed higher accuracy in the discrimination of crossed and uncrossed disparities than LOC. Searchlight-based MVPA indicated that the dorsal visual cortex showed significantly higher prediction accuracy than the ventral visual cortex, and that the sub-region LO of LOC showed high accuracy in the discrimination of crossed and uncrossed disparities. These results suggest that the dorsal visual areas are more discriminative of disparity sign than the ventral visual areas, even though their overall responses are not sensitive to disparity sign. Moreover, the LO in the ventral visual cortex is involved in recognizing shapes with different disparity signs and is itself discriminative of disparity sign.
Zhang, Yi; Chen, Lihan
2016-01-01
Recent studies of brain plasticity that pertain to time perception have shown that fast training of temporal discrimination in one modality, for example, the auditory modality, can improve performance of temporal discrimination in another modality, such as the visual modality. Here we examined whether the perception of visual Ternus motion could be recalibrated through fast crossmodal statistical binding of temporal information and stimulus properties. We conducted two experiments, composed of three sessions each: pre-test, learning, and post-test. In both the pre-test and the post-test, participants classified the Ternus display as either “element motion” or “group motion.” For the training session in Experiment 1, we constructed two types of temporal structures, in which two consecutively presented sound beeps were dominantly (80%) either flanked by one leading and one lagging visual Ternus frame (VAAV) or had the two visual Ternus frames inserted between them (AVVA). Participants were required to report which interval (auditory vs. visual) was longer. In Experiment 2, we presented only a single auditory–visual pair with temporal configurations similar to those of Experiment 1, and asked participants to perform an audio–visual temporal order judgment. The results of these two experiments support the idea that statistical binding of temporal information and stimulus properties can quickly and selectively recalibrate the sensitivity for perceiving visual motion, according to the protocols of the specific bindings. PMID:27065910
Figure ground discrimination in age-related macular degeneration.
Tran, Thi Ha Chau; Guyader, Nathalie; Guerin, Anne; Despretz, Pascal; Boucart, Muriel
2011-03-01
To investigate impairment in discriminating a figure from its background and to study its relation to visual acuity and lesion size in patients with neovascular age-related macular degeneration (AMD). Seventeen patients with neovascular AMD and visual acuity <20/50 were included. Seventeen age-matched healthy subjects participated as controls. Complete ophthalmologic examination was performed on all participants. The stimuli were photographs of scenes containing animals (targets) or other objects (distractors), displayed on a computer monitor screen. Performance was compared in four background conditions: the target in the natural scene; the target isolated on a white background; the target separated by a white space from a structured scene; the target separated by a white space from a nonstructured, shapeless background. Target discriminability (d') was recorded. Performance was lower for patients than for controls. For the patients, it was easier to detect the target when it was separated from its background (under isolated, structured, and nonstructured conditions) than when it was located in a scene. Patients' performance improved with increasing exposure time but remained lower than that of controls. Correlations were found between visual acuity, lesion size, and sensitivity for patients. Figure/ground segregation is impaired in patients with AMD. A white space surrounding an object is sufficient to improve the object's detection and to facilitate figure/ground segregation. These results may have practical applications to the rehabilitation of the environment in patients with AMD.
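The abstract reports target discriminability as d', a signal-detection measure. As a minimal sketch with invented response counts (not the study's data), d' can be computed from hit and false-alarm rates, with a small correction so extreme proportions stay finite:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).
    Adding 0.5 to each cell (a log-linear correction) keeps the z-scores
    finite when a raw rate would be 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one background condition:
# 40 hits / 10 misses on targets, 5 false alarms / 45 correct rejections.
print(round(d_prime(40, 10, 5, 45), 2))
```

Higher d' means the target is more easily told apart from distractors regardless of response bias, which is why it suits comparisons across background conditions.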
Pre-cooling moderately enhances visual discrimination during exercise in the heat.
Clarke, Neil D; Duncan, Michael J; Smith, Mike; Hankey, Joanne
2017-02-01
Pre-cooling has been reported to attenuate the increase in core temperature, although information regarding the effects of pre-cooling on cognitive function is limited. The present study investigated the effects of pre-cooling on visual discrimination during exercise in the heat. Eight male recreational runners completed 90 min of treadmill running at 65% VO2max in the heat [32.4 ± 0.9°C and 46.8 ± 6.4% relative humidity (r.h.)] on two occasions in a randomised, counterbalanced crossover design. Participants underwent pre-cooling by means of water immersion (20.3 ± 0.3°C) for 60 min or remained seated for 60 min in a laboratory (20.2 ± 1.7°C and 60.2 ± 2.5% r.h.). Rectal temperature (Trec) and mean skin temperature (Tskin) were monitored throughout the protocol. At 30-min intervals participants performed a visual discrimination task. Following pre-cooling, Trec (P = 0.040; ηp² = 0.48) was moderately lower at 0 and 30 min, and Tskin (P = 0.003; ηp² = 0.75) was lower to a large extent at 0 min of exercise. Visual discrimination was moderately more accurate at 60 and 90 min of exercise following pre-cooling (P = 0.067; ηp² = 0.40). Pre-cooling resulted in small improvements in visual discrimination sensitivity (F(1,7) = 2.188; P = 0.183; ηp² = 0.24), criterion (F(1,7) = 1.298; P = 0.292; ηp² = 0.16) and bias (F(1,7) = 2.202; P = 0.181; ηp² = 0.24). Pre-cooling moderately improves visual discrimination accuracy during exercise in the heat.
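The effect sizes reported alongside the F statistics above are consistent with partial eta-squared, which can be recovered directly from an F ratio and its degrees of freedom. A small sketch using the reported values:

```python
def partial_eta_squared(f_stat, df_effect, df_error):
    """Partial eta-squared from an F ratio and its degrees of freedom:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error),
    which follows from F = (SS_effect/df_effect) / (SS_error/df_error)
    and eta_p^2 = SS_effect / (SS_effect + SS_error)."""
    return (f_stat * df_effect) / (f_stat * df_effect + df_error)

# Discrimination sensitivity, F(1,7) = 2.188, as reported above.
print(round(partial_eta_squared(2.188, 1, 7), 2))  # 0.24, matching the abstract
```

The same formula reproduces the 0.16 and 0.24 reported for the criterion and bias analyses, a useful consistency check on flattened statistics.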
Convergent-Discriminant Validity of the Jewish Employment Vocational System (JEVS).
ERIC Educational Resources Information Center
Tryjankowski, Elaine M.
This study investigated the construct validity of five perceptual traits (auditory discrimination, visual discrimination, visual memory, visual-motor coordination, and auditory to visual-motor coordination) with five simulated work samples (union assembly, resistor reading, budgette assembly, lock assembly, and nail and screw sort) from the Jewish…
Smid, H G; Jakob, A; Heinze, H J
1999-03-01
What cognitive processes underlie event-related brain potential (ERP) effects related to visual multidimensional selective attention and how are these processes organized? We recorded ERPs when participants attended to one conjunction of color, global shape and local shape and ignored other conjunctions of these attributes in three discriminability conditions. Attending to color and shape produced three ERP effects: frontal selection positivity (FSP), central negativity (N2b), and posterior selection negativity (SN). The results suggested that the processes underlying SN and N2b perform independent within-dimension selections, whereas the process underlying the FSP performs hierarchical between-dimension selections. At posterior electrodes, manipulation of discriminability changed the ERPs to the relevant but not to the irrelevant stimuli, suggesting that the SN does not concern the selection process itself but rather a cognitive process initiated after selection is finished. Other findings suggested that selection of multiple visual attributes occurs in parallel.
Correlation Filter Learning Toward Peak Strength for Visual Tracking.
Sui, Yao; Wang, Guanghui; Zhang, Li
2018-04-01
This paper presents a novel visual tracking approach to correlation filter learning toward peak strength of correlation response. Previous methods leverage all features of the target and the immediate background to learn a correlation filter. Some features, however, may be distractive to tracking, like those from occlusion and local deformation, resulting in unstable tracking performance. This paper aims at solving this issue and proposes a novel algorithm to learn the correlation filter. The proposed approach, by imposing an elastic net constraint on the filter, can adaptively eliminate those distractive features in the correlation filtering. A new peak strength metric is proposed to measure the discriminative capability of the learned correlation filter. It is demonstrated that the proposed approach effectively strengthens the peak of the correlation response, leading to more discriminative performance than previous methods. Extensive experiments on a challenging visual tracking benchmark demonstrate that the proposed tracker outperforms most state-of-the-art methods.
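The mechanism named above, an elastic net constraint that adaptively eliminates distractive features, can be illustrated in a reduced setting. Assuming (purely for illustration) an orthonormal feature design, the elastic-net solution has a closed form per feature, and the soft-thresholding step is what drives weak features exactly to zero:

```python
def soft_threshold(z, lam):
    """Proximal operator of the L1 penalty: entries with |z| <= lam become
    exactly zero; larger entries are shrunk toward zero by lam."""
    sign = 1.0 if z >= 0 else -1.0
    return sign * max(abs(z) - lam, 0.0)

def elastic_net_orthonormal(correlations, lam1, lam2):
    """Closed-form elastic-net weights under an orthonormal feature design:
    w_j = soft(c_j, lam1) / (1 + lam2), where c_j is the correlation between
    feature j and the desired response. The L1 part (lam1) zeroes weak,
    distractive features; the L2 part (lam2) shrinks the survivors."""
    return [soft_threshold(c, lam1) / (1.0 + lam2) for c in correlations]

# Hypothetical feature-response correlations: two informative cues and two
# weak distractors (e.g., features corrupted by occlusion).
weights = elastic_net_orthonormal([0.9, -0.8, 0.05, -0.03], lam1=0.1, lam2=0.5)
print(weights)
```

In the paper's setting the filter is learned over correlation responses rather than raw correlations, but the sparsifying role of the elastic net is the same.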
Atypical Face Perception in Autism: A Point of View?
Morin, Karine; Guy, Jacalyn; Habak, Claudine; Wilson, Hugh R; Pagani, Linda; Mottron, Laurent; Bertone, Armando
2015-10-01
Face perception is the most commonly used visual metric of social perception in autism. However, when found to be atypical, the origin of face perception differences in autism is contentious. One hypothesis proposes that a locally oriented visual analysis, characteristic of individuals with autism, ultimately affects performance on face tasks where a global analysis is optimal. The objective of this study was to evaluate this hypothesis by assessing face identity discrimination with synthetic faces presented with and without changes in viewpoint, with the former condition minimizing access to local face attributes used for identity discrimination. Twenty-eight individuals with autism and 30 neurotypical participants performed a face identity discrimination task. Stimuli were synthetic faces extracted from traditional face photographs in both front and 20° side viewpoints, digitized from 37 points to provide a continuous measure of facial geometry. Face identity discrimination thresholds were obtained using a two-alternative, temporal forced choice match-to-sample paradigm. Analyses revealed an interaction between group and condition, with group differences found only for the viewpoint change condition, where performance in the autism group was decreased compared to that of neurotypical participants. The selective decrease in performance for the viewpoint change condition suggests that face identity discrimination in autism is more difficult when access to local cues is minimized, and/or when dependence on integrative analysis is increased. These results lend support to a perceptual contribution of atypical face perception in autism. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
Gálosi, Rita; Szalay, Csaba; Aradi, Mihály; Perlaki, Gábor; Pál, József; Steier, Roy; Lénárd, László; Karádi, Zoltán
2017-04-01
Manganese-enhanced magnetic resonance imaging (MEMRI) offers unique advantages, such as studying brain activation in freely moving rats, but its usefulness has not previously been evaluated during operant behavior training. Manganese, in the form of MnCl2 at a dose of 20 mg/kg, was infused intraperitoneally. The administration was repeated, separated by 24 h, to reach doses of 40 mg/kg or 60 mg/kg, respectively. Hepatotoxicity of the MnCl2 was evaluated by determining serum aspartate aminotransferase, alanine aminotransferase, total bilirubin, albumin and protein levels. Neurological examination was also carried out. The animals were tested in a visual cue discriminated operant task. Imaging was performed using a 3T clinical MR scanner. T1 values were determined before and after MnCl2 administrations. Manganese-enhanced images of each animal were subtracted from their baseline images to calculate the decrease in the T1 value (ΔT1) voxel by voxel. The subtracted T1 maps of trained animals performing the visual cue discriminated operant task and those of naive rats were compared. The dose of 60 mg/kg MnCl2 showed a hepatotoxic effect, but even these animals did not exhibit neurological symptoms. The doses of 20 and 40 mg/kg MnCl2 increased the number of omissions but did not affect the accuracy of performing the visual cue discriminated operant task. Using the accumulated dose of 40 mg/kg, voxels with significantly enhanced ΔT1 values were detected in the following brain areas of the animals performing the visual cue discriminated operant task, compared to those in the controls: the visual, somatosensory, motor and premotor cortices; the insula; the cingulate, ectorhinal, entorhinal, perirhinal and piriform cortices; the hippocampus; the amygdala with amygdalohippocampal areas; the dorsal striatum; the nucleus accumbens core; the substantia nigra; and the retrorubral field.
In conclusion, the MEMRI proved to be a reliable method to accomplish brain activity mapping in correlation with the operant behavior of freely moving rodents. Copyright © 2016 Elsevier Inc. All rights reserved.
A Role for Mouse Primary Visual Cortex in Motion Perception.
Marques, Tiago; Summers, Mathew T; Fioreze, Gabriela; Fridman, Marina; Dias, Rodrigo F; Feller, Marla B; Petreanu, Leopoldo
2018-06-04
Visual motion is an ethologically important stimulus throughout the animal kingdom. In primates, motion perception relies on specific higher-order cortical regions. Although mouse primary visual cortex (V1) and higher-order visual areas show direction-selective (DS) responses, their role in motion perception remains unknown. Here, we tested whether V1 is involved in motion perception in mice. We developed a head-fixed discrimination task in which mice must report their perceived direction of motion from random dot kinematograms (RDKs). After training, mice made around 90% correct choices for stimuli with high coherence and performed significantly above chance for 16% coherent RDKs. Accuracy increased with both stimulus duration and visual field coverage of the stimulus, suggesting that mice in this task integrate motion information in time and space. Retinal recordings showed that thalamically projecting On-Off DS ganglion cells display DS responses when stimulated with RDKs. Two-photon calcium imaging revealed that neurons in layer (L) 2/3 of V1 display strong DS tuning in response to this stimulus. Thus, RDKs engage motion-sensitive retinal circuits as well as downstream visual cortical areas. Contralateral V1 activity played a key role in this motion direction discrimination task because its reversible inactivation with muscimol led to a significant reduction in performance. Neurometric-psychometric comparisons showed that an ideal observer could solve the task with the information encoded in DS L2/3 neurons. Motion discrimination of RDKs presents a powerful behavioral tool for dissecting the role of retino-forebrain circuits in motion processing. Copyright © 2018 Elsevier Ltd. All rights reserved.
Color Vision in Color Display Night Vision Goggles.
Liggins, Eric P; Serle, William P
2017-05-01
Aircrew viewing eyepiece-injected symbology on color display night vision goggles (CDNVGs) are performing a visual task involving color under highly unnatural viewing conditions. Their performance in discriminating different colors and responding to color cues is unknown. Experimental laboratory measurements of 1) color discrimination and 2) visual search performance are reported under adaptation conditions representative of a CDNVG. Color discrimination was measured using a two-alternative forced choice (2AFC) paradigm that probes color space uniformly around a white point. Search times in the presence of different degrees of clutter (distractors in the scene) are measured for different potential symbology colors. The discrimination data support previous data suggesting that discrimination is best for colors close to the adapting point in color space (P43 phosphor in this case). There were highly significant effects of background adaptation (white or green) and test color. The search time data show that saturated colors with the greatest chromatic contrast with respect to the background lead to the shortest search times, associated with the greatest saliency. Search times for the green background were around 150 ms longer than for the white. Desaturated colors, along with those close to a typical CDNVG display phosphor in color space, should be avoided by CDNVG designers if the greatest conspicuity of symbology is desired. The results can be used by CDNVG symbology designers to optimize aircrew performance subject to wider constraints arising from the way color is used in the existing conventional cockpit instruments and displays.
Liggins EP, Serle WP. Color vision in color display night vision goggles. Aerosp Med Hum Perform. 2017; 88(5):448-456.
Effects of Age and Reading Ability on Visual Discrimination.
ERIC Educational Resources Information Center
Musatti, Tullia; And Others
1981-01-01
Sixty children, prereaders and readers aged 4-6 years, matched color, shape, and letter features in pairs of cartoons. Older children and those able to read performed better, confirming the hypothesis that the development of some visual skills is a by-product of learning to read. (Author/SJL)
[Discrimination of varieties of brake fluid using visual-near infrared spectra].
Jiang, Lu-lu; Tan, Li-hong; Qiu, Zheng-jun; Lu, Jiang-feng; He, Yong
2008-06-01
A new method was developed to rapidly discriminate brands of brake fluid by means of visual-near infrared spectroscopy. Five different brands of brake fluid were analyzed using a handheld near-infrared spectrograph manufactured by ASD Company, and 60 samples were obtained from each brand. The sample data were pretreated using average smoothing and the standard normal variate method, and then analyzed using principal component analysis (PCA). A 2-dimensional plot was drawn based on the first and second principal components, and the plot indicated that the clustering of the different brake fluids is distinct. The first 6 principal components were taken as input variables, and the brand of brake fluid as the output variable, to build a discriminant model by the stepwise discriminant analysis method. Two hundred twenty-five randomly selected samples were used to create the model, and the remaining 75 samples to verify it. The result showed that the correct discrimination rate was 94.67%, indicating that the method proposed in this paper performs well in classification and discrimination. It provides a new way to rapidly discriminate different brands of brake fluid.
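The preprocessing steps named above, average smoothing followed by the standard normal variate (SNV) transform, can be sketched for a single spectrum. The spectrum values below are invented for illustration, and the PCA and stepwise discriminant steps are omitted:

```python
from statistics import mean, pstdev

def smooth(spectrum, window=3):
    """Moving-average smoothing over a fixed window (truncated at the edges),
    suppressing high-frequency instrument noise."""
    half = window // 2
    return [mean(spectrum[max(0, i - half):i + half + 1])
            for i in range(len(spectrum))]

def snv(spectrum):
    """Standard normal variate: center each spectrum on its own mean and
    scale by its own standard deviation, removing baseline and scatter
    offsets so samples are comparable before PCA."""
    m, s = mean(spectrum), pstdev(spectrum)
    return [(x - m) / s for x in spectrum]

# Hypothetical reflectance values at seven wavelengths for one sample.
raw = [0.20, 0.22, 0.35, 0.50, 0.48, 0.30, 0.21]
pre = snv(smooth(raw))
print([round(x, 2) for x in pre])
```

After SNV each spectrum has zero mean and unit standard deviation, which is exactly what makes the subsequent PCA clustering reflect shape differences between brands rather than overall intensity.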
Duncum, A J F; Atkins, K J; Beilharz, F L; Mundy, M E
2016-01-01
Individuals with body dysmorphic disorder (BDD) and clinically concerning body-image concern (BIC) appear to possess abnormalities in the way they perceive visual information in the form of a bias towards local visual processing. As inversion interrupts normal global processing, forcing individuals to process locally, an upright-inverted stimulus discrimination task was used to investigate this phenomenon. We examined whether individuals with nonclinical, yet high levels of BIC would show signs of this bias, in the form of reduced inversion effects (i.e., increased local processing). Furthermore, we assessed whether this bias appeared for general visual stimuli or specifically for appearance-related stimuli, such as faces and bodies. Participants with high-BIC (n = 25) and low-BIC (n = 30) performed a stimulus discrimination task with upright and inverted faces, scenes, objects, and bodies. Unexpectedly, the high-BIC group showed an increased inversion effect compared to the low-BIC group, indicating perceptual abnormalities may not be present as local processing biases, as originally thought. There was no significant difference in performance across stimulus types, signifying that any visual processing abnormalities may be general rather than appearance-based. This has important implications for whether visual processing abnormalities are predisposing factors for BDD or develop throughout the disorder.
Characteristic and intermingled neocortical circuits encode different visual object discriminations.
Zhang, Guo-Rong; Zhao, Hua; Cook, Nathan; Svestka, Michael; Choi, Eui M; Jan, Mary; Cook, Robert G; Geller, Alfred I
2017-07-28
Synaptic plasticity and neural network theories hypothesize that the essential information for advanced cognitive tasks is encoded in specific circuits and neurons within distributed neocortical networks. However, these circuits are incompletely characterized, and we do not know if a specific discrimination is encoded in characteristic circuits among multiple animals. Here, we determined the spatial distribution of active neurons for a circuit that encodes some of the essential information for a cognitive task. We genetically activated protein kinase C pathways in several hundred spatially-grouped glutamatergic and GABAergic neurons in rat postrhinal cortex, a multimodal associative area that is part of a distributed circuit that encodes visual object discriminations. We previously established that this intervention enhances accuracy for specific discriminations. Moreover, the genetically-modified, local circuit in POR cortex encodes some of the essential information, and this local circuit is preferentially activated during performance, as shown by activity-dependent gene imaging. Here, we mapped the positions of the active neurons, which revealed that two image sets are encoded in characteristic and different circuits. While characteristic circuits are known to process sensory information, in sensory areas, this is the first demonstration that characteristic circuits encode specific discriminations, in a multimodal associative area. Further, the circuits encoding the two image sets are intermingled, and likely overlapping, enabling efficient encoding. Consistent with reconsolidation theories, intermingled and overlapping encoding could facilitate formation of associations between related discriminations, including visually similar discriminations or discriminations learned at the same time or place. Copyright © 2017 Elsevier B.V. All rights reserved.
Unsupervised visual discrimination learning of complex stimuli: Accuracy, bias and generalization.
Montefusco-Siegmund, Rodrigo; Toro, Mauricio; Maldonado, Pedro E; Aylwin, María de la L
2018-07-01
Through same-different judgments, we can discriminate an immense variety of stimuli; consequently, such judgments are critical in our everyday interaction with the environment. The quality of the judgments depends on familiarity with the stimuli. Discrimination can be improved through learning, but to this day we lack direct evidence of how learning shapes same-different judgments with complex stimuli. We studied unsupervised visual discrimination learning in 42 participants as they performed same-different judgments with two types of unfamiliar complex stimuli in the absence of labeling or individuation. Across nine daily training sessions with equiprobable same and different stimulus pairs, participants increased both sensitivity and criterion by reducing errors on both same and different pairs. With practice, performance was superior for different pairs, along with a bias toward the "different" response. To evaluate the process underlying this bias, we manipulated the proportion of same and different pairs, which resulted in an additional proportion-induced bias, suggesting that the bias observed with equal proportions was a stimulus processing bias. Overall, these results suggest that unsupervised discrimination learning occurs through changes in stimulus processing that increase the sensory evidence and/or the precision of working memory. Finally, the acquired discrimination ability transferred fully to novel exemplars of the practiced stimulus category, consistent with the acquisition of category-specific perceptual expertise. Copyright © 2018 Elsevier Ltd. All rights reserved.
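The sensitivity and criterion measures referred to above are standard signal detection theory indices. A minimal sketch, treating the "different" response as the signal response in a yes/no approximation; the trial counts below are illustrative, not the study's data.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse standard-normal CDF

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """d-prime and criterion c from a same-different confusion table,
    with 'different' treated as the signal response (yes/no approximation)."""
    # Log-linear correction guards against 0% or 100% rates
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Illustrative early vs. late sessions: 80 "different" and 80 "same" trials each
d_early, c_early = sdt_indices(hits=52, misses=28, false_alarms=24, correct_rejections=56)
d_late, c_late = sdt_indices(hits=68, misses=12, false_alarms=12, correct_rejections=68)
print(f"early: d'={d_early:.2f}, c={c_early:.2f}; late: d'={d_late:.2f}, c={c_late:.2f}")
```

A negative criterion would indicate a bias toward responding "different"; reduced errors on both trial types show up as a rise in d' rather than a shift in c.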
Surround-Masking Affects Visual Estimation Ability
Jastrzebski, Nicola R.; Hugrass, Laila E.; Crewther, Sheila G.; Crewther, David P.
2017-01-01
Visual estimation of numerosity involves the discrimination of magnitude between two distributions or perceptual sets that vary in number of elements. How performance on such estimation depends on peripheral sensory stimulation is unclear, even in typically developing adults. Here, we varied the central and surround contrast of stimuli that comprised a visual estimation task in order to determine whether mechanisms involved with the removal of unessential visual input functionally contributes toward number acuity. The visual estimation judgments of typically developed adults were significantly impaired for high but not low contrast surround stimulus conditions. The center and surround contrasts of the stimuli also differentially affected the accuracy of numerosity estimation depending on whether fewer or more dots were presented. Remarkably, observers demonstrated the highest mean percentage accuracy across stimulus conditions in the discrimination of more elements when the surround contrast was low and the background luminance of the central region containing the elements was dark (black center). Conversely, accuracy was severely impaired during the discrimination of fewer elements when the surround contrast was high and the background luminance of the central region was mid level (gray center). These findings suggest that estimation ability is functionally related to the quality of low-order filtration of unessential visual information. These surround masking results may help understanding of the poor visual estimation ability commonly observed in developmental dyscalculia. PMID:28360845
ERIC Educational Resources Information Center
Squire, Larry R.; Levy, Daniel A.; Shrager, Yael
2005-01-01
The perirhinal cortex is known to be important for memory, but there has recently been interest in the possibility that it might also be involved in visual perceptual functions. In four experiments, we assessed visual discrimination ability and visual discrimination learning in severely amnesic patients with large medial temporal lobe lesions that…
ERIC Educational Resources Information Center
Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve
2018-01-01
To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…
A dual-task investigation of automaticity in visual word processing
NASA Technical Reports Server (NTRS)
McCann, R. S.; Remington, R. W.; Van Selst, M.
2000-01-01
An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.
Ferraro, M; Foster, D H
1991-01-01
Under certain experimental conditions, visual discrimination performance in multielement images is closely related to visual identification performance: elements of the image are distinguished only insofar as they appear to have distinct, discrete, internal characterizations. This report is concerned with the detailed relationship between such internal characterizations and observable discrimination performance. Two types of general processes that might underlie discrimination are considered. The first is based on computing all possible internal image characterizations that could allow a correct decision, each characterization weighted by the probability of its occurrence and of a correct decision being made. The second process is based on computing the difference between the probabilities associated with the internal characterizations of the individual image elements, the difference quantified naturally with an l_p norm. The relationship between the two processes was investigated analytically and by Monte Carlo simulations over a plausible range of numbers n of the internal characterizations of each of the m elements in the image. The predictions of the two processes were found to be closely similar. The relationship was precisely one-to-one, however, only for n = 2, m = 3, 4, 6, and for n greater than 2, m = 3, 4, p = 2. For all other cases tested, a one-to-one relationship was shown to be impossible.
Fast transfer of crossmodal time interval training.
Chen, Lihan; Zhou, Xiaolin
2014-06-01
Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.
Colour discrimination and categorisation in Williams syndrome.
Farran, Emily K; Cranwell, Matthew B; Alvarez, James; Franklin, Anna
2013-10-01
Individuals with Williams syndrome (WS) present with impaired functioning of the dorsal visual stream relative to the ventral visual stream. As such, little attention has been given to ventral stream functions in WS. We investigated colour processing, a predominantly ventral stream function, for the first time in nineteen individuals with Williams syndrome. Colour discrimination was assessed using the Farnsworth-Munsell 100 hue test. Colour categorisation was assessed using a match-to-sample test and a colour naming task. A visual search task was also included as a measure of sensitivity to the size of perceptual colour difference. Results showed that individuals with WS have reduced colour discrimination relative to typically developing participants matched for chronological age; performance was commensurate with a typically developing group matched for non-verbal ability. In contrast, categorisation was typical in WS, although there was some evidence that sensitivity to the size of perceptual colour differences was reduced in this group. Copyright © 2013 Elsevier Ltd. All rights reserved.
Hippocampus, perirhinal cortex, and complex visual discriminations in rats and humans
Hales, Jena B.; Broadbent, Nicola J.; Velu, Priya D.
2015-01-01
Structures in the medial temporal lobe, including the hippocampus and perirhinal cortex, are known to be essential for the formation of long-term memory. Recent animal and human studies have investigated whether perirhinal cortex might also be important for visual perception. In our study, using a simultaneous oddity discrimination task, rats with perirhinal lesions were impaired and did not exhibit the normal preference for exploring the odd object. Notably, rats with hippocampal lesions exhibited the same impairment. Thus, the deficit is unlikely to illuminate functions attributed specifically to perirhinal cortex. Both lesion groups were able to acquire visual discriminations involving the same objects used in the oddity task. Patients with hippocampal damage or larger medial temporal lobe lesions were intact in a similar oddity task that allowed participants to explore objects quickly using eye movements. We suggest that humans were able to rely on an intact working memory capacity to perform this task, whereas rats (who moved slowly among the objects) needed to rely on long-term memory. PMID:25593294
Visual awareness suppression by pre-stimulus brain stimulation; a neural effect.
Jacobs, Christianne; Goebel, Rainer; Sack, Alexander T
2012-01-02
Transcranial magnetic stimulation (TMS) has established the functional relevance of early visual cortex (EVC) for visual awareness with great temporal specificity non-invasively in conscious human volunteers. Many studies have found a suppressive effect when TMS was applied over EVC 80-100 ms after the onset of the visual stimulus (post-stimulus TMS time window). Yet, few studies found task performance to also suffer when TMS was applied even before visual stimulus presentation (pre-stimulus TMS time window). This pre-stimulus TMS effect, however, remains controversially debated and its origin had mainly been ascribed to TMS-induced eye-blinking artifacts. Here, we applied chronometric TMS over EVC during the execution of a visual discrimination task, covering an exhaustive range of visual stimulus-locked TMS time windows ranging from -80 pre-stimulus to 300 ms post-stimulus onset. Electrooculographical (EoG) recordings, sham TMS stimulation, and vertex TMS stimulation controlled for different types of non-neural TMS effects. Our findings clearly reveal TMS-induced masking effects for both pre- and post-stimulus time windows, and for both objective visual discrimination performance and subjective visibility. Importantly, all effects proved to be still present after post hoc removal of eye blink trials, suggesting a neural origin for the pre-stimulus TMS suppression effect on visual awareness. We speculate based on our data that TMS exerts its pre-stimulus effect via generation of a neural state which interacts with subsequent visual input. Copyright © 2011 Elsevier Inc. All rights reserved.
Neurons Forming Optic Glomeruli Compute Figure–Ground Discriminations in Drosophila
Aptekar, Jacob W.; Keleş, Mehmet F.; Lu, Patrick M.; Zolotova, Nadezhda M.; Frye, Mark A.
2015-01-01
Many animals rely on visual figure–ground discrimination to aid in navigation, and to draw attention to salient features like conspecifics or predators. Even figures that are similar in pattern and luminance to the visual surroundings can be distinguished by the optical disparity generated by their relative motion against the ground, and yet the neural mechanisms underlying these visual discriminations are not well understood. We show in flies that a diverse array of figure–ground stimuli containing a motion-defined edge elicit statistically similar behavioral responses to one another, and statistically distinct behavioral responses from ground motion alone. From studies in larger flies and other insect species, we hypothesized that the circuitry of the lobula—one of the four, primary neuropiles of the fly optic lobe—performs this visual discrimination. Using calcium imaging of input dendrites, we then show that information encoded in cells projecting from the lobula to discrete optic glomeruli in the central brain group these sets of figure–ground stimuli in a homologous manner to the behavior; “figure-like” stimuli are coded similar to one another and “ground-like” stimuli are encoded differently. One cell class responds to the leading edge of a figure and is suppressed by ground motion. Two other classes cluster any figure-like stimuli, including a figure moving opposite the ground, distinctly from ground alone. This evidence demonstrates that lobula outputs provide a diverse basis set encoding visual features necessary for figure detection. PMID:25972183
Viewing the body modulates tactile receptive fields.
Haggard, Patrick; Christakou, Anastasia; Serino, Andrea
2007-06-01
Tactile discrimination performance depends on the receptive field (RF) size of somatosensory cortical (SI) neurons. Psychophysical masking effects can reveal the RF of an idealized "virtual" somatosensory neuron. Previous studies show that top-down factors strongly affect tactile discrimination performance. Here, we show that non-informative vision of the touched body part influences tactile discrimination by modulating tactile RFs. Ten subjects performed spatial discrimination between touch locations on the forearm. Performance was improved when subjects saw their forearm compared to viewing a neutral object in the same location. The extent of visual information was relevant, since restricted view of the forearm did not have this enhancing effect. Vibrotactile maskers were placed symmetrically on either side of the tactile target locations, at two different distances. Overall, masking significantly impaired discrimination performance, but the spatial gradient of masking depended on what subjects viewed. Viewing the body reduced the effect of distant maskers, but enhanced the effect of close maskers, as compared to viewing a neutral object. We propose that viewing the body improves functional touch by sharpening tactile RFs in an early somatosensory map. Top-down modulation of lateral inhibition could underlie these effects.
Optimal visuotactile integration for velocity discrimination of self-hand movements
Chancel, M.; Blanchard, C.; Guerraz, M.; Montagnini, A.
2016-01-01
Illusory hand movements can be elicited by a textured disk or a visual pattern rotating under one's hand, while proprioceptive inputs convey immobility information (Blanchard C, Roll R, Roll JP, Kavounoudias A. PLoS One 8: e62475, 2013). Here, we investigated whether visuotactile integration can optimize velocity discrimination of illusory hand movements in line with Bayesian predictions. We induced illusory movements in 15 volunteers by visual and/or tactile stimulation delivered at six angular velocities. Participants had to compare hand illusion velocities with a 5°/s hand reference movement in an alternative forced choice paradigm. Results showed that the discrimination threshold decreased in the visuotactile condition compared with the unimodal (visual or tactile) conditions, reflecting better bimodal discrimination. The perceptual strength (gain) of the illusions also increased: the stimulation required to give rise to a 5°/s illusory movement was slower in the visuotactile condition than in each of the two unimodal conditions. The maximum likelihood estimation model satisfactorily predicted the improved discrimination threshold but not the increase in gain. When we added a zero-centered prior, reflecting immobility information, the Bayesian model did predict the gain increase but systematically overestimated it. Interestingly, the predicted gains better fit the visuotactile performances when proprioceptive noise was generated by covibrating antagonist wrist muscles. These findings show that kinesthetic information of visual and tactile origins is optimally integrated to improve velocity discrimination of self-hand movements. However, a Bayesian model alone could not fully describe the illusory phenomenon, pointing to the crucial importance of omnipresent muscle proprioceptive cues relative to other sensory cues for kinesthesia. PMID:27385802
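The maximum likelihood estimation (MLE) prediction tested in this record has a simple closed form: under optimal integration, the bimodal variance is the inverse-variance combination of the unimodal variances, and each cue is weighted by its reliability. A sketch with illustrative unimodal noise values, not the paper's data:

```python
import math

def mle_bimodal_sigma(sigma_v, sigma_t):
    """Predicted visuotactile noise under maximum-likelihood integration:
    1/sigma_vt^2 = 1/sigma_v^2 + 1/sigma_t^2."""
    return math.sqrt((sigma_v**2 * sigma_t**2) / (sigma_v**2 + sigma_t**2))

def mle_weights(sigma_v, sigma_t):
    # Reliability (inverse-variance) weight given to the visual estimate
    wv = sigma_t**2 / (sigma_v**2 + sigma_t**2)
    return wv, 1.0 - wv

# Illustrative unimodal discrimination noise (deg/s)
sigma_v, sigma_t = 2.0, 3.0
sigma_vt = mle_bimodal_sigma(sigma_v, sigma_t)
wv, wt = mle_weights(sigma_v, sigma_t)
print(f"predicted bimodal sigma: {sigma_vt:.2f} deg/s (weights v={wv:.2f}, t={wt:.2f})")
```

The prediction is always at or below the better unimodal value, which is why an observed bimodal threshold matching this bound is taken as evidence of optimal integration.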
Visual Search Performance in Patients with Vision Impairment: A Systematic Review.
Senger, Cassia; Margarido, Maria Rita Rodrigues Alves; De Moraes, Carlos Gustavo; De Fendi, Ligia Issa; Messias, André; Paula, Jayter Silva
2017-11-01
Patients with visual impairment constantly face challenges in achieving an independent and productive life, which depends on both good visual discrimination and visual search capacities. Given that visual search is a critical skill for several daily tasks and could be used as an index of overall visual function, we investigated the relationship between vision impairment and visual search performance. A comprehensive search was undertaken using the electronic PubMed, EMBASE, LILACS, and Cochrane databases from January 1980 to December 2016, applying the following terms: "visual search", "visual search performance", "visual impairment", "visual exploration", "visual field", "hemianopia", "search time", "vision lost", "visual loss", and "low vision". Two hundred seventy-six studies from 12,059 electronic database files were selected, and 40 of them were included in this review. Studies included participants of all ages and both sexes, and sample sizes ranged from 5 to 199 participants. Visual impairment was associated with worse visual search performance in several ophthalmologic conditions, which were either artificially induced or related to specific eye and neurological diseases. This systematic review details all the described circumstances interfering with visual search tasks, highlights the need for developing technical standards, and outlines patterns for diagnosis and therapy using visual search capabilities.
Cognitive tunneling: use of visual information under stress.
Dirkin, G R
1983-02-01
References to "tunnel vision" under stress are considered to describe a process of attentional, rather than visual, narrowing. Easterbrook's hypothesis that the range of cue utilization is reduced under stress was tested with a primary task located in the visual periphery. High school volunteers performed a visual discrimination task with choice reaction time (RT) as the dependent variable. A 2 × 3 (order of presentation × practice) design, with repeated measures on the last factor, was employed. Two levels of stress, high and low, were operationalized by having the subject perform in the presence of an evaluative audience or alone. Pulse rate was employed as a manipulation check on arousal. The results partially supported the hypothesis that a peripheral visual primary task can be attended to under stress without decrement in performance.
Multiple task performance as a predictor of the potential of air traffic controller trainees.
DOT National Transportation Integrated Search
1972-01-01
Two hundred and twenty-nine air traffic controller trainees were tested on the CAMI Multiple Task Performance Battery. The battery provides objective measures of monitoring, arithmetical skills, visual discrimination, and group problem solving. The c...
The Influence of Visual Ability on Learning and Memory Performance in 13 Strains of Mice
ERIC Educational Resources Information Center
Brown, Richard E.; Wong, Aimee A.
2007-01-01
We calculated visual ability in 13 strains of mice (129SI/Sv1mJ, A/J, AKR/J, BALB/cByJ, C3H/HeJ, C57BL/6J, CAST/EiJ, DBA/2J, FVB/NJ, MOLF/EiJ, SJL/J, SM/J, and SPRET/EiJ) on visual detection, pattern discrimination, and visual acuity and tested these and other mice of the same strains in a behavioral test battery that evaluated visuo-spatial…
Seeing visual word forms: spatial summation, eccentricity and spatial configuration.
Kao, Chien-Hui; Chen, Chien-Chung
2012-06-01
We investigated observers' performance in detecting and discriminating visual word forms as a function of target size and retinal eccentricity. The contrast threshold for visual words was measured with a spatial two-alternative forced-choice paradigm and the Psi adaptive method. Observers indicated which of two sides contained a stimulus in the detection task, and which contained a real character (as opposed to a pseudo- or non-character) in the discrimination task. When the target size was sufficiently small, the detection threshold of a character decreased as its size increased, with a slope of -1/2 on log-log coordinates, up to a critical size at all eccentricities and for all stimulus types. The discrimination threshold decreased with target size with a slope of -1 up to a critical size that was dependent on stimulus type and eccentricity. Beyond that size, the threshold decreased with a slope of -1/2 on log-log coordinates before leveling out. The data were well fit by a spatial summation model that contains local receptive fields (RFs) and a summation across these filters within an attention window. Our results imply that detection is mediated by local RFs smaller than any tested stimuli, and thus detection performance is dominated by summation across receptive fields. On the other hand, discrimination is dominated by summation within a local RF in the fovea but by cross-RF summation in the periphery. Copyright © 2012 Elsevier Ltd. All rights reserved.
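The log-log slopes of -1 and -1/2 reported above describe how steeply threshold falls as stimulus size grows. As a small illustration of how such a slope is estimated, the sketch below fits a least-squares line in log-log coordinates to synthetic thresholds that follow full (slope -1) summation; the numbers are invented, not the study's data.

```python
import math

# Synthetic thresholds obeying full summation: threshold halves as size doubles
sizes = [0.5, 1.0, 2.0, 4.0]          # stimulus sizes (arbitrary units)
thresholds = [0.40, 0.20, 0.10, 0.05]  # contrast thresholds

# Ordinary least-squares slope on log10-log10 axes
logx = [math.log10(s) for s in sizes]
logy = [math.log10(t) for t in thresholds]
n = len(sizes)
mx, my = sum(logx) / n, sum(logy) / n
slope = sum((x - mx) * (y - my) for x, y in zip(logx, logy)) / \
        sum((x - mx) ** 2 for x in logx)
print(f"log-log slope: {slope:.2f}")  # prints "log-log slope: -1.00"
```

In practice one fits separate slopes below and above the critical size; a slope near -1 indicates linear (full) summation and a slope near -1/2 indicates probability or cross-filter summation.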
Hecker, Elizabeth A.; Serences, John T.; Srinivasan, Ramesh
2013-01-01
Interacting with the environment requires the ability to flexibly direct attention to relevant features. We examined the degree to which individuals attend to visual features within and across Detection, Fine Discrimination, and Coarse Discrimination tasks. Electroencephalographic (EEG) responses were measured to an unattended peripheral flickering (4 or 6 Hz) grating while individuals (n = 33) attended to orientations that were offset by 0°, 10°, 20°, 30°, 40°, and 90° from the orientation of the unattended flicker. These unattended responses may be sensitive to attentional gain at the attended spatial location, since attention to features enhances early visual responses throughout the visual field. We found no significant differences in tuning curves across the three tasks in part due to individual differences in strategies. We sought to characterize individual attention strategies using hierarchical Bayesian modeling, which grouped individuals into families of curves that reflect attention to the physical target orientation (“on-channel”) or away from the target orientation (“off-channel”) or a uniform distribution of attention. The different curves were related to behavioral performance; individuals with “on-channel” curves had lower thresholds than individuals with uniform curves. Individuals with “off-channel” curves during Fine Discrimination additionally had lower thresholds than those assigned to uniform curves, highlighting the perceptual benefits of attending away from the physical target orientation during fine discriminations. Finally, we showed that a subset of individuals with optimal curves (“on-channel”) during Detection also demonstrated optimal curves (“off-channel”) during Fine Discrimination, indicating that a subset of individuals can modulate tuning optimally for detection and discrimination. PMID:23678013
A description of discrete internal representation schemes for visual pattern discrimination.
Foster, D H
1980-01-01
A general description of a class of schemes for pattern vision is outlined in which the visual system is assumed to form a discrete internal representation of the stimulus. These representations are discrete in that they are considered to comprise finite combinations of "components" which are selected from a fixed and finite repertoire, and which designate certain simple pattern properties or features. In the proposed description it is supposed that the construction of an internal representation is a probabilistic process. A relationship is then formulated associating the probability density functions governing this construction and performance in visually discriminating patterns when differences in pattern shape are small. Some questions related to the application of this relationship to the experimental investigation of discrete internal representations are briefly discussed.
Visual performance with sport-tinted contact lenses in natural sunlight.
Erickson, Graham B; Horn, Fraser C; Barney, Tyler; Pexton, Brett; Baird, Richard Y
2009-05-01
The use of tinted and clear contact lenses (CLs) in all aspects of life is becoming increasingly popular, particularly in athletic activities. This study broadens previous research regarding performance-tinted CLs and their effects on measures of visual performance. Thirty-three subjects (14 male, 19 female) were fitted with clear B&L Optima 38, 50% visible light transmission Amber, and 36% visible light transmission Gray-Green Nike Maxsight CLs in an individualized randomized sequence. Subjects were dark-adapted with welding goggles before testing and in between subtests involving a Bailey-Lovie chart and the Haynes Distance Rock test. The sequence of testing was repeated for each lens modality. The Amber and Gray-Green lenses enabled subjects to recover vision faster in bright sunlight compared with clear lenses. Subjects were also able to achieve better visual recognition in bright sunlight compared with clear lenses. Additionally, the lenses allowed subjects to alternate fixation between a bright and a shaded target at a more rapid rate in bright sunlight compared with clear lenses. Subjects preferred both the Amber and Gray-Green lenses over clear lenses in the bright and shadowed target conditions. The results of the current study show that Maxsight Amber and Gray-Green lenses provide better contrast discrimination in bright sunlight, better contrast discrimination when alternating between bright and shaded target conditions, better speed of visual recovery in bright sunlight, and better overall visual performance in bright and shaded target conditions compared with clear lenses.
Visual feature discrimination versus compression ratio for polygonal shape descriptors
NASA Astrophysics Data System (ADS)
Heuer, Joerg; Sanahuja, Francesc; Kaup, Andre
2000-10-01
In the last decade, several methods for low-level indexing of visual features have appeared. Most often these were evaluated with respect to their discrimination power using measures like precision and recall. Accordingly, the targeted application was indexing of visual data within databases. During the standardization process of MPEG-7 the view on indexing of visual data changed, taking communication aspects into account as well, where coding efficiency is important. Even if the descriptors used for indexing are small compared to the size of images, it is recognized that several descriptors can be linked to an image, characterizing different features and regions. Besides the importance of a small memory footprint for transmission of the descriptor and storage in a database, search and filtering can eventually be sped up by reducing the dimensionality of the descriptor if the metric of the matching is adjusted accordingly. Based on a polygon shape descriptor presented for MPEG-7, this paper compares the discrimination power versus memory consumption of the descriptor. Different methods based on quantization are presented and their effect on retrieval performance is measured. Finally, an optimized computation of the descriptor is presented.
Bastien, Maude; Moffet, Hélène; Bouyer, Laurent; Perron, Marc; Hébert, Luc J; Leblond, Jean
2014-02-01
The Star Excursion Balance Test (SEBT) has frequently been used to measure motor control and residual functional deficits at different stages of recovery from lateral ankle sprain (LAS) in various populations. However, the validity of the measure used to characterize performance--the maximal reach distance (MRD) measured by visual estimation--is still unknown. To evaluate the concurrent validity of the MRD in the SEBT estimated visually vs the MRD measured with a 3D motion-capture system and evaluate and compare the discriminant validity of 2 MRD-normalization methods (by height or by lower-limb length) in participants with or without LAS (n = 10 per group). There is a high concurrent validity and a good degree of accuracy between the visual estimation measurement and the MRD gold-standard measurement for both groups and under all conditions. The Cohen d ratios between groups and MANOVA products were higher when computed from MRD data normalized by height. The results support the concurrent validity of visual estimation of the MRD and the use of the SEBT to evaluate motor control. Moreover, normalization of MRD data by height appears to increase the discriminant validity of this test.
2018-01-01
Many individuals with posttraumatic stress disorder (PTSD) report experiencing frequent intrusive memories of the original traumatic event (e.g., flashbacks). These memories can be triggered by situations or stimuli that reflect aspects of the trauma and may reflect basic processes in learning and memory, such as generalization. It is possible that, through increased generalization, non-threatening stimuli that once evoked normal memories become associated with traumatic memories. Previous research has reported increased generalization in PTSD, but the role of visual discrimination processes has not been examined. To investigate visual discrimination in PTSD, 143 participants (Veterans and civilians) self-assessed for symptom severity were grouped according to the presence of severe PTSD symptoms (PTSS) vs. few/no symptoms (noPTSS). Participants were given a visual match-to-sample pattern separation task that varied trials by spatial separation (Low, Medium, High) and temporal delays (5, 10, 20, 30 s). Unexpectedly, the PTSS group demonstrated better discrimination performance than the noPTSS group at the most difficult spatial trials (Low spatial separation). Further assessment of accuracy and reaction time using diffusion drift modeling indicated that the better performance by the PTSS group on the hardest trials was not explained by slower reaction times, but rather a faster accumulation of evidence during decision making in conjunction with a reduced threshold, indicating a tendency in the PTSS group to decide quickly rather than waiting for additional evidence to support the decision. This result supports the need for future studies examining the precise role of discrimination and generalization in PTSD, and how these cognitive processes might contribute to expression and maintenance of PTSD symptoms. PMID:29736339
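The diffusion-model account lends itself to a toy simulation. The sketch below is not the study's fitted model, and all parameter values are invented; it only shows how a higher drift rate combined with a reduced decision threshold produces faster decisions without a loss of accuracy, the pattern described for the PTSS group.

```python
import numpy as np

rng = np.random.default_rng(4)

def ddm_trial(drift, threshold, dt=0.005, noise=1.0):
    # Euler-Maruyama simulation of one drift-diffusion trial
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x > 0, t  # (correct?, reaction time in s)

def summarize(drift, threshold, n=300):
    results = [ddm_trial(drift, threshold) for _ in range(n)]
    accuracy = float(np.mean([c for c, _ in results]))
    mean_rt = float(np.mean([t for _, t in results]))
    return accuracy, mean_rt

# Invented parameter pairs: a cautious observer vs. faster evidence
# accumulation with a reduced decision bound (the PTSS-like pattern).
slow_careful = summarize(drift=1.0, threshold=1.5)
fast_liberal = summarize(drift=2.0, threshold=1.0)
```

With these settings, fast_liberal yields markedly shorter mean reaction times than slow_careful at comparable or better accuracy.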
Exploring What's Missing: What Do Target Absent Trials Reveal about Autism Search Superiority?
ERIC Educational Resources Information Center
Keehn, Brandon; Joseph, Robert M.
2016-01-01
We used eye-tracking to investigate the roles of enhanced discrimination and peripheral selection in superior visual search in autism spectrum disorder (ASD). Children with ASD were faster at visual search than their typically developing peers. However, group differences in performance and eye-movements did not vary with the level of difficulty of…
Right Hand Presence Modulates Shifts of Exogenous Visuospatial Attention in Near Perihand Space
ERIC Educational Resources Information Center
Lloyd, Donna M.; Azanon, Elena; Poliakoff, Ellen
2010-01-01
To investigate attentional shifting in perihand space, we measured performance on a covert visual orienting task under different hand positions. Participants discriminated visual shapes presented on a screen and responded using footpedals placed under their right foot. With the right hand positioned by the right side of the screen, mean cueing…
Controlling the spotlight of attention: visual span size and flexibility in schizophrenia.
Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M
2011-10-01
The current study investigated the size and flexible control of visual span among patients with schizophrenia during visual search performance. Visual span is the region of the visual field from which one extracts information during a single eye fixation, and a larger visual span size is linked to more efficient search performance. Therefore, a reduced visual span may explain patients' impaired performance on search tasks. The gaze-contingent moving window paradigm was used to estimate the visual span size of patients and healthy participants while they performed two different search tasks. In addition, changes in visual span size were measured as a function of two manipulations of task difficulty: target-distractor similarity and stimulus familiarity. Patients with schizophrenia searched more slowly across both tasks and conditions. Patients also demonstrated smaller visual span sizes on the easier search condition in each task. Moreover, healthy controls' visual span size increased as target discriminability or distractor familiarity increased. This modulation of visual span size, however, was reduced or not observed among patients. The implications of the present findings, with regard to previously reported visual search deficits, and other functional and structural abnormalities associated with schizophrenia, are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wihardi, Y.; Setiawan, W.; Nugraha, E.
2018-01-01
In this research we build a content-based image retrieval system (CBIRS) based on a learned distance/similarity function using Linear Discriminant Analysis (LDA) and Histogram of Oriented Gradients (HoG) features. Our method is invariant to the depiction of an image, covering image-to-image, sketch-to-image, and painting-to-image similarity. LDA decreases execution time compared with the state-of-the-art method, but it still needs improvement in terms of accuracy. The inaccuracy in our experiment occurred because we did not perform a sliding-window search and because of the low number of negative samples of natural-world images.
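As a rough illustration of the distance-learning idea (not the paper's implementation: the HoG descriptors are replaced here by synthetic feature vectors, and a plain two-class Fisher discriminant stands in for the learned similarity function):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for HoG descriptors: 16-D feature vectors.
# Each "concept" (image content) produces depictions (photo, sketch, ...)
# that share a mean direction but differ by noise.
def depictions(mean, n=20, noise=0.4):
    return mean + noise * rng.standard_normal((n, mean.size))

concept_a = rng.standard_normal(16)
concept_b = rng.standard_normal(16)
X = np.vstack([depictions(concept_a), depictions(concept_b)])
y = np.array([0] * 20 + [1] * 20)

# Two-class Fisher LDA: w = Sw^{-1} (mu1 - mu0) maximizes between-class
# separation relative to within-class scatter.
mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)
w = np.linalg.solve(Sw, mu1 - mu0)

# "Learned distance": compare images in the 1-D discriminant space.
proj = X @ w
query = depictions(concept_a, n=1)[0] @ w       # a new depiction of concept A
nearest = int(np.argmin(np.abs(proj - query)))  # nearest-neighbour retrieval
print(y[nearest])  # retrieves an image of the same concept: 0
```

Because distances are computed in a low-dimensional discriminant space rather than over the raw descriptors, retrieval is cheap, which is the execution-time advantage the abstract refers to.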
Exposure to Organic Solvents Used in Dry Cleaning Reduces Low and High Level Visual Function
Jiménez Barbosa, Ingrid Astrid
2015-01-01
Purpose To investigate whether exposure to occupational levels of organic solvents in the dry cleaning industry is associated with neurotoxic symptoms and visual deficits in the perception of basic visual features such as luminance contrast and colour, higher level processing of global motion and form (Experiment 1), and cognitive function as measured in a visual search task (Experiment 2). Methods The Q16 neurotoxic questionnaire, a commonly used measure of neurotoxicity (by the World Health Organization), was administered to assess the neurotoxic status of a group of 33 dry cleaners exposed to occupational levels of organic solvents (OS) and 35 age-matched non dry-cleaners who had never worked in the dry cleaning industry. In Experiment 1, to assess visual function, contrast sensitivity, colour/hue discrimination (Munsell Hue 100 test), global motion and form thresholds were assessed using computerised psychophysical tests. Sensitivity to global motion or form structure was quantified by varying the pattern coherence of global dot motion (GDM) and Glass pattern (oriented dot pairs) respectively (i.e., the percentage of dots/dot pairs that contribute to the perception of global structure). In Experiment 2, a letter visual-search task was used to measure reaction times (as a function of the number of elements: 4, 8, 16, 32, 64 and 100) in both parallel and serial search conditions. Results Dry cleaners exposed to organic solvents had significantly higher scores on the Q16 compared to non dry-cleaners indicating that dry cleaners experienced more neurotoxic symptoms on average. The contrast sensitivity function for dry cleaners was significantly lower at all spatial frequencies relative to non dry-cleaners, which is consistent with previous studies. Poorer colour discrimination performance was also noted in dry cleaners than non dry-cleaners, particularly along the blue/yellow axis. 
In a new finding, we report that global form and motion thresholds for dry cleaners were also significantly higher, almost double those obtained from non dry-cleaners. However, reaction time performance on both parallel and serial visual search did not differ between dry cleaners and non dry-cleaners. Conclusions Exposure to occupational levels of organic solvents is associated with neurotoxicity, which is in turn associated with both low-level deficits (such as the perception of contrast and discrimination of colour) and high-level visual deficits such as the perception of global form and motion, but not visual search performance. The latter finding indicates that the deficits in visual function are unlikely to be due to changes in general cognitive performance. PMID:25933026
Blindness enhances tactile acuity and haptic 3-D shape discrimination.
Norman, J Farley; Bartholomew, Ashley N
2011-10-01
This study compared the sensory and perceptual abilities of the blind and sighted. The 32 participants were required to perform two tasks: tactile grating orientation discrimination (to determine tactile acuity) and haptic three-dimensional (3-D) shape discrimination. The results indicated that the blind outperformed their sighted counterparts (individually matched for both age and sex) on both tactile tasks. The improvements in tactile acuity that accompanied blindness occurred for all blind groups (congenital, early, and late). However, the improvements in haptic 3-D shape discrimination only occurred for the early-onset and late-onset blindness groups; the performance of the congenitally blind was no better than that of the sighted controls. The results of the present study demonstrate that blindness does lead to an enhancement of tactile abilities, but they also suggest that early visual experience may play a role in facilitating haptic 3-D shape discrimination.
Transfer of perceptual learning between different visual tasks
McGovern, David P.; Webb, Ben S.; Peirce, Jonathan W.
2012-01-01
Practice in most sensory tasks substantially improves perceptual performance. A hallmark of this ‘perceptual learning' is its specificity for the basic attributes of the trained stimulus and task. Recent studies have challenged the specificity of learned improvements, although transfer between substantially different tasks has yet to be demonstrated. Here, we measure the degree of transfer between three distinct perceptual tasks. Participants trained on an orientation discrimination, a curvature discrimination, or a ‘global form' task, all using stimuli comprised of multiple oriented elements. Before and after training they were tested on all three and a contrast discrimination control task. A clear transfer of learning was observed, in a pattern predicted by the relative complexity of the stimuli in the training and test tasks. Our results suggest that sensory improvements derived from perceptual learning can transfer between very different visual tasks. PMID:23048211
Statistical learning and auditory processing in children with music training: An ERP study.
Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Ibrahim, Ronny; Arciuli, Joanne
2017-07-01
The question of whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with music training versus those without music training across a range of auditory and cognitive measures, including the ability to implicitly detect statistical regularities in input (statistical learning). Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained and age-matched untrained children between the ages of 9 and 11 years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks. Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (initial stimulus) elicited larger responses in the musically trained children during both auditory and visual statistical learning tasks. In addition, children's music skills were associated with performance on auditory and visual behavioural statistical learning tasks. Our data suggest that individual differences in musical skills are associated with children's ability to detect regularities. The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
Additional Remarks on Designing Category-Level Attributes for Discriminative Visual Recognition
2013-01-01
Felix X. Yu, Shih-Fu Chang (Columbia University); Liangliang Cao, Rogerio S. Feris, John R. Smith (IBM T. J. Watson Research Center). This report provides additional remarks on "Designing Category-Level Attributes for Discriminative Visual Recognition" [3], beginning with an overview of the proposed approach.
Discrimination of holograms and real objects by pigeons (Columba livia) and humans (Homo sapiens).
Stephan, Claudia; Steurer, Michael M; Aust, Ulrike
2014-08-01
The type of stimulus material employed in visual tasks is crucial to all comparative cognition research that involves object recognition. There is considerable controversy about the use of 2-dimensional stimuli and the impact that the lack of the 3rd dimension (i.e., depth) may have on animals' performance in tests for their visual and cognitive abilities. We report evidence of discrimination learning using a completely novel type of stimuli, namely, holograms. Like real objects, holograms provide full 3-dimensional shape information but they also offer many possibilities for systematically modifying the appearance of a stimulus. Hence, they provide a promising means for investigating visual perception and cognition of different species in a comparative way. We trained pigeons and humans to discriminate either between 2 real objects or between holograms of the same 2 objects, and we subsequently tested both species for the transfer of discrimination to the other presentation mode. The lack of any decrements in accuracy suggests that real objects and holograms were perceived as equivalent in both species and shows the general appropriateness of holograms as stimuli in visual tasks. A follow-up experiment involving the presentation of novel views of the training objects and holograms revealed some interspecies differences in rotational invariance, thereby confirming and extending the results of previous studies. Taken together, these results suggest that holograms may not only provide a promising tool for investigating yet unexplored issues, but their use may also lead to novel insights into some crucial aspects of comparative visual perception and categorization.
Preschoolers Benefit From Visually Salient Speech Cues
Holt, Rachael Frush
2015-01-01
Purpose This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. The authors also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visual saliency of consonant confusions in auditory-only and AV word recognition. Results Four-year-olds and adults demonstrated visual influence on all measures. Three-year-olds demonstrated visual influence on speech discrimination and recognition measures. All groups demonstrated greater AV benefit for the visually salient discrimination contrasts. AV recognition benefit in 4-year-olds and adults depended on the visual saliency of speech sounds. Conclusions Preschoolers can demonstrate AV speech integration. Their AV benefit results from efficient use of visually salient speech cues. Four-year-olds, but not 3-year-olds, used visual phonological knowledge to take advantage of visually salient speech cues, suggesting possible developmental differences in the mechanisms of AV benefit. PMID:25322336
Simultaneous Visual Discrimination in Asian Elephants
ERIC Educational Resources Information Center
Nissani, Moti; Hoefler-Nissani, Donna; Lay, U. Tin; Htun, U. Wan
2005-01-01
Two experiments explored the behavior of 20 Asian elephants ("Elephas maximus") in simultaneous visual discrimination tasks. In Experiment 1, 7 Burmese logging elephants acquired a white+/black- discrimination, reaching criterion in a mean of 2.6 sessions and 117 discrete trials, whereas 4 elephants acquired a black+/white- discrimination in 5.3…
DOT National Transportation Integrated Search
1974-11-01
Two hundred and twenty-nine air traffic controller trainees were tested on the CAMI Multiple Task Performance Battery. The battery provides objective measures of monitoring, arithmetical skills, visual discrimination, and group problem solving. The c...
The reliability and clinical correlates of figure-ground perception in schizophrenia.
Malaspina, Dolores; Simon, Naomi; Goetz, Raymond R; Corcoran, Cheryl; Coleman, Eliza; Printz, David; Mujica-Parodi, Lilianne; Wolitzky, Rachel
2004-01-01
Schizophrenia subjects are impaired in a number of visual attention paradigms. However, their performance on tests of figure-ground visual perception (FGP), which requires subjects to visually discriminate figures embedded in a rival background, is relatively unstudied. We examined FGP in 63 schizophrenia patients and 27 control subjects and found that the patients performed the FGP test reliably and had significantly lower FGP scores than the control subjects. Figure-ground visual perception was significantly correlated with other neuropsychological test scores and was inversely related to negative symptoms. It was unrelated to antipsychotic medication treatment. Figure-ground visual perception depends on "top down" processing of visual stimuli, and thus these data suggest that dysfunction in the higher-level pathways that modulate visual perceptual processes may also be related to a core defect in schizophrenia.
The nootropic properties of ginseng saponin Rb1 are linked to effects on anxiety.
Churchill, James D; Gerson, Jennifer L; Hinton, Kendra A; Mifek, Jennifer L; Walter, Michael J; Winslow, Cynthia L; Deyo, Richard A
2002-01-01
Previous studies have shown that crude ginseng extracts enhance performance on shock-motivated tasks. Whether such performance enhancements are due to memory-enhancing (nootropic) properties of ginseng, or to other non-specific effects such as an influence on anxiety has not been determined. In the present study, we evaluated both the nootropic and anxiolytic effects of the ginseng saponin Rb1. In the first experiment, 80 five-day-old male chicks received intraperitoneal injections of 0, 0.25, 2.5 or 5.0 mg/kg Rb1. Performance on a visual discrimination task was evaluated 15 minutes, 24 and 72 hours later. Acquisition of a visual discrimination task was unaffected by drug treatment, but the number of errors was significantly reduced in the 0.25 mg/kg group during retention trials completed 24 and 72 hours after injection. Animals receiving higher dosages showed trends towards enhancement initially, but demonstrated impaired performance when tested 72 hours later. Rb1 had no effect on response rates or body weight. In the second experiment, 64 five-day-old male chicks received similar injections of Rb1 (0, 0.25, 2.5 or 5.0 mg/kg) and separation distress was evaluated 15 minutes, 24 and 72 hours later. Rb1 produced a change in separation distress that depended on the dose and environmental condition under which distress was recorded. These data suggest that Rb1 can improve memory for a visual discrimination task and that the nootropic effect may be related to changes in anxiety.
Concurrent visuomotor behaviour improves form discrimination in a patient with visual form agnosia.
Schenk, Thomas; Milner, A David
2006-09-01
It is now well established that the visual brain is divided into two visual streams, the ventral and the dorsal stream. Milner and Goodale have suggested that the ventral stream is dedicated for processing vision for perception and the dorsal stream vision for action [A.D. Milner & M.A. Goodale (1995) The Visual Brain in Action, Oxford University Press, Oxford]. However, it is possible that ongoing processes in the visuomotor stream will nevertheless have an effect on perceptual processes. This possibility was examined in the present study. We have examined the visual form-discrimination performance of the form-agnosic patient D.F. with and without a concurrent visuomotor task, and found that her performance was significantly improved in the former condition. This suggests that the visuomotor behaviour provides cues that enhance her ability to recognize the form of the target object. In control experiments we have ruled out proprioceptive and efferent cues, and therefore propose that D.F. can, to a significant degree, access the object's visuomotor representation in the dorsal stream. Moreover, we show that the grasping-induced perceptual improvement disappears if the target objects only differ with respect to their shape but not their width. This suggests that shape information per se is not used for this grasping task.
Delhey, Kaspar; Hall, Michelle; Kingma, Sjouke A; Peters, Anne
2013-01-07
Colour signals are expected to match visual sensitivities of intended receivers. In birds, evolutionary shifts from violet-sensitive (V-type) to ultraviolet-sensitive (U-type) vision have been linked to increased prevalence of colours rich in shortwave reflectance (ultraviolet/blue), presumably due to better perception of such colours by U-type vision. Here we provide the first test of this widespread idea using fairy-wrens and allies (Family Maluridae) as a model, a family where shifts in visual sensitivities from V- to U-type eyes are associated with male nuptial plumage rich in ultraviolet/blue colours. Using psychophysical visual models, we compared the performance of both types of visual systems at two tasks: (i) detecting contrast between male plumage colours and natural backgrounds, and (ii) perceiving intraspecific chromatic variation in male plumage. While U-type outperforms V-type vision at both tasks, the crucial test here is whether U-type vision performs better at detecting and discriminating ultraviolet/blue colours when compared with other colours. This was true for detecting contrast between plumage colours and natural backgrounds (i), but not for discriminating intraspecific variability (ii). Our data indicate that selection to maximize conspicuousness to conspecifics may have led to the correlation between ultraviolet/blue colours and U-type vision in this clade of birds.
Peel, Hayden J.; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A.
2018-01-01
Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features. PMID:29725292
Spatial vision in older adults: perceptual changes and neural bases.
McKendrick, Allison M; Chan, Yu Man; Nguyen, Bao N
2018-05-17
The number of older adults is rapidly increasing internationally, leading to a significant increase in research on how healthy ageing impacts vision. Most clinical assessments of spatial vision involve simple detection (letter acuity, grating contrast sensitivity, perimetry). However, most natural visual environments are more spatially complicated, requiring contrast discrimination, and the delineation of object boundaries and contours, which are typically present on non-uniform backgrounds. In this review we discuss recent research that reports on the effects of normal ageing on these more complex visual functions, specifically in the context of recent neurophysiological studies. Recent research has concentrated on understanding the effects of healthy ageing on neural responses within the visual pathway in animal models. Such neurophysiological research has led to numerous, subsequently tested, hypotheses regarding the likely impact of healthy human ageing on specific aspects of spatial vision. Healthy normal ageing impacts significantly on spatial visual information processing from the retina through to visual cortex. Some human data validates that obtained from studies of animal physiology, however some findings indicate that rethinking of presumed neural substrates is required. Notably, not all spatial visual processes are altered by age. Healthy normal ageing impacts significantly on some spatial visual processes (in particular centre-surround tasks), but leaves contrast discrimination, contrast adaptation, and orientation discrimination relatively intact. The study of older adult vision contributes to knowledge of the brain mechanisms altered by the ageing process, can provide practical information regarding visual environments that older adults may find challenging, and may lead to new methods of assessing visual performance in clinical environments. © 2018 The Authors Ophthalmic & Physiological Optics © 2018 The College of Optometrists.
ERIC Educational Resources Information Center
Lee, Inah; Shin, Ji Yun
2012-01-01
The exact roles of the medial prefrontal cortex (mPFC) in conditional choice behavior are unknown and a visual contextual response selection task was used for examining the issue. Inactivation of the mPFC severely disrupted performance in the task. mPFC inactivations, however, did not disrupt the capability of perceptual discrimination for visual…
Khansari, Maziyar M; O’Neill, William; Penn, Richard; Chau, Felix; Blair, Norman P; Shahidi, Mahnaz
2016-01-01
The conjunctiva is a densely vascularized mucous membrane covering the sclera of the eye, with the unique advantage of being accessible for direct visualization and non-invasive imaging. The purpose of this study is to apply an automated quantitative method for discriminating different stages of diabetic retinopathy (DR) using conjunctival microvasculature images. Fine structural analysis of conjunctival microvasculature images was performed by ordinary least squares regression and Fisher linear discriminant analysis. Conjunctival images from groups of non-diabetic and diabetic subjects at different stages of DR were discriminated. The automated method’s discrimination rates were higher than those achieved by human observers. The method allowed sensitive and rapid discrimination by assessment of conjunctival microvasculature images and can be potentially useful for DR screening and monitoring. PMID:27446692
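The discrimination step described above (regression-derived features fed into Fisher linear discriminant analysis) can be sketched on synthetic data. Everything below is an assumption for illustration only: the feature values, group sizes, and midpoint threshold rule are invented, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
# Invented microvasculature features (e.g. tortuosity, diameter) per subject
healthy = rng.normal([1.0, 2.0], 0.3, size=(50, 2))
diabetic = rng.normal([1.8, 2.8], 0.3, size=(50, 2))

def fisher_direction(a, b):
    """Projection direction maximizing between- over within-class scatter."""
    sw = np.cov(a.T) * (len(a) - 1) + np.cov(b.T) * (len(b) - 1)
    return np.linalg.inv(sw) @ (a.mean(axis=0) - b.mean(axis=0))

w = fisher_direction(healthy, diabetic)
# Classify by projecting onto w and thresholding at the midpoint of the
# projected class means (healthy projects above the cut, diabetic below)
cut = ((healthy @ w).mean() + (diabetic @ w).mean()) / 2
accuracy = (((healthy @ w) > cut).mean() + ((diabetic @ w) <= cut).mean()) / 2
print(f"discrimination accuracy: {accuracy:.2f}")
```

With well-separated synthetic groups the projected classes barely overlap; real conjunctival features would of course be noisier and higher-dimensional.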
Wang, Zhengke; Cheng-Lai, Alice; Song, Yan; Cutting, Laurie; Jiang, Yuzheng; Lin, Ou; Meng, Xiangzhi; Zhou, Xiaolin
2014-08-01
Learning to read involves discriminating between different written forms and establishing connections with phonology and semantics. This process may be partially built upon visual perceptual learning, during which the ability to process the attributes of visual stimuli progressively improves with practice. The present study investigated to what extent Chinese children with developmental dyslexia have deficits in perceptual learning by using a texture discrimination task, in which participants were asked to discriminate the orientation of target bars. Experiment 1 demonstrated that, when all participants started with the same initial stimulus-to-mask onset asynchrony (SOA) of 300 ms, the threshold SOA (adjusted according to response accuracy to reach 80% correct) did not decrease over 5 days of training for children with dyslexia, whereas it steadily decreased over training for the control group. Experiment 2 used an adaptive procedure to determine the threshold SOA for each participant during training. Results showed that both the dyslexia group and the control group attained perceptual learning over the sessions in 5 days, although threshold SOAs were significantly higher for the dyslexia group than for the control group; moreover, across individual participants, the threshold SOA correlated negatively with performance in Chinese character recognition. These findings suggest that deficits in visual perceptual processing and learning might, in part, underpin difficulty in reading Chinese. Copyright © 2014 John Wiley & Sons, Ltd.
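The abstract does not specify which adaptive procedure Experiment 2 used; a common choice for an 80%-correct criterion is a 3-down/1-up staircase, which converges near 79.4% correct. The sketch below simulates such a staircase against a toy observer — the psychometric function, step size, floor, and trial count are all assumptions, not details from the study.

```python
import random

random.seed(1)

def p_correct(soa_ms, threshold_ms=120.0):
    """Toy psychometric function: accuracy rises with longer SOA (0.5 -> 1.0)."""
    return 0.5 + 0.5 / (1.0 + (threshold_ms / max(soa_ms, 1e-9)) ** 3)

def run_staircase(start_soa=300.0, step=20.0, n_trials=400):
    soa, streak, last_dir, reversals = start_soa, 0, 0, []
    for _ in range(n_trials):
        if random.random() < p_correct(soa):
            streak += 1
            if streak == 3:                  # 3 correct in a row -> harder
                streak = 0
                if last_dir == +1:
                    reversals.append(soa)    # direction flipped: a reversal
                soa = max(soa - step, 10.0)
                last_dir = -1
        else:                                # any error -> easier
            streak = 0
            if last_dir == -1:
                reversals.append(soa)
            soa += step
            last_dir = +1
    if not reversals:
        return soa
    tail = reversals[-8:]                    # average the last few reversals
    return sum(tail) / len(tail)

thr = run_staircase()
print(f"estimated threshold SOA: {thr:.0f} ms")
```

Averaging the final reversals is the standard way to read a threshold off a staircase; the estimate settles somewhat above the toy observer's 75%-correct point, as expected for a 3-down/1-up rule.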
Bratzke, Daniel; Seifried, Tanja; Ulrich, Rolf
2012-08-01
This study assessed possible cross-modal transfer effects of training in a temporal discrimination task from vision to audition as well as from audition to vision. We employed a pretest-training-post-test design including a control group that performed only the pretest and the post-test. Trained participants showed better discrimination performance with their trained interval than the control group. This training effect transferred to the other modality only for those participants who had been trained with auditory stimuli. The present study thus demonstrates for the first time that training on temporal discrimination within the auditory modality can transfer to the visual modality but not vice versa. This finding represents a novel illustration of auditory dominance in temporal processing and is consistent with the notion that time is primarily encoded in the auditory system.
Lambert, Anthony J; Wootton, Adrienne
2017-08-01
Different patterns of high-density EEG activity were elicited by the same peripheral stimuli, in the context of Landmark Cueing and Perceptual Discrimination tasks. The C1 component of the visual event-related potential (ERP) at parieto-occipital electrode sites was larger in the Landmark Cueing task, and source localisation suggested greater activation in the superior parietal lobule (SPL) in this task, compared to the Perceptual Discrimination task, indicating stronger early recruitment of the dorsal visual stream. In the Perceptual Discrimination task, source localisation suggested widespread activation of the inferior temporal gyrus (ITG) and fusiform gyrus (FFG), structures associated with the ventral visual stream, during the early phase of the P1 ERP component. Moreover, during a later epoch (171–270 ms after stimulus onset), increased temporal-occipital negativity and stronger recruitment of ITG and FFG were observed in the Perceptual Discrimination task. These findings illuminate the contrasting functions of the dorsal and ventral visual streams, to support rapid shifts of attention in response to contextual landmarks, and conscious discrimination, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
Basic quantitative assessment of visual performance in patients with very low vision.
Bach, Michael; Wilke, Michaela; Wilhelm, Barbara; Zrenner, Eberhart; Wilke, Robert
2010-02-01
A variety of approaches to developing visual prostheses are being pursued: subretinal, epiretinal, via the optic nerve, or via the visual cortex. This report presents a method of comparing their efficacy at genuinely improving visual function, starting at no light perception (NLP). A test battery (a computer program, Basic Assessment of Light and Motion [BaLM]) was developed in four basic visual dimensions: (1) light perception (light/no light), with an unstructured large-field stimulus; (2) temporal resolution, with single versus double flash discrimination; (3) localization of light, where a wedge extends from the center into four possible directions; and (4) motion, with a coarse pattern moving in one of four directions. Two- or four-alternative, forced-choice paradigms were used. The participants' responses were self-paced and delivered with a keypad. The feasibility of the BaLM was tested in 73 eyes of 51 patients with low vision. The light and time test modules discriminated between NLP and light perception (LP). The localization and motion modules showed no significant response for NLP but discriminated between LP and hand movement (HM). All four modules reached their ceilings in the acuity categories higher than HM. BaLM results systematically differed between the very-low-acuity categories NLP, LP, and HM. Light and time yielded similar results, as did localization and motion; still, for assessing the visual prostheses with differing temporal characteristics, they are not redundant. The results suggest that this simple test battery provides a quantitative assessment of visual function in the very-low-vision range from NLP to HM.
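Deciding whether forced-choice responses exceed chance — the comparison that separates NLP from LP above — reduces to an exact binomial test against the guessing rate (1/2 for two alternatives, 1/4 for four). The sketch below is a generic illustration; the trial counts and alpha level are invented, not BaLM's actual protocol.

```python
from scipy.stats import binomtest

def above_chance(n_correct, n_trials, n_alternatives, alpha=0.05):
    """One-sided exact binomial test of forced-choice performance vs guessing."""
    res = binomtest(n_correct, n_trials, p=1.0 / n_alternatives,
                    alternative="greater")
    return res.pvalue < alpha

# 75% correct on a hypothetical 4AFC module clearly beats 25% guessing;
# 58% correct on a hypothetical 2AFC module does not reliably beat 50%.
print(above_chance(18, 24, 4))
print(above_chance(14, 24, 2))
```

The same proportion correct can be significant on a 4AFC module yet inconclusive on a 2AFC one, which is why chance level must be factored in when comparing modules.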
Color vision but not visual attention is altered in migraine.
Shepherd, Alex J
2006-04-01
To examine visual search performance in migraine and headache-free control groups and to determine whether reports of selective color vision deficits in migraine occur preattentively. Visual search is a classic technique to measure certain components of visual attention. The technique can be manipulated to measure both preattentive (automatic) and attentive processes. Here, visual search for colored targets was employed to extend earlier reports that the detection or discrimination of colors selective for the short-wavelength sensitive cone photoreceptors in the retina (S or "blue" cones) is impaired in migraine. Visual search performance for small and large color differences was measured in 34 migraine and 34 control participants. Small and large color differences were included to assess attentive and preattentive processing, respectively. In separate conditions, colored stimuli were chosen that would be detected selectively by either the S-, or by the long- (L or "red") and middle (M or "green")-wavelength sensitive cone photoreceptors. The results showed no preattentive differences between the migraine and control groups. For active, or attentive, search, differences between the migraine and control groups occurred only for colors detected by the S-cones; there were no differences for colors detected by the L- and M-cones. The migraine group responded significantly more slowly than the control group for the S-cone colors. The pattern of results indicates that there are no overall differences in search performance between migraine and control groups. The differences found for the S-cone colors are attributed to impaired discrimination of these colors in migraine and not to differences in attention.
Brain activity associated with selective attention, divided attention and distraction.
Salo, Emma; Salmela, Viljami; Salmi, Juha; Numminen, Jussi; Alho, Kimmo
2017-06-01
Top-down controlled selective or divided attention to sounds and visual objects, as well as bottom-up triggered attention to auditory and visual distractors, has been widely investigated. However, no study has systematically compared brain activations related to all these types of attention. To this end, we used functional magnetic resonance imaging (fMRI) to measure brain activity in participants performing a tone pitch or a foveal grating orientation discrimination task, or both, distracted by novel sounds not sharing frequencies with the tones or by extrafoveal visual textures. To force focusing of attention to tones or gratings, or both, task difficulty was kept constantly high with an adaptive staircase method. A whole brain analysis of variance (ANOVA) revealed fronto-parietal attention networks for both selective auditory and visual attention. A subsequent conjunction analysis indicated partial overlaps of these networks. However, like some previous studies, the present results also suggest segregation of prefrontal areas involved in the control of auditory and visual attention. The ANOVA also suggested, and another conjunction analysis confirmed, an additional activity enhancement in the left middle frontal gyrus related to divided attention supporting the role of this area in top-down integration of dual task performance. Distractors expectedly disrupted task performance. However, contrary to our expectations, activations specifically related to the distractors were found only in the auditory and visual cortices. This suggests gating of the distractors from further processing perhaps due to strictly focused attention in the current demanding discrimination tasks. Copyright © 2017 Elsevier B.V. All rights reserved.
Visual Learning Induces Changes in Resting-State fMRI Multivariate Pattern of Information.
Guidotti, Roberto; Del Gratta, Cosimo; Baldassarre, Antonello; Romani, Gian Luca; Corbetta, Maurizio
2015-07-08
When measured with functional magnetic resonance imaging (fMRI) in the resting state (R-fMRI), spontaneous activity is correlated between brain regions that are anatomically and functionally related. Learning and/or task performance can induce modulation of the resting synchronization between brain regions. Moreover, at the neuronal level spontaneous brain activity can replay patterns evoked by a previously presented stimulus. Here we test whether visual learning/task performance can induce a change in the patterns of coded information in R-fMRI signals consistent with a role of spontaneous activity in representing task-relevant information. Human subjects underwent R-fMRI before and after perceptual learning on a novel visual shape orientation discrimination task. Task-evoked fMRI patterns to trained versus novel stimuli were recorded after learning was completed, and before the second R-fMRI session. Using multivariate pattern analysis on task-evoked signals, we found patterns in several cortical regions, as follows: visual cortex, V3/V3A/V7; within the default mode network, precuneus, and inferior parietal lobule; and, within the dorsal attention network, intraparietal sulcus, which discriminated between trained and novel visual stimuli. The accuracy of classification was strongly correlated with behavioral performance. Next, we measured multivariate patterns in R-fMRI signals before and after learning. The frequency and similarity of resting states representing the task/visual stimuli states increased post-learning in the same cortical regions recruited by the task. These findings support a representational role of spontaneous brain activity. Copyright © 2015 the authors.
Countermeasures to Improve the Driving Performance of Older Drivers.
ERIC Educational Resources Information Center
Ashman, Richard D.; And Others
1994-01-01
In a 2-year project, 105 older drivers were given physical therapy (flexibility exercises), perceptual therapy (to improve visual discrimination), and driver education; traffic engineering modifications were also made. All four interventions improved performance an average of 7.9%. Engineering was most cost effective on high-volume roads, the…
Pina Rodrigues, Ana; Rebola, José; Jorge, Helena; Ribeiro, Maria José; Pereira, Marcelino; van Asselen, Marieke; Castelo-Branco, Miguel
2017-01-01
The specificity of visual channel impairment in dyslexia has been the subject of much controversy. The purpose of this study was to determine if a differential pattern of impairment can be verified between visual channels in children with developmental dyslexia, and in particular, if the pattern of deficits is more conspicuous in tasks where the magnocellular-dorsal system recruitment prevails. Additionally, we also aimed at investigating the association between visual perception thresholds and reading. In the present case-control study, we compared perception thresholds of 33 children diagnosed with developmental dyslexia and 34 controls in a speed discrimination task, an achromatic contrast sensitivity task, and a chromatic contrast sensitivity task. Moreover, we addressed the correlation between the different perception thresholds and reading performance, as assessed by means of a standardized reading test (accuracy and fluency). Group comparisons were performed by the Mann-Whitney U test, and Spearman's rho was used as a measure of correlation. Results showed that, when compared to controls, children with dyslexia were more impaired in the speed discrimination task, followed by the achromatic contrast sensitivity task, with no impairment in the chromatic contrast sensitivity task. These results are also consistent with the magnocellular theory since the impairment profile of children with dyslexia in the visual threshold tasks reflected the amount of magnocellular-dorsal stream involvement. Moreover, both speed and achromatic thresholds were significantly correlated with reading performance, in terms of accuracy and fluency. Notably, chromatic contrast sensitivity thresholds did not correlate with any of the reading measures. Our evidence stands in favor of a differential visual channel deficit in children with developmental dyslexia and contributes to the debate on the pathophysiology of reading impairments.
Visual Equivalence and Amodal Completion in Cuttlefish
Lin, I-Rong; Chiao, Chuan-Chin
2017-01-01
Modern cephalopods are among the most intelligent invertebrates, and this is accompanied by keen vision. Despite extensive studies investigating the visual systems of cephalopods, little is known about their visual perception and object recognition. In the present study, we investigated the visual processing of the cuttlefish Sepia pharaonis, including visual equivalence and amodal completion. Cuttlefish were trained to discriminate images of shrimp and fish using the operant conditioning paradigm. After cuttlefish reached the learning criteria, a series of discrimination tasks were conducted. In the visual equivalence experiment, several transformed versions of the training images were used, such as images reduced in size, images reduced in contrast, sketches of the images, the contours of the images, and silhouettes of the images. In the amodal completion experiment, partially occluded views of the original images were used. The results showed that cuttlefish were able to treat the size-reduced images and sketches as visually equivalent to the training images. Cuttlefish were also capable of recognizing partially occluded versions of the training images. Furthermore, individual differences in performance suggest that some cuttlefish may be able to recognize objects even when visual information is partly removed. These findings support the hypothesis that visual perception in cuttlefish involves both visual equivalence and amodal completion. The results from this research also provide insights into the visual processing mechanisms used by cephalopods. PMID:28220075
Visual body perception in anorexia nervosa.
Urgesi, Cosimo; Fornasari, Livia; Perini, Laura; Canalaz, Francesca; Cremaschi, Silvana; Faleschini, Laura; Balestrieri, Matteo; Fabbro, Franco; Aglioti, Salvatore Maria; Brambilla, Paolo
2012-05-01
Disturbance of body perception is a central aspect of anorexia nervosa (AN), and several neuroimaging studies have documented structural and functional alterations of occipito-temporal cortices involved in visual body processing. However, it is unclear whether these perceptual deficits involve more basic aspects of others' body perception. A consecutive sample of 15 adolescent patients with AN was compared with a group of 15 age- and gender-matched controls in delayed matching-to-sample tasks requiring visual discrimination of the form or the action of others' bodies. Patients showed better visual discrimination performance than controls in detail-based processing of body forms but not of body actions, which positively correlated with their increased tendency to convert a signal of punishment into a signal of reinforcement (higher persistence scores). The paradoxical advantage of patients with AN in detail-based body processing may be associated with their tendency to routinely explore body parts as a consequence of their obsessive worries about body appearance. Copyright © 2012 Wiley Periodicals, Inc.
Toomey, Matthew B.; McGraw, Kevin J.
2011-01-01
Background For many bird species, vision is the primary sensory modality used to locate and assess food items. The health and spectral sensitivities of the avian visual system are influenced by diet-derived carotenoid pigments that accumulate in the retina. Among wild House Finches (Carpodacus mexicanus), we have found that retinal carotenoid accumulation varies significantly among individuals and is related to dietary carotenoid intake. If diet-induced changes in retinal carotenoid accumulation alter spectral sensitivity, then they have the potential to affect visually mediated foraging performance. Methodology/Principal Findings In two experiments, we measured foraging performance of house finches with dietarily manipulated retinal carotenoid levels. We tested each bird's ability to extract visually contrasting food items from a matrix of inedible distracters under high-contrast (full) and dimmer low-contrast (red-filtered) lighting conditions. In experiment one, zeaxanthin-supplemented birds had significantly increased retinal carotenoid levels, but declined in foraging performance in the high-contrast condition relative to astaxanthin-supplemented birds that showed no change in retinal carotenoid accumulation. In experiments one and two combined, we found that retinal carotenoid concentrations predicted relative foraging performance in the low- vs. high-contrast light conditions in a curvilinear pattern. Performance was positively correlated with retinal carotenoid accumulation among birds with low to medium levels of accumulation (∼0.5–1.5 µg/retina), but declined among birds with very high levels (>2.0 µg/retina). Conclusion/Significance Our results suggest that carotenoid-mediated spectral filtering enhances color discrimination, but that this improvement is traded off against a reduction in sensitivity that can compromise visual discrimination. 
Thus, retinal carotenoid levels may be optimized to meet the visual demands of specific behavioral tasks and light environments. PMID:21747917
The effect of age upon the perception of 3-D shape from motion.
Norman, J Farley; Cheeseman, Jacob R; Pyles, Jessica; Baxter, Michael W; Thomason, Kelsey E; Calloway, Autum B
2013-12-18
Two experiments evaluated the ability of 50 older, middle-aged, and younger adults to discriminate the 3-dimensional (3-D) shape of curved surfaces defined by optical motion. In Experiment 1, temporal correspondence was disrupted by limiting the lifetimes of the moving surface points. In order to discriminate 3-D surface shape reliably, the younger and middle-aged adults needed a surface point lifetime of approximately 4 views (in the apparent motion sequences). In contrast, the older adults needed a much longer surface point lifetime of approximately 9 views in order to reliably perform the same task. In Experiment 2, the negative effect of age upon 3-D shape discrimination from motion was replicated. In this experiment, however, the participants' abilities to discriminate grating orientation and speed were also assessed. Edden et al. (2009) have recently demonstrated that behavioral grating orientation discrimination correlates with GABA (gamma aminobutyric acid) concentration in human visual cortex. Our results demonstrate that the negative effect of age upon 3-D shape perception from motion is not caused by impairments in the ability to perceive motion per se, but does correlate significantly with grating orientation discrimination. This result suggests that the age-related decline in 3-D shape discrimination from motion is related to decline in GABA concentration in visual cortex. Copyright © 2013 Elsevier B.V. All rights reserved.
2018-01-01
Objective To study the performance of multifocal-visual-evoked-potential (mfVEP) signals filtered using empirical mode decomposition (EMD) in discriminating, based on amplitude, between control and multiple sclerosis (MS) patient groups, and to reduce variability in interocular latency in control subjects. Methods MfVEP signals were obtained from controls, clinically definitive MS and MS-risk progression patients (radiologically isolated syndrome (RIS) and clinically isolated syndrome (CIS)). The conventional method of processing mfVEPs consists of using a 1–35 Hz bandpass frequency filter (XDFT). The EMD algorithm was used to decompose the XDFT signals into several intrinsic mode functions (IMFs). This signal processing was assessed by computing the amplitudes and latencies of the XDFT and IMF signals (XEMD). The amplitudes from the full visual field and from ring 5 (9.8–15° eccentricity) were studied. The discrimination index was calculated between controls and patients. Interocular latency values were computed from the XDFT and XEMD signals in a control database to study variability. Results Using the amplitude of the mfVEP signals filtered with EMD (XEMD) obtains higher discrimination index values than the conventional method when control, MS-risk progression (RIS and CIS) and MS subjects are studied. The lowest variability in interocular latency computations from the control patient database was obtained by comparing the XEMD signals with the XDFT signals. Even better results (amplitude discrimination and latency variability) were obtained in ring 5 (9.8–15° eccentricity of the visual field). Conclusions Filtering mfVEP signals using the EMD algorithm will result in better identification of subjects at risk of developing MS and better accuracy in latency studies. This could be applied to assess visual cortex activity in MS diagnosis and evolution studies. PMID:29677200
de Rivera, Christina; Boutet, Isabelle; Zicker, Steven C; Milgram, Norton W
2005-03-01
Tasks requiring visual discrimination are commonly used in assessment of canine cognitive function. However, little is known about canine visual processing, and virtually nothing is known about the effects of age on canine visual function. This study describes a novel behavioural method developed to assess one aspect of canine visual function, namely contrast sensitivity. Four age groups (young, middle aged, old, and senior) were studied. We also included a group of middle aged to old animals that had been maintained for at least 4 years on a specially formulated food containing a broad spectrum of antioxidants and mitochondrial cofactors. Performance of this group was compared with a group in the same age range maintained on a control diet. In the first phase, all animals were trained to discriminate between two high contrast shapes. In the second phase, contrast was progressively reduced by increasing the luminance of the shapes. Performance decreased as a function of age, but the differences did not achieve statistical significance, possibly because of a small sample size in the young group. All age groups were able to acquire the initial discrimination, although the two older age groups showed slower learning. Errors increased with decreasing contrast with the maximal number of errors for the 1% contrast shape. Also, all animals on the antioxidant diet learned the task and had significantly fewer errors at the high contrast compared with the animals on the control diet. The initial results suggest that contrast sensitivity deteriorates with age in the canine while form perception is largely unaffected by age.
Measuring the effect of multiple eye fixations on memory for visual attributes.
Palmer, J; Ames, C T
1992-09-01
Because of limited peripheral vision, many visual tasks depend on multiple eye fixations. Good performance in such tasks demonstrates that some memory must survive from one fixation to the next. One factor that must influence performance is the degree to which multiple eye fixations interfere with the critical memories. In the present study, the amount of interference was measured by comparing visual discriminations based on multiple fixations to visual discriminations based on a single fixation. The procedure resembled partial report, but used a discrimination measure. In the prototype study, two lines were presented, followed by a single line and a cue. The cue pointed toward one of the positions of the first two lines. Observers were required to judge if the single line in the second display was longer or shorter than the cued line of the first display. These judgments were used to estimate a length threshold. The critical manipulation was to instruct observers either to maintain fixation between the lines of the first display or to fixate each line in sequence. The results showed an advantage for multiple fixations despite the intervening eye movements. In fact, thresholds for the multiple-fixation condition were nearly as good as those in a control condition where the lines were foveally viewed without eye movements. Thus, eye movements had little or no interfering effect in this task. Additional studies generalized the procedure and the stimuli. In conclusion, information about a variety of size and shape attributes was remembered with essentially no interference across eye fixations.
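Estimating a length threshold from a set of longer/shorter judgments, as in the procedure above, is typically done by fitting a psychometric function. One minimal approach is a probit-transform linear fit, sketched below with invented response proportions (not data from the study).

```python
import numpy as np
from scipy.stats import norm

# Proportion of "longer" responses at each length difference (invented)
diff_mm = np.array([-4.0, -2.0, -1.0, 1.0, 2.0, 4.0])
p_longer = np.array([0.05, 0.20, 0.35, 0.65, 0.80, 0.95])

# The probit (inverse-normal) transform linearizes a cumulative-Gaussian
# psychometric function, so a straight-line fit recovers its parameters
z = norm.ppf(p_longer)
slope, intercept = np.polyfit(diff_mm, z, 1)
pse = -intercept / slope          # point of subjective equality
threshold = 1.0 / slope           # 1 SD of the fitted Gaussian
print(f"PSE {pse:.2f} mm, threshold {threshold:.2f} mm")
```

Comparing such thresholds between single-fixation and multiple-fixation conditions is exactly the kind of contrast the study reports; a maximum-likelihood fit would be preferred over this least-squares sketch when trial counts per level differ.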
Auditory processing deficits in bipolar disorder with and without a history of psychotic features.
Zenisek, RyAnna; Thaler, Nicholas S; Sutton, Griffin P; Ringdahl, Erik N; Snyder, Joel S; Allen, Daniel N
2015-11-01
Auditory perception deficits have been identified in schizophrenia (SZ) and linked to dysfunction in the auditory cortex. Given that psychotic symptoms, including auditory hallucinations, are also seen in bipolar disorder (BD), it may be that individuals with BD who also exhibit psychotic symptoms demonstrate a similar impairment in auditory perception. Fifty individuals with SZ, 30 individuals with bipolar I disorder with a history of psychosis (BD+), 28 individuals with bipolar I disorder with no history of psychotic features (BD-), and 29 normal controls (NC) were administered a tone discrimination task and an emotion recognition task. Mixed-model analyses of covariance with planned comparisons indicated that individuals with BD+ performed at a level that was intermediate between those with BD- and those with SZ on the more difficult condition of the tone discrimination task and on the auditory condition of the emotion recognition task. There were no differences between the BD+ and BD- groups on the visual or auditory-visual affect recognition conditions. Regression analyses indicated that performance on the tone discrimination task predicted performance on all conditions of the emotion recognition task. Auditory hallucinations in BD+ were not related to performance on either task. Our findings suggested that, although deficits in frequency discrimination and emotion recognition are more severe in SZ, these impairments extend to BD+. Although our results did not support the idea that auditory hallucinations may be related to these deficits, they indicated that basic auditory deficits may be a marker for psychosis, regardless of SZ or BD diagnosis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Knutson, Ashley R.; Hopkins, Ramona O.; Squire, Larry R.
2013-01-01
We tested proposals that medial temporal lobe (MTL) structures support not just memory but certain kinds of visual perception as well. Patients with hippocampal lesions or larger MTL lesions attempted to identify the unique object among twin pairs of objects that had a high degree of feature overlap. Patients were markedly impaired under the more…
NMDA receptor antagonist ketamine impairs feature integration in visual perception.
Meuwese, Julia D I; van Loon, Anouk M; Scholte, H Steven; Lirk, Philipp B; Vulink, Nienke C C; Hollmann, Markus W; Lamme, Victor A F
2013-01-01
Recurrent interactions between neurons in the visual cortex are crucial for the integration of image elements into coherent objects, such as in figure-ground segregation of textured images. Blocking N-methyl-D-aspartate (NMDA) receptors in monkeys can abolish neural signals related to figure-ground segregation and feature integration. However, it is unknown whether this also affects perceptual integration itself. Therefore, we tested whether ketamine, a non-competitive NMDA receptor antagonist, reduces feature integration in humans. We administered a subanesthetic dose of ketamine to healthy subjects who performed a texture discrimination task in a placebo-controlled double blind within-subject design. We found that ketamine significantly impaired performance on the texture discrimination task compared to the placebo condition, while performance on a control fixation task was much less impaired. This effect is not merely due to task difficulty or a difference in sedation levels. We are the first to show a behavioral effect on feature integration by manipulating the NMDA receptor in humans.
ERIC Educational Resources Information Center
Behrmann, Polly; Millman, Joan
The activities collected in this handbook are planned for parents to use with their children in a learning experience. They can also be used in the classroom. Sections contain games designed to develop visual discrimination, auditory discrimination, motor coordination and oral expression. An objective is given for each game, and directions for…
Errorless discrimination and picture fading as techniques for teaching sight words to TMR students.
Walsh, B F; Lamberts, F
1979-03-01
The effectiveness of two approaches for teaching beginning sight words to 30 TMR students was compared. In Dorry and Zeaman's picture-fading technique, words are taught through association with pictures that are faded out over a series of trials, while in the Edmark program errorless-discrimination technique, words are taught through shaped sequences of visual and auditory-visual matching-to-sample, with the target word first appearing alone and eventually appearing with orthographically similar words. Students were instructed on two lists of 10 words each, one list in the picture-fading and one in the discrimination method, in a double counter-balanced, repeated-measures design. Covariance analysis on three measures (word identification, word recognition, and picture-word matching) showed highly significant differences between the two methods. Students' performance was better after instruction with the errorless-discrimination method than after instruction with the picture-fading method. The findings on picture fading were interpreted as indicating a possible failure of the shifting of control from picture to printed word that earlier researchers have hypothesized as occurring.
Li, Xuan; Allen, Philip A; Lien, Mei-Ching; Yamamoto, Naohide
2017-02-01
Perceptual learning, the acquisition of a new skill through practice, appears to stimulate brain plasticity and enhance performance (Fiorentini & Berardi, 1981). The present study aimed to determine (a) whether perceptual learning can be used to compensate for age-related declines in perceptual abilities, and (b) whether the effect of perceptual learning can be transferred to untrained stimuli and subsequently improve capacity of visual working memory (VWM). We tested both healthy younger and older adults in a 3-day training session using an orientation discrimination task. A matching-to-sample psychophysical method was used to measure improvements in orientation discrimination thresholds and reaction times (RTs). Results showed that both younger and older adults improved discrimination thresholds and RTs with similar learning rates and magnitudes. Furthermore, older adults exhibited a generalization of improvements to 3 untrained orientations that were close to the training orientation, and they benefited more than younger adults from the perceptual learning in that they transferred learning effects to VWM performance. We conclude that through perceptual learning, older adults can partially counteract age-related perceptual declines, generalize the learning effect to other stimulus conditions, and further overcome the limitation of using VWM capacity to perform a perceptual task. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
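Discrimination thresholds in tasks like this are often measured with an adaptive staircase. The abstract does not detail the authors' exact procedure, so the following is only a generic 1-up/2-down staircase with a toy simulated observer (threshold value, step size, and observer model are all illustrative assumptions):

```python
import random

def run_staircase(true_threshold, start=10.0, step=1.0, n_reversals=8):
    """Simulate a 1-up/2-down staircase: the stimulus level (e.g. an
    orientation difference in degrees) decreases after two consecutive
    correct responses and increases after each error, converging near
    the 70.7%-correct point."""
    level, correct_streak, direction = start, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        # Toy observer: mostly correct above threshold, near chance below.
        p_correct = 0.95 if level >= true_threshold else 0.55
        if random.random() < p_correct:
            correct_streak += 1
            if correct_streak == 2:              # two correct -> harder
                correct_streak = 0
                if direction == +1:              # downward turn = reversal
                    reversals.append(level)
                direction = -1
                level = max(level - step, 0.1)
        else:                                    # one error -> easier
            correct_streak = 0
            if direction == -1:                  # upward turn = reversal
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals) / len(reversals)       # threshold estimate

random.seed(0)
estimate = run_staircase(true_threshold=4.0)
```

Averaging the reversal levels gives the threshold estimate; real studies typically discard the first few reversals and use finer step sizes near convergence.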
Rolfs, Martin; Carrasco, Marisa
2012-01-01
Humans and other animals with foveate vision make saccadic eye movements to prioritize the visual analysis of behaviorally relevant information. Even before movement onset, visual processing is selectively enhanced at the target of a saccade, presumably gated by brain areas controlling eye movements. Here we assess concurrent changes in visual performance and perceived contrast before saccades, and show that saccade preparation enhances perception rapidly, altering early visual processing in a manner akin to increasing the physical contrast of the visual input. Observers compared orientation and contrast of a test stimulus, appearing briefly before a saccade, to a standard stimulus, presented previously during a fixation period. We found simultaneous progressive enhancement in both orientation discrimination performance and perceived contrast as time approached saccade onset. These effects were robust as early as 60 ms after the eye movement was cued, much faster than the voluntary deployment of covert attention (without eye movements), which takes ~300 ms. Our results link the dynamics of saccade preparation, visual performance, and subjective experience and show that upcoming eye movements alter visual processing by increasing the signal strength. PMID:23035086
Scotopic vision deficits in young monkeys exposed to lead.
Bushnell, P J; Bowman, R E; Allen, J R; Marlar, R J
1977-04-15
Rhesus monkeys were reared on diets designed to produce blood lead concentrations of 14 (untreated), 55, or 85 micrograms per 100 milliliters for the first year of life. Eighteen months later, blood lead levels were normal in all animals. At this time, however, visual discrimination performance in the 85-microgram group was impaired under dim light relative both to their own performance under bright light and to the performance of the other groups under all light levels used. We interpret these results to reflect a deleterious, enduring impairment of scotopic visual function (night blindness) as a result of early lead intoxication.
Delhey, Kaspar; Hall, Michelle; Kingma, Sjouke A.; Peters, Anne
2013-01-01
Colour signals are expected to match visual sensitivities of intended receivers. In birds, evolutionary shifts from violet-sensitive (V-type) to ultraviolet-sensitive (U-type) vision have been linked to increased prevalence of colours rich in shortwave reflectance (ultraviolet/blue), presumably due to better perception of such colours by U-type vision. Here we provide the first test of this widespread idea using fairy-wrens and allies (Family Maluridae) as a model, a family where shifts in visual sensitivities from V- to U-type eyes are associated with male nuptial plumage rich in ultraviolet/blue colours. Using psychophysical visual models, we compared the performance of both types of visual systems at two tasks: (i) detecting contrast between male plumage colours and natural backgrounds, and (ii) perceiving intraspecific chromatic variation in male plumage. While U-type outperforms V-type vision at both tasks, the crucial test here is whether U-type vision performs better at detecting and discriminating ultraviolet/blue colours when compared with other colours. This was true for detecting contrast between plumage colours and natural backgrounds (i), but not for discriminating intraspecific variability (ii). Our data indicate that selection to maximize conspicuousness to conspecifics may have led to the correlation between ultraviolet/blue colours and U-type vision in this clade of birds. PMID:23118438
Associative visual learning by tethered bees in a controlled visual environment.
Buatois, Alexis; Pichot, Cécile; Schultheiss, Patrick; Sandoz, Jean-Christophe; Lazzari, Claudio R; Chittka, Lars; Avarguès-Weber, Aurore; Giurfa, Martin
2017-10-10
Free-flying honeybees exhibit remarkable cognitive capacities but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed at establishing a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video projected in front of them. Freely flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of them (CS+) was paired with sucrose and the other with quinine solution (CS-). Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful as bees exhibited robust discrimination and preferred the CS+ to the CS- after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS- also affects learning as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.
DOT National Transportation Integrated Search
1988-01-01
Operational monitoring situations, in contrast to typical laboratory vigilance tasks, generally involve more than just stimulus detection and recognition. They frequently involve complex multidimensional discriminations, interpretations of significan...
Peripheral Vision of Youths with Low Vision: Motion Perception, Crowding, and Visual Search
Tadin, Duje; Nyquist, Jeffrey B.; Lusk, Kelly E.; Corn, Anne L.; Lappin, Joseph S.
2012-01-01
Purpose. Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. Methods. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10–17) and low vision (n = 24, ages 9–18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. Results. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Conclusions. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function. PMID:22836766
Colour vision in ADHD: part 1--testing the retinal dopaminergic hypothesis.
Kim, Soyeon; Al-Haj, Mohamed; Chen, Samantha; Fuller, Stuart; Jain, Umesh; Carrasco, Marisa; Tannock, Rosemary
2014-10-24
We tested the retinal dopaminergic hypothesis, which posits deficient blue color perception in ADHD resulting from hypofunctioning CNS and retinal dopamine, to which blue cones are exquisitely sensitive. Purported sex differences in red color perception were also explored. 30 young adults diagnosed with ADHD and 30 healthy young adults, matched on age and gender, performed a psychophysical task to measure blue and red color saturation and contrast discrimination ability. Visual function measures, such as the Visual Activities Questionnaire (VAQ) and Farnsworth-Munsell 100 hue test (FMT), were also administered. Females with ADHD were less accurate in discriminating blue and red color saturation relative to controls but did not differ in contrast sensitivity. Female control participants were better at discriminating red saturation than males, but no sex difference was present within the ADHD group. Poorer discrimination of red as well as blue color saturation in the female ADHD group may be partly attributable to a hypo-dopaminergic state in the retina, given that color perception (blue-yellow and red-green) is based on input from S-cones (short wavelength cone system) early in the visual pathway. The origin of female superiority in red perception may be rooted in sex-specific functional specialization in hunter-gatherer societies. The absence of this sexual dimorphism for red color perception in ADHD females warrants further investigation.
Visuomotor sensitivity to visual information about surface orientation.
Knill, David C; Kersten, Daniel
2004-03-01
We measured human visuomotor sensitivity to visual information about three-dimensional surface orientation by analyzing movements made to place an object on a slanted surface. We applied linear discriminant analysis to the kinematics of subjects' movements to surfaces with differing slants (angle away from the fronto-parallel) to derive visuomotor d's for discriminating surfaces differing in slant by 5 degrees. Subjects' visuomotor sensitivity to information about surface orientation was very high, with discrimination "thresholds" ranging from 2 to 3 degrees. In a first experiment, we found that subjects performed only slightly better using binocular cues alone than monocular texture cues and that they showed only weak evidence for combining the cues when both were available, suggesting that monocular cues can be just as effective in guiding motor behavior in depth as binocular cues. In a second experiment, we measured subjects' perceptual discrimination and visuomotor thresholds in equivalent stimulus conditions to decompose visuomotor sensitivity into perceptual and motor components. Subjects' visuomotor thresholds were found to be slightly greater than their perceptual thresholds for a range of memory delays, from 1 to 3 s. The data were consistent with a model in which perceptual noise increases with increasing delay between stimulus presentation and movement initiation, but motor noise remains constant. This result suggests that visuomotor and perceptual systems rely on the same visual estimates of surface slant for memory delays ranging from 1 to 3 s.
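The core analysis, deriving a discrimination d' from movement kinematics via linear discriminant analysis, can be sketched as follows. The kinematic features and noise model below are synthetic stand-ins, not the authors' data:

```python
import numpy as np

def discriminant_dprime(a, b):
    """Fisher linear discriminant d' between two sets of trials
    (rows = trials, columns = kinematic features)."""
    mean_a, mean_b = a.mean(axis=0), b.mean(axis=0)
    # Pooled within-class covariance matrix
    cov = (np.cov(a, rowvar=False) * (len(a) - 1)
           + np.cov(b, rowvar=False) * (len(b) - 1)) / (len(a) + len(b) - 2)
    w = np.linalg.solve(cov, mean_b - mean_a)    # discriminant axis
    pa, pb = a @ w, b @ w                        # project trials onto it
    pooled_sd = np.sqrt((pa.var(ddof=1) + pb.var(ddof=1)) / 2)
    return abs(pb.mean() - pa.mean()) / pooled_sd

# Hypothetical kinematic features for movements to two surface slants
rng = np.random.default_rng(1)
slant_35 = rng.normal(0.0, 1.0, size=(200, 3))
slant_40 = rng.normal(0.8, 1.0, size=(200, 3))   # 5-degree slant difference
dprime = discriminant_dprime(slant_35, slant_40)
```

Projecting each trial onto the discriminant axis reduces the multivariate kinematics to a scalar decision variable, so the standard d' formula (mean separation over pooled standard deviation) applies directly.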
Smets, Karolien; Moors, Pieter; Reynvoet, Bert
2016-01-01
Performance in a non-symbolic comparison task in which participants are asked to indicate the larger numerosity of two dot arrays, is assumed to be supported by the Approximate Number System (ANS). This system allows participants to judge numerosity independently from other visual cues. Supporting this idea, previous studies indicated that numerosity can be processed when visual cues are controlled for. Consequently, distinct types of visual cue control are assumed to be interchangeable. However, a previous study showed that the type of visual cue control affected performance using a simultaneous presentation of the stimuli in numerosity comparison. In the current study, we explored whether the influence of the type of visual cue control on performance disappeared when sequentially presenting each stimulus in numerosity comparison. While the influence of the applied type of visual cue control was significantly more evident in the simultaneous condition, sequentially presenting the stimuli did not completely exclude the influence of distinct types of visual cue control. Altogether, these results indicate that the implicit assumption that it is possible to compare performances across studies with a differential visual cue control is unwarranted and that the influence of the type of visual cue control partly depends on the presentation format of the stimuli. PMID:26869967
Engineering Data Compendium. Human Perception and Performance. Volume 2
1988-01-01
Cross-reference index excerpt: 5.1004 Auditory detection in the presence of visual stimulation; 5.1005 Tactual detection and discrimination in the presence of accessory stimulation; 5.1006 Tactile versus auditory localization of sound; 5.1007 Spatial localization in the presence of inter...
Discriminative object tracking via sparse representation and online dictionary learning.
Xie, Yuan; Zhang, Wensheng; Li, Cuihua; Lin, Shuyang; Qu, Yanyun; Zhang, Yinghua
2014-04-01
We propose a robust tracking algorithm based on local sparse coding with discriminative dictionary learning and a new keypoint matching scheme. This algorithm consists of two parts: the local sparse coding with online updated discriminative dictionary for tracking (SOD part), and the keypoint matching refinement for enhancing the tracking performance (KP part). In the SOD part, the local image patches of the target object and background are represented by their sparse codes using an over-complete discriminative dictionary. Such a discriminative dictionary, which encodes the information of both the foreground and the background, may provide more discriminative power. Furthermore, in order to adapt the dictionary to the variation of the foreground and background during the tracking, an online learning method is employed to update the dictionary. The KP part utilizes a refined keypoint matching scheme to improve the performance of the SOD part. With the help of sparse representation and the online updated discriminative dictionary, the KP part is more robust than the traditional method in rejecting incorrect matches and eliminating outliers. The proposed method is embedded into a Bayesian inference framework for visual tracking. Experimental results on several challenging video sequences demonstrate the effectiveness and robustness of our approach.
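The sparse coding step at the core of the SOD part can be illustrated with a minimal greedy orthogonal matching pursuit over a toy dictionary. The dictionary and patch here are synthetic; the paper's discriminative, online-updated dictionary learning is not reproduced:

```python
import numpy as np

def omp(D, x, n_nonzero=2):
    """Greedy orthogonal matching pursuit: compute a sparse code of
    patch x over a dictionary D whose columns are unit-norm atoms."""
    residual, support = x.copy(), []
    coeffs = np.array([])
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit coefficients over all selected atoms (the "orthogonal" step)
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    code = np.zeros(D.shape[1])
    code[support] = coeffs
    return code

rng = np.random.default_rng(0)
D = rng.normal(size=(16, 32))        # toy dictionary: 32 atoms, 16-dim patches
D /= np.linalg.norm(D, axis=0)       # unit-norm atoms
x = 2.0 * D[:, 5] - 1.5 * D[:, 20]   # patch built from two known atoms
code = omp(D, x, n_nonzero=2)
error = np.linalg.norm(D @ code - x)
```

In a discriminative setting, the dictionary would contain both foreground and background atoms, so which atoms carry the energy of a patch's code indicates whether the patch looks more like target or background.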
Object versus spatial visual mental imagery in patients with schizophrenia
Aleman, André; de Haan, Edward H.F.; Kahn, René S.
2005-01-01
Objective Recent research has revealed a larger impairment of object perceptual discrimination than of spatial perceptual discrimination in patients with schizophrenia. It has been suggested that mental imagery may share processing systems with perception. We investigated whether patients with schizophrenia would show greater impairment regarding object imagery than spatial imagery. Methods Forty-four patients with schizophrenia and 20 healthy control subjects were tested on a task of object visual mental imagery and on a task of spatial visual mental imagery. Both tasks included a condition in which no imagery was needed for adequate performance, but which was in other respects identical to the imagery condition. This allowed us to adjust for nonspecific differences in individual performance. Results The results revealed a significant difference between patients and controls on the object imagery task (F1,63 = 11.8, p = 0.001) but not on the spatial imagery task (F1,63 = 0.14, p = 0.71). To test for a differential effect, we conducted a 2 (patients v. controls) × 2 (object task v. spatial task) analysis of variance. The interaction term was statistically significant (F1,62 = 5.2, p = 0.026). Conclusions Our findings suggest a differential dysfunction of systems mediating object and spatial visual mental imagery in schizophrenia. PMID:15644999
Retter, Talia L; Rossion, Bruno
2016-07-01
Discrimination of facial identities is a fundamental function of the human brain that is challenging to examine with macroscopic measurements of neural activity, such as those obtained with functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). Although visual adaptation or repetition suppression (RS) stimulation paradigms have been successfully implemented to this end with such recording techniques, objective evidence of an identity-specific discrimination response due to adaptation at the level of the visual representation is lacking. Here, we addressed this issue with fast periodic visual stimulation (FPVS) and EEG recording combined with a symmetry/asymmetry adaptation paradigm. Adaptation to one facial identity is induced through repeated presentation of that identity at a rate of 6 images per second (6 Hz) over 10 sec. Subsequently, this identity is presented in alternation with another facial identity (i.e., its anti-face, both faces being equidistant from an average face), producing an identity repetition rate of 3 Hz over a 20 sec testing sequence. A clear EEG response at 3 Hz is observed over the right occipito-temporal (ROT) cortex, indexing discrimination between the two facial identities in the absence of an explicit behavioral discrimination measure. This face identity discrimination occurs immediately after adaptation and disappears rapidly within 20 sec. Importantly, this 3 Hz response is not observed in a control condition without the single-identity 10 sec adaptation period. These results indicate that visual adaptation to a given facial identity produces an objective (i.e., at a pre-defined stimulation frequency) electrophysiological index of visual discrimination between that identity and another, and provides a unique behavior-free quantification of the effect of visual adaptation. Copyright © 2016 Elsevier Ltd. All rights reserved.
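The frequency-tagging logic of FPVS, reading out a response at a pre-defined stimulation frequency from the EEG amplitude spectrum, can be sketched with simulated data. The signal amplitudes, sampling rate, and SNR computation below are illustrative assumptions, not the study's recording parameters:

```python
import numpy as np

fs, dur = 250, 20                        # sampling rate (Hz), sequence length (s)
t = np.arange(fs * dur) / fs
rng = np.random.default_rng(2)
# Simulated signal: a 6 Hz general visual response, a smaller 3 Hz
# identity-discrimination response, plus noise (amplitudes are made up)
eeg = (1.0 * np.sin(2 * np.pi * 6 * t)
       + 0.4 * np.sin(2 * np.pi * 3 * t)
       + rng.normal(0.0, 1.0, t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2   # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
bin_3hz = int(np.argmin(np.abs(freqs - 3.0)))
# Signal-to-noise ratio: amplitude at 3 Hz vs. mean of 20 surrounding bins
neighbours = np.r_[spectrum[bin_3hz - 11:bin_3hz - 1],
                   spectrum[bin_3hz + 2:bin_3hz + 12]]
snr_3hz = spectrum[bin_3hz] / neighbours.mean()
```

Because the identity alternation occurs at a rate known in advance, any amplitude at exactly that frequency above the local noise floor is an objective index of discrimination, with no behavioral response required.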
Clark, Kait; Appelbaum, L Gregory; van den Berg, Berry; Mitroff, Stephen R; Woldorff, Marty G
2015-04-01
Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus-response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior-contralateral component (N2pc, 170-250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300-400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance. 
Copyright © 2015 the authors.
Carrara, Verena I; Darakomon, Mue Chae; Thin, Nant War War; Paw, Naw Ta Kaw; Wah, Naw; Wah, Hser Gay; Helen, Naw; Keereecharoen, Suporn; Paw, Naw Ta Mlar; Jittamala, Podjanee; Nosten, François H; Ricci, Daniela; McGready, Rose
2016-01-01
Neurological examination, including visual fixation and tracking of a target, is routinely performed in the Shoklo Malaria Research Unit postnatal care units on the Thailand-Myanmar border. We aimed to evaluate a simple visual newborn test developed in Italy and performed by non-specialized personnel working in neonatal care units. An intensive training of local health staff in Thailand was conducted prior to performing assessments at 24, 48 and 72 hours of life in healthy, low-risk term singletons. The 48- and 72-hour results were then compared with those obtained in Italy. Parents and staff administering the test reported on acceptability. One hundred and seventy-nine newborns participated in the study between June 2011 and October 2012. The test was rapidly completed if the infant remained in an optimal behavioral state (7 ± 2 minutes), but the test duration increased significantly (12 ± 4 minutes, p < 0.001) if its behavior changed. Infants were able to fix a target and to discriminate a colored face at 24 hours of life. Horizontal tracking of a target was achieved by 96% (152/159) of the infants at 48 hours. Circular tracking, stripe discrimination and attention to distance significantly improved between each 24-hour test period. The test was easily performed by non-specialized local staff and well accepted by the parents. Healthy term singletons in this limited-resource setting have a visual response similar to that obtained in gestational-age-matched newborns in Italy. It is possible to use these results as a reference set of values for the visual assessment of Karen and Burmese infants in the first 72 hours of life. The utility of the 24-hour test should be pursued.
The dual rod system of amphibians supports colour discrimination at the absolute visual threshold
Yovanovich, Carola A. M.; Koskela, Sanna M.; Nevala, Noora; Kondrashev, Sergei L.
2017-01-01
The presence of two spectrally different kinds of rod photoreceptors in amphibians has been hypothesized to enable purely rod-based colour vision at very low light levels. The hypothesis has never been properly tested, so we performed three behavioural experiments at different light intensities with toads (Bufo) and frogs (Rana) to determine the thresholds for colour discrimination. The thresholds of toads were different in mate choice and prey-catching tasks, suggesting that the differential sensitivities of different spectral cone types as well as task-specific factors set limits for the use of colour in these behavioural contexts. In neither task was there any indication of rod-based colour discrimination. By contrast, frogs performing phototactic jumping were able to distinguish blue from green light down to the absolute visual threshold, where vision relies only on rod signals. The remarkable sensitivity of this mechanism comparing signals from the two spectrally different rod types approaches theoretical limits set by photon fluctuations and intrinsic noise. Together, the results indicate that different pathways are involved in processing colour cues depending on the ecological relevance of this information for each task. This article is part of the themed issue ‘Vision in dim light’. PMID:28193811
Enhanced pure-tone pitch discrimination among persons with autism but not Asperger syndrome.
Bonnel, Anna; McAdams, Stephen; Smith, Bennett; Berthiaume, Claude; Bertone, Armando; Ciocca, Valter; Burack, Jacob A; Mottron, Laurent
2010-07-01
Persons with Autism spectrum disorders (ASD) display atypical perceptual processing in visual and auditory tasks. In vision, Bertone, Mottron, Jelenic, and Faubert (2005) found that enhanced and diminished visual processing is linked to the level of neural complexity required to process stimuli, as proposed in the neural complexity hypothesis. Based on these findings, Samson, Mottron, Jemel, Belin, and Ciocca (2006) proposed to extend the neural complexity hypothesis to the auditory modality. They hypothesized that persons with ASD should display enhanced performance for simple tones that are processed in primary auditory cortical regions, but diminished performance for complex tones that require additional processing in associative auditory regions, in comparison to typically developing individuals. To assess this hypothesis, we designed four auditory discrimination experiments targeting pitch, non-vocal and vocal timbre, and loudness. Stimuli consisted of spectro-temporally simple and complex tones. The participants were adolescents and young adults with autism, Asperger syndrome, and typical developmental histories, all with IQs in the normal range. Consistent with the neural complexity hypothesis and enhanced perceptual functioning model of ASD (Mottron, Dawson, Soulières, Hubert, & Burack, 2006), the participants with autism, but not with Asperger syndrome, displayed enhanced pitch discrimination for simple tones. However, no discrimination-threshold differences were found between the participants with ASD and the typically developing persons across spectrally and temporally complex conditions. These findings indicate that enhanced pure-tone pitch discrimination may be a cognitive correlate of speech delay among persons with ASD. However, auditory discrimination among this group does not appear to be directly contingent on the spectro-temporal complexity of the stimuli. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Dynamic functional brain networks involved in simple visual discrimination learning.
Fidalgo, Camino; Conejo, Nélida María; González-Pardo, Héctor; Arias, Jorge Luis
2014-10-01
Visual discrimination tasks have been widely used to evaluate many types of learning and memory processes. However, little is known about the brain regions involved at different stages of visual discrimination learning. We used cytochrome c oxidase histochemistry to evaluate changes in regional brain oxidative metabolism during visual discrimination learning in a water-T maze at different time points during training. As compared with control groups, the results of the present study reveal the gradual activation of cortical (prefrontal and temporal cortices) and subcortical brain regions (including the striatum and the hippocampus) associated with the mastery of a simple visual discrimination task. On the other hand, the brain regions involved and their functional interactions changed progressively over days of training. Regions associated with novelty, emotion, visuo-spatial orientation and motor aspects of the behavioral task seem to be relevant during the earlier phase of training, whereas a brain network comprising the prefrontal cortex was found along the whole learning process. This study highlights the relevance of functional interactions among brain regions to investigate learning and memory processes. Copyright © 2014 Elsevier Inc. All rights reserved.
Campbell, Dana L M; Hauber, Mark E
2009-08-01
Female zebra finches (Taeniopygia guttata) use visual and acoustic traits for accurate recognition of male conspecifics. Evidence from video playbacks confirms that both sensory modalities are important for conspecific and species discrimination, but experimental evidence of the individual roles of these cue types affecting live conspecific recognition is limited. In a spatial paradigm to test discrimination, the authors used live male zebra finch stimuli of 2 color morphs, wild-type (conspecific) and white with a painted black beak (foreign), producing 1 of 2 vocalization types: songs and calls learned from zebra finch parents (conspecific) or cross-fostered songs and calls learned from Bengalese finch (Lonchura striata vars. domestica) foster parents (foreign). The authors found that female zebra finches consistently preferred males with conspecific visual and acoustic cues over males with foreign cues, but did not discriminate when the conspecific and foreign visual and acoustic cues were mismatched. These results indicate the importance of both visual and acoustic features for female zebra finches when discriminating between live conspecific males. Copyright 2009 APA, all rights reserved.
Honeybees can discriminate between Monet and Picasso paintings.
Wu, Wen; Moreno, Antonio M; Tangen, Jason M; Reinhard, Judith
2013-01-01
Honeybees (Apis mellifera) have remarkable visual learning and discrimination abilities that extend beyond learning simple colours, shapes or patterns. They can discriminate landscape scenes, types of flowers, and even human faces. This suggests that in spite of their small brain, honeybees have a highly developed capacity for processing complex visual information, comparable in many respects to that of vertebrates. Here, we investigated whether this capacity extends to complex images that humans distinguish on the basis of artistic style: Impressionist paintings by Monet and Cubist paintings by Picasso. We show that honeybees learned to simultaneously discriminate between five different Monet and Picasso paintings, and that they do not rely on luminance, colour, or spatial frequency information for discrimination. When presented with novel paintings of the same style, the bees even demonstrated some ability to generalize. This suggests that honeybees are able to discriminate Monet paintings from Picasso ones by extracting and learning the characteristic visual information inherent in each painting style. Our study further suggests that discrimination of artistic styles is not a higher cognitive function unique to humans, but simply reflects the capacity of animals, from insects to humans, to extract and categorize the visual characteristics of complex images.
Braun, J
1994-02-01
In more than one respect, visual search for the most salient item in a display and search for the least salient item are different kinds of visual task. The present work investigated whether this difference is primarily one of perceptual difficulty, or whether it is more fundamental and relates to visual attention. Display items of different salience were produced by varying either size, contrast, color saturation, or pattern. Perceptual masking was employed and, on average, mask onset was delayed longer in search for the least salient item than in search for the most salient item. As a result, the two types of visual search presented comparable perceptual difficulty, as judged by psychophysical measures of performance, effective stimulus contrast, and stability of decision criterion. To investigate the role of attention in the two types of search, observers attempted to carry out a letter discrimination and a search task concurrently. To discriminate the letters, observers had to direct visual attention at the center of the display and, thus, leave unattended the periphery, which contained the target and distractors of the search task. In this situation, visual search for the least salient item was severely impaired while visual search for the most salient item was only moderately affected, demonstrating a fundamental difference with respect to visual attention. A qualitatively identical pattern of results was encountered by Schiller and Lee (1991), who used similar visual search tasks to assess the effect of a lesion in extrastriate area V4 of the macaque.
Alpha-Band Rhythms in Visual Task Performance: Phase-Locking by Rhythmic Sensory Stimulation
de Graaf, Tom A.; Gross, Joachim; Paterson, Gavin; Rusch, Tessa; Sack, Alexander T.; Thut, Gregor
2013-01-01
Oscillations are an important aspect of neuronal activity. Interestingly, oscillatory patterns are also observed in behaviour, such as in visual performance measures after the presentation of a brief sensory event in the visual or another modality. These oscillations in visual performance cycle at the typical frequencies of brain rhythms, suggesting that perception may be closely linked to brain oscillations. We here investigated this link for a prominent rhythm of the visual system (the alpha-rhythm, 8–12 Hz) by applying rhythmic visual stimulation at alpha-frequency (10.6 Hz), known to lead to a resonance response in visual areas, and testing its effects on subsequent visual target discrimination. Our data show that rhythmic visual stimulation at 10.6 Hz: 1) has specific behavioral consequences, relative to stimulation at control frequencies (3.9 Hz, 7.1 Hz, 14.2 Hz), and 2) leads to alpha-band oscillations in visual performance measures, that 3) correlate in precise frequency across individuals with resting alpha-rhythms recorded over parieto-occipital areas. The most parsimonious explanation for these three findings is entrainment (phase-locking) of ongoing perceptually relevant alpha-band brain oscillations by rhythmic sensory events. These findings are in line with occipital alpha-oscillations underlying periodicity in visual performance, and suggest that rhythmic stimulation at frequencies of intrinsic brain-rhythms can be used to reveal influences of these rhythms on task performance to study their functional roles. PMID:23555873
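As an illustrative aside (not part of the original study), the core analysis implied by findings 2 and 3, estimating the dominant oscillation frequency in a behavioral performance time series, can be sketched as follows; the sampling step and accuracy trace below are synthetic:

```python
import numpy as np

def dominant_frequency(series, dt):
    """Return the dominant oscillation frequency (Hz) of a behavioral
    time series sampled every dt seconds, via the peak of its
    amplitude spectrum (ignoring the 0 Hz bin)."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                       # remove the DC component
    amp = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=dt)
    return freqs[1:][np.argmax(amp[1:])]   # skip the 0 Hz bin

# Synthetic accuracy trace with a 10 Hz modulation, sampled every 20 ms
t = np.arange(0, 2.0, 0.02)
accuracy = 0.75 + 0.10 * np.sin(2 * np.pi * 10.0 * t)
print(dominant_frequency(accuracy, dt=0.02))  # -> 10.0
```

In the study itself, such per-individual peak frequencies in performance would then be correlated with each observer's resting alpha frequency.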
Stimulus discriminability in visual search.
Verghese, P; Nakayama, K
1994-09-01
We measured the probability of detecting the target in a visual search task, as a function of the following parameters: the discriminability of the target from the distractors, the duration of the display, and the number of elements in the display. We examined the relation between these parameters at criterion performance (80% correct) to determine if the parameters traded off according to the predictions of a limited capacity model. For the three dimensions that we studied, orientation, color, and spatial frequency, the observed relationship between the parameters deviates significantly from a limited capacity model. The data relating discriminability to display duration are better than predicted over the entire range of orientation and color differences that we examined, and are consistent with the prediction for only a limited range of spatial frequency differences (from 12 to 23%). The relation between discriminability and number varies considerably across the three dimensions and is better than the limited capacity prediction for two of the three dimensions that we studied. Orientation discrimination shows a strong number effect, color discrimination shows almost no effect, and spatial frequency discrimination shows an intermediate effect. The different trading relationships in each dimension are more consistent with early filtering in that dimension than with a common limited capacity stage. Our results indicate that higher-level processes that group elements together also play a strong role. Our experiments provide little support for limited capacity mechanisms over the range of stimulus differences that we examined in three different dimensions.
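The limited-capacity prediction being tested can be made concrete with a toy model (an illustration only, not the authors' formulation): if a fixed capacity is split evenly across items and threshold improves with the square root of per-item processing time, then doubling the set size at a fixed duration should raise the threshold by a factor of sqrt(2):

```python
def capacity_threshold(duration_s, n_items, k=1.0):
    """Threshold predicted by a toy limited-capacity model: total
    processing time is divided evenly among n_items, and threshold
    falls as the square root of per-item time (an illustrative
    assumption, not the paper's exact model)."""
    per_item_time = duration_s / n_items
    return k / per_item_time ** 0.5

# At a fixed 200 ms display, going from 4 to 8 items should raise the
# predicted threshold by sqrt(2) under this model.
print(capacity_threshold(0.2, 8) / capacity_threshold(0.2, 4))  # -> ~1.414
```

Deviations from this trade-off, as the abstract reports for orientation and color, argue against a single shared-capacity stage.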
Visual Discrimination of Color Normals and Color Deficients. Final Report.
ERIC Educational Resources Information Center
Chen, Yih-Wen
Since visual discrimination is one of the factors involved in learning from instructional media, the present study was designed (1) to investigate the effects of hue contrast, illuminant intensity, brightness contrast, and viewing distance on the discrimination accuracy of those who see color normally and those who do not, and (2) to investigate…
Cortical activity patterns predict speech discrimination ability
Engineer, Crystal T; Perez, Claudia A; Chen, YeTing H; Carraway, Ryan S; Reed, Amanda C; Shetake, Jai A; Jakkamsetti, Vikram; Chang, Kevin Q; Kilgard, Michael P
2010-01-01
Neural activity in the cerebral cortex can explain many aspects of sensory perception. Extensive psychophysical and neurophysiological studies of visual motion and vibrotactile processing show that the firing rate of cortical neurons averaged across 50–500 ms is well correlated with discrimination ability. In this study, we tested the hypothesis that primary auditory cortex (A1) neurons use temporal precision on the order of 1–10 ms to represent speech sounds shifted into the rat hearing range. Neural discrimination was highly correlated with behavioral performance on 11 consonant-discrimination tasks when spike timing was preserved and was not correlated when spike timing was eliminated. This result suggests that spike timing contributes to the auditory cortex representation of consonant sounds. PMID:18425123
Study of blur discrimination for 3D stereo viewing
NASA Astrophysics Data System (ADS)
Subedar, Mahesh; Karam, Lina J.
2014-03-01
Blur is an important attribute in the study and modeling of the human visual system. Blur discrimination has been studied extensively using 2D test patterns. In this study, we present the details of subjective tests performed to measure blur discrimination thresholds using stereoscopic 3D test patterns. Specifically, the effect of disparity on blur discrimination thresholds is studied on a passive stereoscopic 3D display. The blur discrimination thresholds are measured using stereoscopic 3D test patterns with positive, negative, and zero disparity values, at multiple reference blur levels. A disparity value of zero represents the 2D viewing case, where both eyes observe the same image. The subjective test results indicate that blur discrimination thresholds remain constant as the disparity value is varied. This further indicates that binocular disparity does not affect blur discrimination thresholds, and that models developed for 2D blur discrimination thresholds can be extended to stereoscopic 3D blur discrimination thresholds. Finally, we fit a Weber model to the 3D blur discrimination thresholds measured in the subjective experiments.
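A Weber-model fit of the kind described above can be sketched in a few lines; the threshold numbers below are hypothetical placeholders, not the paper's measurements:

```python
import numpy as np

# Hypothetical reference blur levels and measured discrimination
# thresholds (arbitrary units); illustrative values only.
ref_blur = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
thresholds = np.array([0.30, 0.38, 0.55, 0.90, 1.60])

# Weber-type model: threshold = w * (reference + b0). A straight-line
# fit gives the Weber fraction w (slope) and the intrinsic blur
# b0 (intercept / slope).
w, intercept = np.polyfit(ref_blur, thresholds, 1)
b0 = intercept / w
print(f"Weber fraction w = {w:.3f}, intrinsic blur b0 = {b0:.2f}")
```

The finding that thresholds are disparity-invariant means one such fit, rather than one per disparity level, suffices for the 3D case.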
A hierarchical word-merging algorithm with class separability measure.
Wang, Lei; Zhou, Luping; Shen, Chunhua; Liu, Lingqiao; Liu, Huan
2014-03-01
In image recognition with the bag-of-features model, a small-sized visual codebook is usually preferred to obtain a low-dimensional histogram representation and high computational efficiency. Such a visual codebook has to be discriminative enough to achieve excellent recognition performance. To create a compact and discriminative codebook, in this paper we propose to merge the visual words in a large-sized initial codebook by maximally preserving class separability. We first show that this results in a difficult optimization problem. To deal with this situation, we devise a suboptimal but very efficient hierarchical word-merging algorithm, which optimally merges two words at each level of the hierarchy. By exploiting the characteristics of the class separability measure and designing a novel indexing structure, the proposed algorithm can hierarchically merge 10,000 visual words down to two words in merely 90 seconds. Also, to show the properties of the proposed algorithm and reveal its advantages, we conduct detailed theoretical analysis to compare it with another hierarchical word-merging algorithm that maximally preserves mutual information, obtaining interesting findings. Experimental studies are conducted to verify the effectiveness of the proposed algorithm on multiple benchmark data sets. As shown, it can efficiently produce more compact and discriminative codebooks than the state-of-the-art hierarchical word-merging algorithms, especially when the size of the codebook is significantly reduced.
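The greedy hierarchical merging described above can be sketched as follows. The separability score here is a simple chi-square-style stand-in for the paper's measure, and the efficient indexing structure is omitted; this is an O(n²)-per-level illustration, not the authors' implementation:

```python
import numpy as np
from itertools import combinations

def separability(counts):
    """Class-separability score for a codebook: each word's squared
    deviation of its class distribution from the overall class prior,
    weighted by the word's frequency (a chi-square-style stand-in)."""
    counts = np.asarray(counts, dtype=float)
    word_tot = counts.sum(axis=1)
    prior = counts.sum(axis=0) / counts.sum()
    p = counts / np.maximum(word_tot, 1e-12)[:, None]
    return float((word_tot * ((p - prior) ** 2).sum(axis=1)).sum())

def merge_words(counts, target_size):
    """Greedily merge pairs of visual words (rows of a word-by-class
    count matrix), at each level keeping the merge that preserves the
    most separability, until target_size words remain."""
    counts = np.asarray(counts, dtype=float)
    while len(counts) > target_size:
        best_score, best_pair = -np.inf, None
        for i, j in combinations(range(len(counts)), 2):
            merged = np.vstack([np.delete(counts, (i, j), axis=0),
                                counts[i] + counts[j]])
            score = separability(merged)
            if score > best_score:
                best_score, best_pair = score, (i, j)
        i, j = best_pair
        counts = np.vstack([np.delete(counts, (i, j), axis=0),
                            counts[i] + counts[j]])
    return counts
```

For example, merging four words with counts [[10, 0], [9, 1], [0, 10], [1, 9]] down to two groups the class-pure words together while preserving the total counts.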
The influence of visual ability on learning and memory performance in 13 strains of mice.
Brown, Richard E; Wong, Aimée A
2007-03-01
We measured visual ability in 13 strains of mice (129S1/SvImJ, A/J, AKR/J, BALB/cByJ, C3H/HeJ, C57BL/6J, CAST/EiJ, DBA/2J, FVB/NJ, MOLF/EiJ, SJL/J, SM/J, and SPRET/EiJ) on tests of visual detection, pattern discrimination, and visual acuity, and tested these and other mice of the same strains in a behavioral test battery that evaluated visuo-spatial learning and memory, conditioned odor preference, and motor learning. Strain differences in visual acuity accounted for a significant proportion of the variance between strains in measures of learning and memory in the Morris water maze. Strain differences in motor learning performance were not influenced by visual ability. Conditioned odor preference was enhanced in mice with visual defects. These results indicate that visual ability must be accounted for when testing for strain differences in learning and memory in mice, because differences in performance in many tasks may be due to visual deficits rather than differences in higher order cognitive functions. These results have significant implications for the search for the neural and genetic basis of learning and memory in mice.
Sequential Ideal-Observer Analysis of Visual Discriminations.
ERIC Educational Resources Information Center
Geisler, Wilson S.
1989-01-01
A new analysis, based on the concept of the ideal observer in signal detection theory, is described. It allows the flow of discrimination information to be traced through the initial physiological stages of visual processing for arbitrary spatio-chromatic stimuli, and the information content of those visual stimuli to be measured. (TJH)
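One basic ingredient of any such ideal-observer analysis is the discriminability d' of two known stimuli in noise. A minimal sketch, assuming i.i.d. white Gaussian noise of known standard deviation (the sequential, stage-by-stage machinery of the full analysis is not reproduced here):

```python
import numpy as np

def ideal_dprime(stim_a, stim_b, sigma):
    """d' of an ideal observer discriminating two known stimulus
    vectors corrupted by i.i.d. Gaussian noise of s.d. sigma. For
    known signals in white noise, the ideal d' is the Euclidean
    distance between the stimuli divided by the noise s.d."""
    diff = np.asarray(stim_a, dtype=float) - np.asarray(stim_b, dtype=float)
    return np.linalg.norm(diff) / sigma

# Two toy 1-D "stimuli" (e.g. pixel luminances); sigma is assumed.
a = np.array([1.0, 1.2, 1.4, 1.2, 1.0])
b = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
print(ideal_dprime(a, b, sigma=0.2))  # -> ~2.449
```

Comparing human thresholds against this ideal bound is what quantifies the information lost at each processing stage.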
The effect of short-term training on cardinal and oblique orientation discrimination: an ERP study.
Song, Yan; Sun, Li; Wang, You; Zhang, Xuemin; Kang, Jing; Ma, Xiaoli; Yang, Bin; Guan, Yijie; Ding, Yulong
2010-03-01
The adult brain shows remarkable plasticity, as demonstrated by the improvement in most visual discrimination tasks after intensive practice. However, previous studies have demonstrated that practice improved discrimination only around oblique orientations, while performance around cardinal orientations (vertical or horizontal) remained stable despite extensive training. The two experiments described here used event-related potentials (ERPs) to investigate the neural substrates underlying the different training effects for the two kinds of orientation. Event-related potentials were recorded from subjects while they were trained on a grating orientation discrimination task. Psychophysical threshold measurements were performed before and after the training. For oblique gratings, psychophysical thresholds decreased significantly across training sessions. ERPs showed larger P2 and P3 amplitudes and smaller N1 amplitudes over the parietal/occipital areas with more practice. In line with the psychophysical thresholds, the training effect on the P2 and P3 was specific to stimulus orientation. However, the N1 effect generalized over differently oriented grating stimuli. For cardinally oriented gratings, no significant changes were found in the psychophysical thresholds during the training. ERPs still showed a generalized N1 effect similar to that for the oblique gratings. However, the amplitudes of P2 and P3 were unchanged during the whole training. Compared with cardinal orientations, more visual processing stages and later ERP components were involved in the training of oblique orientation discrimination. These results contribute to understanding the neural basis of the asymmetry between cardinal and oblique orientation training effects. Copyright 2009 Elsevier B.V. All rights reserved.
Lim, Jongil; Palmer, Christopher J; Busa, Michael A; Amado, Avelino; Rosado, Luis D; Ducharme, Scott W; Simon, Darnell; Van Emmerik, Richard E A
2017-06-01
The pickup of visual information is critical for controlling movement and maintaining situational awareness in dangerous situations. Altered coordination while wearing protective equipment may impact the likelihood of injury or death. This investigation examined the consequences of load magnitude and distribution on situational awareness, segmental coordination, and head gaze in several protective equipment ensembles. Twelve soldiers stepped down onto force plates and were instructed to quickly and accurately identify visual information while establishing marksmanship posture in protective equipment. Time to discriminate visual information was extended when additional pack and helmet loads were added, with the small increase in helmet load having the largest effect. Greater head-leading and in-phase trunk-head coordination were found with lighter pack loads, while trunk-leading coordination increased and head gaze dynamics were more disrupted with heavier pack loads. Additional armour load in the vest had no consequences for time to discriminate, coordination, or head dynamics. This suggests that the addition of head-borne load should be carefully considered when integrating new technology, and that up-armouring does not necessarily have negative consequences for marksmanship performance. Practitioner Summary: Understanding the trade-space between protection and reductions in task performance continues to challenge those developing personal protective equipment. These methods provide an approach that can help optimise equipment design and loading techniques by quantifying changes in task performance and the emergent coordination dynamics that underlie that performance.
Time course of discrimination between emotional facial expressions: the role of visual saliency.
Calvo, Manuel G; Nummenmaa, Lauri
2011-08-01
Saccadic and manual responses were used to investigate the speed of discrimination between happy and non-happy facial expressions in two-alternative-forced-choice tasks. The minimum latencies of correct saccadic responses indicated that the earliest time point at which discrimination occurred ranged between 200 and 280 ms, depending on type of expression. Corresponding minimum latencies for manual responses ranged between 440 and 500 ms. For both response modalities, visual saliency of the mouth region was a critical factor in facilitating discrimination: The more salient the mouth was in happy face targets in comparison with non-happy distracters, the faster discrimination was. Global image characteristics (e.g., luminance) and semantic factors (i.e., categorical similarity and affective valence of expression) made minor or no contribution to discrimination efficiency. This suggests that visual saliency of distinctive facial features, rather than the significance of expression, is used to make both early and later expression discrimination decisions. Copyright © 2011 Elsevier Ltd. All rights reserved.
Einstein, Michael C; Polack, Pierre-Olivier; Tran, Duy T; Golshani, Peyman
2017-05-17
Low-frequency membrane potential (Vm) oscillations were once thought to occur only in sleeping and anesthetized states. Recently, low-frequency Vm oscillations have been described in inactive awake animals, but it is unclear whether they shape sensory processing in neurons and whether they occur during active awake behavioral states. To answer these questions, we performed two-photon guided whole-cell Vm recordings from primary visual cortex layer 2/3 excitatory and inhibitory neurons in awake mice during passive visual stimulation and performance of visual and auditory discrimination tasks. We recorded stereotyped 3-5 Hz Vm oscillations in which the Vm baseline hyperpolarized as the Vm underwent high-amplitude rhythmic fluctuations lasting 1-2 s in duration. When 3-5 Hz Vm oscillations coincided with visual cues, excitatory neuron responses to preferred cues were significantly reduced. Despite this disruption to sensory processing, visual cues were critical for evoking 3-5 Hz Vm oscillations when animals performed discrimination tasks and passively viewed drifting grating stimuli. Using pupillometry and animal locomotive speed as indicators of arousal, we found that 3-5 Hz oscillations were not restricted to unaroused states and that they occurred equally in aroused and unaroused states. Therefore, low-frequency Vm oscillations play a role in shaping sensory processing in visual cortical neurons, even during active wakefulness and decision making. SIGNIFICANCE STATEMENT A neuron's membrane potential (Vm) strongly shapes how information is processed in sensory cortices of awake animals. Yet, very little is known about how low-frequency Vm oscillations influence sensory processing and whether they occur in aroused awake animals.
By performing two-photon guided whole-cell recordings from layer 2/3 excitatory and inhibitory neurons in the visual cortex of awake behaving animals, we found visually evoked stereotyped 3-5 Hz Vm oscillations that disrupt excitatory responsiveness to visual stimuli. Moreover, these oscillations occurred when animals were in both high and low arousal states as measured by animal speed and pupillometry. These findings show, for the first time, that low-frequency Vm oscillations can significantly modulate sensory signal processing, even in awake active animals. Copyright © 2017 the authors 0270-6474/17/375084-15$15.00/0.
NASA Astrophysics Data System (ADS)
Wan, Qianwen; Panetta, Karen; Agaian, Sos
2017-05-01
Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality, non-uniform illumination, and variations in pose and facial expression can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system inspired by the human visual system and based on the so-called logarithmical image visualization technique. In this paper, the proposed method, for the first time, couples the logarithmical image visualization technique with the local binary pattern to perform discriminative feature extraction for a facial recognition system. The Yale database, the Yale-B database, and the ATT database are used for accuracy and efficiency testing in computer simulation. The extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness to illumination variation for facial recognition.
Efficiencies for the statistics of size discrimination.
Solomon, Joshua A; Morgan, Michael; Chubb, Charles
2011-10-19
Different laboratories have achieved a consensus regarding how well human observers can estimate the average orientation in a set of N objects. Such estimates are not only limited by visual noise, which perturbs the visual signal of each object's orientation, they are also inefficient: Observers effectively use only √N objects in their estimates (e.g., S. C. Dakin, 2001; J. A. Solomon, 2010). More controversial is the efficiency with which observers can estimate the average size in an array of circles (e.g., D. Ariely, 2001, 2008; S. C. Chong, S. J. Joo, T.-A. Emmanouil, & A. Treisman, 2008; K. Myczek & D. J. Simons, 2008). Of course, there are some important differences between orientation and size; nonetheless, it seemed sensible to compare the two types of estimate against the same ideal observer. Indeed, quantitative evaluation of statistical efficiency requires this sort of comparison (R. A. Fisher, 1925). Our first step was to measure the noise that limits size estimates when only two circles are compared. Our results (Weber fractions between 0.07 and 0.14 were necessary for 84% correct 2AFC performance) are consistent with the visual system adding the same amount of Gaussian noise to all logarithmically transduced circle diameters. We exaggerated this visual noise by randomly varying the diameters in (uncrowded) arrays of 1, 2, 4, and 8 circles and measured its effect on discrimination between mean sizes. Efficiencies inferred from all four observers significantly exceed 25% and, in two cases, approach 100%. More consistent are our measurements of just-noticeable differences in size variance. These latter results suggest between 62 and 75% efficiency for variance discriminations. Although our observers were no more efficient comparing size variances than they were at comparing mean sizes, they were significantly more precise. 
In other words, our results contain evidence for a non-negligible source of late noise that limits mean discriminations but not variance discriminations.
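The notion of statistical efficiency used above can be illustrated with a toy calculation: an observer who effectively averages only n_used of the n_items available samples attains efficiency n_used / n_items relative to the ideal. The sqrt(N) effective-sample rule below is an assumption drawn from the averaging literature the abstract cites, not a measurement:

```python
import numpy as np

def threshold_sd(n_used, noise_sd=1.0):
    # Standard deviation of a mean computed from n_used noisy samples.
    return noise_sd / np.sqrt(n_used)

def efficiency(n_items, n_used):
    """Statistical efficiency of an observer who effectively averages
    n_used of the n_items available samples: the squared ratio of the
    ideal threshold to the observed threshold, which reduces to
    n_used / n_items."""
    return (threshold_sd(n_items) / threshold_sd(n_used)) ** 2

print(efficiency(8, 8))            # ideal observer -> 1.0
print(efficiency(8, np.sqrt(8)))   # "sqrt(N)" rule -> ~0.354
```

On this scale, the reported efficiencies exceeding 25% (and approaching 100%) for mean-size judgments imply far more than sqrt(N) samples were effectively used.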
Motor and cognitive growth following a Football Training Program.
Alesi, Marianna; Bianco, Antonino; Padulo, Johnny; Luppina, Giorgio; Petrucci, Marco; Paoli, Antonio; Palma, Antonio; Pepi, Annamaria
2015-01-01
Motor and cognitive growth in children may be influenced by football practice. The aim of this study was therefore to assess whether a Football Training Program undertaken over 6 months would improve motor and cognitive performance in children. Motor skills comprised coordinative skills, running, and explosive leg strength. Cognitive abilities involved visual discrimination times and visual selective attention times. Forty-six children with a chronological age of ∼9.10 years were divided into two groups: Group 1 (n = 24) attended a Football Exercise Program and Group 2 (n = 22) was composed of sedentary children. Their abilities were measured by a battery of tests including motor and cognitive tasks. The Football Exercise Program resulted in improved running, coordination, and explosive leg strength performance, as well as shorter visual discrimination times, in children regularly attending football courses compared with their sedentary peers. On the whole, these results support the thesis that the improvement of motor and cognitive abilities is related not only to general physical activity but also to specific abilities related to the ball. Football Exercise Programs are assumed to be a "natural and enjoyable tool" to enhance cognitive resources as well as to promote and encourage participation in sport activities from early development.
Perceptual and academic patterns of learning-disabled/gifted students.
Waldron, K A; Saphire, D G
1992-04-01
This research explored ways gifted children with learning disabilities perceive and recall auditory and visual input and apply this information to reading, mathematics, and spelling. Twenty-four learning-disabled/gifted children and a matched control group of normally achieving gifted students were tested for oral reading, word recognition and analysis, listening comprehension, and spelling. In mathematics, they were tested for numeration, mental and written computation, word problems, and numerical reasoning. To explore perception and memory skills, students were administered formal tests of visual and auditory memory as well as auditory discrimination of sounds. Their responses to reading and to mathematical computations were further considered for evidence of problems in visual discrimination, visual sequencing, and visual spatial areas. Analyses indicated that these learning-disabled/gifted students were significantly weaker than controls in their decoding skills, in spelling, and in most areas of mathematics. They were also significantly weaker in auditory discrimination and memory, and in visual discrimination, sequencing, and spatial abilities. Conclusions are that these underlying perceptual and memory deficits may be related to students' academic problems.
Hager, Audrey M; Dringenberg, Hans C
2012-12-01
The rat visual system is structured such that the large (>90 %) majority of retinal ganglion axons reach the contralateral lateral geniculate nucleus (LGN) and visual cortex (V1). This anatomical design allows for the relatively selective activation of one cerebral hemisphere under monocular viewing conditions. Here, we describe the design of a harness and face mask allowing simple and noninvasive monocular occlusion in rats. The harness is constructed from synthetic fiber (shoelace-type material) and fits around the girth region and neck, allowing for easy adjustments to fit rats of various weights. The face mask consists of soft rubber material that is attached to the harness by Velcro strips. Eyeholes in the mask can be covered by additional Velcro patches to occlude either one or both eyes. Rats readily adapt to wearing the device, allowing behavioral testing under different types of viewing conditions. We show that rats successfully acquire a water-maze-based visual discrimination task under monocular viewing conditions. Following task acquisition, interocular transfer was assessed. Performance with the previously occluded, "untrained" eye was impaired, suggesting that training effects were partially confined to one cerebral hemisphere. The method described herein provides a simple and noninvasive means to restrict visual input for studies of visual processing and learning in various rodent species.
Seki, Yoshimasa; Okanoya, Kazuo
2008-02-01
Both visual and auditory information are important for songbirds, especially in developmental and sexual contexts. To investigate bimodal cognition in songbirds, the authors conducted audiovisual discrimination training in Bengalese finches. The authors used two types of stimulus: an "artificial stimulus," which is a combination of simple figures and sound, and a "biological stimulus," consisting of video images of singing males along with their songs. The authors found that while both sexes predominantly used visual cues in the discrimination tasks, males tended to be more dependent on auditory information for the biological stimulus. Female responses were always dependent on the visual stimulus for both stimulus types. Only males changed their discrimination strategy according to stimulus type. Although males used both visual and auditory cues for the biological stimulus, they responded to the artificial stimulus depending only on visual information, as the females did. These findings suggest a sex difference in innate auditory sensitivity. (c) 2008 APA.
Bahrick, Lorraine E.; Lickliter, Robert; Castellanos, Irina
2014-01-01
Although research has demonstrated impressive face perception skills of young infants, little attention has focused on conditions that enhance versus impair infant face perception. The present studies tested the prediction, generated from the Intersensory Redundancy Hypothesis (IRH), that face discrimination, which relies on detection of visual featural information, would be impaired in the context of intersensory redundancy provided by audiovisual speech, and enhanced in the absence of intersensory redundancy (unimodal visual and asynchronous audiovisual speech) in early development. Later in development, following improvements in attention, faces should be discriminated in both redundant audiovisual and nonredundant stimulation. Results supported these predictions. Two-month-old infants discriminated a novel face in unimodal visual and asynchronous audiovisual speech but not in synchronous audiovisual speech. By 3 months, face discrimination was evident even during synchronous audiovisual speech. These findings indicate that infant face perception is enhanced and emerges developmentally earlier following unimodal visual than synchronous audiovisual exposure and that intersensory redundancy generated by naturalistic audiovisual speech can interfere with face processing. PMID:23244407
Real-Time Strategy Video Game Experience and Visual Perceptual Learning.
Kim, Yong-Hwan; Kang, Dong-Wha; Kim, Dongho; Kim, Hye-Jin; Sasaki, Yuka; Watanabe, Takeo
2015-07-22
Visual perceptual learning (VPL) is defined as long-term improvement in performance on a visual-perception task after visual experiences or training. Early studies have found that VPL is highly specific for the trained feature and location, suggesting that VPL is associated with changes in the early visual cortex. However, the generality of visual skills enhancement attributable to action video-game experience suggests that VPL can result from improvement in higher cognitive skills. If so, experience in real-time strategy (RTS) video-game play, which may heavily involve cognitive skills, may also facilitate VPL. To test this hypothesis, we compared VPL between RTS video-game players (VGPs) and non-VGPs (NVGPs) and elucidated underlying structural and functional neural mechanisms. Healthy young human subjects underwent six training sessions on a texture discrimination task. Diffusion-tensor and functional magnetic resonance imaging were performed before and after training. VGPs performed better than NVGPs in the early phase of training. White-matter connectivity between the right external capsule and visual cortex and neuronal activity in the right inferior frontal gyrus (IFG) and anterior cingulate cortex (ACC) were greater in VGPs than NVGPs and were significantly correlated with RTS video-game experience. In both VGPs and NVGPs, there was task-related neuronal activity in the right IFG, ACC, and striatum, which was strengthened after training. These results indicate that RTS video-game experience, associated with changes in higher-order cognitive functions and connectivity between visual and cognitive areas, facilitates VPL in early phases of training. The results support the hypothesis that VPL involves areas beyond the visual cortex, rather than only visual areas.
Significance statement: Although early studies found that visual perceptual learning (VPL) is associated with involvement of the visual cortex, generality of visual skills enhancement by action video-game experience suggests that higher-order cognition may be involved in VPL. If so, real-time strategy (RTS) video-game experience may facilitate VPL as a result of heavy involvement of cognitive skills. Here, we compared VPL between RTS video-game players (VGPs) and non-VGPs (NVGPs) and investigated the underlying neural mechanisms. VGPs showed better performance in the early phase of training on the texture discrimination task and greater level of neuronal activity in cognitive areas and structural connectivity between visual and cognitive areas than NVGPs. These results support the hypothesis that VPL can occur beyond the visual cortex. Copyright © 2015 the authors 0270-6474/15/3510485-08$15.00/0.
Aging and the visual, haptic, and cross-modal perception of natural object shape.
Norman, J Farley; Crabtree, Charles E; Norman, Hideko F; Moncrief, Brandon K; Herrmann, Molly; Kapley, Noah
2006-01-01
One hundred observers participated in two experiments designed to investigate aging and the perception of natural object shape. In the experiments, younger and older observers performed either a same/different shape discrimination task (experiment 1) or a cross-modal matching task (experiment 2). Quantitative effects of age were found in both experiments. The effect of age in experiment 1 was limited to cross-modal shape discrimination: there was no effect of age upon unimodal (ie within a single perceptual modality) shape discrimination. The effect of age in experiment 2 was eliminated when the older observers were either given an unlimited amount of time to perform the task or when the number of response alternatives was decreased. Overall, the results of the experiments reveal that older observers can effectively perceive 3-D shape from both vision and haptics.
What visual information is used for stereoscopic depth displacement discrimination?
Nefs, Harold T; Harris, Julie M
2010-01-01
There are two ways to detect a displacement in stereoscopic depth, namely by monitoring the change in disparity over time (CDOT) or by monitoring the interocular velocity difference (IOVD). Though previous studies have attempted to understand which cue is most significant for the visual system, none has designed stimuli that provide a comparison in terms of relative efficiency between them. Here we used two-frame motion and random-dot noise to deliver equivalent strengths of CDOT and IOVD information to the visual system. Using three kinds of random-dot stimuli, we were able to isolate CDOT or IOVD or deliver both simultaneously. The proportion of dots delivering CDOT or IOVD signals could be varied, and we defined the discrimination threshold as the proportion needed to detect the direction of displacement (towards or away). Thresholds were similar for stimuli containing CDOT only, and containing both CDOT and IOVD, but only one participant was able to consistently perceive the displacement for stimuli containing only IOVD. We also investigated the effect of disparity pedestals on discrimination. Performance was best when the displacement crossed the reference plane, but was not significantly different for stimuli containing CDOT only and those containing both CDOT and IOVD. When stimuli are specifically designed to provide equivalent two-frame motion or disparity-change, few participants can reliably detect displacement when IOVD is the only cue. This challenges the notion that IOVD is involved in the discrimination of direction of displacement in two-frame motion displays.
Plescia, Fulvio; Sardo, Pierangelo; Rizzo, Valerio; Cacace, Silvana; Marino, Rosa Anna Maria; Brancato, Anna; Ferraro, Giuseppe; Carletti, Fabio; Cannizzaro, Carla
2014-01-01
Neurosteroids can alter neuronal excitability by interacting with specific neurotransmitter receptors, thus affecting several functions such as cognition and emotionality. In this study we investigated, in adult male rats, the effects of acute administration of pregnenolone-sulfate (PREGS) (10 mg/kg, s.c.) on cognitive processes using the Can test, a non-aversive spatial/visual task which allows the assessment of both spatial orientation-acquisition and object discrimination in a simple and in a complex version of the visual task. Electrophysiological recordings were also performed in vivo after acute systemic PREGS administration, in order to investigate neuronal activation in the hippocampus and the perirhinal cortex. Our results indicate that PREGS induces an improvement in spatial orientation-acquisition and in object discrimination in both the simple and the complex visual task; the behavioural responses were also confirmed by electrophysiological recordings showing a potentiation of neuronal activity in the hippocampus and the perirhinal cortex. In conclusion, this study demonstrates that systemic PREGS administration in rats exerts cognitive-enhancing properties involving both the acquisition and utilization of spatial information and object discrimination memory, and relates the observed behavioural potentiation to an increase in the neuronal firing of discrete cerebral areas critical for spatial learning and object recognition. This provides further evidence in support of a protective and enhancing role of PREGS in memory. Copyright © 2013. Published by Elsevier B.V.
Tanaka, Tomohiro; Nishida, Satoshi
2015-01-01
The neuronal processes that underlie visual searches can be divided into two stages: target discrimination and saccade preparation/generation. This two-stage model predicts that the duration of the prediscrimination stage varies with search difficulty across different stimulus conditions, whereas the duration of the latter, postdiscrimination stage is stimulus invariant. However, recent studies have suggested that the length of the postdiscrimination interval changes with different stimulus conditions. To address whether and how the visual stimulus affects determination of the postdiscrimination interval, we recorded single-neuron activity in the lateral intraparietal area (LIP) when monkeys (Macaca fuscata) performed a color-singleton search involving four stimulus conditions that differed regarding luminance (Bright vs. Dim) and target-distractor color similarity (Easy vs. Difficult). We specifically focused on comparing activities between the Bright-Difficult and Dim-Easy conditions, in which the visual stimuli were considerably different, but the mean reaction times were indistinguishable. This allowed us to examine the neuronal activity when the difference in the degree of search speed between different stimulus conditions was minimal. We found that not only prediscrimination but also postdiscrimination intervals varied across stimulus conditions: the postdiscrimination interval was longer in the Dim-Easy condition than in the Bright-Difficult condition. Further analysis revealed that the postdiscrimination interval might vary with stimulus luminance. A computer simulation using an accumulation-to-threshold model suggested that the luminance-related difference in visual response strength at discrimination time could be the cause of different postdiscrimination intervals. PMID:25995344
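The accumulation-to-threshold idea invoked above can be sketched in a few lines: a noisy accumulator integrates evidence at a drift rate tied to visual response strength, and a weaker (dimmer) response reaches the fixed bound later. This is a minimal illustrative sketch, not the authors' simulation; the drift rates and noise level are hypothetical.

```python
import numpy as np

def time_to_threshold(drift, rng, threshold=1.0, noise=0.1, dt=0.001):
    """Simulate one noisy accumulator; return the time (s) at which the
    accumulated evidence first reaches the threshold."""
    x, t = 0.0, 0.0
    while x < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

rng = np.random.default_rng(42)
# Hypothetical drift rates: a stronger visual response (bright stimulus)
# versus a weaker one (dim stimulus).
bright = np.mean([time_to_threshold(5.0, rng) for _ in range(200)])
dim = np.mean([time_to_threshold(3.0, rng) for _ in range(200)])
print(f"mean time to threshold: bright {bright:.3f} s, dim {dim:.3f} s")
```

On average the dim-condition accumulator takes longer to reach the bound, mirroring the longer postdiscrimination interval reported for the Dim-Easy condition.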
Sounds Activate Visual Cortex and Improve Visual Discrimination
Störmer, Viola S.; Martinez, Antigona; McDonald, John J.; Hillyard, Steven A.
2014-01-01
A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. PMID:25031419
Cavanaugh, Matthew R; Barbot, Antoine; Carrasco, Marisa; Huxlin, Krystel R
2017-12-10
Training chronic, cortically-blind (CB) patients on a coarse [left-right] direction discrimination and integration (CDDI) task recovers performance on this task at trained, blind field locations. However, fine direction difference (FDD) thresholds remain elevated at these locations, limiting the usefulness of recovered vision in daily life. Here, we asked if this FDD impairment can be overcome by training CB subjects with endogenous, feature-based attention (FBA) cues. Ten CB subjects were recruited and trained on CDDI and FDD with an FBA cue or FDD with a neutral cue. After completion of each training protocol, FDD thresholds were re-measured with both neutral and FBA cues at trained, blind-field locations and at corresponding, intact-field locations. In intact portions of the visual field, FDD thresholds were lower when tested with FBA than neutral cues. Training subjects in the blind field on the CDDI task improved FDD performance to the point that a threshold could be measured, but these locations remained impaired relative to the intact field. FDD training with neutral cues resulted in better blind field FDD thresholds than CDDI training, but thresholds remained impaired relative to intact field levels, regardless of testing cue condition. Importantly, training FDD in the blind field with FBA lowered FDD thresholds relative to CDDI training, and allowed the blind field to reach thresholds similar to the intact field, even when FBA trained subjects were tested with a neutral rather than FBA cue. Finally, FDD training appeared to also recover normal integration thresholds at trained, blind-field locations, providing an interesting double dissociation with respect to CDDI training. In summary, mechanisms governing FBA appear to function normally in both intact and impaired regions of the visual field following V1 damage. Our results mark the first time that FDD thresholds in CB fields have been seen to reach intact field levels of performance. 
Moreover, FBA can be leveraged during visual training to recover normal, fine direction discrimination and integration performance at trained, blind-field locations, potentiating visual recovery of more complex and precise aspects of motion perception in cortically-blinded fields. Copyright © 2017 Elsevier Ltd. All rights reserved.
Visual Acuity Using Head-fixed Displays During Passive Self and Surround Motion
NASA Technical Reports Server (NTRS)
Wood, Scott J.; Black, F. Owen; Stallings, Valerie; Peters, Brian
2007-01-01
The ability to read head-fixed displays on various motion platforms requires the suppression of vestibulo-ocular reflexes. This study examined dynamic visual acuity while viewing a head-fixed display during different self and surround rotation conditions. Twelve healthy subjects were asked to report the orientation of Landolt C optotypes presented on a micro-display fixed to a rotating chair at 50 cm distance. Acuity thresholds were determined by the lowest size at which the subjects correctly identified 3 of 5 optotype orientations at peak velocity. Visual acuity was compared across four different conditions, each tested at 0.05 and 0.4 Hz (peak amplitude of 57 deg/s). The four conditions included: subject rotated in semi-darkness (i.e., limited to background illumination of the display), subject stationary while visual scene rotated, subject rotated around a stationary visual background, and both subject and visual scene rotated together. Visual acuity performance was greatest when the subject rotated around a stationary visual background; i.e., when both vestibular and visual inputs provided concordant information about the motion. Visual acuity performance was most reduced when the subject and visual scene rotated together; i.e., when the visual scene provided discordant information about the motion. Ranges of 4-5 logMAR step sizes across the conditions indicated the acuity task was sufficient to discriminate visual performance levels. The background visual scene can influence the ability to read head-fixed displays during passive motion disturbances. Dynamic visual acuity using head-fixed displays can provide an operationally relevant screening tool for visual performance during exposure to novel acceleration environments.
Musicians' edge: A comparison of auditory processing, cognitive abilities and statistical learning.
Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Demuth, Katherine; Arciuli, Joanne
2016-12-01
It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians' advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N = 17) and non-musicians (N = 18). Musicians had significantly better performance than non-musicians on frequency discrimination and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians' superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli. Copyright © 2016 Elsevier B.V. All rights reserved.
Liu, Xiaona; Zhang, Qiao; Wu, Zhisheng; Shi, Xinyuan; Zhao, Na; Qiao, Yanjiang
2015-01-01
Laser-induced breakdown spectroscopy (LIBS) was applied to perform a rapid elemental analysis and provenance study of Blumea balsamifera DC. Principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) were implemented to exploit the multivariate nature of the LIBS data. Scores and loadings of computed principal components visually illustrated the differing spectral data. The PLS-DA algorithm showed good classification performance. The PLS-DA model using complete spectra as input variables had similar discrimination performance to using selected spectral lines as input variables. The down-selection of spectral lines was specifically focused on the major elements of B. balsamifera samples. Results indicated that LIBS could be used to rapidly analyze elements and to perform provenance study of B. balsamifera. PMID:25558999
Vernier But Not Grating Acuity Contributes to an Early Stage of Visual Word Processing.
Tan, Yufei; Tong, Xiuhong; Chen, Wei; Weng, Xuchu; He, Sheng; Zhao, Jing
2018-03-28
The process of reading words depends heavily on efficient visual skills, including analyzing and decomposing basic visual features. Surprisingly, previous reading-related studies have almost exclusively focused on gross aspects of visual skills, while only very few have investigated the role of finer skills. The present study filled this gap and examined the relations of two finer visual skills measured by grating acuity (the ability to resolve periodic luminance variations across space) and Vernier acuity (the ability to detect/discriminate relative locations of features) to Chinese character-processing as measured by character form-matching and lexical decision tasks in skilled adult readers. The results showed that Vernier acuity was significantly correlated with performance in character form-matching but not visual symbol form-matching, while no correlation was found between grating acuity and character processing. Interestingly, we found no correlation of the two visual skills with lexical decision performance. These findings provide for the first time empirical evidence that the finer visual skills, particularly as reflected in Vernier acuity, may directly contribute to an early stage of hierarchical word processing.
Devue, Christel; Barsics, Catherine
2016-10-01
Most humans seem to demonstrate astonishingly high levels of skill in face processing if one considers the sophisticated level of fine-tuned discrimination that face recognition requires. However, numerous studies now indicate that the ability to process faces is not as fundamental as once thought and that performance can range from despairingly poor to extraordinarily high across people. Here we studied people who are super specialists of faces, namely portrait artists, to examine how their specific visual experience with faces relates to a range of face processing skills (perceptual discrimination, short- and longer term recognition). Artists show better perceptual discrimination and, to some extent, recognition of newly learned faces than controls. They are also more accurate on other perceptual tasks (i.e., involving non-face stimuli or mental rotation). By contrast, artists do not display an advantage compared to controls on longer term face recognition (i.e., famous faces) nor on person recognition from other sensorial modalities (i.e., voices). Finally, the face inversion effect exists in artists and controls and is not modulated by artistic practice. Advantages in face processing for artists thus seem to closely mirror perceptual and visual short term memory skills involved in portraiture. Copyright © 2016 Elsevier Ltd. All rights reserved.
The role of the hippocampus in object discrimination based on visual features.
Levcik, David; Nekovarova, Tereza; Antosova, Eliska; Stuchlik, Ales; Klement, Daniel
2018-06-07
The role of rodent hippocampus has been intensively studied in different cognitive tasks. However, its role in discrimination of objects remains controversial due to conflicting findings. We tested whether the number and type of features available for the identification of objects might affect the strategy (hippocampal-independent vs. hippocampal-dependent) that rats adopt to solve object discrimination tasks. We trained rats to discriminate 2D visual objects presented on a computer screen. The objects were defined either by their shape only or by multiple features (a combination of filling pattern and brightness in addition to the shape). Our data showed that objects displayed as simple geometric shapes are not discriminated by trained rats after their hippocampi had been bilaterally inactivated by the GABA-A agonist muscimol. On the other hand, objects containing a specific combination of non-geometric features in addition to the shape are discriminated even without the hippocampus. Our results suggest that the involvement of the hippocampus in visual object discrimination depends on the abundance of the object's features. Copyright © 2018. Published by Elsevier Inc.
The visual discrimination of negative facial expressions by younger and older adults.
Mienaltowski, Andrew; Johnson, Ellen R; Wittman, Rebecca; Wilson, Anne-Taylor; Sturycz, Cassandra; Norman, J Farley
2013-04-05
Previous research has demonstrated that older adults are not as accurate as younger adults at perceiving negative emotions in facial expressions. These studies rely on emotion recognition tasks that involve choosing between many alternatives, creating the possibility that age differences emerge for cognitive rather than perceptual reasons. In the present study, an emotion discrimination task was used to investigate younger and older adults' ability to visually discriminate between negative emotional facial expressions (anger, sadness, fear, and disgust) at low (40%) and high (80%) expressive intensity. Participants completed trials blocked by pairs of emotions. Discrimination ability was quantified from the participants' responses using signal detection measures. In general, the results indicated that older adults had more difficulty discriminating between low intensity expressions of negative emotions than did younger adults. However, younger and older adults did not differ when discriminating between anger and sadness. These findings demonstrate that age differences in visual emotion discrimination emerge when signal detection measures are used but that these differences are not uniform and occur only in specific contexts.
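The abstract above quantifies discrimination ability with signal detection measures. A minimal sketch of the standard sensitivity index d′ (z-transformed hit rate minus z-transformed false-alarm rate) follows; the trial counts are hypothetical, chosen only to illustrate a less-sensitive versus a more-sensitive observer, and the log-linear correction is one common way to avoid infinite z-scores.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction (+0.5 to each cell) keeps rates off 0 and 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one emotion pair at low expressive intensity:
# an older adult discriminating less reliably than a younger adult.
older = d_prime(hits=30, misses=20, false_alarms=15, correct_rejections=35)
younger = d_prime(hits=42, misses=8, false_alarms=6, correct_rejections=44)
print(f"older d' = {older:.2f}, younger d' = {younger:.2f}")
```

Because d′ separates sensitivity from response bias, age differences measured this way reflect perceptual discriminability rather than a tendency to favor one response.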
Development of a computerized visual search test.
Reid, Denise; Babani, Harsha; Jon, Eugenia
2009-09-01
Visual attention and visual search are features of visual perception essential for attending to and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information, including the format of the test, will be described. The test was designed to provide an alternative to existing cancellation tests. Data from two pilot studies that examined some aspects of the test's validity will be reported. To date, our assessment of the test shows that it discriminates between healthy and head-injured persons. More research and development work is required to examine task performance changes in relation to task complexity. It is suggested that the conceptual design of the test is worthy of further investigation.
A toolbox to visually explore cerebellar shape changes in cerebellar disease and dysfunction.
Abulnaga, S Mazdak; Yang, Zhen; Carass, Aaron; Kansal, Kalyani; Jedynak, Bruno M; Onyike, Chiadi U; Ying, Sarah H; Prince, Jerry L
2016-02-27
The cerebellum plays an important role in motor control and is also involved in cognitive processes. Cerebellar function is specialized by location, although the exact topographic functional relationship is not fully understood. The spinocerebellar ataxias are a group of neurodegenerative diseases that cause regional atrophy in the cerebellum, yielding distinct motor and cognitive problems. The ability to study the region-specific atrophy patterns can provide insight into the problem of relating cerebellar function to location. In an effort to study these structural change patterns, we developed a toolbox in MATLAB to provide researchers a unique way to visually explore the correlation between cerebellar lobule shape changes and function loss, with a rich set of visualization and analysis modules. In this paper, we outline the functions and highlight the utility of the toolbox. The toolbox takes as input landmark shape representations of subjects' cerebellar substructures. A principal component analysis is used for dimension reduction. Following this, a linear discriminant analysis and a regression analysis can be performed to find the discriminant direction associated with a specific disease type, or the regression line of a specific functional measure can be generated. The characteristic structural change pattern of a disease type or of a functional score is visualized by sampling points on the discriminant or regression line. The sampled points are used to reconstruct synthetic cerebellar lobule shapes. We show a few case studies highlighting the utility of the toolbox and compare the analysis results with the literature.
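The analysis chain the toolbox describes (PCA for dimension reduction, LDA for a discriminant direction, sampling along that direction, and reconstructing synthetic shapes) can be sketched in Python. The toolbox itself is in MATLAB; the landmark data, group sizes, and shift direction below are entirely synthetic placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Hypothetical landmark data: 40 subjects x 30 coordinates (15 2-D
# landmarks), with a "disease" group shifted along one structural direction.
direction = rng.normal(size=30)
direction /= np.linalg.norm(direction)
healthy = rng.normal(size=(20, 30))
disease = rng.normal(size=(20, 30)) + 2.0 * direction
X = np.vstack([healthy, disease])
y = np.array([0] * 20 + [1] * 20)

# Dimension reduction, then a discriminant direction in PC space.
pca = PCA(n_components=5).fit(X)
Z = pca.transform(X)
lda = LinearDiscriminantAnalysis().fit(Z, y)
w = lda.scalings_[:, 0] / np.linalg.norm(lda.scalings_[:, 0])

# Sample points along the discriminant line and map them back to landmark
# space to produce synthetic shapes showing the characteristic change.
center = Z.mean(axis=0)
synthetic = [pca.inverse_transform(center + t * w) for t in (-3, 0, 3)]
print(len(synthetic), synthetic[0].shape)
```

Each reconstructed vector can then be rendered as a landmark configuration, so moving along the discriminant line animates the healthy-to-disease shape change.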
Orientation Discrimination with Macular Changes Associated with Early AMD
Bedell, Harold E.; Tong, Jianliang; Woo, Stanley Y.; House, Jon R.; Nguyen, Tammy
2010-01-01
Purpose: Age-related macular degeneration (AMD) is a condition that progressively reduces central vision in elderly individuals, resulting in a reduced capacity to perform many daily activities and a diminished quality of life. Recent studies identified clinical treatments that can slow or reverse the progression of exudative (wet) AMD and ongoing research is evaluating earlier interventions. Because early diagnosis is critical for an optimal outcome, the goal of this study is to assess psychophysical orientation discrimination for randomly positioned short line segments as a potential indicator of subtle macular changes in eyes with early AMD. Methods: Orientation discrimination was measured in a sample of 74 eyes of patients aged 47 to 82 years, none of which had intermediate or advanced AMD. Amsler-grid testing was performed as well. A masked examiner graded each eye as level 0, 1, 2, or 3 on a streamlined version of the Age-Related Eye Disease Study (AREDS) scale for AMD, based on the presence and extent of macular drusen or retinal pigment epithelium (RPE) changes. Visual acuity in the 74 eyes ranged from 20/15 to 20/40+1, with no significant differences among the grading levels. Humphrey 10–2 and Nidek MP-1 micro-perimetry were used to assess retinal sensitivity at test locations 1° from the locus of fixation. Results: Average orientation-discrimination thresholds increased systematically from 7.4° to 11.3° according to the level of macular changes. In contrast, only 3 of 74 eyes exhibited abnormalities on the Amsler grid and central-field perimetric defects occurred with approximately equal probability at all grading levels. Conclusions: In contrast to Amsler grid and central-visual-field testing, psychophysical orientation discrimination has the capability to distinguish between eyes with and without subtle age-related macular changes. PMID:19319009
The role of explicit and implicit standards in visual speed discrimination.
Norman, J Farley; Pattison, Kristina F; Norman, Hideko F; Craft, Amy E; Wiesemann, Elizabeth Y; Taylor, M Jett
2008-01-01
Five experiments were designed to investigate visual speed discrimination. Variations of the method of constant stimuli were used to obtain speed discrimination thresholds in experiments 1, 2, 4, and 5, while the method of single stimuli was used in experiment 3. The observers' thresholds were significantly influenced by the choice of psychophysical method and by changes in the standard speed. The observers' judgments were unaffected, however, by changes in the magnitude of random variations in stimulus duration, reinforcing the conclusions of Lappin et al (1975 Journal of Experimental Psychology: Human Perception and Performance 1 383 394). When an implicit standard was used, the observers produced relatively low discrimination thresholds (7.0% of the standard speed), verifying the results of McKee (1981 Vision Research 21 491-500). When an explicit standard was used in a 2AFC variant of the method of constant stimuli, however, the observers' discrimination thresholds increased by 74% (to 12.2%), resembling the high thresholds obtained by Mandriota et al (1962 Science 138 437-438). A subsequent signal-detection analysis revealed that the observers' actual sensitivities to differences in speed were in fact equivalent for both psychophysical methods. The formation of an implicit standard in the method of single stimuli allows human observers to make judgments of speed that are as precise as those obtained when explicit standards are available.
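The method-of-constant-stimuli thresholds discussed above are typically obtained by fitting a psychometric function to the proportion of "faster" judgments across comparison speeds. A minimal sketch follows, fitting a cumulative Gaussian; the response data are hypothetical, and the threshold definition used here (half the 25%-75% spread, expressed as a Weber fraction) is one common convention, not necessarily the one used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical data: comparison speeds (deg/s) around a 10 deg/s standard
# and the proportion of trials judged "faster" than the standard.
speeds = np.array([8.5, 9.0, 9.5, 10.0, 10.5, 11.0, 11.5])
p_faster = np.array([0.05, 0.15, 0.35, 0.50, 0.70, 0.85, 0.95])

def psychometric(x, pse, sigma):
    """Cumulative Gaussian: pse is the 50% point, sigma sets the slope."""
    return norm.cdf(x, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(psychometric, speeds, p_faster, p0=(10.0, 1.0))

# One common threshold definition: half the 25%-75% spread of the fitted
# function, reported as a percentage of the standard (a Weber fraction).
threshold = sigma * norm.ppf(0.75)
weber = 100 * threshold / pse
print(f"PSE = {pse:.2f} deg/s, threshold = {threshold:.2f} deg/s ({weber:.1f}%)")
```

With real data, thresholds measured this way under implicit versus explicit standards can then be compared directly, as in the experiments summarized above.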
Mattys, Sven L; Scharenborg, Odette
2014-03-01
This study investigates the extent to which age-related language processing difficulties are due to a decline in sensory processes or to a deterioration of cognitive factors, specifically, attentional control. Two facets of attentional control were examined: inhibition of irrelevant information and divided attention. Younger and older adults were asked to categorize the initial phoneme of spoken syllables ("Was it m or n?"), trying to ignore the lexical status of the syllables. The phonemes were manipulated to range in eight steps from m to n. Participants also did a discrimination task on syllable pairs ("Were the initial sounds the same or different?"). Categorization and discrimination were performed under either divided attention (concurrent visual-search task) or focused attention (no visual task). The results showed that even when the younger and older adults were matched on their discrimination scores: (1) the older adults had more difficulty inhibiting lexical knowledge than did younger adults, (2) divided attention weakened lexical inhibition in both younger and older adults, and (3) divided attention impaired sound discrimination more in older than younger listeners. The results confirm the independent and combined contribution of sensory decline and deficit in attentional control to language processing difficulties associated with aging. The relative weight of these variables and their mechanisms of action are discussed in the context of theories of aging and language. (c) 2014 APA, all rights reserved.
Rodríguez-Gironés, Miguel A.; Trillo, Alejandro; Corcobado, Guadalupe
2013-01-01
The results of behavioural experiments provide important information about the structure and information-processing abilities of the visual system. Nevertheless, if we want to infer from behavioural data how the visual system operates, it is important to know how different learning protocols affect performance and to devise protocols that minimise noise in the response of experimental subjects. The purpose of this work was to investigate how reinforcement schedule and individual variability affect the learning process in a colour discrimination task. Free-flying bumblebees were trained to discriminate between two perceptually similar colours. The target colour was associated with sucrose solution, and the distractor could be associated with water or quinine solution throughout the experiment, or with one substance during the first half of the experiment and the other during the second half. Both acquisition and final performance of the discrimination task (measured as proportion of correct choices) were determined by the choice of reinforcer during the first half of the experiment: regardless of whether bees were trained with water or quinine during the second half of the experiment, bees trained with quinine during the first half learned the task faster and performed better during the whole experiment. Our results confirm that the choice of stimuli used during training affects the rate at which colour discrimination tasks are acquired and show that early contact with a strongly aversive stimulus can be sufficient to maintain high levels of attention during several hours. On the other hand, bees which took more time to decide on which flower to alight were more likely to make correct choices than bees which made fast decisions. This result supports the existence of a trade-off between foraging speed and accuracy, and highlights the importance of measuring choice latencies during behavioural experiments focusing on cognitive abilities. PMID:23951186
Lawton, Teri
2016-01-01
There is an ongoing debate about whether the cause of dyslexia is based on linguistic, auditory, or visual timing deficits. To investigate this issue, three interventions were compared in 58 dyslexics in second grade (7 years old on average): two targeting the temporal dynamics (timing) of the auditory or visual pathways, and a third reading intervention (control group) targeting linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency (both speed and comprehension), phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than found for controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used not only to detect dyslexia early but also to treat it successfully, so that reading problems do not prevent children from readily learning.
Brébion, Gildas; David, Anthony S; Pilowsky, Lyn S; Jones, Hugh
2004-11-01
Verbal and visual recognition tasks were administered to 40 patients with schizophrenia and 40 healthy comparison subjects. The verbal recognition task consisted of discriminating between 16 target words and 16 new words. The visual recognition task consisted of discriminating between 16 target pictures (8 black-and-white and 8 color) and 16 new pictures (8 black-and-white and 8 color). Visual recognition was followed by a spatial context discrimination task in which subjects were required to remember the spatial location of the target pictures at encoding. Results showed that the recognition deficit in patients was similar for verbal and visual material. In both the schizophrenic and healthy groups, men, but not women, obtained better recognition scores for the colored than for the black-and-white pictures. However, men and women similarly benefited from color to reduce spatial context discrimination errors. Patients showed a significant deficit in remembering the spatial location of the pictures, independently of accuracy in remembering the pictures themselves. These data suggest that patients are impaired in the amount of visual information that they can encode. With regard to the perceptual attributes of the stimuli, memory for spatial information appears to be affected, but not processing of color information.
Self-rated imagery and encoding strategies in visual memory.
Berger, G H; Gaunitz, S C
1979-02-01
The value of self-rated vividness of imagery in predicting performance was investigated, taking into account the mnemonic strategies utilized among subjects performing a visual-memory task. Subjects classified as 'good' or 'poor' imagers, according to their scores on the Vividness of Visual Imagery Questionnaire (VVIQ; Marks, 1972), were to detect as rapidly as possible differences between pairs of similar pictures presented consecutively. No coding instructions were given, and the mnemonic strategies used were analysed by studying subjective reports and objective performance measurements. The results indicated that the subjects utilized two main strategies: a detail strategy or an image strategy. The detail strategy was the more efficient. In accordance with a previous study (Berger & Gaunitz, 1977), it was found that the VVIQ did not discriminate between performance by 'good' and 'poor' imagers. However, among subjects who used the image strategy, 'good' imagers performed more rapidly than 'poor' imagers. Self-rated imagery may then have some value in predicting performance among individuals shown to have utilized an image strategy.
Roijendijk, Linsey; Farquhar, Jason; van Gerven, Marcel; Jensen, Ole; Gielen, Stan
2013-01-01
Objective: Covert visual spatial attention is a relatively new task used in brain computer interfaces (BCIs) and little is known about the characteristics which may affect performance in BCI tasks. We investigated whether eccentricity and task difficulty affect alpha lateralization and BCI performance. Approach: We conducted a magnetoencephalography study with 14 participants who performed a covert orientation discrimination task at an easy or difficult stimulus contrast at either a near (3.5°) or far (7°) eccentricity. Task difficulty was manipulated block-wise and subjects were aware of the difficulty level of each block. Main results: Grand average analyses revealed a significantly larger hemispheric lateralization of posterior alpha power in the difficult condition than in the easy condition, while surprisingly no difference was found for eccentricity. The difference between task difficulty levels was significant in the interval between 1.85 s and 2.25 s after cue onset and originated from a stronger decrease in the contralateral hemisphere. No significant effect of eccentricity was found. Additionally, single-trial classification analysis revealed a higher classification rate in the difficult (65.9%) than in the easy task condition (61.1%). No effect of eccentricity was found in classification rate. Significance: Our results indicate that manipulating the difficulty of a task gives rise to variations in alpha lateralization and that using a more difficult task improves covert visual spatial attention BCI performance. The variations in the alpha lateralization could be caused by different factors such as an increased mental effort or a higher visual attentional demand. Further research is necessary to discriminate between them. We did not discover any effect of eccentricity in contrast to results of previous research. PMID:24312477
Dye-enhanced visualization of rat whiskers for behavioral studies.
Rigosa, Jacopo; Lucantonio, Alessandro; Noselli, Giovanni; Fassihi, Arash; Zorzin, Erik; Manzino, Fabrizio; Pulecchi, Francesca; Diamond, Mathew E
2017-06-14
Visualization and tracking of the facial whiskers is required in an increasing number of rodent studies. Although many approaches have been employed, only high-speed videography has proven adequate for measuring whisker motion and deformation during interaction with an object. However, whisker visualization and tracking is challenging for multiple reasons, primary among them the low contrast of the whisker against its background. Here, we demonstrate a fluorescent dye method suitable for visualization of one or more rat whiskers. The process makes the dyed whisker(s) easily visible against a dark background. The coloring does not influence the behavioral performance of rats trained on a vibrissal vibrotactile discrimination task, nor does it affect the whiskers' mechanical properties.
Visual adaptation enhances action sound discrimination.
Barraclough, Nick E; Page, Steve A; Keefe, Bruce D
2017-01-01
Prolonged exposure, or adaptation, to a stimulus in one modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in one modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between two subsequently presented hand action sounds. Discrimination was most enhanced when the visual action "matched" the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by postperceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.
Task-irrelevant emotion facilitates face discrimination learning.
Lorenzino, Martina; Caudek, Corrado
2015-03-01
We understand poorly how the ability to discriminate faces from one another is shaped by visual experience. The purpose of the present study is to determine whether face discrimination learning can be facilitated by facial emotions. To answer this question, we used a task-irrelevant perceptual learning paradigm because it closely mimics the learning processes that, in daily life, occur without a conscious intention to learn and without an attentional focus on specific facial features. We measured face discrimination thresholds before and after training. During the training phase (4 days), participants performed a contrast discrimination task on face images. They were not informed that we introduced (task-irrelevant) subtle variations in the face images from trial to trial. For the Identity group, the task-irrelevant features were variations along a morphing continuum of facial identity. For the Emotion group, the task-irrelevant features were variations along an emotional expression morphing continuum. The Control group did not undergo contrast discrimination learning and only performed the pre-training and post-training tests, with the same temporal gap between them as the other two groups. Results indicate that face discrimination improved, but only for the Emotion group. Participants in the Emotion group, moreover, showed face discrimination improvements also for stimulus variations along the facial identity dimension, even if these (task-irrelevant) stimulus features had not been presented during training. The present results highlight the importance of emotions for face discrimination learning.
Congenital Blindness Leads to Enhanced Vibrotactile Perception
ERIC Educational Resources Information Center
Wan, Catherine Y.; Wood, Amanda G.; Reutens, David C.; Wilson, Sarah J.
2010-01-01
Previous studies have shown that in comparison with the sighted, blind individuals display superior non-visual perceptual abilities and differ in brain organisation. In this study, we investigated the performance of blind and sighted participants on a vibrotactile discrimination task. Thirty-three blind participants were classified into one of…
Concentration of Swiss Elite Orienteers.
ERIC Educational Resources Information Center
Seiler, Roland; Wetzel, Jorg
1997-01-01
A visual discrimination task was used to measure concentration among 43 members of Swiss national orienteering teams. Subjects were above average in the number of target objects dealt with and in duration of continuous concentration. For females only, ranking in orienteering performance was related to quality of concentration (ratio of correct to…
Yoon, Jong H.; Maddock, Richard J.; Rokem, Ariel; Silver, Michael A.; Minzenberg, Michael J.; Ragland, J. Daniel; Carter, Cameron S.
2010-01-01
The neural mechanisms underlying cognitive deficits in schizophrenia remain largely unknown. The gamma-aminobutyric acid (GABA) hypothesis proposes that reduced neuronal GABA concentration and neurotransmission results in cognitive impairments in schizophrenia. However, few in vivo studies have directly examined this hypothesis. We employed magnetic resonance spectroscopy (MRS) at high field to measure visual cortical GABA levels in 13 subjects with schizophrenia and 13 demographically matched healthy control subjects. We found that the schizophrenia group had an approximately 10% reduction in GABA concentration. We further tested the GABA hypothesis by examining the relationship between visual cortical GABA levels and orientation-specific surround suppression (OSSS), a behavioral measure of visual inhibition thought to be dependent on GABAergic synaptic transmission. Previous work has shown that subjects with schizophrenia exhibit reduced OSSS of contrast discrimination (Yoon et al., 2009). For subjects with both MRS and OSSS data (n=16), we found a highly significant positive correlation (r=0.76) between these variables. GABA concentration was not correlated with overall contrast discrimination performance for stimuli without a surround (r=-0.10). These results suggest that a neocortical GABA deficit in subjects with schizophrenia leads to impaired cortical inhibition and that GABAergic synaptic transmission in visual cortex plays a critical role in OSSS. PMID:20220012
Sounds activate visual cortex and improve visual discrimination.
Feng, Wenfeng; Störmer, Viola S; Martinez, Antigona; McDonald, John J; Hillyard, Steven A
2014-07-16
A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound.
NASA Astrophysics Data System (ADS)
Brainard, George C.; Coyle, William; Ayers, Melissa; Kemp, John; Warfield, Benjamin; Maida, James; Bowen, Charles; Bernecker, Craig; Lockley, Steven W.; Hanifin, John P.
2013-11-01
The International Space Station (ISS) uses General Luminaire Assemblies (GLAs) that house fluorescent lamps for illuminating the astronauts' working and living environments. Solid-state light emitting diodes (LEDs) are attractive candidates for replacing the GLAs on the ISS. The advantages of LEDs over conventional fluorescent light sources include lower up-mass, power consumption and heat generation, as well as fewer toxic materials, greater resistance to damage and long lamp life. A prototype Solid-State Lighting Assembly (SSLA) was developed and successfully installed on the ISS. The broad aim of the ongoing work is to test light emitted by prototype SSLAs for supporting astronaut vision and assessing neuroendocrine, circadian, neurobehavioral and sleep effects. Three completed ground-based studies are presented here, including experiments on visual performance, color discrimination, and acute plasma melatonin suppression in cohorts of healthy human subjects under different SSLA light exposure conditions within a high-fidelity replica of the ISS Crew Quarters (CQ). All visual tests were done under indirect daylight at 201 lx, fluorescent room light at 531 lx and 4870 K SSLA light in the CQ at 1266 lx. Visual performance was assessed with numerical verification tests (NVT). NVT data show that there are no significant differences in score (F=0.73, p=0.48) or time (F=0.14, p=0.87) for subjects performing five contrast tests (10%-100%). Color discrimination was assessed with Farnsworth-Munsell 100 Hue tests (FM-100). The FM-100 data showed no significant differences (F=0.01, p=0.99) in color discrimination for indirect daylight, fluorescent room light and 4870 K SSLA light in the CQ. Plasma melatonin suppression data show that there are significant differences (F=29.61, p<0.0001) across the percent change scores of plasma melatonin for five corneal irradiances, ranging from 0 to 405 μW/cm2 of 4870 K SSLA light in the CQ (0-1270 lx).
Risk factors for the health and safety of astronauts include disturbed circadian rhythms and altered sleep-wake patterns. These studies will help determine if SSLA lighting can be used both to support astronaut vision and serve as an in-flight countermeasure for circadian desynchrony, sleep disruption and cognitive performance deficits on the ISS.
Perceptual Learning Selectively Refines Orientation Representations in Early Visual Cortex
Jehee, Janneke F.M.; Ling, Sam; Swisher, Jascha D.; van Bergen, Ruben S.; Tong, Frank
2013-01-01
Although practice has long been known to improve perceptual performance, the neural basis of this improvement in humans remains unclear. Using fMRI in conjunction with a novel signal detection-based analysis, we show that extensive practice selectively enhances the neural representation of trained orientations in the human visual cortex. Twelve observers practiced discriminating small changes in the orientation of a laterally presented grating over 20 or more daily one-hour training sessions. Training on average led to a two-fold improvement in discrimination sensitivity, specific to the trained orientation and the trained location, with minimal improvement found for untrained orthogonal orientations or for orientations presented in the untrained hemifield. We measured the strength of orientation-selective responses in individual voxels in early visual areas (V1–V4) using signal detection measures, both pre- and post-training. Although the overall amplitude of the BOLD response was no greater after training, practice nonetheless specifically enhanced the neural representation of the trained orientation at the trained location. This training-specific enhancement of orientation-selective responses was observed in the primary visual cortex (V1) as well as higher extrastriate visual areas V2–V4, and moreover, reliably predicted individual differences in the behavioral effects of perceptual learning. These results demonstrate that extensive training can lead to targeted functional reorganization of the human visual cortex, refining the cortical representation of behaviorally relevant information. PMID:23175828
Howard, Christina J; Wilding, Robert; Guest, Duncan
2017-02-01
There is mixed evidence that video game players (VGPs) may demonstrate better performance in perceptual and attentional tasks than non-VGPs (NVGPs). The rapid serial visual presentation task is one such case, where observers respond to two successive targets embedded within a stream of serially presented items. We tested light VGPs (LVGPs) and NVGPs on this task. LVGPs were better at correctly identifying second targets whether or not they were also attempting to respond to the first target. This performance benefit seen for LVGPs suggests enhanced visual processing for briefly presented stimuli even with only very moderate game play. Observers were less accurate at discriminating the orientation of a second target within the stream if it occurred shortly after presentation of the first target; that is to say, they were subject to the attentional blink (AB). We find no evidence for any reduction in AB in LVGPs compared with NVGPs.
Object recognition with hierarchical discriminant saliency networks.
Han, Sunhyoung; Vasconcelos, Nuno
2014-01-01
The benefits of integrating attention and object recognition are investigated. While attention is frequently modeled as a pre-processor for recognition, we investigate the hypothesis that attention is an intrinsic component of recognition and vice-versa. This hypothesis is tested with a recognition model, the hierarchical discriminant saliency network (HDSN), whose layers are top-down saliency detectors, tuned for a visual class according to the principles of discriminant saliency. As a model of neural computation, the HDSN has two possible implementations. In a biologically plausible implementation, all layers comply with the standard neurophysiological model of visual cortex, with sub-layers of simple and complex units that implement a combination of filtering, divisive normalization, pooling, and non-linearities. In a convolutional neural network implementation, all layers are convolutional and implement a combination of filtering, rectification, and pooling. The rectification is performed with a parametric extension of the now popular rectified linear units (ReLUs), whose parameters can be tuned for the detection of target object classes. This enables a number of functional enhancements over neural network models that lack a connection to saliency, including optimal feature denoising mechanisms for recognition, modulation of saliency responses by the discriminant power of the underlying features, and the ability to detect both feature presence and absence. In either implementation, each layer has a precise statistical interpretation, and all parameters are tuned by statistical learning. Each saliency detection layer learns more discriminant saliency templates than its predecessors and higher layers have larger pooling fields. This enables the HDSN to simultaneously achieve high selectivity to target object classes and invariance. 
The performance of the network in saliency and object recognition tasks is compared to those of models from the biological and computer vision literatures. This demonstrates benefits for all the functional enhancements of the HDSN, the class tuning inherent to discriminant saliency, and saliency layers based on templates of increasing target selectivity and invariance. Altogether, these experiments suggest that there are non-trivial benefits in integrating attention and recognition.
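Each HDSN layer combines filtering, parametric rectification, and pooling. The stages can be sketched in one dimension as below; this is an illustrative stand-in, and the rectifier's parametric form (which the HDSN tunes per target class by statistical learning) is assumed, not taken from the paper:

```python
def prelu(x, threshold=0.0, neg_slope=0.0):
    # Parametric rectifier with a tunable threshold and negative-side slope.
    # (Illustrative form; the HDSN's exact parameterization differs.)
    s = x - threshold
    return s if s > 0 else neg_slope * s

def hdsn_layer(signal, kernel, threshold=0.0, pool=2):
    # One layer, 1-D for brevity: filter -> rectify -> max-pool.
    n, k = len(signal), len(kernel)
    filtered = [sum(signal[i + j] * kernel[j] for j in range(k))
                for i in range(n - k + 1)]
    rectified = [prelu(v, threshold) for v in filtered]
    return [max(rectified[i:i + pool])
            for i in range(0, len(rectified) - pool + 1, pool)]
```

Stacking such layers, with templates of increasing selectivity and larger pooling fields per layer, is the hierarchical structure the abstract describes.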
Calculation and application of activity discriminants in lead optimization.
Luo, Xincai; Krumrine, Jennifer R; Shenvi, Ashok B; Pierson, M Edward; Bernstein, Peter R
2010-11-01
We present a technique for computing activity discriminants of in vitro (pharmacological, DMPK, and safety) assays and its application to the prediction of in vitro activities of proposed synthetic targets during the lead optimization phase of drug discovery projects. This technique emulates how medicinal chemists perform SAR analysis and activity prediction. The activity discriminants, which are functions of six commonly used medicinal chemistry descriptors, can be interpreted easily by medicinal chemists. Further, visualization with Spotfire allows medicinal chemists to analyze how the query molecule is related to compounds tested previously, and to evaluate easily the relevance of the activity discriminants to the activities of the query molecule. Validation with all compounds synthesized and tested in AstraZeneca Wilmington since 2006 demonstrates that this approach is useful for prioritizing new synthetic targets for synthesis.
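To illustrate the general idea of discriminating activity classes from a small descriptor vector (this is a hypothetical nearest-centroid stand-in, not the paper's actual discriminant functions):

```python
def centroid(rows):
    # Component-wise mean of a set of descriptor vectors.
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def classify(query, actives, inactives):
    # Assign the query molecule to the class with the nearer centroid.
    # (Hypothetical sketch; descriptor choice and weighting are assumptions.)
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    d_act = dist(query, centroid(actives))
    d_inact = dist(query, centroid(inactives))
    return "active" if d_act < d_inact else "inactive"
```

A real workflow of this kind would use the six medicinal-chemistry descriptors the abstract mentions as the vector components and would expose the decision boundary for chemist inspection rather than returning a bare label.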
Kelly, Debbie M; Cook, Robert G
2003-06-01
Three experiments examined the role of contextual information during line orientation and line position discriminations by pigeons (Columba livia) and humans (Homo sapiens). Experiment 1 tested pigeons' performance with these stimuli in a target localization task using texture displays. Experiments 2 and 3 tested pigeons and humans, respectively, with small and large variations of these stimuli in a same-different task. Humans showed a configural superiority effect when tested with displays constructed from large elements but not when tested with the smaller, more densely packed texture displays. The pigeons, in contrast, exhibited a configural inferiority effect when required to discriminate line orientation, regardless of stimulus size. These contrasting results suggest a species difference in the perception and use of features and contextual information in the discrimination of line information.
Toosi, Tahereh; K Tousi, Ehsan; Esteky, Hossein
2017-08-01
Time is an inseparable component of every physical event that we perceive, yet it is not clear how the brain processes time or how the neuronal representation of time affects our perception of events. Here we asked subjects to perform a visual discrimination task while we changed the temporal context in which the stimuli were presented. We collected electroencephalography (EEG) signals in two temporal contexts. In predictable blocks stimuli were presented after a constant delay relative to a visual cue, and in unpredictable blocks stimuli were presented after variable delays relative to the visual cue. Four subsecond delays of 83, 150, 400, and 800 ms were used in the predictable and unpredictable blocks. We observed that predictability modulated the power of prestimulus alpha oscillations in the parieto-occipital sites: alpha power increased in the 300-ms window before stimulus onset in the predictable blocks compared with the unpredictable blocks. This modulation only occurred in the longest delay period, 800 ms, in which predictability also improved the behavioral performance of the subjects. Moreover, learning the temporal context shaped the prestimulus alpha power: modulation of prestimulus alpha power grew during the predictable block and correlated with performance enhancement. These results suggest that the brain is able to learn the subsecond temporal context of stimuli and use this to enhance sensory processing. Furthermore, the neural correlate of this temporal prediction is reflected in the alpha oscillations. NEW & NOTEWORTHY It is not well understood how the uncertainty in the timing of an external event affects its processing, particularly at subsecond scales. Here we demonstrate how a predictable timing scheme improves visual processing. We found that learning the predictable scheme gradually shaped the prestimulus alpha power. 
These findings indicate that the human brain is able to extract implicit subsecond patterns in the temporal context of events. Copyright © 2017 the American Physiological Society.
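Prestimulus alpha power of the kind analyzed in this study is typically estimated as spectral power in the 8-12 Hz band over a short window before stimulus onset. The following is a minimal, hypothetical sketch using a plain periodogram on a simulated 300-ms epoch; real EEG pipelines use Welch averaging, tapering, and artifact rejection, and all parameter values here are illustrative assumptions, not those of the study.

```python
import numpy as np

def band_power(signal, fs, band=(8.0, 12.0)):
    """Mean periodogram power in a frequency band (default: 8-12 Hz alpha)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Hypothetical prestimulus epoch: a 10 Hz rhythm buried in noise.
fs = 250.0                                # sampling rate (Hz)
t = np.arange(0, 0.3, 1.0 / fs)           # 300-ms prestimulus window
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

alpha = band_power(epoch, fs)                      # alpha band power
beta = band_power(epoch, fs, band=(18.0, 30.0))    # comparison band
```

Comparing the alpha estimate across predictable and unpredictable blocks (and across time within a block) is the kind of contrast the study reports.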
Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar
2018-01-01
Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and it was extracted based on the frame difference with Sauvola local adaptive thresholding algorithms. The spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features are used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. This is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm was applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target in appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and computationally efficient compared to state-of-the-art methods. PMID:29438421
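The temporal-saliency step described above (frame differencing followed by Sauvola local adaptive thresholding) can be sketched in a few lines. This is a minimal numpy/scipy illustration, not the authors' implementation; the window size, k, and r parameters are assumed values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_threshold(img, window=15, k=0.2, r=128.0):
    """Sauvola local adaptive threshold: T = m * (1 + k * (s / r - 1)),
    where m and s are the local mean and standard deviation."""
    mean = uniform_filter(img, size=window)
    sq_mean = uniform_filter(img * img, size=window)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    return mean * (1.0 + k * (std / r - 1.0))

def temporal_saliency(prev_frame, curr_frame, window=15):
    """Binary map of moving regions: frame difference, then Sauvola thresholding."""
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    return diff > sauvola_threshold(diff, window=window)

# Hypothetical demo: a bright square shifts a few pixels between frames,
# so only the pixels it vacates and newly covers become salient.
prev = np.zeros((64, 64)); prev[20:30, 20:30] = 200.0
curr = np.zeros((64, 64)); curr[20:30, 24:34] = 200.0
mask = temporal_saliency(prev, curr)
```

In the full pipeline this mask would delimit candidate regions for the spatial-saliency stage (SLIC superpixels plus color and moment features).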
Sleep-dependent consolidation benefits fast transfer of time interval training.
Chen, Lihan; Guo, Lu; Bao, Ming
2017-03-01
A previous study showed that short training (15 min) in explicitly discriminating temporal intervals between two paired auditory beeps, or between two paired tactile taps, can significantly improve observers' ability to classify the perceptual states of visual Ternus apparent motion, whereas training on task-irrelevant sensory properties did not improve visual timing (Chen and Zhou in Exp Brain Res 232(6):1855-1864, 2014). The present study examined the role of 'consolidation' after training on temporally task-irrelevant properties, and whether a pure delay (i.e., blank consolidation) following the pretest of the target task would improve visual interval timing, typified in the visual Ternus display. A pretest-training-posttest procedure was adopted, with discrimination of Ternus apparent motion as the probe. Extended implicit training of timing, in which the time intervals between paired auditory beeps or paired tactile taps were manipulated but the task was to discriminate auditory pitch or tactile intensity, did not produce training benefits (Exps 1 and 3); however, a delay of 24 h after implicit training of timing, which included solving 'Sudoku puzzles,' made the otherwise absent training benefits observable (Exps 2, 4, 5 and 6). The above improvements in performance were not due to a practice effect of Ternus motion (Exp 7). A general 'blank' consolidation period of 24 h also made improvements in visual timing observable (Exp 8). Taken together, the current findings indicate that sleep-dependent consolidation imposed a general effect, potentially triggering and maintaining neuroplastic changes in the intrinsic (timing) network to enhance the ability of time perception.
Deep neural networks for modeling visual perceptual learning.
Wenliang, Li; Seitz, Aaron R
2018-05-23
Understanding visual perceptual learning (VPL) has become increasingly more challenging as new phenomena are discovered with novel stimuli and training paradigms. While existing models aid our knowledge of critical aspects of VPL, the connections shown by these models between behavioral learning and plasticity across different brain areas are typically superficial. Most models explain VPL as readout from simple perceptual representations to decision areas and are not easily adaptable to explain new findings. Here, we show that a well-known instance of deep neural network (DNN), while not designed specifically for VPL, provides a computational model of VPL with enough complexity to be studied at many levels of analyses. After learning a Gabor orientation discrimination task, the DNN model reproduced key behavioral results, including increasing specificity with higher task precision, and also suggested that learning precise discriminations could asymmetrically transfer to coarse discriminations when the stimulus conditions varied. In line with the behavioral findings, the distribution of plasticity moved towards lower layers when task precision increased, and this distribution was also modulated by tasks with different stimulus types. Furthermore, learning in the network units demonstrated close resemblance to extant electrophysiological recordings in monkey visual areas. Altogether, the DNN fulfilled predictions of existing theories regarding specificity and plasticity, and reproduced findings of tuning changes in neurons of the primate visual areas. Although the comparisons were mostly qualitative, the DNN provides a new method of studying VPL and can serve as a testbed for theories and assist in generating predictions for physiological investigations. SIGNIFICANCE STATEMENT Visual perceptual learning (VPL) has been found to cause changes at multiple stages of the visual hierarchy. 
We found that training a deep neural network (DNN) on an orientation discrimination task produced similar behavioral and physiological patterns found in human and monkey experiments. Unlike existing VPL models, the DNN was pre-trained on natural images to reach high performance in object recognition but was not designed specifically for VPL, and yet it fulfilled predictions of existing theories regarding specificity and plasticity, and reproduced findings of tuning changes in neurons of the primate visual areas. When used with care, this unbiased and deep-hierarchical model can provide new ways of studying VPL from behavior to physiology. Copyright © 2018 the authors.
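As a toy stand-in for the kind of Gabor orientation discrimination task used here, one can generate noisy Gabor patches at two orientations and fit a linear classifier. This is a minimal numpy sketch for illustration only; the stimulus parameters, noise level, and logistic-regression learner are assumptions, not the paper's DNN.

```python
import numpy as np

def gabor(theta, size=16, sigma=4.0, freq=0.25):
    """Gabor patch: Gaussian-windowed grating at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * freq * (x * np.cos(theta) + y * np.sin(theta)))
    return envelope * carrier

rng = np.random.default_rng(1)

def make_set(n, noise=0.2):
    """n noisy flattened patches per orientation (30 vs 60 deg), with labels."""
    thetas = [np.deg2rad(30), np.deg2rad(60)]
    X = np.stack([gabor(t).ravel() + noise * rng.standard_normal(16 * 16)
                  for t in thetas for _ in range(n)])
    y = np.repeat([0, 1], n)
    return X, y

# "Perceptual learning" in miniature: logistic regression by gradient descent.
X_tr, y_tr = make_set(100)
w = np.zeros(X_tr.shape[1])
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-X_tr @ w))          # predicted class probabilities
    w -= 0.1 * X_tr.T @ (p - y_tr) / len(y_tr)   # logistic-loss gradient step

X_te, y_te = make_set(100)
accuracy = np.mean(((X_te @ w) > 0) == (y_te == 1))
```

A coarse 30-degree separation like this is easily learned; the specificity effects the paper studies emerge when the orientation difference is made small and transfer is probed at untrained orientations.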
Kato, Shigeki; Kuramochi, Masahito; Kobayashi, Kenta; Fukabori, Ryoji; Okada, Kana; Uchigashima, Motokazu; Watanabe, Masahiko; Tsutsui, Yuji; Kobayashi, Kazuto
2011-11-23
The dorsal striatum receives converging excitatory inputs from diverse brain regions, including the cerebral cortex and the intralaminar/midline thalamic nuclei, and mediates learning processes contributing to instrumental motor actions. However, the roles of each striatal input pathway in these learning processes remain uncertain. We developed a novel strategy to target specific neural pathways and applied this strategy for studying behavioral roles of the pathway originating from the parafascicular nucleus (PF) and projecting to the dorsolateral striatum. A highly efficient retrograde gene transfer vector encoding the recombinant immunotoxin (IT) receptor was injected into the dorsolateral striatum in mice to express the receptor in neurons innervating the striatum. IT treatment into the PF of the vector-injected animals caused a selective elimination of neurons of the PF-derived thalamostriatal pathway. The elimination of this pathway impaired the response selection accuracy and delayed the motor response in the acquisition of a visual cue-dependent discrimination task. When the pathway elimination was induced after learning acquisition, it disturbed the response accuracy in the task performance with no apparent change in the response time. The elimination did not influence spontaneous locomotion, methamphetamine-induced hyperactivity, and motor skill learning that demand the function of the dorsal striatum. These results demonstrate that thalamostriatal projection derived from the PF plays essential roles in the acquisition and execution of discrimination learning in response to sensory stimulus. The temporal difference in the pathway requirement for visual discrimination suggests a stage-specific role of thalamostriatal pathway in the modulation of response time of learned motor actions.
Figure-ground discrimination in the avian brain: the nucleus rotundus and its inhibitory complex.
Acerbo, Martin J; Lazareva, Olga F; McInnerney, John; Leiker, Emily; Wasserman, Edward A; Poremba, Amy
2012-10-01
In primates, neurons sensitive to figure-ground status are located in striate cortex (area V1) and extrastriate cortex (area V2). Although much is known about the anatomical structure and connectivity of the avian visual pathway, the functional organization of the avian brain remains largely unexplored. To pinpoint the areas associated with figure-ground segregation in the avian brain, we used a radioactively labeled glucose analog to compare differences in glucose uptake after figure-ground, color, and shape discriminations. We also included a control group that received food on a variable-interval schedule, but was not required to learn a visual discrimination. Although the discrimination task depended on group assignment, the stimulus displays were identical for all three experimental groups, ensuring that all animals were exposed to the same visual input. Our analysis concentrated on the primary thalamic nucleus associated with visual processing, the nucleus rotundus (Rt), and two nuclei providing regulatory feedback, the pretectum (PT) and the nucleus subpretectalis/interstitio-pretecto-subpretectalis complex (SP/IPS). We found that figure-ground discrimination was associated with strong and nonlateralized activity of Rt and SP/IPS, whereas color discrimination produced strong and lateralized activation in Rt alone. Shape discrimination was associated with lower activity of Rt than in the control group. Taken together, our results suggest that figure-ground discrimination is associated with Rt and that SP/IPS may be a main source of inhibitory control. Thus, figure-ground segregation in the avian brain may occur earlier than in the primate brain. Copyright © 2012 Elsevier Ltd. All rights reserved.
The Mechanisms Underlying the ASD Advantage in Visual Search.
Kaldy, Zsuzsa; Giserman, Ivy; Carter, Alice S; Blaser, Erik
2016-05-01
A number of studies have demonstrated that individuals with autism spectrum disorders (ASDs) are faster or more successful than typically developing control participants at various visual-attentional tasks (for reviews, see Dakin and Frith in Neuron 48:497-507, 2005; Simmons et al. in Vis Res 49:2705-2739, 2009). This "ASD advantage" was first identified in the domain of visual search by Plaisted et al. (J Child Psychol Psychiatry 39:777-783, 1998). Here we survey the findings of visual search studies from the past 15 years that contrasted the performance of individuals with and without ASD. Although there are some minor caveats, the overall consensus is that-across development and a broad range of symptom severity-individuals with ASD reliably outperform controls on visual search. The etiology of the ASD advantage has not been formally specified, but has been commonly attributed to 'enhanced perceptual discrimination', a superior ability to visually discriminate between targets and distractors in such tasks (e.g. O'Riordan in Cognition 77:81-96, 2000). As well, there is considerable evidence for impairments of the attentional network in ASD (for a review, see Keehn et al. in J Child Psychol Psychiatry 37:164-183, 2013). We discuss some recent results from our laboratory that support an attentional, rather than perceptual explanation for the ASD advantage in visual search. We speculate that this new conceptualization may offer a better understanding of some of the behavioral symptoms associated with ASD, such as over-focusing and restricted interests.
Gould, R W; Dencker, D; Grannan, M; Bubser, M; Zhan, X; Wess, J; Xiang, Z; Locuson, C; Lindsley, C W; Conn, P J; Jones, C K
2015-10-21
The M1 muscarinic acetylcholine receptor (mAChR) subtype has been implicated in the underlying mechanisms of learning and memory and represents an important potential pharmacotherapeutic target for the cognitive impairments observed in neuropsychiatric disorders such as schizophrenia. Patients with schizophrenia show impairments in top-down processing involving conflict between sensory-driven and goal-oriented processes that can be modeled in preclinical studies using touchscreen-based cognition tasks. The present studies used a touchscreen visual pairwise discrimination task in which mice discriminated between a less salient and a more salient stimulus to assess the influence of the M1 mAChR on top-down processing. M1 mAChR knockout (M1 KO) mice showed a slower rate of learning, evidenced by slower increases in accuracy over 12 consecutive days, and required more days to acquire (achieve 80% accuracy) this discrimination task compared to wild-type mice. In addition, the M1 positive allosteric modulator BQCA enhanced the rate of learning this discrimination in wild-type, but not in M1 KO, mice when BQCA was administered daily prior to testing over 12 consecutive days. Importantly, in discriminations between stimuli of equal salience, M1 KO mice did not show impaired acquisition and BQCA did not affect the rate of learning or acquisition in wild-type mice. These studies are the first to demonstrate performance deficits in M1 KO mice using touchscreen cognitive assessments and enhanced rate of learning and acquisition in wild-type mice through M1 mAChR potentiation when the touchscreen discrimination task involves top-down processing. Taken together, these findings provide further support for M1 potentiation as a potential treatment for the cognitive symptoms associated with schizophrenia.
ERIC Educational Resources Information Center
Giersch, Anne; Glaser, Bronwyn; Pasca, Catherine; Chabloz, Mélanie; Debbané, Martin; Eliez, Stephan
2014-01-01
Individuals with 22q11.2 deletion syndrome (22q11.2DS) are impaired at exploring visual information in space; however, not much is known about visual form discrimination in the syndrome. Thirty-five individuals with 22q11.2DS and 41 controls completed a form discrimination task with global forms made up of local elements. Affected individuals…
Visual Aversive Learning Compromises Sensory Discrimination.
Shalev, Lee; Paz, Rony; Avidan, Galia
2018-03-14
Aversive learning is thought to modulate perceptual thresholds, which can lead to overgeneralization. However, it remains undetermined whether this modulation is domain specific or a general effect. Moreover, despite the unique role of the visual modality in human perception, it is unclear whether this aspect of aversive learning exists in this modality. The current study was designed to examine the effect of visual aversive outcomes on the perception of basic visual and auditory features. We tested the ability of healthy participants, both males and females, to discriminate between neutral stimuli, before and after visual learning. In each experiment, neutral stimuli were associated with aversive images in an experimental group and with neutral images in a control group. Participants demonstrated a deterioration in discrimination (higher discrimination thresholds) only after aversive learning. This deterioration was measured for both auditory (tone frequency) and visual (orientation and contrast) features. The effect was replicated in five different experiments and lasted for at least 24 h. fMRI neural responses and pupil size were also measured during learning. We showed an increase in neural activations in the anterior cingulate cortex, insula, and amygdala during aversive compared with neutral learning. Interestingly, the early visual cortex showed increased brain activity during aversive compared with neutral context trials, with identical visual information. Our findings imply the existence of a central multimodal mechanism, which modulates early perceptual properties, following exposure to negative situations. Such a mechanism could contribute to abnormal responses that underlie anxiety states, even in new and safe environments. SIGNIFICANCE STATEMENT Using a visual aversive-learning paradigm, we found deteriorated discrimination abilities for visual and auditory stimuli that were associated with visual aversive stimuli. 
We showed increased neural activations in the anterior cingulate cortex, insula, and amygdala during aversive learning, compared with neutral learning. Importantly, similar findings were also evident in the early visual cortex during trials with aversive/neutral context, but with identical visual information. The demonstration of this phenomenon in the visual modality is important, as it provides support to the notion that aversive learning can influence perception via a central mechanism, independent of input modality. Given the dominance of the visual system in human perception, our findings hold relevance to daily life, as well as imply a potential etiology for anxiety disorders. Copyright © 2018 the authors.
Bublitz, Alexander; Weinhold, Severine R.; Strobel, Sophia; Dehnhardt, Guido; Hanke, Frederike D.
2017-01-01
Octopuses (Octopus vulgaris) are generally considered to possess extraordinary cognitive abilities including the ability to successfully perform in a serial reversal learning task. During reversal learning, an animal is presented with a discrimination problem and after reaching a learning criterion, the signs of the stimuli are reversed: the former positive becomes the negative stimulus and vice versa. If an animal improves its performance over reversals, it is ascribed advanced cognitive abilities. Reversal learning has been tested in octopus in a number of studies. However, the experimental procedures adopted in these studies involved pre-training on the new positive stimulus after a reversal, strong negative reinforcement or might have enabled secondary cueing by the experimenter. These procedures could have all affected the outcome of reversal learning. Thus, in this study, serial visual reversal learning was revisited in octopus. We trained four common octopuses (O. vulgaris) to discriminate between 2-dimensional stimuli presented on a monitor in a simultaneous visual discrimination task and reversed the signs of the stimuli each time the animals reached the learning criterion of ≥80% in two consecutive sessions. The animals were trained using operant conditioning techniques including a secondary reinforcer, a rod that was pushed up and down the feeding tube, which signaled the correctness of a response and preceded the subsequent primary reinforcement of food. The experimental protocol did not involve negative reinforcement. One animal completed four reversals and showed progressive improvement, i.e., it decreased its errors to criterion the more reversals it experienced. This animal developed a generalized response strategy. In contrast, another animal completed only one reversal, whereas two animals did not learn to reverse during the first reversal. 
In conclusion, some octopus individuals can learn to reverse in a visual task demonstrating behavioral flexibility even with a refined methodology. PMID:28223940
Lawton, Teri
2016-01-01
There is an ongoing debate about whether the cause of dyslexia is based on linguistic, auditory, or visual timing deficits. To investigate this issue, three interventions were compared in 58 dyslexics in second grade (7 years old on average): two targeting the temporal dynamics (timing) of either the auditory or visual pathways, and a third reading intervention (control group) targeting linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency (both speed and comprehension), phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than found for controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used not only to detect dyslexia early but also to treat it successfully, so that reading problems do not prevent children from readily learning. PMID:27551263
Visual discrimination of local surface structure: slant, tilt, and curvedness.
Norman, J Farley; Todd, James T; Norman, Hideko F; Clayton, Anna Marie; McBride, T Ryan
2006-03-01
In four experiments, observers were required to discriminate interval or ordinal differences in slant, tilt, or curvedness between designated probe points on randomly shaped curved surfaces defined by shading, texture, and binocular disparity. The results reveal that discrimination thresholds for judgments of slant or tilt typically range between 4 degrees and 10 degrees; that judgments of one component are unaffected by simultaneous variations in the other; and that the individual thresholds for either the slant or tilt components of orientation are approximately equal to those obtained for judgments of the total orientation difference between two probed regions. Performance was much worse, however, for judgments of curvedness, and these judgments were significantly impaired when there were simultaneous variations in the shape index parameter of curvature.
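The curvedness and shape-index parameters referred to here come from Koenderink's decomposition of local surface curvature into an amount (curvedness) and a type (shape index). A short sketch, using one common sign convention (conventions vary across papers):

```python
import numpy as np

def shape_descriptors(k1, k2):
    """Koenderink curvedness and shape index from principal curvatures k1 >= k2.

    curvedness = sqrt((k1^2 + k2^2) / 2)  -- how strongly curved the point is
    shape index in [-1, 1]: +1 convex sphere, +0.5 convex cylinder,
    0 saddle, -1 concave sphere (under this convention).
    """
    curvedness = np.sqrt((k1**2 + k2**2) / 2.0)
    shape_index = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
    return curvedness, shape_index

# A convex cylinder, a symmetric saddle, and a convex sphere.
c_cyl, s_cyl = shape_descriptors(1.0, 0.0)
c_sad, s_sad = shape_descriptors(1.0, -1.0)
c_sph, s_sph = shape_descriptors(1.0, 1.0)
```

The arctan2 form handles umbilic points (k1 = k2), where the plain arctan ratio would divide by zero.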
Advanced Parkinson disease patients have impairment in prosody processing.
Albuquerque, Luisa; Martins, Maurício; Coelho, Miguel; Guedes, Leonor; Ferreira, Joaquim J; Rosa, Mário; Martins, Isabel Pavão
2016-01-01
The ability to recognize and interpret emotions in others is a crucial prerequisite of adequate social behavior. Impairments in emotion processing have been reported from the early stages of Parkinson's disease (PD). This study aims to characterize emotion recognition in advanced Parkinson's disease (APD) candidates for deep-brain stimulation and to compare emotion recognition abilities in visual and auditory domains. APD patients, defined as those with levodopa-induced motor complications (N = 42), and healthy controls (N = 43) matched by gender, age, and educational level, undertook the Comprehensive Affect Testing System (CATS), a battery that evaluates recognition of seven basic emotions (happiness, sadness, anger, fear, surprise, disgust, and neutral) on facial expressions and four emotions on prosody (happiness, sadness, anger, and fear). APD patients were assessed during the "ON" state. Group performance was compared with independent-samples t tests. Compared to controls, APD had significantly lower scores on the discrimination and naming of emotions in prosody, and visual discrimination of neutral faces, but no significant differences in visual emotional tasks. The contrasting performance in emotional processing between visual and auditory stimuli suggests that APD candidates for surgery have either a selective difficulty in recognizing emotions in prosody or a general defect in prosody processing. Studies investigating early-stage PD, and the effect of subcortical lesions in prosody processing, favor the latter interpretation. Further research is needed to understand these deficits in emotional prosody recognition and their possible contribution to later behavioral or neuropsychiatric manifestations of PD.
Intermittent regime of brain activity at the early, bias-guided stage of perceptual learning.
Nikolaev, Andrey R; Gepshtein, Sergei; van Leeuwen, Cees
2016-11-01
Perceptual learning improves visual performance. Among the plausible mechanisms of learning, reduction of perceptual bias has been studied the least. Perceptual bias may compensate for lack of stimulus information, but excessive reliance on bias diminishes visual discriminability. We investigated the time course of bias in a perceptual grouping task and studied the associated cortical dynamics in spontaneous and evoked EEG. Participants reported the perceived orientation of dot groupings in ambiguous dot lattices. Performance improved over a 1-hr period as indicated by the proportion of trials in which participants preferred dot groupings favored by dot proximity. The proximity-based responses were compromised by perceptual bias: Vertical groupings were sometimes preferred to horizontal ones, independent of dot proximity. In the evoked EEG activity, greater amplitude of the N1 component for horizontal than vertical responses indicated that the bias was most prominent in conditions of reduced visual discriminability. The prominence of bias decreased in the course of the experiment. Although the bias was still prominent, prestimulus activity was characterized by an intermittent regime of alternating modes of low and high alpha power. Responses were more biased in the former mode, indicating that perceptual bias was deployed actively to compensate for stimulus uncertainty. Thus, early stages of perceptual learning were characterized by episodes of greater reliance on prior visual preferences, alternating with episodes of receptivity to stimulus information. In the course of learning, the former episodes disappeared, and biases reappeared only infrequently.
Neural correlates of face gender discrimination learning.
Su, Junzhu; Tan, Qingleng; Fang, Fang
2013-04-01
Using combined psychophysics and event-related potentials (ERPs), we investigated the effect of perceptual learning on face gender discrimination and probe the neural correlates of the learning effect. Human subjects were trained to perform a gender discrimination task with male or female faces. Before and after training, they were tested with the trained faces and other faces with the same and opposite genders. ERPs responding to these faces were recorded. Psychophysical results showed that training significantly improved subjects' discrimination performance and the improvement was specific to the trained gender, as well as to the trained identities. The training effect indicates that learning occurs at two levels-the category level (gender) and the exemplar level (identity). ERP analyses showed that the gender and identity learning was associated with the N170 latency reduction at the left occipital-temporal area and the N170 amplitude reduction at the right occipital-temporal area, respectively. These findings provide evidence for the facilitation model and the sharpening model on neuronal plasticity from visual experience, suggesting a faster processing speed and a sparser representation of face induced by perceptual learning.
The effect of acute sleep deprivation on visual evoked potentials in professional drivers.
Jackson, Melinda L; Croft, Rodney J; Owens, Katherine; Pierce, Robert J; Kennedy, Gerard A; Crewther, David; Howard, Mark E
2008-09-01
Previous studies have demonstrated that as little as 18 hours of sleep deprivation can cause deleterious effects on performance. It has also been suggested that sleep deprivation can cause a "tunnel-vision" effect, in which attention is restricted to the center of the visual field. The current study aimed to replicate these behavioral effects and to examine the electrophysiological underpinnings of these changes. Repeated-measures experimental study. University laboratory. Nineteen professional drivers (1 woman; mean age = 45.3 +/- 9.1 years). Two experimental sessions were performed; one following 27 hours of sleep deprivation and the other following a normal night of sleep, with control for circadian effects. A tunnel-vision task (central versus peripheral visual discrimination) and a standard checkerboard-viewing task were performed while 32-channel EEG was recorded. For the tunnel-vision task, sleep deprivation resulted in an overall slowing of reaction times and increased errors of omission for both peripheral and foveal stimuli (P < 0.05). These changes were related to reduced P300 amplitude (indexing cognitive processing) but not measures of early visual processing. No evidence was found for an interaction effect between sleep deprivation and visual-field position, either in terms of behavior or electrophysiological responses. Slower processing of the sustained parvocellular visual pathway was demonstrated. These findings suggest that performance deficits on visual tasks during sleep deprivation are due to higher cognitive processes rather than early visual processing. Sleep deprivation may differentially impair processing of more-detailed visual information. Features of the study design (eg, visual angle, duration of sleep deprivation) may influence whether peripheral visual-field neglect occurs.
Colour processing in complex environments: insights from the visual system of bees
Dyer, Adrian G.; Paulk, Angelique C.; Reser, David H.
2011-01-01
Colour vision enables animals to detect and discriminate differences in chromatic cues independent of brightness. How the bee visual system manages this task is of interest for understanding information processing in miniaturized systems, as well as the relationship between bee pollinators and flowering plants. Bees can quickly discriminate dissimilar colours, but can also slowly learn to discriminate very similar colours, raising the question as to how the visual system can support this, or whether it is simply a learning and memory operation. We discuss the detailed neuroanatomical layout of the brain, identify probable brain areas for colour processing, and suggest that there may be multiple systems in the bee brain that mediate either coarse or fine colour discrimination ability in a manner dependent upon individual experience. These multiple colour pathways have been identified along both functional and anatomical lines in the bee brain, providing us with some insights into how the brain may operate to support complex colour discrimination behaviours. PMID:21147796
Some distinguishing characteristics of contour and texture phenomena in images
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.
1992-01-01
The development of generalized contour/texture discrimination techniques is a central element necessary for machine vision recognition and interpretation of arbitrary images. Here, the visual perception of texture, selected studies of texture analysis in machine vision, and diverse small samples of contour and texture are all used to provide insights into the fundamental characteristics of contour and texture. From these, an experimental discrimination scheme is developed and tested on a battery of natural images. Studies of the visual perception of texture define fine texture as a subclass that is interpreted as shading and is distinct from coarse figural-similarity textures; they also place the smallest scale for contour/texture discrimination at eight to nine visual acuity units. Three contour/texture discrimination parameters were found to be moderately successful at this scale of discrimination: (1) lightness change in a blurred version of the image, (2) change in lightness change in the original image, and (3) percent change in edge counts relative to local maximum.
A manual and an automatic TERS based virus discrimination
NASA Astrophysics Data System (ADS)
Olschewski, Konstanze; Kämmer, Evelyn; Stöckel, Stephan; Bocklitz, Thomas; Deckert-Gaudig, Tanja; Zell, Roland; Cialla-May, Dana; Weber, Karina; Deckert, Volker; Popp, Jürgen
2015-02-01
Rapid techniques for virus identification are more relevant today than ever. Conventional virus detection and identification strategies generally rest upon various microbiological methods and genomic approaches, which are not suited for the analysis of single virus particles. In contrast, the highly sensitive spectroscopic technique tip-enhanced Raman spectroscopy (TERS) allows the characterisation of biological nano-structures like virions on a single-particle level. In this study, the feasibility of TERS in combination with chemometrics to discriminate two pathogenic viruses, Varicella-zoster virus (VZV) and Porcine teschovirus (PTV), was investigated. In a first step, chemometric methods transformed the spectral data in such a way that a rapid visual discrimination of the two examined viruses was enabled. In a further step, these methods were utilised to perform an automatic quality rating of the measured spectra. Spectra that passed this test were eventually used to calculate a classification model, through which a successful discrimination of the two viral species based on TERS spectra of single virus particles was also realised, with a classification accuracy of 91%. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr07033j
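The classification step described above (assigning single-particle spectra to a species) can be sketched with synthetic data. Everything below is an illustrative stand-in: the simulated spectra, the peak positions, and the nearest-centroid classifier are assumptions, not the paper's actual chemometric model.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_spectra(n, peak_bin):
    # Synthetic stand-in for TERS spectra: 200 wavenumber bins of
    # baseline noise plus one species-specific Raman band.
    s = rng.normal(0.0, 0.1, size=(n, 200))
    s[:, peak_bin] += 1.0
    return s

# Two simulated "species" with bands at different wavenumbers.
train = np.vstack([make_spectra(30, 50), make_spectra(30, 120)])
labels = np.array([0] * 30 + [1] * 30)
test = np.vstack([make_spectra(10, 50), make_spectra(10, 120)])
truth = np.array([0] * 10 + [1] * 10)

# Nearest-centroid classification: assign each test spectrum to the
# closer class-mean spectrum (a minimal chemometric model).
centroids = np.stack([train[labels == c].mean(axis=0) for c in (0, 1)])
dists = ((test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
accuracy = (dists.argmin(axis=1) == truth).mean()
print(f"classification accuracy: {accuracy:.2f}")
```

A real pipeline would add the automatic spectral quality rating before training and would likely use a stronger model (e.g., PCA followed by a discriminant classifier), but the train-then-predict structure is the same.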
VFMA: Topographic Analysis of Sensitivity Data From Full-Field Static Perimetry
Weleber, Richard G.; Smith, Travis B.; Peters, Dawn; Chegarnov, Elvira N.; Gillespie, Scott P.; Francis, Peter J.; Gardiner, Stuart K.; Paetzold, Jens; Dietzsch, Janko; Schiefer, Ulrich; Johnson, Chris A.
2015-01-01
Purpose: To analyze static visual field sensitivity with topographic models of the hill of vision (HOV), and to characterize several visual function indices derived from the HOV volume. Methods: A software application, Visual Field Modeling and Analysis (VFMA), was developed for static perimetry data visualization and analysis. Three-dimensional HOV models were generated for 16 healthy subjects and 82 retinitis pigmentosa patients. Volumetric visual function indices, which are measures of quantity and comparable regardless of perimeter test pattern, were investigated. Cross-validation, reliability, and cross-sectional analyses were performed to assess this methodology and compare the volumetric indices to conventional mean sensitivity and mean deviation. Floor effects were evaluated by computer simulation. Results: Cross-validation yielded an overall R2 of 0.68 and index of agreement of 0.89, which were consistent among subject groups, indicating good accuracy. Volumetric and conventional indices were comparable in terms of test–retest variability and discriminability among subject groups. Simulated floor effects did not negatively impact the repeatability of any index, but large floor changes altered the discriminability for regional volumetric indices. Conclusions: VFMA is an effective tool for clinical and research analyses of static perimetry data. Topographic models of the HOV aid the visualization of field defects, and topographically derived indices quantify the magnitude and extent of visual field sensitivity. Translational Relevance: VFMA assists with the interpretation of visual field data from any perimetric device and any test location pattern. Topographic models and volumetric indices are suitable for diagnosis, monitoring of field loss, patient counseling, and endpoints in therapeutic trials. PMID:25938002
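The volumetric indices described above integrate modeled sensitivity over visual-field area. A minimal sketch follows, assuming a Gaussian hill of vision sampled on a regular grid; the grid spacing, peak sensitivity, and Gaussian shape are illustrative choices, not VFMA's actual model.

```python
import numpy as np

# Illustrative hill of vision: sensitivity (dB) on a 1-degree grid
# spanning the central 60 degrees, peaked at fixation.
x = np.linspace(-30.0, 30.0, 61)
y = np.linspace(-30.0, 30.0, 61)
xx, yy = np.meshgrid(x, y)
sens = 35.0 * np.exp(-(xx**2 + yy**2) / (2.0 * 15.0**2))

# Volumetric index: volume under the modeled surface (dB * deg^2),
# approximated here by a Riemann sum over grid cells. Because it
# integrates over area, it is comparable across perimeter test
# patterns, unlike a plain mean over tested locations.
dx = x[1] - x[0]
volume = sens.sum() * dx * dx
print(f"hill-of-vision volume: {volume:.0f} dB*deg^2")
```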
Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.
Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo
2013-02-16
We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.
A strategy to optimize CT pediatric dose with a visual discrimination model
NASA Astrophysics Data System (ADS)
Gutierrez, Daniel; Gudinchet, François; Alamo-Maestre, Leonor T.; Bochud, François O.; Verdun, Francis R.
2008-03-01
Technological developments of computed tomography (CT) have led to a drastic increase of its clinical utilization, creating concerns about patient exposure. To better control dose to patients, we propose a methodology to find an objective compromise between dose and image quality by means of a visual discrimination model. A GE LightSpeed-Ultra scanner was used to perform the acquisitions. A QRM 3D low contrast resolution phantom (QRM, Germany) was scanned using CTDIvol values in the range of 1.7 to 103 mGy. Raw data obtained with the highest CTDIvol were afterwards processed to simulate dose reductions by white noise addition. Noise realism of the simulations was verified by comparing normalized noise power spectrum (NNPS) shapes and amplitudes, as well as standard deviation measurements. Patient images were acquired using the Diagnostic Reference Levels (DRL) proposed in Switzerland. Dose reduction was then simulated, as for the QRM phantom, to obtain five different CTDIvol levels, down to 3.0 mGy. Image quality of phantom images was assessed with the Sarnoff JNDmetrix visual discrimination model and compared to an assessment made by means of the ROC methodology, taken as a reference. For patient images a similar approach was taken but using as reference the Visual Grading Analysis (VGA) method. A relationship between Sarnoff JNDmetrix and ROC results was established for low contrast detection in phantom images, demonstrating that the Sarnoff JNDmetrix can be used for qualification of images with highly correlated noise. Patient image qualification showed a threshold of conspicuity loss only for children over 35 kg.
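The white-noise dose-reduction simulation can be sketched under the standard first-order assumption that quantum-noise standard deviation scales as 1/sqrt(dose). The sigma and dose values below are illustrative, not the study's acquisition parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_dose_reduction(image, sigma0, d0, d_target):
    # Simulate a lower-dose CT image by white-noise addition, assuming
    # quantum-noise std scales as 1/sqrt(dose). The added noise then
    # needs std sigma0 * sqrt(d0/d_target - 1) so total noise matches
    # the target dose level (a first-order model; real CT noise is
    # spatially correlated, which this sketch ignores).
    sigma_add = sigma0 * np.sqrt(d0 / d_target - 1.0)
    return image + rng.normal(0.0, sigma_add, size=image.shape)

# Full-dose noise of 10 HU at 103 mGy, simulated down to quarter dose:
full = rng.normal(0.0, 10.0, size=(256, 256))
low = simulate_dose_reduction(full, sigma0=10.0, d0=103.0, d_target=25.75)
print(f"simulated low-dose noise: {low.std():.1f} HU (expected ~20)")
```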
Regini, Elisa; Mariscotti, Giovanna; Durando, Manuela; Ghione, Gianluca; Luparia, Andrea; Campanino, Pier Paolo; Bianchi, Caterina Chiara; Bergamasco, Laura; Fonio, Paolo; Gandini, Giovanni
2014-10-01
This study was done to assess breast density on digital mammography and digital breast tomosynthesis according to the visual Breast Imaging Reporting and Data System (BI-RADS) classification, to compare visual assessment with Quantra software for automated density measurement, and to establish the role of the software in clinical practice. We analysed 200 digital mammograms performed in 2D and 3D modality, 100 of which positive for breast cancer and 100 negative. Radiological density was assessed with the BI-RADS classification; a Quantra density cut-off value was sought on the 2D images only to discriminate between BI-RADS categories 1-2 and BI-RADS 3-4. Breast density was correlated with age, use of hormone therapy, and increased risk of disease. The agreement between the 2D and 3D assessments of BI-RADS density was high (K 0.96). A cut-off value of 21% is that which allows us to best discriminate between BI-RADS categories 1-2 and 3-4. Breast density was negatively correlated to age (r = -0.44) and positively to use of hormone therapy (p = 0.0004). Quantra density was higher in breasts with cancer than in healthy breasts. There is no clear difference between the visual assessments of density on 2D and 3D images. Use of the automated system requires the adoption of a cut-off value (set at 21%) to effectively discriminate BI-RADS 1-2 and 3-4, and could be useful in clinical practice.
Further Development of Measures of Early Math Performance for Preschoolers
ERIC Educational Resources Information Center
VanDerHeyden, Amanda M.; Broussard, Carmen; Cooley, Amanda
2006-01-01
The purpose of this study was to examine the progress monitoring and screening accuracy for a set of curriculum-based measures (CBM) of early mathematics skills. Measures included counting objects, selecting numbers, naming numbers, counting, and visual discrimination. Measures were designed to be administered with preschoolers in a short period…
Auditory-Visual Intermodal Matching of Small Numerosities in 6-Month-Old Infants
ERIC Educational Resources Information Center
Kobayashi, Tessei; Hiraki, Kazuo; Hasegawa, Toshikazu
2005-01-01
Recent studies have reported that preverbal infants are able to discriminate between numerosities of sets presented within a particular modality. There is still debate, however, over whether they are able to perform intermodal numerosity matching, i.e. to relate numerosities of sets presented with different sensory modalities. The present study…
How Attention Affects Spatial Resolution
Carrasco, Marisa; Barbot, Antoine
2015-01-01
We summarize and discuss a series of psychophysical studies on the effects of spatial covert attention on spatial resolution, our ability to discriminate fine patterns. Heightened resolution is beneficial in most, but not all, visual tasks. We show how endogenous attention (voluntary, goal driven) and exogenous attention (involuntary, stimulus driven) affect performance on a variety of tasks mediated by spatial resolution, such as visual search, crowding, acuity, and texture segmentation. Exogenous attention is an automatic mechanism that increases resolution regardless of whether it helps or hinders performance. In contrast, endogenous attention flexibly adjusts resolution to optimize performance according to task demands. We illustrate how psychophysical studies can reveal the underlying mechanisms of these effects and allow us to draw linking hypotheses with known neurophysiological effects of attention. PMID:25948640
Altering sensorimotor feedback disrupts visual discrimination of facial expressions.
Wood, Adrienne; Lupyan, Gary; Sherrin, Steven; Niedenthal, Paula
2016-08-01
Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual-and not just conceptual-processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.
Variability in visual working memory ability limits the efficiency of perceptual decision making.
Ester, Edward F; Ho, Tiffany C; Brown, Scott D; Serences, John T
2014-04-02
The ability to make rapid and accurate decisions based on limited sensory information is a critical component of visual cognition. Available evidence suggests that simple perceptual discriminations are based on the accumulation and integration of sensory evidence over time. However, the memory system(s) mediating this accumulation are unclear. One candidate system is working memory (WM), which enables the temporary maintenance of information in a readily accessible state. Here, we show that individual variability in WM capacity is strongly correlated with the speed of evidence accumulation in speeded two-alternative forced choice tasks. This relationship generalized across different decision-making tasks, and could not be easily explained by variability in general arousal or vigilance. Moreover, we show that performing a difficult discrimination task while maintaining a concurrent memory load has a deleterious effect on the latter, suggesting that WM storage and decision making are directly linked.
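The evidence-accumulation account above can be illustrated with a minimal random-walk (drift-diffusion style) simulation; the drift rates, threshold, and noise level below are arbitrary values chosen for illustration, not parameters from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def decision_time(drift, threshold=30.0, noise=1.0, max_steps=100_000):
    # One sequential-sampling trial: noisy evidence accumulates until
    # it crosses +threshold or -threshold (two-alternative choice).
    x, t = 0.0, 0
    while abs(x) < threshold and t < max_steps:
        x += drift + rng.normal(0.0, noise)
        t += 1
    return t

# A higher drift rate (faster evidence accumulation, which the study
# links to higher WM capacity) should shorten mean decision times.
slow = np.mean([decision_time(0.05) for _ in range(200)])
fast = np.mean([decision_time(0.20) for _ in range(200)])
print(f"mean steps: low drift {slow:.0f}, high drift {fast:.0f}")
```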
Display characterization by eye: contrast ratio and discrimination throughout the grayscale
NASA Astrophysics Data System (ADS)
Gille, Jennifer; Arend, Larry; Larimer, James O.
2004-06-01
We have measured the ability of observers to estimate the contrast ratio (maximum white luminance / minimum black or gray) of various displays and to assess luminous discrimination over the tonescale of the display. This was done using only the computer itself and easily-distributed devices such as neutral density filters. The ultimate goal of this work is to see how much of the characterization of a display can be performed by the ordinary user in situ, in a manner that takes advantage of the unique abilities of the human visual system and measures visually important aspects of the display. We discuss the relationship among contrast ratio, tone scale, display transfer function and room lighting. These results may contribute to the development of applications that allow optimization of displays for the situated viewer / display system without instrumentation and without indirect inferences from laboratory to workplace.
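The two quantities being estimated above, display contrast ratio and grating contrast, are related but distinct measures; a minimal sketch follows (the luminance values are illustrative, not measurements from the study).

```python
def contrast_ratio(l_max, l_min):
    # Display contrast ratio: maximum white over minimum black luminance.
    return l_max / l_min

def michelson_contrast(l_max, l_min):
    # Michelson contrast, the usual measure for sinusoidal gratings.
    return (l_max - l_min) / (l_max + l_min)

# Illustrative luminances in cd/m^2 for a desktop display:
white, black = 300.0, 0.6
print(f"contrast ratio: {contrast_ratio(white, black):.0f}:1")
print(f"Michelson contrast: {michelson_contrast(white, black):.3f}")
```

Note the compression at the top end: once the ratio is large, Michelson contrast saturates near 1, which is one reason observers' estimates of contrast ratio by eye are a nontrivial measurement problem.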
The Learning of Difficult Visual Discriminations by the Moderately and Severely Retarded
ERIC Educational Resources Information Center
Gold, Marc W.; Barclay, Craig R.
2015-01-01
A procedure to effectively and efficiently train moderately and severely retarded individuals to make fine visual discriminations is described. Results suggest that expectancies for such individuals are in need of examination. Implications for sheltered workshops, work activity centers and classrooms are discussed. [This article appeared…
Networks That Learn to Discriminate Similar Kanji Characters
Mori, Yoshihiro; Yokosawa, Kazuhiko
1989-08-14
ATR Auditory and Visual Perception Research Laboratories. The same report collection also lists: Further Explorations in the Learning of Visually-Guided Reaching: Making Murphy…
ERIC Educational Resources Information Center
Bahrick, Lorraine E.; Krogh-Jespersen, Sheila; Argumosa, Melissa A.; Lopez, Hassel
2014-01-01
Although infants and children show impressive face-processing skills, little research has focused on the conditions that facilitate versus impair face perception. According to the intersensory redundancy hypothesis (IRH), face discrimination, which relies on detection of visual featural information, should be impaired in the context of…
Isolating Discriminant Neural Activity in the Presence of Eye Movements and Concurrent Task Demands
Touryan, Jon; Lawhern, Vernon J.; Connolly, Patrick M.; Bigdely-Shamlo, Nima; Ries, Anthony J.
2017-01-01
A growing number of studies use the combination of eye-tracking and electroencephalographic (EEG) measures to explore the neural processes that underlie visual perception. In these studies, fixation-related potentials (FRPs) are commonly used to quantify early and late stages of visual processing that follow the onset of each fixation. However, FRPs reflect a mixture of bottom-up (sensory-driven) and top-down (goal-directed) processes, in addition to eye movement artifacts and unrelated neural activity. At present there is little consensus on how to separate this evoked response into its constituent elements. In this study we sought to isolate the neural sources of target detection in the presence of eye movements and over a range of concurrent task demands. Here, participants were asked to identify visual targets (Ts) amongst a grid of distractor stimuli (Ls), while simultaneously performing an auditory N-back task. To identify the discriminant activity, we used independent components analysis (ICA) for the separation of EEG into neural and non-neural sources. We then further separated the neural sources, using a modified measure-projection approach, into six regions of interest (ROIs): occipital, fusiform, temporal, parietal, cingulate, and frontal cortices. Using activity from these ROIs, we identified target from non-target fixations in all participants at a level similar to other state-of-the-art classification techniques. Importantly, we isolated the time course and spectral features of this discriminant activity in each ROI. In addition, we were able to quantify the effect of cognitive load on both fixation-locked potential and classification performance across regions. Together, our results show the utility of a measure-projection approach for separating task-relevant neural activity into meaningful ROIs within more complex contexts that include eye movements. PMID:28736519
Leach, P T; Crawley, J N
2017-12-20
Mutant mouse models of neurodevelopmental disorders with intellectual disabilities provide useful translational research tools, especially in cases where robust cognitive deficits are reproducibly detected. However, motor, sensory and/or health issues consequent to the mutation may introduce artifacts that preclude testing in some standard cognitive assays. Touchscreen learning and memory tasks in small operant chambers have the potential to circumvent these confounds. Here we use touchscreen visual discrimination learning to evaluate performance in the maternally derived Ube3a mouse model of Angelman syndrome, the Ts65Dn trisomy mouse model of Down syndrome, and the Mecp2 Bird mouse model of Rett syndrome. Significant deficits in acquisition of a 2-choice visual discrimination task were detected in both Ube3a and Ts65Dn mice. Procedural control measures showed no genotype differences during pretraining phases or during acquisition. Mecp2 males did not survive long enough for touchscreen training, consistent with previous reports. Most Mecp2 females failed on pretraining criteria. Significant impairments on Morris water maze spatial learning were detected in both Ube3a and Ts65Dn, replicating previous findings. Abnormalities on rotarod in Ube3a, and on open field in Ts65Dn, replicating previous findings, may have contributed to the observed acquisition deficits and swim speed abnormalities during water maze performance. In contrast, these motor phenotypes do not appear to have affected touchscreen procedural abilities during pretraining or visual discrimination training. Our findings of slower touchscreen learning in 2 mouse models of neurodevelopmental disorders with intellectual disabilities indicate that operant tasks offer promising outcome measures for the preclinical discovery of effective pharmacological therapeutics. © 2017 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.
Lawton, Teri; Shelley-Tremblay, John
2017-01-01
The purpose of this study was to determine whether neurotraining to discriminate a moving test pattern relative to a stationary background, figure-ground discrimination, improves vision and cognitive functioning in dyslexics, as well as typically-developing normal students. We predict that improving the speed and sensitivity of figure-ground movement discrimination (PATH to Reading neurotraining) acts to remediate visual timing deficits in the dorsal stream, thereby improving processing speed, reading fluency, and the executive control functions of attention and working memory in both dyslexic and normal students who had PATH neurotraining more than in those students who had no neurotraining. This prediction was evaluated by measuring whether dyslexic and normal students improved on standardized tests of cognitive skills following neurotraining exercises, more than following computer-based guided reading (Raz-Kids (RK)). The neurotraining used in this study was visually-based training designed to improve magnocellular function at both low and high levels in the dorsal stream: the input to the executive control networks coding working memory and attention. This approach represents a paradigm shift from the phonologically-based treatment for dyslexia, which concentrates on high-level speech and reading areas. This randomized controlled-validation study was conducted by training the entire second and third grade classrooms (42 students) for 30 min twice a week before guided reading. Standardized tests were administered at the beginning and end of 12-weeks of intervention training to evaluate improvements in academic skills. Only movement-discrimination training remediated both low-level visual timing deficits and high-level cognitive functioning, including selective and sustained attention, reading fluency and working memory for both dyslexic and normal students. 
Remediating visual timing deficits in the dorsal stream revealed the causal role of visual movement discrimination training in improving high-level cognitive functions such as attention, reading acquisition and working memory. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways in the dorsal stream is a fundamental cause of dyslexia and being at-risk for reading problems in normal students, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological or language deficits, requiring a paradigm shift from phonologically-based treatment of dyslexia to a visually-based treatment. This study shows that visual movement-discrimination can be used not only to diagnose dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning. PMID:28555097
Crawford, H J; Allen, S N
1983-12-01
To investigate the hypothesis that hypnosis has an enhancing effect on imagery processing, as mediated by hypnotic responsiveness and cognitive strategies, four experiments compared performance of low and high, or low, medium, and high, hypnotically responsive subjects in waking and hypnosis conditions on a successive visual memory discrimination task that required detecting differences between successively presented picture pairs in which one member of the pair was slightly altered. Consistently, hypnotically responsive individuals showed enhanced performance during hypnosis, whereas nonresponsive ones did not. Hypnotic responsiveness correlated .52 (p < .001) with enhanced performance during hypnosis, but it was uncorrelated with waking performance (Experiment 3). Reaction time was not affected by hypnosis, although high hypnotizables were faster than lows in their responses (Experiments 1 and 2). Subjects reported enhanced imagery vividness on the self-report Vividness of Visual Imagery Questionnaire during hypnosis. The differential effect between lows and highs was in the anticipated direction but not significant (Experiments 1 and 2). As anticipated, hypnosis had no significant effect on a discrimination task that required determining whether there were differences between pairs of simultaneously presented pictures. Two cognitive strategies appeared to mediate visual memory performance: (a) a detail strategy, which involved the memorization and rehearsal of individual details for memory, and (b) a holistic strategy, which involved looking at and remembering the whole picture with accompanying imagery. Both lows and highs reported similar, predominantly detail-oriented strategies during waking; only highs shifted to a significantly more holistic strategy during hypnosis. These findings suggest that high hypnotizables have a greater capacity for cognitive flexibility (Battig, 1979) than do lows.
Results are discussed in terms of several theoretical approaches: Paivio's (1971) dual-coding theory and Craik and Tulving's (1975) depth of processing theory. Additional discussion is given to the question of whether hypnosis involves a shift in cerebral dominance, as reflected by the cognitive strategy changes and enhanced imagery processing.
High resolution satellite image indexing and retrieval using SURF features and bag of visual words
NASA Astrophysics Data System (ADS)
Bouteldja, Samia; Kourgli, Assia
2017-03-01
In this paper, we evaluate the performance of the SURF descriptor for high resolution satellite imagery (HRSI) retrieval through a bag-of-visual-words (BoVW) model on a land-use/land-cover (LULC) dataset. Local feature approaches such as the SIFT and SURF descriptors can cope with large variations in scale, rotation, and illumination, and therefore provide better discriminative power and retrieval efficiency than global features, especially for HRSI, which contain a great range of objects and spatial patterns. Moreover, we combine SURF and color features to improve retrieval accuracy, and we propose to learn a category-specific dictionary for each image category, which results in a more discriminative image representation and boosts image retrieval performance.
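The retrieval pipeline sketched in this abstract (local descriptors quantized against a learned dictionary, images compared as word histograms) can be illustrated in miniature. The sketch below uses plain NumPy with a toy k-means and 2-D synthetic descriptors; the descriptor dimensionality, dictionary size, and distance metric are placeholder assumptions, not the authors' SURF-based configuration.

```python
import numpy as np

def kmeans(X, k, iters=25, seed=0):
    """Toy k-means used to build the visual dictionary (codebook)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(0)
    return centers

def bovw_histogram(descriptors, centers):
    """Assign each local descriptor to its nearest visual word and
    return the L1-normalized word-frequency histogram."""
    words = ((descriptors[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()

def retrieve(query_hist, db_hists):
    """Rank database images by Euclidean distance between histograms."""
    dists = np.linalg.norm(db_hists - query_hist, axis=1)
    return np.argsort(dists)
```

A real system would substitute 64-D SURF descriptors extracted from the satellite tiles and a dictionary of hundreds or thousands of visual words.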
The Pivotal Role of the Right Parietal Lobe in Temporal Attention.
Agosta, Sara; Magnago, Denise; Tyler, Sarah; Grossman, Emily; Galante, Emanuela; Ferraro, Francesco; Mazzini, Nunzia; Miceli, Gabriele; Battelli, Lorella
2017-05-01
The visual system is extremely efficient at detecting events across time, even at very fast presentation rates; discriminating the identity of those events, however, is much slower and requires attention over time, a mechanism with a much coarser resolution [Cavanagh, P., Battelli, L., & Holcombe, A. O. Dynamic attention. In A. C. Nobre & S. Kastner (Eds.), The Oxford handbook of attention (pp. 652-675). Oxford: Oxford University Press, 2013]. Patients affected by right parietal lesions, including the TPJ, are severely impaired in discriminating events across time in both visual fields [Battelli, L., Cavanagh, P., & Thornton, I. M. Perception of biological motion in parietal patients. Neuropsychologia, 41, 1808-1816, 2003]. One way to test this ability is a simultaneity judgment task, in which participants indicate whether two events occurred simultaneously. We psychophysically varied the flicker rate of four flickering disks; on most trials, one disk (in either the left or the right visual field) flickered out of phase relative to the others. Participants reported whether the two disks presented on the left, or the two on the right, were flickering simultaneously. In Experiment 1 we tested a total of 23 right and left parietal lesion patients; only right parietal patients showed impairment, in both visual fields, while their low-level visual functions were normal. Importantly, to causally link the right TPJ to relative timing processing, we ran a TMS experiment on healthy participants. Participants underwent three stimulation sessions and performed the same simultaneity judgment task before and after 20 min of low-frequency inhibitory TMS over the right TPJ, the left TPJ, or an early visual area as a control. rTMS over the right TPJ caused a bilateral impairment in the simultaneity judgment task, whereas rTMS over the left TPJ or the early visual area did not affect performance. 
Altogether, our results directly link the right TPJ to the processing of relative time.
Feature extraction with deep neural networks by a generalized discriminant analysis.
Stuhlsatz, André; Lippel, Jens; Zielke, Thomas
2012-04-01
We present an approach to feature extraction that generalizes the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As in LDA, features are assumed to be generated from independent Gaussian class conditionals. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.
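For context, the linear baseline that GerDA generalizes can be written down directly. The sketch below is a plain-NumPy implementation of classical multi-class LDA: within- and between-class scatter matrices, then projection onto the leading eigenvectors of pinv(Sw) @ Sb, keeping at most C - 1 directions for C classes (the dimensionality bound mentioned above). It illustrates only the linear special case, not the DNN-based GerDA training.

```python
import numpy as np

def lda_directions(X, y):
    """Classical LDA: eigenvectors of pinv(Sw) @ Sb, keeping at most
    (n_classes - 1) discriminant directions."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))   # within-class scatter
    Sb = np.zeros((d, d))   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall_mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-vals.real)
    return vecs.real[:, order[: len(classes) - 1]]
```

Projecting the data as X @ W then yields the low-dimensional discriminative features; GerDA replaces this fixed linear map with a learned deep nonlinear one.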
Trick, G L; Burde, R M; Gordon, M O; Santiago, J V; Kilo, C
1988-05-01
In an attempt to elucidate more fully the pathophysiologic basis of early visual dysfunction in patients with diabetes mellitus, color vision (hue discrimination) and spatial resolution (contrast sensitivity) were tested in diabetic patients with little or no retinopathy (n = 57) and age-matched visually normal subjects (n = 35). Some evidence of visual dysfunction was observed in 37.8% of the diabetics with no retinopathy and 60.0% of the diabetics with background retinopathy. Although significant hue discrimination and contrast sensitivity deficits were observed in both groups of diabetic patients, contrast sensitivity was abnormal more frequently than hue discrimination. However, only 5.4% of the diabetics with no retinopathy and 10.0% of the diabetics with background retinopathy exhibited both abnormal hue discrimination and abnormal contrast sensitivity. Contrary to previous reports, blue-yellow (B-Y) and red-green (R-G) hue discrimination deficits were observed with approximately equal frequency. In the diabetic group, contrast sensitivity was reduced at all spatial frequencies tested, but for individual diabetic patients, significant deficits were evident only for the mid-range spatial frequencies. Among diabetic patients, the hue discrimination deficits, but not the contrast sensitivity abnormalities, were correlated with the patients' hemoglobin A1 level. A negative correlation between contrast sensitivity at 6.0 cpd and the duration of diabetes was also observed.
Wang, Changming; Xiong, Shi; Hu, Xiaoping; Yao, Li; Zhang, Jiacai
2012-10-01
Images containing visual objects can be successfully categorized using single-trial electroencephalography (EEG) measured while subjects view the images. Previous studies have shown that task-related information contained in event-related potential (ERP) components can discriminate two or three categories of object images. In this study, we investigated whether four categories of objects (human faces, buildings, cats, and cars) could be mutually discriminated using single-trial EEG data. The EEG waveforms acquired while subjects viewed the four categories of object images were segmented into several ERP components (P1, N1, P2a, and P2b), and Fisher linear discriminant analysis (Fisher-LDA) was used to classify EEG features extracted from the ERP components. First, we compared classification results using features from single ERP components and found that the N1 component achieved the highest classification accuracies. Second, we discriminated the four categories of objects using combined features from multiple ERP components, and showed that combining ERP components improved four-category classification accuracy by exploiting the complementarity of the discriminative information in the components. These findings confirm that four categories of object images can be discriminated with single-trial EEG and can guide the selection of effective EEG features for classifying visual objects.
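A pipeline of the general shape described here (per-window mean-amplitude features followed by a Fisher discriminant) can be sketched as follows. The window boundaries, sampling rate, and simulated data are illustrative assumptions rather than the study's actual parameters, and the two-class discriminant shown would handle the four-category problem only pairwise.

```python
import numpy as np

def erp_features(epochs, sfreq, windows):
    """Mean amplitude per channel in each ERP time window.

    epochs  : array (n_trials, n_channels, n_samples), t = 0 at stimulus onset
    windows : list of (t_start, t_end) in seconds, e.g. P1 ~ (0.08, 0.13)
    Returns a (n_trials, n_channels * n_windows) feature matrix."""
    feats = []
    for t0, t1 in windows:
        i0, i1 = int(round(t0 * sfreq)), int(round(t1 * sfreq))
        feats.append(epochs[:, :, i0:i1].mean(axis=2))
    return np.concatenate(feats, axis=1)

def fisher_lda_train(X, y):
    """Two-class Fisher discriminant: w proportional to Sw^{-1}(m1 - m0)."""
    X0, X1 = X[y == 0], X[y == 1]
    S0 = (X0 - X0.mean(0)).T @ (X0 - X0.mean(0))
    S1 = (X1 - X1.mean(0)).T @ (X1 - X1.mean(0))
    Sw = S0 + S1 + 1e-6 * np.eye(X.shape[1])   # small ridge for stability
    w = np.linalg.solve(Sw, X1.mean(0) - X0.mean(0))
    threshold = w @ (X0.mean(0) + X1.mean(0)) / 2.0
    return w, threshold

def fisher_lda_predict(X, w, threshold):
    return (X @ w > threshold).astype(int)
```

Concatenating the window features, as in the study's multi-component condition, lets the discriminant exploit complementary information across components.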
[Visual perception and its disorders].
Ruf-Bächtiger, L
1989-11-21
It is the brain, not the eye, that decides what is perceived. Despite this fact, much is known about the functioning of the eye and the first sections of the optic tract, but little about the actual process of perception. Examination of visual perception and its malfunctions therefore relies on certain hypotheses. Proceeding from the model of functional brain systems, different functional domains of visual perception can be distinguished, among the more important of which are digit span, visual discrimination, and figure-ground discrimination. Evaluating these functional domains allows us to better understand children with disorders of visual perception and to develop more effective treatment methods.
Fox, Jessica L.; Aptekar, Jacob W.; Zolotova, Nadezhda M.; Shoemaker, Patrick A.; Frye, Mark A.
2014-01-01
The behavioral algorithms and neural subsystems for visual figure–ground discrimination are not sufficiently described in any model system. The fly visual system shares structural and functional similarity with that of vertebrates and, like vertebrates, flies robustly track visual figures in the face of ground motion. This computation is crucial for animals that pursue salient objects under the high performance requirements imposed by flight behavior. Flies smoothly track small objects and use wide-field optic flow to maintain flight-stabilizing optomotor reflexes. The spatial and temporal properties of visual figure tracking and wide-field stabilization have been characterized in flies, but how the two systems interact spatially to allow flies to actively track figures against a moving ground has not. We took a systems identification approach in flying Drosophila and measured wing-steering responses to velocity impulses of figure and ground motion independently. We constructed a spatiotemporal action field (STAF) – the behavioral analog of a spatiotemporal receptive field – revealing how the behavioral impulse responses to figure tracking and concurrent ground stabilization vary for figure motion centered at each location across the visual azimuth. The figure tracking and ground stabilization STAFs show distinct spatial tuning and temporal dynamics, confirming the independence of the two systems. When the figure tracking system is activated by a narrow vertical bar moving within the frontal field of view, ground motion is essentially ignored despite comprising over 90% of the total visual input. PMID:24198267
HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.
Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye
2017-02-09
In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to new training images and new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.
Ono, T; Tamura, R; Nishijo, H; Nakamura, K; Tabuchi, E
1989-02-01
Visual information processing was investigated in the inferotemporal cortical (ITCx)-amygdalar (AM)-lateral hypothalamic (LHA) axis, which contributes to food-nonfood discrimination. Neuronal activity was recorded from monkey AM and LHA during discrimination of sensory stimuli including the sight of food or nonfood. The task had four phases: control, visual, bar press, and ingestion. Of 710 AM neurons tested, 220 (31.0%) responded during the visual phase: 48 (6.8%) to visual stimulation only, 13 (1.9%) to visual plus oral sensory stimulation, 142 (20.0%) to multimodal stimulation, and 17 (2.4%) to one affectively significant item. Of 669 LHA neurons tested, 106 (15.8%) responded in the visual phase. Of 80 visual-related neurons tested systematically, 33 (41.2%) responded selectively to the sight of any object predicting the availability of reward, and 47 (58.8%) responded nondifferentially to both food and nonfood. Many AM neuron responses were graded according to the degree of affective significance of the sensory stimuli (sensory-affective association), but responses of LHA food-responsive neurons did not depend on the kind of reward indicated by the sensory stimuli (stimulus-reinforcement association). Some AM and LHA food responses were modulated by extinction or reversal. Dynamic information processing in the ITCx-AM-LHA axis was investigated by reversibly inactivating bilateral ITCx or AM by cooling. ITCx cooling suppressed discrimination by vision-responding AM neurons (8/17), and AM cooling suppressed LHA responses to food (9/22). We suggest deep AM-LHA involvement in food-nonfood discrimination, based on AM sensory-affective association and LHA stimulus-reinforcement association.
Visuoperceptual impairment in dementia with Lewy bodies.
Mori, E; Shimomura, T; Fujimori, M; Hirono, N; Imamura, T; Hashimoto, M; Tanimukai, S; Kazui, H; Hanihara, T
2000-04-01
In dementia with Lewy bodies (DLB), vision-related cognitive and behavioral symptoms are common, and involvement of the occipital visual cortices has been demonstrated in functional neuroimaging studies. To delineate visuoperceptual disturbance in patients with DLB in comparison with that in patients with Alzheimer disease, and to explore the relationship between visuoperceptual disturbance and the vision-related cognitive and behavioral symptoms. Case-control study. Research-oriented hospital. Twenty-four patients with probable DLB (based on criteria of the Consortium on DLB International Workshop) and 48 patients with probable Alzheimer disease (based on criteria of the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association) who were matched to those with DLB 2:1 by age, sex, education, and Mini-Mental State Examination score. Four test items to examine visuoperceptual functions: the object size discrimination, form discrimination, overlapping figure identification, and visual counting tasks. Compared with patients with probable Alzheimer disease, patients with probable DLB scored significantly lower on all the visuoperceptual tasks (P<.04 to P<.001). In the DLB group, patients with visual hallucinations (n = 18) scored significantly lower on overlapping figure identification (P = .01) than those without them (n = 6), and patients with television misidentifications (n = 5) scored significantly lower on size discrimination (P<.001), form discrimination (P = .01), and visual counting (P = .007) than those without them (n = 19). Visual perception is defective in probable DLB. This defective visual perception plays a role in the development of the visual hallucinations, delusional misidentifications, visual agnosias, and visuoconstructive disability characteristic of DLB.
Hippocampus, Perirhinal Cortex, and Complex Visual Discriminations in Rats and Humans
ERIC Educational Resources Information Center
Hales, Jena B.; Broadbent, Nicola J.; Velu, Priya D.; Squire, Larry R.; Clark, Robert E.
2015-01-01
Structures in the medial temporal lobe, including the hippocampus and perirhinal cortex, are known to be essential for the formation of long-term memory. Recent animal and human studies have investigated whether perirhinal cortex might also be important for visual perception. In our study, using a simultaneous oddity discrimination task, rats with…
ERIC Educational Resources Information Center
Turchi, Janita; Buffalari, Deanne; Mishkin, Mortimer
2008-01-01
Monkeys trained in either one-trial recognition at 8- to 10-min delays or multi-trial discrimination habits with 24-h intertrial intervals received systemic cholinergic and dopaminergic antagonists, scopolamine and haloperidol, respectively, in separate sessions. Recognition memory was impaired markedly by scopolamine but not at all by…
ERIC Educational Resources Information Center
Kodak, Tiffany; Clements, Andrea; LeBlanc, Brittany
2013-01-01
The purpose of the present investigation was to evaluate a rapid assessment procedure to identify effective instructional strategies to teach auditory-visual conditional discriminations to children diagnosed with autism. We replicated and extended previous rapid skills assessments (Lerman, Vorndran, Addison, & Kuhn, 2004) by evaluating the effects…
Speaker Identity Supports Phonetic Category Learning
ERIC Educational Resources Information Center
Mani, Nivedita; Schneider, Signe
2013-01-01
Visual cues from the speaker's face, such as the discriminable mouth movements used to produce speech sounds, improve discrimination of these sounds by adults. The speaker's face, however, provides more information than just the mouth movements used to produce speech--it also provides a visual indexical cue of the identity of the speaker. The…
Discrimination of numerical proportions: A comparison of binomial and Gaussian models.
Raidvee, Aire; Lember, Jüri; Allik, Jüri
2017-01-01
Observers discriminated the numerical proportion of two sets of elements (N = 9, 13, 33, and 65) that differed either by color or by orientation. According to the standard Thurstonian approach, the accuracy of proportion discrimination is determined by irreducible noise in the nervous system that stochastically transforms the number of presented visual elements onto a continuum of psychological states representing numerosity. As an alternative to this customary approach, we propose a Thurstonian-binomial model, which assumes discrete perceptual states, each associated with a certain visual element. It is shown that the probability β with which each visual element is noticed and registered by the perceptual system can explain numerical proportion discrimination data at least as well as the continuous Thurstonian-Gaussian model, and better if the greater parsimony of the Thurstonian-binomial model is taken into account using AIC model selection. We conclude that the Gaussian and binomial models represent two different fundamental principles: internal noise versus use of only a fraction of the available information. Both are plausible descriptions of visual perception.
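The Thurstonian-binomial idea can be made concrete with a short simulation: each of the N elements is independently registered with probability β, and the observer reports the set with the larger registered count, guessing on ties. This is a generic sketch of the model class, not the authors' fitting procedure.

```python
import numpy as np

def binomial_observer(n_a, n_b, beta, trials=10000, seed=0):
    """Monte Carlo estimate of P(report 'A more numerous') when n_a
    elements of type A and n_b of type B are shown, and each element is
    independently noticed with probability beta. Ties are split 50/50."""
    rng = np.random.default_rng(seed)
    seen_a = rng.binomial(n_a, beta, size=trials)
    seen_b = rng.binomial(n_b, beta, size=trials)
    wins = (seen_a > seen_b).sum() + 0.5 * (seen_a == seen_b).sum()
    return wins / trials
```

Fitting β to psychometric data and comparing the result with a Gaussian-noise model via AIC would follow the model-selection logic described above.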
Treviño, Mario
2014-01-01
Animal choices depend on direct sensory information, but also on dynamic changes in the magnitude of reward. In visual discrimination tasks, the emergence of lateral biases in an animal's choice record is often described as a behavioral artifact, because these biases are highly correlated with error rates affecting psychophysical measurements. Here, we hypothesized that biased choices could constitute a robust behavioral strategy for solving discrimination tasks of graded difficulty. We trained mice to swim in a two-alternative visual discrimination task with escape from water as the reward. The prevalence of lateral choices increased with stimulus similarity and was present even under conditions of high discriminability. While lateralization occurred at the individual level, it was absent, on average, at the population level. Biased choice sequences obeyed the generalized matching law and increased task efficiency when stimulus similarity was high. A mathematical analysis revealed that strongly biased mice used information from past rewards, but not past choices, to make their current choices. We also found that the amount of lateralized choices made during the first day of training predicted individual differences in average learning behavior. This framework provides useful analysis tools for studying individualized visual-learning trajectories in mice. PMID:25524257
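The generalized matching law invoked in this abstract relates log choice ratios linearly to log reward ratios, log(B1/B2) = a·log(R1/R2) + log b, where a is sensitivity and b is bias. A least-squares fit recovers both parameters; the sketch below is a generic illustration, not the authors' analysis code.

```python
import numpy as np

def fit_generalized_matching(B1, B2, R1, R2):
    """Fit log(B1/B2) = a * log(R1/R2) + log(b) by least squares.

    B1, B2 : choice counts for the two alternatives (per session)
    R1, R2 : rewards obtained from the two alternatives (per session)
    Returns (a, b): sensitivity a and bias b."""
    x = np.log(np.asarray(R1, float) / np.asarray(R2, float))
    y = np.log(np.asarray(B1, float) / np.asarray(B2, float))
    a, log_b = np.polyfit(x, y, 1)
    return a, np.exp(log_b)
```

A sensitivity a near 1 indicates strict matching; b different from 1 captures a stable lateral bias of the kind the study analyzes.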
Cho, Hwi-Young; Kim, Kitae; Lee, Byounghee; Jung, Jinhwa
2015-03-01
[Purpose] This study investigated brain wave and visual perception changes in stroke subjects using neurofeedback (NFB) training. [Subjects] Twenty-seven stroke subjects were randomly allocated to the NFB group (n = 13) and the control (CON) group (n = 14). [Methods] Two expert therapists provided both groups with traditional rehabilitation therapy in 30 thirty-minute sessions over the course of 6 weeks; NFB training was provided only to the NFB group, while the CON group received traditional rehabilitation therapy only. Before and after the 6-week intervention, a brain wave test and the motor-free visual perception test (MVPT) were performed. [Results] Both groups showed significant differences in their relative beta wave values and attention concentration quotients. Moreover, the NFB group showed significant differences in MVPT visual discrimination, form constancy, visual memory, visual closure, spatial relations, raw score, and processing time. [Conclusion] This study demonstrated that NFB training is more effective for increasing concentration and changing visual perception than traditional rehabilitation alone. Further studies should undertake detailed and diverse investigations considering the number and characteristics of subjects and the NFB training period.
Prestimulus oscillatory activity in the alpha band predicts visual discrimination ability.
van Dijk, Hanneke; Schoffelen, Jan-Mathijs; Oostenveld, Robert; Jensen, Ole
2008-02-20
Although the resting and baseline states of the human electroencephalogram and magnetoencephalogram (MEG) are dominated by oscillations in the alpha band (approximately 10 Hz), the functional role of these oscillations remains unclear. In this study we used MEG to investigate how spontaneous oscillations preceding visual stimuli modulate visual perception in humans. Subjects had to report whether there was a subtle difference in gray levels between two superimposed discs. We then compared prestimulus brain activity for correctly (hits) versus incorrectly (misses) identified stimuli. We found that visual discrimination ability decreased as prestimulus alpha power increased. Given that reaction times did not vary systematically with prestimulus alpha power, changes in vigilance are not likely to explain the change in discrimination ability. Source reconstruction using spatial filters allowed us to identify the brain areas accounting for this effect: the dominant sources modulating visual perception were localized around the parieto-occipital sulcus. We suggest that parieto-occipital alpha power reflects functional inhibition imposed by higher-level areas, which serves to modulate the gain of the visual stream.
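Prestimulus alpha power of the kind compared here can be estimated, in its simplest form, from the periodogram of each prestimulus segment. The sketch below is a minimal single-channel illustration; MEG studies of this type typically use multitaper or wavelet estimates plus spatial filtering, which are not shown.

```python
import numpy as np

def alpha_power(segment, sfreq, band=(8.0, 12.0)):
    """Mean spectral power in the alpha band for one prestimulus segment
    (1-D array of samples), via the FFT periodogram."""
    seg = segment - segment.mean()              # remove DC offset
    spec = np.abs(np.fft.rfft(seg)) ** 2 / len(seg)
    freqs = np.fft.rfftfreq(len(seg), d=1.0 / sfreq)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spec[mask].mean()
```

Comparing the mean of alpha_power over hit trials against miss trials would reproduce the hits-versus-misses contrast described above.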
The informativity of sound modulates crossmodal facilitation of visual discrimination: an fMRI study.
Li, Qi; Yu, Hongtao; Li, Xiujun; Sun, Hongzan; Yang, Jingjing; Li, Chunlin
2017-01-18
Many studies have investigated behavioral crossmodal facilitation when a visual stimulus is accompanied by a concurrent task-irrelevant sound. Lippert and colleagues reported that a concurrent task-irrelevant sound reduced uncertainty about the timing of the visual display and improved perceptual responses (an informative sound). However, the neural mechanism by which the informativity of sound affects crossmodal facilitation of visual discrimination remained unclear. In this study, we used event-related functional MRI to investigate the neural mechanisms underlying the role of sound informativity in crossmodal facilitation of visual discrimination. Significantly faster reaction times were observed when there was an informative relationship between auditory and visual stimuli. The functional MRI results showed informativity-induced activation enhancement in regions including the left fusiform gyrus and the right lateral occipital complex. Further correlation analysis showed that activity in the right lateral occipital complex was significantly correlated with the behavioral benefit in reaction times. This suggests that this region was modulated by the informative relationship within audiovisual stimuli learnt during the experiment, resulting in late-stage multisensory integration and enhanced behavioral responses.
Visual search accelerates during adolescence.
Burggraaf, Rudolf; van der Geest, Jos N; Frens, Maarten A; Hooge, Ignace T C
2018-05-01
We studied changes in visual-search performance and behavior during adolescence. Search performance was analyzed in terms of reaction time and response accuracy. Search behavior was analyzed in terms of the objects fixated and the duration of these fixations. A large group of adolescents (N = 140; age: 12-19 years; 47% female, 53% male) participated in a visual-search experiment in which their eye movements were recorded with an eye tracker. The experiment consisted of 144 trials (50% with a target present), and participants had to decide whether a target was present. Each trial showed a search display with 36 Gabor patches placed on a hexagonal grid. The target was a vertically oriented element with a high spatial frequency. Nontargets differed from the target in spatial frequency, orientation, or both. Search performance and behavior changed during adolescence; with increasing age, fixation duration and reaction time decreased. Response accuracy, number of fixations, and selection of elements to fixate upon did not change with age. Thus, the speed of foveal discrimination increases with age, while the efficiency of peripheral selection does not change. We conclude that the way visual information is gathered does not change during adolescence, but the processing of visual information becomes faster.
Efficacy of a perceptual and visual-motor skill intervention program for students with dyslexia.
Fusco, Natália; Germano, Giseli Donadon; Capellini, Simone Aparecida
2015-01-01
To verify the efficacy of a perceptual and visual-motor skill intervention program for students with dyslexia. The participants were 20 students from the third to fifth grade of a public elementary school in Marília, São Paulo, aged from 8 years to 11 years and 11 months, distributed into the following groups: Group I (GI; 10 students with developmental dyslexia) and Group II (GII; 10 students with good academic performance). A perceptual and visual-motor intervention program was applied, comprising exercises for visual-motor coordination, visual discrimination, visual memory, visual-spatial relationships, shape constancy, sequential memory, visual figure-ground coordination, and visual closure. In pre- and post-testing, both groups were administered the Test of Visual-Perceptual Skills (TVPS-3), and the quality of handwriting was analyzed using the Dysgraphia Scale. Statistical analysis showed that both groups of students had dysgraphia at pretesting. In visual perceptual skills, GI presented lower performance than GII, as well as poorer quality of writing. After undergoing the intervention program, GI increased its average of correct answers on the TVPS-3 and improved in quality of handwriting. The intervention program proved appropriate for students with dyslexia and showed positive effects, improving visual perception skills and quality of writing for students with developmental dyslexia.
Goodale, M A; Murison, R C
1975-05-02
The effects of bilateral removal of the superior colliculus or visual cortex on visually guided locomotor movements in rats performing a brightness discrimination task were investigated directly with the use of cine film. Rats with collicular lesions showed patterns of locomotion comparable to or more efficient than those of normal animals when approaching one of 5 small doors located at one end of a large open area. In contrast, animals with large but incomplete lesions of visual cortex were distinctly impaired in their visual control of approach responses to the same stimuli. On the other hand, rats with collicular damage showed no orienting reflex or evidence of distraction in the same task when novel visual or auditory stimuli were presented. However, both normal and visual-decorticate rats showed various components of the orienting reflex and disturbance in task performance when the same novel stimuli were presented. These results suggest that although the superior colliculus does not appear to be essential to the visual control of locomotor orientation, this midbrain structure might participate in the mediation of shifts in visual fixation and attention. Visual cortex, while contributing to visuospatial guidance of locomotor movements, might not play a significant role in the control and integration of the orienting reflex.
Stimulus information contaminates summation tests of independent neural representations of features
NASA Technical Reports Server (NTRS)
Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.
2002-01-01
Many models of visual processing assume that visual information is analyzed into separable and independent neural codes, or features. A common psychophysical test of independent features is known as a summation study, which measures performance in a detection, discrimination, or visual search task as the number of proposed features increases. Improvement in human performance with increasing number of available features is typically attributed to the summation, or combination, of information across independent neural coding of the features. In many instances, however, increasing the number of available features also increases the stimulus information in the task, as assessed by an optimal observer that does not include the independent neural codes. In a visual search task with spatial frequency and orientation as the component features, a particular set of stimuli were chosen so that all searches had equivalent stimulus information, regardless of the number of features. In this case, human performance did not improve with increasing number of features, implying that the improvement observed with additional features may be due to stimulus information and not the combination across independent features.
Impairing the useful field of view in natural scenes: Tunnel vision versus general interference.
Ringer, Ryan V; Throneburg, Zachary; Johnson, Aaron P; Kramer, Arthur F; Loschky, Lester C
2016-01-01
A fundamental issue in visual attention is the relationship between the useful field of view (UFOV), the region of visual space where information is encoded within a single fixation, and eccentricity. A common assumption is that impairing attentional resources reduces the size of the UFOV (i.e., tunnel vision). However, most research has not accounted for eccentricity-dependent changes in spatial resolution, potentially conflating fixed visual properties with flexible changes in visual attention. Williams (1988, 1989) argued that foveal loads are necessary to reduce the size of the UFOV, producing tunnel vision. Without a foveal load, it is argued that the attentional decrement is constant across the visual field (i.e., general interference). However, other research asserts that auditory working memory (WM) loads produce tunnel vision. To date, foveal versus auditory WM loads have not been compared to determine if they differentially change the size of the UFOV. In two experiments, we tested the effects of a foveal (rotated L vs. T discrimination) task and an auditory WM (N-back) task on an extrafoveal (Gabor) discrimination task. Gabor patches were scaled for size and processing time to produce equal performance across the visual field under single-task conditions, thus removing the confound of eccentricity-dependent differences in visual sensitivity. The results showed that although both foveal and auditory loads reduced Gabor orientation sensitivity, only the foveal load interacted with retinal eccentricity to produce tunnel vision, clearly demonstrating task-specific changes to the form of the UFOV. This has theoretical implications for understanding the UFOV.
Astié, Andrea A; Scardamaglia, Romina C; Muzio, Rubén N; Reboreda, Juan C
2015-10-01
Females of avian brood parasites, like the shiny cowbird (Molothrus bonariensis), locate host nests and on subsequent days return to parasitize them. This ecological pressure for remembering the precise location of multiple host nests may have selected for superior spatial memory abilities. We tested the hypothesis that shiny cowbirds show sex differences in spatial memory abilities associated with sex differences in host nest searching behavior and relative hippocampus volume. We evaluated sex differences during acquisition, reversal and retention after extinction in a visual and a spatial discrimination learning task. Contrary to our prediction, females did not outperform males in the spatial task in either the acquisition or the reversal phases. Similarly, there were no sex differences in either phase in the visual task. During extinction, in both tasks the retention of females was significantly higher than expected by chance up to 50 days after the last rewarded session (∼85-90% of the trials with correct responses), but the performance of males at that time did not differ from that expected by chance. This last result demonstrates a long-term memory capacity in female shiny cowbirds, which were able to remember information learned using either spatial or visual cues after a long retention interval. Copyright © 2015 Elsevier B.V. All rights reserved.
Odour discrimination and identification are improved in early blindness.
Cuevas, Isabel; Plaza, Paula; Rombaux, Philippe; De Volder, Anne G; Renier, Laurent
2009-12-01
Previous studies showed that early blind humans develop superior abilities in the use of their remaining senses, hypothetically due to a functional reorganization of the deprived visual brain areas. While auditory and tactile functions have long been investigated, little is known about the effects of early visual deprivation on olfactory processing, even though blind humans make extensive use of olfactory information in their daily life. Here we investigated olfactory discrimination and identification abilities in early blind subjects and age-matched sighted controls. Three levels of cuing were used in the identification task: free identification (no cue), categorization (semantic cues) and multiple choice (semantic and phonological cues). Early blind subjects significantly outperformed the controls in odour discrimination, free identification and categorization. In addition, the largest group difference was observed in the free-identification condition, as compared to the categorization and multiple-choice conditions. This indicated that better access to semantic information from odour perception accounted for part of the improved performance in odour identification in the blind. We concluded that early blind subjects have both improved perceptual abilities and better access to information stored in semantic memory compared with sighted subjects.
Effective real-time vehicle tracking using discriminative sparse coding on local patches
NASA Astrophysics Data System (ADS)
Chen, XiangJun; Ye, Feiyue; Ruan, Yaduan; Chen, Qimei
2016-01-01
A visual tracking framework that provides an object detector and tracker, focused on effective and efficient visual tracking for surveillance in real-world intelligent transport system applications, is proposed. The framework casts the tracking task as problems of object detection, feature representation, and classification, in contrast to appearance-model-matching approaches. Through a feature representation called DSCLP (discriminative sparse coding on local patches), which trains a dictionary on local clustered patches sampled from both positive and negative datasets, discriminative power and robustness are improved markedly, making the method more robust to complex realistic settings with various kinds of degraded image quality. Moreover, by catching objects through one-time background subtraction, along with offline dictionary training, computation time is dramatically reduced, which enables the framework to achieve real-time tracking performance even in high-definition sequences with heavy traffic. Experimental results show that this work outperforms some state-of-the-art methods in terms of speed, accuracy, and robustness, and exhibits increased robustness in complex real-world scenarios with degraded image quality caused by vehicle occlusion, image blur from rain or fog, and changes in viewpoint or scale.
Metacognition and Group Differences: A Comparative Study
ERIC Educational Resources Information Center
Al-Hilawani, Yasser A.
2014-01-01
In this study, metacognition refers to performing visual analysis and discrimination of real life events and situations in naïve psychology, naïve physics, and naïve biology domains. It is used, along with measuring reaction time, to examine differences in the ability of four groups of students to select appropriate pictures that correspond with…
Brain-Stimulation Induced Blindsight: Unconscious Vision or Response Bias?
Lloyd, David A.; Abrahamyan, Arman; Harris, Justin A.
2013-01-01
A dissociation between visual awareness and visual discrimination is referred to as “blindsight”. Blindsight results from loss of function of the primary visual cortex (V1) which can occur due to cerebrovascular accidents (i.e. stroke-related lesions). There are also numerous reports of similar, though reversible, effects on vision induced by transcranial Magnetic Stimulation (TMS) to early visual cortex. These effects point to V1 as the “gate” of visual awareness and have strong implications for understanding the neurological underpinnings of consciousness. It has been argued that evidence for the dissociation between awareness of, and responses to, visual stimuli can be a measurement artifact of the use of a high response criterion under yes-no measures of visual awareness when compared with criterion-free forced-choice responses. This difference between yes-no and forced-choice measures suggests that evidence for a dissociation may actually be normal near-threshold conscious vision. Here we describe three experiments that tested visual performance in normal subjects when their visual awareness was suppressed by applying TMS to the occipital pole. The nature of subjects’ performance whilst undergoing occipital TMS was then verified by use of a psychophysical measure (d') that is independent of response criteria. This showed that there was no genuine dissociation in visual sensitivity measured by yes-no and forced-choice responses. These results highlight that evidence for visual sensitivity in the absence of awareness must be analysed using a bias-free psychophysical measure, such as d', in order to confirm whether or not visual performance is truly unconscious. PMID:24324837
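The bias-free sensitivity measure the authors rely on, d', separates true visual sensitivity from the response criterion by z-transforming hit and false-alarm rates. Below is a minimal sketch of the standard signal-detection computation, not code from the study; the trial counts are purely illustrative:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (0.5 added to each cell) keeps the inverse
    normal transform finite when a rate would otherwise be 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Two observers with different yes-no response criteria: the conservative
# one reports fewer hits AND fewer false alarms, so the hit rate alone
# suggests worse vision, while d' (true sensitivity) is about the same.
print(d_prime(70, 30, 20, 80))  # liberal criterion, d' ~ 1.35
print(d_prime(40, 60, 5, 95))   # conservative criterion, d' ~ 1.35
```

This is why a high yes-no criterion can mimic blindsight: hits drop while d' stays constant.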
Task-specific reorganization of the auditory cortex in deaf humans
Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin
2017-01-01
The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior–lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain. PMID:28069964
Surguladze, Simon A; Chkonia, Eka D; Kezeli, Archil R; Roinishvili, Maya O; Stahl, Daniel; David, Anthony S
2012-05-01
Abnormalities in visual processing have been found consistently in schizophrenia patients, including deficits in early visual processing, perceptual organization, and facial emotion recognition. There is however no consensus as to whether these abnormalities represent heritable illness traits and what their contribution is to psychopathology. Fifty patients with schizophrenia, 61 of their first-degree healthy relatives, and 50 psychiatrically healthy volunteers were tested with regard to facial affect (FA) discrimination and susceptibility to develop the color-contingent illusion [the McCollough Effect (ME)]. Both patients and relatives demonstrated significantly lower accuracy in FA discrimination compared with controls. There was also a significant effect of familiality: Participants from the same families had more similar accuracy scores than those who belonged to different families. Experiments with the ME showed that schizophrenia patients required longer time to develop the illusion than relatives and controls, which indicated poor visual adaptation in schizophrenia. Relatives were marginally slower than controls. There was no significant association between the measures of FA discrimination accuracy and ME in any of the participant groups. Facial emotion discrimination was associated with the degree of interpersonal problems, as measured by the Schizotypal Personality Questionnaire in relatives and healthy volunteers, whereas the ME was associated with the perceptual-cognitive symptoms of schizotypy and positive symptoms of schizophrenia. Our results support the heritability of FA discrimination deficits as a trait and indicate visual adaptation abnormalities in schizophrenia, which are symptom related.
Transfer in motion perceptual learning depends on the difficulty of the training task.
Wang, Xiaoxiao; Zhou, Yifeng; Liu, Zili
2013-06-07
One hypothesis in visual perceptual learning is that the amount of transfer depends on the difficulty of the training and transfer tasks (Ahissar & Hochstein, 1997; Liu, 1995, 1999). Jeter, Dosher, Petrov, and Lu (2009), using an orientation discrimination task, challenged this hypothesis by arguing that the amount of transfer depends only on the transfer task but not on the training task. Here we show in a motion direction discrimination task that the amount of transfer indeed depends on the difficulty of the training task. Specifically, participants were first trained with either 4° or 8° direction discrimination along one average direction. Their transfer performance was then tested along an average direction 90° away from the trained direction. A variety of transfer measures consistently demonstrated that transfer performance depended on whether the participants were trained on 4° or 8° directional difference. The results contradicted the prediction that transfer was independent of the training task difficulty.
Dore, Patricia; Dumani, Ardian; Wyatt, Geddes; Shepherd, Alex J
2018-03-16
This study explored associations between local and global shape perception on coloured backgrounds, colour discrimination, and non-verbal IQ (NVIQ). Five background colours were chosen for the local and global shape tasks that were tailored for the cone-opponent pathways early in the visual system (cardinal colour directions: L-M, loosely, reddish-greenish; and S-(L + M), or tritan colours, loosely, blueish-yellowish; where L, M and S refer to the long, middle and short wavelength sensitive cones). Participants also completed the Farnsworth-Munsell 100-hue test (FM100) to determine whether performance on the local and global shape tasks correlated with colour discrimination overall, or with performance on the L-M and tritan subsets of the FM100 test. Overall performance on the local and global shape tasks did correlate with scores on the FM100 tests, despite the colour of the background being irrelevant to the shape tasks. There were also significantly larger associations between scores for the L-M subset of the FM100 test, compared to the tritan subset, and accuracy on some of the shape tasks on the reddish, greenish and neutral backgrounds. Participants also completed the non-verbal components of the WAIS and the SPM+ version of Raven's progressive matrices, to determine whether performance on the FM100 test, and on the local and global shape tasks, correlated with NVIQ. FM100 scores correlated significantly with both WAIS and SPM+ scores. These results extend previous work that has indicated FM100 performance is not purely a measure of colour discrimination, but also involves aspects of each participant's NVIQ, such as the ability to attend to local and global aspects of the test, part-whole relationships, perceptual organisation and good visuomotor skills. Overall performance on the local and global shape tasks correlated only with the WAIS scores, not the SPM+. 
These results indicate that those aspects of NVIQ that engage spatial comprehension of local-global relationships and manual manipulation (WAIS), rather than more abstract reasoning (SPM+), are related to performance on the local and global shape tasks. Such links between measures of NVIQ and performance on visual tasks are currently seldom addressed in studies of either shape or colour perception; further studies to explore these issues are recommended. Copyright © 2018 Elsevier Ltd. All rights reserved.
Acerbo, Martin J; Lazareva, Olga F
2018-05-15
Figure-ground segregation is a fundamental visual ability that allows an organism to separate an object from its background. Our earlier research has shown that nucleus rotundus (Rt), a thalamic nucleus processing visual information in pigeons, together with its inhibitory complex, nucleus subpretectalis/interstitio-pretecto-subpretectalis (SP/IPS), are critically involved in figure-ground discrimination (Acerbo et al., 2012; Scully et al., 2014). Here, we further investigated the role of SP/IPS by conducting bilateral microinjections of GABAergic receptor antagonist and agonists (bicuculline and muscimol, respectively) and non-NMDA glutamate receptor antagonist (CNQX) after the pigeons mastered figure-ground discrimination task. We used two doses of each drug (bicuculline: 0.1 mM and 0.05 mM; muscimol: 4.4 mM and 8.8 mM; CNQX: 2.15 mM and 4.6 mM) in a within-subject design, and alternated drug injections with baseline (ACSF). The order of injections was randomized across birds to reduce potential carryover effects. We found that a low dose of bicuculline produced a decrement on figure trials but not on background trials, whereas a high dose impaired performance on background trials but not on figure trials. Muscimol produced an equivalent, dose-dependent impairment on both types of trials. Finally, CNQX had no consistent effect at either dose. Together, these results further confirm our earlier hypothesis that inhibitory projections from SP to Rt modulate figure-ground discrimination, and suggest that the Rt and the SP/IPS provide a plausible substrate that could perform figure-ground segregation in avian brain. Copyright © 2018 Elsevier B.V. All rights reserved.
Berditchevskaia, A.; Cazé, R. D.; Schultz, S. R.
2016-01-01
In recent years, simple GO/NOGO behavioural tasks have become popular due to the relative ease with which they can be combined with technologies such as in vivo multiphoton imaging. To date, it has been assumed that behavioural performance can be captured by the average performance across a session; however, this neglects the effect of motivation on behaviour within individual sessions. We investigated the effect of motivation on mice performing a GO/NOGO visual discrimination task. Performance within a session tended to follow a stereotypical trajectory on a Receiver Operating Characteristic (ROC) chart, beginning with an over-motivated state with many false positives, and transitioning through a more or less optimal regime to end with a low hit rate after satiation. Our observations are reproduced by a new model, the Motivated Actor-Critic, introduced here. Our results suggest that standard measures of discriminability, obtained by averaging across a session, may significantly underestimate behavioural performance. PMID:27272438
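The within-session ROC trajectory described above can be computed directly from a trial log by taking hit and false-alarm rates in consecutive windows rather than one session-wide average. The sketch below uses hypothetical simulated data; the function name and phase parameters are illustrative assumptions, not the authors' Motivated Actor-Critic model:

```python
import random

def roc_trajectory(trials, window=50):
    """Hit rate and false-alarm rate per consecutive window of GO/NOGO trials.

    Each trial is a (is_go, responded) pair. Averaging over the whole
    session would blur the motivational phases this windowed view exposes.
    """
    points = []
    for start in range(0, len(trials) - window + 1, window):
        chunk = trials[start:start + window]
        go = [responded for is_go, responded in chunk if is_go]
        nogo = [responded for is_go, responded in chunk if not is_go]
        hit_rate = sum(go) / len(go) if go else 0.0
        fa_rate = sum(nogo) / len(nogo) if nogo else 0.0
        points.append((fa_rate, hit_rate))
    return points

# Hypothetical session: over-motivated start (respond to nearly everything),
# a near-optimal middle, then satiation (respond to almost nothing).
random.seed(0)

def trial(p_go_response, p_nogo_response):
    is_go = random.random() < 0.5
    responded = random.random() < (p_go_response if is_go else p_nogo_response)
    return is_go, responded

session = ([trial(0.95, 0.90) for _ in range(100)]    # over-motivated
           + [trial(0.90, 0.20) for _ in range(100)]  # near-optimal
           + [trial(0.10, 0.05) for _ in range(100)]) # satiated
for fa_rate, hit_rate in roc_trajectory(session, window=100):
    print(f"FA={fa_rate:.2f}  hit={hit_rate:.2f}")
```

Plotting these (FA, hit) points on ROC axes traces the arc from the liberal upper-right corner toward the lower-left as motivation wanes.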
Kamitani, Toshiaki; Kuroiwa, Yoshiyuki
2009-01-01
Recent studies demonstrated an altered P3 component and prolonged reaction time during the visual discrimination tasks in multiple system atrophy (MSA). In MSA, however, little is known about the N2 component which is known to be closely related to the visual discrimination process. We therefore compared the N2 component as well as the N1 and P3 components in 17 MSA patients with these components in 10 normal controls, by using a visual selective attention task to color or to shape. While the P3 in MSA was significantly delayed in selective attention to shape, the N2 in MSA was significantly delayed in selective attention to color. N1 was normally preserved both in attention to color and in attention to shape. Our electrophysiological results indicate that the color discrimination process during selective attention is impaired in MSA.
Lindor, Ebony; Rinehart, Nicole; Fielding, Joanne
2018-05-22
Individuals with Autism Spectrum Disorder (ASD) often excel on visual search and crowding tasks; however, inconsistent findings suggest that this 'islet of ability' may not be characteristic of the entire spectrum. We examined whether performance on these tasks changed as a function of motor proficiency in children with varying levels of ASD symptomology. Children with high ASD symptomology outperformed all others on complex visual search tasks, but only if their motor skills were rated at, or above, age expectations. For the visual crowding task, children with high ASD symptomology and superior motor skills exhibited enhanced target discrimination, whereas those with high ASD symptomology but poor motor skills experienced deficits. These findings may resolve some of the discrepancies in the literature.
Memory and event-related potentials for rapidly presented emotional pictures.
Versace, Francesco; Bradley, Margaret M; Lang, Peter J
2010-08-01
Dense array event-related potentials (ERPs) and memory performance were assessed following rapid serial visual presentation (RSVP) of emotional and neutral pictures. Despite the extremely brief presentation, emotionally arousing pictures prompted an enhanced negative voltage over occipital sensors, compared to neutral pictures, replicating previous encoding effects. Emotionally arousing pictures were also remembered better in a subsequent recognition test, with higher hit rates and better discrimination performance. ERPs measured during the recognition test showed both an early (250-350 ms) frontally distributed difference between hits and correct rejections, and a later (400-500 ms), more centrally distributed difference, consistent with effects of recognition on ERPs typically found using slower presentation rates. The data are consistent with the hypothesis that features of affective pictures pop out during rapid serial visual presentation, prompting better memory performance.
Explicit attention interferes with selective emotion processing in human extrastriate cortex.
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2007-02-22
Brain imaging and event-related potential studies provide strong evidence that emotional stimuli guide selective attention in visual processing. A reflection of the emotional attention capture is the increased Early Posterior Negativity (EPN) for pleasant and unpleasant compared to neutral images (approximately 150-300 ms poststimulus). The present study explored whether this early emotion discrimination reflects an automatic phenomenon or is subject to interference by competing processing demands. Thus, emotional processing was assessed while participants performed a concurrent feature-based attention task varying in processing demands. Participants successfully performed the primary visual attention task as revealed by behavioral performance and selected event-related potential components (Selection Negativity and P3b). Replicating previous results, emotional modulation of the EPN was observed in a task condition with low processing demands. In contrast, pleasant and unpleasant pictures failed to elicit increased EPN amplitudes compared to neutral images in more difficult explicit attention task conditions. Further analyses determined that even the processing of pleasant and unpleasant pictures high in emotional arousal is subject to interference in experimental conditions with high task demand. Taken together, performing demanding feature-based counting tasks interfered with differential emotion processing indexed by the EPN. The present findings demonstrate that taxing processing resources by a competing primary visual attention task markedly attenuated the early discrimination of emotional from neutral picture contents. Thus, these results provide further empirical support for an interference account of the emotion-attention interaction under conditions of competition. Previous studies revealed the interference of selective emotion processing when attentional resources were directed to locations of explicitly task-relevant stimuli. 
The present data suggest that interference of emotion processing by competing task demands is a more general phenomenon extending to the domain of feature-based attention. Furthermore, the results are inconsistent with the notion of effortlessness, i.e., early emotion discrimination despite concurrent task demands. These findings suggest that the presumed automatic nature of emotion processing should be assessed at the level of specific aspects, rather than treating automaticity as an all-or-none phenomenon.
ERIC Educational Resources Information Center
Kemner, Chantal; van Ewijk, Lizet; van Engeland, Herman; Hooge, Ignace
2008-01-01
Subjects with PDD excel on certain visuo-spatial tasks, among them visual search tasks, and this has been attributed to enhanced perceptual discrimination. However, an alternative explanation is that subjects with PDD show a different, more effective search strategy. The present study aimed to test both hypotheses, by measuring eye movements…
A Further Evaluation of Picture Prompts during Auditory-Visual Conditional Discrimination Training
ERIC Educational Resources Information Center
Carp, Charlotte L.; Peterson, Sean P.; Arkel, Amber J.; Petursdottir, Anna I.; Ingvarsson, Einar T.
2012-01-01
This study was a systematic replication and extension of Fisher, Kodak, and Moore (2007), in which a picture prompt embedded into a least-to-most prompting sequence facilitated acquisition of auditory-visual conditional discriminations. Participants were 4 children who had been diagnosed with autism; 2 had limited prior receptive skills, and 2 had…
ERIC Educational Resources Information Center
Janssen, David Rainsford
This study investigated alternate methods of letter discrimination pretraining and word recognition training in young children. Seventy kindergarten children were trained to recognize eight printed words in a vocabulary list by a mixed-list paired-associate method. Four of the stimulus words had visual response choices (pictures) and four had…
NASA Technical Reports Server (NTRS)
Laverghetta, A. V.; Shimizu, T.
1999-01-01
The nucleus rotundus is a large thalamic nucleus in birds and plays a critical role in many visual discrimination tasks. To test the hypothesis that there are functionally distinct subdivisions within the nucleus rotundus, the effects of selective lesions of the nucleus were studied in pigeons. The birds were trained to discriminate between different types of stationary objects and between different directions of moving objects. Multiple regression analyses revealed that lesions in the anterior, but not the posterior, division caused deficits in discrimination of small stationary stimuli. Lesions in neither the anterior nor the posterior division predicted deficits in discrimination of moving stimuli. These results are consistent with a prediction derived from the hypothesis that the nucleus is composed of functional subdivisions.
Attentional demands of movement observation as tested by a dual task approach.
Saucedo Marquez, Cinthia M; Ceux, Tanja; Wenderoth, Nicole
2011-01-01
Movement observation (MO) has been shown to activate the motor cortex of the observer, as indicated by an increase of corticomotor excitability for muscles involved in the observed actions. Moreover, behavioral work has strongly suggested that this process occurs in a near-automatic manner. Here we further tested this proposal by applying transcranial magnetic stimulation (TMS) while subjects observed how an actor lifted objects of different weights as a single or a dual task. The secondary task was either an auditory discrimination task (experiment 1) or a visual discrimination task (experiment 2). In experiment 1, we found that corticomotor excitability reflected the force requirements indicated in the observed movies (i.e., higher responses when the actor had to apply higher forces). Interestingly, this effect was found irrespective of whether MO was performed as a single or a dual task. By contrast, no such systematic modulations of corticomotor excitability were observed in experiment 2, when visual distracters were present. We conclude that interference effects might arise when MO is performed while competing visual stimuli are present. However, when a secondary task is situated in a different modality, neural responses are in line with the notion that the observer's motor system responds in a near-automatic manner. This suggests that MO is a task with very low cognitive demands, which might make it a valuable supplement for rehabilitation training, particularly in the acute phase after the incident or in patients suffering from attention deficits. However, it is important to keep in mind that visual distracters might interfere with the neural response in M1.
Estimation of detection thresholds for redirected walking techniques.
Steinicke, Frank; Bruder, Gerd; Jerald, Jason; Frenz, Harald; Lappe, Markus
2010-01-01
In immersive virtual environments (IVEs), users can control their virtual viewpoint by moving their tracked head and walking through the real world. Usually, movements in the real world are mapped one-to-one to virtual camera motions. With redirection techniques, the virtual camera is manipulated by applying gains to user motion so that the virtual world moves differently than the real world. Thus, users can walk through large-scale IVEs while physically remaining in a reasonably small workspace. In psychophysical experiments with a two-alternative forced-choice task, we have quantified how much humans can unknowingly be redirected on physical paths that are different from the visually perceived paths. We tested 12 subjects in three different experiments: (E1) discrimination between virtual and physical rotations, (E2) discrimination between virtual and physical straightforward movements, and (E3) discrimination of path curvature. In experiment E1, subjects performed rotations with different gains, and then had to choose whether the visually perceived rotation was smaller or greater than the physical rotation. In experiment E2, subjects chose whether the physical walk was shorter or longer than the visually perceived scaled travel distance. In experiment E3, subjects estimated the path curvature when walking a curved path in the real world while the visual display showed a straight path in the virtual world. Our results show that users can be turned physically about 49 percent more or 20 percent less than the perceived virtual rotation, distances can be downscaled by 14 percent and upscaled by 26 percent, and users can be redirected on a circular arc with a radius greater than 22 m while they believe that they are walking straight.
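The gain manipulation described above can be sketched in a few lines. This is a minimal illustration using the detection-threshold ranges reported in the abstract; the function and constant names are invented, not the authors' implementation.

```python
# Hypothetical redirection controller sketch: real-world motion deltas are
# scaled by gains before driving the virtual camera. Threshold ranges below
# are derived from the abstract's reported values.

ROTATION_GAIN_RANGE = (1 / 1.49, 1 / 0.80)   # ~0.67..1.25: turned 49% more to 20% less
TRANSLATION_GAIN_RANGE = (0.86, 1.26)        # distances downscaled 14%, upscaled 26%

def redirect(real_delta, gain, gain_range):
    """Scale a real motion delta; flag whether the gain is likely detectable."""
    lo, hi = gain_range
    detectable = not (lo <= gain <= hi)
    return real_delta * gain, detectable

# Example: a 90-degree physical turn rendered with a 1.10 rotation gain.
virtual_turn, noticed = redirect(90.0, 1.10, ROTATION_GAIN_RANGE)
```

A practical controller would keep its gains inside these ranges so the manipulation stays below the users' detection thresholds.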
Ragozzino, Michael E; Artis, Sonja; Singh, Amritha; Twose, Trevor M; Beck, Joseph E; Messer, William S
2012-03-01
Various neurodegenerative diseases and psychiatric disorders are marked by alterations in brain cholinergic function and cognitive deficits. Efforts to alleviate such deficits have been limited by a lack of selective M1 muscarinic agonists. 5-(3-Ethyl-1,2,4-oxadiazol-5-yl)-1,4,5,6-tetrahydropyrimidine hydrochloride (CDD-0102A) is a partial agonist at M1 muscarinic receptors with limited activity at other muscarinic receptor subtypes. The present studies investigated the effects of CDD-0102A on working memory and strategy shifting in rats. CDD-0102A administered intraperitoneally 30 min before testing at 0.1, 0.3, and 1 mg/kg significantly enhanced delayed spontaneous alternation performance in a four-arm cross maze, suggesting improvement in working memory. In separate experiments, CDD-0102A had potent enhancing effects on learning and switching between a place and visual cue discrimination. Treatment with CDD-0102A did not affect acquisition of either a place or visual cue discrimination. In contrast, CDD-0102A at 0.03 and 0.1 mg/kg significantly enhanced a shift between a place and visual cue discrimination. Analysis of the errors in the shift to the place or shift to the visual cue strategy revealed that in both cases CDD-0102A significantly increased the ability to initially inhibit a previously relevant strategy and maintain a new, relevant strategy once selected. In anesthetized rats, the minimum dose required to induce salivation was approximately 0.3 mg/kg i.p. Salivation increased with dose, and the estimated ED50 was 2.0 mg/kg. The data suggest that CDD-0102A has unique memory and cognitive enhancing properties that might be useful in the treatment of neurological disorders at doses that do not produce adverse effects such as salivation.
Brain-actuated gait trainer with visual and proprioceptive feedback
NASA Astrophysics Data System (ADS)
Liu, Dong; Chen, Weihai; Lee, Kyuhwa; Chavarriaga, Ricardo; Bouri, Mohamed; Pei, Zhongcai; Millán, José del R.
2017-10-01
Objective. Brain-machine interfaces (BMIs) have been proposed in closed-loop applications for neuromodulation and neurorehabilitation. This study describes the impact of different feedback modalities on the performance of an EEG-based BMI that decodes motor imagery (MI) of leg flexion and extension. Approach. We executed experiments in a lower-limb gait trainer (the legoPress) where nine able-bodied subjects participated in three consecutive sessions based on a crossover design. A random forest classifier was trained from the offline session and tested online with visual and proprioceptive feedback, respectively. Post-hoc classification was conducted to assess the impact of feedback modalities and learning effect (an improvement over time) on the simulated trial-based performance. Finally, we performed feature analysis to investigate the discriminant power and brain pattern modulations across the subjects. Main results. (i) For real-time classification, the average accuracy was 62.33 ± 4.95% and 63.89 ± 6.41% for the two online sessions. The results were significantly higher than chance level, demonstrating the feasibility of distinguishing between MI of leg extension and flexion. (ii) For post-hoc classification, the performance with proprioceptive feedback (69.45 ± 9.95%) was significantly better than with visual feedback (62.89 ± 9.20%), while there was no significant learning effect. (iii) We report individual discriminant features and brain patterns associated with each feedback modality, which exhibited differences between the two modalities although no general conclusion can be drawn. Significance. The study reported a closed-loop brain-controlled gait trainer, as a proof of concept for neurorehabilitation devices, and the feasibility of decoding lower-limb movement in an intuitive and natural way. As far as we know, this is the first online study discussing the role of feedback modalities in lower-limb MI decoding.
Our results suggest that proprioceptive feedback has an advantage over visual feedback, which could be used to improve robot-assisted strategies for motor training and functional recovery.
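The two-class decoding pipeline named above (a random forest scored against the ~50% chance level) can be sketched as follows. The data here are synthetic stand-ins, not the study's EEG recordings, and the feature construction is invented purely for illustration.

```python
# Illustrative sketch (not the authors' pipeline): a random forest trained on
# synthetic "band-power" features for two motor-imagery classes, with
# cross-validated accuracy compared against the 50% chance level.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 200, 16
X = rng.normal(size=(n_trials, n_features))   # stand-in band-power features
y = rng.integers(0, 2, size=n_trials)         # two MI classes (flexion/extension)
X[y == 1, :4] += 1.0                          # crude class-dependent modulation

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean() # chance level is ~0.5
```

With any class-dependent structure in the features, the cross-validated accuracy rises above chance, which is the comparison the abstract reports.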
Discrimination between smiling faces: Human observers vs. automated face analysis.
Del Líbano, Mario; Calvo, Manuel G; Fernández-Martín, Andrés; Recio, Guillermo
2018-05-11
This study investigated (a) how prototypical happy faces (with happy eyes and a smile) can be discriminated from blended expressions with a smile but non-happy eyes, depending on type and intensity of the eye expression; and (b) how smile discrimination differs for human perceivers versus automated face analysis, depending on affective valence and morphological facial features. Human observers categorized faces as happy or non-happy, or rated their valence. Automated analysis (FACET software) computed seven expressions (including joy/happiness) and 20 facial action units (AUs). Physical properties (low-level image statistics and visual saliency) of the face stimuli were controlled. Results revealed, first, that some blended expressions (especially, with angry eyes) had lower discrimination thresholds (i.e., they were identified as "non-happy" at lower non-happy eye intensities) than others (especially, with neutral eyes). Second, discrimination sensitivity was better for human perceivers than for automated FACET analysis. As an additional finding, affective valence predicted human discrimination performance, whereas morphological AUs predicted FACET discrimination. FACET can be a valid tool for categorizing prototypical expressions, but is currently more limited than human observers for discrimination of blended expressions. Configural processing facilitates detection of incongruences across regions, and thus detection of non-genuine smiling faces (due to non-happy eyes). Copyright © 2018 Elsevier B.V. All rights reserved.
Rapid learning in visual cortical networks.
Wang, Ye; Dragoi, Valentin
2015-08-26
Although changes in brain activity during learning have been extensively examined at the single-neuron level, the coding strategies employed by cell populations remain poorly understood. We examined cell populations in macaque area V4 during a rapid form of perceptual learning that emerges within tens of minutes. Multiple single units and LFP responses were recorded as monkeys improved their performance in an image discrimination task. We show that the increase in behavioral performance during learning is predicted by a tight coordination of spike timing with local population activity. Stronger spike-LFP theta synchronization correlated with higher learning performance, whereas high-frequency synchronization was unrelated to changes in performance; these changes were absent once learning had stabilized and stimuli became familiar, and in the absence of learning. These findings reveal a novel mechanism of plasticity in visual cortex by which elevated low-frequency synchronization between individual neurons and local population activity accompanies the improvement in performance during learning.
Fujisawa, Junya; Touyama, Hideaki; Hirose, Michitaka
2008-01-01
In this paper, we focused on alpha band modulation during visual spatial attention in the absence of visual stimuli. Visual spatial attention has been expected to provide a new channel for non-invasive independent brain-computer interfaces (BCIs), but little work has been done on this interfacing method. The flickering stimuli used in previous work reduce independence and are difficult to use in practice. We therefore investigated whether visual spatial attention can be detected without such stimuli. Furthermore, common spatial patterns (CSP) were applied for the first time to brain states during visual spatial attention. Performance was evaluated on three brain states: attention to the left, right, and center directions. Thirty-channel scalp electroencephalographic (EEG) signals over occipital cortex were recorded from five subjects. Without CSP, the average classification accuracy in discriminating the left and right attention classes was 66.44% (range 55.42 to 72.27%). With CSP, the average classification accuracy was 75.39% (range 63.75 to 86.13%). This suggests that CSP is useful in the context of visual spatial attention, and that alpha band modulation during visual spatial attention without flickering stimuli could provide a new channel for independent BCI, alongside motor imagery.
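Common spatial patterns, as applied above, can be sketched generically. This is a standard CSP construction (averaged trial covariances, generalized eigendecomposition, log-variance features), not the authors' code; array shapes and parameter names are our own.

```python
# Minimal CSP sketch: spatial filters are eigenvectors of the generalized
# eigenproblem C_a w = lambda (C_a + C_b) w; features are log-variances of
# the spatially filtered trial.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """trials_*: array (n_trials, n_channels, n_samples); returns (2*n_pairs, n_channels)."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    c_a, c_b = mean_cov(trials_a), mean_cov(trials_b)
    evals, evecs = eigh(c_a, c_a + c_b)               # ascending eigenvalues
    # keep the filters with the most extreme eigenvalues (most discriminative)
    idx = np.r_[np.argsort(evals)[:n_pairs], np.argsort(evals)[-n_pairs:]]
    return evecs[:, idx].T

def csp_features(trial, filters):
    """Log of normalized variances of the filtered single trial."""
    z = filters @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())
```

The features can then be fed to any classifier (linear discriminant analysis is the common choice for CSP-based BCIs).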
Reward modulates the effect of visual cortical microstimulation on perceptual decisions
Cicmil, Nela; Cumming, Bruce G; Parker, Andrew J; Krug, Kristine
2015-01-01
Effective perceptual decisions rely upon combining sensory information with knowledge of the rewards available for different choices. However, it is not known where reward signals interact with the multiple stages of the perceptual decision-making pathway and by what mechanisms this may occur. We combined electrical microstimulation of functionally specific groups of neurons in visual area V5/MT with performance-contingent reward manipulation, while monkeys performed a visual discrimination task. Microstimulation was less effective in shifting perceptual choices towards the stimulus preferences of the stimulated neurons when available reward was larger. Psychophysical control experiments showed this result was not explained by a selective change in response strategy on microstimulated trials. A bounded accumulation decision model, applied to analyse behavioural performance, revealed that the interaction of expected reward with microstimulation can be explained if expected reward modulates a sensory representation stage of perceptual decision-making, in addition to the better-known effects at the integration stage. DOI: http://dx.doi.org/10.7554/eLife.07832.001 PMID:26402458
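A toy bounded-accumulation model illustrates the interpretation above: if expected reward scales the sensory (drift) stage, a fixed microstimulation offset shifts choices less when reward is large. All parameter values and names below are invented for illustration, not fitted to the paper's data.

```python
# Toy drift-diffusion sketch: evidence accumulates to +/- bound; reward scales
# the drift (sensory stage) while microstimulation adds a fixed offset.
import numpy as np

def accumulate_choice(drift, stim_offset=0.0, reward_gain=1.0,
                      bound=1.0, noise=1.0, dt=0.01, rng=None):
    """Simulate one trial; returns 1 for the preferred direction of the
    stimulated neurons, else 0."""
    if rng is None:
        rng = np.random.default_rng(0)
    effective_drift = reward_gain * drift + stim_offset
    evidence = 0.0
    while abs(evidence) < bound:
        evidence += effective_drift * dt + noise * np.sqrt(dt) * rng.normal()
    return 1 if evidence > 0 else 0
```

Simulating many trials with and without a large `reward_gain` shows the microstimulation offset biasing choices less under high reward, qualitatively matching the reported behavioral effect.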
Zeleznikow-Johnston, Ariel; Burrows, Emma L; Renoir, Thibault; Hannan, Anthony J
2017-05-01
Environmental enrichment (EE) is any positive modification of the 'standard housing' (SH) conditions in which laboratory animals are typically held, usually involving increased opportunity for cognitive stimulation and physical activity. EE has been reported to enhance baseline performance of wild-type animals on traditional cognitive behavioural tasks. Recently, touchscreen operant testing chambers have emerged as a way of performing rodent cognitive assays, providing greater reproducibility, translatability and automatability. Cognitive tests in touchscreen chambers are performed over numerous trials and thus experimenters have the power to detect subtle enhancements in performance. We used touchscreens to analyse the effects of EE on reversal learning, visual discrimination and hippocampal-dependent spatial pattern separation and working memory. We hypothesized that EE would enhance the performance of mice on cognitive touchscreen tasks. Our hypothesis was partially supported in that EE induced enhancements in cognitive flexibility as observed in visual discrimination and reversal learning improvements. However, no other significant effects of EE on cognitive performance were observed. EE decreased the activity level of mice in the touchscreen chambers, which may influence the enrichment level of the animals. Although we did not see enhancements on all hypothesized parameters, our testing paradigm is capable of detecting EE-induced improved cognitive flexibility in mice, which has implications for both understanding the mechanisms of EE and improving screening of putative cognitive-enhancing therapeutics. Copyright © 2017 Elsevier Ltd. All rights reserved.
Xu, Yang; D'Lauro, Christopher; Pyles, John A.; Kass, Robert E.; Tarr, Michael J.
2013-01-01
Humans are remarkably proficient at categorizing visually similar objects. To better understand the cortical basis of this categorization process, we used magnetoencephalography (MEG) to record neural activity while participants learned, with feedback, to discriminate two highly similar, novel visual categories. We hypothesized that although prefrontal regions would mediate early category learning, this role would diminish with increasing category familiarity and that regions within the ventral visual pathway would come to play a more prominent role in encoding category-relevant information as learning progressed. Early in learning we observed some degree of categorical discriminability and predictability in both prefrontal cortex and the ventral visual pathway. Predictability improved significantly above chance in the ventral visual pathway over the course of learning with the left inferior temporal and fusiform gyri showing the greatest improvement in predictability between 150 and 250 ms (M200) during category learning. In contrast, there was no comparable increase in discriminability in prefrontal cortex with the only significant post-learning effect being a decrease in predictability in the inferior frontal gyrus between 250 and 350 ms (M300). Thus, the ventral visual pathway appears to encode learned visual categories over the long term. At the same time these results add to our understanding of the cortical origins of previously reported signature temporal components associated with perceptual learning. PMID:24146656
Kaptsov, V A; Sosunov, N N; Shishchenko, I I; Viktorov, V S; Tulushev, V N; Deynego, V N; Bukhareva, E A; Murashova, M A; Shishchenko, A A
2014-01-01
An experimental study examined the feasibility of using LED lighting (LED light sources) in rail transport for traffic safety in related professions. Four series of studies, involving 10 volunteers, compared the functional state of the visual analyzer, the general functional state, and mental capacity during simulated operator activity under traditional light sources (incandescent and fluorescent lamps) and new LED light sources (LED lamp, LED panel). The results revealed negative changes: a decrease in the functional stability of color discrimination between green and red cone signals, an increase in response time in a complex visual-motor task, and a significant reduction in the subjects' readiness for emergency action.
ERIC Educational Resources Information Center
Hendrickson, Homer
1988-01-01
Spelling problems arise due to problems with form discrimination and inadequate visualization. A child's sequence of visual development involves learning motor control and coordination, with vision directing and monitoring the movements; learning visual comparison of size, shape, directionality, and solidity; developing visual memory or recall;…
Herrmann, C S; Mecklinger, A
2000-12-01
We examined evoked and induced responses in event-related fields and gamma activity in the magnetoencephalogram (MEG) during a visual classification task. The objective was to investigate the effects of target classification and the different levels of discrimination between certain stimulus features. We performed two experiments, which differed only in the subjects' task while the stimuli were identical. In Experiment 1, subjects responded by a button-press to rare Kanizsa squares (targets) among Kanizsa triangles and non-Kanizsa figures (standards). This task requires the processing of both stimulus features (colinearity and number of inducer disks). In Experiment 2, the four stimuli of Experiment 1 were used as standards and the occurrence of an additional stimulus without any feature overlap with the Kanizsa stimuli (a rare and highly salient red fixation cross) had to be detected. Discrimination of colinearity and number of inducer disks was not necessarily required for task performance. We applied a wavelet-based time-frequency analysis to the data and calculated topographical maps of the 40 Hz activity. The early evoked gamma activity (100-200 ms) in Experiment 1 was higher for targets as compared to standards. In Experiment 2, no significant differences were found in the gamma responses to the Kanizsa figures and non-Kanizsa figures. This pattern of results suggests that early evoked gamma activity in response to visual stimuli is affected by the targetness of a stimulus and the need to discriminate between the features of a stimulus.
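Wavelet-based time-frequency analysis of gamma-band activity, as used above, can be sketched generically with a complex Morlet wavelet. The parameters here (7 cycles, unit-energy normalization) are conventional defaults, not the authors' exact settings.

```python
# Generic sketch of Morlet wavelet time-frequency analysis: convolve a signal
# with a complex Morlet wavelet and take squared magnitude as power over time.
import numpy as np

def morlet_power(signal, fs, freq=40.0, n_cycles=7.0):
    """Estimate power at `freq` Hz over time via complex Morlet convolution."""
    sigma_t = n_cycles / (2 * np.pi * freq)          # temporal width of the wavelet
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt((np.abs(wavelet) ** 2).sum())  # unit energy
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.abs(analytic) ** 2
```

Applying this at 40 Hz across trials, and averaging either the power of single trials (induced) or the power of the averaged evoked response (evoked), distinguishes the two kinds of gamma activity the abstract analyzes.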
Yokoi, Isao; Komatsu, Hidehiko
2010-09-01
Visual grouping of discrete elements is an important function for object recognition. We recently conducted an experiment to study neural correlates of visual grouping. We recorded neuronal activities while monkeys performed a grouping detection task in which they discriminated visual patterns composed of discrete dots arranged in a cross and detected targets in which dots with the same contrast were aligned horizontally or vertically. We found that some neurons in the lateral bank of the intraparietal sulcus exhibit activity related to visual grouping. In the present study, we analyzed how different types of neurons contribute to visual grouping. We classified the recorded neurons as putative pyramidal neurons or putative interneurons, depending on the duration of their action potentials. We found that putative pyramidal neurons exhibited selectivity for the orientation of the target, and this selectivity was enhanced by attention to a particular target orientation. By contrast, putative interneurons responded more strongly to the target stimuli than to the nontargets, regardless of the orientation of the target. These results suggest that different classes of parietal neurons contribute differently to the grouping of discrete elements.
Encoding color information for visual tracking: Algorithms and benchmark.
Liang, Pengpeng; Blasch, Erik; Ling, Haibin
2015-12-01
While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.
Comparative psychophysics of bumblebee and honeybee colour discrimination and object detection.
Dyer, Adrian G; Spaethe, Johannes; Prack, Sabina
2008-07-01
Bumblebee (Bombus terrestris) discrimination of targets with broadband reflectance spectra was tested using simultaneous viewing conditions, enabling an accurate determination of the perceptual limit of colour discrimination excluding confounds from memory coding (experiment 1). The level of colour discrimination in bumblebees, and honeybees (Apis mellifera) (based upon previous observations), exceeds predictions of models considering receptor noise in the honeybee. Bumblebee and honeybee photoreceptors are similar in spectral shape and spacing, but bumblebees exhibit significantly poorer colour discrimination in behavioural tests, suggesting possible differences in spatial or temporal signal processing. Detection of stimuli in a Y-maze was evaluated for bumblebees (experiment 2) and honeybees (experiment 3). Honeybees detected stimuli containing both green-receptor-contrast and colour contrast at a visual angle of approximately 5 degrees, whilst stimuli that contained only colour contrast were only detected at a visual angle of 15 degrees. Bumblebees were able to detect these stimuli at a visual angle of 2.3 degrees and 2.7 degrees, respectively. A comparison of the experiments suggests a tradeoff between colour discrimination and colour detection in these two species, limited by the need to pool colour signals to overcome receptor noise. We discuss the colour processing differences and possible adaptations to specific ecological habitats.
Support for Lateralization of the Whorf Effect beyond the Realm of Color Discrimination
ERIC Educational Resources Information Center
Gilbert, Aubrey L.; Regier, Terry; Kay, Paul; Ivry, Richard B.
2008-01-01
Recent work has shown that Whorf effects of language on color discrimination are stronger in the right visual field than in the left. Here we show that this phenomenon is not limited to color: The perception of animal figures (cats and dogs) was more strongly affected by linguistic categories for stimuli presented to the right visual field than…
ERIC Educational Resources Information Center
Patching, Geoffrey R.; Englund, Mats P.; Hellstrom, Ake
2012-01-01
Despite the importance of both response probability and response time for testing models of choice, there is a dearth of chronometric studies examining systematic asymmetries that occur over time- and space-orders in the method of paired comparisons. In this study, systematic asymmetries in discriminating the magnitude of paired visual stimuli are…
Melanopsin-based brightness discrimination in mice and humans.
Brown, Timothy M; Tsujimura, Sei-Ichi; Allen, Annette E; Wynne, Jonathan; Bedford, Robert; Vickery, Graham; Vugler, Anthony; Lucas, Robert J
2012-06-19
Photoreception in the mammalian retina is not restricted to rods and cones but extends to a small number of intrinsically photoreceptive retinal ganglion cells (ipRGCs), expressing the photopigment melanopsin. ipRGCs are known to support various accessory visual functions including circadian photoentrainment and pupillary reflexes. However, despite anatomical and physiological evidence that they contribute to the thalamocortical visual projection, no aspect of visual discrimination has been shown to rely upon ipRGCs. Based on their currently known roles, we hypothesized that ipRGCs may contribute to distinguishing brightness. This percept is related to an object's luminance, a photometric measure of light intensity relevant for cone photoreceptors. However, the perceived brightness of different sources is not always predicted by their respective luminance. Here, we used parallel behavioral and electrophysiological experiments to first show that melanopsin contributes to brightness discrimination in both retinally degenerate and fully sighted mice. We continued to use comparable paradigms in psychophysical experiments to provide evidence for a similar role in healthy human subjects. These data represent the first direct evidence that an aspect of visual discrimination in normally sighted subjects can be supported by inner retinal photoreceptors. Copyright © 2012 Elsevier Ltd. All rights reserved.
The oblique effect is both allocentric and egocentric
Mikellidou, Kyriaki; Cicchini, Guido Marco; Thompson, Peter G.; Burr, David C.
2016-01-01
Despite continuous movements of the head, humans maintain a stable representation of the visual world, which seems to remain always upright. The mechanisms behind this stability are largely unknown. To gain some insight on how head tilt affects visual perception, we investigate whether a well-known orientation-dependent visual phenomenon, the oblique effect—superior performance for stimuli at cardinal orientations (0° and 90°) compared with oblique orientations (45°)—is anchored in egocentric or allocentric coordinates. To this aim, we measured orientation discrimination thresholds at various orientations for different head positions both in body upright and in supine positions. We report that, in the body upright position, the oblique effect remains anchored in allocentric coordinates irrespective of head position. When lying supine, gravitational effects in the plane orthogonal to gravity are discounted. Under these conditions, the oblique effect was less marked than when upright, and anchored in egocentric coordinates. The results are well explained by a simple “compulsory fusion” model in which the head-based and the gravity-based signals are combined with different weightings (30% and 70%, respectively), even when this leads to reduced sensitivity in orientation discrimination. PMID:26129862
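The compulsory fusion model described above amounts to a weighted average of the head-based and gravity-based orientation signals. A minimal sketch with the 30/70 weighting reported in the abstract follows; the function name and angle conventions are ours.

```python
# Weighted "compulsory fusion" of two orientation cues (degrees, small-angle
# illustration): 30% head-based, 70% gravity-based, as reported in the abstract.
def fused_reference(head_orientation, gravity_orientation,
                    w_head=0.3, w_gravity=0.7):
    return w_head * head_orientation + w_gravity * gravity_orientation
```

For example, with the head tilted 90° and gravity defining upright (0°), the internal reference lands at 27°, between the two cues, even though this misaligns the oblique effect with either frame alone.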
A vertebrate retina with segregated colour and polarization sensitivity.
Novales Flamarique, Iñigo
2017-09-13
Besides colour and intensity, some invertebrates are able to independently detect the polarization of light. Among vertebrates, such separation of visual modalities has only been hypothesized for some species of anchovies whose cone photoreceptors have unusual ultrastructure that varies with retinal location. Here, I tested this hypothesis by performing physiological experiments of colour and polarization discrimination using the northern anchovy, Engraulis mordax. Optic nerve recordings showed that the ventro-temporal (VT), but not the ventro-nasal (VN), retina was polarization sensitive, and this coincided with the exclusive presence of polarization-sensitive photoreceptors in the VT retina. Spectral (colour) sensitivity recordings from the VN retina indicated the contribution of two spectral cone mechanisms to the optic nerve response, whereas only one contributed to the VT retina. This was supported by the presence of only one visual pigment in the VT retina and two in the VN retina, suggesting that only the VN retina was associated with colour sensitivity. Behavioural tests further demonstrated that anchovies could discriminate colour and the polarization of light using the ventral retina. Thus, in analogy with the visual system of some invertebrates, the northern anchovy has a retina with segregated retinal pathways for colour and polarization vision. © 2017 The Author(s).
ERIC Educational Resources Information Center
Benard, Julie; Giurfa, Martin
2004-01-01
We asked whether honeybees, "Apis mellifera," could solve a transitive inference problem. Individual free-flying bees were conditioned with four overlapping premise pairs of five visual patterns in a multiple discrimination task (A+ vs. B-, B+ vs. C-, C+ vs. D-, D+ vs. E-, where + and - indicate sucrose reward or absence of it,…
ERIC Educational Resources Information Center
Roth, Daphne Ari-Even; Kishon-Rabin, Liat; Hildesheimer, Minka; Karni, Avi
2005-01-01
Large gains in performance, evolving hours after practice has terminated, were reported in a number of visual and some motor learning tasks, as well as recently in an auditory nonverbal discrimination task. It was proposed that these gains reflect a latent phase of experience-triggered memory consolidation in human skill learning. It is not clear,…
Nonlinear dimensionality reduction methods for synthetic biology biobricks' visualization.
Yang, Jiaoyun; Wang, Haipeng; Ding, Huitong; An, Ning; Alterovitz, Gil
2017-01-19
Visualizing data by dimensionality reduction is an important strategy in bioinformatics, which can help to discover hidden data properties and detect data quality issues, e.g. noise or inappropriately labeled data. As crowdsourcing-based synthetic biology databases face similar data quality issues, we propose visualizing biobricks to tackle them. However, existing dimensionality reduction methods cannot be directly applied to biobrick datasets. We therefore use normalized edit distance to enhance dimensionality reduction methods, including Isomap and Laplacian Eigenmaps. Using biobricks extracted from the synthetic biology database Registry of Standard Biological Parts, six combinations of various types of biobricks were tested. The visualization graphs distinguish the biobrick types and expose inappropriately labeled biobricks. The clustering algorithm K-means was adopted to quantify the reduction results; the average clustering accuracies for Isomap and Laplacian Eigenmaps were 0.857 and 0.844, respectively. Moreover, Laplacian Eigenmaps is 5 times faster than Isomap, and its visualization graph is more concentrated, better discriminating the biobricks. By combining normalized edit distance with Isomap and Laplacian Eigenmaps, synthetic biology biobricks are successfully visualized in two-dimensional space. Various types of biobricks can be discriminated and inappropriately labeled biobricks identified, which can help to assess the quality of crowdsourcing-based synthetic biology databases and inform biobrick selection.
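The two ingredients named above, normalized edit distance and Laplacian Eigenmaps, can be sketched as follows. This is a simplified reconstruction (Levenshtein distance normalized by the longer length, a full Gaussian affinity, and the unnormalized graph Laplacian), not the authors' implementation.

```python
# Sketch: normalized edit distance between sequences, then a basic Laplacian
# Eigenmaps embedding from the smallest non-trivial Laplacian eigenvectors.
import numpy as np

def normalized_edit_distance(a, b):
    """Levenshtein distance divided by the longer sequence length (in [0, 1])."""
    m, n = len(a), len(b)
    d = np.zeros((m + 1, n + 1))
    d[:, 0] = np.arange(m + 1)
    d[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return d[m, n] / max(m, n, 1)

def laplacian_eigenmaps(dist, dim=2, sigma=0.5):
    """Embed items given a pairwise distance matrix."""
    w = np.exp(-(dist ** 2) / (2 * sigma ** 2))   # Gaussian affinity
    lap = np.diag(w.sum(axis=1)) - w               # unnormalized graph Laplacian
    evals, evecs = np.linalg.eigh(lap)
    return evecs[:, 1:dim + 1]                     # skip the constant eigenvector
```

Running K-means on the embedded coordinates, as the abstract describes, then quantifies how well the biobrick types separate.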
Akiyama, Yoshihiro B; Iseri, Erina; Kataoka, Tomoya; Tanaka, Makiko; Katsukoshi, Kiyonori; Moki, Hirotada; Naito, Ryoji; Hem, Ramrav; Okada, Tomonari
2017-02-15
In the present study, we determined the common morphological characteristics of the feces of Mytilus galloprovincialis to develop a method for visually discriminating the feces of this mussel in deposited materials. This method can be used to assess the effect of mussel feces on benthic environments. The accuracy of visual morphology-based discrimination of mussel feces in deposited materials was confirmed by DNA analysis. Eighty-nine percent of mussel feces shared five common morphological characteristics. Of the 372 animal species investigated, only four species shared all five of these characteristics. More than 96% of the samples visually identified as M. galloprovincialis feces on the basis of particle morphology contained the appropriate mitochondrial DNA. These results suggest that mussel feces can be discriminated with high accuracy on the basis of their morphological characteristics. Thus, our method can be used to quantitatively assess the effect of mussel feces on local benthic environments. Copyright © 2016 Elsevier Ltd. All rights reserved.
Robust visual tracking via multiple discriminative models with object proposals
NASA Astrophysics Data System (ADS)
Zhang, Yuanqiang; Bi, Duyan; Zha, Yufei; Li, Huanyu; Ku, Tao; Wu, Min; Ding, Wenshan; Fan, Zunlin
2018-04-01
Model drift is an important cause of tracking failure. In this paper, multiple discriminative models with object proposals are used to improve model discrimination and relieve this problem. First, changes in target location and scale are captured by a large set of high-quality object proposals, which are represented by deep convolutional features for target semantics. Then, by sharing a feature map obtained from a pre-trained network, ROI pooling is used to warp object proposals of various sizes into vectors of the same length, from which a discriminative model can be learned conveniently. Finally, models with different lifetimes are trained on these historical snapshot vectors. Based on an entropy decision mechanism, a model corrupted by drift can be corrected by selecting the best discriminative model, which significantly improves the robustness of the tracker. We extensively evaluate our tracker on two popular benchmarks, OTB 2013 and UAV20L. On both benchmarks, our tracker achieves the best precision and success rate compared with state-of-the-art trackers.
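The ROI pooling step mentioned above is what maps proposals of different sizes onto equal-length vectors. A single-channel, pure-Python sketch of the idea follows; real trackers pool over multi-channel CNN feature maps, and the integer binning here is a simplifying assumption.

```python
def roi_max_pool(feature_map, roi, out_h, out_w):
    """Max-pool a rectangular region of interest into a fixed out_h x out_w grid.

    feature_map: 2D list of numbers (a single-channel map for simplicity).
    roi: (top, left, bottom, right) in map coordinates, end-exclusive.
    """
    top, left, bottom, right = roi
    h, w = bottom - top, right - left
    pooled = []
    for i in range(out_h):
        row = []
        # integer bin edges that tile the ROI as evenly as possible
        y0 = top + (i * h) // out_h
        y1 = top + ((i + 1) * h) // out_h
        for j in range(out_w):
            x0 = left + (j * w) // out_w
            x1 = left + ((j + 1) * w) // out_w
            row.append(max(feature_map[y][x]
                           for y in range(y0, max(y1, y0 + 1))
                           for x in range(x0, max(x1, x0 + 1))))
        pooled.append(row)
    return pooled

fm = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
pooled = roi_max_pool(fm, (0, 0, 4, 4), 2, 2)   # -> [[6, 8], [14, 16]]
```

A 3x3 ROI over the same map also yields a 2x2 output, which is the point: whatever the proposal size, the downstream classifier always sees the same vector length.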
Tensor discriminant color space for face recognition.
Wang, Su-Jing; Yang, Jian; Zhang, Na; Zhou, Chun-Guang
2011-09-01
Recent research efforts reveal that color may provide useful information for face recognition. The choice of a color space generally differs across visual tasks, so how can a color space be chosen for the specific problem of face recognition? To address this question, this paper represents a color image as a third-order tensor and presents the tensor discriminant color space (TDCS) model, which preserves the underlying spatial structure of color images. With the definition of n-mode between-class and within-class scatter matrices, TDCS constructs an iterative procedure to obtain one color space transformation matrix and two discriminant projection matrices by maximizing the ratio of these two scatter matrices. Experiments are conducted on two color face databases, the AR and Georgia Tech face databases, and the results show that both the performance and the efficiency of the proposed method are better than those of the state-of-the-art color image discriminant model, which involves one color space transformation matrix and one discriminant projection matrix, particularly on a complicated face database with various pose variations.
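The criterion TDCS maximizes, a ratio of between-class to within-class scatter, is easiest to see in one dimension, where it reduces to the classic Fisher ratio. The sketch below is a toy stand-in for the paper's n-mode tensor scatter matrices; names and data are illustrative.

```python
def fisher_ratio(class_a, class_b):
    """Ratio of between-class to within-class scatter for 1-D features.
    TDCS generalizes this criterion to n-mode tensor scatter matrices."""
    mean_a = sum(class_a) / len(class_a)
    mean_b = sum(class_b) / len(class_b)
    grand = (sum(class_a) + sum(class_b)) / (len(class_a) + len(class_b))
    between = (len(class_a) * (mean_a - grand) ** 2
               + len(class_b) * (mean_b - grand) ** 2)
    within = (sum((x - mean_a) ** 2 for x in class_a)
              + sum((x - mean_b) ** 2 for x in class_b))
    return between / within

# Well-separated classes give a large ratio; overlapping classes a small one.
ratio = fisher_ratio([0, 1], [10, 11])   # -> 100.0
```

TDCS applies this same idea mode by mode to a third-order image tensor, alternately updating the color space transformation matrix and the two projection matrices until the ratio converges.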
Afraz, Arash; Boyden, Edward S; DiCarlo, James J
2015-05-26
Neurons that respond more to images of faces over nonface objects were identified in the inferior temporal (IT) cortex of primates three decades ago. Although it is hypothesized that perceptual discrimination between faces depends on the neural activity of IT subregions enriched with "face neurons," such a causal link has not been directly established. Here, using optogenetic and pharmacological methods, we reversibly suppressed the neural activity in small subregions of IT cortex of macaque monkeys performing a facial gender-discrimination task. Each type of intervention independently demonstrated that suppression of IT subregions enriched in face neurons induced a contralateral deficit in face gender-discrimination behavior. The same neural suppression of other IT subregions produced no detectable change in behavior. These results establish a causal link between the neural activity in IT face neuron subregions and face gender-discrimination behavior. Also, the demonstration that brief neural suppression of specific spatial subregions of IT induces behavioral effects opens the door for applying the technical advantages of optogenetics to a systematic attack on the causal relationship between IT cortex and high-level visual perception.
Perceptual grouping enhances visual plasticity.
Mastropasqua, Tommaso; Turatto, Massimo
2013-01-01
Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (an oriented grating), while two similar task-irrelevant stimuli were presented at adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity.
Coupled binary embedding for large-scale image retrieval.
Zheng, Liang; Wang, Shengjin; Tian, Qi
2014-08-01
Visual matching is a crucial step in image retrieval based on the bag-of-words (BoW) model. In the baseline method, two keypoints are considered a matching pair if their SIFT descriptors are quantized to the same visual word. However, the SIFT visual word has two limitations. First, it loses most of its discriminative power during quantization. Second, SIFT describes only the local texture feature. Both drawbacks impair the discriminative power of the BoW model and lead to false positive matches. To tackle this problem, this paper proposes to embed multiple binary features at the indexing level. To model correlation between features, a multi-IDF scheme is introduced, through which different binary features are coupled into the inverted file. We show that matching verification methods based on binary features, such as Hamming embedding, can be effectively incorporated into our framework. As an extension, we explore the fusion of a binary color feature into image retrieval. The joint integration of the SIFT visual word and binary features greatly enhances the precision of visual matching, reducing the impact of false positive matches. Our method is evaluated through extensive experiments on four benchmark datasets (Ukbench, Holidays, DupImage, and MIR Flickr 1M). We show that our method significantly improves the baseline approach. In addition, large-scale experiments indicate that the proposed method requires acceptable memory usage and query time compared with other approaches. Further, when the global color feature is integrated, our method yields performance competitive with the state of the art.
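The matching rule this builds on, two keypoints agree on a visual word and additionally pass a binary-signature check such as Hamming embedding, can be sketched with a toy inverted file. The class name, signature width, and threshold are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary signatures stored as ints."""
    return bin(a ^ b).count("1")

class InvertedFile:
    """Toy inverted file: visual word -> list of (image_id, binary signature)."""
    def __init__(self, ham_threshold: int = 8):
        self.index = defaultdict(list)
        self.ham_threshold = ham_threshold

    def add(self, image_id, word, signature):
        self.index[word].append((image_id, signature))

    def query(self, word, signature):
        """Return image ids whose entry shares the visual word AND passes
        the binary-feature (Hamming embedding) verification."""
        return [img for img, sig in self.index[word]
                if hamming(signature, sig) <= self.ham_threshold]

inv = InvertedFile(ham_threshold=1)
inv.add("imgA", word=42, signature=0b1010)
inv.add("imgB", word=42, signature=0b0101)
matches = inv.query(word=42, signature=0b1011)   # -> ["imgA"]
```

Both indexed keypoints share word 42, but only imgA's signature survives the Hamming check, which is exactly how binary features prune false positive matches that quantization alone would accept.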
Pigeons can discriminate "good" and "bad" paintings by children.
Watanabe, Shigeru
2010-01-01
Humans have the unique ability to create art, but non-human animals may be able to discriminate "good" art from "bad" art. In this study, I investigated whether pigeons could be trained to discriminate between paintings that had been judged by humans as either "bad" or "good". To do this, adult human observers first classified several children's paintings as either "good" (beautiful) or "bad" (ugly). Using operant conditioning procedures, pigeons were then reinforced for pecking at "good" paintings. After the pigeons learned the discrimination task, they were presented with novel pictures of both "good" and "bad" children's paintings to test whether they had successfully learned to discriminate between these two stimulus categories. The results showed that pigeons could discriminate novel "good" and "bad" paintings. Then, to determine which cues the subjects used for the discrimination, I tested the stimuli with the paintings reduced in size or converted to grayscale, and also tested discrimination when the painting stimuli were rendered as mosaics or partially occluded. The pigeons maintained discrimination performance when the paintings were reduced in size. However, performance decreased when stimuli were presented as grayscale images or when a mosaic effect was applied to the original stimuli to disrupt spatial frequency. Thus, the pigeons used both color and pattern cues for their discrimination. Partial occlusion did not disrupt the discriminative behavior, suggesting that the pigeons did not attend to particular parts (upper, lower, left, or right half) of the paintings. These results suggest that pigeons are capable of learning the concept of a stimulus class that humans call "good" pictures. The second experiment showed that pigeons learned to discriminate watercolor paintings from pastel paintings, and the subjects generalized to novel paintings.
Then, as in the first experiment, size reduction, grayscale, mosaic processing, and partial occlusion tests were carried out. The results suggest that the pigeons used both color and pattern cues for the discrimination, show that non-human animals such as pigeons can be trained to discriminate abstract visual stimuli such as pictures, and indicate that they may also have the ability to learn the concept of "beauty" as defined by humans.
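The grayscale and mosaic manipulations used in the cue tests are standard image operations. A minimal sketch follows, assuming RGB pixels stored as tuples and Rec. 601 luminance weights (both assumptions; the paper does not specify its implementation):

```python
def to_grayscale(img):
    """Replace each RGB pixel with its luminance (Rec. 601 weights),
    removing colour cues while preserving pattern."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in img]

def mosaic(gray, block):
    """Average block x block tiles, disrupting spatial frequency content."""
    h, w = len(gray), len(gray[0])
    out = [row[:] for row in gray]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = [gray[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            mean = sum(tile) / len(tile)
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    out[y][x] = mean
    return out
```

Grayscale conversion removes the color cue while leaving pattern intact, whereas the mosaic removes fine pattern while leaving coarse color structure, which is what lets the two cues be teased apart behaviorally.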
Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking
Xue, Ming; Yang, Hua; Zheng, Shibao; Zhou, Yi; Yu, Zhenghua
2014-01-01
To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative, and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are also retrained to adapt to target appearance variation in a timely manner. Qualitative and quantitative evaluations on challenging image sequences, compared with state-of-the-art algorithms, demonstrate that the proposed tracking algorithm achieves more favorable performance. We also illustrate its relay application in visual sensor networks. PMID:24549252
Impact of stimulus uncanniness on speeded response
Takahashi, Kohske; Fukuda, Haruaki; Samejima, Kazuyuki; Watanabe, Katsumi; Ueda, Kazuhiro
2015-01-01
In the uncanny valley phenomenon, the causes of the feeling of uncanniness, as well as its impact on behavioral performance, remain open questions. The present study investigated the behavioral effects of stimulus uncanniness, particularly with respect to speeded response. Pictures of fish were used as visual stimuli. Participants engaged in direction discrimination, spatial cueing, and dot-probe tasks. The results showed that pictures rated as strongly uncanny delayed speeded response in the discrimination of the direction of the fish. In the cueing experiment, where a fish served as a task-irrelevant and unpredictable cue for a peripheral target, we again observed that detection of a target was slowed when the cue was an uncanny fish. Conversely, the dot-probe task suggested that uncanny fish, unlike threatening stimuli, did not capture visual spatial attention. These results suggest that stimulus uncanniness delays responses and, importantly, that this modulation is not mediated by feelings of threat. PMID:26052297
Understanding Deep Representations Learned in Modeling Users Likes.
Guntuku, Sharath Chandra; Zhou, Joey Tianyi; Roy, Sujoy; Lin, Weisi; Tsang, Ivor W
2016-08-01
Automatically understanding and discriminating different users' liking for an image is a challenging problem, because the relationship between image features (even semantic ones extracted by existing tools, e.g., faces and objects) and users' likes is non-linear and influenced by several subtle factors. This paper presents a deep bi-modal knowledge representation of images based on their visual content and associated tags (text). A mapping step between the different levels of visual and textual representations allows for the transfer of semantic knowledge between the two modalities. Feature selection is applied before learning the deep representation to identify the features important for a user to like an image. The proposed representation is shown to be effective in discriminating users based on images they like and in recommending images that a given user likes, outperforming state-of-the-art feature representations by roughly 15-20%. Beyond this test-set performance, an attempt is made to qualitatively understand the representations learned by the deep architecture used to model user likes.
Peters, Megan A K; Lau, Hakwan
2015-01-01
Many believe that humans can ‘perceive unconsciously’ – that for weak stimuli, briefly presented and masked, above-chance discrimination is possible without awareness. Interestingly, an online survey reveals that most experts in the field recognize the lack of convincing evidence for this phenomenon, and yet they persist in this belief. Using a recently developed bias-free experimental procedure for measuring subjective introspection (confidence), we found no evidence for unconscious perception; participants’ behavior matched that of a Bayesian ideal observer, even though the stimuli were visually masked. This surprising finding suggests that the thresholds for subjective awareness and objective discrimination are effectively the same: if objective task performance is above chance, there is likely conscious experience. These findings shed new light on decades-old methodological issues regarding what it takes to consider a neurobiological or behavioral effect to be 'unconscious,' and provide a platform for rigorously investigating unconscious perception in future studies. DOI: http://dx.doi.org/10.7554/eLife.09651.001 PMID:26433023
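The Bayesian ideal observer the participants were compared against can be made concrete in its textbook form: evidence drawn from equal-variance Gaussians with means ±μ, a decision given by the sign of the evidence, and confidence equal to the posterior probability of the chosen category. This is a generic sketch of that standard model, not the authors' exact implementation.

```python
import math

def ideal_confidence(x: float, mu: float = 1.0, sigma: float = 1.0) -> float:
    """Ideal-observer confidence for evidence x under two equal-variance
    Gaussian hypotheses with means +mu and -mu and equal priors.
    Bayes' rule reduces the posterior p(+|x) to a logistic in x;
    confidence is the posterior of whichever category was chosen."""
    p_pos = 1.0 / (1.0 + math.exp(-2.0 * mu * x / sigma ** 2))
    return max(p_pos, 1.0 - p_pos)
```

On this account, whenever the evidence supports above-chance objective discrimination, the ideal observer's confidence also rises above 0.5, matching the study's finding that the thresholds for objective discrimination and subjective awareness coincide.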
Neural activity in cortical area V4 underlies fine disparity discrimination.
Shiozaki, Hiroshi M; Tanabe, Seiji; Doi, Takahiro; Fujita, Ichiro
2012-03-14
Primates are capable of discriminating depth with remarkable precision using binocular disparity. Neurons in area V4 are selective for relative disparity, which is the crucial visual cue for discrimination of fine disparity. Here, we investigated the contribution of V4 neurons to fine disparity discrimination. Monkeys discriminated whether the center disk of a dynamic random-dot stereogram was in front of or behind its surrounding annulus. We first behaviorally tested the reference frame of the disparity representation used for performing this task. After learning the task with a set of surround disparities, the monkey generalized its responses to untrained surround disparities, indicating that the perceptual decisions were generated from a disparity representation in a relative frame of reference. We then recorded single-unit responses from V4 while the monkeys performed the task. On average, neuronal thresholds were higher than the behavioral thresholds. The most sensitive neurons reached thresholds as low as the psychophysical thresholds. For subthreshold disparities, the monkeys made frequent errors. The variable decisions were predictable from the fluctuation in the neuronal responses. The predictions were based on a decision model in which each V4 neuron transmits the evidence for the disparity it prefers. We finally altered the disparity representation artificially by means of microstimulation to V4. The decisions were systematically biased when microstimulation boosted the V4 responses. The bias was toward the direction predicted from the decision model. We suggest that disparity signals carried by V4 neurons underlie precise discrimination of fine stereoscopic depth.
NEONATAL VISUAL INFORMATION PROCESSING IN COCAINE-EXPOSED AND NON-EXPOSED INFANTS
Singer, Lynn T.; Arendt, Robert; Fagan, Joseph; Minnes, Sonia; Salvator, Ann; Bolek, Tina; Becker, Michael
2014-01-01
This study investigated early neonatal visual preferences in 267 polydrug-exposed neonates (131 cocaine-exposed and 136 non-cocaine-exposed) whose drug exposure was documented through interviews and urine and meconium drug screens. Infants were given four visual recognition memory tasks comparing looking time to familiarized stimuli of lattices and rectangular shapes versus novel stimuli of a schematic face and curved hourglass and bull's-eye forms. Cocaine-exposed infants performed more poorly after consideration of confounding factors, with greater severity of cocaine exposure related to lower novelty scores for both self-report and biologic measures of exposure. The findings support theories that link prenatal cocaine exposure to deficits in information processing entailing attentional and arousal organizational systems. Neonatal visual discrimination and attention tasks should be further explored as potentially sensitive behavioral indicators of teratologic effects. PMID:25717215
Smell or vision? The use of different sensory modalities in predator discrimination.
Fischer, Stefan; Oberhummer, Evelyne; Cunha-Saraiva, Filipa; Gerber, Nina; Taborsky, Barbara
2017-01-01
Theory predicts that animals should adjust their escape responses to the perceived predation risk. The information animals obtain about potential predation risk may differ qualitatively depending on the sensory modality by which a cue is perceived. For instance, olfactory cues may reveal better information about the presence or absence of threats, whereas visual information can reliably transmit the position and potential attack distance of a predator. While this suggests a differential use of information perceived through the two sensory channels, the relative importance of visual vs. olfactory cues when distinguishing between different predation threats is still poorly understood. Therefore, we exposed individuals of the cooperatively breeding cichlid Neolamprologus pulcher to a standardized threat stimulus combined with either predator or non-predator cues presented either visually or chemically. We predicted that flight responses towards a threat stimulus are more pronounced if cues of dangerous rather than harmless heterospecifics are presented, and that N. pulcher, being an aquatic species, relies more on olfaction when discriminating between dangerous and harmless heterospecifics. N. pulcher responded faster to the threat stimulus, reached a refuge faster, and was more likely to enter a refuge when predator cues were perceived. Unexpectedly, the sensory modality used to perceive the cues affected neither the escape response nor the duration of the recovery phase. This suggests that N. pulcher can discriminate heterospecific cues with similar acuity using vision or olfaction. We discuss how this ability may be advantageous in aquatic environments where visibility conditions vary strongly over time. The ability to rapidly discriminate between dangerous predators and harmless heterospecifics is crucial for the survival of prey animals.
In seasonally fluctuating environments, sensory conditions may change over the year, making the use of multiple sensory modalities for heterospecific discrimination highly beneficial. Here we compared the efficacy of the visual and olfactory senses in the discrimination ability of the cooperatively breeding cichlid Neolamprologus pulcher. We presented individual fish with visual or olfactory cues of predators or harmless heterospecifics and recorded their flight response. When exposed to predator cues, individuals responded faster, reached a refuge faster, and were more likely to enter the refuge. Unexpectedly, the olfactory and visual senses seemed to be equally efficient in this discrimination task, suggesting that the seasonal variation in water conditions experienced by N. pulcher may necessitate the use of multiple sensory channels for the same task.
A neurocomputational model of figure-ground discrimination and target tracking.
Sun, H; Liu, L; Guo, A
1999-01-01
A neurocomputational model is presented for figure-ground discrimination and target tracking. The model comprises elementary motion detectors of the correlation type, computational modules for saccadic and smooth-pursuit eye movements, an oscillatory neural-network motion perception module, and a selective attention module. It is shown that through oscillatory amplitude and frequency encoding, together with selective synchronization of phase oscillators, the figure and the ground can be successfully discriminated from each other. The receptive fields developed by hidden units of the networks were surprisingly similar to the actual receptive fields and columnar organization found in the primate visual cortex. It is suggested that equivalent mechanisms may exist in the primate visual cortex to discriminate figure from ground in both the temporal and spatial domains.
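The correlation-type elementary motion detectors in such models are classically Reichardt detectors: each arm multiplies a delayed signal from one point in the image with the current signal from a neighbouring point, and the difference between the two mirror-symmetric arms signs the direction of motion. A minimal sketch, with an illustrative toy stimulus:

```python
def reichardt_output(left, right, delay=1):
    """Correlation-type elementary motion detector (Reichardt model).

    left, right: equal-length luminance time series at two nearby points.
    Returns a summed response that is positive for left-to-right motion
    and negative for right-to-left motion.
    """
    out = []
    for t in range(delay, len(left)):
        arm1 = left[t - delay] * right[t]   # prefers left -> right motion
        arm2 = right[t - delay] * left[t]   # prefers right -> left motion
        out.append(arm1 - arm2)
    return sum(out)

# A pulse moving left-to-right gives a positive response,
# the reverse motion a negative one.
rightward = reichardt_output([1, 0, 0, 0], [0, 1, 0, 0])   # > 0
leftward = reichardt_output([0, 1, 0, 0], [1, 0, 0, 0])    # < 0
```

The opponent subtraction of the two arms is what makes the detector direction-selective while cancelling responses to static or flickering input.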
Discriminating between first- and second-order cognition in first-episode paranoid schizophrenia.
Bliksted, Vibeke; Samuelsen, Erla; Sandberg, Kristian; Bibby, Bo Martin; Overgaard, Morten Storm
2017-03-01
An impairment in visually perceiving backward-masked stimuli is commonly observed in patients with schizophrenia, yet it is unclear whether this impairment results from a deficiency in first-order or higher-order processing, and for which subtypes of schizophrenia it is present. Here, we compare identification (first-order) and metacognitive (higher-order) performance in a visual masking paradigm in a highly homogeneous group of young first-episode patients diagnosed with paranoid schizophrenia (N = 11) with that of carefully matched healthy controls (N = 13). We find no difference between groups in first-order performance, but a difference in metacognitive performance, particularly for stimuli with relatively high visibility. These results indicate that the masking deficit is present in first-episode patients with paranoid schizophrenia, but that it is primarily an impairment of metacognition.
Factors influencing self-reported vision-related activity limitation in the visually impaired.
Tabrett, Daryl R; Latham, Keziah
2011-07-15
The use of patient-reported outcome (PRO) measures to assess self-reported difficulty in visual activities is common in patients with impaired vision. This study determines the visual and psychosocial factors influencing patients' responses to self-report measures, to aid in understanding what is being measured. One hundred visually impaired participants completed the Activity Inventory (AI), which assesses self-reported, vision-related activity limitation (VRAL) in the task domains of reading, mobility, visual information, and visual motor tasks. Participants also completed clinical tests of visual function (distance visual acuity and near reading performance both with and without low vision aids [LVAs], contrast sensitivity, visual fields, and depth discrimination), and questionnaires assessing depressive symptoms, social support, adjustment to visual loss, and personality. Multiple regression analyses identified that an acuity measure (distance or near), and, to a lesser extent, near reading performance without LVAs, visual fields, and contrast sensitivity best explained self-reported VRAL (28%-50% variance explained). Significant psychosocial correlates were depression and adjustment, explaining an additional 6% to 19% unique variance. Dependent on task domain, the parameters assessed explained 59% to 71% of the variance in self-reported VRAL. Visual function, most notably acuity without LVAs, is the best predictor of self-reported VRAL assessed by the AI. Depression and adjustment to visual loss also significantly influence self-reported VRAL, largely independent of the severity of visual loss and most notably in the less vision-specific tasks. The results suggest that rehabilitation strategies addressing depression and adjustment could improve perceived visual disability.
ERIC Educational Resources Information Center
Lewkowicz, David J.
2003-01-01
Three experiments examined 4- to 10-month-olds' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Results established that infants of all ages could discriminate between two different audio-visual rhythmic events. Only 10-month-olds detected a desynchronization of the auditory and visual…
Pearce, Bradley; Crichton, Stuart; Mackiewicz, Michal; Finlayson, Graham D; Hurlbert, Anya
2014-01-01
The phenomenon of colour constancy in human visual perception keeps surface colours constant, despite changes in their reflected light due to changing illumination. Although colour constancy has evolved under a constrained subset of illuminations, it is unknown whether its underlying mechanisms, thought to involve multiple components from retina to cortex, are optimised for particular environmental variations. Here we demonstrate a new method for investigating colour constancy using illumination matching in real scenes which, unlike previous methods using surface matching and simulated scenes, allows testing of multiple, real illuminations. We use real scenes consisting of solid familiar or unfamiliar objects against uniform or variegated backgrounds and compare discrimination performance for typical illuminations from the daylight chromaticity locus (approximately blue-yellow) and atypical spectra from an orthogonal locus (approximately red-green, at correlated colour temperature 6700 K), all produced in real time by a 10-channel LED illuminator. We find that discrimination of illumination changes is poorer along the daylight locus than the atypical locus, and is poorest particularly for bluer illumination changes, demonstrating conversely that surface colour constancy is best for blue daylight illuminations. Illumination discrimination is also enhanced, and therefore colour constancy diminished, for uniform backgrounds, irrespective of the object type. These results are not explained by statistical properties of the scene signal changes at the retinal level. We conclude that high-level mechanisms of colour constancy are biased for the blue daylight illuminations and variegated backgrounds to which the human visual system has typically been exposed.
Spatial frequency discrimination learning in normal and developmentally impaired human vision
Astle, Andrew T.; Webb, Ben S.; McGraw, Paul V.
2010-01-01
Perceptual learning effects demonstrate that the adult visual system retains neural plasticity. If perceptual learning holds any value as a treatment tool for amblyopia, trained improvements in performance must generalise. Here we investigate whether spatial frequency discrimination learning generalises within task to other spatial frequencies, and across task to contrast sensitivity. Before and after training, we measured contrast sensitivity and spatial frequency discrimination (at reference frequencies of 1, 2, 4, 8, and 16 c/deg). During training, normal and amblyopic observers were divided into three groups, each of which trained on a spatial frequency discrimination task at one reference frequency (2, 4, or 8 c/deg). Normal and amblyopic observers who trained at lower frequencies showed a greater rate of within-task learning (at their reference frequency) than those trained at higher frequencies. Compared to normals, amblyopic observers showed greater within-task learning at the trained reference frequency. Normal and amblyopic observers showed asymmetrical transfer of learning from high to low spatial frequencies. Both normal and amblyopic subjects showed transfer to contrast sensitivity. The direction of transfer for contrast sensitivity measurements was from the trained spatial frequency to higher frequencies, with the bandwidth and magnitude of transfer greater in the amblyopic observers than in normals. The findings provide further support for the therapeutic efficacy of this approach and establish general principles that may help develop more effective protocols for the treatment of developmental visual deficits. PMID:20832416
“Global” visual training and extent of transfer in amblyopic macaque monkeys
Kiorpes, Lynne; Mangal, Paul
2015-01-01
Perceptual learning is gaining acceptance as a potential treatment for amblyopia in adults and children beyond the critical period. Many perceptual learning paradigms result in very specific improvement that does not generalize beyond the training stimulus, closely related stimuli, or visual field location. To be of use in amblyopia, a less specific effect is needed. To address this problem, we designed a more general training paradigm intended to effect improvement in visual sensitivity across tasks and domains. We used a “global” visual stimulus, random dot motion direction discrimination with 6 training conditions, and tested for posttraining improvement on a motion detection task and 3 spatial domain tasks (contrast sensitivity, Vernier acuity, Glass pattern detection). Four amblyopic macaques practiced the motion discrimination with their amblyopic eye for at least 20,000 trials. All showed improvement, defined as a change of at least a factor of 2, on the trained task. In addition, all animals showed improvements in sensitivity on at least some of the transfer test conditions, mainly the motion detection task; transfer to the spatial domain was inconsistent but best at fine spatial scales. However, the improvement on the transfer tasks was largely not retained at long-term follow-up. Our generalized training approach is promising for amblyopia treatment, but sustaining improved performance may require additional intervention. PMID:26505868
Dissociation of visual associative and motor learning in Drosophila at the flight simulator.
Wang, Shunpeng; Li, Yan; Feng, Chunhua; Guo, Aike
2003-08-29
Ever since operant conditioning was first studied experimentally, the relationship between associative learning and possible motor learning has been controversial. Although motor learning and its underlying neural substrates have been extensively studied in mammals, it remains poorly understood in invertebrates. The visual discriminative avoidance paradigm of Drosophila at the flight simulator has been widely used to study the flies' visual associative learning and related functions, but it has not been used to study the motor learning process. In this study, a newly designed data analysis was employed to examine the sole behavioural variable recorded at the flight simulator: yaw torque. The analysis explored the torque distributions of both wild-type and mutant flies during conditioning, with the following results: (1) Wild-type Canton-S flies showed motor learning during conditioning, as demonstrated by modifications of the animals' behavioural mode. (2) Repetition of training improved the motor learning performance of wild-type Canton-S flies. (3) Although mutant dunce(1) flies were defective in visual associative learning, they showed essentially normal motor learning in terms of yaw torque distribution during conditioning. Finally, we tentatively proposed that both visual associative learning and motor learning were involved in the visual operant conditioning of Drosophila at the flight simulator, that the two forms of learning could be dissociated, and that they might have different neural bases.
Moehler, Tobias; Fiehler, Katja
2015-11-01
Saccade curvature represents a sensitive measure of oculomotor inhibition with saccades curving away from covertly attended locations. Here we investigated whether and how saccade curvature depends on movement preparation time when a perceptual task is performed during or before saccade preparation. Participants performed a dual-task including a visual discrimination task at a cued location and a saccade task to the same location (congruent) or to a different location (incongruent). Additionally, we varied saccade preparation time (time between saccade cue and Go-signal) and the occurrence of the discrimination task (during saccade preparation=simultaneous vs. before saccade preparation=sequential). We found deteriorated perceptual performance in incongruent trials during simultaneous task performance while perceptual performance was unaffected during sequential task performance. Saccade accuracy and precision were deteriorated in incongruent trials during simultaneous and, to a lesser extent, also during sequential task performance. Saccades consistently curved away from covertly attended non-saccade locations. Saccade curvature was unaffected by movement preparation time during simultaneous task performance but decreased and finally vanished with increasing movement preparation time during sequential task performance. Our results indicate that the competing saccade plan to the covertly attended non-saccade location is maintained during simultaneous task performance until the perceptual task is solved while in the sequential condition, in which the discrimination task is solved prior to the saccade task, oculomotor inhibition decays gradually with movement preparation time. Copyright © 2015 Elsevier Ltd. All rights reserved.
Metabolic Pathways Visualization Skills Development by Undergraduate Students
ERIC Educational Resources Information Center
dos Santos, Vanessa J. S. V.; Galembeck, Eduardo
2015-01-01
We have developed a metabolic pathways visualization skill test (MPVST) to gain greater insight into our students' abilities to comprehend the visual information presented in metabolic pathways diagrams. The test is able to discriminate students' visualization ability with respect to six specific visualization skills that we identified as key to…
Representation in dynamical agents.
Ward, Ronnie; Ward, Robert
2009-04-01
This paper extends experiments by Beer [Beer, R. D. (1996). Toward the evolution of dynamical neural networks for minimally cognitive behavior. In P. Maes, M. Mataric, J. Meyer, J. Pollack, & S. Wilson (Eds.), From animals to animats 4: Proceedings of the fourth international conference on simulation of adaptive behavior (pp. 421-429). MIT Press; Beer, R. D. (2003). The dynamics of active categorical perception in an evolved model agent (with commentary and response). Adaptive Behavior, 11 (4), 209-243] with an evolved, dynamical agent to further explore the question of representation in cognitive systems. Beer's environmentally-situated visual agent was controlled by a continuous-time recurrent neural network, and evolved to perform a categorical perception task, discriminating circles from diamonds. Despite the agent's high levels of discrimination performance, Beer found no evidence of internal representation in the best-evolved agent's nervous system. Here we examine the generality of this result. We evolved an agent for shape discrimination, and performed extensive behavioral analyses to test for representation. In this case we find that agents developed to discriminate equal-width shapes exhibit what Clark [Clark, A. (1997). The dynamical challenge. Cognitive Science, 21 (4), 461-481] calls "weak-substantive representation". The agent had internal configurations that (1) were understandably related to the object in the environment, and (2) were functionally used in a task relevant way when the target was not visible to the agent.
Object detection in natural backgrounds predicted by discrimination performance and models
NASA Technical Reports Server (NTRS)
Rohaly, A. M.; Ahumada, A. J. Jr; Watson, A. B.
1997-01-01
Many models of visual performance predict image discriminability, the visibility of the difference between a pair of images. We compared the ability of three image discrimination models to predict the detectability of objects embedded in natural backgrounds. The three models were: a multiple channel Cortex transform model with within-channel masking; a single channel contrast sensitivity filter model; and a digital image difference metric. Each model used a Minkowski distance metric (generalized vector magnitude) to summate absolute differences between the background and object plus background images. For each model, this summation was implemented with three different exponents: 2, 4 and infinity. In addition, each combination of model and summation exponent was implemented with and without a simple contrast gain factor. The model outputs were compared to measures of object detectability obtained from 19 observers. Among the models without the contrast gain factor, the multiple channel model with a summation exponent of 4 performed best, predicting the pattern of observer d's with an RMS error of 2.3 dB. The contrast gain factor improved the predictions of all three models for all three exponents. With the factor, the best exponent was 4 for all three models, and their prediction errors were near 1 dB. These results demonstrate that image discrimination models can predict the relative detectability of objects in natural scenes.
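The Minkowski summation (generalized vector magnitude) shared by all three models can be sketched as follows. This is a minimal toy illustration with invented numbers, not the authors' implementation; the function name and example arrays are my own:

```python
import numpy as np

def minkowski_pool(diff, beta):
    """Pool absolute channel differences into one detectability
    score via Minkowski summation with exponent beta.
    beta = np.inf reduces to the max (peak-distortion) rule."""
    d = np.abs(np.asarray(diff, dtype=float)).ravel()
    if np.isinf(beta):
        return d.max()
    return float((d ** beta).sum() ** (1.0 / beta))

# Toy example: pointwise differences between a background image
# and the same background with an object added.
background = np.array([[0.2, 0.2], [0.2, 0.2]])
with_object = np.array([[0.2, 0.5], [0.4, 0.2]])
diff = with_object - background

# beta=2 -> 0.361, beta=4 -> 0.314, beta=inf -> 0.3
# (larger exponents approach the max rule)
for beta in (2, 4, np.inf):
    print(beta, round(minkowski_pool(diff, beta), 3))
```

The exponent controls how strongly the largest local difference dominates the pooled score, which is why the abstract reports a best-fitting exponent (4) between energy summation (2) and peak detection (infinity).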
Comparison of Automated and Human Instruction for Developmentally Retarded Preschool Children.
ERIC Educational Resources Information Center
Richmond, Glenn
1983-01-01
Twenty developmentally retarded preschool children were trained on two visual discriminations with automated instruction and two discriminations with human instruction. Results showed human instruction significantly better than automated instruction. Nine Ss reached criterion for both discriminations with automated instruction, therefore showing…
A horse's eye view: size and shape discrimination compared with other mammals.
Tomonaga, Masaki; Kumazaki, Kiyonori; Camus, Florine; Nicod, Sophie; Pereira, Carlos; Matsuzawa, Tetsuro
2015-11-01
Mammals have adapted to a variety of natural environments from underwater to aerial and these different adaptations have affected their specific perceptive and cognitive abilities. This study used a computer-controlled touchscreen system to examine the visual discrimination abilities of horses, particularly regarding size and shape, and compared the results with those from chimpanzee, human and dolphin studies. Horses were able to discriminate a difference of 14% in circle size but showed worse discrimination thresholds than chimpanzees and humans; these differences cannot be explained by visual acuity. Furthermore, the present findings indicate that all species use length cues rather than area cues to discriminate size. In terms of shape discrimination, horses exhibited perceptual similarities among shapes with curvatures, vertical/horizontal lines and diagonal lines, and the relative contributions of each feature to perceptual similarity in horses differed from those for chimpanzees, humans and dolphins. Horses pay more attention to local components than to global shapes. © 2015 The Author(s).
Sensitivity of the lane change test as a measure of in-vehicle system demand.
Young, Kristie L; Lenné, Michael G; Williamson, Amy R
2011-05-01
The Lane Change Test (LCT) is one of the growing number of methods developed to quantify driving performance degradation brought about by the use of in-vehicle devices. Beyond its validity and reliability, for such a test to be of practical use, it must also be sensitive to the varied demands of individual tasks. The current study evaluated the ability of several recent LCT lateral control and event detection parameters to discriminate between visual-manual and cognitive surrogate In-Vehicle Information System tasks with different levels of demand. Twenty-seven participants (mean age 24.4 years) completed a PC version of the LCT while performing visual search and math problem solving tasks. A number of the lateral control metrics were found to be sensitive to task differences, but the event detection metrics were less able to discriminate between tasks. The mean deviation and lane excursion measures were able to distinguish between the visual and cognitive tasks, but were less sensitive to the different levels of task demand. The other LCT metrics examined were less sensitive to task differences. A major factor influencing the sensitivity of at least some of the LCT metrics could be the type of lane change instructions given to participants. The provision of clear and explicit lane change instructions and further refinement of its metrics will be essential for increasing the utility of the LCT as an evaluation tool. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Color and luminance increment thresholds in poor readers.
Dain, Stephen J; Floyd, Richard A; Elliot, Robert T
2008-01-01
Hypotheses of a visual basis for reading disabilities in some children have centered on deficits in the visual processes showing more transient responses to stimuli, although hyperactivity in the processes showing sustained responses has also been proposed as a mechanism. In addition, there is clear evidence that colored lenses, colored overlays, or colored backgrounds can influence reading performance or help provide comfortable vision for reading and, as a consequence, the ability to sustain reading for longer. It is therefore surprising that the color vision of poor readers has been relatively little studied. We assessed luminance increment thresholds and equiluminant red-green and blue-yellow increment thresholds with a computer-based test, in central vision and at 10 degrees nasally, employing the paradigm pioneered by King-Smith. We examined 35 poor readers (based on the Neale Analysis of Reading) and compared their performance with that of 35 normal readers matched for age and IQ. Poor readers produced luminance contrast thresholds similar to those of normal readers for both foveal and peripheral presentation. Similarly, chromatic contrast discrimination for the red/green stimuli was the same in normal and poor readers. However, poor readers had significantly lower thresholds (higher sensitivity) for the blue/yellow stimuli, for both foveal and peripheral presentation, compared with normal readers. This hypersensitivity in blue-yellow discrimination may point to why colored lenses and overlays are often found to be effective in assisting many poor readers.
Effects of walker gender and observer gender on biological motion walking direction discrimination.
Yang, Xiaoying; Cai, Peng; Jiang, Yi
2014-09-01
The ability to recognize the movements of other biological entities, such as whether a person is walking toward you, is essential for survival and social interaction. Previous studies have shown that the visual system is particularly sensitive to approaching biological motion. In this study, we examined whether the gender of walkers and observers influenced the walking direction discrimination of approaching point-light walkers in fine granularity. The observers were presented a walker who walked in different directions and were asked to quickly judge the walking direction (left or right). The results showed that the observers demonstrated worse direction discrimination when the walker was depicted as male than when the walker was depicted as female, probably because the observers tended to perceive the male walkers as walking straight ahead. Intriguingly, male observers performed better than female observers at judging the walking directions of female walkers but not those of male walkers, a result indicating perceptual advantage with evolutionary significance. These findings provide strong evidence that the gender of walkers and observers modulates biological motion perception and that an adaptive perceptual mechanism exists in the visual system to facilitate the survival of social organisms. © 2014 The Institute of Psychology, Chinese Academy of Sciences and Wiley Publishing Asia Pty Ltd.
Perceptual grouping determines haptic contextual modulation.
Overvliet, K E; Sayim, B
2016-09-01
Since the early phenomenological demonstrations of Gestalt principles, one of the major challenges of Gestalt psychology has been to quantify these principles. Here, we show that contextual modulation, i.e. the influence of context on target perception, can be used as a tool to quantify perceptual grouping in the haptic domain, similar to the visual domain. We investigated the influence of target-flanker grouping on performance in haptic vernier offset discrimination. We hypothesized that when, despite the apparent differences between vision and haptics, similar grouping principles are operational, a similar pattern of flanker interference would be observed in the haptic as in the visual domain. Participants discriminated the offset of a haptic vernier. The vernier was flanked by different flanker configurations: no flankers, single flanking lines, 10 flanking lines, rectangles and single perpendicular lines, varying the degree to which the vernier grouped with the flankers. Additionally, we used two different flanker widths (same width as and narrower than the target), again to vary target-flanker grouping. Our results show a clear effect of flankers: performance was much better when the vernier was presented alone compared to when it was presented with flankers. In the majority of flanker configurations, grouping between the target and the flankers determined the strength of interference, similar to the visual domain. However, in the same width rectangular flanker condition we found aberrant results. We discuss the results of our study in light of similarities and differences between vision and haptics and the interaction between different grouping principles. We conclude that in haptics, similar organization principles apply as in visual perception and argue that grouping and Gestalt are key organization principles not only of vision, but of the perceptual system in general. Copyright © 2015 Elsevier Ltd. All rights reserved.
The Effect of Acute Sleep Deprivation on Visual Evoked Potentials in Professional Drivers
Jackson, Melinda L.; Croft, Rodney J.; Owens, Katherine; Pierce, Robert J.; Kennedy, Gerard A.; Crewther, David; Howard, Mark E.
2008-01-01
Study Objectives: Previous studies have demonstrated that as little as 18 hours of sleep deprivation can cause deleterious effects on performance. It has also been suggested that sleep deprivation can cause a “tunnel-vision” effect, in which attention is restricted to the center of the visual field. The current study aimed to replicate these behavioral effects and to examine the electrophysiological underpinnings of these changes. Design: Repeated-measures experimental study. Setting: University laboratory. Patients or Participants: Nineteen professional drivers (1 woman; mean age = 45.3 ± 9.1 years). Interventions: Two experimental sessions were performed; one following 27 hours of sleep deprivation and the other following a normal night of sleep, with control for circadian effects. Measurements & Results: A tunnel-vision task (central versus peripheral visual discrimination) and a standard checkerboard-viewing task were performed while 32-channel EEG was recorded. For the tunnel-vision task, sleep deprivation resulted in an overall slowing of reaction times and increased errors of omission for both peripheral and foveal stimuli (P < 0.05). These changes were related to reduced P300 amplitude (indexing cognitive processing) but not measures of early visual processing. No evidence was found for an interaction effect between sleep deprivation and visual-field position, either in terms of behavior or electrophysiological responses. Slower processing of the sustained parvocellular visual pathway was demonstrated. Conclusions: These findings suggest that performance deficits on visual tasks during sleep deprivation are due to higher cognitive processes rather than early visual processing. Sleep deprivation may differentially impair processing of more-detailed visual information. Features of the study design (eg, visual angle, duration of sleep deprivation) may influence whether peripheral visual-field neglect occurs. 
Citation: Jackson ML; Croft RJ; Owens K; Pierce RJ; Kennedy GA; Crewther D; Howard ME. The effect of acute sleep deprivation on visual evoked potentials in professional drivers. SLEEP 2008;31(9):1261-1269. PMID:18788651
Spatial Frequency Discrimination: Effects of Age, Reward, and Practice
van den Boomen, Carlijn; Peters, Judith Carolien
2017-01-01
Social interaction starts with perception of the world around you. This study investigated two fundamental issues regarding the development of discrimination of higher spatial frequencies, which are important building blocks of perception. Firstly, it mapped the typical developmental trajectory of higher spatial frequency discrimination. Secondly, it developed and validated a novel design that could be applied to improve atypically developed vision. Specifically, this study examined the effect of age and reward on task performance, practice effects, and motivation (i.e., number of trials completed) in a higher spatial frequency (reference frequency: 6 cycles per degree) discrimination task. We measured discrimination thresholds in children aged between 7 and 12 years and in adults (N = 135). Reward was manipulated by presenting either positive reinforcement or punishment. Results showed a decrease in discrimination thresholds with age, thus revealing that higher spatial frequency discrimination continues to develop after 12 years of age. This development continues longer than previously shown for discrimination of lower spatial frequencies. Moreover, thresholds decreased during the run, indicating that discrimination abilities improved. Reward did not affect performance or improvement. However, in an additional group of 5–6 year-olds (N = 28), punishments resulted in the completion of fewer trials compared to reinforcements. In both reward conditions, children aged 5–6 years completed only a fourth or half of the run (64 to 128 out of 254 trials) and were not motivated to continue. The design thus needs further adaptation before it can be applied to this age group. Children aged 7–12 years and adults completed the run, suggesting that the design is successful and motivating for children aged 7–12 years. This study thus presents developmental differences in higher spatial frequency discrimination thresholds.
Furthermore, it presents a design that can be used in future developmental studies that require multiple stimulus presentations such as visual perceptual learning. PMID:28135272
The time course of shape discrimination in the human brain.
Ales, Justin M; Appelbaum, L Gregory; Cottereau, Benoit R; Norcia, Anthony M
2013-02-15
The lateral occipital cortex (LOC) activates selectively to images of intact objects versus scrambled controls, is selective for the figure-ground relationship of a scene, and exhibits at least some degree of invariance for size and position. Because of these attributes, it is considered to be a crucial part of the object recognition pathway. Here we show that human LOC is critically involved in perceptual decisions about object shape. High-density EEG was recorded while subjects performed a threshold-level shape discrimination task on texture-defined figures segmented by either phase or orientation cues. The appearance or disappearance of a figure region from a uniform background generated robust visual evoked potentials throughout retinotopic cortex as determined by inverse modeling of the scalp voltage distribution. Contrasting responses from trials containing shape changes that were correctly detected (hits) with trials in which no change occurred (correct rejects) revealed stimulus-locked, target-selective activity in the occipital visual areas LOC and V4 preceding the subject's response. Activity that was locked to the subjects' reaction time was present in the LOC. Response-locked activity in the LOC was determined to be related to shape discrimination for several reasons: shape-selective responses were silenced when subjects viewed identical stimuli but their attention was directed away from the shapes to a demanding letter discrimination task; shape-selectivity was present across four different stimulus configurations used to define the figure; LOC responses correlated with participants' reaction times. These results indicate that decision-related activity is present in the LOC when subjects are engaged in threshold-level shape discriminations. Copyright © 2012 Elsevier Inc. All rights reserved.
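The hits-versus-correct-rejects contrast above follows standard signal detection theory, where sensitivity is summarized by d' = z(hit rate) − z(false-alarm rate). A minimal sketch, with made-up trial counts rather than data from the study:

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Signal detection sensitivity index:
    d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse standard normal CDF."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Hypothetical counts: trials containing a shape change
# (hits vs. misses) and trials without one (false alarms
# vs. correct rejects).
hits, misses = 69, 31
fas, crs = 31, 69
print(round(d_prime(hits / (hits + misses), fas / (fas + crs)), 2))  # -> 0.99
```

With hit and false-alarm rates symmetric about 0.5, as here, d' near 1 corresponds to roughly threshold-level discrimination, matching the "threshold-level shape discrimination" regime the study targets.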
ERIC Educational Resources Information Center
ERIC Clearinghouse on Reading and Communication Skills, Urbana, IL.
This collection of abstracts is part of a continuing series providing information on recent doctoral dissertations. The 27 titles deal with a variety of topics, including the following: facilitation of language development in disadvantaged preschool children; auditory-visual discrimination skills, language performance, and development of manual…
Outcome Analysis Tool for Army Refractive Surgery Program
2005-03-01
The analysis function produces reports on the following information:
- Evaluation of the safety of PRK and LASIK for maintenance of optimal visual performance and ocular integrity.
- Evaluation of the efficacy of PRK and LASIK by assessing the improvement in uncorrected vision for target detection, discrimination, and recognition.
- Evaluation of the efficacy of PRK and LASIK by evaluating the stability of the refractive error over time.
How category learning affects object representations: Not all morphspaces stretch alike
Folstein, Jonathan R.; Gauthier, Isabel; Palmeri, Thomas J.
2012-01-01
How does learning to categorize objects affect how we visually perceive them? Behavioral, neurophysiological, and neuroimaging studies have tested the degree to which category learning influences object representations, with conflicting results. Some studies find that objects become more visually discriminable along dimensions relevant to previously learned categories, while others find no such effect. One critical factor we explore here lies in the structure of the morphspaces used in different studies. Studies finding no increase in discriminability often use “blended” morphspaces, with morphparents lying at corners of the space. By contrast, studies finding increases in discriminability use “factorial” morphspaces, defined by separate morphlines forming axes of the space. Using the same four morphparents, we created both factorial and blended morphspaces matched in pairwise discriminability. Category learning caused a selective increase in discriminability along the relevant dimension of the factorial space, but not in the blended space, and led to the creation of functional dimensions in the factorial space, but not in the blended space. These findings demonstrate that not all morphspaces stretch alike: Only some morphspaces support enhanced discriminability to relevant object dimensions following category learning. Our results have important implications for interpreting neuroimaging studies reporting little or no effect of category learning on object representations in the visual system: Those studies may have been limited by their use of blended morphspaces. PMID:22746950
The retention and disruption of color information in human short-term visual memory.
Nemes, Vanda A; Parry, Neil R A; Whitaker, David; McKeefry, Declan J
2012-01-27
Previous studies have demonstrated that the retention of information in short-term visual perceptual memory can be disrupted by the presentation of masking stimuli during interstimulus intervals (ISIs) in delayed discrimination tasks (S. Magnussen & W. W. Greenlee, 1999). We have exploited this effect in order to determine to what extent short-term perceptual memory is selective for stimulus color. We employed a delayed hue discrimination paradigm to measure the fidelity with which color information was retained in short-term memory. The task required 5 color normal observers to discriminate between spatially non-overlapping colored reference and test stimuli that were temporally separated by an ISI of 5 s. The points of subjective equality (PSEs) on the resultant psychometric matching functions provided an index of performance. Measurements were made in the presence and absence of mask stimuli presented during the ISI, which varied in hue around the equiluminant plane in DKL color space. For all reference stimuli, we found a consistent mask-induced, hue-dependent shift in PSE compared to the "no mask" conditions. These shifts were found to be tuned in color space, only occurring for a range of mask hues that fell within bandwidths of 29-37 deg. Outside this range, masking stimuli had little or no effect on measured PSEs. The results demonstrate that memory masking for color exhibits selectivity similar to that which has already been demonstrated for other visual attributes. The relatively narrow tuning of these interference effects suggests that short-term perceptual memory for color is based on higher order, non-linear color coding. © ARVO
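A point of subjective equality (PSE) of the kind used here is typically read off a fitted psychometric function as its 50% point. A small sketch of that fit; the hue offsets and response proportions below are invented for illustration, not data from the study:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Cumulative Gaussian psychometric function: the PSE is its
# 50% point (loc); the slope is set by sigma (scale).
def cum_gauss(x, pse, sigma):
    return norm.cdf(x, loc=pse, scale=sigma)

# Hypothetical matching data: proportion of "test more red than
# reference" responses vs. test hue offset (degrees in color space).
hue_offset = np.array([-6.0, -4.0, -2.0, 0.0, 2.0, 4.0, 6.0])
p_red = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.95, 0.99])

(pse, sigma), _ = curve_fit(cum_gauss, hue_offset, p_red, p0=[0.0, 2.0])
print(f"PSE = {pse:.2f} deg, sigma = {sigma:.2f} deg")
```

A mask-induced shift in PSE, as reported in the abstract, would appear as a horizontal displacement of this fitted function relative to the no-mask condition.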
Automatic Spiral Analysis for Objective Assessment of Motor Symptoms in Parkinson's Disease.
Memedi, Mevludin; Sadikov, Aleksander; Groznik, Vida; Žabkar, Jure; Možina, Martin; Bergquist, Filip; Johansson, Anders; Haubenberger, Dietrich; Nyholm, Dag
2015-09-17
A challenge for the clinical management of advanced Parkinson's disease (PD) patients is the emergence of fluctuations in motor performance, which represents a significant source of disability during activities of daily living of the patients. There is a lack of objective measurement of treatment effects for in-clinic and at-home use that can provide an overview of the treatment response. The objective of this paper was to develop a method for objective quantification of advanced PD motor symptoms related to off episodes and peak dose dyskinesia, using spiral data gathered by a touch screen telemetry device. More specifically, the aim was to objectively characterize motor symptoms (bradykinesia and dyskinesia), to help in automating the process of visual interpretation of movement anomalies in spirals as rated by movement disorder specialists. Digitized upper limb movement data of 65 advanced PD patients and 10 healthy (HE) subjects were recorded as they performed spiral drawing tasks on a touch screen device in their home environment settings. Several spatiotemporal features were extracted from the time series and used as inputs to machine learning methods. The methods were validated against ratings on animated spirals scored by four movement disorder specialists who visually assessed a set of kinematic features and the motor symptom. The ability of the method to discriminate between PD patients and HE subjects and the test-retest reliability of the computed scores were also evaluated. Computed scores correlated well with mean visual ratings of individual kinematic features. The best performing classifier (Multilayer Perceptron) classified the motor symptom (bradykinesia or dyskinesia) with an accuracy of 84% and area under the receiver operating characteristics curve of 0.86 in relation to visual classifications of the raters. 
In addition, the method provided high discriminating power when distinguishing between PD patients and HE subjects as well as had good test-retest reliability. This study demonstrated the potential of using digital spiral analysis for objective quantification of PD-specific and/or treatment-induced motor symptoms.
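Spatiotemporal features of this kind can be computed directly from the digitized (x, y, t) samples of a spiral drawing. A toy sketch; the two features and the synthetic Archimedean spiral are illustrative assumptions, not the paper's actual feature set:

```python
import numpy as np

def spiral_features(x, y, t):
    """Two toy spatiotemporal features from a digitized spiral:
    mean drawing speed and variability of the radial growth rate,
    both plausible candidates for quantifying slowed or erratic
    movement (bradykinesia/dyskinesia)."""
    x, y, t = map(np.asarray, (x, y, t))
    dt = np.diff(t)
    speed = np.hypot(np.diff(x), np.diff(y)) / dt   # pointwise pen speed
    radial_rate = np.diff(np.hypot(x, y)) / dt      # growth of radius over time
    return {"mean_speed": float(speed.mean()),
            "radial_rate_sd": float(radial_rate.std())}

# Hypothetical ideal Archimedean spiral sampled at 50 Hz for 5 s:
# a steady drawing has a nearly constant radial growth rate.
t = np.linspace(0.0, 5.0, 251)
theta = 4 * np.pi * t / 5
r = 2 * theta
feats = spiral_features(r * np.cos(theta), r * np.sin(theta), t)
print(feats)
```

Features like these would then feed the machine learning stage the abstract describes, with tremulous or hesitant drawings producing higher radial-rate variability than the near-zero value of this ideal spiral.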
Impaired discrimination learning in interneuronal NMDAR-GluN2B mutant mice.
Brigman, Jonathan L; Daut, Rachel A; Saksida, Lisa; Bussey, Timothy J; Nakazawa, Kazu; Holmes, Andrew
2015-06-17
Previous studies have established a role for N-methyl-D-aspartate receptor (NMDAR) containing the GluN2B subunit in efficient learning behavior on a variety of tasks. Recent findings have suggested that NMDAR on GABAergic interneurons may underlie the modulation of striatal function necessary to balance efficient action with cortical excitatory input. Here we investigated how loss of GluN2B-containing NMDAR on GABAergic interneurons altered corticostriatal-mediated associative learning. Mutant mice (floxed-GluN2B×Ppp1r2-Cre) were generated to produce loss of GluN2B on forebrain interneurons and phenotyped on a touchscreen-based pairwise visual learning paradigm. We found that the mutants showed normal performance during Pavlovian and instrumental pretraining, but were significantly impaired on a discrimination learning task. Detailed analysis of the microstructure of discrimination performance revealed reduced win→stay behavior in the mutants. These results further support the role of NMDAR, and GluN2B in particular, on modulation of striatal function necessary for efficient choice behavior and suggest that NMDAR on interneurons may play a critical role in associative learning.
Unger, Ashley; Alm, Kylie H.; Collins, Jessica A.; O’Leary, Jacqueline M.; Olson, Ingrid R.
2017-01-01
Objective The extended face network contains clusters of neurons that perform distinct functions on facial stimuli. Regions in the posterior ventral visual stream appear to perform basic perceptual functions on faces, while more anterior regions, such as the ventral anterior temporal lobe and amygdala, function to link mnemonic and affective information to faces. Anterior and posterior regions are interconnected by long-range white matter tracts; however, it is not known whether variation in the connectivity of these pathways explains cognitive performance. Methods Here, we used diffusion imaging and deterministic tractography in a cohort of 28 neurologically normal adults ages 18–28 to examine microstructural properties of visual fiber pathways and their relationship to certain mnemonic and affective functions involved in face processing. We investigated how inter-individual variability in two tracts, the inferior longitudinal fasciculus (ILF) and the inferior fronto-occipital fasciculus (IFOF), related to performance on tests of facial emotion recognition and face memory. Results Results revealed that microstructure of both tracts predicted variability in behavioral performance indexed by both tasks, suggesting that the ILF and IFOF play a role in facilitating our ability to discriminate emotional expressions in faces, as well as to remember unique faces. Variation in a control tract, the uncinate fasciculus, did not predict performance on these tasks. Conclusions These results corroborate and extend the findings of previous neuropsychology studies investigating the effects of damage to the ILF and IFOF, and demonstrate that differences in face processing abilities are related to white matter microstructure, even in healthy individuals. PMID:26888615
Asymmetry and irregularity border as discrimination factor between melanocytic lesions
NASA Astrophysics Data System (ADS)
Sbrissa, David; Pratavieira, Sebastião.; Salvio, Ana Gabriela; Kurachi, Cristina; Bagnato, Vanderlei Salvadori; Costa, Luciano Da Fontoura; Travieso, Gonzalo
2015-06-01
Image processing tools have been widely used in systems supporting medical diagnosis. The use of mobile devices for the diagnosis of melanoma can assist doctors and improve their diagnosis of a melanocytic lesion. This study proposes a method of image analysis for discriminating melanoma from other types of melanocytic lesions, such as regular and atypical nevi. The process is based on extracting features related to asymmetry and border irregularity. A total of 104 images were collected from a two-year medical database. The images were obtained with standard digital cameras without lighting and scale control. Metrics relating to the characteristics of shape, asymmetry, and curvature of the contour were extracted from segmented images. Linear Discriminant Analysis was performed for dimensionality reduction and data visualization. Segmentation results showed good efficiency in the process, with approximately 88.5% accuracy. Validation results present a sensitivity and specificity of 85% and 70%, respectively, for melanoma detection.
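The two feature families named above, asymmetry and border irregularity, can be sketched on binary lesion masks. The specific measures below (mirror-overlap asymmetry, a compactness-style irregularity index) and the toy masks are illustrative assumptions, not the paper's actual metrics; Linear Discriminant Analysis would then be applied to feature vectors of this kind.

```python
# Illustrative shape features on binary lesion masks (assumed definitions,
# not the study's exact metrics).
import numpy as np

def asymmetry(mask):
    """Fraction of lesion area that fails to overlap its left-right mirror."""
    flipped = mask[:, ::-1]
    return np.logical_xor(mask, flipped).sum() / mask.sum()

def border_irregularity(mask):
    """Perimeter^2 / (4*pi*area): ~1 for a disc, larger for ragged borders."""
    # crude boundary estimate: lesion pixels with at least one background
    # 4-neighbour
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = (mask & ~interior).sum()
    return perimeter ** 2 / (4 * np.pi * mask.sum())

# Two toy masks: a centred disc (regular nevus-like) vs. an off-centre
# lobed blob (melanoma-like)
yy, xx = np.mgrid[:64, :64]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2) < 20 ** 2
blob = ((yy - 32) ** 2 + (xx - 28) ** 2) < \
       (20 + 6 * np.sin(5 * np.arctan2(yy - 32, xx - 28))) ** 2

feats = np.array([[asymmetry(m), border_irregularity(m)] for m in (disc, blob)])
```

Both measures come out higher for the lobed, off-centre blob than for the disc, which is the separation the classifier would exploit.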
Abbey, Craig K.; Zemp, Roger J.; Liu, Jie; Lindfors, Karen K.; Insana, Michael F.
2009-01-01
We investigate and extend the ideal observer methodology developed by Smith and Wagner to detection and discrimination tasks related to breast sonography. We provide a numerical approach for evaluating the ideal observer acting on radio-frequency (RF) frame data, which involves inversion of large nonstationary covariance matrices, and we describe a power-series approach to computing this inverse. Considering a truncated power series suggests that the RF data be Wiener-filtered before forming the final envelope image. We have compared human performance for Wiener-filtered and conventional B-mode envelope images using psychophysical studies for five tasks related to breast cancer classification. We find significant improvements in visual detection and discrimination efficiency in four of these five tasks. We also use the Smith-Wagner approach to distinguish between human and processing inefficiencies, and find that generally the principal limitation comes from the information lost in computing the final envelope image. PMID:16468454
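The idea of Wiener-filtering RF data before envelope detection can be sketched under simplifying assumptions. The pulse shape, sampling rate, and (known-by-construction) signal and noise spectra below are synthetic stand-ins; a real implementation would estimate the spectra from data, and this is not the paper's covariance-matrix derivation.

```python
# Sketch: Wiener-filter a noisy synthetic RF A-line, then take the envelope
# via the analytic signal. Spectra are assumed known for illustration.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
fs = 40e6                                   # assumed 40 MHz RF sampling rate
t = np.arange(2048) / fs
# Synthetic RF echo: a Gaussian-windowed 5 MHz pulse
rf_clean = (np.sin(2 * np.pi * 5e6 * t)
            * np.exp(-((t - 25e-6) ** 2) / (2 * (5e-6) ** 2)))
rf = rf_clean + rng.normal(0, 0.5, t.size)  # white acquisition noise

# Wiener filter H = S / (S + N), built from the known spectra
S = np.abs(np.fft.rfft(rf_clean)) ** 2      # signal power spectrum
N = np.full_like(S, 0.5 ** 2 * t.size)      # flat white-noise power spectrum
H = S / (S + N)
rf_filt = np.fft.irfft(np.fft.rfft(rf) * H, n=t.size)

# Envelope (B-mode line) computed after filtering
envelope = np.abs(hilbert(rf_filt))
```

Filtering before envelope detection preserves phase information that the envelope operation would otherwise discard, which is the motivation the truncated power series provides.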
Auditory discrimination therapy (ADT) for tinnitus management.
Herraiz, C; Diges, I; Cobo, P
2007-01-01
Auditory discrimination training (ADT) is a procedure designed to expand the cortical areas responding to trained frequencies (damaged cochlear areas with cortical misrepresentation) and to shrink the neighboring over-represented ones (tinnitus pitch). In a prospective descriptive study of 27 patients with high frequency tinnitus, the severity of the tinnitus was measured using a visual analog scale (VAS) and the tinnitus handicap inventory (THI). Patients performed a 10-min auditory discrimination task twice a day for one month. Discontinuous 4 kHz pure tones were mixed randomly with short broadband noise sounds through an MP3 system. After the treatment, mean VAS scores were reduced from 5.2 to 4.5 (p=0.000) and the THI decreased from 26.2% to 21.3% (p=0.000). Forty percent of the patients had improvement in tinnitus perception (RESP). Comparing the ADT group with a control group showed statistically significant improvement of their tinnitus as assessed by RESP, VAS, and THI.
Ebersbach, Mirjam; Nawroth, Christian
2016-01-01
Tracking objects that are hidden and then moved is a crucial ability related to object permanence, which develops across several stages in early childhood. In spatial rotation tasks, children observe a target object that is hidden in one of two or more containers before the containers are rotated around a fixed axis. Usually, 30-month-olds fail to find the hidden object after it was rotated by 180°. We examined whether visual discriminability of the containers improves 30-month-olds’ success in this task and whether children perform better after 90° than after 180° rotations. Two potential hiding containers with same or different colors were placed on a board that was rotated by 90° or 180° in a within-subjects design. Children (N = 29) performed above chance level in all four conditions. Their overall success in finding the object did not improve by differently colored containers. However, different colors prevented children from showing an inhibition bias in 90° rotations, that is, choosing the empty container more often when it was located close to them than when it was farther away: This bias emerged in the same colors condition but not in the different colors condition. Results are discussed in view of particular challenges that might facilitate or deteriorate spatial rotation tasks for young children. PMID:27812346
ERIC Educational Resources Information Center
Friar, John T.
Two factors of predicted learning disorders were investigated: (1) inability to maintain appropriate classroom behavior (BEH), (2) perceptual discrimination deficit (PERC). Three groups of first-graders (BEH, PERC, normal control) were administered measures of impulse control, distractibility, auditory discrimination, and visual discrimination.…
Pérez-Garín, Daniel; Recio, Patricia; Magallares, Alejandro; Molero, Fernando; García-Ael, Cristina
2018-05-15
The purpose of this study is to assess the discourse of people with disabilities regarding their perception of discrimination and stigma. Semi-structured interviews were conducted with ten adults with physical disabilities, ten with hearing impairments, and seven with visual impairments. The agreement between the coders showed excellent reliability for all three groups, with kappa coefficients between .82 and .96. Differences were assessed between the three groups regarding the types of discrimination they experienced and their most frequent emotional responses. People with physical disabilities mainly reported being stared at, undervalued, and subtly discriminated against at work, whereas people with hearing impairments mainly reported encountering barriers in leisure activities, and people with visual impairments spoke of a lack of equal opportunities, mockery and/or bullying, and overprotection. Regarding their emotional reactions, people with physical disabilities mainly reported feeling anxious and depressed, whereas people with hearing impairments reported feeling helpless, and people with visual impairments reported feeling anger and self-pity. Findings are relevant to guide future research and interventions on the stigma of disability.
Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.
Barbosa, Sara; Pires, Gabriel; Nunes, Urbano
2016-03-01
Brain computer interfaces (BCIs) are one of the last communication options for patients in the locked-in state (LIS). For complete LIS patients, interfaces must be gaze-independent due to their eye impairment. However, unimodal gaze-independent approaches typically present levels of performance substantially lower than gaze-dependent approaches. The combination of multimodal stimuli has been pointed to as a viable way to increase users' performance. A hybrid visual and auditory (HVA) P300-based BCI combining simultaneous visual and auditory stimulation is proposed. Auditory stimuli are based on natural meaningful spoken words, increasing stimulus discrimination and decreasing the user's mental effort in associating stimuli with symbols. The visual part of the interface is covertly controlled, ensuring gaze-independency. Four conditions were experimentally tested by 10 healthy participants: visual overt (VO), visual covert (VC), auditory (AU) and covert HVA. Average online accuracy for the hybrid approach was 85.3%, which is more than 32% over the VC and AU approaches. Questionnaire results indicate that the HVA approach was the least demanding gaze-independent interface. Interestingly, the P300 grand average for the HVA approach coincides with an almost perfect sum of the P300 evoked separately by the VC and AU tasks. The proposed HVA-BCI is the first solution simultaneously embedding natural spoken words and visual words to provide a communication lexicon. Online accuracy and task demand of the approach compare favorably with the state of the art. The proposed approach shows that the simultaneous combination of visual covert control and auditory modalities can effectively improve the performance of gaze-independent BCIs. Copyright © 2015 Elsevier B.V. All rights reserved.
Larcombe, Stephanie J.; Kennard, Chris
2017-01-01
Repeated practice of a specific task can improve visual performance, but the neural mechanisms underlying this improvement in performance are not yet well understood. Here we trained healthy participants on a visual motion task daily for 5 days in one visual hemifield. Before and after training, we used functional magnetic resonance imaging (fMRI) to measure the change in neural activity. We also imaged a control group of participants on two occasions who did not receive any task training. While in the MRI scanner, all participants completed the motion task in the trained and untrained visual hemifields separately. Following training, participants improved their ability to discriminate motion direction in the trained hemifield and, to a lesser extent, in the untrained hemifield. The amount of task learning correlated positively with the change in activity in the medial superior temporal (MST) area. MST is the anterior portion of the human motion complex (hMT+). MST changes were localized to the hemisphere contralateral to the region of the visual field, where perceptual training was delivered. Visual areas V2 and V3a showed an increase in activity between the first and second scan in the training group, but this was not correlated with performance. The contralateral anterior hippocampus and bilateral dorsolateral prefrontal cortex (DLPFC) and frontal pole showed changes in neural activity that also correlated with the amount of task learning. These findings emphasize the importance of MST in perceptual learning of a visual motion task. Hum Brain Mapp 39:145–156, 2018. © 2017 Wiley Periodicals, Inc. PMID:28963815
Koen, Joshua D; Borders, Alyssa A; Petzold, Michael T; Yonelinas, Andrew P
2017-02-01
The medial temporal lobe (MTL) plays a critical role in episodic long-term memory, but whether the MTL is necessary for visual short-term memory is controversial. Some studies have indicated that MTL damage disrupts visual short-term memory performance whereas other studies have failed to find such evidence. To account for these mixed results, it has been proposed that the hippocampus is critical in supporting short-term memory for high resolution complex bindings, while the cortex is sufficient to support simple, low resolution bindings. This hypothesis was tested in the current study by assessing visual short-term memory in patients with damage to the MTL and controls for high resolution and low resolution object-location and object-color associations. In the location tests, participants encoded sets of two or four objects in different locations on the screen. After each set, participants performed a two-alternative forced-choice task in which they were required to discriminate the object in the target location from the object in a high or low resolution lure location (i.e., the object locations were very close or far away from the target location, respectively). Similarly, in the color tests, participants were presented with sets of two or four objects in different colors and, after each set, were required to discriminate the object in the target color from the object in a high or low resolution lure color (i.e., the lure color was very similar or very different, respectively, to the studied color). The patients were significantly impaired in visual short-term memory, but importantly, they were more impaired for high resolution object-location and object-color bindings. The results are consistent with the proposal that the hippocampus plays a critical role in forming and maintaining complex, high resolution bindings. © 2016 Wiley Periodicals, Inc.
Makowiecki, Kalina; Hammond, Geoff; Rodger, Jennifer
2012-01-01
In behavioural experiments, motivation to learn can be achieved using food rewards as positive reinforcement in food-restricted animals. Previous studies reduce animal weights to 80–90% of free-feeding body weight as the criterion for food restriction. However, effects of different degrees of food restriction on task performance have not been assessed. We compared learning task performance in mice food-restricted to 80 or 90% body weight (BW). We used adult wildtype (WT; C57Bl/6j) and knockout (ephrin-A2−/−) mice, previously shown to have a reverse learning deficit. Mice were trained in a two-choice visual discrimination task with food reward as positive reinforcement. When mice reached criterion for one visual stimulus (80% correct in three consecutive 10 trial sets) they began the reverse learning phase, where the rewarded stimulus was switched to the previously incorrect stimulus. For the initial learning and reverse phase of the task, mice at 90%BW took almost twice as many trials to reach criterion as mice at 80%BW. Furthermore, WT 80 and 90%BW groups significantly differed in percentage correct responses and learning strategy in the reverse learning phase, whereas no differences between weight restriction groups were observed in ephrin-A2−/− mice. Most importantly, genotype-specific differences in reverse learning strategy were only detected in the 80%BW groups. Our results indicate that increased food restriction not only results in better performance and a shorter training period, but may also be necessary for revealing behavioural differences between experimental groups. This has important ethical and animal welfare implications when deciding extent of diet restriction in behavioural studies. PMID:23144936
Wijesekara Witharanage, Randika; Rosa, Marcello G. P.
2012-01-01
Background Recent studies on colour discrimination suggest that experience is an important factor in how a visual system processes spectral signals. In insects it has been shown that differential conditioning is important for processing fine colour discriminations. However, the visual system of many insects, including the honeybee, has a complex set of neural pathways, in which input from the long wavelength sensitive (‘green’) photoreceptor may be processed either as an independent achromatic signal or as part of a trichromatic opponent-colour system. Thus, a potential confound of colour learning in insects is the possibility that modulation of the ‘green’ photoreceptor could underlie observations. Methodology/Principal Findings We tested honeybee vision using light emitting diodes centered on 414 and 424 nm wavelengths, which limit activation to the short-wavelength-sensitive (‘UV’) and medium-wavelength-sensitive (‘blue’) photoreceptors. The absolute irradiance spectra of the stimuli were measured and modelled at both receptor and colour processing levels, and stimuli were then presented to the bees in a Y-maze at a large visual angle (26°), to ensure chromatic processing. Sixteen bees were trained over 50 trials, using either appetitive differential conditioning (N = 8), or aversive-appetitive differential conditioning (N = 8). In both cases the bees slowly learned to discriminate between the target and distractor with significantly better accuracy than would be expected by chance. Control experiments confirmed that changing stimulus intensity in transfer tests does not significantly affect bee performance, and it was possible to replicate previous findings that bees do not learn similar colour stimuli with absolute conditioning. Conclusion Our data indicate that honeybee colour vision can be tuned to relatively small spectral differences, independent of ‘green’ photoreceptor contrast and brightness cues. 
We thus show that colour vision is at least partly experience dependent, and behavioural plasticity plays an important role in how bees exploit colour information. PMID:23155394
Techniques for Programming Visual Demonstrations.
ERIC Educational Resources Information Center
Gropper, George L.
Visual demonstrations may be used as part of programs to deliver both content objectives and process objectives. Research has shown that learning of concepts is easier, more accurate, and more broadly applied when it is accompanied by visual examples. The visual examples supporting content learning should emphasize both discrimination and…
Visual modifications on the P300 speller BCI paradigm
NASA Astrophysics Data System (ADS)
Salvaris, M.; Sepulveda, F.
2009-08-01
The best known P300 speller brain-computer interface (BCI) paradigm is the Farwell and Donchin paradigm. In this paper, various changes to the visual aspects of this protocol are explored, as well as their effects on classification. Changes to the dimensions of the symbols, the distance between the symbols and the colours used were tested. The purpose of the present work was not to achieve the highest possible accuracy results, but to ascertain whether these simple modifications to the visual protocol would produce classification differences and, if so, what those differences would be. Eight subjects were used, with each subject carrying out a total of six different experiments. In each experiment, the user spelt a total of 39 characters. Two types of classifiers were trained and tested to determine whether the results were classifier dependent. These were a support vector machine (SVM) with a radial basis function (RBF) kernel and Fisher's linear discriminant (FLD). The single-trial classification results and multiple-trial classification results were recorded and compared. Although no visual protocol was the best for all subjects, the best performances, across both classifiers, were obtained with the white background (WB) visual protocol. The worst performance was obtained with the small symbol size (SSS) visual protocol.
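The two classifiers named above can be compared on synthetic P300-like feature vectors (not the authors' EEG recordings). For two classes with equal covariances, scikit-learn's `LinearDiscriminantAnalysis` is equivalent to Fisher's linear discriminant; the feature dimensionality and class separation below are invented for the sketch.

```python
# Hypothetical comparison of an RBF-kernel SVM and Fisher's linear
# discriminant on synthetic "P300 present/absent" epoch features.
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, d = 200, 10                                  # 200 epochs, 10 features each
target = rng.normal(1.0, 1.0, (n // 2, d))      # P300 present: shifted mean
nontarget = rng.normal(0.0, 1.0, (n // 2, d))   # P300 absent
X = np.vstack([target, nontarget])
y = np.array([1] * (n // 2) + [0] * (n // 2))

accs = {}
for name, clf in [("SVM-RBF", SVC(kernel="rbf", C=1.0)),
                  ("FLD", LinearDiscriminantAnalysis())]:
    accs[name] = cross_val_score(clf, X, y, cv=5).mean()  # 5-fold CV accuracy
```

On near-Gaussian data like this the two classifiers perform similarly, which is consistent with the paper's aim of comparing visual protocols rather than classifiers.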
Lim, JaeHyoung; Oh, In Kyung; Han, Changsu; Huh, Yu Jeong; Jung, In-Kwa; Patkar, Ashwin A; Steffens, David C; Jang, Bo-Hyoung
2013-09-01
We performed a meta-analysis in order to determine which neuropsychological domains and tasks would be most sensitive for discriminating between patients with major depressive disorder (MDD) and healthy controls. Relevant articles were identified through a literature search of the PubMed and Cochrane Library databases for the period between January 1997 and May 2011. A meta-analysis was conducted using the standardized means of individual cognitive tests in each domain. The heterogeneity was assessed, and subgroup analyses according to age and medication status were performed to explore the sources of heterogeneity. A total of 22 trials involving 955 MDD patients and 7,664 healthy participants were selected for our meta-analysis. MDD patients showed significantly impaired results compared with healthy participants on the Digit Span and Continuous Performance Test in the attention domain; the Trail Making Test A (TMT-A) and the Digit Symbol Test in the processing speed domain; the Stroop Test, the Wisconsin Card Sorting Test, and Verbal Fluency in the executive function domain; and immediate verbal memory in the memory domain. The Finger Tapping Task, TMT-B, delayed verbal memory, and immediate and delayed visual memory failed to separate MDD patients from healthy controls. The results of subgroup analysis showed that performance of Verbal Fluency was significantly impaired in younger depressed patients (<60 years), and immediate visual memory was significantly reduced in depressed patients using antidepressants. Our findings have inevitable limitations arising from methodological issues inherent in meta-analysis, and we could not explain the high heterogeneity between studies. Despite these limitations, the current study has the strength of being the first meta-analysis to characterize the cognitive function of depressed patients relative to healthy participants. Our findings may provide clinicians with further evidence that certain cognitive tests in specific cognitive domains are sensitive enough to discriminate MDD patients from healthy controls.
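The core computation such a meta-analysis rests on, a standardized mean difference per study pooled by inverse-variance weighting, can be sketched as follows. Hedges' g is used here as the standardized mean difference, and the study numbers are invented for illustration; they are not the paper's data.

```python
# Sketch: per-study Hedges' g and a fixed-effect inverse-variance pooled
# estimate. All numbers are made up for illustration.
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardized mean difference between two groups."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                      # Cohen's d (pooled SD)
    j = 1 - 3 / (4 * (n1 + n2) - 9)         # small-sample correction
    g = j * d
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var

# (mean, sd, n) for a patient group vs. controls on some cognitive test,
# three hypothetical studies (patients scoring lower)
studies = [((22.1, 5.0, 40), (25.3, 4.8, 45)),
           ((18.9, 6.1, 30), (21.0, 5.5, 60)),
           ((30.2, 7.2, 55), (33.8, 6.9, 50))]

gs, ws = [], []
for (m1, s1, n1), (m2, s2, n2) in studies:
    g, var = hedges_g(m1, s1, n1, m2, s2, n2)
    gs.append(g)
    ws.append(1 / var)                      # inverse-variance weight
pooled = np.average(gs, weights=ws)
```

A negative pooled g here means the patient group scores below controls; the paper's heterogeneity assessment (e.g., I²) would be layered on top of these per-study estimates.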
Oscillations during observations: Dynamic oscillatory networks serving visuospatial attention.
Wiesman, Alex I; Heinrichs-Graham, Elizabeth; Proskovec, Amy L; McDermott, Timothy J; Wilson, Tony W
2017-10-01
The dynamic allocation of neural resources to discrete features within a visual scene enables us to react quickly and accurately to salient environmental circumstances. A network of bilateral cortical regions is known to subserve such visuospatial attention functions; however the oscillatory and functional connectivity dynamics of information coding within this network are not fully understood. Particularly, the coding of information within prototypical attention-network hubs and the subsecond functional connections formed between these hubs have not been adequately characterized. Herein, we use the precise temporal resolution of magnetoencephalography (MEG) to define spectrally specific functional nodes and connections that underlie the deployment of attention in visual space. Twenty-three healthy young adults completed a visuospatial discrimination task designed to elicit multispectral activity in visual cortex during MEG, and the resulting data were preprocessed and reconstructed in the time-frequency domain. Oscillatory responses were projected to the cortical surface using a beamformer, and time series were extracted from peak voxels to examine their temporal evolution. Dynamic functional connectivity was then computed between nodes within each frequency band of interest. We find that visual attention network nodes are defined functionally by oscillatory frequency, that the allocation of attention to the visual space dynamically modulates functional connectivity between these regions on a millisecond timescale, and that these modulations significantly correlate with performance on a spatial discrimination task. We conclude that functional hubs underlying visuospatial attention are segregated not only anatomically but also by oscillatory frequency, and importantly that these oscillatory signatures promote dynamic communication between these hubs. Hum Brain Mapp 38:5128-5140, 2017. © 2017 Wiley Periodicals, Inc.
Perceptual Grouping Enhances Visual Plasticity
Mastropasqua, Tommaso; Turatto, Massimo
2013-01-01
Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity. PMID:23301100
Attention improves encoding of task-relevant features in the human visual cortex.
Jehee, Janneke F M; Brady, Devin K; Tong, Frank
2011-06-01
When spatial attention is directed toward a particular stimulus, increased activity is commonly observed in corresponding locations of the visual cortex. Does this attentional increase in activity indicate improved processing of all features contained within the attended stimulus, or might spatial attention selectively enhance the features relevant to the observer's task? We used fMRI decoding methods to measure the strength of orientation-selective activity patterns in the human visual cortex while subjects performed either an orientation or contrast discrimination task, involving one of two laterally presented gratings. Greater overall BOLD activation with spatial attention was observed in visual cortical areas V1-V4 for both tasks. However, multivariate pattern analysis revealed that orientation-selective responses were enhanced by attention only when orientation was the task-relevant feature and not when the contrast of the grating had to be attended. In a second experiment, observers discriminated the orientation or color of a specific lateral grating. Here, orientation-selective responses were enhanced in both tasks, but color-selective responses were enhanced only when color was task relevant. In both experiments, task-specific enhancement of feature-selective activity was not confined to the attended stimulus location but instead spread to other locations in the visual field, suggesting the concurrent involvement of a global feature-based attentional mechanism. These results suggest that attention can be remarkably selective in its ability to enhance particular task-relevant features and further reveal that increases in overall BOLD amplitude are not necessarily accompanied by improved processing of stimulus information.
Working memory resources are shared across sensory modalities.
Salmela, V R; Moisala, M; Alho, K
2014-10-01
A common assumption in the working memory literature is that the visual and auditory modalities have separate and independent memory stores. Recent evidence on visual working memory has suggested that resources are shared between representations, and that the precision of representations sets the limit for memory performance. We tested whether memory resources are also shared across sensory modalities. Memory precision for two visual (spatial frequency and orientation) and two auditory (pitch and tone duration) features was measured separately for each feature and for all possible feature combinations. Thus, only the memory load was varied, from one to four features, while keeping the stimuli similar. In Experiment 1, two gratings and two tones (both containing two varying features) were presented simultaneously. In Experiment 2, two gratings and two tones (each containing only one varying feature) were presented sequentially. The memory precision (delayed discrimination threshold) for a single feature was close to the perceptual threshold. However, as the number of features to be remembered was increased, the discrimination thresholds increased more than twofold. Importantly, the decrease in memory precision did not depend on the modality of the other feature(s), or on whether the features were in the same or in separate objects. Hence, simultaneously storing one visual and one auditory feature had an effect on memory precision equal to that of simultaneously storing two visual or two auditory features. The results show that working memory is limited by the precision of the stored representations, and that working memory can be described as a resource pool that is shared across modalities.
Nimodipine alters acquisition of a visual discrimination task in chicks.
Deyo, R; Panksepp, J; Conner, R L
1990-03-01
Chicks 5 days old received intraperitoneal injections of nimodipine 30 min before training on either a visual discrimination task (0, 0.5, 1.0, or 5.0 mg/kg) or a test of separation-induced distress vocalizations (0, 0.5, or 2.5 mg/kg). Chicks receiving 1.0 mg/kg nimodipine made significantly fewer visual discrimination errors than vehicle controls by trials 41-60, but did not differ from controls 24 h later. Chicks in the 5 mg/kg group made significantly more errors when compared to controls both during acquisition of the task and during retention. Nimodipine did not alter separation-induced distress vocalizations at any of the doses tested, suggesting that nimodipine's effects on learning cannot be attributed to a reduction in separation distress. These data indicate that nimodipine's facilitation of learning in young subjects is dose dependent, but nimodipine failed to enhance retention.
Crowding with detection and coarse discrimination of simple visual features.
Põder, Endel
2008-04-24
Some recent studies have suggested that there are actually no crowding effects with detection and coarse discrimination of simple visual features. The present study tests the generality of this idea. A target Gabor patch, surrounded by either 2 or 6 flanker Gabors, was presented briefly at 4 deg eccentricity of the visual field. Each Gabor patch was oriented either vertically or horizontally (selected randomly). Observers' task was either to detect the presence of the target (presented with probability 0.5) or to identify the orientation of the target. The target-flanker distance was varied. Results were similar for the two tasks but different for 2 and 6 flankers. The idea that feature detection and coarse discrimination are immune to crowding may be valid for the two-flanker condition only. With six flankers, a normal crowding effect was observed. It is suggested that the complexity of the full pattern (target plus flankers) could explain the difference.
Category learning increases discriminability of relevant object dimensions in visual cortex.
Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel
2013-04-01
Learning to categorize objects can transform how they are perceived, causing relevant perceptual dimensions predictive of object category to become enhanced. For example, an expert mycologist might become attuned to species-specific patterns of spacing between mushroom gills but learn to ignore cap textures attributable to varying environmental conditions. These selective changes in perception can persist beyond the act of categorizing objects and influence our ability to discriminate between them. Using functional magnetic resonance imaging adaptation, we demonstrate that such category-specific perceptual enhancements are associated with changes in the neural discriminability of object representations in visual cortex. Regions within the anterior fusiform gyrus became more sensitive to small variations in shape that were relevant during prior category learning. In addition, extrastriate occipital areas showed heightened sensitivity to small variations in shape that spanned the category boundary. Visual representations in cortex, just like our perception, are sensitive to an object's history of categorization.
VISUAL FUNCTION CHANGES AFTER SUBCHRONIC TOLUENE INHALATION IN LONG-EVANS RATS.
Chronic exposure to volatile organic compounds, including toluene, has been associated with visual deficits such as reduced visual contrast sensitivity or impaired color discrimination in studies of occupational or residential exposure. These reports remain controversial, however...
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are processed with differing effectiveness depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing of auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for the visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set.
ERIC Educational Resources Information Center
Washington County Public Schools, Washington, PA.
Symptoms displayed by primary age children with learning disabilities are listed; perceptual handicaps are explained. Activities are suggested for developing visual perception and perception involving motor activities. Also suggested are activities to develop body concept, visual discrimination and attentiveness, visual memory, and figure ground…
Heinen, Klaartje; Jolij, Jacob; Lamme, Victor A F
2005-09-08
Discriminating objects from their surroundings by the visual system is known as figure-ground segregation. This process entails two different subprocesses: boundary detection and subsequent surface segregation or 'filling in'. In this study, we used transcranial magnetic stimulation to test the hypothesis that temporally distinct processes in V1 and related early visual areas such as V2 or V3 are causally related to the process of figure-ground segregation. Our results indicate that correct discrimination between two visual stimuli, which relies on figure-ground segregation, requires two separate periods of information processing in the early visual cortex: one around 130-160 ms and the other around 250-280 ms.
NASA Astrophysics Data System (ADS)
Vilardi, Andrea; Tabarelli, Davide; Ricci, Leonardo
2015-02-01
Decision making is a widespread research topic and plays a crucial role in neuroscience as well as in other research and application fields such as biology, medicine and economics. The most basic implementation of decision making, namely binary discrimination, is successfully interpreted by means of signal detection theory (SDT), a statistical model that is deeply linked to physics. An additional, widespread tool to investigate discrimination ability is the psychometric function, which measures the probability of a given response as a function of the magnitude of a physical quantity underlying the stimulus. However, the link between psychometric functions and binary discrimination experiments is often neglected or misinterpreted. The aim of the present paper is to provide a detailed description of an experimental investigation of a prototypical discrimination task and to discuss the results in terms of SDT. To this purpose, we provide an outline of the theory and describe the implementation of two behavioural experiments in the visual modality: after assessing the so-called psychometric function, we show how to tailor a binary discrimination experiment to assess performance and decisional bias, and how to measure these quantities on a statistical basis. Attention is devoted to the evaluation of uncertainties, an aspect that is also often overlooked in the scientific literature.
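The two SDT quantities at the heart of such an analysis (sensitivity d′ and decision criterion c), together with a cumulative-Gaussian psychometric function, can be sketched in a few lines. This is a minimal illustration, not the authors' code; the function names are ours.

```python
from scipy.stats import norm

def dprime(hit_rate, fa_rate):
    """Sensitivity index: d' = z(H) - z(FA)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def criterion(hit_rate, fa_rate):
    """Decision criterion: c = -(z(H) + z(FA)) / 2; zero means no response bias."""
    return -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))

def psychometric(x, threshold, spread):
    """Cumulative-Gaussian psychometric function: probability of a 'yes'
    response as a function of stimulus magnitude x; equals 0.5 at threshold."""
    return norm.cdf((x - threshold) / spread)

# An unbiased observer with 80% hits and 20% false alarms: d' ≈ 1.68, c = 0.
```

Fitting `psychometric` to response proportions and reading off the threshold is the link between the psychometric function and the binary discrimination experiment that the abstract emphasizes.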
Training directionally selective motion pathways can significantly improve reading efficiency
NASA Astrophysics Data System (ADS)
Lawton, Teri
2004-06-01
This study examined whether perceptual learning at early levels of visual processing would facilitate learning at higher levels of processing. This was examined by determining whether training the motion pathways by practicing left-right movement discrimination, as found previously, would improve the reading skills of inefficient readers significantly more than another computer game, a word discrimination game, or the reading program offered by the school. This controlled validation study found that practicing left-right movement discrimination 5-10 minutes twice a week (rapidly) for 15 weeks doubled reading fluency, and significantly improved all reading skills by more than one grade level, whereas inefficient readers in the control groups barely improved on these reading skills. In contrast to previous studies of perceptual learning, these experiments show that perceptual learning of direction discrimination significantly improved reading skills determined at higher levels of cognitive processing, thereby being generalized to a new task. The deficits in reading performance and attentional focus experienced by struggling readers are suggested to result from an information overload, caused by timing deficits in the direction-selectivity network proposed by Russell De Valois et al. (2000), that resolves after practice on direction discrimination. This study found that practicing direction discrimination rapidly transitions the inefficient 7-year-old reader into an efficient reader.
Afraz, Arash; Boyden, Edward S.; DiCarlo, James J.
2015-01-01
Neurons that respond more to images of faces over nonface objects were identified in the inferior temporal (IT) cortex of primates three decades ago. Although it is hypothesized that perceptual discrimination between faces depends on the neural activity of IT subregions enriched with “face neurons,” such a causal link has not been directly established. Here, using optogenetic and pharmacological methods, we reversibly suppressed the neural activity in small subregions of IT cortex of macaque monkeys performing a facial gender-discrimination task. Each type of intervention independently demonstrated that suppression of IT subregions enriched in face neurons induced a contralateral deficit in face gender-discrimination behavior. The same neural suppression of other IT subregions produced no detectable change in behavior. These results establish a causal link between the neural activity in IT face neuron subregions and face gender-discrimination behavior. Also, the demonstration that brief neural suppression of specific spatial subregions of IT induces behavioral effects opens the door for applying the technical advantages of optogenetics to a systematic attack on the causal relationship between IT cortex and high-level visual perception. PMID:25953336
Learning Enhances Sensory and Multiple Non-sensory Representations in Primary Visual Cortex
Poort, Jasper; Khan, Adil G.; Pachitariu, Marius; Nemri, Abdellatif; Orsolic, Ivana; Krupic, Julija; Bauza, Marius; Sahani, Maneesh; Keller, Georg B.; Mrsic-Flogel, Thomas D.; Hofer, Sonja B.
2015-01-01
Summary We determined how learning modifies neural representations in primary visual cortex (V1) during acquisition of a visually guided behavioral task. We imaged the activity of the same layer 2/3 neuronal populations as mice learned to discriminate two visual patterns while running through a virtual corridor, where one pattern was rewarded. Improvements in behavioral performance were closely associated with increasingly distinguishable population-level representations of task-relevant stimuli, as a result of stabilization of existing and recruitment of new neurons selective for these stimuli. These effects correlated with the appearance of multiple task-dependent signals during learning: those that increased neuronal selectivity across the population when expert animals engaged in the task, and those reflecting anticipation or behavioral choices specifically in neuronal subsets preferring the rewarded stimulus. Therefore, learning engages diverse mechanisms that modify sensory and non-sensory representations in V1 to adjust its processing to task requirements and the behavioral relevance of visual stimuli. PMID:26051421
Auditory Confrontation Naming in Alzheimer’s Disease
Brandt, Jason; Bakker, Arnold; Maroof, David Aaron
2010-01-01
Naming is a fundamental aspect of language and is virtually always assessed with visual confrontation tests. Tests of the ability to name objects by their characteristic sounds would be particularly useful in the assessment of visually impaired patients, and may be particularly sensitive in Alzheimer’s disease (AD). We developed an Auditory Naming Task, requiring the identification of the source of environmental sounds (i.e., animal calls, musical instruments, vehicles) and multiple-choice recognition of those not identified. In two separate studies, mild-to-moderate AD patients performed more poorly than cognitively normal elderly on the Auditory Naming Task. This task was also more difficult than two versions of a comparable Visual Naming Task, and correlated more highly with Mini-Mental State Exam score. Internal consistency reliability was acceptable, although ROC analysis revealed auditory naming to be slightly less successful than visual confrontation naming in discriminating AD patients from normal subjects. Nonetheless, our Auditory Naming Test may prove useful in research and clinical practice, especially with visually-impaired patients. PMID:20981630
ERIC Educational Resources Information Center
Wessel, Dorothy
A 10-week classroom intervention program was implemented to facilitate the fine-motor development of eight first-grade children assessed as being deficient in motor skills. The program was divided according to five deficits to be remediated: visual motor, visual discrimination, visual sequencing, visual figure-ground, and visual memory. Each area…
Processing Resources in Attention, Dual Task Performance, and Workload Assessment.
1981-07-01
some levels of processing, discrete attention switching is clearly an identifiable phenomenon (LaBerge, Van Gelder, & Yellott, 1971; Kristofferson...1967, 27, 93-101. LaBerge, D., Van Gelder, P., & Yellott, S. A cueing technique in choice reaction time. Journal of Experimental Psychology, 1971, 87...city processing in auditory and visual discrimination. Acta Psychologica, 1967, 27, 223-229. Teghtsoonian, R. On the exponent in Stevens' law and the
Visual and Auditory Sensitivities and Discriminations
2003-03-03
Experimental Psychology: Human Perception and Performance, 26, 1721-1723. The data have also been reported to ECVP at the Trieste meeting, and to the Edinburgh...design to measure the disparity required to just detect the cyclopean test bars (Macmillan & Creelman, 1991). Each trial consisted of a single...conventionally (Macmillan & Creelman, 1991). Results Grating detection threshold (d' = 1.0) for observer 1 was estimated as 0.18 arc min peak-to-trough
Learning object-to-class kernels for scene classification.
Zhang, Lei; Zhen, Xiantong; Shao, Ling
2014-08-01
High-level image representations have drawn increasing attention in visual recognition, e.g., scene classification, since the invention of the object bank. The object bank represents an image as a response map of a large number of pretrained object detectors and has achieved superior performance for visual recognition. In this paper, based on the object bank representation, we propose object-to-class (O2C) distances to model scene images. In particular, four variants of O2C distances are presented, and with the O2C distances, we can represent the images using the object bank in lower-dimensional but more discriminative spaces, called distance spaces, which are spanned by the O2C distances. Because the O2C distances are computed explicitly from the object bank, the obtained representations carry more semantic meaning. To combine the discriminative ability of the O2C distances across all scene classes, we further propose to kernelize the distance representation for the final classification. We have conducted extensive experiments on four benchmark data sets, UIUC-Sports, Scene-15, MIT Indoor, and Caltech-101, which demonstrate that the proposed approaches significantly improve the original object bank approach and achieve state-of-the-art performance.
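The O2C idea can be illustrated with a toy sketch: represent each image as an object-detector response vector and define its distance to a class as the distance to that class's nearest training response. This is not the paper's exact formulation; the two-dimensional vectors, class names, and the choice of the minimum-distance variant are invented for illustration.

```python
import numpy as np

def o2c_distances(x, class_bank):
    """Map one image's object-bank response vector `x` to a vector of
    object-to-class distances: one entry per class, here the minimum
    Euclidean distance to that class's training responses (one of several
    possible O2C variants)."""
    return np.array([
        min(np.linalg.norm(x - v) for v in vecs)
        for vecs in class_bank.values()
    ])

# Toy "object bank" responses for two scene classes:
bank = {
    "beach":  [np.array([1.0, 0.0]), np.array([0.9, 0.1])],
    "indoor": [np.array([0.0, 1.0]), np.array([0.1, 0.9])],
}
d = o2c_distances(np.array([0.95, 0.05]), bank)
# `d` is the image's coordinates in the low-dimensional "distance space";
# the nearest class is the one with the smallest O2C distance.
```

A kernel over these distance vectors (e.g., an RBF on the distance space) would then play the role of the paper's kernelized representation in the final classifier.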
Hatamikia, Sepideh; Maghooli, Keivan; Nasrabadi, Ali Motie
2014-01-01
Electroencephalogram (EEG) is one of the useful biological signals for distinguishing different brain diseases and mental states. In recent years, detecting different emotional states from biological signals has attracted increasing attention from researchers, and several feature extraction methods and classifiers have been proposed to recognize emotions from EEG signals. In this research, we introduce an emotion recognition system using an autoregressive (AR) model, sequential forward feature selection (SFS) and a K-nearest neighbor (KNN) classifier, applied to EEG signals recorded during emotional audio-visual inductions. The main purpose of this paper is to investigate the performance of AR features in the classification of emotional states. To achieve this goal, a well-established AR method (Burg's method) based on the Levinson-Durbin recursive algorithm is used, and AR coefficients are extracted as feature vectors. In the next step, two different feature selection methods, based on the SFS algorithm and the Davies-Bouldin index, are used in order to decrease the complexity of computation and the redundancy of features; then, three different classifiers (KNN, quadratic discriminant analysis and linear discriminant analysis) are used to discriminate two and three different classes of valence and arousal levels. The proposed method is evaluated with EEG signals from a publicly available database for emotion analysis using physiological signals, recorded from 32 participants during 40 one-minute audio-visual inductions. According to the results, AR features are efficient for recognizing emotional states from EEG signals, and KNN performs better than the two other classifiers in discriminating both the two- and three-class valence/arousal problems. The results also show that the SFS method improves accuracies by almost 10-15% compared to Davies-Bouldin-based feature selection. The best accuracies are 72.33% and 74.20% for two classes of valence and arousal, and 61.10% and 65.16% for three classes, respectively. PMID:25298928
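The shape of this pipeline (AR coefficients as features, nearest-neighbor classification) can be sketched as follows. Note the hedges: the paper uses Burg's recursion and SFS feature selection, whereas this toy substitutes a plain least-squares AR fit and skips selection entirely; the signals, labels, and function names are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar_features(signal, order=2):
    """Fit an AR(order) model by ordinary least squares and return the
    coefficients as a feature vector. (The paper uses Burg's method via the
    Levinson-Durbin recursion; least squares is a simpler stand-in that
    gives similar estimates for well-behaved signals.)"""
    x = np.asarray(signal, dtype=float)
    y = x[order:]
    X = np.column_stack([x[order - k:-k] for k in range(1, order + 1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def knn_predict(train_feats, train_labels, x, k=3):
    """Label `x` by majority vote among its k nearest training vectors."""
    dists = np.linalg.norm(np.asarray(train_feats) - x, axis=1)
    nearest = [train_labels[i] for i in np.argsort(dists)[:k]]
    return max(set(nearest), key=nearest.count)

def make_ar(a1, a2, n=500):
    """Synthetic EEG stand-in: an AR(2) process with coefficients a1, a2."""
    x = np.zeros(n)
    for i in range(2, n):
        x[i] = a1 * x[i - 1] + a2 * x[i - 2] + rng.standard_normal()
    return x

# Two invented "emotional states", each with its own AR dynamics:
train_feats = [ar_features(make_ar(0.8, -0.4)) for _ in range(5)] + \
              [ar_features(make_ar(-0.6, 0.2)) for _ in range(5)]
train_labels = ["high_arousal"] * 5 + ["low_arousal"] * 5
pred = knn_predict(train_feats, train_labels, ar_features(make_ar(0.8, -0.4)))
# pred is "high_arousal": the probe's AR coefficients land near that cluster
```

The design point the abstract makes is that the AR coefficients themselves, not the raw signal, serve as the feature vector; the classifier only ever sees a handful of numbers per epoch.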
A Portable Platform for Evaluation of Visual Performance in Glaucoma Patients
Rosen, Peter N.; Boer, Erwin R.; Gracitelli, Carolina P. B.; Abe, Ricardo Y.; Diniz-Filho, Alberto; Marvasti, Amir H.; Medeiros, Felipe A.
2015-01-01
Purpose To propose a new tablet-enabled test for evaluation of visual performance in glaucoma, the PERformance CEntered Portable Test (PERCEPT), and to evaluate its ability to predict history of falls and motor vehicle crashes. Design Cross-sectional study. Methods The study involved 71 patients with glaucomatous visual field defects on standard automated perimetry (SAP) and 59 control subjects. The PERCEPT was based on the concept of increasing visual task difficulty to improve detection of central visual field losses in glaucoma patients. Subjects had to perform a foveal 8-alternative-forced-choice orientation discrimination task, while detecting a simultaneously presented peripheral stimulus within a limited presentation time. Subjects also underwent testing with the Useful Field of View (UFOV) divided attention test. The ability to predict history of motor vehicle crashes and falls was investigated by odds ratios and incident-rate ratios, respectively. Results When adjusted for age, only the PERCEPT processing speed parameter showed significantly larger values in glaucoma compared to controls (difference: 243ms; P<0.001). PERCEPT results had a stronger association with history of motor vehicle crashes and falls than UFOV. Each 1 standard deviation increase in PERCEPT processing speed was associated with an odds ratio of 2.69 (P = 0.003) for predicting history of motor vehicle crashes and with an incident-rate ratio of 1.95 (P = 0.003) for predicting history of falls. Conclusion A portable platform for testing visual function was able to detect functional deficits in glaucoma, and its results were significantly associated with history of involvement in motor vehicle crashes and history of falls. PMID:26445501
Kendrick, Keith M; Zhan, Yang; Fischer, Hanno; Nicol, Alister U; Zhang, Xuejuan; Feng, Jianfeng
2011-06-09
How oscillatory brain rhythms alone, or in combination, influence cortical information processing to support learning has yet to be fully established. Local field potential and multi-unit neuronal activity recordings were made from 64-electrode arrays in the inferotemporal cortex of conscious sheep during and after visual discrimination learning of face or object pairs. A neural network model has been developed to simulate and aid functional interpretation of learning-evoked changes. Following learning the amplitude of theta (4-8 Hz), but not gamma (30-70 Hz) oscillations was increased, as was the ratio of theta to gamma. Over 75% of electrodes showed significant coupling between theta phase and gamma amplitude (theta-nested gamma). The strength of this coupling was also increased following learning and this was not simply a consequence of increased theta amplitude. Actual discrimination performance was significantly correlated with theta and theta-gamma coupling changes. Neuronal activity was phase-locked with theta but learning had no effect on firing rates or the magnitude or latencies of visual evoked potentials during stimuli. The neural network model developed showed that a combination of fast and slow inhibitory interneurons could generate theta-nested gamma. By increasing N-methyl-D-aspartate receptor sensitivity in the model similar changes were produced as in inferotemporal cortex after learning. The model showed that these changes could potentiate the firing of downstream neurons by a temporal desynchronization of excitatory neuron output without increasing the firing frequencies of the latter. This desynchronization effect was confirmed in IT neuronal activity following learning and its magnitude was correlated with discrimination performance. Face discrimination learning produces significant increases in both theta amplitude and the strength of theta-gamma coupling in the inferotemporal cortex which are correlated with behavioral performance.
A network model which can reproduce these changes suggests that a key function of such learning-evoked alterations in theta and theta-nested gamma activity may be increased temporal desynchronization in neuronal firing leading to optimal timing of inputs to downstream neural networks potentiating their responses. In this way learning can produce potentiation in neural networks simply through altering the temporal pattern of their inputs. PMID:21658251
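Theta-nested gamma of the kind reported here is commonly quantified with a mean-vector-length coupling measure on Hilbert-derived phase and amplitude. The abstract does not specify the authors' estimator, so the sketch below is a generic one applied to synthetic signals.

```python
import numpy as np
from scipy.signal import hilbert

def pac_strength(slow, fast):
    """Mean-vector-length estimate of phase-amplitude coupling:
    |mean(A_fast(t) * exp(i * phi_slow(t)))| / mean(A_fast), with phase and
    amplitude taken from the Hilbert transform. Near 0 means no coupling."""
    phase = np.angle(hilbert(slow))
    amp = np.abs(hilbert(fast))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
theta = np.cos(2 * np.pi * 6 * t)               # 6 Hz "theta" rhythm
envelope = 1.0 + np.cos(2 * np.pi * 6 * t)      # gamma amplitude locked to theta phase
nested = envelope * np.cos(2 * np.pi * 40 * t)  # theta-nested gamma
flat = np.cos(2 * np.pi * 40 * t)               # gamma with no nesting

# pac_strength(theta, nested) ≈ 0.5, while pac_strength(theta, flat) ≈ 0
```

In real recordings the slow and fast components would first be isolated by band-pass filtering the same electrode's signal; here the two components are constructed directly so the coupling is known by design.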
Hierarchical Learning of Tree Classifiers for Large-Scale Plant Species Identification.
Fan, Jianping; Zhou, Ning; Peng, Jinye; Gao, Ling
2015-11-01
In this paper, a hierarchical multi-task structural learning algorithm is developed to support large-scale plant species identification, where a visual tree is constructed for organizing large numbers of plant species in a coarse-to-fine fashion and determining the inter-related learning tasks automatically. For a given parent node on the visual tree, it contains a set of sibling coarse-grained categories of plant species or sibling fine-grained plant species, and a multi-task structural learning algorithm is developed to train their inter-related classifiers jointly for enhancing their discrimination power. The inter-level relationship constraint, e.g., a plant image must first be assigned to a parent node (high-level non-leaf node) correctly if it can further be assigned to the most relevant child node (low-level non-leaf node or leaf node) on the visual tree, is formally defined and leveraged to learn more discriminative tree classifiers over the visual tree. Our experimental results have demonstrated the effectiveness of our hierarchical multi-task structural learning algorithm on training more discriminative tree classifiers for large-scale plant species identification.
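The inter-level constraint described above can be illustrated with a toy coarse-to-fine classifier: a sample reaches a leaf only through the parent node it was first routed to. A nearest-centroid rule stands in for the paper's jointly learned classifiers, and the tree, feature vectors, and species names are invented for the sketch.

```python
import numpy as np

# Toy visual tree: coarse parent node -> fine-grained species centroids.
tree = {
    "conifer":   {"pine":  np.array([1.0, 0.0]), "spruce": np.array([0.8, 0.2])},
    "broadleaf": {"oak":   np.array([0.0, 1.0]), "maple":  np.array([0.2, 0.8])},
}

def classify(x):
    """Coarse-to-fine prediction: pick the parent whose mean centroid is
    nearest, then the nearest child within that parent only, so a sample
    can reach a leaf only via its assigned high-level node."""
    parent = min(
        tree,
        key=lambda p: np.linalg.norm(x - np.mean(list(tree[p].values()), axis=0)),
    )
    child = min(tree[parent], key=lambda c: np.linalg.norm(x - tree[parent][c]))
    return parent, child

# classify(np.array([0.95, 0.05])) -> ("conifer", "pine")
```

The practical payoff of this structure is that a fine-grained mistake can only occur among siblings of the chosen parent, which is what makes joint training of sibling classifiers at each node worthwhile.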
de Borst, Aline W; de Gelder, Beatrice
2017-08-01
Previous studies have shown that the early visual cortex contains content-specific representations of stimuli during visual imagery, and that these representational patterns of imagery content have a perceptual basis. To date, there is little evidence for the presence of a similar organization in the auditory and tactile domains. Using fMRI-based multivariate pattern analyses we showed that primary somatosensory, auditory, motor, and visual cortices are discriminative for imagery of touch versus sound. In the somatosensory, motor and visual cortices the imagery modality discriminative patterns were similar to perception modality discriminative patterns, suggesting that top-down modulations in these regions rely on similar neural representations as bottom-up perceptual processes. Moreover, we found evidence for content-specific representations of the stimuli during auditory imagery in the primary somatosensory and primary motor cortices. Both the imagined emotions and the imagined identities of the auditory stimuli could be successfully classified in these regions.
Do Rats Use Shape to Solve "Shape Discriminations"?
ERIC Educational Resources Information Center
Minini, Loredana; Jeffery, Kathryn J.
2006-01-01
Visual discrimination tasks are increasingly used to explore the neurobiology of vision in rodents, but it remains unclear how the animals solve these tasks: Do they process shapes holistically, or by using low-level features such as luminance and angle acuity? In the present study we found that when discriminating triangles from squares, rats did…
Face and Object Discrimination in Autism, and Relationship to IQ and Age
ERIC Educational Resources Information Center
Pallett, Pamela M.; Cohen, Shereen J.; Dobkins, Karen R.
2014-01-01
The current study tested fine discrimination of upright and inverted faces and objects in adolescents with Autism Spectrum Disorder (ASD) as compared to age- and IQ-matched controls. Discrimination sensitivity was tested using morphed faces and morphed objects, and all stimuli were equated in low-level visual characteristics (luminance, contrast,…
Clery, Stephane; Cumming, Bruce G.
2017-01-01
Fine judgments of stereoscopic depth rely mainly on relative judgments of depth (relative binocular disparity) between objects, rather than judgments of the distance to where the eyes are fixating (absolute disparity). In macaques, visual area V2 is the earliest site in the visual processing hierarchy for which neurons selective for relative disparity have been observed (Thomas et al., 2002). Here, we found that, in macaques trained to perform a fine disparity discrimination task, disparity-selective neurons in V2 were highly selective for the task, and their activity correlated with the animals' perceptual decisions (unexplained by the stimulus). This may partially explain similar correlations reported in downstream areas. Although compatible with a perceptual role of these neurons for the task, the interpretation of such decision-related activity is complicated by the effects of interneuronal “noise” correlations between sensory neurons. Recent work has developed simple predictions to differentiate decoding schemes (Pitkow et al., 2015) without needing measures of noise correlations, and found that data from early sensory areas were compatible with optimal linear readout of populations with information-limiting correlations. In contrast, our data here deviated significantly from these predictions. We additionally tested this prediction for previously reported results of decision-related activity in V2 for a related task, coarse disparity discrimination (Nienborg and Cumming, 2006), thought to rely on absolute disparity. Although these data followed the predicted pattern, they violated the prediction quantitatively. This suggests that optimal linear decoding of sensory signals is not generally a good predictor of behavior in simple perceptual tasks. SIGNIFICANCE STATEMENT Activity in sensory neurons that correlates with an animal's decision is widely believed to provide insights into how the brain uses information from sensory neurons. 
Recent theoretical work developed simple predictions to differentiate decoding schemes, and found support for optimal linear readout of early sensory populations with information-limiting correlations. Here, we observed decision-related activity for neurons in visual area V2 of macaques performing fine disparity discrimination, the earliest cortical site for which such activity has been reported in this task. These findings, and previously reported results from V2 in a different task, deviated from the predictions for optimal linear readout of a population with information-limiting correlations. Our results suggest that optimal linear decoding of early sensory information is not a general decoding strategy used by the brain. PMID:28100751
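The "decision-related activity" discussed above is conventionally quantified as choice probability: the ROC area separating a neuron's spike counts on trials grouped by the animal's choice, for nominally identical stimuli. As an illustrative sketch (not the authors' analysis code; function and variable names are hypothetical), the rank-based computation is:

```python
import numpy as np

def choice_probability(counts_pref, counts_null):
    """ROC area between spike-count distributions grouped by the animal's
    choice (preferred vs. null) for nominally identical stimuli.

    0.5 means firing is unrelated to choice; values above 0.5 mean higher
    counts on preferred-choice trials. Equivalent to the Mann-Whitney U
    statistic normalized by n1 * n2.
    """
    a = np.asarray(counts_pref, dtype=float)
    b = np.asarray(counts_null, dtype=float)
    # pairwise comparisons: count wins, give half credit for ties
    wins = (a[:, None] > b[None, :]).sum()
    ties = (a[:, None] == b[None, :]).sum()
    return (wins + 0.5 * ties) / (a.size * b.size)
```

In practice counts are z-scored within stimulus condition before pooling across conditions; that normalization step is omitted here for brevity.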
Basic visual function and cortical thickness patterns in posterior cortical atrophy.
Lehmann, Manja; Barnes, Josephine; Ridgway, Gerard R; Wattam-Bell, John; Warrington, Elizabeth K; Fox, Nick C; Crutch, Sebastian J
2011-09-01
Posterior cortical atrophy (PCA) is characterized by a progressive decline in higher-visual object and space processing, but the extent to which these deficits are underpinned by basic visual impairments is unknown. This study aimed to assess basic and higher-order visual deficits in 21 PCA patients. Basic visual skills including form detection and discrimination, color discrimination, motion coherence, and point localization were measured, and associations and dissociations between specific basic visual functions and measures of higher-order object and space perception were identified. All participants showed impairment in at least one aspect of basic visual processing. However, a number of dissociations between basic visual skills indicated a heterogeneous pattern of visual impairment among the PCA patients. Furthermore, basic visual impairments were associated with particular higher-order object and space perception deficits, but not with nonvisual parietal tasks, suggesting the specific involvement of visual networks in PCA. Cortical thickness analysis revealed trends toward lower cortical thickness in occipitotemporal (ventral) and occipitoparietal (dorsal) regions in patients with visuoperceptual and visuospatial deficits, respectively. However, there was also considerable overlap in their patterns of cortical thinning. These findings suggest that different presentations of PCA represent points in a continuum of phenotypical variation.
Navigation performance in virtual environments varies with fractal dimension of landscape.
Juliani, Arthur W; Bies, Alexander J; Boydston, Cooper R; Taylor, Richard P; Sereno, Margaret E
2016-09-01
Fractal geometry has been used to describe natural and built environments, but has yet to be studied in navigational research. In order to establish a relationship between the fractal dimension (D) of a natural environment and humans' ability to navigate such spaces, we conducted two experiments using virtual environments that simulate the fractal properties of nature. In Experiment 1, participants completed a goal-driven search task either with or without a map in landscapes that varied in D. In Experiment 2, participants completed a map-reading and location-judgment task in separate sets of fractal landscapes. In both experiments, task performance was highest at the low-to-mid range of D, which was previously reported as most preferred and discriminable in studies of fractal aesthetics and discrimination, respectively, supporting a theory of visual fluency. The applicability of these findings to architecture, urban planning, and the general design of constructed spaces is discussed.
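The fractal dimension D manipulated above is commonly estimated by box counting: cover the pattern with boxes of decreasing size and measure how the number of occupied boxes scales. A minimal sketch for a binary 2-D image (illustrative only; the study generated landscapes with a target D rather than estimating it this way):

```python
import numpy as np

def box_count_dimension(img, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension D of a binary 2-D array by box
    counting: for each box size s, count boxes containing at least one
    filled pixel; D is the slope of log(count) versus log(1/s)."""
    img = np.asarray(img, dtype=bool)
    counts = []
    for s in box_sizes:
        # trim so the image tiles evenly into s x s boxes
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        tiles = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(tiles.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)),
                          np.log(counts), 1)
    return slope
```

A filled plane yields D near 2 and a straight line yields D near 1; natural terrains fall in between, which is the range the experiments sample.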
McDonald, Robert J; Jones, Jana; Richards, Blake; Hong, Nancy S
2006-09-01
The objectives of this research were to further delineate the neural circuits subserving proposed memory-based behavioural subsystems in the hippocampal formation. These studies were guided by anatomical evidence showing a topographical organization of the hippocampal formation. Briefly, perpendicular to the medial/lateral entorhinal cortex division there is a second system of parallel circuits that separates the dorsal and ventral hippocampus. Recent work from this laboratory has provided evidence that the hippocampus incidentally encodes a context-specific inhibitory association during acquisition of a visual discrimination task. One question that emerges from this dataset is whether the dorsal or ventral hippocampus makes a unique contribution to this newly described function. Rats with neurotoxic lesions of the dorsal or ventral hippocampus were assessed on the acquisition of the visual discrimination task. Following asymptotic performance they were given reversal training in either the same or a different context from the original training. The results showed that the context-specific inhibition effect is mediated by a circuit that includes the ventral but not the dorsal hippocampus. Results from a control procedure showed that rats with either dorso-lateral striatum damage or dorsal hippocampal lesions were impaired on a tactile/spatial discrimination. Taken together, the results represent a double dissociation of learning and memory function between the ventral and dorsal hippocampus. The formation of an incidental inhibitory association was dependent on ventral but not dorsal hippocampal circuitry, and the opposite dependence was found for the spatial component of a tactile/spatial discrimination.
Pearce, Bradley; Crichton, Stuart; Mackiewicz, Michal; Finlayson, Graham D.; Hurlbert, Anya
2014-01-01
The phenomenon of colour constancy in human visual perception keeps surface colours constant, despite changes in their reflected light due to changing illumination. Although colour constancy has evolved under a constrained subset of illuminations, it is unknown whether its underlying mechanisms, thought to involve multiple components from retina to cortex, are optimised for particular environmental variations. Here we demonstrate a new method for investigating colour constancy using illumination matching in real scenes which, unlike previous methods using surface matching and simulated scenes, allows testing of multiple, real illuminations. We use real scenes consisting of solid familiar or unfamiliar objects against uniform or variegated backgrounds and compare discrimination performance for typical illuminations from the daylight chromaticity locus (approximately blue-yellow) and atypical spectra from an orthogonal locus (approximately red-green, at correlated colour temperature 6700 K), all produced in real time by a 10-channel LED illuminator. We find that discrimination of illumination changes is poorer along the daylight locus than the atypical locus, and is poorest particularly for bluer illumination changes, demonstrating conversely that surface colour constancy is best for blue daylight illuminations. Illumination discrimination is also enhanced, and therefore colour constancy diminished, for uniform backgrounds, irrespective of the object type. These results are not explained by statistical properties of the scene signal changes at the retinal level. We conclude that high-level mechanisms of colour constancy are biased for the blue daylight illuminations and variegated backgrounds to which the human visual system has typically been exposed. PMID:24586299
Stobbe, Nina; Westphal-Fitch, Gesche; Aust, Ulrike; Fitch, W. Tecumseh
2012-01-01
Artificial grammar learning (AGL) provides a useful tool for exploring rule learning strategies linked to general purpose pattern perception. To be able to directly compare performance of humans with other species with different memory capacities, we developed an AGL task in the visual domain. Presenting entire visual patterns simultaneously instead of sequentially minimizes the amount of required working memory. This approach allowed us to evaluate performance levels of two bird species, kea (Nestor notabilis) and pigeons (Columba livia), in direct comparison to human participants. After being trained to discriminate between two types of visual patterns generated by rules at different levels of computational complexity and presented on a computer screen, birds and humans received further training with a series of novel stimuli that followed the same rules, but differed in various visual features from the training stimuli. Most avian and all human subjects continued to perform well above chance during this initial generalization phase, suggesting that they were able to generalize learned rules to novel stimuli. However, detailed testing with stimuli that violated the intended rules regarding the exact number of stimulus elements indicates that neither bird species was able to successfully acquire the intended pattern rule. Our data suggest that, in contrast to humans, these birds were unable to master a simple rule above the finite-state level, even with simultaneous item presentation and despite intensive training. PMID:22688635
Baldwin, C M; Houston, F P; Podgornik, M N; Young, R S; Barnes, C A; Witten, M L
2001-01-01
To determine whether JP-8 jet fuel affects parameters of the Functional Observational Battery (FOB), visual discrimination, or spatial learning and memory, the authors exposed groups of male Fischer Brown Norway hybrid rats for 28 d to aerosol/vapor-delivered JP-8, or to JP-8 followed by 15 min of aerosolized substance P analogue, or to sham-confined fresh room air. Behavioral testing was accomplished with the U.S. Environmental Protection Agency's Functional Observational Battery. The authors used the Morris swim task to test visual discrimination and spatial learning and memory. The spatial test included examination of memory for the original target location following 15 d of JP-8 exposure, as well as a 3-d new target location learning paradigm implemented the day that followed the final day of exposure. Only JP-8 exposed animals had significant weight loss by the 2nd week of exposure compared with JP-8 with substance P and control rats; this finding is consistent with prior studies of JP-8 jet fuel. Rats exposed to JP-8 with or without substance P exhibited significantly greater rearing and less grooming behavior over time than did controls during Functional Observational Battery open-field testing. Exposed rats also swam significantly faster than controls during the new target location training and testing, thus supporting the increased activity noted during Functional Observational Battery testing. There were no significant differences between the exposed and control groups' performances during acquisition, retention, or learning of the new platform location in either the visual discrimination or spatial version of the Morris swim task. The data suggest that although visual discrimination and spatial learning and memory were not disrupted by JP-8 exposure, arousal indices and activity measures were distinctly different in these animals.
Picchioni, Dante; Schmidt, Kathleen C; McWhirter, Kelly K; Loutaev, Inna; Pavletic, Adriana J; Speer, Andrew M; Zametkin, Alan J; Miao, Ning; Bishu, Shrinivas; Turetsky, Kate M; Morrow, Anne S; Nadel, Jeffrey L; Evans, Brittney C; Vesselinovitch, Diana M; Sheeler, Carrie A; Balkin, Thomas J; Smith, Carolyn B
2018-05-15
If protein synthesis during sleep is required for sleep-dependent memory consolidation, we might expect rates of cerebral protein synthesis (rCPS) to increase during sleep in the local brain circuits that support performance on a particular task following training on that task. To measure circuit-specific brain protein synthesis during a daytime nap opportunity, we used the L-[1-(11)C]leucine positron emission tomography (PET) method with simultaneous polysomnography. We trained subjects on the visual texture discrimination task (TDT). This was followed by a nap opportunity during the PET scan, and we retested them later in the day after the scan. The TDT is considered retinotopically specific, so we hypothesized that higher rCPS in primary visual cortex would be observed in the trained hemisphere compared to the untrained hemisphere in subjects who were randomized to a sleep condition. Our results indicate that the changes in rCPS in primary visual cortex depended on whether subjects were in the wakefulness or sleep condition but were independent of the side of the visual field trained. That is, only in the subjects randomized to sleep, rCPS in the right primary visual cortex was higher than the left regardless of side trained. Other brain regions examined were not so affected. In the subjects who slept, performance on the TDT improved similarly regardless of the side trained. Results indicate a regionally selective and sleep-dependent effect that occurs with improved performance on the TDT.
Melara, Robert D.; Singh, Shalini; Hien, Denise A.
2018-01-01
Two groups of healthy young adults were exposed to 3 weeks of cognitive training in a modified version of the visual flanker task, one group trained to discriminate the target (discrimination training) and the other group to ignore the flankers (inhibition training). Inhibition training, but not discrimination training, led to significant reductions in both Garner interference, indicating improved selective attention, and in Stroop interference, indicating more efficient resolution of stimulus conflict. The behavioral gains from training were greatest in participants who showed the poorest selective attention at pretest. Electrophysiological recordings revealed that inhibition training increased the magnitude of Rejection Positivity (RP) to incongruent distractors, an event-related potential (ERP) component associated with inhibitory control. Source modeling of RP uncovered a dipole in the medial frontal gyrus for those participants receiving inhibition training, but in the cingulate gyrus for those participants receiving discrimination training. Results suggest that inhibitory control is plastic; inhibition training improves conflict resolution, particularly in individuals with poor attention skills. PMID:29875644
A biologically plausible computational model for auditory object recognition.
Larson, Eric; Billimoria, Cyrus P; Sen, Kamal
2009-01-01
Object recognition is a task of fundamental importance for sensory systems. Although this problem has been intensively investigated in the visual system, relatively little is known about the recognition of complex auditory objects. Recent work has shown that spike trains from individual sensory neurons can be used to discriminate between and recognize stimuli. Multiple groups have developed spike similarity or dissimilarity metrics to quantify the differences between spike trains. Using a nearest-neighbor approach, these metrics can be used to assign spike trains to the stimulus groups that evoked them. The nearest prototype spike train to the tested spike train can then be used to identify the stimulus. However, how biological circuits might perform such computations remains unclear. Elucidating this question would facilitate the experimental search for such circuits in biological systems, as well as the design of artificial circuits that can perform such computations. Here we present a biologically plausible model for discrimination inspired by a spike distance metric using a network of integrate-and-fire model neurons coupled to a decision network. We then apply this model to the birdsong system in the context of song discrimination and recognition. We show that the model circuit is effective at recognizing individual songs, based on experimental input data from field L, the avian primary auditory cortex analog. We also compare the performance and robustness of this model to two alternative models of song discrimination: a model based on coincidence detection and a model based on firing rate.
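The spike-distance-plus-nearest-prototype scheme described above can be sketched directly. The example below uses a van Rossum-style distance (exponential filtering of each train, then an L2 difference); the paper's model implements this with spiking neurons, so treat the function names and parameters here as illustrative assumptions:

```python
import numpy as np

def van_rossum_distance(t1, t2, tau=10.0, dt=1.0, t_max=1000.0):
    """Distance between two spike trains (spike times in ms): convolve
    each train with a causal exponential kernel of time constant tau,
    then integrate the squared difference of the filtered waveforms."""
    t = np.arange(0.0, t_max, dt)

    def filtered(times):
        out = np.zeros_like(t)
        for s in times:
            m = t >= s
            out[m] += np.exp(-(t[m] - s) / tau)
        return out

    diff = filtered(t1) - filtered(t2)
    return np.sqrt(np.sum(diff ** 2) * dt / tau)

def nearest_prototype(test_train, prototypes, **kw):
    """Classify a spike train by the label of its nearest prototype."""
    dists = {label: van_rossum_distance(test_train, proto, **kw)
             for label, proto in prototypes.items()}
    return min(dists, key=dists.get)
```

A test train jittered by a few milliseconds relative to one prototype is assigned to that prototype, mirroring the template-matching logic the decision network implements.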
Perception of Self-Motion and Regulation of Walking Speed in Young-Old Adults.
Lalonde-Parsi, Marie-Jasmine; Lamontagne, Anouk
2015-07-01
Whether a reduced perception of self-motion contributes to poor walking speed adaptations in older adults is unknown. In this study, speed discrimination thresholds (perceptual task) and walking speed adaptations (walking task) were compared between young (19-27 years) and young-old individuals (63-74 years), and the relationship between the performance on the two tasks was examined. Participants were evaluated while viewing a virtual corridor in a helmet-mounted display. Speed discrimination thresholds were determined using a staircase procedure. Walking speed modulation was assessed on a self-paced treadmill while exposed to different self-motion speeds ranging from 0.25 to 2 times the participants' comfortable speed. For each speed, participants were instructed to match the self-motion speed described by the moving corridor. On the walking task, participants displayed smaller walking speed errors at comfortable walking speeds compared with slower or faster speeds. The young-old adults presented larger speed discrimination thresholds (perceptual experiment) and larger walking speed errors (walking experiment) compared with young adults. Larger walking speed errors were associated with higher discrimination thresholds. The enhanced performance on the walking task at comfortable speed suggests that intersensory calibration processes are influenced by experience, hence optimized for frequently encountered conditions. The altered performance of the young-old adults on the perceptual and walking tasks, as well as the relationship observed between the two tasks, suggests that a poor perception of visual motion information may contribute to the poor walking speed adaptations that arise with aging.
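The staircase procedure used to estimate discrimination thresholds can be sketched as follows. This is a generic two-down/one-up rule, which converges on the roughly 70.7%-correct point of the psychometric function; the abstract does not specify which rule the authors used, so the class and parameters here are illustrative:

```python
class Staircase:
    """Two-down/one-up adaptive staircase: two consecutive correct
    responses make the task harder (smaller level), one error makes it
    easier; the threshold is read off the reversal points."""

    def __init__(self, start, step):
        self.level = start
        self.step = step
        self.correct_run = 0
        self.reversals = []
        self._last_dir = 0

    def update(self, correct):
        if correct:
            self.correct_run += 1
            if self.correct_run == 2:   # two in a row -> step down
                self.correct_run = 0
                self._move(-1)
        else:                           # one miss -> step up
            self.correct_run = 0
            self._move(+1)

    def _move(self, direction):
        # a change of direction marks a reversal of the staircase
        if self._last_dir and direction != self._last_dir:
            self.reversals.append(self.level)
        self._last_dir = direction
        self.level = max(0.0, self.level + direction * self.step)

def threshold(stair, n_last=4):
    """Threshold estimate: mean level over the last few reversals."""
    tail = stair.reversals[-n_last:]
    return sum(tail) / len(tail)
```

Running such a staircase on each participant yields the speed discrimination threshold that the study then relates to walking speed errors.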
Gao, Dashan; Vasconcelos, Nuno
2009-01-01
A decision-theoretic formulation of visual saliency, first proposed for top-down processing (object recognition) (Gao & Vasconcelos, 2005a), is extended to the problem of bottom-up saliency. Under this formulation, optimality is defined in the minimum probability of error sense, under a constraint of computational parsimony. The saliency of the visual features at a given location of the visual field is defined as the power of those features to discriminate between the stimulus at the location and a null hypothesis. For bottom-up saliency, this is the set of visual features that surround the location under consideration. Discrimination is defined in an information-theoretic sense and the optimal saliency detector derived for a class of stimuli that complies with known statistical properties of natural images. It is shown that under the assumption that saliency is driven by linear filtering, the optimal detector consists of what is usually referred to as the standard architecture of V1: a cascade of linear filtering, divisive normalization, rectification, and spatial pooling. The optimal detector is also shown to replicate the fundamental properties of the psychophysics of saliency: stimulus pop-out, saliency asymmetries for stimulus presence versus absence, disregard of feature conjunctions, and Weber's law. Finally, it is shown that the optimal saliency architecture can be applied to the solution of generic inference problems. In particular, for the class of stimuli studied, it performs the three fundamental operations of statistical inference: assessment of probabilities, implementation of Bayes decision rule, and feature selection.
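The core idea above, saliency as the power of local features to discriminate a location from its surround, can be approximated with a simple divergence between feature histograms. This sketch is a stand-in for the information-theoretic measure in the paper (which uses mutual information under natural-image statistics); the function name and binning are assumptions:

```python
import numpy as np

def discriminant_saliency(center, surround, bins=16):
    """Score a location by how discriminable its feature responses are
    from the surround's: the KL divergence between histograms of center
    and surround feature values (a crude proxy for the mutual-information
    criterion of the decision-theoretic formulation)."""
    lo = min(center.min(), surround.min())
    hi = max(center.max(), surround.max())
    pc, _ = np.histogram(center, bins=bins, range=(lo, hi), density=True)
    ps, _ = np.histogram(surround, bins=bins, range=(lo, hi), density=True)
    pc = pc + 1e-9
    ps = ps + 1e-9          # smooth to avoid log(0)
    pc /= pc.sum()
    ps /= ps.sum()
    return float(np.sum(pc * np.log(pc / ps)))
```

A uniform texture scores near zero (center matches surround), while a pop-out target whose feature distribution differs from its surround scores high, qualitatively reproducing the pop-out behavior the paper derives.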
Truppa, Valentina; Carducci, Paola; Trapanese, Cinzia; Hanus, Daniel
2015-01-01
Most experimental paradigms to study visual cognition in humans and non-human species are based on discrimination tasks involving the choice between two or more visual stimuli. To this end, different types of stimuli and procedures for stimuli presentation are used, which highlights the necessity to compare data obtained with different methods. The present study assessed whether, and to what extent, capuchin monkeys’ ability to solve a size discrimination problem is influenced by the type of procedure used to present the problem. Capuchins’ ability to generalise knowledge across different tasks was also evaluated. We trained eight adult tufted capuchin monkeys to select the larger of two stimuli of the same shape and different sizes by using pairs of food items (Experiment 1), computer images (Experiment 1) and objects (Experiment 2). Our results indicated that monkeys achieved the learning criterion faster with food stimuli compared to both images and objects. They also required consistently fewer trials with objects than with images. Moreover, female capuchins had higher levels of acquisition accuracy with food stimuli than with images. Finally, capuchins did not immediately transfer the solution of the problem acquired in one task condition to the other conditions. Overall, these findings suggest that – even in relatively simple visual discrimination problems where a single perceptual dimension (i.e., size) has to be judged – learning speed strongly depends on the mode of presentation. PMID:25927363
Wu, Lin; Wang, Yang; Pan, Shirui
2017-12-01
It is now well established that sparse representation models work effectively for many visual recognition tasks, and have pushed forward the success of dictionary learning therein. Recent studies of dictionary learning focus on learning discriminative atoms instead of purely reconstructive ones. However, the existence of intraclass diversities (data objects within the same category that exhibit large visual dissimilarities) and interclass similarities (data objects from distinct classes that share substantial visual similarity) makes it challenging to learn effective recognition models. To this end, a large number of labeled data objects are required to learn models which can effectively characterize these subtle differences. However, labeled data objects are often scarce, making it difficult to learn a monolithic dictionary that can be discriminative enough. To address the above limitations, in this paper, we propose a weakly-supervised dictionary learning method to automatically learn a discriminative dictionary by fully exploiting visual attribute correlations rather than label priors. In particular, the intrinsic attribute correlations are deployed as a critical cue to guide the process of object categorization, and then a set of subdictionaries are jointly learned with respect to each category. The resulting dictionary is highly discriminative and leads to intraclass diversity aware sparse representations. Extensive experiments on image classification and object recognition are conducted to show the effectiveness of our approach.
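Once per-category sub-dictionaries are learned, classification typically proceeds by coding a test signal against each sub-dictionary and choosing the class with the smallest reconstruction residual. The sketch below uses plain least squares instead of a sparse coder, so it should be read as a simplified stand-in for the paper's method, with hypothetical names throughout:

```python
import numpy as np

def residual_classify(x, subdicts):
    """Classify signal x by the per-class sub-dictionary that
    reconstructs it best: code x against each class's atoms (here by
    least squares rather than sparse coding) and return the class with
    the smallest reconstruction residual."""
    residuals = {}
    for label, D in subdicts.items():       # D: (n_features, n_atoms)
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        residuals[label] = np.linalg.norm(x - D @ coef)
    return min(residuals, key=residuals.get)
```

Signals lying near one class's subspace are reconstructed with low error by that class's atoms and high error by the others, which is the property a discriminative dictionary is trained to amplify.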
Aging and the discrimination of 3-D shape from motion and binocular disparity.
Norman, J Farley; Holmin, Jessica S; Beers, Amanda M; Cheeseman, Jacob R; Ronning, Cecilia; Stethen, Angela G; Frost, Adam L
2012-10-01
Two experiments evaluated the ability of younger and older adults to visually discriminate 3-D shape as a function of surface coherence. The coherence was manipulated by embedding the 3-D surfaces in volumetric noise (e.g., for a 55 % coherent surface, 55 % of the stimulus points fell on a 3-D surface, while 45 % of the points occupied random locations within the same volume of space). The 3-D surfaces were defined by static binocular disparity, dynamic binocular disparity, and motion. The results of both experiments demonstrated significant effects of age: Older adults required more coherence (tolerated volumetric noise less) for reliable shape discrimination than did younger adults. Motion-defined and static-binocular-disparity-defined surfaces resulted in similar coherence thresholds. However, performance for dynamic-binocular-disparity-defined surfaces was superior (i.e., the observers' surface coherence thresholds were lowest for these stimuli). The results of both experiments showed that younger and older adults possess considerable tolerance to the disrupting effects of volumetric noise; the observers could reliably discriminate 3-D surface shape even when 45 % of the stimulus points (or more) constituted noise.
Yeari, Menahem; Isser, Michal; Schiff, Rachel
2017-07-01
A controversy has recently developed regarding the hypothesis that developmental dyslexia may be caused, in some cases, by a reduced visual attention span (VAS). To examine this hypothesis, independent of phonological abilities, researchers tested the ability of dyslexic participants to recognize arrays of unfamiliar visual characters. Employing this test, findings were rather equivocal: dyslexic participants exhibited poor performance in some studies but normal performance in others. The present study explored four methodological differences revealed between the two sets of studies that might underlie their conflicting results. Specifically, in two experiments we examined whether a VAS deficit is (a) specific to recognition of multi-character arrays as wholes rather than of individual characters within arrays, (b) specific to characters' position within arrays rather than to characters' identity, or revealed only under a higher attention load due to (c) low-discriminable characters, and/or (d) characters' short exposure. Furthermore, in this study we examined whether pure dyslexic participants who do not have attention disorder exhibit a reduced VAS. Although comorbidity of dyslexia and attention disorder is common and the ability to sustain attention for a long time plays a major role in the visual recognition task, the presence of attention disorder was neither evaluated nor ruled out in previous studies. Findings did not reveal any differences between the performance of dyslexic and control participants on eight versions of the visual recognition task. These findings suggest that pure dyslexic individuals do not present a reduced visual attention span.
Monkey pulvinar neurons fire differentially to snake postures.
Le, Quan Van; Isbell, Lynne A; Matsumoto, Jumpei; Le, Van Quang; Hori, Etsuro; Tran, Anh Hai; Maior, Rafael S; Tomaz, Carlos; Ono, Taketoshi; Nishijo, Hisao
2014-01-01
There is growing evidence from both behavioral and neurophysiological approaches that primates are able to rapidly discriminate visually between snakes and innocuous stimuli. Recent behavioral evidence suggests that primates are also able to discriminate the level of threat posed by snakes, by responding more intensely to a snake model poised to strike than to snake models in coiled or sinusoidal postures (Etting and Isbell 2014). In the present study, we examine the potential for an underlying neurological basis for this ability. Previous research indicated that the pulvinar is highly sensitive to snake images. We thus recorded pulvinar neurons in Japanese macaques (Macaca fuscata) while they viewed photos of snakes in striking and non-striking postures in a delayed non-matching to sample (DNMS) task. Of 821 neurons recorded, 78 visually responsive neurons were tested with all the snake images. We found that pulvinar neurons in the medial and dorsolateral pulvinar responded more strongly to snakes in threat displays poised to strike than to snakes in non-threat-displaying postures, with no significant difference in response latencies. A multidimensional scaling analysis of the 78 visually responsive neurons indicated that threat-displaying and non-threat-displaying snakes were separated into two different clusters in the first epoch of 50 ms after stimulus onset, suggesting bottom-up visual information processing. These results indicate that pulvinar neurons in primates discriminate snakes poised to strike from those in non-threat-displaying postures. This neuronal ability likely facilitates behavioral discrimination and has clear adaptive value. Our results are thus consistent with the Snake Detection Theory, which posits that snakes were instrumental in the evolution of primate visual systems.
Visual cues for woodpeckers: light reflectance of decayed wood varies by decay fungus
O'Daniels, Sean T.; Kesler, Dylan C.; Mihail, Jeanne D.; Webb, Elisabeth B.; Werner, Scott J.
2018-01-01
The appearance of wood substrates is likely relevant to bird species with life histories that require regular interactions with wood for food and shelter. Woodpeckers detect decayed wood for cavity placement or foraging, and some species may be capable of detecting trees decayed by specific fungi; however, a mechanism allowing for such specificity remains unidentified. We hypothesized that decay fungi associated with woodpecker cavity sites alter the substrate reflectance in a species-specific manner that is visually discriminable by woodpeckers. We grew 10 species of wood decay fungi from pure cultures on sterile wood substrates of 3 tree species. We then measured the relative reflectance spectra of decayed and control wood wafers and compared them using the receptor noise-limited (RNL) color discrimination model. The RNL model has been used in studies of feather coloration, egg shells, flowers, and fruit to model how the colors of objects appear to birds. Our analyses indicated 6 of 10 decayed substrate/control comparisons were above the threshold of discrimination (i.e., indicating differences discriminable by avian viewers), and 12 of 13 decayed substrate comparisons were also above threshold for a hypothetical woodpecker. We conclude that woodpeckers should be capable of visually detecting decayed wood on trees where bark is absent, and they should also be able to detect visually species-specific differences in wood substrates decayed by fungi used in this study. Our results provide evidence for a visual mechanism by which woodpeckers could identify and select substrates decayed by specific fungi, which has implications for understanding ecologically important woodpecker–fungus interactions.
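The receptor-noise-limited (RNL) comparison described above reduces to a closed-form chromatic distance. Assuming the standard Vorobyev-Osorio trichromatic form (the study's actual receptor set and noise values are not given in the abstract, so the inputs here are placeholders), the computation is:

```python
import numpy as np

def rnl_delta_s(q_a, q_b, noise):
    """Receptor-noise-limited (Vorobyev-Osorio) chromatic distance for a
    trichromatic viewer.  q_a, q_b: receptor quantum catches (length 3)
    for the two stimuli; noise: per-receptor noise values e_i.
    Distances above roughly 1 JND are taken as discriminable."""
    df = np.log(np.asarray(q_a, float) / np.asarray(q_b, float))
    e1, e2, e3 = noise
    num = ((e1 * (df[1] - df[2])) ** 2
           + (e2 * (df[0] - df[2])) ** 2
           + (e3 * (df[0] - df[1])) ** 2)
    den = (e1 * e2) ** 2 + (e1 * e3) ** 2 + (e2 * e3) ** 2
    return float(np.sqrt(num / den))
```

Note that a uniform intensity change (all quantum catches scaled equally) yields a distance of zero: the model is purely chromatic, which is why the study's "above threshold" comparisons reflect color rather than brightness differences.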