de la Rosa, Stephan; Ekramnia, Mina; Bülthoff, Heinrich H.
2016-01-01
The ability to discriminate between different actions is essential for action recognition and social interactions. Surprisingly, previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target the visual processes specific to action and direction discrimination. In separate conditions, participants visually adapted to forward- and backward-moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action, but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms. PMID:26941633
Lambert, Anthony J; Wootton, Adrienne
2017-08-01
Different patterns of high-density EEG activity were elicited by the same peripheral stimuli in the context of Landmark Cueing and Perceptual Discrimination tasks. The C1 component of the visual event-related potential (ERP) at parietal-occipital electrode sites was larger in the Landmark Cueing task, and source localisation suggested greater activation in the superior parietal lobule (SPL) in this task, compared to the Perceptual Discrimination task, indicating stronger early recruitment of the dorsal visual stream. In the Perceptual Discrimination task, source localisation suggested widespread activation of the inferior temporal gyrus (ITG) and fusiform gyrus (FFG), structures associated with the ventral visual stream, during the early phase of the P1 ERP component. Moreover, during a later epoch (171-270 ms after stimulus onset), increased temporal-occipital negativity and stronger recruitment of ITG and FFG were observed in the Perceptual Discrimination task. These findings illuminate the contrasting functions of the dorsal and ventral visual streams in supporting rapid shifts of attention in response to contextual landmarks and conscious perceptual discrimination, respectively.
Petruno, Sarah K; Clark, Robert E; Reinagel, Pamela
2013-01-01
The pigmented Long-Evans rat has proven to be an excellent subject for studying visually guided behavior including quantitative visual psychophysics. This observation, together with its experimental accessibility and its close homology to the mouse, has made it an attractive model system in which to dissect the thalamic and cortical circuits underlying visual perception. Given that visually guided behavior in the absence of primary visual cortex has been described in the literature, however, it is an empirical question whether specific visual behaviors will depend on primary visual cortex in the rat. Here we tested the effects of cortical lesions on performance of two-alternative forced-choice visual discriminations by Long-Evans rats. We present data from one highly informative subject that learned several visual tasks and then received a bilateral lesion ablating >90% of primary visual cortex. After the lesion, this subject had a profound and persistent deficit in complex image discrimination, orientation discrimination, and full-field optic flow motion discrimination, compared with both pre-lesion performance and sham-lesion controls. Performance was intact, however, on another visual two-alternative forced-choice task that required approaching a salient visual target. A second highly informative subject learned several visual tasks prior to receiving a lesion ablating >90% of medial extrastriate cortex. This subject showed no impairment on any of the four task categories. Taken together, our data provide evidence that these image, orientation, and motion discrimination tasks require primary visual cortex in the Long-Evans rat, whereas approaching a salient visual target does not.
Fengler, Ineke; Nava, Elena; Röder, Brigitte
2015-01-01
Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at 4-week follow-up) on an audio-visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio-visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination may emerge as a consequence of exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks (i.e., tactile tasks) unrelated to the later tested abilities. The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination or in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, which may persist over longer durations. PMID:25954166
Thomson, Eric E.; Zea, Ivan; França, Wendy
2017-01-01
Adult rats equipped with a sensory prosthesis, which transduced infrared (IR) signals into electrical signals delivered to somatosensory cortex (S1), took approximately 4 d to learn a four-choice IR discrimination task. Here, we show that when such IR signals are projected to the primary visual cortex (V1), rats that are pretrained in a visual-discrimination task typically learn the same IR discrimination task on their first day of training. However, without prior training on a visual discrimination task, the learning rates for S1- and V1-implanted animals converged, suggesting there is no intrinsic difference in learning rate between the two areas. We also discovered that animals were able to integrate IR information into the ongoing visual processing stream in V1, performing a visual-IR integration task in which they had to combine IR and visual information. Furthermore, when the IR prosthesis was implanted in S1, rats showed no impairment in their ability to use their whiskers to perform a tactile discrimination task. Instead, in some rats, this ability was actually enhanced. Cumulatively, these findings suggest that cortical sensory neuroprostheses can rapidly augment the representational scope of primary sensory areas, integrating novel sources of information into ongoing processing while incurring minimal loss of native function. PMID:29279860
Improved Discrimination of Visual Stimuli Following Repetitive Transcranial Magnetic Stimulation
Waterston, Michael L.; Pack, Christopher C.
2010-01-01
Background: Repetitive transcranial magnetic stimulation (rTMS) at certain frequencies increases thresholds for motor-evoked potentials and phosphenes following stimulation of cortex. Consequently, rTMS is often assumed to introduce a “virtual lesion” in stimulated brain regions, with correspondingly diminished behavioral performance. Methodology/Principal Findings: Here we investigated the effects of rTMS to visual cortex on subjects' ability to perform visual psychophysical tasks. Contrary to expectations of a visual deficit, we find that rTMS often improves the discrimination of visual features. For coarse orientation tasks, discrimination of a static stimulus improved consistently following theta-burst stimulation of the occipital lobe. Using a reaction-time task, we found that these improvements occurred throughout the visual field and lasted beyond one hour post-rTMS. Low-frequency (1 Hz) stimulation yielded similar improvements. In contrast, we did not find consistent effects of rTMS on performance in a fine orientation discrimination task. Conclusions/Significance: Overall, our results suggest that rTMS generally improves or has no effect on visual acuity, with the nature of the effect depending on the type of stimulation and the task. We interpret our results in the context of an ideal-observer model of visual perception. PMID:20442776
Sadato, Norihiro; Okada, Tomohisa; Kubota, Kiyokazu; Yonekura, Yoshiharu
2004-04-08
The occipital cortex of blind subjects is known to be activated during tactile discrimination tasks such as Braille reading. To investigate whether this is due to long-term learning of Braille or to sensory deafferentation, we used fMRI to study tactile discrimination tasks in subjects who had recently lost their sight and never learned Braille. The occipital cortex of the blind subjects without Braille training was activated during the tactile discrimination task, whereas that of control sighted subjects was not. This finding suggests that the activation of the visual cortex of the blind during performance of a tactile discrimination task may be due to sensory deafferentation, wherein a competitive imbalance favors the tactile over the visual modality.
Is improved contrast sensitivity a natural consequence of visual training?
Levi, Aaron; Shaked, Danielle; Tadin, Duje; Huxlin, Krystel R.
2015-01-01
Many studies have shown that training and testing conditions modulate specificity of visual learning to trained stimuli and tasks. In visually impaired populations, generalizability of visual learning to untrained stimuli/tasks is almost always reported, with contrast sensitivity (CS) featuring prominently among these collaterally-improved functions. To understand factors underlying this difference, we measured CS for direction and orientation discrimination in the visual periphery of three groups of visually-intact subjects. Group 1 trained on an orientation discrimination task with static Gabors whose luminance contrast was decreased as performance improved. Group 2 trained on a global direction discrimination task using high-contrast random dot stimuli previously used to recover motion perception in cortically blind patients. Group 3 underwent no training. Both forms of training improved CS with some degree of specificity for basic attributes of the trained stimulus/task. Group 1's largest enhancement was in CS around the trained spatial/temporal frequencies; similarly, Group 2's largest improvements occurred in CS for discriminating moving and flickering stimuli. Group 3 saw no significant CS changes. These results indicate that CS improvements may be a natural consequence of multiple forms of visual training in visually intact humans, albeit with some specificity to the trained visual domain(s). PMID:26305736
Cong, Lin-Juan; Wang, Ru-Jie; Yu, Cong; Zhang, Jun-Yun
2016-01-01
Visual perceptual learning is known to be specific to the trained retinal location, feature, and task. However, location and feature specificity can be eliminated by double-training or TPE training protocols, in which observers receive additional exposure to the transfer location or feature dimension via an irrelevant task besides the primary learning task. Here we tested whether these new training protocols could even make learning transfer across different tasks involving discrimination of basic visual features (e.g., orientation and contrast). Observers practiced a near-threshold orientation (or contrast) discrimination task. Following a TPE training protocol, they also received exposure to the transfer task via performing suprathreshold contrast (or orientation) discrimination in alternating blocks of trials in the same sessions. The results showed no evidence for significant learning transfer to the untrained near-threshold contrast (or orientation) discrimination task after discounting the pretest effects and the suprathreshold practice effects. These results thus do not support a hypothetical task-independent component in perceptual learning of basic visual features. They also set the boundary of the new training protocols in their capability to enable learning transfer. PMID:26873777
Preschoolers Benefit From Visually Salient Speech Cues
Holt, Rachael Frush
2015-01-01
Purpose: This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. The study also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method: Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visual saliency of consonant confusions in auditory-only and AV word recognition. Results: Four-year-olds and adults demonstrated visual influence on all measures. Three-year-olds demonstrated visual influence on speech discrimination and recognition measures. All groups demonstrated greater AV benefit for the visually salient discrimination contrasts. AV recognition benefit in 4-year-olds and adults depended on the visual saliency of speech sounds. Conclusions: Preschoolers can demonstrate AV speech integration. Their AV benefit results from efficient use of visually salient speech cues. Four-year-olds, but not 3-year-olds, used visual phonological knowledge to take advantage of visually salient speech cues, suggesting possible developmental differences in the mechanisms of AV benefit. PMID:25322336
Teodorescu, Kinneret; Bouchigny, Sylvain; Korman, Maria
2013-08-01
In this study, we explored the time course of haptic stiffness discrimination learning and how it was affected by two experimental factors, the addition of visual information and/or knowledge of results (KR) during training. Stiffness perception may integrate both haptic and visual modalities. However, in many tasks, the visual field is typically occluded, forcing stiffness perception to depend exclusively on haptic information. No studies to date have addressed the time course of haptic stiffness perceptual learning. Using a virtual environment (VE) haptic interface and a two-alternative forced-choice discrimination task, the haptic stiffness discrimination ability of 48 participants was tested across 2 days. Each day included two haptic test blocks separated by a training block. Additional visual information and/or KR were manipulated between participants during training blocks. Practice repetitions alone induced significant improvement in haptic stiffness discrimination. Between days, accuracy was slightly improved, but decision time performance deteriorated. The addition of visual information and/or KR had only temporary effects on decision time, without affecting the time course of haptic discrimination learning. Learning in haptic stiffness discrimination appears to evolve through at least two distinct phases: a single training session resulted in both immediate and latent learning. This learning was not affected by the training manipulations inspected. Training skills in VE in spaced sessions can be beneficial for tasks in which haptic perception is critical, such as surgical procedures, when the visual field is occluded. However, training protocols for such tasks should account for the low impact of multisensory information and KR.
ERIC Educational Resources Information Center
Kemner, Chantal; van Ewijk, Lizet; van Engeland, Herman; Hooge, Ignace
2008-01-01
Subjects with PDD excel on certain visuo-spatial tasks, among which are visual search tasks, and this has been attributed to enhanced perceptual discrimination. However, an alternative explanation is that subjects with PDD show a different, more effective search strategy. The present study aimed to test both hypotheses by measuring eye movements…
HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.
Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye
2017-02-09
In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.
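The abstract describes the HD-MTL pipeline only at a high level; the sketch below is not the authors' code, and the names, toy data, and KMeans clustering step are illustrative assumptions standing in for the paper's grouping procedure. It shows one ingredient of the idea: building a "visual tree" by grouping visually similar classes from their mean deep-feature vectors, after which group-specific, jointly regularized node classifiers would be trained per group.

import numpy as np
from sklearn.cluster import KMeans

def build_visual_tree(features, labels, n_groups):
    # Cluster class-mean feature vectors so that visually similar classes
    # (here: classes with similar mean deep features) share a group.
    classes = np.unique(labels)
    class_means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    groups = KMeans(n_clusters=n_groups, n_init=10).fit_predict(class_means)
    # Each group would then get its own node classifier for separating the
    # visually similar sibling classes (trained jointly in the actual method).
    return {int(c): int(g) for c, g in zip(classes, groups)}

# Toy usage with random stand-ins for deep CNN features:
rng = np.random.default_rng(0)
toy_features = rng.normal(size=(300, 64)) + np.repeat(np.arange(6), 50)[:, None]
toy_labels = np.repeat(np.arange(6), 50)
print(build_visual_tree(toy_features, toy_labels, n_groups=3))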
Nawroth, Christian; Prentice, Pamela M; McElligott, Alan G
2017-01-01
Variation in common personality traits, such as boldness or exploration, is often associated with risk-reward trade-offs and behavioural flexibility. To date, only a few studies have examined the effects of consistent behavioural traits on both learning and cognition. We investigated whether certain personality traits ('exploration' and 'sociability') of individuals were related to cognitive performance, learning flexibility and learning style in a social ungulate species, the goat (Capra hircus). We also investigated whether a preference for feature cues, rather than impaired learning abilities, can explain performance variation in a visual discrimination task. We found that personality scores were consistent across time and context. Less explorative goats performed better in a non-associative cognitive task, in which subjects had to follow the trajectory of a hidden object (i.e., testing their ability for object permanence). We also found that less sociable subjects performed better than more sociable goats in a visual discrimination task. Good visual learning performance was associated with a preference for feature cues, indicating personality-dependent learning strategies in goats. Our results suggest that personality traits predict the outcome in visual discrimination and non-associative cognitive tasks in goats, and that impaired performance in a visual discrimination task does not necessarily imply impaired learning capacities but rather can be explained by a varying preference for feature cues.
Brébion, Gildas; David, Anthony S; Pilowsky, Lyn S; Jones, Hugh
2004-11-01
Verbal and visual recognition tasks were administered to 40 patients with schizophrenia and 40 healthy comparison subjects. The verbal recognition task consisted of discriminating between 16 target words and 16 new words. The visual recognition task consisted of discriminating between 16 target pictures (8 black-and-white and 8 color) and 16 new pictures (8 black-and-white and 8 color). Visual recognition was followed by a spatial context discrimination task in which subjects were required to remember the spatial location of the target pictures at encoding. Results showed that the recognition deficit in patients was similar for verbal and visual material. In both the schizophrenic and healthy groups, men, but not women, obtained better recognition scores for the colored than for the black-and-white pictures. However, men and women benefited similarly from color in reducing spatial context discrimination errors. Patients showed a significant deficit in remembering the spatial location of the pictures, independently of accuracy in remembering the pictures themselves. These data suggest that patients are impaired in the amount of visual information that they can encode. With regard to the perceptual attributes of the stimuli, memory for spatial information appears to be affected, but not the processing of color information.
Dynamic functional brain networks involved in simple visual discrimination learning.
Fidalgo, Camino; Conejo, Nélida María; González-Pardo, Héctor; Arias, Jorge Luis
2014-10-01
Visual discrimination tasks have been widely used to evaluate many types of learning and memory processes. However, little is known about the brain regions involved at different stages of visual discrimination learning. We used cytochrome c oxidase histochemistry to evaluate changes in regional brain oxidative metabolism during visual discrimination learning in a water-T maze at different time points during training. As compared with control groups, the results of the present study reveal the gradual activation of cortical (prefrontal and temporal cortices) and subcortical brain regions (including the striatum and the hippocampus) associated with the mastery of a simple visual discrimination task. On the other hand, the brain regions involved and their functional interactions changed progressively over days of training. Regions associated with novelty, emotion, visuo-spatial orientation and motor aspects of the behavioral task seem to be relevant during the earlier phase of training, whereas a brain network comprising the prefrontal cortex was engaged throughout the learning process. This study highlights the relevance of functional interactions among brain regions for investigating learning and memory processes.
Face-gender discrimination is possible in the near-absence of attention.
Reddy, Leila; Wilken, Patrick; Koch, Christof
2004-03-02
The attentional cost associated with the visual discrimination of the gender of a face was investigated. Participants performed a face-gender discrimination task either alone (single-task) or concurrently (dual-task) with a task known to be attentionally demanding (5-letter T/L discrimination). Overall performance on face-gender discrimination suffered remarkably little under the dual-task condition compared to the single-task condition. Similar results were obtained in experiments that controlled for potential training effects or the use of low-level cues in this discrimination task. Our results provide further evidence against the notion that only low-level representations can be accessed outside the focus of attention.
Treviño, Mario
2014-01-01
Animal choices depend on direct sensory information, but also on the dynamic changes in the magnitude of reward. In visual discrimination tasks, the emergence of lateral biases in the choice record from animals is often described as a behavioral artifact, because these are highly correlated with error rates affecting psychophysical measurements. Here, we hypothesized that biased choices could constitute a robust behavioral strategy to solve discrimination tasks of graded difficulty. We trained mice to swim in a two-alternative visual discrimination task with escape from water as the reward. Their prevalence of making lateral choices increased with stimulus similarity and was present in conditions of high discriminability. While lateralization occurred at the individual level, it was absent, on average, at the population level. Biased choice sequences obeyed the generalized matching law and increased task efficiency when stimulus similarity was high. A mathematical analysis revealed that strongly-biased mice used information from past rewards but not past choices to make their current choices. We also found that the amount of lateralized choices made during the first day of training predicted individual differences in the average learning behavior. This framework provides useful analysis tools to study individualized visual-learning trajectories in mice. PMID:25524257
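For context, the generalized matching law invoked in this abstract is commonly written in its logarithmic form (Baum, 1974) as log(B1/B2) = a · log(R1/R2) + log b, where B1 and B2 are the choices allocated to the two alternatives, R1 and R2 are the rewards obtained from them, a is the sensitivity to the reward ratio, and b is a side-bias term; the abstract does not state which exact formulation the authors fitted.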
Zhao, Henan; Bryant, Garnett W.; Griffin, Wesley; Terrill, Judith E.; Chen, Jian
2017-01-01
We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches commonly used in scientific visualizations - direct linear representation, logarithmic mapping, and text display. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improves accuracy by about 10 times compared to linear mapping and by four times compared to logarithmic mapping in discrimination tasks; (2) SplitVectors shows no significant differences from the textual display approach, but reduces cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using a logarithmic mapping can be problematic, as participants' confidence was as high as when reading directly from the textual display, but their accuracy was poor; and (5) stereoscopy improved performance, especially in the more challenging discrimination tasks. PMID:28113469
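As a side note, the scientific-notation decomposition that SplitVectors displays can be sketched in a few lines; the snippet below only illustrates that decomposition (the function name and output format are mine, not an API from the paper), not the glyph design itself.

import math

def split_magnitude(vector):
    # Decompose a vector's magnitude into a mantissa in [1, 10) and an integer
    # exponent -- the two scientific-notation components that SplitVectors
    # encodes separately instead of mapping magnitude to one linear length.
    magnitude = math.sqrt(sum(component * component for component in vector))
    if magnitude == 0.0:
        return 0.0, 0
    exponent = math.floor(math.log10(magnitude))
    mantissa = magnitude / (10 ** exponent)
    return mantissa, exponent

print(split_magnitude((3.0e4, 4.0e4, 0.0)))  # (5.0, 4), i.e. 5.0 x 10^4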
Mental workload while driving: effects on visual search, discrimination, and decision making.
Recarte, Miguel A; Nunes, Luis M
2003-06-01
The effects of mental workload on visual search and decision making were studied in real traffic conditions with 12 participants who drove an instrumented car. Mental workload was manipulated by having participants perform several mental tasks while driving. A simultaneous visual-detection and discrimination test was used as the performance criterion. Mental tasks produced spatial gaze concentration and visual-detection impairment, although no tunnel vision occurred. According to ocular behavior analysis, this impairment was due to late detection and poor identification more than to response selection. Verbal acquisition tasks were innocuous compared with production tasks, and complex conversations, whether by phone or with a passenger, are a danger to road safety.
Transfer of perceptual learning between different visual tasks
McGovern, David P.; Webb, Ben S.; Peirce, Jonathan W.
2012-01-01
Practice in most sensory tasks substantially improves perceptual performance. A hallmark of this ‘perceptual learning' is its specificity for the basic attributes of the trained stimulus and task. Recent studies have challenged the specificity of learned improvements, although transfer between substantially different tasks has yet to be demonstrated. Here, we measure the degree of transfer between three distinct perceptual tasks. Participants trained on an orientation discrimination, a curvature discrimination, or a ‘global form' task, all using stimuli comprised of multiple oriented elements. Before and after training they were tested on all three and a contrast discrimination control task. A clear transfer of learning was observed, in a pattern predicted by the relative complexity of the stimuli in the training and test tasks. Our results suggest that sensory improvements derived from perceptual learning can transfer between very different visual tasks. PMID:23048211
Simultaneous Visual Discrimination in Asian Elephants
ERIC Educational Resources Information Center
Nissani, Moti; Hoefler-Nissani, Donna; Lay, U. Tin; Htun, U. Wan
2005-01-01
Two experiments explored the behavior of 20 Asian elephants ("Elephas maximus") in simultaneous visual discrimination tasks. In Experiment 1, 7 Burmese logging elephants acquired a white+/black- discrimination, reaching criterion in a mean of 2.6 sessions and 117 discrete trials, whereas 4 elephants acquired a black+/white- discrimination in 5.3…
Object localization, discrimination, and grasping with the optic nerve visual prosthesis.
Duret, Florence; Brelén, Måten E; Lambert, Valerie; Gérard, Benoît; Delbeke, Jean; Veraart, Claude
2006-01-01
This study involved a volunteer completely blind from retinitis pigmentosa who had previously been implanted with an optic nerve visual prosthesis. The aim of this two-year study was to train the volunteer to localize a given object in nine different positions, to discriminate the object within a choice of six, and then to grasp it. In a closed-loop protocol including a head-worn video camera, the nerve was stimulated whenever a part of the processed image of the object being scrutinized matched the center of an elicitable phosphene. The accessible visual field included 109 phosphenes in a 14 degrees x 41 degrees area. Results showed that training was required to succeed in the localization and discrimination tasks, but practically no training was required for grasping the object. The volunteer was able to successfully complete all tasks after training. The volunteer systematically performed several left-right and bottom-up scanning movements during the discrimination task. Discrimination strategies included stimulation phases and no-stimulation phases of roughly similar duration. This study provides a step towards the practical use of the optic nerve visual prosthesis in daily life.
Short-Term Visual Deprivation, Tactile Acuity, and Haptic Solid Shape Discrimination
Crabtree, Charles E.; Norman, J. Farley
2014-01-01
Previous psychophysical studies have reported conflicting results concerning the effects of short-term visual deprivation upon tactile acuity. Some studies have found that 45 to 90 minutes of total light deprivation produce significant improvements in participants' tactile acuity as measured with a grating orientation discrimination task. In contrast, a single 2011 study found no such improvement while attempting to replicate these earlier findings. A primary goal of the current experiment was to resolve this discrepancy in the literature by evaluating the effects of a 90-minute period of total light deprivation upon tactile grating orientation discrimination. We also evaluated the potential effect of short-term deprivation upon haptic 3-D shape discrimination using a set of naturally-shaped solid objects. According to previous research, short-term deprivation enhances performance in a tactile 2-D shape discrimination task – perhaps a similar improvement also occurs for haptic 3-D shape discrimination. The results of the current investigation demonstrate that not only does short-term visual deprivation not enhance tactile acuity, it additionally has no effect upon haptic 3-D shape discrimination. While visual deprivation had no effect in our study, there was a significant effect of experience and learning for the grating orientation task – the participants' tactile acuity improved over time, independent of whether they had, or had not, experienced visual deprivation. PMID:25397327
Tapper, Anthony; Gonzalez, Dave; Roy, Eric; Niechwiej-Szwedo, Ewa
2017-02-01
The purpose of this study was to examine executive functions in team sport athletes with and without a history of concussion. Executive functions comprise many cognitive processes, including working memory, attention and multi-tasking. Past research has shown that concussions cause difficulties in vestibular-visual and vestibular-auditory dual-tasking; however, visual-auditory tasks have rarely been examined. Twenty-nine intercollegiate varsity ice hockey athletes (mean age = 19.13 years, SD = 1.56; 15 females) performed an experimental dual-task paradigm that required simultaneously processing visual and auditory information. A brief interview, event description and self-report questionnaires were used to assign participants to each group (concussion, no concussion). Eighteen athletes had a history of concussion and 11 had no concussion history. The two tests involved visuospatial working memory (i.e., Corsi block test) and auditory tone discrimination. Participants completed both tasks individually, then simultaneously. Two outcome variables were measured: Corsi block memory span and auditory tone discrimination accuracy. No differences were found when each task was performed alone; however, athletes with a history of concussion performed significantly worse on the tone discrimination task in the dual-task condition. In conclusion, long-term deficits in executive functions were associated with a prior history of concussion when cognitive resources were stressed. Evaluations of executive functions and divided attention appear to be helpful in discriminating participants with and without a history of concussion.
Do rats use shape to solve “shape discriminations”?
Minini, Loredana; Jeffery, Kathryn J.
2006-01-01
Visual discrimination tasks are increasingly used to explore the neurobiology of vision in rodents, but it remains unclear how the animals solve these tasks: Do they process shapes holistically, or by using low-level features such as luminance and angle acuity? In the present study we found that when discriminating triangles from squares, rats did not use shape but instead relied on local luminance differences in the lower hemifield. A second experiment prevented this strategy by using stimuli—squares and rectangles—that varied in size and location, and for which the only constant predictor of reward was aspect ratio (ratio of height to width: a simple descriptor of “shape”). Rats eventually learned to use aspect ratio but only when no other discriminand was available, and performance remained very poor even at asymptote. These results suggest that although rats can process both dimensions simultaneously, they do not naturally solve shape discrimination tasks this way. This may reflect either a failure to visually process global shape information or a failure to discover shape as the discriminative stimulus in a simultaneous discrimination. Either way, our results suggest that simultaneous shape discrimination is not a good task for studies of visual perception in rodents. PMID:16705141
LaRoche, Ronee B; Morgan, Russell E
2007-01-01
Over the past two decades, the use of selective serotonin reuptake inhibitors (SSRIs) to treat behavioral disorders in children has grown rapidly, despite little evidence regarding the safety and efficacy of these drugs for use in children. Utilizing a rat model, this study investigated whether post-weaning exposure to a prototype SSRI, fluoxetine (FLX), influenced performance on visual tasks designed to measure discrimination learning, sustained attention, inhibitory control, and reaction time. Additionally, sex differences in response to varying doses of fluoxetine were examined. In Experiment 1, female rats were administered (P.O.) fluoxetine (10 mg/kg) or vehicle (apple juice) from PND 25 through PND 49. After a 14-day washout period, subjects were trained to perform a simultaneous visual discrimination task. Subjects were then tested for 20 sessions on a visual attention task that consisted of varied stimulus delays (0, 3, 6, or 9 s) and cue durations (200, 400, or 700 ms). In Experiment 2, both male and female Long-Evans rats (24 F, 24 M) were administered fluoxetine (0, 5, 10, or 15 mg/kg) and then tested in the same visual tasks used in Experiment 1, with the addition of open-field and elevated plus-maze testing. Few FLX-related differences were seen in the visual discrimination, open-field, or plus-maze tasks. However, results from the visual attention task indicated a dose-dependent reduction in the performance of fluoxetine-treated males, whereas fluoxetine-treated females tended to improve over baseline. These findings indicate that enduring, behaviorally-relevant alterations of the CNS can occur following pharmacological manipulation of the serotonin system during postnatal development.
Examining the relationship between skilled music training and attention.
Wang, Xiao; Ossher, Lynn; Reuter-Lorenz, Patricia A
2015-11-01
While many aspects of cognition have been investigated in relation to skilled music training, surprisingly little work has examined the connection between music training and attentional abilities. The present study investigated the performance of skilled musicians on cognitively demanding sustained attention tasks, measuring both temporal and visual discrimination over a prolonged duration. Participants with extensive formal music training were found to have superior performance on a temporal discrimination task, but not a visual discrimination task, compared to participants with no music training. In addition, no differences were found between groups in vigilance decrement in either type of task. Although no differences were evident in vigilance per se, the results indicate that performance in an attention-demanding temporal discrimination task was superior in individuals with extensive music training. We speculate that this basic cognitive ability may contribute to advantages that musicians show in other cognitive measures.
Kamitani, Toshiaki; Kuroiwa, Yoshiyuki
2009-01-01
Recent studies demonstrated an altered P3 component and prolonged reaction time during the visual discrimination tasks in multiple system atrophy (MSA). In MSA, however, little is known about the N2 component which is known to be closely related to the visual discrimination process. We therefore compared the N2 component as well as the N1 and P3 components in 17 MSA patients with these components in 10 normal controls, by using a visual selective attention task to color or to shape. While the P3 in MSA was significantly delayed in selective attention to shape, the N2 in MSA was significantly delayed in selective attention to color. N1 was normally preserved both in attention to color and in attention to shape. Our electrophysiological results indicate that the color discrimination process during selective attention is impaired in MSA.
Zhou, Zhe Charles; Yu, Chunxiu; Sellers, Kristin K.; Fröhlich, Flavio
2016-01-01
Visual discrimination requires sensory processing followed by a perceptual decision. Despite a growing understanding of visual areas in this behavior, it is unclear what role top-down signals from prefrontal cortex play, in particular as a function of perceptual difficulty. To address this gap, we investigated how neurons in dorso-lateral frontal cortex (dl-FC) of freely-moving ferrets encode task variables in a two-alternative forced choice visual discrimination task with high- and low-contrast visual input. About two-thirds of all recorded neurons in dl-FC were modulated by at least one of the two task variables, task difficulty and target location. More neurons in dl-FC preferred the hard trials; no such preference bias was found for target location. In individual neurons, this preference for specific task types was limited to brief epochs. Finally, optogenetic stimulation confirmed the functional role of the activity in dl-FC before target touch; suppression of activity in pyramidal neurons with the ArchT silencing opsin resulted in a decrease in reaction time to touch the target but not to retrieve reward. In conclusion, dl-FC activity is differentially recruited for high perceptual difficulty in the freely-moving ferret and the resulting signal may provide top-down behavioral inhibition. PMID:27025995
Hierarchical Learning of Tree Classifiers for Large-Scale Plant Species Identification.
Fan, Jianping; Zhou, Ning; Peng, Jinye; Gao, Ling
2015-11-01
In this paper, a hierarchical multi-task structural learning algorithm is developed to support large-scale plant species identification, where a visual tree is constructed for organizing large numbers of plant species in a coarse-to-fine fashion and determining the inter-related learning tasks automatically. A given parent node on the visual tree contains a set of sibling coarse-grained categories of plant species or sibling fine-grained plant species, and a multi-task structural learning algorithm is developed to train their inter-related classifiers jointly to enhance their discrimination power. The inter-level relationship constraint, e.g., that a plant image must first be assigned correctly to a parent node (high-level non-leaf node) before it can be assigned to the most relevant child node (low-level non-leaf node or leaf node) on the visual tree, is formally defined and leveraged to learn more discriminative tree classifiers over the visual tree. Our experimental results have demonstrated the effectiveness of our hierarchical multi-task structural learning algorithm on training more discriminative tree classifiers for large-scale plant species identification.
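The inter-level constraint described above amounts to a top-down decoding rule: pick a parent node first, then choose only among that parent's children. The sketch below is an illustrative rendering of that rule under assumed names and toy data; it is not the authors' implementation, and plain logistic-regression node classifiers stand in for the paper's jointly trained multi-task classifiers.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical two-level visual tree: parent node -> sibling species labels.
TREE = {0: [0, 1, 2], 1: [3, 4, 5]}

def train_hierarchy(features, species):
    parent_of = {s: p for p, children in TREE.items() for s in children}
    parents = np.array([parent_of[s] for s in species])
    root_clf = LogisticRegression(max_iter=1000).fit(features, parents)
    node_clfs = {p: LogisticRegression(max_iter=1000).fit(features[parents == p],
                                                          species[parents == p])
                 for p in TREE}
    return root_clf, node_clfs

def predict_hierarchy(features, root_clf, node_clfs):
    # Inter-level constraint: assign the parent node first, then discriminate
    # only among that parent's children.
    parents = root_clf.predict(features)
    return np.array([node_clfs[p].predict(x[None])[0] for x, p in zip(features, parents)])

# Toy usage with random stand-ins for image features:
rng = np.random.default_rng(1)
X = rng.normal(size=(240, 32)) + np.repeat(np.arange(6), 40)[:, None]
y = np.repeat(np.arange(6), 40)
root_clf, node_clfs = train_hierarchy(X, y)
print((predict_hierarchy(X, root_clf, node_clfs) == y).mean())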
Acquisition of a visual discrimination and reversal learning task by Labrador retrievers.
Lazarowski, Lucia; Foster, Melanie L; Gruen, Margaret E; Sherman, Barbara L; Case, Beth C; Fish, Richard E; Milgram, Norton W; Dorman, David C
2014-05-01
Optimal cognitive ability is likely important for military working dogs (MWD) trained to detect explosives. An assessment of a dog's ability to rapidly learn discriminations might be useful in the MWD selection process. In this study, visual discrimination and reversal tasks were used to assess cognitive performance in Labrador retrievers selected for an explosives detection program using a modified version of the Toronto General Testing Apparatus (TGTA), a system developed for assessing performance in a battery of neuropsychological tests in canines. The results of the current study revealed that, as previously found with beagles tested using the TGTA, Labrador retrievers (N = 16) readily acquired both tasks and learned the discrimination task significantly faster than the reversal task. The present study confirmed that the modified TGTA system is suitable for cognitive evaluations in Labrador retriever MWDs and can be used to further explore effects of sex, phenotype, age, and other factors in relation to canine cognition and learning, and may provide an additional screening tool for MWD selection.
Neural networks for Braille reading by the blind.
Sadato, N; Pascual-Leone, A; Grafman, J; Deiber, M P; Ibañez, V; Hallett, M
1998-07-01
To explore the neural networks used for Braille reading, we measured regional cerebral blood flow with PET during tactile tasks performed both by Braille readers blinded early in life and by sighted subjects. Eight proficient Braille readers were studied during Braille reading with both right and left index fingers. Eight-character, non-contracted Braille-letter strings were used, and subjects were asked to discriminate between words and non-words. To compare the behaviour of the brain of the blind and the sighted directly, non-Braille tactile tasks were performed by six different blind subjects and 10 sighted control subjects using the right index finger. The tasks included a non-discrimination task and three discrimination tasks (angle, width and character). Irrespective of reading finger (right or left), Braille reading by the blind activated the inferior parietal lobule, primary visual cortex, superior occipital gyri, fusiform gyri, ventral premotor area, superior parietal lobule, cerebellum and primary sensorimotor area bilaterally, as well as the right dorsal premotor cortex, right middle occipital gyrus and right prefrontal area. During non-Braille discrimination tasks in blind subjects, the ventral occipital regions, including the primary visual cortex and fusiform gyri bilaterally, were activated while the secondary somatosensory area was deactivated. The reverse pattern was found in sighted subjects, where the secondary somatosensory area was activated while the ventral occipital regions were suppressed. These findings suggest that the tactile processing pathways usually linked in the secondary somatosensory area are rerouted in blind subjects to the ventral occipital cortical regions originally reserved for visual shape discrimination.
Visual Discrimination and Motor Reproduction of Movement by Individuals with Mental Retardation.
ERIC Educational Resources Information Center
Shinkfield, Alison J.; Sparrow, W. A.; Day, R. H.
1997-01-01
Visual discrimination and motor reproduction tasks involving computer-simulated arm movements were administered to 12 adults with mental retardation and a gender-matched control group. The purpose was to examine whether inadequacies in visual perception account for the poorer motor performance of this population. Results indicate both perceptual…
Hippocampus, perirhinal cortex, and complex visual discriminations in rats and humans
Hales, Jena B.; Broadbent, Nicola J.; Velu, Priya D.
2015-01-01
Structures in the medial temporal lobe, including the hippocampus and perirhinal cortex, are known to be essential for the formation of long-term memory. Recent animal and human studies have investigated whether perirhinal cortex might also be important for visual perception. In our study, using a simultaneous oddity discrimination task, rats with perirhinal lesions were impaired and did not exhibit the normal preference for exploring the odd object. Notably, rats with hippocampal lesions exhibited the same impairment. Thus, the deficit is unlikely to illuminate functions attributed specifically to perirhinal cortex. Both lesion groups were able to acquire visual discriminations involving the same objects used in the oddity task. Patients with hippocampal damage or larger medial temporal lobe lesions were intact in a similar oddity task that allowed participants to explore objects quickly using eye movements. We suggest that humans were able to rely on an intact working memory capacity to perform this task, whereas rats (who moved slowly among the objects) needed to rely on long-term memory. PMID:25593294
Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.
Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo
2013-02-16
We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common to all three tasks within each modality or to interaction between the processing of task-relevant features and of varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might reflect suppression of the processing of irrelevant speech that would presumably distract from the phonological task involving the letters.
The role of Broca's area in speech perception: evidence from aphasia revisited.
Hickok, Gregory; Costanzo, Maddalena; Capasso, Rita; Miceli, Gabriele
2011-12-01
Motor theories of speech perception have been re-vitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that individuals with Broca's aphasia, and therefore inferred damage to Broca's area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca's area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables, one in which both CVs were presented auditorily, and the other in which one syllable was auditorily presented and the other visually presented as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d', and was unrelated to the degree of non-fluency in the patients' speech production. Performance on the auditory-visual task, however, was worse than, and not correlated with, the all-auditory task. The auditory-visual task was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured both by discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory-visual matching of phonological forms. 2011 Elsevier Inc. All rights reserved.
Intrinsic, stimulus-driven and task-dependent connectivity in human auditory cortex.
Häkkinen, Suvi; Rinne, Teemu
2018-06-01
A hierarchical and modular organization is a central hypothesis in the current primate model of auditory cortex (AC) but lacks validation in humans. Here we investigated whether fMRI connectivity at rest and during active tasks is informative of the functional organization of human AC. Identical pitch-varying sounds were presented during a visual discrimination (i.e. no directed auditory attention), pitch discrimination, and two versions of pitch n-back memory tasks. Analysis based on fMRI connectivity at rest revealed a network structure consisting of six modules in supratemporal plane (STP), temporal lobe, and inferior parietal lobule (IPL) in both hemispheres. In line with the primate model, in which higher-order regions have more longer-range connections than primary regions, areas encircling the STP module showed the highest inter-modular connectivity. Multivariate pattern analysis indicated significant connectivity differences between the visual task and rest (driven by the presentation of sounds during the visual task), between auditory and visual tasks, and between pitch discrimination and pitch n-back tasks. Further analyses showed that these differences were particularly due to connectivity modulations between the STP and IPL modules. While the results are generally in line with the primate model, they highlight the important role of human IPL during the processing of both task-irrelevant and task-relevant auditory information. Importantly, the present study shows that fMRI connectivity at rest, during presentation of sounds, and during active listening provides novel information about the functional organization of human AC.
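To make concrete how a modular structure can be extracted from region-to-region fMRI connectivity, here is a minimal sketch (not the authors' pipeline): it correlates simulated regional time series, keeps the stronger positive correlations as graph edges, and applies greedy modularity maximization. The block structure of the simulated data, the 0.3 edge threshold, and all variable names are assumptions for illustration only.

```python
# Sketch: derive network modules from (simulated) resting-state connectivity.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
latent = rng.standard_normal((200, 2))               # two underlying "networks"
membership = np.repeat([0, 1], 30)                    # 60 regions, 30 per network
timeseries = latent[:, membership] + 0.5 * rng.standard_normal((200, 60))

corr = np.corrcoef(timeseries.T)                      # region-by-region correlations
np.fill_diagonal(corr, 0.0)

graph = nx.Graph()
graph.add_nodes_from(range(corr.shape[0]))
rows, cols = np.where(corr > 0.3)                     # keep stronger positive edges
graph.add_weighted_edges_from(
    (int(i), int(j), float(corr[i, j])) for i, j in zip(rows, cols) if i < j
)

modules = greedy_modularity_communities(graph, weight="weight")
for k, module in enumerate(modules):
    print(f"module {k}: {len(module)} regions, e.g. {sorted(module)[:5]}")
```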
Graeber, R C; Schroeder, D M; Jane, J A; Ebbesson, S O
1978-07-15
An instrumental conditioning task was used to examine the role of the nurse shark telencephalon in black-white (BW) and horizontal-vertical stripes (HV) discrimination performance. In the first experiment, subjects initially received either bilateral anterior telencephalic control lesions or bilateral posterior telencephalic lesions aimed at destroying the central telencephalic nuclei (CN), which are known to receive direct input from the thalamic visual area. Postoperatively, the sharks were trained first on BW and then on HV. Those with anterior lesions learned both tasks as rapidly as unoperated subjects. Those with posterior lesions exhibited visual discrimination deficits related to the amount of damage to the CN and its connecting pathways. Severe damage resulted in an inability to learn either task but caused no impairments in motivation or general learning ability. In the second experiment, the sharks were first trained on BW and HV and then operated. Suction ablations were used to remove various portions of the CN. Sharks with 10% or less damage to the CN retained the preoperatively acquired discriminations almost perfectly. Those with 11-50% damage had to be retrained on both tasks. Almost total removal of the CN produced behavioral indications of blindness along with an inability to perform above the chance level on BW despite excellent retention of both discriminations over a 28-day period before surgery. It appears, however, that such sharks can still detect light. These results implicate the central telencephalic nuclei in the control of visually guided behavior in sharks.
Nimodipine alters acquisition of a visual discrimination task in chicks.
Deyo, R; Panksepp, J; Conner, R L
1990-03-01
Chicks 5 days old received intraperitoneal injections of nimodipine 30 min before training on either a visual discrimination task (0, 0.5, 1.0, or 5.0 mg/kg) or a test of separation-induced distress vocalizations (0, 0.5, or 2.5 mg/kg). Chicks receiving 1.0 mg/kg nimodipine made significantly fewer visual discrimination errors than vehicle controls by trials 41-60, but did not differ from controls 24 h later. Chicks in the 5 mg/kg group made significantly more errors when compared to controls both during acquisition of the task and during retention. Nimodipine did not alter separation-induced distress vocalizations at any of the doses tested, suggesting that nimodipine's effects on learning cannot be attributed to a reduction in separation distress. These data indicate that nimodipine's facilitation of learning in young subjects is dose dependent, but nimodipine failed to enhance retention.
Experience, Context, and the Visual Perception of Human Movement
ERIC Educational Resources Information Center
Jacobs, Alissa; Pinto, Jeannine; Shiffrar, Maggie
2004-01-01
Why are human observers particularly sensitive to human movement? Seven experiments examined the roles of visual experience and motor processes in human movement perception by comparing visual sensitivities to point-light displays of familiar, unusual, and impossible gaits across gait-speed and identity discrimination tasks. In both tasks, visual…
THE ROLE OF THE HIPPOCAMPUS IN OBJECT DISCRIMINATION BASED ON VISUAL FEATURES.
Levcik, David; Nekovarova, Tereza; Antosova, Eliska; Stuchlik, Ales; Klement, Daniel
2018-06-07
The role of rodent hippocampus has been intensively studied in different cognitive tasks. However, its role in discrimination of objects remains controversial due to conflicting findings. We tested whether the number and type of features available for the identification of objects might affect the strategy (hippocampal-independent vs. hippocampal-dependent) that rats adopt to solve object discrimination tasks. We trained rats to discriminate 2D visual objects presented on a computer screen. The objects were defined either by their shape only or by multiple-features (a combination of filling pattern and brightness in addition to the shape). Our data showed that objects displayed as simple geometric shapes are not discriminated by trained rats after their hippocampi had been bilaterally inactivated by the GABA A -agonist muscimol. On the other hand, objects containing a specific combination of non-geometric features in addition to the shape are discriminated even without the hippocampus. Our results suggest that the involvement of the hippocampus in visual object discrimination depends on the abundance of object's features. Copyright © 2018. Published by Elsevier Inc.
The visual discrimination of negative facial expressions by younger and older adults.
Mienaltowski, Andrew; Johnson, Ellen R; Wittman, Rebecca; Wilson, Anne-Taylor; Sturycz, Cassandra; Norman, J Farley
2013-04-05
Previous research has demonstrated that older adults are not as accurate as younger adults at perceiving negative emotions in facial expressions. These studies rely on emotion recognition tasks that involve choosing between many alternatives, creating the possibility that age differences emerge for cognitive rather than perceptual reasons. In the present study, an emotion discrimination task was used to investigate younger and older adults' ability to visually discriminate between negative emotional facial expressions (anger, sadness, fear, and disgust) at low (40%) and high (80%) expressive intensity. Participants completed trials blocked by pairs of emotions. Discrimination ability was quantified from the participants' responses using signal detection measures. In general, the results indicated that older adults had more difficulty discriminating between low intensity expressions of negative emotions than did younger adults. However, younger and older adults did not differ when discriminating between anger and sadness. These findings demonstrate that age differences in visual emotion discrimination emerge when signal detection measures are used but that these differences are not uniform and occur only in specific contexts.
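For readers unfamiliar with the signal detection measures used to quantify discrimination ability, the sketch below computes d' and the response criterion from one hypothetical block of trials. The counts and the log-linear (+0.5) correction are illustrative assumptions, not the study's data or exact procedure.

```python
# Sketch: sensitivity (d') for a two-choice emotion discrimination block.
# Counts are invented; the +0.5 correction is one common convention.
from scipy.stats import norm

hits, misses = 42, 8                    # e.g., "anger" trials answered "anger"
false_alarms, correct_rej = 15, 35      # e.g., "sadness" trials answered "anger"

hit_rate = (hits + 0.5) / (hits + misses + 1)
fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rej + 1)

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")
```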
Attentional Capture of Objects Referred to by Spoken Language
ERIC Educational Resources Information Center
Salverda, Anne Pier; Altmann, Gerry T. M.
2011-01-01
Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…
Distraction and Facilitation--Two Faces of the Same Coin?
ERIC Educational Resources Information Center
Wetzel, Nicole; Widmann, Andreas; Schroger, Erich
2012-01-01
Unexpected and task-irrelevant sounds can capture our attention and may cause distraction effects reflected by impaired performance in a primary task unrelated to the perturbing sound. The present auditory-visual oddball study examines the effect of the informational content of a sound on the performance in a visual discrimination task. The…
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2008-09-16
Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.
Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue
2009-06-15
Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.
A dual-task investigation of automaticity in visual word processing
NASA Technical Reports Server (NTRS)
McCann, R. S.; Remington, R. W.; Van Selst, M.
2000-01-01
An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.
The informativity of sound modulates crossmodal facilitation of visual discrimination: a fMRI study.
Li, Qi; Yu, Hongtao; Li, Xiujun; Sun, Hongzan; Yang, Jingjing; Li, Chunlin
2017-01-18
Many studies have investigated behavioral crossmodal facilitation when a visual stimulus is accompanied by a concurrent task-irrelevant sound. Lippert and colleagues reported that a concurrent task-irrelevant sound reduced the uncertainty of the timing of the visual display and improved perceptual responses (informative sound). However, the neural mechanism by which the informativity of sound affected crossmodal facilitation of visual discrimination remained unclear. In this study, we used event-related functional MRI to investigate the neural mechanisms underlying the role of informativity of sound in crossmodal facilitation of visual discrimination. Significantly faster reaction times were observed when there was an informative relationship between auditory and visual stimuli. The functional MRI results showed sound informativity-induced activation enhancement in regions including the left fusiform gyrus and the right lateral occipital complex. Further correlation analysis showed that activity in the right lateral occipital complex was significantly correlated with the behavioral benefit in reaction times. This suggests that this region was modulated by the informative relationship within audiovisual stimuli that was learnt during the experiment, resulting in late-stage multisensory integration and enhanced behavioral responses.
Gennari, Silvia P; Millman, Rebecca E; Hymers, Mark; Mattys, Sven L
2018-06-12
Perceiving speech while performing another task is a common challenge in everyday life. How the brain controls resource allocation during speech perception remains poorly understood. Using functional magnetic resonance imaging (fMRI), we investigated the effect of cognitive load on speech perception by examining brain responses of participants performing a phoneme discrimination task and a visual working memory task simultaneously. The visual task involved holding either a single meaningless image in working memory (low cognitive load) or four different images (high cognitive load). Performing the speech task under high load, compared to low load, resulted in decreased activity in pSTG/pMTG and increased activity in visual occipital cortex and two regions known to contribute to visual attention regulation: the superior parietal lobule (SPL) and the paracingulate and anterior cingulate gyrus (PaCG, ACG). Critically, activity in PaCG/ACG was correlated with performance in the visual task and with activity in pSTG/pMTG: Increased activity in PaCG/ACG was observed for individuals with poorer visual performance and with decreased activity in pSTG/pMTG. Moreover, activity in a pSTG/pMTG seed region showed psychophysiological interactions with areas of the PaCG/ACG, with stronger interaction in the high-load than the low-load condition. These findings show that the acoustic analysis of speech is affected by the demands of a concurrent visual task and that the PaCG/ACG plays a role in allocating cognitive resources to concurrent auditory and visual information. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Burnat, K; Zernicki, B
1997-01-01
We used 5 binocularly deprived cats (BD cats), 4 control cats also reared in the laboratory (C cats), and 4 cats reared in a normal environment (N cats). The cats were trained to discriminate an upward- or downward-moving light spot from a stationary spot (detection task) and then an upward-moving from a downward-moving spot (direction task). The N and C cats learned slowly, more slowly than in previously studied discriminations of stationary stimuli. However, all N and C cats mastered the detection task, and all but one C cat mastered the direction task. In contrast, 4 BD cats failed the detection task and all BD cats failed the direction task. This result is consistent with single-cell recording data showing impaired direction analysis in the visual system of BD cats. After training was completed, the upper part of the middle suprasylvian sulcus was removed unilaterally in 7 cats and bilaterally in 6 cats. Surprisingly, the unilateral lesions were more effective: clear-cut retention deficits were found in 5 of the unilaterally lesioned cats but in only one of the bilaterally lesioned cats.
Turchi, Janita; Devan, Bryan; Yin, Pingbo; Sigrist, Emmalynn; Mishkin, Mortimer
2010-01-01
The monkey's ability to learn a set of visual discriminations presented concurrently just once a day on successive days (24-hr ITI task) is based on habit formation, which is known to rely on a visuo-striatal circuit and to be independent of visuo-rhinal circuits that support one-trial memory. Consistent with this dissociation, we recently reported that performance on the 24-hr ITI task is impaired by a striatal-function blocking agent, the dopaminergic antagonist haloperidol, and not by a rhinal-function blocking agent, the muscarinic cholinergic antagonist scopolamine. In the present study, monkeys were trained on a short-ITI form of concurrent visual discrimination learning, one in which a set of stimulus pairs is repeated not only across daily sessions but also several times within each session (in this case, at about 4-min ITIs). Asymptotic discrimination learning rates in the non-drug condition were reduced by half, from ~11 trials/pair on the 24-hr ITI task to ~5 trials/pair on the 4-min ITI task, and this faster learning was impaired by systemic injections of either haloperidol or scopolamine. The results suggest that in the version of concurrent discrimination learning used here, the short ITIs within a session recruit both visuo-rhinal and visuo-striatal circuits, and that the final performance level is driven by both cognitive memory and habit formation working in concert. PMID:20144631
Time course influences transfer of visual perceptual learning across spatial location.
Larcombe, S J; Kennard, C; Bridge, H
2017-06-01
Visual perceptual learning describes the improvement of visual perception with repeated practice. Previous research has established that the learning effects of perceptual training may be transferable to untrained stimulus attributes such as spatial location under certain circumstances. However, the mechanisms involved in transfer have not yet been fully elucidated. Here, we investigated the effect of altering training time course on the transferability of learning effects. Participants were trained on a motion direction discrimination task or a sinusoidal grating orientation discrimination task in a single visual hemifield. The 4000 training trials were either condensed into one day, or spread evenly across five training days. When participants were trained over a five-day period, there was transfer of learning to both the untrained visual hemifield and the untrained task. In contrast, when the same amount of training was condensed into a single day, participants did not show any transfer of learning. Thus, learning time course may influence the transferability of perceptual learning effects. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Giersch, Anne; Glaser, Bronwyn; Pasca, Catherine; Chabloz, Mélanie; Debbané, Martin; Eliez, Stephan
2014-01-01
Individuals with 22q11.2 deletion syndrome (22q11.2DS) are impaired at exploring visual information in space; however, not much is known about visual form discrimination in the syndrome. Thirty-five individuals with 22q11.2DS and 41 controls completed a form discrimination task with global forms made up of local elements. Affected individuals…
Truppa, Valentina; Carducci, Paola; Trapanese, Cinzia; Hanus, Daniel
2015-01-01
Most experimental paradigms to study visual cognition in humans and non-human species are based on discrimination tasks involving the choice between two or more visual stimuli. To this end, different types of stimuli and procedures for stimuli presentation are used, which highlights the necessity to compare data obtained with different methods. The present study assessed whether, and to what extent, capuchin monkeys’ ability to solve a size discrimination problem is influenced by the type of procedure used to present the problem. Capuchins’ ability to generalise knowledge across different tasks was also evaluated. We trained eight adult tufted capuchin monkeys to select the larger of two stimuli of the same shape and different sizes by using pairs of food items (Experiment 1), computer images (Experiment 1) and objects (Experiment 2). Our results indicated that monkeys achieved the learning criterion faster with food stimuli compared to both images and objects. They also required consistently fewer trials with objects than with images. Moreover, female capuchins had higher levels of acquisition accuracy with food stimuli than with images. Finally, capuchins did not immediately transfer the solution of the problem acquired in one task condition to the other conditions. Overall, these findings suggest that – even in relatively simple visual discrimination problems where a single perceptual dimension (i.e., size) has to be judged – learning speed strongly depends on the mode of presentation. PMID:25927363
Object versus spatial visual mental imagery in patients with schizophrenia
Aleman, André; de Haan, Edward H.F.; Kahn, René S.
2005-01-01
Objective: Recent research has revealed a larger impairment of object perceptual discrimination than of spatial perceptual discrimination in patients with schizophrenia. It has been suggested that mental imagery may share processing systems with perception. We investigated whether patients with schizophrenia would show greater impairment regarding object imagery than spatial imagery. Methods: Forty-four patients with schizophrenia and 20 healthy control subjects were tested on a task of object visual mental imagery and on a task of spatial visual mental imagery. Both tasks included a condition in which no imagery was needed for adequate performance, but which was in other respects identical to the imagery condition. This allowed us to adjust for nonspecific differences in individual performance. Results: The results revealed a significant difference between patients and controls on the object imagery task (F(1,63) = 11.8, p = 0.001) but not on the spatial imagery task (F(1,63) = 0.14, p = 0.71). To test for a differential effect, we conducted a 2 (patients vs. controls) × 2 (object task vs. spatial task) analysis of variance. The interaction term was statistically significant (F(1,62) = 5.2, p = 0.026). Conclusions: Our findings suggest a differential dysfunction of systems mediating object and spatial visual mental imagery in schizophrenia. PMID:15644999
Moehler, Tobias; Fiehler, Katja
2014-12-01
The present study investigated the coupling of selection-for-perception and selection-for-action during saccadic eye movement planning in three dual-task experiments. We focused on the effects of spatial congruency of saccade target (ST) location and discrimination target (DT) location and the time between ST-cue and Go-signal (SOA) on saccadic eye movement performance. In two experiments, participants performed a visual discrimination task at a cued location while programming a saccadic eye movement to a cued location. In the third experiment, the discrimination task was not cued and appeared at a random location. Spatial congruency of ST-location and DT-location resulted in enhanced perceptual performance irrespective of SOA. Perceptual performance in spatially incongruent trials was above chance, but only when the DT-location was cued. Saccade accuracy and precision were also affected by spatial congruency showing superior performance when the ST- and DT-location coincided. Saccade latency was only affected by spatial congruency when the DT-cue was predictive of the ST-location. Moreover, saccades consistently curved away from the incongruent DT-locations. Importantly, the effects of spatial congruency on saccade parameters only occurred when the DT-location was cued; therefore, results from experiments 1 and 2 are due to the endogenous allocation of attention to the DT-location and not caused by the salience of the probe. The SOA affected saccade latency showing decreasing latencies with increasing SOA. In conclusion, our results demonstrate that visuospatial attention can be voluntarily distributed upon spatially distinct perceptual and motor goals in dual-task situations, resulting in a decline of visual discrimination and saccade performance.
Using Prosopagnosia to Test and Modify Visual Recognition Theory.
O'Brien, Alexander M
2018-02-01
Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions or categorical object differences in visual discrimination tasks versus the control participants' additional ability to use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli in eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.
A Role for Mouse Primary Visual Cortex in Motion Perception.
Marques, Tiago; Summers, Mathew T; Fioreze, Gabriela; Fridman, Marina; Dias, Rodrigo F; Feller, Marla B; Petreanu, Leopoldo
2018-06-04
Visual motion is an ethologically important stimulus throughout the animal kingdom. In primates, motion perception relies on specific higher-order cortical regions. Although mouse primary visual cortex (V1) and higher-order visual areas show direction-selective (DS) responses, their role in motion perception remains unknown. Here, we tested whether V1 is involved in motion perception in mice. We developed a head-fixed discrimination task in which mice must report their perceived direction of motion from random dot kinematograms (RDKs). After training, mice made around 90% correct choices for stimuli with high coherence and performed significantly above chance for 16% coherent RDKs. Accuracy increased with both stimulus duration and visual field coverage of the stimulus, suggesting that mice in this task integrate motion information in time and space. Retinal recordings showed that thalamically projecting On-Off DS ganglion cells display DS responses when stimulated with RDKs. Two-photon calcium imaging revealed that neurons in layer (L) 2/3 of V1 display strong DS tuning in response to this stimulus. Thus, RDKs engage motion-sensitive retinal circuits as well as downstream visual cortical areas. Contralateral V1 activity played a key role in this motion direction discrimination task because its reversible inactivation with muscimol led to a significant reduction in performance. Neurometric-psychometric comparisons showed that an ideal observer could solve the task with the information encoded in DS L2/3 neurons. Motion discrimination of RDKs presents a powerful behavioral tool for dissecting the role of retino-forebrain circuits in motion processing. Copyright © 2018 Elsevier Ltd. All rights reserved.
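A worked example may help make concrete how accuracy as a function of motion coherence can be summarized with a psychometric fit. The sketch below fits a two-alternative Weibull function to invented data; the parameterization and all values are assumptions, not the authors' analysis.

```python
# Sketch: fit a 2AFC psychometric function (Weibull, 50% guess rate) to
# accuracy as a function of motion coherence. Data points are invented.
import numpy as np
from scipy.optimize import curve_fit

coherence = np.array([0.04, 0.08, 0.16, 0.32, 0.64, 1.00])
p_correct = np.array([0.52, 0.58, 0.68, 0.82, 0.93, 0.97])

def weibull_2afc(c, alpha, beta):
    """0.5 guess rate, asymptote near 1, threshold alpha, slope beta."""
    return 0.5 + 0.5 * (1.0 - np.exp(-(c / alpha) ** beta))

(alpha, beta), _ = curve_fit(weibull_2afc, coherence, p_correct, p0=[0.3, 1.5])
print(f"coherence threshold (alpha) = {alpha:.2f}, slope (beta) = {beta:.2f}")
```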
Crowding with detection and coarse discrimination of simple visual features.
Põder, Endel
2008-04-24
Some recent studies have suggested that there are actually no crowding effects with detection and coarse discrimination of simple visual features. The present study tests the generality of this idea. A target Gabor patch, surrounded by either 2 or 6 flanker Gabors, was presented briefly at 4 deg eccentricity of the visual field. Each Gabor patch was oriented either vertically or horizontally (selected randomly). Observers' task was either to detect the presence of the target (presented with probability 0.5) or to identify the orientation of the target. The target-flanker distance was varied. Results were similar for the two tasks but different for 2 and 6 flankers. The idea that feature detection and coarse discrimination are immune to crowding may be valid for the two-flanker condition only. With six flankers, a normal crowding effect was observed. It is suggested that the complexity of the full pattern (target plus flankers) could explain the difference.
Visual Learning Induces Changes in Resting-State fMRI Multivariate Pattern of Information.
Guidotti, Roberto; Del Gratta, Cosimo; Baldassarre, Antonello; Romani, Gian Luca; Corbetta, Maurizio
2015-07-08
When measured with functional magnetic resonance imaging (fMRI) in the resting state (R-fMRI), spontaneous activity is correlated between brain regions that are anatomically and functionally related. Learning and/or task performance can induce modulation of the resting synchronization between brain regions. Moreover, at the neuronal level spontaneous brain activity can replay patterns evoked by a previously presented stimulus. Here we test whether visual learning/task performance can induce a change in the patterns of coded information in R-fMRI signals consistent with a role of spontaneous activity in representing task-relevant information. Human subjects underwent R-fMRI before and after perceptual learning on a novel visual shape orientation discrimination task. Task-evoked fMRI patterns to trained versus novel stimuli were recorded after learning was completed, and before the second R-fMRI session. Using multivariate pattern analysis on task-evoked signals, we found patterns in several cortical regions, as follows: visual cortex, V3/V3A/V7; within the default mode network, precuneus, and inferior parietal lobule; and, within the dorsal attention network, intraparietal sulcus, which discriminated between trained and novel visual stimuli. The accuracy of classification was strongly correlated with behavioral performance. Next, we measured multivariate patterns in R-fMRI signals before and after learning. The frequency and similarity of resting states representing the task/visual stimuli states increased post-learning in the same cortical regions recruited by the task. These findings support a representational role of spontaneous brain activity. Copyright © 2015 the authors.
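The classification step described here (trained versus novel stimuli decoded from activation patterns) can be illustrated with a generic cross-validated linear classifier. The sketch below uses simulated voxel data and scikit-learn; it is a schematic of the approach, not the authors' code.

```python
# Sketch: cross-validated classification of trained vs. novel stimuli from
# voxel activation patterns in a region of interest. Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 120
labels = np.repeat([0, 1], n_trials // 2)             # 0 = novel, 1 = trained
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1, :10] += 0.6                      # weak multivariate signal

clf = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, patterns, labels, cv=cv)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```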
Color coding of control room displays: the psychocartography of visual layering effects.
Van Laar, Darren; Deshe, Ofer
2007-06-01
The aim was to evaluate which of three color coding methods (monochrome, maximally discriminable, and visual layering), used to code four types of control room display format (bars, tables, trend, mimic), was superior in two classes of task (search, compare). It has recently been shown that color coding of visual layers, as used in cartography, may be used to color code any type of information display, but this has yet to be fully evaluated. Twenty-four people took part in a 2 (task) × 3 (coding method) × 4 (format) wholly repeated measures design. The dependent variables assessed were target location reaction time, error rates, workload, and subjective feedback. Overall, the visual layers coding method produced significantly faster reaction times than did the maximally discriminable and the monochrome methods for both the search and compare tasks. No significant difference in errors was observed between conditions for either task type. Significantly less perceived workload was experienced with the visual layers coding method, which was also rated more highly than the other coding methods on a 14-item visual display quality questionnaire. The visual layers coding method is superior to other color coding methods for control room displays when the method supports the user's task. The visual layers color coding method has wide applicability to the design of all complex information displays utilizing color coding, from the most maplike (e.g., air traffic control) to the most abstract (e.g., abstracted ecological display).
Braun, J
1994-02-01
In more than one respect, visual search for the most salient item in a display and search for the least salient item are different kinds of visual task. The present work investigated whether this difference is primarily one of perceptual difficulty, or whether it is more fundamental and relates to visual attention. Display items of different salience were produced by varying either size, contrast, color saturation, or pattern. Perceptual masking was employed and, on average, mask onset was delayed longer in search for the least salient item than in search for the most salient item. As a result, the two types of visual search presented comparable perceptual difficulty, as judged by psychophysical measures of performance, effective stimulus contrast, and stability of decision criterion. To investigate the role of attention in the two types of search, observers attempted to carry out a letter discrimination and a search task concurrently. To discriminate the letters, observers had to direct visual attention at the center of the display and, thus, leave unattended the periphery, which contained target and distractors of the search task. In this situation, visual search for the least salient item was severely impaired while visual search for the most salient item was only moderately affected, demonstrating a fundamental difference with respect to visual attention. A qualitatively identical pattern of results was encountered by Schiller and Lee (1991), who used similar visual search tasks to assess the effect of a lesion in extrastriate area V4 of the macaque.
Pina Rodrigues, Ana; Rebola, José; Jorge, Helena; Ribeiro, Maria José; Pereira, Marcelino; van Asselen, Marieke; Castelo-Branco, Miguel
2017-01-01
The specificity of visual channel impairment in dyslexia has been the subject of much controversy. The purpose of this study was to determine if a differential pattern of impairment can be verified between visual channels in children with developmental dyslexia, and in particular, if the pattern of deficits is more conspicuous in tasks where the magnocellular-dorsal system recruitment prevails. Additionally, we also aimed at investigating the association between visual perception thresholds and reading. In the present case-control study, we compared perception thresholds of 33 children diagnosed with developmental dyslexia and 34 controls in a speed discrimination task, an achromatic contrast sensitivity task, and a chromatic contrast sensitivity task. Moreover, we addressed the correlation between the different perception thresholds and reading performance, as assessed by means of a standardized reading test (accuracy and fluency). Group comparisons were performed by the Mann-Whitney U test, and Spearman's rho was used as a measure of correlation. Results showed that, when compared to controls, children with dyslexia were more impaired in the speed discrimination task, followed by the achromatic contrast sensitivity task, with no impairment in the chromatic contrast sensitivity task. These results are also consistent with the magnocellular theory since the impairment profile of children with dyslexia in the visual threshold tasks reflected the amount of magnocellular-dorsal stream involvement. Moreover, both speed and achromatic thresholds were significantly correlated with reading performance, in terms of accuracy and fluency. Notably, chromatic contrast sensitivity thresholds did not correlate with any of the reading measures. Our evidence stands in favor of a differential visual channel deficit in children with developmental dyslexia and contributes to the debate on the pathophysiology of reading impairments.
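Since the statistics named here are standard, a brief sketch of how the group comparison (Mann-Whitney U) and the threshold-reading correlation (Spearman's rho) could be computed follows; all numbers are invented and the units are hypothetical.

```python
# Sketch: group comparison of speed-discrimination thresholds (Mann-Whitney U)
# and their correlation with reading fluency (Spearman's rho). Data invented.
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

rng = np.random.default_rng(2)
thresholds_dyslexia = rng.normal(12.0, 3.0, 33)   # hypothetical units (e.g., deg/s)
thresholds_controls = rng.normal(9.0, 2.5, 34)

u_stat, p_group = mannwhitneyu(thresholds_dyslexia, thresholds_controls,
                               alternative="two-sided")

reading_fluency = 120 - 3 * thresholds_dyslexia + rng.normal(0, 5, 33)
rho, p_corr = spearmanr(thresholds_dyslexia, reading_fluency)

print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_group:.3f}")
print(f"Spearman rho = {rho:.2f}, p = {p_corr:.3f}")
```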
Attention improves encoding of task-relevant features in the human visual cortex.
Jehee, Janneke F M; Brady, Devin K; Tong, Frank
2011-06-01
When spatial attention is directed toward a particular stimulus, increased activity is commonly observed in corresponding locations of the visual cortex. Does this attentional increase in activity indicate improved processing of all features contained within the attended stimulus, or might spatial attention selectively enhance the features relevant to the observer's task? We used fMRI decoding methods to measure the strength of orientation-selective activity patterns in the human visual cortex while subjects performed either an orientation or contrast discrimination task, involving one of two laterally presented gratings. Greater overall BOLD activation with spatial attention was observed in visual cortical areas V1-V4 for both tasks. However, multivariate pattern analysis revealed that orientation-selective responses were enhanced by attention only when orientation was the task-relevant feature and not when the contrast of the grating had to be attended. In a second experiment, observers discriminated the orientation or color of a specific lateral grating. Here, orientation-selective responses were enhanced in both tasks, but color-selective responses were enhanced only when color was task relevant. In both experiments, task-specific enhancement of feature-selective activity was not confined to the attended stimulus location but instead spread to other locations in the visual field, suggesting the concurrent involvement of a global feature-based attentional mechanism. These results suggest that attention can be remarkably selective in its ability to enhance particular task-relevant features and further reveal that increases in overall BOLD amplitude are not necessarily accompanied by improved processing of stimulus information.
Comparing visual search and eye movements in bilinguals and monolinguals
Hout, Michael C.; Walenchok, Stephen C.; Azuma, Tamiko; Goldinger, Stephen D.
2017-01-01
Recent research has suggested that bilinguals show advantages over monolinguals in visual search tasks, although these findings have been derived from global behavioral measures of accuracy and response times. In the present study we sought to explore the bilingual advantage by using more sensitive eyetracking techniques across three visual search experiments. These spatially and temporally fine-grained measures allowed us to carefully investigate any nuanced attentional differences between bilinguals and monolinguals. Bilingual and monolingual participants completed visual search tasks that varied in difficulty. The experiments required participants to make careful discriminations in order to detect target Landolt Cs among similar distractors. In Experiment 1, participants performed both feature and conjunction search. In Experiments 2 and 3, participants performed visual search while making different types of speeded discriminations, after either locating the target or mentally updating a constantly changing target. The results across all experiments revealed that bilinguals and monolinguals were equally efficient at guiding attention and generating responses. These findings suggest that the bilingual advantage does not reflect a general benefit in attentional guidance, but could reflect more efficient guidance only under specific task demands. PMID:28508116
Lack of power enhances visual perceptual discrimination.
Weick, Mario; Guinote, Ana; Wilkinson, David
2011-09-01
Powerless individuals face much challenge and uncertainty. As a consequence, they are highly vigilant and closely scrutinize their social environments. The aim of the present research was to determine whether these qualities enhance performance in more basic cognitive tasks involving simple visual feature discrimination. To test this hypothesis, participants performed a series of perceptual matching and search tasks involving colour, texture, and size discrimination. As predicted, those primed with powerlessness generated shorter reaction times and made fewer eye movements than either powerful or control participants. The results indicate that the heightened vigilance shown by powerless individuals is associated with an advantage in performing simple types of psychophysical discrimination. These findings highlight, for the first time, an underlying competency in perceptual cognition that sets powerless individuals above their powerful counterparts, an advantage that may reflect functional adaptation to the environmental challenge and uncertainty that they face. © 2011 Canadian Psychological Association
Gould, R W; Dencker, D; Grannan, M; Bubser, M; Zhan, X; Wess, J; Xiang, Z; Locuson, C; Lindsley, C W; Conn, P J; Jones, C K
2015-10-21
The M1 muscarinic acetylcholine receptor (mAChR) subtype has been implicated in the underlying mechanisms of learning and memory and represents an important potential pharmacotherapeutic target for the cognitive impairments observed in neuropsychiatric disorders such as schizophrenia. Patients with schizophrenia show impairments in top-down processing involving conflict between sensory-driven and goal-oriented processes that can be modeled in preclinical studies using touchscreen-based cognition tasks. The present studies used a touchscreen visual pairwise discrimination task in which mice discriminated between a less salient and a more salient stimulus to assess the influence of the M1 mAChR on top-down processing. M1 mAChR knockout (M1 KO) mice showed a slower rate of learning, evidenced by slower increases in accuracy over 12 consecutive days, and required more days to acquire (achieve 80% accuracy) this discrimination task compared to wild-type mice. In addition, the M1 positive allosteric modulator BQCA enhanced the rate of learning this discrimination in wild-type, but not in M1 KO, mice when BQCA was administered daily prior to testing over 12 consecutive days. Importantly, in discriminations between stimuli of equal salience, M1 KO mice did not show impaired acquisition and BQCA did not affect the rate of learning or acquisition in wild-type mice. These studies are the first to demonstrate performance deficits in M1 KO mice using touchscreen cognitive assessments and enhanced rate of learning and acquisition in wild-type mice through M1 mAChR potentiation when the touchscreen discrimination task involves top-down processing. Taken together, these findings provide further support for M1 potentiation as a potential treatment for the cognitive symptoms associated with schizophrenia.
Perceptual learning in visual search: fast, enduring, but non-specific.
Sireteanu, R; Rettenbach, R
1995-07-01
Visual search has been suggested as a tool for isolating visual primitives. Elementary "features" were proposed to involve parallel search, while serial search is necessary for items without a "feature" status, or, in some cases, for conjunctions of "features". In this study, we investigated the role of practice in visual search tasks. We found that, under some circumstances, initially serial tasks can become parallel after a few hundred trials. Learning in visual search is far less specific than learning of visual discriminations and hyperacuity, suggesting that it takes place at another level in the central visual pathway, involving different neural circuits.
Long-term memory of color stimuli in the jungle crow (Corvus macrorhynchos).
Bogale, Bezawork Afework; Sugawara, Satoshi; Sakano, Katsuhisa; Tsuda, Sonoko; Sugita, Shoei
2012-03-01
Wild-caught jungle crows (n = 20) were trained to discriminate between color stimuli in a two-alternative discrimination task. Next, crows were tested for long-term memory after 1-, 2-, 3-, 6-, and 10-month retention intervals. This preliminary study showed that jungle crows learn the task and reach a discrimination criterion (80% or more correct choices in two consecutive sessions of ten trials) in a few trials, and some even in a single session. Most, if not all, crows successfully remembered the constantly reinforced visual stimulus during training after all retention intervals. These results suggest that jungle crows have a high retention capacity for learned information, making no or very few errors even after a 10-month retention interval. This study is the first to show a long-term memory capacity for color stimuli in corvids following brief training, under conditions in which memory rather than rehearsal was apparent. Memory of visual color information is vital for the exploitation of biological resources in crows. We suspect that jungle crows could remember the learned color discrimination task even after a much longer retention interval.
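The learning criterion used here (at least 80% correct, i.e., 8 of 10 trials, in two consecutive sessions) is straightforward to operationalize; the helper below is a hypothetical illustration, not the authors' code.

```python
# Sketch: find the session at which a subject completes two consecutive
# ten-trial sessions with >= 8 correct responses. Scores are invented.
def sessions_to_criterion(correct_per_session, criterion=8, run=2):
    """Return the 1-based session that completes the criterion run, else None."""
    streak = 0
    for i, n_correct in enumerate(correct_per_session, start=1):
        streak = streak + 1 if n_correct >= criterion else 0
        if streak >= run:
            return i
    return None

print(sessions_to_criterion([6, 7, 9, 8, 10]))   # -> 4
```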
Melara, Robert D.; Singh, Shalini; Hien, Denise A.
2018-01-01
Two groups of healthy young adults were exposed to 3 weeks of cognitive training in a modified version of the visual flanker task, one group trained to discriminate the target (discrimination training) and the other group to ignore the flankers (inhibition training). Inhibition training, but not discrimination training, led to significant reductions in both Garner interference, indicating improved selective attention, and in Stroop interference, indicating more efficient resolution of stimulus conflict. The behavioral gains from training were greatest in participants who showed the poorest selective attention at pretest. Electrophysiological recordings revealed that inhibition training increased the magnitude of Rejection Positivity (RP) to incongruent distractors, an event-related potential (ERP) component associated with inhibitory control. Source modeling of RP uncovered a dipole in the medial frontal gyrus for those participants receiving inhibition training, but in the cingulate gyrus for those participants receiving discrimination training. Results suggest that inhibitory control is plastic; inhibition training improves conflict resolution, particularly in individuals with poor attention skills. PMID:29875644
Li, Li; MaBouDi, HaDi; Egertová, Michaela; Elphick, Maurice R.; Chittka, Lars; Perry, Clint J.
2017-01-01
Synaptic plasticity is considered to be a basis for learning and memory. However, the relationship between synaptic arrangements and individual differences in learning and memory is poorly understood. Here, we explored how the density of microglomeruli (synaptic complexes) within specific regions of the bumblebee (Bombus terrestris) brain relates to both visual learning and inter-individual differences in learning and memory performance on a visual discrimination task. Using whole-brain immunolabelling, we measured the density of microglomeruli in the collar region (visual association areas) of the mushroom bodies of the bumblebee brain. We found that bumblebees which made fewer errors during training in a visual discrimination task had higher microglomerular density. Similarly, bumblebees that had better retention of the learned colour-reward associations two days after training had higher microglomerular density. Further experiments indicated experience-dependent changes in neural circuitry: learning a colour-reward contingency with 10 colours (but not two colours) does result, and exposure to many different colours may result, in changes to microglomerular density in the collar region of the mushroom bodies. These results reveal the varying roles that visual experience, visual learning and foraging activity have on neural structure. Although our study does not provide a causal link between microglomerular density and performance, the observed positive correlations provide new insights for future studies into how neural structure may relate to inter-individual differences in learning and memory. PMID:28978727
Visual training improves perceptual grouping based on basic stimulus features.
Kurylo, Daniel D; Waxman, Richard; Kidron, Rachel; Silverstein, Steven M
2017-10-01
Training on visual tasks improves performance on basic and higher order visual capacities. Such improvement has been linked to changes in connectivity among mediating neurons. We investigated whether training effects occur for perceptual grouping. It was hypothesized that repeated engagement of integration mechanisms would enhance grouping processes. Thirty-six participants underwent 15 sessions of training on a visual discrimination task that required perceptual grouping. Participants viewed 20 × 20 arrays of dots or Gabor patches and indicated whether the array appeared grouped as vertical or horizontal lines. Across trials stimuli became progressively disorganized, contingent upon successful discrimination. Four visual dimensions were examined, in which grouping was based on similarity in luminance, color, orientation, and motion. Psychophysical thresholds of grouping were assessed before and after training. Results indicate that performance in all four dimensions improved with training. Training on a control condition, which paralleled the discrimination task but without a grouping component, produced no improvement. In addition, training on only the luminance and orientation dimensions improved performance for those conditions as well as for grouping by color, on which training had not occurred. However, improvement from partial training did not generalize to motion. Results demonstrate that a training protocol emphasizing stimulus integration enhanced perceptual grouping. Results suggest that neural mechanisms mediating grouping by common luminance and/or orientation contribute to those mediating grouping by color but do not share resources for grouping by common motion. Results are consistent with theories of perceptual learning emphasizing plasticity in early visual processing regions.
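Making stimuli progressively more disorganized contingent on correct responses resembles an adaptive staircase. The sketch below implements a generic "two correct, harder; one error, easier" staircase run against a simulated observer; the step size, stopping rule, and observer model are assumptions, not the authors' procedure.

```python
# Sketch: adaptive staircase on a disorganization level (higher = harder),
# driven by a simulated observer. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(3)

def simulated_observer(level, true_threshold=12.0):
    """P(correct) falls smoothly as the disorganization level rises."""
    p_correct = 0.5 + 0.5 / (1.0 + np.exp((level - true_threshold) / 2.0))
    return rng.random() < p_correct

level, step = 2.0, 1.0
streak, reversals, last_move = 0, [], None
while len(reversals) < 8:
    correct = simulated_observer(level)
    streak = streak + 1 if correct else 0
    move = None
    if streak == 2:                      # two correct in a row -> make it harder
        move, streak = "harder", 0
    elif not correct:                    # any error -> make it easier
        move = "easier"
    if move is not None:
        if last_move is not None and move != last_move:
            reversals.append(level)      # record a reversal of direction
        level += step if move == "harder" else -step
        last_move = move

print(f"threshold estimate (mean of last 6 reversals): {np.mean(reversals[-6:]):.1f}")
```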
Enhanced attentional gain as a mechanism for generalized perceptual learning in human visual cortex.
Byers, Anna; Serences, John T
2014-09-01
Learning to better discriminate a specific visual feature (i.e., a specific orientation in a specific region of space) has been associated with plasticity in early visual areas (sensory modulation) and with improvements in the transmission of sensory information from early visual areas to downstream sensorimotor and decision regions (enhanced readout). However, in many real-world scenarios that require perceptual expertise, observers need to efficiently process numerous exemplars from a broad stimulus class as opposed to just a single stimulus feature. Some previous data suggest that perceptual learning leads to highly specific neural modulations that support the discrimination of specific trained features. However, the extent to which perceptual learning acts to improve the discriminability of a broad class of stimuli via the modulation of sensory responses in human visual cortex remains largely unknown. Here, we used functional MRI and a multivariate analysis method to reconstruct orientation-selective response profiles based on activation patterns in the early visual cortex before and after subjects learned to discriminate small offsets in a set of grating stimuli that were rendered in one of nine possible orientations. Behavioral performance improved across 10 training sessions, and there was a training-related increase in the amplitude of orientation-selective response profiles in V1, V2, and V3 when orientation was task relevant compared with when it was task irrelevant. These results suggest that generalized perceptual learning can lead to modified responses in the early visual cortex in a manner that is suitable for supporting improved discriminability of stimuli drawn from a large set of exemplars. Copyright © 2014 the American Physiological Society.
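The multivariate reconstruction described here is in the spirit of an inverted (forward) encoding model: channel-to-voxel weights are estimated from training data, then inverted to recover orientation-selective channel response profiles on held-out trials. The NumPy sketch below uses simulated data and assumed basis functions; it is not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_voxels, n_train, n_test = 9, 100, 180, 90

def basis(orientations_deg, n_channels=9):
    """Half-rectified sinusoidal basis functions spanning 0-180 deg
    (a common choice for orientation encoding models; an assumption here)."""
    centers = np.arange(n_channels) * 180.0 / n_channels
    d = np.deg2rad(orientations_deg[:, None] - centers[None, :])
    return np.maximum(np.cos(2 * d), 0.0) ** 5          # trials x channels

# Simulated voxel patterns standing in for BOLD responses
ori_train = rng.uniform(0, 180, n_train)
ori_test = rng.uniform(0, 180, n_test)
W_true = rng.normal(size=(n_channels, n_voxels))
B_train = basis(ori_train) @ W_true + rng.normal(scale=0.5, size=(n_train, n_voxels))
B_test = basis(ori_test) @ W_true + rng.normal(scale=0.5, size=(n_test, n_voxels))

# 1) Estimate channel-to-voxel weights from training data (least squares)
C_train = basis(ori_train)
W_hat = np.linalg.lstsq(C_train, B_train, rcond=None)[0]   # channels x voxels

# 2) Invert the model to reconstruct channel response profiles on test trials
C_hat = B_test @ W_hat.T @ np.linalg.inv(W_hat @ W_hat.T)  # trials x channels
# The amplitude of the profile peak (after centring each profile on that
# trial's orientation) is the kind of measure reported to change with training.
```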
Wolf, Christian; Schütz, Alexander C
2017-06-01
Saccades bring objects of interest onto the fovea for high-acuity processing. Saccades to rewarded targets show shorter latencies that correlate negatively with expected motivational value. Shorter latencies are also observed when the saccade target is relevant for a perceptual discrimination task. Here we tested whether saccade preparation is influenced by informational value in the same way as it is by motivational value. We defined informational value as the probability that information is task-relevant times the ratio between postsaccadic foveal and presaccadic peripheral discriminability. Using a gaze-contingent display, we independently manipulated peripheral and foveal discriminability of the saccade target. Latencies of saccades with a perceptual task were reduced by 36 ms overall, but they were not modulated by the information the saccades provided (Experiments 1 and 2). However, latencies showed a clear negative linear correlation with the probability that the target was task-relevant (Experiment 3). We replicated the finding that the facilitation by a perceptual task is spatially specific and not due to generally heightened arousal (Experiment 4). Finally, the facilitation only emerged when the perceptual task was in the visual, but not the auditory, modality (Experiment 5). Taken together, these results suggest that saccade latencies are not modulated by informational value in the same way as by motivational value. The facilitation by a perceptual task only arises when task-relevant visual information is foveated, irrespective of whether the foveation is useful or not.
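The verbal definition of informational value given above can be written compactly as follows; the symbols, and the use of a discriminability measure such as d', are our assumptions rather than notation from the article.

```latex
\[
  \mathrm{IV} \;=\; p_{\text{relevant}} \times
  \frac{D_{\text{foveal,\;postsaccadic}}}{D_{\text{peripheral,\;presaccadic}}}
\]
```

Here p_relevant is the probability that the target information is task-relevant, and D denotes discriminability measured at the fovea after the saccade or in the periphery before it.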
Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J
2017-01-01
Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.
Visuoperceptual impairment in dementia with Lewy bodies.
Mori, E; Shimomura, T; Fujimori, M; Hirono, N; Imamura, T; Hashimoto, M; Tanimukai, S; Kazui, H; Hanihara, T
2000-04-01
Background: In dementia with Lewy bodies (DLB), vision-related cognitive and behavioral symptoms are common, and involvement of the occipital visual cortices has been demonstrated in functional neuroimaging studies. Objective: To delineate visuoperceptual disturbance in patients with DLB in comparison with that in patients with Alzheimer disease, and to explore the relationship between visuoperceptual disturbance and the vision-related cognitive and behavioral symptoms. Design: Case-control study. Setting: Research-oriented hospital. Patients: Twenty-four patients with probable DLB (based on criteria of the Consortium on DLB International Workshop) and 48 patients with probable Alzheimer disease (based on criteria of the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association) who were matched to those with DLB 2:1 by age, sex, education, and Mini-Mental State Examination score. Main Outcome Measures: Four test items examining visuoperceptual functions: the object size discrimination, form discrimination, overlapping figure identification, and visual counting tasks. Results: Compared with patients with probable Alzheimer disease, patients with probable DLB scored significantly lower on all the visuoperceptual tasks (P<.04 to P<.001). In the DLB group, patients with visual hallucinations (n = 18) scored significantly lower on the overlapping figure identification (P = .01) than those without them (n = 6), and patients with television misidentifications (n = 5) scored significantly lower on the size discrimination (P<.001), form discrimination (P = .01), and visual counting (P = .007) than those without them (n = 19). Conclusions: Visual perception is defective in probable DLB. The defective visual perception plays a role in the development of visual hallucinations, delusional misidentifications, visual agnosias, and the visuoconstructive disability characteristic of DLB.
Scully, Erin N; Acerbo, Martin J; Lazareva, Olga F
2014-01-01
Earlier, we reported that nucleus rotundus (Rt) together with its inhibitory complex, nucleus subpretectalis/interstitio-pretecto-subpretectalis (SP/IPS), had significantly higher activity in pigeons performing figure-ground discrimination than in the control group that did not perform any visual discriminations. In contrast, color discrimination produced significantly higher activity than control in the Rt but not in the SP/IPS. Finally, shape discrimination produced significantly lower activity than control in both the Rt and the SP/IPS. In this study, we trained pigeons to simultaneously perform three visual discriminations (figure-ground, color, and shape) using the same stimulus displays. When birds learned to perform all three tasks concurrently at high levels of accuracy, we conducted bilateral chemical lesions of the SP/IPS. After a period of recovery, the birds were retrained on the same tasks to evaluate the effect of lesions on maintenance of these discriminations. We found that the lesions of the SP/IPS had no effect on color or shape discrimination and that they significantly impaired figure-ground discrimination. Together with our earlier data, these results suggest that the nucleus Rt and the SP/IPS are the key structures involved in figure-ground discrimination. These results also imply that thalamic processing is critical for figure-ground segregation in avian brain.
Deep neural networks for modeling visual perceptual learning.
Wenliang, Li; Seitz, Aaron R
2018-05-23
Understanding visual perceptual learning (VPL) has become increasingly more challenging as new phenomena are discovered with novel stimuli and training paradigms. While existing models aid our knowledge of critical aspects of VPL, the connections shown by these models between behavioral learning and plasticity across different brain areas are typically superficial. Most models explain VPL as readout from simple perceptual representations to decision areas and are not easily adaptable to explain new findings. Here, we show that a well-known instance of deep neural network (DNN), while not designed specifically for VPL, provides a computational model of VPL with enough complexity to be studied at many levels of analyses. After learning a Gabor orientation discrimination task, the DNN model reproduced key behavioral results, including increasing specificity with higher task precision, and also suggested that learning precise discriminations could asymmetrically transfer to coarse discriminations when the stimulus conditions varied. In line with the behavioral findings, the distribution of plasticity moved towards lower layers when task precision increased, and this distribution was also modulated by tasks with different stimulus types. Furthermore, learning in the network units demonstrated close resemblance to extant electrophysiological recordings in monkey visual areas. Altogether, the DNN fulfilled predictions of existing theories regarding specificity and plasticity, and reproduced findings of tuning changes in neurons of the primate visual areas. Although the comparisons were mostly qualitative, the DNN provides a new method of studying VPL and can serve as a testbed for theories and assist in generating predictions for physiological investigations. SIGNIFICANCE STATEMENT Visual perceptual learning (VPL) has been found to cause changes at multiple stages of the visual hierarchy. We found that training a deep neural network (DNN) on an orientation discrimination task produced similar behavioral and physiological patterns found in human and monkey experiments. Unlike existing VPL models, the DNN was pre-trained on natural images to reach high performance in object recognition but was not designed specifically for VPL, and yet it fulfilled predictions of existing theories regarding specificity and plasticity, and reproduced findings of tuning changes in neurons of the primate visual areas. When used with care, this unbiased and deep-hierarchical model can provide new ways of studying VPL from behavior to physiology. Copyright © 2018 the authors.
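A minimal sketch of the general approach, fine-tuning an ImageNet-pretrained convolutional network on a two-alternative Gabor orientation discrimination, is shown below (PyTorch). The architecture, hyperparameters, and helper names are assumptions for illustration and are not the authors' actual model.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pretrained on natural images (architecture is an assumption).
net = models.vgg16(weights="IMAGENET1K_V1")

# Replace the 1000-way classifier with a 2-way head for the orientation
# discrimination (e.g., clockwise vs counterclockwise of a reference).
net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, 2)

optimizer = torch.optim.SGD(net.parameters(), lr=1e-4, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

def train_step(gabor_batch, labels):
    """One fine-tuning step on a batch of Gabor images (N, 3, 224, 224)."""
    optimizer.zero_grad()
    loss = loss_fn(net(gabor_batch), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Layer-wise plasticity of the kind reported in the abstract can then be quantified by comparing each layer's weights or unit tuning before and after fine-tuning.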
Perceptual grouping enhances visual plasticity.
Mastropasqua, Tommaso; Turatto, Massimo
2013-01-01
Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity.
ERIC Educational Resources Information Center
Lee, Inah; Shin, Ji Yun
2012-01-01
The exact roles of the medial prefrontal cortex (mPFC) in conditional choice behavior are unknown and a visual contextual response selection task was used for examining the issue. Inactivation of the mPFC severely disrupted performance in the task. mPFC inactivations, however, did not disrupt the capability of perceptual discrimination for visual…
Astié, Andrea A; Scardamaglia, Romina C; Muzio, Rubén N; Reboreda, Juan C
2015-10-01
Females of avian brood parasites, like the shiny cowbird (Molothrus bonariensis), locate host nests and on subsequent days return to parasitize them. This ecological pressure for remembering the precise location of multiple host nests may have selected for superior spatial memory abilities. We tested the hypothesis that shiny cowbirds show sex differences in spatial memory abilities associated with sex differences in host nest searching behavior and relative hippocampus volume. We evaluated sex differences during acquisition, reversal and retention after extinction in a visual and a spatial discrimination learning task. Contrary to our prediction, females did not outperform males in the spatial task in either the acquisition or the reversal phases. Similarly, there were no sex differences in either phase in the visual task. During extinction, in both tasks the retention of females was significantly higher than expected by chance up to 50 days after the last rewarded session (∼85-90% of the trials with correct responses), but the performance of males at that time did not differ from that expected by chance. This last result shows a long-term memory capacity of female shiny cowbirds, which were able to remember information learned using either spatial or visual cues after a long retention interval. Copyright © 2015 Elsevier B.V. All rights reserved.
Sung, Kyongje; Gordon, Barry
2018-01-01
Whether transcranial direct current stimulation (tDCS) affects mental functions, and how any such effects arise from its neural effects, continue to be debated. We investigated whether tDCS applied over the visual cortex (Oz) with a vertex (Cz) reference might affect response times (RTs) in a visual search task. We also examined whether any significant tDCS effects would interact with task factors (target presence, discrimination difficulty, and stimulus brightness) that are known to selectively influence one or the other of the two information processing stages posited by current models of visual search. Based on additive factor logic, we expected that the pattern of interactions involving a significant tDCS effect could help us colocalize the tDCS effect to one (or both) of the processing stages. In Experiment 1 (n = 12), anodal tDCS improved RTs significantly; cathodal tDCS produced a nonsignificant trend toward improvement. However, there were no interactions between the anodal tDCS effect and target presence or discrimination difficulty. In Experiment 2 (n = 18), we manipulated stimulus brightness along with target presence and discrimination difficulty. Anodal and cathodal tDCS both produced significant improvements in RTs. Again, the tDCS effects did not interact with any of the task factors. In Experiment 3 (n = 16), electrodes were placed at Cz and on the upper arm, to test for a possible effect of incidental stimulation of the motor regions under Cz. No effect of tDCS on RTs was found. These findings strengthen the case for tDCS having real effects on cerebral information processing. However, these effects did not clearly arise from either of the two processing stages of the visual search process. We suggest that this is because tDCS has a diffuse, pervasive action across the task-relevant neuroanatomical region(s), not a discrete effect in terms of information processing stages.
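The additive factor logic invoked here can be made explicit. Assuming response time decomposes into serially organized stage durations (the notation is ours, not the article's):

```latex
\[
  \mathrm{RT} \;=\; T_{\text{early}}(\text{stimulus brightness}) \;+\;
  T_{\text{decision}}(\text{target presence, difficulty}) \;+\; \varepsilon
\]
```

A manipulation that shortens only one stage should combine additively with factors acting on other stages (no interaction) and interact with factors acting on the same stage. The absence of any interaction between tDCS and the task factors is what motivates the interpretation of a diffuse, stage-unspecific effect.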
Task-irrelevant emotion facilitates face discrimination learning.
Lorenzino, Martina; Caudek, Corrado
2015-03-01
We understand poorly how the ability to discriminate faces from one another is shaped by visual experience. The purpose of the present study is to determine whether face discrimination learning can be facilitated by facial emotions. To answer this question, we used a task-irrelevant perceptual learning paradigm because it closely mimics the learning processes that, in daily life, occur without a conscious intention to learn and without an attentional focus on specific facial features. We measured face discrimination thresholds before and after training. During the training phase (4 days), participants performed a contrast discrimination task on face images. They were not informed that we introduced (task-irrelevant) subtle variations in the face images from trial to trial. For the Identity group, the task-irrelevant features were variations along a morphing continuum of facial identity. For the Emotion group, the task-irrelevant features were variations along an emotional expression morphing continuum. The Control group did not undergo contrast discrimination learning and only performed the pre-training and post-training tests, with the same temporal gap between them as the other two groups. Results indicate that face discrimination improved, but only for the Emotion group. Participants in the Emotion group, moreover, showed face discrimination improvements also for stimulus variations along the facial identity dimension, even if these (task-irrelevant) stimulus features had not been presented during training. The present results highlight the importance of emotions for face discrimination learning. Copyright © 2015 Elsevier Ltd. All rights reserved.
Visual and auditory perception in preschool children at risk for dyslexia.
Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina
2014-11-01
Recently, there has been renewed interest in the perceptual problems of dyslexics. A contentious research issue in this area has been the nature of the perceptual deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia on auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that both the visual and the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the performance of children at risk was lower than that of children without risk for dyslexia on the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. We conclude that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptual deficits are not a consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.
Colour discrimination and categorisation in Williams syndrome.
Farran, Emily K; Cranwell, Matthew B; Alvarez, James; Franklin, Anna
2013-10-01
Individuals with Williams syndrome (WS) present with impaired functioning of the dorsal visual stream relative to the ventral visual stream. As such, little attention has been given to ventral stream functions in WS. We investigated colour processing, a predominantly ventral stream function, for the first time in nineteen individuals with Williams syndrome. Colour discrimination was assessed using the Farnsworth-Munsell 100 hue test. Colour categorisation was assessed using a match-to-sample test and a colour naming task. A visual search task was also included as a measure of sensitivity to the size of perceptual colour difference. Results showed that individuals with WS have reduced colour discrimination relative to typically developing participants matched for chronological age; performance was commensurate with a typically developing group matched for non-verbal ability. In contrast, categorisation was typical in WS, although there was some evidence that sensitivity to the size of perceptual colour differences was reduced in this group. Copyright © 2013 Elsevier Ltd. All rights reserved.
Impaired Filtering of Behaviourally Irrelevant Visual Information in Dyslexia
ERIC Educational Resources Information Center
Roach, Neil W.; Hogben, John H.
2007-01-01
A recent proposal suggests that dyslexic individuals suffer from attentional deficiencies, which impair the ability to selectively process incoming visual information. To investigate this possibility, we employed a spatial cueing procedure in conjunction with a single fixation visual search task measuring thresholds for discriminating the…
Effects of attention and laterality on motion and orientation discrimination in deaf signers.
Bosworth, Rain G; Petrich, Jennifer A F; Dobkins, Karen R
2013-06-01
Previous studies have asked whether visual sensitivity and attentional processing in deaf signers are enhanced or altered as a result of their different sensory experiences during development, i.e., auditory deprivation and exposure to a visual language. In particular, deaf and hearing signers have been shown to exhibit a right visual field/left hemisphere advantage for motion processing, while hearing nonsigners do not. To examine whether this finding extends to other aspects of visual processing, we compared deaf signers and hearing nonsigners on motion, form, and brightness discrimination tasks. Secondly, to examine whether hemispheric lateralities are affected by attention, we employed a dual-task paradigm to measure form and motion thresholds under "full" vs. "poor" attention conditions. Deaf signers, but not hearing nonsigners, exhibited a right visual field advantage for motion processing. This effect was also seen for form processing and not for the brightness task. Moreover, no group differences were observed in attentional effects, and the motion and form visual field asymmetries were not modulated by attention, suggesting they occur at early levels of sensory processing. In sum, the results show that processing of motion and form, believed to be mediated by dorsal and ventral visual pathways, respectively, are left-hemisphere dominant in deaf signers. Published by Elsevier Inc.
Task-related modulation of visual neglect in cancellation tasks
Sarri, Margarita; Greenwood, Richard; Kalra, Lalit; Driver, Jon
2008-01-01
Unilateral neglect involves deficits of spatial exploration and awareness that do not always affect a fixed portion of extrapersonal space, but may vary with current stimulation and possibly with task demands. Here, we assessed any ‘top-down’, task-related influences on visual neglect, with novel experimental variants of the cancellation test. Many different versions of the cancellation test are used clinically, and can differ in the extent of neglect revealed, though the exact factors determining this are not fully understood. Few cancellation studies have isolated the influence of top-down factors, as typically the stimuli are changed also when comparing different tests. Within each of three cancellation studies here, we manipulated task factors, while keeping visual displays identical across conditions to equate purely bottom-up factors. Our results show that top-down task-demands can significantly modulate neglect as revealed by cancellation on the same displays. Varying the target/non-target discrimination required for identical displays has a significant impact. Varying the judgement required can also have an impact on neglect even when all items are targets, so that non-targets no longer need filtering out. Requiring local versus global aspects of shape to be judged for the same displays also has a substantial impact, but the nature of discrimination required by the task still matters even when local/global level is held constant (e.g. for different colour discriminations on the same stimuli). Finally, an exploratory analysis of lesions among our neglect patients suggested that top-down task-related influences on neglect, as revealed by the new cancellation experiments here, might potentially depend on right superior temporal gyrus surviving the lesion. PMID:18790703
Impairing the useful field of view in natural scenes: Tunnel vision versus general interference.
Ringer, Ryan V; Throneburg, Zachary; Johnson, Aaron P; Kramer, Arthur F; Loschky, Lester C
2016-01-01
A fundamental issue in visual attention is the relationship between the useful field of view (UFOV), the region of visual space where information is encoded within a single fixation, and eccentricity. A common assumption is that impairing attentional resources reduces the size of the UFOV (i.e., tunnel vision). However, most research has not accounted for eccentricity-dependent changes in spatial resolution, potentially conflating fixed visual properties with flexible changes in visual attention. Williams (1988, 1989) argued that foveal loads are necessary to reduce the size of the UFOV, producing tunnel vision. Without a foveal load, it is argued that the attentional decrement is constant across the visual field (i.e., general interference). However, other research asserts that auditory working memory (WM) loads produce tunnel vision. To date, foveal versus auditory WM loads have not been compared to determine if they differentially change the size of the UFOV. In two experiments, we tested the effects of a foveal (rotated L vs. T discrimination) task and an auditory WM (N-back) task on an extrafoveal (Gabor) discrimination task. Gabor patches were scaled for size and processing time to produce equal performance across the visual field under single-task conditions, thus removing the confound of eccentricity-dependent differences in visual sensitivity. The results showed that although both foveal and auditory loads reduced Gabor orientation sensitivity, only the foveal load interacted with retinal eccentricity to produce tunnel vision, clearly demonstrating task-specific changes to the form of the UFOV. This has theoretical implications for understanding the UFOV.
“Global” visual training and extent of transfer in amblyopic macaque monkeys
Kiorpes, Lynne; Mangal, Paul
2015-01-01
Perceptual learning is gaining acceptance as a potential treatment for amblyopia in adults and children beyond the critical period. Many perceptual learning paradigms result in very specific improvement that does not generalize beyond the training stimulus, closely related stimuli, or visual field location. To be of use in amblyopia, a less specific effect is needed. To address this problem, we designed a more general training paradigm intended to effect improvement in visual sensitivity across tasks and domains. We used a “global” visual stimulus, random dot motion direction discrimination with 6 training conditions, and tested for posttraining improvement on a motion detection task and 3 spatial domain tasks (contrast sensitivity, Vernier acuity, Glass pattern detection). Four amblyopic macaques practiced the motion discrimination with their amblyopic eye for at least 20,000 trials. All showed improvement, defined as a change of at least a factor of 2, on the trained task. In addition, all animals showed improvements in sensitivity on at least some of the transfer test conditions, mainly the motion detection task; transfer to the spatial domain was inconsistent but best at fine spatial scales. However, the improvement on the transfer tasks was largely not retained at long-term follow-up. Our generalized training approach is promising for amblyopia treatment, but sustaining improved performance may require additional intervention. PMID:26505868
Reimer, Christina B; Strobach, Tilo; Schubert, Torsten
2017-12-01
Visual attention and response selection are limited in capacity. Here, we investigated whether visual attention requires the same bottleneck mechanism as response selection in a dual-task of the psychological refractory period (PRP) paradigm. The dual-task consisted of an auditory two-choice discrimination Task 1 and a conjunction search Task 2, which were presented at variable temporal intervals (stimulus onset asynchrony, SOA). In conjunction search, visual attention is required to select items and to bind their features resulting in a serial search process around the items in the search display (i.e., set size). We measured the reaction time of the visual search task (RT2) and the N2pc, an event-related potential (ERP), which reflects lateralized visual attention processes. If the response selection processes in Task 1 influence the visual attention processes in Task 2, N2pc latency and amplitude would be delayed and attenuated at short SOA compared to long SOA. The results, however, showed that latency and amplitude were independent of SOA, indicating that visual attention was concurrently deployed to response selection. Moreover, the RT2 analysis revealed an underadditive interaction of SOA and set size. We concluded that visual attention does not require the same bottleneck mechanism as response selection in dual-tasks.
Systematic distortions of perceptual stability investigated using immersive virtual reality
Tcheang, Lili; Gilson, Stuart J.; Glennerster, Andrew
2010-01-01
Using an immersive virtual reality system, we measured the ability of observers to detect the rotation of an object when its movement was yoked to the observer's own translation. Most subjects had a large bias such that a static object appeared to rotate away from them as they moved. Thresholds for detecting target rotation were similar to those for an equivalent speed discrimination task carried out by static observers, suggesting that visual discrimination is the predominant limiting factor in detecting target rotation. Adding a stable visual reference frame almost eliminated the bias. Varying the viewing distance of the target had little effect, consistent with observers under-estimating distance walked. However, accuracy of walking to a briefly presented visual target was high and not consistent with an under-estimation of distance walked. We discuss implications for theories of a task-independent representation of visual space. PMID:15845248
Time course of discrimination between emotional facial expressions: the role of visual saliency.
Calvo, Manuel G; Nummenmaa, Lauri
2011-08-01
Saccadic and manual responses were used to investigate the speed of discrimination between happy and non-happy facial expressions in two-alternative-forced-choice tasks. The minimum latencies of correct saccadic responses indicated that the earliest time point at which discrimination occurred ranged between 200 and 280 ms, depending on the type of expression. Corresponding minimum latencies for manual responses ranged between 440 and 500 ms. For both response modalities, visual saliency of the mouth region was a critical factor in facilitating discrimination: The more salient the mouth was in happy face targets in comparison with non-happy distracters, the faster discrimination was. Global image characteristics (e.g., luminance) and semantic factors (i.e., categorical similarity and affective valence of expression) made minor or no contribution to discrimination efficiency. This suggests that visual saliency of distinctive facial features, rather than the significance of expression, is used to make both early and later expression discrimination decisions. Copyright © 2011 Elsevier Ltd. All rights reserved.
Peripheral Vision of Youths with Low Vision: Motion Perception, Crowding, and Visual Search
Tadin, Duje; Nyquist, Jeffrey B.; Lusk, Kelly E.; Corn, Anne L.; Lappin, Joseph S.
2012-01-01
Purpose. Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. Methods. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10–17) and low vision (n = 24, ages 9–18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. Results. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Conclusions. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function. PMID:22836766
The Task-Relevant Attribute Representation Can Mediate the Simon Effect
Chen, Antao
2014-01-01
Researchers have previously suggested a working memory (WM) account of spatial codes; based on this suggestion, the present study carried out three experiments to investigate how the task-relevant attribute representation (verbal or visual) in the typical Simon task affects the Simon effect. Experiment 1 compared the Simon effect between the between- and within-category color conditions, which required subjects to discriminate between red and blue stimuli (presumed to be represented by verbal WM codes because it was easy and fast to name the colors verbally) and to discriminate between two similar green stimuli (presumed to be represented by visual WM codes because it was hard and time-consuming to name the colors verbally), respectively. The results revealed a reliable Simon effect only in the between-category condition. Experiment 2 assessed the Simon effect by requiring subjects to discriminate between two different isosceles trapezoids (within-category shapes) and to discriminate an isosceles trapezoid from a rectangle (between-category shapes), and the results replicated and extended the findings of Experiment 1. In Experiment 3, subjects performed both tasks from Experiment 1: in Experiment 3A, the between-category task preceded the within-category task; in Experiment 3B, the task order was reversed. The results showed a reliable Simon effect when subjects represented the task-relevant stimulus attributes by verbal WM encoding. In addition, the response time (RT) distribution analysis for both the between- and within-category conditions of Experiments 3A and 3B showed that the Simon effect decreased as RTs lengthened. Altogether, although the present results are consistent with the temporal coding account, we propose that the Simon effect also depends on the verbal WM representation of the task-relevant stimulus attribute. PMID:24618692
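The RT distribution analysis mentioned above is commonly implemented as a delta plot: the congruency effect is computed at matched RT quantiles, and a decreasing function indicates a Simon effect that shrinks as responses slow. The sketch below (Python/NumPy, with simulated RTs) illustrates the computation; whether the authors used exactly this procedure is an assumption.

```python
import numpy as np

def simon_delta_plot(rt_congruent, rt_incongruent,
                     quantiles=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Delta-plot sketch: Simon effect (incongruent - congruent) at matched
    RT quantiles; returns (mean RT per bin, effect per bin)."""
    qc = np.quantile(np.asarray(rt_congruent, dtype=float), quantiles)
    qi = np.quantile(np.asarray(rt_incongruent, dtype=float), quantiles)
    return (qc + qi) / 2.0, qi - qc

# Hypothetical RTs (ms) illustrating a Simon effect that decreases with RT
rng = np.random.default_rng(1)
congruent = rng.normal(480, 60, 400)
incongruent = np.concatenate([rng.normal(520, 60, 200), rng.normal(485, 90, 200)])
mean_rt, delta = simon_delta_plot(congruent, incongruent)
```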
Alais, David; Cass, John
2010-06-23
An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ) task. Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.
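Temporal order discrimination thresholds of the kind tracked in this training study are typically obtained by fitting a psychometric function to the proportion of one response as a function of stimulus onset asynchrony. The sketch below (Python/SciPy, hypothetical data) fits a cumulative Gaussian whose spread serves as the threshold; this is a standard approach and not necessarily the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical audiovisual TOJ data: SOAs (ms; negative = sound first)
# and the proportion of "light first" responses at each SOA.
soa = np.array([-240, -120, -60, -30, 0, 30, 60, 120, 240], dtype=float)
p_light_first = np.array([0.05, 0.12, 0.25, 0.40, 0.55, 0.68, 0.80, 0.92, 0.97])

def cum_gauss(x, pss, sd):
    """Cumulative Gaussian: pss = point of subjective simultaneity,
    sd = spread, used here as the temporal order discrimination threshold."""
    return norm.cdf(x, loc=pss, scale=sd)

(pss, sd), _ = curve_fit(cum_gauss, soa, p_light_first, p0=(0.0, 60.0))
print(f"PSS = {pss:.1f} ms, threshold (SD) = {sd:.1f} ms")
```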
Both hand position and movement direction modulate visual attention
Festman, Yariv; Adam, Jos J.; Pratt, Jay; Fischer, Martin H.
2013-01-01
The current study explored effects of continuous hand motion on the allocation of visual attention. A concurrent paradigm was used to combine visually concealed continuous hand movements with an attentionally demanding letter discrimination task. The letter probe appeared contingent upon the moving right hand passing through one of six positions. Discrimination responses were then collected via a keyboard press with the static left hand. Both the right hand's position and its movement direction systematically contributed to participants' visual sensitivity. Discrimination performance increased substantially when the right hand was distant from, but moving toward the visual probe location (replicating the far-hand effect, Festman et al., 2013). However, this effect disappeared when the probe appeared close to the static left hand, supporting the view that static and dynamic features of both hands combine in modulating pragmatic maps of attention. PMID:24098288
Braille character discrimination in blindfolded human subjects.
Kauffman, Thomas; Théoret, Hugo; Pascual-Leone, Alvaro
2002-04-16
Visual deprivation may lead to enhanced performance in other sensory modalities. Whether this is the case in the tactile modality is controversial and may depend upon specific training and experience. We compared the performance of sighted subjects on a Braille character discrimination task to that of normal individuals blindfolded for a period of five days. Some participants in each group (blindfolded and sighted) received intensive Braille training to offset the effects of experience. Blindfolded subjects performed better than sighted subjects in the Braille discrimination task, irrespective of tactile training. For the left index finger, which had not been used in the formal Braille classes, blindfolding had no effect on performance while subjects who underwent tactile training outperformed non-stimulated participants. These results suggest that visual deprivation speeds up Braille learning and may be associated with behaviorally relevant neuroplastic changes.
Attention improves encoding of task-relevant features in the human visual cortex
Jehee, Janneke F.M.; Brady, Devin K.; Tong, Frank
2011-01-01
When spatial attention is directed towards a particular stimulus, increased activity is commonly observed in corresponding locations of the visual cortex. Does this attentional increase in activity indicate improved processing of all features contained within the attended stimulus, or might spatial attention selectively enhance the features relevant to the observer’s task? We used fMRI decoding methods to measure the strength of orientation-selective activity patterns in the human visual cortex while subjects performed either an orientation or contrast discrimination task, involving one of two laterally presented gratings. Greater overall BOLD activation with spatial attention was observed in areas V1-V4 for both tasks. However, multivariate pattern analysis revealed that orientation-selective responses were enhanced by attention only when orientation was the task-relevant feature, and not when the grating’s contrast had to be attended. In a second experiment, observers discriminated the orientation or color of a specific lateral grating. Here, orientation-selective responses were enhanced in both tasks but color-selective responses were enhanced only when color was task-relevant. In both experiments, task-specific enhancement of feature-selective activity was not confined to the attended stimulus location, but instead spread to other locations in the visual field, suggesting the concurrent involvement of a global feature-based attentional mechanism. These results suggest that attention can be remarkably selective in its ability to enhance particular task-relevant features, and further reveal that increases in overall BOLD amplitude are not necessarily accompanied by improved processing of stimulus information. PMID:21632942
Auditory processing deficits in bipolar disorder with and without a history of psychotic features.
Zenisek, RyAnna; Thaler, Nicholas S; Sutton, Griffin P; Ringdahl, Erik N; Snyder, Joel S; Allen, Daniel N
2015-11-01
Auditory perception deficits have been identified in schizophrenia (SZ) and linked to dysfunction in the auditory cortex. Given that psychotic symptoms, including auditory hallucinations, are also seen in bipolar disorder (BD), it may be that individuals with BD who also exhibit psychotic symptoms demonstrate a similar impairment in auditory perception. Fifty individuals with SZ, 30 individuals with bipolar I disorder with a history of psychosis (BD+), 28 individuals with bipolar I disorder with no history of psychotic features (BD-), and 29 normal controls (NC) were administered a tone discrimination task and an emotion recognition task. Mixed-model analyses of covariance with planned comparisons indicated that individuals with BD+ performed at a level that was intermediate between those with BD- and those with SZ on the more difficult condition of the tone discrimination task and on the auditory condition of the emotion recognition task. There were no differences between the BD+ and BD- groups on the visual or auditory-visual affect recognition conditions. Regression analyses indicated that performance on the tone discrimination task predicted performance on all conditions of the emotion recognition task. Auditory hallucinations in BD+ were not related to performance on either task. Our findings suggested that, although deficits in frequency discrimination and emotion recognition are more severe in SZ, these impairments extend to BD+. Although our results did not support the idea that auditory hallucinations may be related to these deficits, they indicated that basic auditory deficits may be a marker for psychosis, regardless of SZ or BD diagnosis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Sleep-dependent consolidation benefits fast transfer of time interval training.
Chen, Lihan; Guo, Lu; Bao, Ming
2017-03-01
A previous study showed that brief training (15 min) in explicitly discriminating the temporal intervals between two paired auditory beeps, or between two paired tactile taps, can significantly improve observers' ability to classify the perceptual states of visual Ternus apparent motion, whereas training on task-irrelevant sensory properties did not improve visual timing (Chen and Zhou in Exp Brain Res 232(6):1855-1864, 2014). The present study examined the role of 'consolidation' after training on temporally task-irrelevant properties, and whether a pure delay (i.e., blank consolidation) following a pretest of the target task would improve visual interval timing, as indexed by the visual Ternus display. A pretest-training-posttest procedure was adopted, with discrimination of Ternus apparent motion as the probe. Extended implicit timing training, in which the time intervals between paired auditory beeps or paired tactile taps were manipulated but the task was to discriminate auditory pitch or tactile intensity, did not produce training benefits (Exps 1 and 3); however, a 24-h delay after this implicit timing training, which included solving 'Sudoku puzzles,' made the otherwise absent training benefits observable (Exps 2, 4, 5 and 6). These improvements in performance were not due to a practice effect for Ternus motion (Exp 7). A general 'blank' consolidation period of 24 h also made improvements in visual timing observable (Exp 8). Taken together, the current findings indicate that sleep-dependent consolidation imposed a general effect, potentially by triggering and maintaining neuroplastic changes in the intrinsic (timing) network that enhance time perception.
Seki, Yoshimasa; Okanoya, Kazuo
2008-02-01
Both visual and auditory information are important for songbirds, especially in developmental and sexual contexts. To investigate bimodal cognition in songbirds, the authors conducted audiovisual discrimination training in Bengalese finches. The authors used two types of stimulus: an "artificial stimulus," which is a combination of simple figures and sound, and a "biological stimulus," consisting of video images of singing males along with their songs. The authors found that while both sexes predominantly used visual cues in the discrimination tasks, males tended to be more dependent on auditory information for the biological stimulus. Female responses were always dependent on the visual stimulus for both stimulus types. Only males changed their discrimination strategy according to stimulus type. Although males used both visual and auditory cues for the biological stimulus, they responded to the artificial stimulus depending only on visual information, as the females did. These findings suggest a sex difference in innate auditory sensitivity. (c) 2008 APA.
Controlling the spotlight of attention: visual span size and flexibility in schizophrenia.
Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M
2011-10-01
The current study investigated the size and flexible control of visual span among patients with schizophrenia during visual search performance. Visual span is the region of the visual field from which one extracts information during a single eye fixation, and a larger visual span size is linked to more efficient search performance. Therefore, a reduced visual span may explain patients' impaired performance on search tasks. The gaze-contingent moving window paradigm was used to estimate the visual span size of patients and healthy participants while they performed two different search tasks. In addition, changes in visual span size were measured as a function of two manipulations of task difficulty: target-distractor similarity and stimulus familiarity. Patients with schizophrenia searched more slowly across both tasks and conditions. Patients also demonstrated smaller visual span sizes on the easier search condition in each task. Moreover, healthy controls' visual span size increased as target discriminability or distractor familiarity increased. This modulation of visual span size, however, was reduced or not observed among patients. The implications of the present findings, with regard to previously reported visual search deficits, and other functional and structural abnormalities associated with schizophrenia, are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
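In the gaze-contingent moving window paradigm, the display is masked outside a window centred on the current gaze position, and visual span is inferred from how search performance varies with window size. A minimal sketch of the masking step is given below (Python/NumPy); the window shape, fill value, and function name are assumptions rather than details from the study.

```python
import numpy as np

def apply_moving_window(image, gaze_xy, radius, fill=0.5):
    """Mask everything outside a circular window centred on the current gaze.
    image: 2-D grayscale search display; gaze_xy: (x, y) in pixels."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    outside = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 > radius ** 2
    windowed = image.copy()
    windowed[outside] = fill          # replace with mean-luminance gray
    return windowed
```

Visual span size can then be estimated as the smallest window at which search performance no longer differs from the unrestricted-viewing baseline.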
Effects of Peripheral Eccentricity and Head Orientation on Gaze Discrimination.
Palanica, Adam; Itier, Roxane J
2014-01-01
Visual search tasks support a special role for direct gaze in human cognition, while classic gaze judgment tasks suggest the congruency between head orientation and gaze direction plays a central role in gaze perception. Moreover, whether gaze direction can be accurately discriminated in the periphery using covert attention is unknown. In the present study, individual faces in frontal and in deviated head orientations with a direct or an averted gaze were flashed for 150 ms across the visual field; participants focused on a centred fixation while judging the gaze direction. Gaze discrimination speed and accuracy varied with head orientation and eccentricity. The limit of accurate gaze discrimination was less than ±6° eccentricity. Response times suggested a processing facilitation for direct gaze in the fovea, irrespective of head orientation; however, by ±3° eccentricity, head orientation started biasing gaze judgments, and this bias increased with eccentricity. Results also suggested a special processing of frontal heads with direct gaze in central vision, rather than a general congruency effect between eye and head cues. Thus, while both head and eye cues contribute to gaze discrimination, their role differs with eccentricity.
Transfer in motion perceptual learning depends on the difficulty of the training task.
Wang, Xiaoxiao; Zhou, Yifeng; Liu, Zili
2013-06-07
One hypothesis in visual perceptual learning is that the amount of transfer depends on the difficulty of the training and transfer tasks (Ahissar & Hochstein, 1997; Liu, 1995, 1999). Jeter, Dosher, Petrov, and Lu (2009), using an orientation discrimination task, challenged this hypothesis by arguing that the amount of transfer depends only on the transfer task but not on the training task. Here we show in a motion direction discrimination task that the amount of transfer indeed depends on the difficulty of the training task. Specifically, participants were first trained with either 4° or 8° direction discrimination along one average direction. Their transfer performance was then tested along an average direction 90° away from the trained direction. A variety of transfer measures consistently demonstrated that transfer performance depended on whether the participants were trained on 4° or 8° directional difference. The results contradicted the prediction that transfer was independent of the training task difficulty.
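The degree of transfer discussed here is often summarized with a transfer index that scales improvement at the untrained (transfer) direction by improvement at the trained direction; the abstract does not give the authors' exact measure, so the sketch below is only an illustrative computation with hypothetical threshold values.

```python
# Illustrative only: a simple transfer index for perceptual learning, defined
# here as improvement at the untrained (transfer) direction relative to
# improvement at the trained direction. Threshold values are hypothetical
# direction-difference thresholds in degrees.

def improvement(pre, post):
    """Fractional threshold reduction from pre- to post-test."""
    return (pre - post) / pre

def transfer_index(pre_trained, post_trained, pre_transfer, post_transfer):
    """Ratio of transfer improvement to trained improvement (1.0 = full transfer)."""
    return improvement(pre_transfer, post_transfer) / improvement(pre_trained, post_trained)

# Hypothetical thresholds (degrees) for a group trained on one average direction
print(transfer_index(pre_trained=8.0, post_trained=4.0,
                     pre_transfer=8.0, post_transfer=6.0))  # 0.5 -> partial transfer
```

A value near 1 indicates complete transfer, while values well below 1 indicate specificity to the trained condition.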
Matching cue size and task properties in exogenous attention.
Burnett, Katherine E; d'Avossa, Giovanni; Sapir, Ayelet
2013-01-01
Exogenous attention is an involuntary, reflexive orienting response that results in enhanced processing at the attended location. The standard view is that this enhancement generalizes across visual properties of a stimulus. We test whether the size of an exogenous cue sets the attentional field and whether this leads to different effects on stimuli with different visual properties. In a dual task with a random-dot kinematogram (RDK) in each quadrant of the screen, participants discriminated the direction of moving dots in one RDK and localized one red dot. Precues were uninformative and consisted of either a large or a small luminance-change frame. The motion discrimination task showed attentional effects following both large and small exogenous cues. The red dot probe localization task showed attentional effects following a small cue, but not a large cue. Two additional experiments showed that the different effects on localization were not due to reduced spatial uncertainty or suppression of RDK dots in the surround. These results indicate that the effects of exogenous attention depend on the size of the cue and the properties of the task, suggesting the involvement of receptive fields with different sizes in different tasks. These attentional effects are likely to be driven by bottom-up mechanisms in early visual areas.
Brain activity associated with selective attention, divided attention and distraction.
Salo, Emma; Salmela, Viljami; Salmi, Juha; Numminen, Jussi; Alho, Kimmo
2017-06-01
Top-down controlled selective or divided attention to sounds and visual objects, as well as bottom-up triggered attention to auditory and visual distractors, has been widely investigated. However, no study has systematically compared brain activations related to all these types of attention. To this end, we used functional magnetic resonance imaging (fMRI) to measure brain activity in participants performing a tone pitch or a foveal grating orientation discrimination task, or both, distracted by novel sounds not sharing frequencies with the tones or by extrafoveal visual textures. To force attention to be focused on the tones or gratings, or both, task difficulty was kept constantly high with an adaptive staircase method. A whole-brain analysis of variance (ANOVA) revealed fronto-parietal attention networks for both selective auditory and visual attention. A subsequent conjunction analysis indicated partial overlaps of these networks. However, in line with some previous studies, the present results also suggest segregation of prefrontal areas involved in the control of auditory and visual attention. The ANOVA also suggested, and another conjunction analysis confirmed, an additional activity enhancement in the left middle frontal gyrus related to divided attention, supporting the role of this area in top-down integration of dual-task performance. As expected, distractors disrupted task performance. However, contrary to our expectations, activations specifically related to the distractors were found only in the auditory and visual cortices. This suggests gating of the distractors from further processing, perhaps due to strictly focused attention in the current demanding discrimination tasks. Copyright © 2017 Elsevier B.V. All rights reserved.
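The abstract notes that task difficulty was held constantly high with an adaptive staircase. The exact procedure is not specified, so the following is a minimal sketch of a generic 1-up/2-down staircase (converging near 70.7% correct) run against an illustrative simulated observer; all parameter names and values are assumptions, not the authors' implementation.

```python
import random

def two_down_one_up(n_trials=200, start=1.0, step=0.1, floor=0.05,
                    p_correct=lambda level: min(0.95, 0.5 + level)):
    """Generic 1-up/2-down staircase: the difficulty parameter `level` is
    lowered after two consecutive correct responses and raised after each
    error, converging near 70.7% correct. `p_correct` is a stand-in
    observer model mapping difficulty level to probability correct."""
    level, correct_streak, track = start, 0, []
    for _ in range(n_trials):
        correct = random.random() < p_correct(level)
        if correct:
            correct_streak += 1
            if correct_streak == 2:          # two in a row -> make task harder
                level = max(floor, level - step)
                correct_streak = 0
        else:                                # any error -> make task easier
            level += step
            correct_streak = 0
        track.append(level)
    return track

levels = two_down_one_up()
print("converged difficulty ~", sum(levels[-50:]) / 50)
```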
Characteristic and intermingled neocortical circuits encode different visual object discriminations.
Zhang, Guo-Rong; Zhao, Hua; Cook, Nathan; Svestka, Michael; Choi, Eui M; Jan, Mary; Cook, Robert G; Geller, Alfred I
2017-07-28
Synaptic plasticity and neural network theories hypothesize that the essential information for advanced cognitive tasks is encoded in specific circuits and neurons within distributed neocortical networks. However, these circuits are incompletely characterized, and we do not know if a specific discrimination is encoded in characteristic circuits among multiple animals. Here, we determined the spatial distribution of active neurons for a circuit that encodes some of the essential information for a cognitive task. We genetically activated protein kinase C pathways in several hundred spatially grouped glutamatergic and GABAergic neurons in rat postrhinal (POR) cortex, a multimodal associative area that is part of a distributed circuit that encodes visual object discriminations. We previously established that this intervention enhances accuracy for specific discriminations. Moreover, the genetically modified, local circuit in POR cortex encodes some of the essential information, and this local circuit is preferentially activated during performance, as shown by activity-dependent gene imaging. Here, we mapped the positions of the active neurons, which revealed that two image sets are encoded in characteristic and different circuits. While characteristic circuits in sensory areas are known to process sensory information, this is the first demonstration that characteristic circuits in a multimodal associative area encode specific discriminations. Further, the circuits encoding the two image sets are intermingled, and likely overlapping, enabling efficient encoding. Consistent with reconsolidation theories, intermingled and overlapping encoding could facilitate the formation of associations between related discriminations, including visually similar discriminations or discriminations learned at the same time or place. Copyright © 2017 Elsevier B.V. All rights reserved.
Empiric determination of corrected visual acuity standards for train crews.
Schwartz, Steven H; Swanson, William H
2005-08-01
Probably the most common visual standard for employment in the transportation industry is best-corrected, high-contrast visual acuity. Because such standards were often established absent empiric linkage to job performance, it is possible that a job applicant or employee who has visual acuity less than the standard may be able to satisfactorily perform the required job activities. For the transportation system that we examined, the train crew is required to inspect visually the length of the train before and during the time it leaves the station. The purpose of the inspection is to determine if an individual is in a hazardous position with respect to the train. In this article, we determine the extent to which high-contrast visual acuity can predict performance on a simulated task. Performance at discriminating hazardous from safe conditions, as depicted in projected photographic slides, was determined as a function of visual acuity. For different levels of visual acuity, which was varied through the use of optical defocus, a subject was required to label scenes as hazardous or safe. Task performance was highly correlated with visual acuity as measured under conditions normally used for vision screenings (high-illumination and high-contrast): as the acuity decreases, performance at discriminating hazardous from safe scenes worsens. This empirically based methodology can be used to establish a corrected high-contrast visual acuity standard for safety-sensitive work in transportation that is linked to the performance of a job-critical task.
Perceptual Grouping Enhances Visual Plasticity
Mastropasqua, Tommaso; Turatto, Massimo
2013-01-01
Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity. PMID:23301100
Attentional demands of movement observation as tested by a dual task approach.
Saucedo Marquez, Cinthia M; Ceux, Tanja; Wenderoth, Nicole
2011-01-01
Movement observation (MO) has been shown to activate the motor cortex of the observer as indicated by an increase of corticomotor excitability for muscles involved in the observed actions. Moreover, behavioral work has strongly suggested that this process occurs in a near-automatic manner. Here we further tested this proposal by applying transcranial magnetic stimulation (TMS) when subjects observed how an actor lifted objects of different weights as a single or a dual task. The secondary task was either an auditory discrimination task (experiment 1) or a visual discrimination task (experiment 2). In experiment 1, we found that corticomotor excitability reflected the force requirements indicated in the observed movies (i.e., higher responses when the actor had to apply higher forces). Interestingly, this effect was found irrespective of whether MO was performed as a single or a dual task. By contrast, no such systematic modulations of corticomotor excitability were observed in experiment 2 when visual distracters were present. We conclude that interference effects might arise when MO is performed while competing visual stimuli are present. However, when a secondary task is situated in a different modality, neural responses are in line with the notion that the observer's motor system responds in a near-automatic manner. This suggests that MO is a task with very low cognitive demands, which might make it a valuable supplement for rehabilitation training, particularly in the acute phase after the incident or in patients suffering from attention deficits. However, it is important to keep in mind that visual distracters might interfere with the neural response in M1.
Lindor, Ebony; Rinehart, Nicole; Fielding, Joanne
2018-05-22
Individuals with Autism Spectrum Disorder (ASD) often excel on visual search and crowding tasks; however, inconsistent findings suggest that this 'islet of ability' may not be characteristic of the entire spectrum. We examined whether performance on these tasks changed as a function of motor proficiency in children with varying levels of ASD symptomology. Children with high ASD symptomology outperformed all others on complex visual search tasks, but only if their motor skills were rated at, or above, age expectations. For the visual crowding task, children with high ASD symptomology and superior motor skills exhibited enhanced target discrimination, whereas those with high ASD symptomology but poor motor skills experienced deficits. These findings may resolve some of the discrepancies in the literature.
Roijendijk, Linsey; Farquhar, Jason; van Gerven, Marcel; Jensen, Ole; Gielen, Stan
2013-01-01
Objective: Covert visual spatial attention is a relatively new task used in brain-computer interfaces (BCIs), and little is known about the characteristics which may affect performance in BCI tasks. We investigated whether eccentricity and task difficulty affect alpha lateralization and BCI performance. Approach: We conducted a magnetoencephalography study with 14 participants who performed a covert orientation discrimination task at an easy or difficult stimulus contrast at either a near (3.5°) or far (7°) eccentricity. Task difficulty was manipulated block-wise and subjects were aware of the difficulty level of each block. Main results: Grand average analyses revealed a significantly larger hemispheric lateralization of posterior alpha power in the difficult condition than in the easy condition, while surprisingly no difference was found for eccentricity. The difference between task difficulty levels was significant in the interval between 1.85 s and 2.25 s after cue onset and originated from a stronger decrease in the contralateral hemisphere. No significant effect of eccentricity was found. Additionally, single-trial classification analysis revealed a higher classification rate in the difficult (65.9%) than in the easy task condition (61.1%). No effect of eccentricity was found in classification rate. Significance: Our results indicate that manipulating the difficulty of a task gives rise to variations in alpha lateralization and that using a more difficult task improves covert visual spatial attention BCI performance. The variations in the alpha lateralization could be caused by different factors such as increased mental effort or higher visual attentional demand. Further research is necessary to discriminate between them. We did not find any effect of eccentricity, in contrast to results of previous research. PMID:24312477
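Posterior alpha lateralization of the kind analyzed here is often summarized with a normalized index contrasting alpha power contralateral versus ipsilateral to the attended side; the abstract does not specify the authors' formula, so the sketch below uses one common convention on simulated single-sensor data. All function names, sensor labels, and values are illustrative.

```python
import numpy as np

def alpha_power(signal, srate, band=(8.0, 12.0)):
    """Mean spectral power in the alpha band, estimated from an FFT of a
    single-trial sensor time series (simulated data here)."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / srate)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

def lateralization_index(p_contra, p_ipsi):
    """Normalized difference: negative values indicate the contralateral
    alpha decrease typically associated with covert spatial attention."""
    return (p_contra - p_ipsi) / (p_contra + p_ipsi)

srate = 250
t = np.arange(0, 2.0, 1.0 / srate)
rng = np.random.default_rng(0)
contra = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)  # weaker alpha
ipsi   = 1.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)  # stronger alpha
print(lateralization_index(alpha_power(contra, srate), alpha_power(ipsi, srate)))
```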
Multiple task performance as a predictor of the potential of air traffic controller trainees.
DOT National Transportation Integrated Search
1972-01-01
Two hundred and twenty-nine air traffic controller trainees were tested on the CAMI Multiple Task Performance Battery. The battery provides objective measures of monitoring, arithmetical skills, visual discrimination, and group problem solving. The c...
Reimer, Christina B; Schubert, Torsten
2017-09-15
Both response selection and visual attention are limited in capacity. According to the central bottleneck model, the response selection processes of two tasks in a dual-task situation are performed sequentially. In conjunction search, visual attention is required to select the items and to bind their features (e.g., color and form), which results in a serial search process. Search time increases as items are added to the search display (i.e., set size effect). When the search display is masked, visual attention deployment is restricted to a brief period of time and target detection decreases as a function of set size. Here, we investigated whether response selection and visual attention (i.e., feature binding) rely on a common capacity limitation or on distinct ones. In four dual-task experiments, participants completed an auditory Task 1 and a conjunction search Task 2 that were presented with an experimentally modulated temporal interval between them (Stimulus Onset Asynchrony, SOA). In Experiment 1, Task 1 was a two-choice discrimination task and the conjunction search display was not masked. In Experiment 2, the response selection difficulty in Task 1 was increased to a four-choice discrimination and the search task was the same as in Experiment 1. We applied the locus-of-slack method in both experiments to analyze conjunction search time; that is, we compared the set size effects across SOAs. Similar set size effects across SOAs (i.e., additive effects of SOA and set size) would indicate sequential processing of response selection and visual attention. However, a significantly smaller set size effect at short SOA compared to long SOA (i.e., underadditive interaction of SOA and set size) would indicate parallel processing of response selection and visual attention. In both experiments, we found underadditive interactions of SOA and set size. In Experiments 3 and 4, the conjunction search display in Task 2 was masked. Task 1 was the same as in Experiments 1 and 2, respectively. In both experiments, the d' analysis revealed that response selection did not affect target detection. Overall, Experiments 1-4 indicated that neither the response selection difficulty in the auditory Task 1 (i.e., two-choice vs. four-choice) nor the type of presentation of the search display in Task 2 (i.e., not masked vs. masked) impaired parallel processing of response selection and conjunction search. We concluded that in general, response selection and visual attention (i.e., feature binding) rely on distinct capacity limitations.
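The locus-of-slack logic described in this abstract compares the set size effect, that is, the slope of search time against set size, at short versus long SOA: similar slopes imply serial processing, whereas a shallower slope at the short SOA (an underadditive interaction) implies that search overlapped with Task 1 response selection. Below is a minimal sketch of that slope comparison using hypothetical mean RTs rather than the authors' data.

```python
import numpy as np

def search_slope(set_sizes, mean_rts):
    """Least-squares slope of mean search RT (ms) against set size (ms/item)."""
    slope, _intercept = np.polyfit(set_sizes, mean_rts, deg=1)
    return slope

set_sizes = np.array([4, 8, 16])
# Hypothetical Task 2 mean RTs (ms) at short and long SOA
rt_short_soa = np.array([820, 860, 940])   # shallower: ~10 ms/item
rt_long_soa  = np.array([560, 640, 800])   # steeper:  ~20 ms/item

slope_short = search_slope(set_sizes, rt_short_soa)
slope_long = search_slope(set_sizes, rt_long_soa)
print(f"set size effect: {slope_short:.1f} ms/item (short SOA) vs "
      f"{slope_long:.1f} ms/item (long SOA)")
# A reliably smaller slope at the short SOA (underadditive SOA x set size
# interaction) is the signature of parallel processing under this logic.
```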
The dual rod system of amphibians supports colour discrimination at the absolute visual threshold
Yovanovich, Carola A. M.; Koskela, Sanna M.; Nevala, Noora; Kondrashev, Sergei L.
2017-01-01
The presence of two spectrally different kinds of rod photoreceptors in amphibians has been hypothesized to enable purely rod-based colour vision at very low light levels. The hypothesis has never been properly tested, so we performed three behavioural experiments at different light intensities with toads (Bufo) and frogs (Rana) to determine the thresholds for colour discrimination. The thresholds of toads were different in mate choice and prey-catching tasks, suggesting that the differential sensitivities of different spectral cone types as well as task-specific factors set limits for the use of colour in these behavioural contexts. In neither task was there any indication of rod-based colour discrimination. By contrast, frogs performing phototactic jumping were able to distinguish blue from green light down to the absolute visual threshold, where vision relies only on rod signals. The remarkable sensitivity of this mechanism comparing signals from the two spectrally different rod types approaches theoretical limits set by photon fluctuations and intrinsic noise. Together, the results indicate that different pathways are involved in processing colour cues depending on the ecological relevance of this information for each task. This article is part of the themed issue ‘Vision in dim light’. PMID:28193811
Fast transfer of crossmodal time interval training.
Chen, Lihan; Zhou, Xiaolin
2014-06-01
Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.
Spatial frequency discrimination learning in normal and developmentally impaired human vision
Astle, Andrew T.; Webb, Ben S.; McGraw, Paul V.
2010-01-01
Perceptual learning effects demonstrate that the adult visual system retains neural plasticity. If perceptual learning holds any value as a treatment tool for amblyopia, trained improvements in performance must generalise. Here we investigate whether spatial frequency discrimination learning generalises within task to other spatial frequencies, and across task to contrast sensitivity. Before and after training, we measured contrast sensitivity and spatial frequency discrimination (at a range of reference frequencies 1, 2, 4, 8, 16 c/deg). During training, normal and amblyopic observers were divided into three groups. Each group trained on a spatial frequency discrimination task at one reference frequency (2, 4, or 8 c/deg). Normal and amblyopic observers who trained at lower frequencies showed a greater rate of within task learning (at their reference frequency) compared to those trained at higher frequencies. Compared to normals, amblyopic observers showed greater within task learning, at the trained reference frequency. Normal and amblyopic observers showed asymmetrical transfer of learning from high to low spatial frequencies. Both normal and amblyopic subjects showed transfer to contrast sensitivity. The direction of transfer for contrast sensitivity measurements was from the trained spatial frequency to higher frequencies, with the bandwidth and magnitude of transfer greater in the amblyopic observers compared to normals. The findings provide further support for the therapeutic efficacy of this approach and establish general principles that may help develop more effective protocols for the treatment of developmental visual deficits. PMID:20832416
Sounds Activate Visual Cortex and Improve Visual Discrimination
Störmer, Viola S.; Martinez, Antigona; McDonald, John J.; Hillyard, Steven A.
2014-01-01
A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. PMID:25031419
Plescia, Fulvio; Sardo, Pierangelo; Rizzo, Valerio; Cacace, Silvana; Marino, Rosa Anna Maria; Brancato, Anna; Ferraro, Giuseppe; Carletti, Fabio; Cannizzaro, Carla
2014-01-01
Neurosteroids can alter neuronal excitability by interacting with specific neurotransmitter receptors, thus affecting several functions such as cognition and emotionality. In this study we investigated, in adult male rats, the effects of the acute administration of pregnenolone-sulfate (PREGS) (10 mg/kg, s.c.) on cognitive processes using the Can test, a non-aversive spatial/visual task which allows the assessment of both spatial orientation-acquisition and object discrimination in a simple and in a complex version of the visual task. Electrophysiological recordings were also performed in vivo, after acute PREGS systemic administration, in order to investigate the neuronal activation in the hippocampus and the perirhinal cortex. Our results indicate that PREGS induces an improvement in spatial orientation-acquisition and in object discrimination in the simple and in the complex visual task; the behavioural responses were also confirmed by electrophysiological recordings showing a potentiation in the neuronal activity of the hippocampus and the perirhinal cortex. In conclusion, this study demonstrates that PREGS systemic administration in rats exerts cognitive-enhancing properties which involve both the acquisition and utilization of spatial information, and object discrimination memory, and also relates the observed behavioural potentiation to an increase in the neuronal firing of discrete cerebral areas critical for spatial learning and object recognition. This provides further evidence in support of a protective and enhancing role of PREGS on human memory. Copyright © 2013. Published by Elsevier B.V.
Detection of visual signals by rats: A computational model
We applied a neural network model of classical conditioning proposed by Schmajuk, Lam, and Gray (1996) to visual signal detection and discrimination tasks designed to assess sustained attention in rats (Bushnell, 1999). The model describes the animals’ expectation of receiving fo...
Møller, Cecilie; Højlund, Andreas; Bærentsen, Klaus B; Hansen, Niels Chr; Skewes, Joshua C; Vuust, Peter
2018-05-01
Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states how the multisensory gain is maximal when responses to the unisensory constituents of the stimuli are weak. It is one of the basic principles underlying multisensory processing of spatiotemporally corresponding crossmodal stimuli that are well established at behavioral as well as neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues to pitch discrimination follow the PoIE at the interindividual level (i.e., varies with varying levels of auditory-only pitch discrimination abilities). Using an oddball pitch discrimination task, we measured the effect of varying visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than incongruent condition. The magnitude of gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.
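The sensitivity index d' used here is the standard signal-detection measure, d' = z(hit rate) minus z(false-alarm rate). The sketch below is a minimal implementation with a simple correction for extreme rates and hypothetical trial counts; it is not the authors' analysis code.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).
    Adding 0.5 to each cell (a log-linear style correction) avoids infinite
    z-scores when a rate is exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical oddball counts: crossmodally congruent vs. incongruent visual cue
print(d_prime(hits=46, misses=4, false_alarms=6, correct_rejections=44))   # higher d'
print(d_prime(hits=38, misses=12, false_alarms=11, correct_rejections=39)) # lower d'
```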
Cognitive tunneling: use of visual information under stress.
Dirkin, G R
1983-02-01
References to "tunnel vision" under stress are considered to describe a process of attentional, rather than visual, narrowing. The hypothesis of Easterbrook that the range of cue utilization is reduced under stress was tested with a primary task located in the visual periphery. High school volunteers performed a visual discrimination task with choice reaction time (RT) as the dependent variable. A 2 X 3 order of presentation by practice design, with repeated measures on the last factor, was employed. Two levels of stress, high and low, were operationalized by the subject's performing in the presence of an evaluative audience or alone. Pulse rate was employed as a manipulation check on arousal. The results partially supported the hypothesis that a peripherally visual primary task could be attended to under stress without decrement in performance.
Role of Gamma-Band Synchronization in Priming of Form Discrimination for Multiobject Displays
ERIC Educational Resources Information Center
Lu, Hongjing; Morrison, Robert G.; Hummel, John E.; Holyoak, Keith J.
2006-01-01
Previous research has shown that synchronized flicker can facilitate detection of a single Kanizsa square. The present study investigated the role of temporally structured priming in discrimination tasks involving perceptual relations between multiple Kanizsa-type figures. Results indicate that visual information presented as temporally structured…
Life Span Changes in Visual Enumeration: The Number Discrimination Task.
ERIC Educational Resources Information Center
Trick, Lana M.; And Others
1996-01-01
Ninety-eight participants from 5 age groups with mean ages of 6, 8, 10, 22, and 72 years were tested in a series of speeded number discriminations. Found that response time slope as a function of number size decreased with age for numbers in the 1-4 range. (MDM)
Olfactory discrimination: when vision matters?
Demattè, M Luisa; Sanabria, Daniel; Spence, Charles
2009-02-01
Many previous studies have attempted to investigate the effect of visual cues on olfactory perception in humans. The majority of this research has only looked at the modulatory effect of color, which has typically been explained in terms of multisensory perceptual interactions. However, such crossmodal effects may equally well relate to interactions taking place at a higher level of information processing as well. In fact, it is well-known that semantic knowledge can have a substantial effect on people's olfactory perception. In the present study, we therefore investigated the influence of visual cues, consisting of color patches and/or shapes, on people's olfactory discrimination performance. Participants had to make speeded odor discrimination responses (lemon vs. strawberry) while viewing a red or yellow color patch, an outline drawing of a strawberry or lemon, or a combination of these color and shape cues. Even though participants were instructed to ignore the visual stimuli, our results demonstrate that the accuracy of their odor discrimination responses was influenced by visual distractors. This result shows that both color and shape information are taken into account during speeded olfactory discrimination, even when such information is completely task irrelevant, hinting at the automaticity of such higher level visual-olfactory crossmodal interactions.
Mattys, Sven L; Scharenborg, Odette
2014-03-01
This study investigates the extent to which age-related language processing difficulties are due to a decline in sensory processes or to a deterioration of cognitive factors, specifically, attentional control. Two facets of attentional control were examined: inhibition of irrelevant information and divided attention. Younger and older adults were asked to categorize the initial phoneme of spoken syllables ("Was it m or n?"), trying to ignore the lexical status of the syllables. The phonemes were manipulated to range in eight steps from m to n. Participants also did a discrimination task on syllable pairs ("Were the initial sounds the same or different?"). Categorization and discrimination were performed under either divided attention (concurrent visual-search task) or focused attention (no visual task). The results showed that even when the younger and older adults were matched on their discrimination scores: (1) the older adults had more difficulty inhibiting lexical knowledge than did younger adults, (2) divided attention weakened lexical inhibition in both younger and older adults, and (3) divided attention impaired sound discrimination more in older than younger listeners. The results confirm the independent and combined contribution of sensory decline and deficit in attentional control to language processing difficulties associated with aging. The relative weight of these variables and their mechanisms of action are discussed in the context of theories of aging and language. (c) 2014 APA, all rights reserved.
Simple and conditional visual discrimination with wheel running as reinforcement in rats.
Iversen, I H
1998-09-01
Three experiments explored whether access to wheel running is sufficient as reinforcement to establish and maintain simple and conditional visual discriminations in nondeprived rats. In Experiment 1, 2 rats learned to press a lit key to produce access to running; responding was virtually absent when the key was dark, but latencies to respond were longer than for customary food and water reinforcers. Increases in the intertrial interval did not improve the discrimination performance. In Experiment 2, 3 rats acquired a go-left/go-right discrimination with a trial-initiating response and reached an accuracy that exceeded 80%; when two keys showed a steady light, pressing the left key produced access to running whereas pressing the right key produced access to running when both keys showed blinking light. Latencies to respond to the lights shortened when the trial-initiation response was introduced and became much shorter than in Experiment 1. In Experiment 3, 1 rat acquired a conditional discrimination task (matching to sample) with steady versus blinking lights at an accuracy exceeding 80%. A trial-initiation response allowed self-paced trials as in Experiment 2. When the rat was exposed to the task for 19 successive 24-hr periods with access to food and water, the discrimination performance settled in a typical circadian pattern and peak accuracy exceeded 90%. When the trial-initiation response was under extinction, without access to running, the circadian activity pattern determined the time of spontaneous recovery. The experiments demonstrate that wheel-running reinforcement can be used to establish and maintain simple and conditional visual discriminations in nondeprived rats.
Learning Enhances Sensory and Multiple Non-sensory Representations in Primary Visual Cortex
Poort, Jasper; Khan, Adil G.; Pachitariu, Marius; Nemri, Abdellatif; Orsolic, Ivana; Krupic, Julija; Bauza, Marius; Sahani, Maneesh; Keller, Georg B.; Mrsic-Flogel, Thomas D.; Hofer, Sonja B.
2015-01-01
We determined how learning modifies neural representations in primary visual cortex (V1) during acquisition of a visually guided behavioral task. We imaged the activity of the same layer 2/3 neuronal populations as mice learned to discriminate two visual patterns while running through a virtual corridor, where one pattern was rewarded. Improvements in behavioral performance were closely associated with increasingly distinguishable population-level representations of task-relevant stimuli, as a result of stabilization of existing and recruitment of new neurons selective for these stimuli. These effects correlated with the appearance of multiple task-dependent signals during learning: those that increased neuronal selectivity across the population when expert animals engaged in the task, and those reflecting anticipation or behavioral choices specifically in neuronal subsets preferring the rewarded stimulus. Therefore, learning engages diverse mechanisms that modify sensory and non-sensory representations in V1 to adjust its processing to task requirements and the behavioral relevance of visual stimuli. PMID:26051421
Saliency affects feedforward more than feedback processing in early visual cortex.
Emmanouil, Tatiana Aloi; Avigan, Philip; Persuh, Marjan; Ro, Tony
2013-07-01
Early visual cortex activity is influenced by both bottom-up and top-down factors. To investigate the influences of bottom-up (saliency) and top-down (task) factors on different stages of visual processing, we used transcranial magnetic stimulation (TMS) of areas V1/V2 to induce visual suppression at varying temporal intervals. Subjects were asked to detect and discriminate the color or the orientation of briefly presented small lines that varied in color saliency based on color contrast with the surround. Regardless of task, color saliency modulated the magnitude of TMS-induced visual suppression, especially at earlier temporal processing intervals that reflect the feedforward stage of visual processing in V1/V2. In a second experiment we found that our color saliency effects were also influenced by an inherent advantage of the color red relative to other hues and that color discrimination difficulty did not affect visual suppression. These results support the notion that early visual processing is stimulus driven and that feedforward and feedback processing encode different types of information about visual scenes. They further suggest that certain hues can be prioritized over others within our visual systems by being more robustly represented during early temporal processing intervals. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Laverghetta, A. V.; Shimizu, T.
1999-01-01
The nucleus rotundus is a large thalamic nucleus in birds and plays a critical role in many visual discrimination tasks. In order to test the hypothesis that there are functionally distinct subdivisions in the nucleus rotundus, effects of selective lesions of the nucleus were studied in pigeons. The birds were trained to discriminate between different types of stationary objects and between different directions of moving objects. Multiple regression analyses revealed that lesions in the anterior, but not posterior, division caused deficits in discrimination of small stationary stimuli. Lesions in neither the anterior nor the posterior division predicted effects on the discrimination of moving stimuli. These results are consistent with a prediction derived from the hypothesis that the nucleus is composed of functional subdivisions.
Statistical learning and auditory processing in children with music training: An ERP study.
Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Ibrahim, Ronny; Arciuli, Joanne
2017-07-01
The question of whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with music training versus those without music training across a range of auditory and cognitive measures, including the ability to implicitly detect statistical regularities in input (statistical learning). Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained and age-matched untrained children between the ages of 9 and 11 years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks. Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (initial stimulus) elicited larger responses in the musically trained children during both auditory and visual statistical learning tasks. In addition, children's music skills were associated with performance on auditory and visual behavioural statistical learning tasks. Our data suggest that individual differences in musical skills are associated with children's ability to detect regularities. The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
DOT National Transportation Integrated Search
1974-11-01
Two hundred and twenty-nine air traffic controller trainees were tested on the CAMI Multiple Task Performance Battery. The battery provides objective measures of monitoring, arithmetical skills, visual discrimination, and group problem solving. The c...
NMDA receptor antagonist ketamine impairs feature integration in visual perception.
Meuwese, Julia D I; van Loon, Anouk M; Scholte, H Steven; Lirk, Philipp B; Vulink, Nienke C C; Hollmann, Markus W; Lamme, Victor A F
2013-01-01
Recurrent interactions between neurons in the visual cortex are crucial for the integration of image elements into coherent objects, such as in figure-ground segregation of textured images. Blocking N-methyl-D-aspartate (NMDA) receptors in monkeys can abolish neural signals related to figure-ground segregation and feature integration. However, it is unknown whether this also affects perceptual integration itself. Therefore, we tested whether ketamine, a non-competitive NMDA receptor antagonist, reduces feature integration in humans. We administered a subanesthetic dose of ketamine to healthy subjects who performed a texture discrimination task in a placebo-controlled double blind within-subject design. We found that ketamine significantly impaired performance on the texture discrimination task compared to the placebo condition, while performance on a control fixation task was much less impaired. This effect is not merely due to task difficulty or a difference in sedation levels. We are the first to show a behavioral effect on feature integration by manipulating the NMDA receptor in humans.
Bosworth, Rain G.; Petrich, Jennifer A.; Dobkins, Karen R.
2012-01-01
In order to investigate differences in the effects of spatial attention between the left visual field (LVF) and the right visual field (RVF), we employed a full/poor attention paradigm using stimuli presented in the LVF vs. RVF. In addition, to investigate differences in the effects of spatial attention between the Dorsal and Ventral processing streams, we obtained motion thresholds (motion coherence thresholds and fine direction discrimination thresholds) and orientation thresholds, respectively. The results of this study showed negligible effects of attention on the orientation task, in either the LVF or RVF. In contrast, for both motion tasks, there was a significant effect of attention in the LVF, but not in the RVF. These data provide psychophysical evidence for greater effects of spatial attention in the LVF/right hemisphere, specifically, for motion processing in the Dorsal stream. PMID:22051893
Sensitivity of the lane change test as a measure of in-vehicle system demand.
Young, Kristie L; Lenné, Michael G; Williamson, Amy R
2011-05-01
The Lane Change Test (LCT) is one of the growing number of methods developed to quantify driving performance degradation brought about by the use of in-vehicle devices. Beyond its validity and reliability, for such a test to be of practical use, it must also be sensitive to the varied demands of individual tasks. The current study evaluated the ability of several recent LCT lateral control and event detection parameters to discriminate between visual-manual and cognitive surrogate In-Vehicle Information System tasks with different levels of demand. Twenty-seven participants (mean age 24.4 years) completed a PC version of the LCT while performing visual search and math problem solving tasks. A number of the lateral control metrics were found to be sensitive to task differences, but the event detection metrics were less able to discriminate between tasks. The mean deviation and lane excursion measures were able to distinguish between the visual and cognitive tasks, but were less sensitive to the different levels of task demand. The other LCT metrics examined were less sensitive to task differences. A major factor influencing the sensitivity of at least some of the LCT metrics could be the type of lane change instructions given to participants. The provision of clear and explicit lane change instructions and further refinement of its metrics will be essential for increasing the utility of the LCT as an evaluation tool. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.
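The LCT mean deviation metric mentioned here is generally computed as the average absolute lateral offset between the driven trajectory and a normative lane-change path. The sketch below is a simplified illustration with simulated trajectories and assumed values (e.g., lane offset, track length, and a step-shaped normative path); it is not the standardized reference implementation.

```python
import numpy as np

def mean_deviation(lateral_actual, lateral_normative):
    """Mean absolute lateral offset (m) between the driven trajectory and a
    normative lane-change path, both sampled at the same track positions."""
    return np.mean(np.abs(np.asarray(lateral_actual) - np.asarray(lateral_normative)))

track = np.linspace(0, 3000, 3001)                    # metres along the track (assumed)
# Crude normative path: a single lane change of 3.85 m between 1000 m and 2000 m
normative = np.where((track > 1000) & (track < 2000), 3.85, 0.0)
driven = normative + np.random.default_rng(1).normal(0.0, 0.3, track.size)  # noisy driving
print(f"mean deviation: {mean_deviation(driven, normative):.2f} m")
```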
Variability in visual working memory ability limits the efficiency of perceptual decision making.
Ester, Edward F; Ho, Tiffany C; Brown, Scott D; Serences, John T
2014-04-02
The ability to make rapid and accurate decisions based on limited sensory information is a critical component of visual cognition. Available evidence suggests that simple perceptual discriminations are based on the accumulation and integration of sensory evidence over time. However, the memory system(s) mediating this accumulation are unclear. One candidate system is working memory (WM), which enables the temporary maintenance of information in a readily accessible state. Here, we show that individual variability in WM capacity is strongly correlated with the speed of evidence accumulation in speeded two-alternative forced choice tasks. This relationship generalized across different decision-making tasks, and could not be easily explained by variability in general arousal or vigilance. Moreover, we show that performing a difficult discrimination task while maintaining a concurrent memory load has a deleterious effect on the latter, suggesting that WM storage and decision making are directly linked.
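The accumulation-and-integration account invoked here can be illustrated with a simple two-boundary diffusion (random walk) simulation in which a higher drift rate yields faster and more accurate decisions. This generic sketch is not the sequential-sampling model fitted by the authors; all parameter names and values are illustrative.

```python
import numpy as np

def simulate_trial(drift, boundary=1.0, noise_sd=1.0, dt=0.001, non_decision=0.3,
                   rng=np.random.default_rng()):
    """One trial of a two-boundary diffusion: evidence accumulates with the
    given drift rate plus Gaussian noise until it reaches +boundary (correct)
    or -boundary (error). Returns (response_time_s, correct)."""
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary:
        evidence += drift * dt + rng.normal(0.0, noise_sd * np.sqrt(dt))
        t += dt
    return non_decision + t, evidence > 0

rng = np.random.default_rng(0)
for drift in (0.5, 2.0):   # slower vs. faster evidence accumulation
    trials = [simulate_trial(drift, rng=rng) for _ in range(500)]
    rts = [rt for rt, _ in trials]
    acc = np.mean([ok for _, ok in trials])
    print(f"drift {drift}: mean RT {np.mean(rts):.2f} s, accuracy {acc:.2f}")
```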
Task-specific reorganization of the auditory cortex in deaf humans
Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin
2017-01-01
The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into an functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior–lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain. PMID:28069964
Hecker, Elizabeth A.; Serences, John T.; Srinivasan, Ramesh
2013-01-01
Interacting with the environment requires the ability to flexibly direct attention to relevant features. We examined the degree to which individuals attend to visual features within and across Detection, Fine Discrimination, and Coarse Discrimination tasks. Electroencephalographic (EEG) responses were measured to an unattended peripheral flickering (4 or 6 Hz) grating while individuals (n = 33) attended to orientations that were offset by 0°, 10°, 20°, 30°, 40°, and 90° from the orientation of the unattended flicker. These unattended responses may be sensitive to attentional gain at the attended spatial location, since attention to features enhances early visual responses throughout the visual field. We found no significant differences in tuning curves across the three tasks in part due to individual differences in strategies. We sought to characterize individual attention strategies using hierarchical Bayesian modeling, which grouped individuals into families of curves that reflect attention to the physical target orientation (“on-channel”) or away from the target orientation (“off-channel”) or a uniform distribution of attention. The different curves were related to behavioral performance; individuals with “on-channel” curves had lower thresholds than individuals with uniform curves. Individuals with “off-channel” curves during Fine Discrimination additionally had lower thresholds than those assigned to uniform curves, highlighting the perceptual benefits of attending away from the physical target orientation during fine discriminations. Finally, we showed that a subset of individuals with optimal curves (“on-channel”) during Detection also demonstrated optimal curves (“off-channel”) during Fine Discrimination, indicating that a subset of individuals can modulate tuning optimally for detection and discrimination. PMID:23678013
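Frequency-tagged SSEP responses such as the 4 and 6 Hz signals described here are commonly quantified as the spectral amplitude at the tagging frequency of a trial-averaged epoch. A minimal sketch on simulated data (assumed sampling rate and epoch length; not the authors' pipeline):

```python
import numpy as np

def ssep_amplitude(eeg, srate, tag_freq):
    """Amplitude of the steady-state response at the tagging frequency,
    taken from the FFT of a (trial-averaged) EEG epoch."""
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / srate)
    amp = np.abs(np.fft.rfft(eeg)) * 2.0 / eeg.size
    return amp[np.argmin(np.abs(freqs - tag_freq))]

srate = 256
t = np.arange(0, 4.0, 1.0 / srate)                 # 4-s epoch -> 0.25 Hz resolution
rng = np.random.default_rng(2)
eeg = 1.2 * np.sin(2 * np.pi * 6 * t) + rng.normal(0.0, 2.0, t.size)  # 6 Hz tag + noise
print(f"6 Hz SSEP amplitude: {ssep_amplitude(eeg, srate, 6.0):.2f} (simulated units)")
```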
ERIC Educational Resources Information Center
Bartko, Susan J.; Winters, Boyer D.; Cowell, Rosemary A.; Saksida, Lisa M.; Bussey, Timothy J.
2007-01-01
The perirhinal cortex (PRh) has a well-established role in object recognition memory. More recent studies suggest that PRh is also important for two-choice visual discrimination tasks. Specifically, it has been suggested that PRh contains conjunctive representations that help resolve feature ambiguity, which occurs when a task cannot easily be…
Limited transfer of long-term motion perceptual learning with double training.
Liang, Ju; Zhou, Yifeng; Fahle, Manfred; Liu, Zili
2015-01-01
A significant recent development in visual perceptual learning research is the double training technique. With this technique, Xiao, Zhang, Wang, Klein, Levi, and Yu (2008) found complete transfer in tasks that had previously been shown to be stimulus specific. The significance of this finding is that this technique has since been successful in all tasks tested, including motion direction discrimination. Here, we investigated whether or not this technique could generalize to longer-term learning, using the method of constant stimuli. Our task was learning to discriminate motion directions of random dots. The second leg of training was contrast discrimination along a new average direction of the same moving dots. We found that, although exposure to moving dots along a new direction facilitated motion direction discrimination, this partial transfer was far from complete. We conclude that, although perceptual learning is transferable under certain conditions, stimulus specificity also remains an inherent characteristic of motion perceptual learning.
Colour processing in complex environments: insights from the visual system of bees
Dyer, Adrian G.; Paulk, Angelique C.; Reser, David H.
2011-01-01
Colour vision enables animals to detect and discriminate differences in chromatic cues independent of brightness. How the bee visual system manages this task is of interest for understanding information processing in miniaturized systems, as well as the relationship between bee pollinators and flowering plants. Bees can quickly discriminate dissimilar colours, but can also slowly learn to discriminate very similar colours, raising the question as to how the visual system can support this, or whether it is simply a learning and memory operation. We discuss the detailed neuroanatomical layout of the brain, identify probable brain areas for colour processing, and suggest that there may be multiple systems in the bee brain that mediate either coarse or fine colour discrimination ability in a manner dependent upon individual experience. These multiple colour pathways have been identified along both functional and anatomical lines in the bee brain, providing us with some insights into how the brain may operate to support complex colour discrimination behaviours. PMID:21147796
Effects of Peripheral Eccentricity and Head Orientation on Gaze Discrimination
Palanica, Adam; Itier, Roxane J.
2017-01-01
Visual search tasks support a special role for direct gaze in human cognition, while classic gaze judgment tasks suggest the congruency between head orientation and gaze direction plays a central role in gaze perception. Moreover, whether gaze direction can be accurately discriminated in the periphery using covert attention is unknown. In the present study, individual faces in frontal and in deviated head orientations with a direct or an averted gaze were flashed for 150 ms across the visual field; participants focused on a centred fixation while judging the gaze direction. Gaze discrimination speed and accuracy varied with head orientation and eccentricity. The limit of accurate gaze discrimination was less than ±6° eccentricity. Response times suggested a processing facilitation for direct gaze in the fovea, irrespective of head orientation; however, by ±3° eccentricity, head orientation started biasing gaze judgments, and this bias increased with eccentricity. Results also suggested a special processing of frontal heads with direct gaze in central vision, rather than a general congruency effect between eye and head cues. Thus, while both head and eye cues contribute to gaze discrimination, their role differs with eccentricity. PMID:28344501
Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B
2012-07-16
Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.
Evaluation of a pilot workload metric for simulated VTOL landing tasks
NASA Technical Reports Server (NTRS)
North, R. A.; Graffunder, K.
1979-01-01
A methodological approach to measuring workload was investigated for evaluation of new concepts in VTOL aircraft displays. Multivariate discriminant functions were formed from conventional flight performance and/or visual response variables to maximize detection of experimental differences. The flight performance variable discriminant showed maximum differentiation between crosswind conditions. The visual response measure discriminant maximized differences between fixed vs. motion base conditions and experimental displays. Physiological variables were used to attempt to predict the discriminant function values for each subject/condition/trial. The weights of the physiological variables in these equations showed agreement with previous studies. High muscle tension, light but irregular breathing patterns, and higher heart rate with low amplitude all produced higher scores on this scale and thus, represented higher workload levels.
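A loose, modernized sketch of the analysis strategy described above, under the assumption that a linear discriminant over flight-performance variables is formed first and its score is then predicted from physiological measures; the variables, weights, and data below are synthetic stand-ins, not the study's:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 60
    flight_perf = rng.normal(size=(n, 4))           # e.g., tracking error, sink rate, ...
    condition = (rng.random(n) > 0.5).astype(int)   # e.g., crosswind vs. no crosswind
    flight_perf[condition == 1] += 0.8              # make the conditions separable

    # Discriminant function over flight-performance variables
    lda = LinearDiscriminantAnalysis().fit(flight_perf, condition)
    discriminant_score = lda.decision_function(flight_perf)

    # Predict the discriminant score from (synthetic) physiological variables
    physio = rng.normal(size=(n, 3))                # e.g., EMG, respiration, heart rate
    reg = LinearRegression().fit(physio, discriminant_score)
    print("R^2 of physiological prediction:", reg.score(physio, discriminant_score))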
Discrimination of holograms and real objects by pigeons (Columba livia) and humans (Homo sapiens).
Stephan, Claudia; Steurer, Michael M; Aust, Ulrike
2014-08-01
The type of stimulus material employed in visual tasks is crucial to all comparative cognition research that involves object recognition. There is considerable controversy about the use of 2-dimensional stimuli and the impact that the lack of the 3rd dimension (i.e., depth) may have on animals' performance in tests for their visual and cognitive abilities. We report evidence of discrimination learning using a completely novel type of stimuli, namely, holograms. Like real objects, holograms provide full 3-dimensional shape information but they also offer many possibilities for systematically modifying the appearance of a stimulus. Hence, they provide a promising means for investigating visual perception and cognition of different species in a comparative way. We trained pigeons and humans to discriminate either between 2 real objects or between holograms of the same 2 objects, and we subsequently tested both species for the transfer of discrimination to the other presentation mode. The lack of any decrements in accuracy suggests that real objects and holograms were perceived as equivalent in both species and shows the general appropriateness of holograms as stimuli in visual tasks. A follow-up experiment involving the presentation of novel views of the training objects and holograms revealed some interspecies differences in rotational invariance, thereby confirming and extending the results of previous studies. Taken together, these results suggest that holograms may not only provide a promising tool for investigating yet unexplored issues, but their use may also lead to novel insights into some crucial aspects of comparative visual perception and categorization.
Prestimulus EEG Power Predicts Conscious Awareness But Not Objective Visual Performance
Veniero, Domenica
2017-01-01
Abstract Prestimulus oscillatory neural activity has been linked to perceptual outcomes during performance of psychophysical detection and discrimination tasks. Specifically, the power and phase of low frequency oscillations have been found to predict whether an upcoming weak visual target will be detected or not. However, the mechanisms by which baseline oscillatory activity influences perception remain unclear. Recent studies suggest that the frequently reported negative relationship between α power and stimulus detection may be explained by changes in detection criterion (i.e., increased target present responses regardless of whether the target was present/absent) driven by the state of neural excitability, rather than changes in visual sensitivity (i.e., more veridical percepts). Here, we recorded EEG while human participants performed a luminance discrimination task on perithreshold stimuli in combination with single-trial ratings of perceptual awareness. Our aim was to investigate whether the power and/or phase of prestimulus oscillatory activity predict discrimination accuracy and/or perceptual awareness on a trial-by-trial basis. Prestimulus power (3–28 Hz) was inversely related to perceptual awareness ratings (i.e., higher ratings in states of low prestimulus power/high excitability) but did not predict discrimination accuracy. In contrast, prestimulus oscillatory phase did not predict awareness ratings or accuracy in any frequency band. These results provide evidence that prestimulus α power influences the level of subjective awareness of threshold visual stimuli but does not influence visual sensitivity when a decision has to be made regarding stimulus features. Hence, we find a clear dissociation between the influence of ongoing neural activity on conscious awareness and objective performance. PMID:29255794
Effects of task demands on the early neural processing of fearful and happy facial expressions
Itier, Roxane J.; Neath-Tavares, Karly N.
2017-01-01
Task demands shape how we process environmental stimuli but their impact on the early neural processing of facial expressions remains unclear. In a within-subject design, ERPs were recorded to the same fearful, happy and neutral facial expressions presented during gender discrimination, explicit emotion discrimination, and oddball detection tasks, the most studied tasks in the field. Using an eye tracker, fixation on the nose of the face was enforced with a gaze-contingent presentation. Task demands modulated amplitudes from 200 to 350 ms at occipito-temporal sites spanning the EPN component. Amplitudes were more negative for fearful than neutral expressions starting on the N170 from 150 to 350 ms, with a temporo-occipital distribution, whereas no clear effect of happy expressions was seen. Task and emotion effects never interacted in any time window or for the ERP components analyzed (P1, N170, EPN). Thus, whether emotion is explicitly discriminated or irrelevant for the task at hand, neural correlates of fearful and happy facial expressions seem immune to these task demands during the first 350 ms of visual processing. PMID:28315309
Abbey, Craig K.; Zemp, Roger J.; Liu, Jie; Lindfors, Karen K.; Insana, Michael F.
2009-01-01
We investigate and extend the ideal observer methodology developed by Smith and Wagner to detection and discrimination tasks related to breast sonography. We provide a numerical approach for evaluating the ideal observer acting on radio-frequency (RF) frame data, which involves inversion of large nonstationary covariance matrices, and we describe a power-series approach to computing this inverse. Considering a truncated power series suggests that the RF data be Wiener-filtered before forming the final envelope image. We have compared human performance for Wiener-filtered and conventional B-mode envelope images using psychophysical studies for five tasks related to breast cancer classification. We find significant improvements in visual detection and discrimination efficiency in four of these five tasks. We also use the Smith-Wagner approach to distinguish between human and processing inefficiencies, and find that generally the principal limitation comes from the information lost in computing the final envelope image. PMID:16468454
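As a generic illustration of the two quantities named above (standard identities, not the paper's derivation): writing the RF covariance as \Sigma = \Sigma_0 + \Delta\Sigma, a power-series (Neumann) inverse is

    \Sigma^{-1} = \sum_{k=0}^{\infty} \left( -\Sigma_0^{-1} \Delta\Sigma \right)^{k} \Sigma_0^{-1}
                \approx \Sigma_0^{-1} - \Sigma_0^{-1} \Delta\Sigma \, \Sigma_0^{-1},

which converges when the spectral radius of \Sigma_0^{-1}\Delta\Sigma is below one. The Wiener filtering suggested for the RF data before envelope detection has the generic frequency-domain form

    H(f) = \frac{S(f)}{S(f) + N(f)},

with S(f) and N(f) the signal and noise power spectra; the specific spectra used in the study are not reproduced here.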
Sounds activate visual cortex and improve visual discrimination.
Feng, Wenfeng; Störmer, Viola S; Martinez, Antigona; McDonald, John J; Hillyard, Steven A
2014-07-16
A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. Copyright © 2014 the authors.
Visual versus Phonological Abilities in Spanish Dyslexic Boys and Girls
ERIC Educational Resources Information Center
Bednarek, Dorota; Saldana, David; Garcia, Isabel
2009-01-01
Phonological and visual theories propose different primary deficits as part of the explanation for dyslexia. Both theories were put to test in a sample of Spanish dyslexic readers. Twenty-one dyslexic and 22 typically-developing children matched on chronological age were administered phonological discrimination and awareness tasks and coherent…
Spatial Probability Cuing and Right Hemisphere Damage
ERIC Educational Resources Information Center
Shaqiri, Albulena; Anderson, Britt
2012-01-01
In this experiment we studied statistical learning, inter-trial priming, and visual attention. We assessed healthy controls and right brain damaged (RBD) patients with and without neglect, on a simple visual discrimination task designed to measure priming effects and probability learning. All participants showed a preserved priming effect for item…
Enhanced Perceptual Functioning in Autism: An Update, and Eight Principles of Autistic Perception
ERIC Educational Resources Information Center
Mottron, Laurent; Dawson, Michelle; Soulieres, Isabelle; Hubert, Benedicte; Burack, Jake
2006-01-01
We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception…
Context-dependent similarity effects in letter recognition.
Kinoshita, Sachiko; Robidoux, Serje; Guilbert, Daniel; Norris, Dennis
2015-10-01
In visual word recognition tasks, digit primes that are visually similar to letter string targets (e.g., 4/A, 8/B) are known to facilitate letter identification relative to visually dissimilar digits (e.g., 6/A, 7/B); in contrast, with letter primes, visual similarity effects have been elusive. In the present study we show that the visual similarity effect with letter primes can be made to come and go, depending on whether it is necessary to discriminate between visually similar letters. The results support a Bayesian view which regards letter recognition not as a passive activation process driven by the fixed stimulus properties, but as a dynamic evidence accumulation process for a decision that is guided by the task context.
Fornix and medial temporal lobe lesions lead to comparable deficits in complex visual perception.
Lech, Robert K; Koch, Benno; Schwarz, Michael; Suchan, Boris
2016-05-04
Recent research dealing with the structures of the medial temporal lobe (MTL) has shifted away from exclusively investigating memory-related processes and has repeatedly incorporated the investigation of complex visual perception. Several studies have demonstrated that higher level visual tasks can recruit structures like the hippocampus and perirhinal cortex in order to successfully perform complex visual discriminations, leading to a perceptual-mnemonic or representational view of the medial temporal lobe. The current study employed a complex visual discrimination paradigm in two patients suffering from brain lesions with differing locations and origin. Both patients, one with extensive medial temporal lobe lesions (VG) and one with a small lesion of the anterior fornix (HJK), were impaired in complex discriminations while showing otherwise mostly intact cognitive functions. The current data confirmed previous results while also extending the perceptual-mnemonic theory of the MTL to the main output structure of the hippocampus, the fornix. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Sofer, Imri; Crouzet, Sébastien M.; Serre, Thomas
2015-01-01
Observers can rapidly perform a variety of visual tasks such as categorizing a scene as open, as outdoor, or as a beach. Although we know that different tasks are typically associated with systematic differences in behavioral responses, to date, little is known about the underlying mechanisms. Here, we implemented a single integrated paradigm that links perceptual processes with categorization processes. Using a large image database of natural scenes, we trained machine-learning classifiers to derive quantitative measures of task-specific perceptual discriminability based on the distance between individual images and different categorization boundaries. We showed that the resulting discriminability measure accurately predicts variations in behavioral responses across categorization tasks and stimulus sets. We further used the model to design an experiment, which challenged previous interpretations of the so-called “superordinate advantage.” Overall, our study suggests that observed differences in behavioral responses across rapid categorization tasks reflect natural variations in perceptual discriminability. PMID:26335683
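An illustrative reconstruction of the distance-to-boundary idea (synthetic features and a generic linear classifier; not the authors' image database or model): the signed distance of each item to the categorization boundary serves as its task-specific discriminability score.

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(1)
    n = 200
    features = rng.normal(size=(n, 50))            # stand-in for scene descriptors
    labels = (features[:, :5].sum(axis=1) > 0).astype(int)  # e.g., beach vs. non-beach

    clf = LinearSVC(C=1.0, max_iter=5000).fit(features, labels)

    # Signed distance of each image to the categorization boundary:
    # larger |distance| = more "typical" of its category = easier to categorize.
    discriminability = clf.decision_function(features)
    print("mean |distance| per class:",
          [np.abs(discriminability[labels == c]).mean() for c in (0, 1)])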
Explicit attention interferes with selective emotion processing in human extrastriate cortex.
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2007-02-22
Brain imaging and event-related potential studies provide strong evidence that emotional stimuli guide selective attention in visual processing. One index of this emotional capture of attention is the increased Early Posterior Negativity (EPN) for pleasant and unpleasant compared to neutral images (approximately 150-300 ms poststimulus). The present study explored whether this early emotion discrimination reflects an automatic phenomenon or is subject to interference by competing processing demands. Thus, emotional processing was assessed while participants performed a concurrent feature-based attention task varying in processing demands. Participants successfully performed the primary visual attention task as revealed by behavioral performance and selected event-related potential components (Selection Negativity and P3b). Replicating previous results, emotional modulation of the EPN was observed in a task condition with low processing demands. In contrast, pleasant and unpleasant pictures failed to elicit increased EPN amplitudes compared to neutral images in more difficult explicit attention task conditions. Further analyses determined that even the processing of pleasant and unpleasant pictures high in emotional arousal is subject to interference in experimental conditions with high task demand. Taken together, performing demanding feature-based counting tasks interfered with differential emotion processing indexed by the EPN. The present findings demonstrate that taxing processing resources with a competing primary visual attention task markedly attenuated the early discrimination of emotional from neutral picture contents. Thus, these results provide further empirical support for an interference account of the emotion-attention interaction under conditions of competition. Previous studies revealed interference with selective emotion processing when attentional resources were directed to the locations of explicitly task-relevant stimuli. The present data suggest that interference of emotion processing by competing task demands is a more general phenomenon extending to the domain of feature-based attention. Furthermore, the results are inconsistent with the notion of effortlessness, i.e., early emotion discrimination despite concurrent task demands. These findings suggest that the presumed automatic nature of emotion processing should be assessed at the level of specific component processes rather than treated as an all-or-none phenomenon.
Visual body recognition in a prosopagnosic patient.
Moro, V; Pernigo, S; Avesani, R; Bulgarelli, C; Urgesi, C; Candidi, M; Aglioti, S M
2012-01-01
Conspicuous deficits in face recognition characterize prosopagnosia. Information on whether agnosic deficits may extend to non-facial body parts is lacking. Here we report the neuropsychological description of FM, a patient affected by a complete deficit in face recognition in the presence of mild clinical signs of visual object agnosia. His deficit involves both overt and covert recognition of faces (i.e. recognition of familiar faces, but also categorization of faces for gender or age) as well as the visual mental imagery of faces. By means of a series of matching-to-sample tasks we investigated: (i) a possible association between prosopagnosia and disorders in visual body perception; (ii) the effect of the emotional content of stimuli on the visual discrimination of faces, bodies and objects; (iii) the existence of a dissociation between identity recognition and the emotional discrimination of faces and bodies. Our results document, for the first time, the co-occurrence of body agnosia, i.e. the visual inability to discriminate body forms and body actions, and prosopagnosia. Moreover, the results show better performance in the discrimination of emotional face and body expressions with respect to body identity and neutral actions. Since FM's lesions involve bilateral fusiform areas, it is unlikely that the amygdala-temporal projections explain the relative sparing of emotion discrimination performance. Indeed, the emotional content of the stimuli did not improve the discrimination of their identity. The results hint at the existence of two segregated brain networks involved in identity and emotional discrimination that are at least partially shared by face and body processing. Copyright © 2011 Elsevier Ltd. All rights reserved.
Auditory Confrontation Naming in Alzheimer’s Disease
Brandt, Jason; Bakker, Arnold; Maroof, David Aaron
2010-01-01
Naming is a fundamental aspect of language and is virtually always assessed with visual confrontation tests. Tests of the ability to name objects by their characteristic sounds would be particularly useful in the assessment of visually impaired patients, and may be particularly sensitive in Alzheimer’s disease (AD). We developed an Auditory Naming Task, requiring the identification of the source of environmental sounds (i.e., animal calls, musical instruments, vehicles) and multiple-choice recognition of those not identified. In two separate studies, mild-to-moderate AD patients performed more poorly than cognitively normal elderly on the Auditory Naming Task. This task was also more difficult than two versions of a comparable Visual Naming Task, and correlated more highly with Mini-Mental State Exam score. Internal consistency reliability was acceptable, although ROC analysis revealed auditory naming to be slightly less successful than visual confrontation naming in discriminating AD patients from normal subjects. Nonetheless, our Auditory Naming Test may prove useful in research and clinical practice, especially with visually-impaired patients. PMID:20981630
Measuring the effect of multiple eye fixations on memory for visual attributes.
Palmer, J; Ames, C T
1992-09-01
Because of limited peripheral vision, many visual tasks depend on multiple eye fixations. Good performance in such tasks demonstrates that some memory must survive from one fixation to the next. One factor that must influence performance is the degree to which multiple eye fixations interfere with the critical memories. In the present study, the amount of interference was measured by comparing visual discriminations based on multiple fixations to visual discriminations based on a single fixation. The procedure resembled partial report, but used a discrimination measure. In the prototype study, two lines were presented, followed by a single line and a cue. The cue pointed toward one of the positions of the first two lines. Observers were required to judge if the single line in the second display was longer or shorter than the cued line of the first display. These judgments were used to estimate a length threshold. The critical manipulation was to instruct observers either to maintain fixation between the lines of the first display or to fixate each line in sequence. The results showed an advantage for multiple fixations despite the intervening eye movements. In fact, thresholds for the multiple-fixation condition were nearly as good as those in a control condition where the lines were foveally viewed without eye movements. Thus, eye movements had little or no interfering effect in this task. Additional studies generalized the procedure and the stimuli. In conclusion, information about a variety of size and shape attributes was remembered with essentially no interference across eye fixations.
Color-dependent learning in restrained Africanized honey bees.
Jernigan, C M; Roubik, D W; Wcislo, W T; Riveros, A J
2014-02-01
Associative color learning has been demonstrated to be very poor in restrained European honey bees unless the antennae are amputated. Consequently, our understanding of proximate mechanisms in visual information processing is handicapped. Here we test the learning performance of Africanized honey bees under restrained conditions with visual and olfactory stimulation using the proboscis extension response (PER) protocol. Restrained individuals were trained to learn an association between a color stimulus and a sugar-water reward. We evaluated performance for 'absolute' learning (learned association between a stimulus and a reward) and 'discriminant' learning (discrimination between two stimuli). Restrained Africanized honey bees (AHBs) readily learned the color association for both blue and green LED stimuli in absolute and discrimination learning tasks within seven presentations, but not with violet as the rewarded color. Additionally, 24-h memory improved considerably during the discrimination task, compared with absolute association (15-55%). We found that antennal amputation was unnecessary and in fact reduced performance in AHBs. Thus color learning can now be studied using the PER protocol with intact AHBs. This finding opens the way towards investigating visual and multimodal learning with application of neural techniques commonly used in restrained honey bees.
The nootropic properties of ginseng saponin Rb1 are linked to effects on anxiety.
Churchill, James D; Gerson, Jennifer L; Hinton, Kendra A; Mifek, Jennifer L; Walter, Michael J; Winslow, Cynthia L; Deyo, Richard A
2002-01-01
Previous studies have shown that crude ginseng extracts enhance performance on shock-motivated tasks. Whether such performance enhancements are due to memory-enhancing (nootropic) properties of ginseng, or to other non-specific effects such as an influence on anxiety has not been determined. In the present study, we evaluated both the nootropic and anxiolytic effects of the ginseng saponin Rb1. In the first experiment, 80 five-day-old male chicks received intraperitoneal injections of 0, 0.25, 2.5 or 5.0 mg/kg Rb1. Performance on a visual discrimination task was evaluated 15 minutes, 24 and 72 hours later. Acquisition of a visual discrimination task was unaffected by drug treatment, but the number of errors was significantly reduced in the 0.25 mg/kg group during retention trials completed 24 and 72 hours after injection. Animals receiving higher dosages showed trends towards enhancement initially, but demonstrated impaired performance when tested 72 hours later. Rb1 had no effect on response rates or body weight. In the second experiment, 64 five-day-old male chicks received similar injections of Rb1 (0, 0.25, 2.5 or 5.0 mg/kg) and separation distress was evaluated 15 minutes, 24 and 72 hours later. Rb1 produced a change in separation distress that depended on the dose and environmental condition under which distress was recorded. These data suggest that Rb1 can improve memory for a visual discrimination task and that the nootropic effect may be related to changes in anxiety.
Kanaya, Shoko; Fujisaki, Waka; Nishida, Shin'ya; Furukawa, Shigeto; Yokosawa, Kazuhiko
2015-02-01
Temporal phase discrimination is a useful psychophysical task to evaluate how sensory signals, synchronously detected in parallel, are perceptually bound by human observers. In this task two stimulus sequences synchronously alternate between two states (say, A-B-A-B and X-Y-X-Y) in either of two temporal phases (ie A and B are respectively paired with X and Y, or vice versa). The critical alternation frequency beyond which participants cannot discriminate the temporal phase is measured as an index characterizing the temporal property of the underlying binding process. This task has been used to reveal the mechanisms underlying visual and cross-modal bindings. To directly compare these binding mechanisms with those in another modality, this study used the temporal phase discrimination task to reveal the processes underlying auditory bindings. The two sequences were alternations between two pitches. We manipulated the distance between the two sequences by changing intersequence frequency separation, or presentation ears (diotic vs dichotic). Results showed that the alternation frequency limit ranged from 7 to 30 Hz, becoming higher as the intersequence distance decreased, as is the case with vision. However, unlike vision, auditory phase discrimination limits were higher and more variable across participants. © 2015 SAGE Publications.
Aphasic Patients Exhibit a Reversal of Hemispheric Asymmetries in Categorical Color Discrimination
ERIC Educational Resources Information Center
Paluy, Yulia; Gilbert, Aubrey L.; Baldo, Juliana V.; Dronkers, Nina F.; Ivry, Richard B.
2011-01-01
Patients with left hemisphere (LH) or right hemisphere (RH) brain injury due to stroke were tested on a speeded, color discrimination task in which two factors were manipulated: (1) the categorical relationship between the target and the distracters and (2) the visual field in which the target was presented. Similar to controls, the RH patients…
Visual discrimination predicts naming and semantic association accuracy in Alzheimer disease.
Harnish, Stacy M; Neils-Strunjas, Jean; Eliassen, James; Reilly, Jamie; Meinzer, Marcus; Clark, John Greer; Joseph, Jane
2010-12-01
Language impairment is a common symptom of Alzheimer disease (AD), and is thought to be related to semantic processing. This study examines the contribution of another process, namely visual perception, on measures of confrontation naming and semantic association abilities in persons with probable AD. Twenty individuals with probable mild-moderate Alzheimer disease and 20 age-matched controls completed a battery of neuropsychologic measures assessing visual perception, naming, and semantic association ability. Visual discrimination tasks that varied in the degree to which they likely accessed stored structural representations were used to gauge whether structural processing deficits could account for deficits in naming and in semantic association in AD. Visual discrimination abilities of nameable objects in AD strongly predicted performance on both picture naming and semantic association ability, but lacked the same predictive value for controls. Although impaired, performance on visual discrimination tests of abstract shapes and novel faces showed no significant relationship with picture naming and semantic association. These results provide additional evidence to support that structural processing deficits exist in AD, and may contribute to object recognition and naming deficits. Our findings suggest that there is a common deficit in discrimination of pictures using nameable objects, picture naming, and semantic association of pictures in AD. Disturbances in structural processing of pictured items may be associated with lexical-semantic impairment in AD, owing to degraded internal storage of structural knowledge.
Norman, J Farley; Phillips, Flip; Holmin, Jessica S; Norman, Hideko F; Beers, Amanda M; Boswell, Alexandria M; Cheeseman, Jacob R; Stethen, Angela G; Ronning, Cecilia
2012-10-01
A set of three experiments evaluated 96 participants' ability to visually and haptically discriminate solid object shape. In the past, some researchers have found haptic shape discrimination to be substantially inferior to visual shape discrimination, while other researchers have found haptics and vision to be essentially equivalent. A primary goal of the present study was to understand these discrepant past findings and to determine the true capabilities of the haptic system. All experiments used the same task (same vs. different shape discrimination) and stimulus objects (James Gibson's "feelies" and a set of naturally shaped objects--bell peppers). However, the methodology varied across experiments. Experiment 1 used random 3-dimensional (3-D) orientations of the stimulus objects, and the conditions were full-cue (active manipulation of objects and rotation of the visual objects in depth). Experiment 2 restricted the 3-D orientations of the stimulus objects and limited the haptic and visual information available to the participants. Experiment 3 compared restricted and full-cue conditions using random 3-D orientations. We replicated both previous findings in the current study. When we restricted visual and haptic information (and placed the stimulus objects in the same orientation on every trial), the participants' visual performance was superior to that obtained for haptics (replicating the earlier findings of Davidson et al. in Percept Psychophys 15(3):539-543, 1974). When the circumstances resembled those of ordinary life (e.g., participants able to actively manipulate objects and see them from a variety of perspectives), we found no significant difference between visual and haptic solid shape discrimination.
Investigation of Neural Strategies of Visual Search
NASA Technical Reports Server (NTRS)
Krauzlis, Richard J.
2003-01-01
The goal of this project was to measure how neurons in the superior colliculus (SC) change their activity during a visual search task. Specifically, we proposed to measure how the activity of these neurons was altered by the discriminability of visual targets and to test how these changes might predict the changes in the subject's performance. The primary rationale for this study was that understanding how the information encoded by these neurons constrains overall search performance would foster the development of better models of human performance. Work performed during the period supported by this grant has achieved these aims. First, we have recorded from neurons in the superior colliculus (SC) during a visual search task in which the difficulty of the task and the performance of the subject were systematically varied. The results from these single-neuron physiology experiments show that prior to eye movement onset, the difference in activity across the ensemble of neurons reaches a fixed threshold value, reflecting the operation of a winner-take-all mechanism. Second, we have developed a model of eye movement decisions based on the principle of winner-take-all. The model incorporates the idea that the overt saccade choice reflects only one of the multiple saccades prepared during visual discrimination, consistent with our physiological data. The value of the model is that, unlike previous models, it is able to account for both the latency and the percent correct of saccade choices.
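A minimal sketch of the winner-take-all scheme summarized above (hypothetical parameters, not the fitted model): each candidate target drives a noisy accumulator, and a saccade is triggered when the leading accumulator exceeds the runner-up by a fixed threshold.

    import numpy as np

    def winner_take_all(drives, threshold=0.5, dt=0.001, noise=0.05, max_t=3.0):
        """Race among accumulators; respond when the leader exceeds the
        runner-up by `threshold`. Returns (choice index, latency in s)."""
        rng = np.random.default_rng()
        acc = np.zeros(len(drives))
        t = 0.0
        while t < max_t:
            acc += np.asarray(drives) * dt \
                   + noise * np.sqrt(dt) * rng.standard_normal(len(drives))
            ordered = np.sort(acc)
            if ordered[-1] - ordered[-2] >= threshold:
                return int(np.argmax(acc)), t
            t += dt
        return int(np.argmax(acc)), t

    # Hypothetical easy vs. hard discrimination: target drive vs. distracter drives
    print(winner_take_all([3.0, 1.0, 1.0, 1.0]))   # easy: fast, usually index 0
    print(winner_take_all([1.6, 1.4, 1.4, 1.4]))   # hard: slower, occasionally wrong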
Duncum, A J F; Atkins, K J; Beilharz, F L; Mundy, M E
2016-01-01
Individuals with body dysmorphic disorder (BDD) and clinically concerning body-image concern (BIC) appear to possess abnormalities in the way they perceive visual information in the form of a bias towards local visual processing. As inversion interrupts normal global processing, forcing individuals to process locally, an upright-inverted stimulus discrimination task was used to investigate this phenomenon. We examined whether individuals with nonclinical, yet high levels of BIC would show signs of this bias, in the form of reduced inversion effects (i.e., increased local processing). Furthermore, we assessed whether this bias appeared for general visual stimuli or specifically for appearance-related stimuli, such as faces and bodies. Participants with high-BIC (n = 25) and low-BIC (n = 30) performed a stimulus discrimination task with upright and inverted faces, scenes, objects, and bodies. Unexpectedly, the high-BIC group showed an increased inversion effect compared to the low-BIC group, indicating perceptual abnormalities may not be present as local processing biases, as originally thought. There was no significant difference in performance across stimulus types, signifying that any visual processing abnormalities may be general rather than appearance-based. This has important implications for whether visual processing abnormalities are predisposing factors for BDD or develop throughout the disorder.
Visual perceptual load induces inattentional deafness.
Macdonald, James S P; Lavie, Nilli
2011-08-01
In this article, we establish a new phenomenon of "inattentional deafness" and highlight the level of load on visual attention as a critical determinant of this phenomenon. In three experiments, we modified an inattentional blindness paradigm to assess inattentional deafness. Participants made either a low- or high-load visual discrimination concerning a cross shape (respectively, a discrimination of line color or of line length with a subtle length difference). A brief pure tone was presented simultaneously with the visual task display on a final trial. Failures to notice the presence of this tone (i.e., inattentional deafness) reached a rate of 79% in the high-visual-load condition, significantly more than in the low-load condition. These findings establish the phenomenon of inattentional deafness under visual load, thereby extending the load theory of attention (e.g., Lavie, Journal of Experimental Psychology. Human Perception and Performance, 25, 596-616, 1995) to address the cross-modal effects of visual perceptual load.
Aging and the visual, haptic, and cross-modal perception of natural object shape.
Norman, J Farley; Crabtree, Charles E; Norman, Hideko F; Moncrief, Brandon K; Herrmann, Molly; Kapley, Noah
2006-01-01
One hundred observers participated in two experiments designed to investigate aging and the perception of natural object shape. In the experiments, younger and older observers performed either a same/different shape discrimination task (experiment 1) or a cross-modal matching task (experiment 2). Quantitative effects of age were found in both experiments. The effect of age in experiment 1 was limited to cross-modal shape discrimination: there was no effect of age upon unimodal (ie within a single perceptual modality) shape discrimination. The effect of age in experiment 2 was eliminated when the older observers were either given an unlimited amount of time to perform the task or when the number of response alternatives was decreased. Overall, the results of the experiments reveal that older observers can effectively perceive 3-D shape from both vision and haptics.
Invariant recognition drives neural representations of action sequences
Poggio, Tomaso
2017-01-01
Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864
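A toy spatiotemporal (3-D convolution) classifier of the general family discussed above, written as a PyTorch-style Python sketch; the layer sizes, the five-class output, and the input clip dimensions are placeholders rather than the architecture evaluated in the paper.

    import torch
    import torch.nn as nn

    class TinyActionNet(nn.Module):
        """Toy spatiotemporal CNN: Conv3d layers pool over (time, height, width)."""
        def __init__(self, n_classes=5):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, clips):               # clips: (batch, 3, frames, H, W)
            x = self.features(clips).flatten(1)
            return self.classifier(x)           # action-class logits

    # Hypothetical batch: 2 clips, 3 color channels, 16 frames of 64x64 pixels
    logits = TinyActionNet()(torch.randn(2, 3, 16, 64, 64))
    print(logits.shape)                         # torch.Size([2, 5])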
Smell or vision? The use of different sensory modalities in predator discrimination.
Fischer, Stefan; Oberhummer, Evelyne; Cunha-Saraiva, Filipa; Gerber, Nina; Taborsky, Barbara
2017-01-01
Theory predicts that animals should adjust their escape responses to the perceived predation risk. The information animals obtain about potential predation risk may differ qualitatively depending on the sensory modality by which a cue is perceived. For instance, olfactory cues may reveal better information about the presence or absence of threats, whereas visual information can reliably transmit the position and potential attack distance of a predator. While this suggests a differential use of information perceived through the two sensory channels, the relative importance of visual vs. olfactory cues when distinguishing between different predation threats is still poorly understood. Therefore, we exposed individuals of the cooperatively breeding cichlid Neolamprologus pulcher to a standardized threat stimulus combined with either predator or non-predator cues presented either visually or chemically. We predicted that flight responses towards a threat stimulus are more pronounced if cues of dangerous rather than harmless heterospecifics are presented and that N. pulcher, being an aquatic species, relies more on olfaction when discriminating between dangerous and harmless heterospecifics. N. pulcher responded faster to the threat stimulus, reached a refuge faster, and were more likely to enter a refuge when predator cues were perceived. Unexpectedly, the sensory modality used to perceive the cues did not affect the escape response or the duration of the recovery phase. This suggests that N. pulcher are able to discriminate heterospecific cues with similar acuity when using vision or olfaction. We discuss how this ability may be advantageous in aquatic environments where visibility conditions vary strongly over time. The ability to rapidly discriminate between dangerous predators and harmless heterospecifics is crucial for the survival of prey animals. In seasonally fluctuating environments, sensory conditions may change over the year and may make the use of multiple sensory modalities for heterospecific discrimination highly beneficial. Here we compared the efficacy of visual and olfactory senses in the discrimination ability of the cooperatively breeding cichlid Neolamprologus pulcher. We presented individual fish with visual or olfactory cues of predators or harmless heterospecifics and recorded their flight response. When exposed to predator cues, individuals responded faster, reached a refuge faster, and were more likely to enter the refuge. Unexpectedly, the olfactory and visual senses seemed to be equally efficient in this discrimination task, suggesting that the seasonal variation of water conditions experienced by N. pulcher may necessitate the use of multiple sensory channels for the same task.
Development of a computerized visual search test.
Reid, Denise; Babani, Harsha; Jon, Eugenia
2009-09-01
Visual attention and visual search are the features of visual perception, essential for attending and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information including the format of the test will be described. The test was designed to provide an alternative to existing cancellation tests. Data from two pilot studies will be reported that examined some aspects of the test's validity. To date, our assessment of the test shows that it discriminates between healthy and head-injured persons. More research and development work is required to examine task performance changes in relation to task complexity. It is suggested that the conceptual design for the test is worthy of further investigation.
Enhanced Pure-Tone Pitch Discrimination among Persons with Autism but not Asperger Syndrome
ERIC Educational Resources Information Center
Bonnel, Anna; McAdams, Stephen; Smith, Bennett; Berthiaume, Claude; Bertone, Armando; Ciocca, Valter; Burack, Jacob A.; Mottron, Laurent
2010-01-01
Persons with Autism spectrum disorders (ASD) display atypical perceptual processing in visual and auditory tasks. In vision, Bertone, Mottron, Jelenic, and Faubert (2005) found that enhanced and diminished visual processing is linked to the level of neural complexity required to process stimuli, as proposed in the neural complexity hypothesis.…
Investigating the role of the superior colliculus in active vision with the visual search paradigm.
Shen, Kelly; Valero, Jerome; Day, Gregory S; Paré, Martin
2011-06-01
We review here both the evidence that the functional visuomotor organization of the optic tectum is conserved in the primate superior colliculus (SC) and the evidence for the linking proposition that SC discriminating activity instantiates saccade target selection. We also present new data in response to questions that arose from recent SC visual search studies. First, we observed that SC discriminating activity predicts saccade initiation when monkeys perform an unconstrained search for a target defined by either a single visual feature or a conjunction of two features. Quantitative differences between the results in these two search tasks suggest, however, that SC discriminating activity does not only reflect saccade programming. This finding concurs with visual search studies conducted in posterior parietal cortex and the idea that, during natural active vision, visual attention is shifted concomitantly with saccade programming. Second, the analysis of a large neuronal sample recorded during feature search revealed that visual neurons in the superficial layers do possess discriminating activity. In addition, the hypotheses that there are distinct types of SC neurons in the deeper layers and that they are differently involved in saccade target selection were not substantiated. Third, we found that the discriminating quality of single-neuron activity substantially surpasses the ability of the monkeys to discriminate the target from distracters, raising the possibility that saccade target selection is a noisy process. We discuss these new findings in light of the visual search literature and the view that the SC is a visual salience map for orienting eye movements. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Sex Discrimination and Cerebral Bias: Implications for the Reading Curriculum.
ERIC Educational Resources Information Center
Keenan, Donna; Smith, Michael
1983-01-01
Reviews research supporting the concept that girls usually outperform boys on tasks requiring verbal skills and that boys outperform girls on tasks using visual and spatial skills. Offers an explanation for this situation based on left brain/right brain research. Concludes that the curriculum in American schools is clearly left-brain biased. (FL)
Blindness enhances tactile acuity and haptic 3-D shape discrimination.
Norman, J Farley; Bartholomew, Ashley N
2011-10-01
This study compared the sensory and perceptual abilities of the blind and sighted. The 32 participants were required to perform two tasks: tactile grating orientation discrimination (to determine tactile acuity) and haptic three-dimensional (3-D) shape discrimination. The results indicated that the blind outperformed their sighted counterparts (individually matched for both age and sex) on both tactile tasks. The improvements in tactile acuity that accompanied blindness occurred for all blind groups (congenital, early, and late). However, the improvements in haptic 3-D shape discrimination only occurred for the early-onset and late-onset blindness groups; the performance of the congenitally blind was no better than that of the sighted controls. The results of the present study demonstrate that blindness does lead to an enhancement of tactile abilities, but they also suggest that early visual experience may play a role in facilitating haptic 3-D shape discrimination.
The effect of auditory memory load on intensity resolution in individuals with Parkinson's disease
NASA Astrophysics Data System (ADS)
Richardson, Kelly C.
Purpose: The purpose of the current study was to investigate the effect of auditory memory load on intensity resolution in individuals with Parkinson's disease (PD) as compared to two groups of listeners without PD. Methods: Nineteen individuals with Parkinson's disease, ten healthy age- and hearing-matched adults, and ten healthy young adults were studied. All listeners participated in two intensity discrimination tasks differing in auditory memory load: a lower-memory-load (4IAX) task and a higher-memory-load (ABX) task. Intensity discrimination performance was assessed using a bias-free measure of signal detectability known as d' (d-prime). Listeners further participated in a continuous loudness scaling task in which they were instructed to rate the loudness of each signal intensity using a computerized 150-mm visual analogue scale. Results: Group discrimination functions indicated significantly lower intensity discrimination sensitivity (d') across tasks for the individuals with PD, as compared to the older and younger controls. No significant effect of aging on intensity discrimination was observed for either task. All three listener groups demonstrated significantly lower intensity discrimination sensitivity for the higher-memory-load ABX task than for the lower-memory-load 4IAX task. Furthermore, a significant effect of aging was identified for the loudness scaling condition. The younger controls rated most stimuli along the continuum as significantly louder than did the older controls and the individuals with PD. Conclusions: The persons with PD showed evidence of impaired auditory perception of intensity information, as compared to the older and younger controls. The significant effect of aging on loudness perception may indicate peripheral and/or central auditory involvement.
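For reference, the simplest (yes/no) form of the bias-free index d' mentioned above is the difference between the z-transformed hit and false-alarm rates; forced-choice designs such as 4IAX and ABX require task-specific corrections not shown here. A minimal Python version with hypothetical counts:

    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Yes/no d' = z(hit rate) - z(false-alarm rate), with a small
        correction to keep rates away from 0 and 1."""
        def rate(k, n):
            return (k + 0.5) / (n + 1.0)        # log-linear correction
        hit_rate = rate(hits, hits + misses)
        fa_rate = rate(false_alarms, false_alarms + correct_rejections)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Hypothetical trial counts for one listener and one task
    print(round(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38), 2))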
Do Visually Impaired People Develop Superior Smell Ability?
Majchrzak, Dorota; Eberhard, Julia; Kalaus, Barbara; Wagner, Karl-Heinz
2017-10-01
It is well known that visually impaired people perform better in orientation by sound than sighted individuals, but it is not clear whether this enhanced awareness also extends to other senses. Therefore, the aim of this study was to observe whether visually impaired subjects develop superior abilities in olfactory perception to compensate for their lack of vision. We investigated the odor perception of visually impaired individuals aged 7 to 89 (n = 99; 52 women, 47 men) and compared them with subjects of a control group aged 8 to 82 years (n = 100; 45 women, 55 men) without any visual impairment. The participants were evaluated by Sniffin' Sticks odor identification and discrimination tests. Identification ability was assessed for 16 common odors presented in felt-tip pens. In the odor discrimination task, subjects had to determine which of three pens in 16 triplets had a different odor. The median number of correctly identified odorant pens in both groups was the same, 13 of the offered 16. In the discrimination test, there was also no significant difference observed. Gender did not influence results. Age-related changes were observed in both groups with olfactory perception decreasing after the age of 51. We could not confirm that visually impaired people were better in smell identification and discrimination ability than sighted individuals.
Effect of eye position during human visual-vestibular integration of heading perception.
Crane, Benjamin T
2017-09-01
Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems. NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. The experiments address this using a multisensory integration task with eccentric gaze positions making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability. Copyright © 2017 the American Physiological Society.
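The Bayesian-ideal prediction against which the combined visual-inertial estimates were compared weights each cue by its reliability (inverse variance). A minimal sketch of that computation is shown below; the heading values and noise levels are hypothetical, not the paper's data.

```python
# Reliability-weighted (inverse-variance) combination of two heading cues.
import numpy as np

def combine_headings(mu_visual, sigma_visual, mu_inertial, sigma_inertial):
    w_v = 1.0 / sigma_visual ** 2
    w_i = 1.0 / sigma_inertial ** 2
    mu = (w_v * mu_visual + w_i * mu_inertial) / (w_v + w_i)
    sigma = np.sqrt(1.0 / (w_v + w_i))   # the combined estimate is less variable than either cue
    return mu, sigma

# Hypothetical example: a noisy low-coherence visual cue pulls the combined
# heading only slightly away from the inertial heading.
print(combine_headings(mu_visual=10.0, sigma_visual=8.0, mu_inertial=0.0, sigma_inertial=3.0))
```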
Impact of stimulus uncanniness on speeded response
Takahashi, Kohske; Fukuda, Haruaki; Samejima, Kazuyuki; Watanabe, Katsumi; Ueda, Kazuhiro
2015-01-01
In the uncanny valley phenomenon, the causes of the feeling of uncanniness, as well as the impact of uncanniness on behavioral performance, remain open questions. The present study investigated the behavioral effects of stimulus uncanniness, particularly with respect to speeded response. Pictures of fish were used as visual stimuli. Participants engaged in direction discrimination, spatial cueing, and dot-probe tasks. The results showed that pictures rated as strongly uncanny delayed speeded responses in the discrimination of the direction of the fish. In the cueing experiment, where a fish served as a task-irrelevant and unpredictable cue for a peripheral target, we again observed that detection of a target was slowed when the cue was an uncanny fish. Conversely, the dot-probe task suggested that uncanny fish, unlike threatening stimuli, did not capture visual spatial attention. These results suggest that stimulus uncanniness delays speeded responses and, importantly, that this modulation is not mediated by feelings of threat. PMID:26052297
Fournier, Lisa R; Herbert, Rhonda J; Farris, Carrie
2004-10-01
This study examined how response mapping of features within single- and multiple-feature targets affects decision-based processing and attentional capacity demands. Observers judged the presence or absence of 1 or 2 target features within an object either presented alone or with distractors. Judging the presence of 2 features relative to the less discriminable of these features alone was faster (conjunction benefits) when the task-relevant features differed in discriminability and were consistently mapped to responses. Conjunction benefits were attributed to asynchronous decision priming across attended, task-relevant dimensions. A failure to find conjunction benefits for disjunctive conjunctions was attributed to increased memory demands and variable feature-response mapping for 2- versus single-feature targets. Further, attentional demands were similar between single- and 2-feature targets when response mapping, memory demands, and discriminability of the task-relevant features were equated between targets. Implications of the findings for recent attention models are discussed. (c) 2004 APA, all rights reserved
Meng, Xiangzhi; Lin, Ou; Wang, Fang; Jiang, Yuzheng; Song, Yan
2014-01-01
Background: High-order cognitive processing and learning, such as reading, interact with lower-level sensory processing and learning. Previous studies have reported that visual perceptual training enlarges visual span and, consequently, improves reading speed in young and old people with amblyopia. Recently, a visual perceptual training study in Chinese-speaking children with dyslexia found that the visual texture discrimination thresholds of these children in visual perceptual training significantly correlated with their performance in Chinese character recognition, suggesting that deficits in visual perceptual processing/learning might partly underpin the difficulty in reading Chinese. Methodology/Principal Findings: To further clarify whether visual perceptual training improves the measures of reading performance, eighteen children with dyslexia and eighteen typically developed readers that were age- and IQ-matched completed a series of reading measures before and after visual texture discrimination task (TDT) training. Prior to the TDT training, each group of children was split into two equivalent training and non-training groups in terms of all reading measures, IQ, and TDT. The results revealed that the discrimination threshold SOAs of TDT were significantly higher for the children with dyslexia than for the control children before training. Interestingly, training significantly decreased the discrimination threshold SOAs of TDT for both the typically developed readers and the children with dyslexia. More importantly, the training group with dyslexia exhibited significant enhancement in reading fluency, while the non-training group with dyslexia did not show this improvement. Additional follow-up tests showed that the improvement in reading fluency is a long-lasting effect and could be maintained for up to two months in the training group with dyslexia. Conclusion/Significance: These results suggest that basic visual perceptual processing/learning and reading ability in Chinese might at least partially rely on overlapping mechanisms. PMID:25247602
Clery, Stephane; Cumming, Bruce G.
2017-01-01
Fine judgments of stereoscopic depth rely mainly on relative judgments of depth (relative binocular disparity) between objects, rather than judgments of the distance to where the eyes are fixating (absolute disparity). In macaques, visual area V2 is the earliest site in the visual processing hierarchy for which neurons selective for relative disparity have been observed (Thomas et al., 2002). Here, we found that, in macaques trained to perform a fine disparity discrimination task, disparity-selective neurons in V2 were highly selective for the task, and their activity correlated with the animals' perceptual decisions (unexplained by the stimulus). This may partially explain similar correlations reported in downstream areas. Although compatible with a perceptual role of these neurons for the task, the interpretation of such decision-related activity is complicated by the effects of interneuronal “noise” correlations between sensory neurons. Recent work has developed simple predictions to differentiate decoding schemes (Pitkow et al., 2015) without needing measures of noise correlations, and found that data from early sensory areas were compatible with optimal linear readout of populations with information-limiting correlations. In contrast, our data here deviated significantly from these predictions. We additionally tested this prediction for previously reported results of decision-related activity in V2 for a related task, coarse disparity discrimination (Nienborg and Cumming, 2006), thought to rely on absolute disparity. Although these data followed the predicted pattern, they violated the prediction quantitatively. This suggests that optimal linear decoding of sensory signals is not generally a good predictor of behavior in simple perceptual tasks. SIGNIFICANCE STATEMENT Activity in sensory neurons that correlates with an animal's decision is widely believed to provide insights into how the brain uses information from sensory neurons. Recent theoretical work developed simple predictions to differentiate decoding schemes, and found support for optimal linear readout of early sensory populations with information-limiting correlations. Here, we observed decision-related activity for neurons in visual area V2 of macaques performing fine disparity discrimination, as yet the earliest site for this task. These findings, and previously reported results from V2 in a different task, deviated from the predictions for optimal linear readout of a population with information-limiting correlations. Our results suggest that optimal linear decoding of early sensory information is not a general decoding strategy used by the brain. PMID:28100751
Distinct Effects of Trial-Driven and Task Set-Related Control in Primary Visual Cortex
Vaden, Ryan J.; Visscher, Kristina M.
2015-01-01
Task sets are task-specific configurations of cognitive processes that facilitate task-appropriate reactions to stimuli. While it is established that the trial-by-trial deployment of visual attention to expected stimuli influences neural responses in primary visual cortex (V1) in a retinotopically specific manner, it is not clear whether the mechanisms that help maintain a task set over many trials also operate with similar retinotopic specificity. Here, we address this question by using BOLD fMRI to characterize how portions of V1 that are specialized for different eccentricities respond during distinct components of an attention-demanding discrimination task: cue-driven preparation for a trial, trial-driven processing, task-initiation at the beginning of a block of trials, and task-maintenance throughout a block of trials. Tasks required either unimodal attention to an auditory or a visual stimulus or selective intermodal attention to the visual or auditory component of simultaneously presented visual and auditory stimuli. We found that while the retinotopic patterns of trial-driven and cue-driven activity depended on the attended stimulus, the retinotopic patterns of task-initiation and task-maintenance activity did not. Further, only the retinotopic patterns of trial-driven activity were found to depend on the presence of intermodal distraction. Participants who performed well on the intermodal selective attention tasks showed strong task-specific modulations of both trial-driven and task-maintenance activity. Importantly, task-related modulations of trial-driven and task-maintenance activity were in opposite directions. Together, these results confirm that there are (at least) two different processes for top-down control of V1: One, working trial-by-trial, differently modulates activity across different eccentricity sectors—portions of V1 corresponding to different visual eccentricities. The second process works across longer epochs of task performance, and does not differ among eccentricity sectors. These results are discussed in the context of previous literature examining top-down control of visual cortical areas. PMID:26163806
Pretraining Cortical Thickness Predicts Subsequent Perceptual Learning Rate in a Visual Search Task.
Frank, Sebastian M; Reavis, Eric A; Greenlee, Mark W; Tse, Peter U
2016-03-01
We report that preexisting individual differences in the cortical thickness of brain areas involved in a perceptual learning task predict the subsequent perceptual learning rate. Participants trained in a motion-discrimination task involving visual search for a "V"-shaped target motion trajectory among inverted "V"-shaped distractor trajectories. Motion-sensitive area MT+ (V5) was functionally identified as critical to the task: after 3 weeks of training, activity increased in MT+ during task performance, as measured by functional magnetic resonance imaging. We computed the cortical thickness of MT+ from anatomical magnetic resonance imaging volumes collected before training started, and found that it significantly predicted subsequent perceptual learning rates in the visual search task. Participants with thicker neocortex in MT+ before training learned faster than those with thinner neocortex in that area. A similar association between cortical thickness and training success was also found in posterior parietal cortex (PPC). © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Hager, Audrey M; Dringenberg, Hans C
2012-12-01
The rat visual system is structured such that the large (>90 %) majority of retinal ganglion axons reach the contralateral lateral geniculate nucleus (LGN) and visual cortex (V1). This anatomical design allows for the relatively selective activation of one cerebral hemisphere under monocular viewing conditions. Here, we describe the design of a harness and face mask allowing simple and noninvasive monocular occlusion in rats. The harness is constructed from synthetic fiber (shoelace-type material) and fits around the girth region and neck, allowing for easy adjustments to fit rats of various weights. The face mask consists of soft rubber material that is attached to the harness by Velcro strips. Eyeholes in the mask can be covered by additional Velcro patches to occlude either one or both eyes. Rats readily adapt to wearing the device, allowing behavioral testing under different types of viewing conditions. We show that rats successfully acquire a water-maze-based visual discrimination task under monocular viewing conditions. Following task acquisition, interocular transfer was assessed. Performance with the previously occluded, "untrained" eye was impaired, suggesting that training effects were partially confined to one cerebral hemisphere. The method described herein provides a simple and noninvasive means to restrict visual input for studies of visual processing and learning in various rodent species.
Utz, Kathrin S.; Hankeln, Thomas M. A.; Jung, Lena; Lämmer, Alexandra; Waschbisch, Anne; Lee, De-Hyung; Linker, Ralf A.; Schenk, Thomas
2013-01-01
Background: Despite the high frequency of cognitive impairment in multiple sclerosis, its assessment has not yet gained entrance into clinical routine, due to a lack of time-saving and suitable tests for patients with multiple sclerosis. Objective: The aim of the study was to compare the paradigm of visual search with neuropsychological standard tests, in order to identify the test that discriminates best between patients with multiple sclerosis and healthy individuals concerning cognitive functions, without being susceptible to practice effects. Methods: Patients with relapsing-remitting multiple sclerosis (n = 38) and age- and gender-matched healthy individuals (n = 40) were tested with common neuropsychological tests and a computer-based visual search task, whereby a target stimulus has to be detected amongst distracting stimuli on a touch screen. Twenty-eight of the healthy individuals were re-tested in order to determine potential practice effects. Results: Mean reaction time, reflecting visual attention, and movement time, indicating motor execution, in the visual search task discriminated best between healthy individuals and patients with multiple sclerosis, without practice effects. Conclusions: Visual search is a promising instrument for the assessment of cognitive functions, and potentially of cognitive changes, in patients with multiple sclerosis, thanks to its good discriminatory power and insusceptibility to practice effects. PMID:24282604
Gálosi, Rita; Szalay, Csaba; Aradi, Mihály; Perlaki, Gábor; Pál, József; Steier, Roy; Lénárd, László; Karádi, Zoltán
2017-04-01
Manganese-enhanced magnetic resonance imaging (MEMRI) offers unique advantages such as studying brain activation in freely moving rats, but its usefulness has not previously been evaluated during operant behavior training. Manganese, in the form of MnCl2, was infused intraperitoneally at a dose of 20 mg/kg. The administration was repeated, separated by 24 h, to reach doses of 40 mg/kg or 60 mg/kg, respectively. Hepatotoxicity of the MnCl2 was evaluated by determining serum aspartate aminotransferase, alanine aminotransferase, total bilirubin, albumin and protein levels. Neurological examination was also carried out. The animals were tested in a visual cue discriminated operant task. Imaging was performed using a 3 T clinical MR scanner. T1 values were determined before and after MnCl2 administrations. Manganese-enhanced images of each animal were subtracted from their baseline images to calculate the decrease in T1 value (ΔT1) voxel by voxel. The subtracted T1 maps of trained animals performing the visual cue discriminated operant task and those of naive rats were compared. The dose of 60 mg/kg MnCl2 showed a hepatotoxic effect, but even these animals did not exhibit neurological symptoms. The doses of 20 and 40 mg/kg MnCl2 increased the number of omissions but did not affect the accuracy of performing the visual cue discriminated operant task. Using the accumulated dose of 40 mg/kg, voxels with significantly enhanced ΔT1 values were detected in the following brain areas of the animals performing the visual cue discriminated operant task, compared to those in the controls: the visual, somatosensory, motor and premotor cortices, the insula, cingulate, ectorhinal, entorhinal, perirhinal and piriform cortices, hippocampus, amygdala with amygdalohippocampal areas, dorsal striatum, nucleus accumbens core, substantia nigra, and retrorubral field. In conclusion, MEMRI proved to be a reliable method for brain activity mapping in correlation with the operant behavior of freely moving rodents. Copyright © 2016 Elsevier Inc. All rights reserved.
Kelly, Debbie M; Cook, Robert G
2003-06-01
Three experiments examined the role of contextual information during line orientation and line position discriminations by pigeons (Columba livia) and humans (Homo sapiens). Experiment 1 tested pigeons' performance with these stimuli in a target localization task using texture displays. Experiments 2 and 3 tested pigeons and humans, respectively, with small and large variations of these stimuli in a same-different task. Humans showed a configural superiority effect when tested with displays constructed from large elements but not when tested with the smaller, more densely packed texture displays. The pigeons, in contrast, exhibited a configural inferiority effect when required to discriminate line orientation, regardless of stimulus size. These contrasting results suggest a species difference in the perception and use of features and contextual information in the discrimination of line information.
Sugden, Nicole A; Marquis, Alexandra R
2017-11-01
Infants show facility for discriminating between individual faces within hours of birth. Over the first year of life, infants' face discrimination shows continued improvement with familiar face types, such as own-race faces, but not with unfamiliar face types, like other-race faces. The goal of this meta-analytic review is to provide an effect size for infants' face discrimination ability overall, with own-race faces, and with other-race faces within the first year of life, how this differs with age, and how it is influenced by task methodology. Inclusion criteria were (a) infant participants aged 0 to 12 months, (b) completing a human own- or other-race face discrimination task, (c) with discrimination being determined by infant looking. Our analysis included 30 works (165 samples, 1,926 participants participated in 2,623 tasks). The effect size for infants' face discrimination was small, 6.53% greater than chance (i.e., equal looking to the novel and familiar). There was a significant difference in discrimination by race, overall (own-race, 8.18%; other-race, 3.18%) and between ages (own-race: 0- to 4.5-month-olds, 7.32%; 5- to 7.5-month-olds, 9.17%; and 8- to 12-month-olds, 7.68%; other-race: 0- to 4.5-month-olds, 6.12%; 5- to 7.5-month-olds, 3.70%; and 8- to 12-month-olds, 2.79%). Multilevel linear (mixed-effects) models were used to predict face discrimination; infants' capacity to discriminate faces is sensitive to face characteristics including race, gender, and emotion as well as the methods used, including task timing, coding method, and visual angle. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
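As an illustration of the multilevel (mixed-effects) modeling approach described, the sketch below fits a random-intercept model with statsmodels; the variable names and simulated data are placeholders, not the meta-analytic dataset.

```python
# Random-intercept mixed model: discrimination scores predicted by face race and
# infant age, with study samples as the grouping factor. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "sample_id": rng.integers(0, 15, n),
    "face_race": rng.choice(["own", "other"], n),
    "age_months": rng.uniform(0, 12, n),
})
df["novelty_pref"] = (5.0 + 4.0 * (df["face_race"] == "own")
                      - 0.2 * df["age_months"] + rng.normal(0, 2, n))

model = smf.mixedlm("novelty_pref ~ face_race + age_months", df, groups=df["sample_id"])
print(model.fit().summary())
```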
Right Hand Presence Modulates Shifts of Exogenous Visuospatial Attention in Near Perihand Space
ERIC Educational Resources Information Center
Lloyd, Donna M.; Azanon, Elena; Poliakoff, Ellen
2010-01-01
To investigate attentional shifting in perihand space, we measured performance on a covert visual orienting task under different hand positions. Participants discriminated visual shapes presented on a screen and responded using footpedals placed under their right foot. With the right hand positioned by the right side of the screen, mean cueing…
ERIC Educational Resources Information Center
Vercillo, Tiziana; Burr, David; Gori, Monica
2016-01-01
A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind…
Wang, Changming; Xiong, Shi; Hu, Xiaoping; Yao, Li; Zhang, Jiacai
2012-10-01
Images containing visual objects can be successfully categorized using single-trial electroencephalographic (EEG) data measured while subjects view the images. Previous studies have shown that task-related information contained in event-related potential (ERP) components can discriminate two or three categories of object images. In this study, we investigated whether four categories of objects (human faces, buildings, cats and cars) could be mutually discriminated using single-trial EEG data. Here, the EEG waveforms acquired while subjects were viewing the four categories of object images were segmented into several ERP components (P1, N1, P2a and P2b), and Fisher linear discriminant analysis (Fisher-LDA) was used to classify EEG features extracted from these components. First, we compared classification results using features from single ERP components, and found that the N1 component achieved the highest classification accuracies. Second, we discriminated the four categories of objects using combined features from multiple ERP components, and showed that combining ERP components improved four-category classification accuracy by exploiting the complementary discriminative information in the components. These findings confirm that four categories of object images can be discriminated with single-trial EEG and can guide the selection of effective EEG features for classifying visual objects.
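A minimal sketch of the classification approach described (Fisher linear discriminant analysis on ERP-component features) is given below, using synthetic features in place of the recorded EEG; the feature dimensions and injected class signal are assumptions for illustration.

```python
# Fisher LDA applied to single-trial feature vectors in a four-class problem
# (e.g., faces, buildings, cats, cars). Features here are simulated.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_features = 400, 32              # e.g., N1 amplitudes across 32 electrodes
y = rng.integers(0, 4, n_trials)            # four object categories
X = rng.normal(size=(n_trials, n_features))
X[:, :4] += np.eye(4)[y] * 1.5              # inject a weak class-dependent pattern

lda = LinearDiscriminantAnalysis()
print(cross_val_score(lda, X, y, cv=5).mean())   # chance accuracy would be 0.25
```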
Visual awareness suppression by pre-stimulus brain stimulation: a neural effect.
Jacobs, Christianne; Goebel, Rainer; Sack, Alexander T
2012-01-02
Transcranial magnetic stimulation (TMS) has established the functional relevance of early visual cortex (EVC) for visual awareness, with great temporal specificity, non-invasively in conscious human volunteers. Many studies have found a suppressive effect when TMS was applied over EVC 80-100 ms after the onset of the visual stimulus (post-stimulus TMS time window). Yet a few studies found task performance to suffer even when TMS was applied before visual stimulus presentation (pre-stimulus TMS time window). This pre-stimulus TMS effect, however, remains controversially debated, and its origin has mainly been ascribed to TMS-induced eye-blinking artifacts. Here, we applied chronometric TMS over EVC during the execution of a visual discrimination task, covering an exhaustive range of visual stimulus-locked TMS time windows ranging from 80 ms before to 300 ms after stimulus onset. Electrooculographic (EOG) recordings, sham TMS stimulation, and vertex TMS stimulation controlled for different types of non-neural TMS effects. Our findings clearly reveal TMS-induced masking effects for both pre- and post-stimulus time windows, and for both objective visual discrimination performance and subjective visibility. Importantly, all effects were still present after post hoc removal of eye-blink trials, suggesting a neural origin for the pre-stimulus TMS suppression effect on visual awareness. Based on our data, we speculate that TMS exerts its pre-stimulus effect by generating a neural state that interacts with subsequent visual input. Copyright © 2011 Elsevier Inc. All rights reserved.
Isolating Discriminant Neural Activity in the Presence of Eye Movements and Concurrent Task Demands
Touryan, Jon; Lawhern, Vernon J.; Connolly, Patrick M.; Bigdely-Shamlo, Nima; Ries, Anthony J.
2017-01-01
A growing number of studies use the combination of eye-tracking and electroencephalographic (EEG) measures to explore the neural processes that underlie visual perception. In these studies, fixation-related potentials (FRPs) are commonly used to quantify early and late stages of visual processing that follow the onset of each fixation. However, FRPs reflect a mixture of bottom-up (sensory-driven) and top-down (goal-directed) processes, in addition to eye movement artifacts and unrelated neural activity. At present there is little consensus on how to separate this evoked response into its constituent elements. In this study we sought to isolate the neural sources of target detection in the presence of eye movements and over a range of concurrent task demands. Here, participants were asked to identify visual targets (Ts) amongst a grid of distractor stimuli (Ls), while simultaneously performing an auditory N-back task. To identify the discriminant activity, we used independent components analysis (ICA) for the separation of EEG into neural and non-neural sources. We then further separated the neural sources, using a modified measure-projection approach, into six regions of interest (ROIs): occipital, fusiform, temporal, parietal, cingulate, and frontal cortices. Using activity from these ROIs, we identified target from non-target fixations in all participants at a level similar to other state-of-the-art classification techniques. Importantly, we isolated the time course and spectral features of this discriminant activity in each ROI. In addition, we were able to quantify the effect of cognitive load on both fixation-locked potential and classification performance across regions. Together, our results show the utility of a measure-projection approach for separating task-relevant neural activity into meaningful ROIs within more complex contexts that include eye movements. PMID:28736519
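The first analysis step described, separating EEG into neural and non-neural sources with ICA, can be sketched as follows; the three simulated sources stand in for actual recordings, and the subsequent ROI assignment is only indicated in a comment.

```python
# Unmixing simulated multichannel signals with FastICA, as a stand-in for the
# ICA decomposition of EEG into neural and artifactual (e.g., ocular) sources.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 1000)
sources = np.column_stack([np.sin(40 * t),                 # oscillatory "neural" source
                           np.sign(np.sin(7 * t)),         # slow square wave, e.g., eye movements
                           rng.laplace(size=t.size)])      # heavy-tailed noise source
mixing = rng.normal(size=(3, 3))
channels = sources @ mixing.T                              # simulated sensor recordings

ica = FastICA(n_components=3, random_state=0)
activations = ica.fit_transform(channels)                  # recovered component time courses
print(activations.shape, ica.mixing_.shape)                # components would next be mapped to ROIs
```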
Humans do not have direct access to retinal flow during walking
Souman, Jan L.; Freeman, Tom C.A.; Eikmeier, Verena; Ernst, Marc O.
2013-01-01
Perceived visual speed has been reported to be reduced during walking. This reduction has been attributed to a partial subtraction of walking speed from visual speed (Durgin & Gigone, 2007; Durgin, Gigone, & Scott, 2005). We tested whether observers still have access to the retinal flow before subtraction takes place. Observers performed a 2IFC visual speed discrimination task while walking on a treadmill. In one condition, walking speed was identical in the two intervals, while in a second condition walking speed differed between intervals. If observers have access to the retinal flow before subtraction, any changes in walking speed across intervals should not affect their ability to discriminate retinal flow speed. Contrary to this “direct-access hypothesis”, we found that observers were worse at discrimination when walking speed differed between intervals. The results therefore suggest that observers do not have access to retinal flow before subtraction. We also found that the amount of subtraction depended on the visual speed presented, suggesting that the interaction between the processing of visual input and of self-motion is more complex than previously proposed. PMID:20884509
Processing of pitch and location in human auditory cortex during visual and auditory tasks
Häkkinen, Suvi; Ovaska, Noora; Rinne, Teemu
2015-01-01
The relationship between stimulus-dependent and task-dependent activations in human auditory cortex (AC) during pitch and location processing is not well understood. In the present functional magnetic resonance imaging study, we investigated the processing of task-irrelevant and task-relevant pitch and location during discrimination, n-back, and visual tasks. We tested three hypotheses: (1) According to prevailing auditory models, stimulus-dependent processing of pitch and location should be associated with enhanced activations in distinct areas of the anterior and posterior superior temporal gyrus (STG), respectively. (2) Based on our previous studies, task-dependent activation patterns during discrimination and n-back tasks should be similar when these tasks are performed on sounds varying in pitch or location. (3) Previous studies in humans and animals suggest that pitch and location tasks should enhance activations especially in those areas that also show activation enhancements associated with stimulus-dependent pitch and location processing, respectively. Consistent with our hypotheses, we found stimulus-dependent sensitivity to pitch and location in anterolateral STG and anterior planum temporale (PT), respectively, in line with the view that these features are processed in separate parallel pathways. Further, task-dependent activations during discrimination and n-back tasks were associated with enhanced activations in anterior/posterior STG and posterior STG/inferior parietal lobule (IPL) irrespective of stimulus features. However, direct comparisons between pitch and location tasks performed on identical sounds revealed no significant activation differences. These results suggest that activations during pitch and location tasks are not strongly affected by enhanced stimulus-dependent activations to pitch or location. We also found that activations in PT were strongly modulated by task requirements and that areas in the inferior parietal lobule (IPL) showed task-dependent activation modulations, but no systematic activations to pitch or location. Based on these results, we argue that activations during pitch and location tasks cannot be explained by enhanced stimulus-specific processing alone, but rather that activations in human AC depend in a complex manner on the requirements of the task at hand. PMID:26594185
Howard, Christina J; Wilding, Robert; Guest, Duncan
2017-02-01
There is mixed evidence that video game players (VGPs) may demonstrate better performance in perceptual and attentional tasks than non-VGPs (NVGPs). The rapid serial visual presentation task is one such case, where observers respond to two successive targets embedded within a stream of serially presented items. We tested light VGPs (LVGPs) and NVGPs on this task. LVGPs were better at correctly identifying second targets whether or not they were also attempting to respond to the first target. This performance benefit for LVGPs suggests enhanced visual processing of briefly presented stimuli even with only very moderate game play. Observers were less accurate at discriminating the orientation of a second target within the stream if it occurred shortly after presentation of the first target, that is to say, they were subject to the attentional blink (AB). We found no evidence for any reduction in the AB in LVGPs compared with NVGPs.
Supramodal parametric working memory processing in humans.
Spitzer, Bernhard; Blankenburg, Felix
2012-03-07
Previous studies of delayed-match-to-sample (DMTS) frequency discrimination in animals and humans have succeeded in delineating the neural signature of frequency processing in somatosensory working memory (WM). During retention of vibrotactile frequencies, stimulus-dependent single-cell and population activity in prefrontal cortex was found to reflect the task-relevant memory content, whereas increases in occipital alpha activity signaled the disengagement of areas not relevant for the tactile task. Here, we recorded EEG from human participants to determine the extent to which these mechanisms can be generalized to frequency retention in the visual and auditory domains. Subjects performed analogous variants of a DMTS frequency discrimination task, with the frequency information presented either visually, auditorily, or by vibrotactile stimulation. Examining oscillatory EEG activity during frequency retention, we found characteristic topographical distributions of alpha power over visual, auditory, and somatosensory cortices, indicating systematic patterns of inhibition and engagement of early sensory areas, depending on stimulus modality. The task-relevant frequency information, in contrast, was found to be represented in right prefrontal cortex, independent of presentation mode. In each of the three modality conditions, parametric modulations of prefrontal upper beta activity (20-30 Hz) emerged, in a very similar manner as recently found in vibrotactile tasks. Together, the findings corroborate a view of parametric WM as supramodal internal scaling of abstract quantity information and suggest strong relevance of previous evidence from vibrotactile work for a more general framework of quantity processing in human working memory.
Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren
2016-09-01
We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of Σd′², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
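The sample-size model's core prediction mentioned above, an invariant sum of squared sensitivities across set sizes, can be illustrated with a short sketch; the notation and the starting d′ value are assumptions for illustration.

```python
# If a fixed budget of noisy stimulus samples is split evenly over N items,
# per-item sensitivity falls as 1/sqrt(N), so N * d_N'^2 stays constant.
import numpy as np

def per_item_dprime(d1, set_size):
    return d1 / np.sqrt(set_size)

d1 = 2.0                          # hypothetical single-item sensitivity
for n in (1, 2, 4):
    d_n = per_item_dprime(d1, n)
    print(n, round(d_n, 3), round(n * d_n ** 2, 3))   # last column equals d1**2 for every n
```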
Seeing visual word forms: spatial summation, eccentricity and spatial configuration.
Kao, Chien-Hui; Chen, Chien-Chung
2012-06-01
We investigated observers' performance in detecting and discriminating visual word forms as a function of target size and retinal eccentricity. The contrast threshold of visual words was measured with a spatial two-alternative forced-choice paradigm and a PSI adaptive method. The observers were to indicate which of two sides contained a stimulus in the detection task, and which contained a real character (as opposed to a pseudo- or non-character) in the discrimination task. When the target size was sufficiently small, the detection threshold of a character decreased as its size increased, with a slope of -1/2 on log-log coordinates, up to a critical size at all eccentricities and for all stimulus types. The discrimination threshold decreased with target size with a slope of -1 up to a critical size that was dependent on stimulus type and eccentricity. Beyond that size, the threshold decreased with a slope of -1/2 on log-log coordinates before leveling out. The data was well fit by a spatial summation model that contains local receptive fields (RFs) and a summation across these filters within an attention window. Our result implies that detection is mediated by local RFs smaller than any tested stimuli and thus detection performance is dominated by summation across receptive fields. On the other hand, discrimination is dominated by a summation within a local RF in the fovea but a cross RF summation in the periphery. Copyright © 2012 Elsevier Ltd. All rights reserved.
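A hedged sketch of the piecewise threshold-versus-size relation reported for the discrimination task (slope -1 on log-log coordinates up to a critical size, then -1/2, then flat) is given below; the critical sizes and baseline threshold are illustrative, not fitted values.

```python
# Piecewise power-law threshold function on log-log coordinates.
def discrimination_threshold(size, c1=0.5, c2=2.0, t0=1.0):
    """Contrast threshold vs. character size (deg); c1, c2 and t0 are hypothetical."""
    if size <= c1:
        return t0 * (size / c1) ** -1.0       # within-RF (full) summation: slope -1
    if size <= c2:
        return t0 * (size / c1) ** -0.5       # cross-RF summation: slope -1/2
    return t0 * (c2 / c1) ** -0.5             # beyond c2 the threshold levels out

for s in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(s, round(discrimination_threshold(s), 3))
```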
Figure-ground discrimination in the avian brain: the nucleus rotundus and its inhibitory complex.
Acerbo, Martin J; Lazareva, Olga F; McInnerney, John; Leiker, Emily; Wasserman, Edward A; Poremba, Amy
2012-10-01
In primates, neurons sensitive to figure-ground status are located in striate cortex (area V1) and extrastriate cortex (area V2). Although much is known about the anatomical structure and connectivity of the avian visual pathway, the functional organization of the avian brain remains largely unexplored. To pinpoint the areas associated with figure-ground segregation in the avian brain, we used a radioactively labeled glucose analog to compare differences in glucose uptake after figure-ground, color, and shape discriminations. We also included a control group that received food on a variable-interval schedule, but was not required to learn a visual discrimination. Although the discrimination task depended on group assignment, the stimulus displays were identical for all three experimental groups, ensuring that all animals were exposed to the same visual input. Our analysis concentrated on the primary thalamic nucleus associated with visual processing, the nucleus rotundus (Rt), and two nuclei providing regulatory feedback, the pretectum (PT) and the nucleus subpretectalis/interstitio-pretecto-subpretectalis complex (SP/IPS). We found that figure-ground discrimination was associated with strong and nonlateralized activity of Rt and SP/IPS, whereas color discrimination produced strong and lateralized activation in Rt alone. Shape discrimination was associated with lower activity of Rt than in the control group. Taken together, our results suggest that figure-ground discrimination is associated with Rt and that SP/IPS may be a main source of inhibitory control. Thus, figure-ground segregation in the avian brain may occur earlier than in the primate brain. Copyright © 2012 Elsevier Ltd. All rights reserved.
Left hemispheric advantage for numerical abilities in the bottlenose dolphin.
Kilian, Annette; von Fersen, Lorenzo; Güntürkün, Onur
2005-02-28
In a two-choice discrimination paradigm, a bottlenose dolphin discriminated relational dimensions between visual numerosity stimuli under monocular viewing conditions. After prior binocular acquisition of the task, two monocular test series with different number stimuli were conducted. In accordance with recent studies on visual lateralization in the bottlenose dolphin, our results revealed an overall advantage of the right visual field. Due to the complete decussation of the optic nerve fibers, this suggests a specialization of the left hemisphere for analysing relational features between stimuli as required in tests for numerical abilities. These processes are typically right hemisphere-based in other mammals (including humans) and birds. The present data provide further evidence for a general right visual field advantage in bottlenose dolphins for visual information processing. It is thus assumed that dolphins possess a unique functional architecture of their cerebral asymmetries. (c) 2004 Elsevier B.V. All rights reserved.
Effects of visual attention on chromatic and achromatic detection sensitivities.
Uchikawa, Keiji; Sato, Masayuki; Kuwamura, Keiko
2014-05-01
Visual attention has a significant effect on various visual functions, such as response time, detection and discrimination sensitivity, and color appearance. It has been suggested that visual attention may affect visual functions in the early visual pathways. In this study we examined selective effects of visual attention on the sensitivities of the chromatic and achromatic pathways, to clarify whether visual attention modifies responses in the early visual system. We used a dual-task paradigm in which the observer detected a peripheral test stimulus presented at 4 deg eccentricity while concurrently carrying out an attention task in the central visual field. Experiment 1 confirmed that peripheral spectral sensitivities were reduced more for short and long wavelengths than for middle wavelengths when the central attention task was performed, so that visual attention changed the shape of the spectral sensitivity function. This indicated that visual attention affected the chromatic response more strongly than the achromatic response. Experiment 2 showed that detection thresholds increased to a greater degree in the red-green and yellow-blue chromatic directions than in the white-black achromatic direction in the dual-task condition. In experiment 3 we showed that the peripheral threshold elevations depended on the combination of color directions of the central and peripheral stimuli. Since the chromatic and achromatic responses are processed separately in the early visual pathways, the present results provide additional evidence that visual attention affects responses in the early visual pathways.
Larcombe, Stephanie J.; Kennard, Chris
2017-01-01
Abstract Repeated practice of a specific task can improve visual performance, but the neural mechanisms underlying this improvement in performance are not yet well understood. Here we trained healthy participants on a visual motion task daily for 5 days in one visual hemifield. Before and after training, we used functional magnetic resonance imaging (fMRI) to measure the change in neural activity. We also imaged a control group of participants on two occasions who did not receive any task training. While in the MRI scanner, all participants completed the motion task in the trained and untrained visual hemifields separately. Following training, participants improved their ability to discriminate motion direction in the trained hemifield and, to a lesser extent, in the untrained hemifield. The amount of task learning correlated positively with the change in activity in the medial superior temporal (MST) area. MST is the anterior portion of the human motion complex (hMT+). MST changes were localized to the hemisphere contralateral to the region of the visual field, where perceptual training was delivered. Visual areas V2 and V3a showed an increase in activity between the first and second scan in the training group, but this was not correlated with performance. The contralateral anterior hippocampus and bilateral dorsolateral prefrontal cortex (DLPFC) and frontal pole showed changes in neural activity that also correlated with the amount of task learning. These findings emphasize the importance of MST in perceptual learning of a visual motion task. Hum Brain Mapp 39:145–156, 2018. © 2017 Wiley Periodicals, Inc. PMID:28963815
Altering sensorimotor feedback disrupts visual discrimination of facial expressions.
Wood, Adrienne; Lupyan, Gary; Sherrin, Steven; Niedenthal, Paula
2016-08-01
Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual-and not just conceptual-processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.
ERIC Educational Resources Information Center
Lawson, Rebecca
2009-01-01
A sequential matching task was used to compare how the difficulty of shape discrimination influences the achievement of object constancy for depth rotations across haptic and visual object recognition. Stimuli were nameable, 3-dimensional plastic models of familiar objects (e.g., bed, chair) and morphs midway between these endpoint shapes (e.g., a…
Moehler, Tobias; Fiehler, Katja
2015-11-01
Saccade curvature represents a sensitive measure of oculomotor inhibition with saccades curving away from covertly attended locations. Here we investigated whether and how saccade curvature depends on movement preparation time when a perceptual task is performed during or before saccade preparation. Participants performed a dual-task including a visual discrimination task at a cued location and a saccade task to the same location (congruent) or to a different location (incongruent). Additionally, we varied saccade preparation time (time between saccade cue and Go-signal) and the occurrence of the discrimination task (during saccade preparation=simultaneous vs. before saccade preparation=sequential). We found deteriorated perceptual performance in incongruent trials during simultaneous task performance while perceptual performance was unaffected during sequential task performance. Saccade accuracy and precision were deteriorated in incongruent trials during simultaneous and, to a lesser extent, also during sequential task performance. Saccades consistently curved away from covertly attended non-saccade locations. Saccade curvature was unaffected by movement preparation time during simultaneous task performance but decreased and finally vanished with increasing movement preparation time during sequential task performance. Our results indicate that the competing saccade plan to the covertly attended non-saccade location is maintained during simultaneous task performance until the perceptual task is solved while in the sequential condition, in which the discrimination task is solved prior to the saccade task, oculomotor inhibition decays gradually with movement preparation time. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Roth, Daphne Ari-Even; Kishon-Rabin, Liat; Hildesheimer, Minka; Karni, Avi
2005-01-01
Large gains in performance, evolving hours after practice has terminated, were reported in a number of visual and some motor learning tasks, as well as recently in an auditory nonverbal discrimination task. It was proposed that these gains reflect a latent phase of experience-triggered memory consolidation in human skill learning. It is not clear,…
Concurrent visuomotor behaviour improves form discrimination in a patient with visual form agnosia.
Schenk, Thomas; Milner, A David
2006-09-01
It is now well established that the visual brain is divided into two visual streams, the ventral and the dorsal stream. Milner and Goodale have suggested that the ventral stream is dedicated for processing vision for perception and the dorsal stream vision for action [A.D. Milner & M.A. Goodale (1995) The Visual Brain in Action, Oxford University Press, Oxford]. However, it is possible that ongoing processes in the visuomotor stream will nevertheless have an effect on perceptual processes. This possibility was examined in the present study. We have examined the visual form-discrimination performance of the form-agnosic patient D.F. with and without a concurrent visuomotor task, and found that her performance was significantly improved in the former condition. This suggests that the visuomotor behaviour provides cues that enhance her ability to recognize the form of the target object. In control experiments we have ruled out proprioceptive and efferent cues, and therefore propose that D.F. can, to a significant degree, access the object's visuomotor representation in the dorsal stream. Moreover, we show that the grasping-induced perceptual improvement disappears if the target objects only differ with respect to their shape but not their width. This suggests that shape information per se is not used for this grasping task.
Delhey, Kaspar; Hall, Michelle; Kingma, Sjouke A; Peters, Anne
2013-01-07
Colour signals are expected to match visual sensitivities of intended receivers. In birds, evolutionary shifts from violet-sensitive (V-type) to ultraviolet-sensitive (U-type) vision have been linked to increased prevalence of colours rich in shortwave reflectance (ultraviolet/blue), presumably due to better perception of such colours by U-type vision. Here we provide the first test of this widespread idea using fairy-wrens and allies (Family Maluridae) as a model, a family where shifts in visual sensitivities from V- to U-type eyes are associated with male nuptial plumage rich in ultraviolet/blue colours. Using psychophysical visual models, we compared the performance of both types of visual systems at two tasks: (i) detecting contrast between male plumage colours and natural backgrounds, and (ii) perceiving intraspecific chromatic variation in male plumage. While U-type outperforms V-type vision at both tasks, the crucial test here is whether U-type vision performs better at detecting and discriminating ultraviolet/blue colours when compared with other colours. This was true for detecting contrast between plumage colours and natural backgrounds (i), but not for discriminating intraspecific variability (ii). Our data indicate that selection to maximize conspicuousness to conspecifics may have led to the correlation between ultraviolet/blue colours and U-type vision in this clade of birds.
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
Wang, Rui; Zhang, Jun-Yun; Klein, Stanley A.; Levi, Dennis M.; Yu, Cong
2014-01-01
Perceptual learning, a process in which training improves visual discrimination, is often specific to the trained retinal location, and this location specificity is frequently regarded as an indication of neural plasticity in the retinotopic visual cortex. However, our previous studies have shown that “double training” enables location-specific perceptual learning, such as Vernier learning, to completely transfer to a new location where an irrelevant task is practiced. Here we show that Vernier learning can be actuated by less location-specific orientation or motion-direction learning to transfer to completely untrained retinal locations. This “piggybacking” effect occurs even if both tasks are trained at the same retinal location. However, piggybacking does not occur when the Vernier task is paired with a more location-specific contrast-discrimination task. This previously unknown complexity challenges the current understanding of perceptual learning and its specificity/transfer. Orientation and motion-direction learning, but not contrast and Vernier learning, appears to activate a global process that allows learning transfer to untrained locations. Moreover, when paired with orientation or motion-direction learning, Vernier learning may be “piggybacked” by the activated global process to transfer to other untrained retinal locations. How this task-specific global activation process is achieved is as yet unknown. PMID:25398974
Midline thalamic reuniens lesions improve executive behaviors.
Prasad, J A; Abela, A R; Chudasama, Y
2017-03-14
The role of the thalamus in complex cognitive behavior is a topic of increasing interest. Here we demonstrate that lesions of the nucleus reuniens (NRe), a midline thalamic nucleus interconnected with both hippocampal and prefrontal circuitry, lead to enhancement of executive behaviors typically associated with the prefrontal cortex. Rats were tested on four behavioral tasks: (1) the combined attention-memory (CAM) task, which simultaneously assessed attention to a visual target and memory for that target over a variable delay; (2) spatial memory using a radial arm maze, (3) discrimination and reversal learning using a touchscreen operant platform, and (4) decision-making with delayed outcomes. Following NRe lesions, the animals became more efficient in their performance, responding with shorter reaction times but also less impulsively than controls. This change, combined with a decrease in perseverative responses, led to focused attention in the CAM task and accelerated learning in the visual discrimination task. There were no observed changes in tasks involving either spatial memory or value-based decision making. These data complement ongoing efforts to understand the role of midline thalamic structures in human cognition, including the development of thalamic stimulation as a therapeutic strategy for acquired cognitive disabilities (Schiff, 2008; Mair et al., 2011), and point to the NRe as a potential target for clinical intervention. Published by Elsevier Ltd.
Prestimulus alpha-band power biases visual discrimination confidence, but not accuracy.
Samaha, Jason; Iemi, Luca; Postle, Bradley R
2017-09-01
The magnitude of power in the alpha-band (8-13Hz) of the electroencephalogram (EEG) prior to the onset of a near threshold visual stimulus predicts performance. Together with other findings, this has been interpreted as evidence that alpha-band dynamics reflect cortical excitability. We reasoned, however, that non-specific changes in excitability would be expected to influence signal and noise in the same way, leaving actual discriminability unchanged. Indeed, using a two-choice orientation discrimination task, we found that discrimination accuracy was unaffected by fluctuations in prestimulus alpha power. Decision confidence, on the other hand, was strongly negatively correlated with prestimulus alpha power. This finding constitutes a clear dissociation between objective and subjective measures of visual perception as a function of prestimulus cortical excitability. This dissociation is predicted by a model where the balance of evidence supporting each choice drives objective performance but only the magnitude of evidence supporting the selected choice drives subjective reports, suggesting that human perceptual confidence can be suboptimal with respect to tracking objective accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
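The single-trial analysis this kind of result rests on can be approximated in a few lines: estimate spectral power in the 8-13 Hz band over a prestimulus window for each trial, then correlate that power with confidence ratings and with accuracy. The sketch below is only illustrative and is not the authors' pipeline; the data shapes, sampling rate, and rating scale are assumptions.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import spearmanr

def prestim_alpha_power(epochs, fs, band=(8.0, 13.0)):
    """Mean 8-13 Hz power in a prestimulus window.

    epochs : array, shape (n_trials, n_samples) -- one channel, samples
             covering only the prestimulus interval.
    fs     : sampling rate in Hz.
    """
    freqs, psd = welch(epochs, fs=fs, nperseg=epochs.shape[-1], axis=-1)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, in_band].mean(axis=-1)          # one value per trial

# Hypothetical single-trial data: alpha power vs. confidence and accuracy.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, 256))          # 200 trials, 0.5 s at 512 Hz
confidence = rng.integers(1, 5, size=200)         # 1-4 confidence ratings
accuracy = rng.integers(0, 2, size=200)           # correct / incorrect

alpha = prestim_alpha_power(epochs, fs=512)
print("alpha vs confidence:", spearmanr(alpha, confidence))
print("alpha vs accuracy:  ", spearmanr(alpha, accuracy))
```

On the account described in the abstract, the first correlation would be reliably negative while the second would be near zero.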
Herrmann, C S; Mecklinger, A
2000-12-01
We examined evoked and induced responses in event-related fields and gamma activity in the magnetoencephalogram (MEG) during a visual classification task. The objective was to investigate the effects of target classification and the different levels of discrimination between certain stimulus features. We performed two experiments, which differed only in the subjects' task while the stimuli were identical. In Experiment 1, subjects responded by a button-press to rare Kanizsa squares (targets) among Kanizsa triangles and non-Kanizsa figures (standards). This task requires the processing of both stimulus features (colinearity and number of inducer disks). In Experiment 2, the four stimuli of Experiment 1 were used as standards and the occurrence of an additional stimulus without any feature overlap with the Kanizsa stimuli (a rare and highly salient red fixation cross) had to be detected. Discrimination of colinearity and number of inducer disks was not necessarily required for task performance. We applied a wavelet-based time-frequency analysis to the data and calculated topographical maps of the 40 Hz activity. The early evoked gamma activity (100-200 ms) in Experiment 1 was higher for targets as compared to standards. In Experiment 2, no significant differences were found in the gamma responses to the Kanizsa figures and non-Kanizsa figures. This pattern of results suggests that early evoked gamma activity in response to visual stimuli is affected by the targetness of a stimulus and the need to discriminate between the features of a stimulus.
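A wavelet-based time-frequency analysis of the kind described here typically convolves the signal with a complex Morlet wavelet centred on 40 Hz, and distinguishes evoked gamma (phase-locked, computed on the trial average) from induced or total gamma (computed per trial, then averaged). The sketch below only illustrates that distinction under assumed data shapes and wavelet parameters; it is not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def morlet(f0, fs, n_cycles=7):
    """Complex Morlet wavelet centred on f0 Hz."""
    sigma_t = n_cycles / (2 * np.pi * f0)
    t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1 / fs)
    return np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma_t**2))

def gamma_power(trials, fs, f0=40.0):
    """Time-resolved 40 Hz power for one MEG sensor.

    trials : array, shape (n_trials, n_samples).
    Returns (induced, evoked): induced = mean of single-trial power,
    evoked = power of the trial-averaged (phase-locked) signal.
    """
    w = morlet(f0, fs)
    single = np.abs(np.stack([fftconvolve(tr, w, mode="same") for tr in trials])) ** 2
    induced = single.mean(axis=0)
    evoked = np.abs(fftconvolve(trials.mean(axis=0), w, mode="same")) ** 2
    return induced, evoked

# Hypothetical example: 100 trials, 600 ms epochs sampled at 600 Hz.
rng = np.random.default_rng(1)
trials = rng.standard_normal((100, 360))
induced, evoked = gamma_power(trials, fs=600)
```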
Discrimination of Complex Human Behavior by Pigeons (Columba livia) and Humans
Qadri, Muhammad A. J.; Sayde, Justin M.; Cook, Robert G.
2014-01-01
The cognitive and neural mechanisms for recognizing and categorizing behavior are not well understood in non-human animals. In the current experiments, pigeons and humans learned to categorize two non-repeating, complex human behaviors (“martial arts” vs. “Indian dance”). Using multiple video exemplars of a digital human model, pigeons discriminated these behaviors in a go/no-go task and humans in a choice task. Experiment 1 found that pigeons already experienced with discriminating the locomotive actions of digital animals acquired the discrimination more rapidly when action information was available than when only pose information was available. Experiments 2 and 3 found this same dynamic superiority effect with naïve pigeons and human participants. Both species used the same combination of immediately available static pose information and more slowly perceived dynamic action cues to discriminate the behavioral categories. Theories based on generalized visual mechanisms, as opposed to embodied, species-specific action networks, offer a parsimonious account of how these different animals recognize behavior across and within species. PMID:25379777
Rodríguez-Gironés, Miguel A.; Trillo, Alejandro; Corcobado, Guadalupe
2013-01-01
The results of behavioural experiments provide important information about the structure and information-processing abilities of the visual system. Nevertheless, if we want to infer from behavioural data how the visual system operates, it is important to know how different learning protocols affect performance and to devise protocols that minimise noise in the response of experimental subjects. The purpose of this work was to investigate how reinforcement schedule and individual variability affect the learning process in a colour discrimination task. Free-flying bumblebees were trained to discriminate between two perceptually similar colours. The target colour was associated with sucrose solution, and the distractor could be associated with water or quinine solution throughout the experiment, or with one substance during the first half of the experiment and the other during the second half. Both acquisition and final performance of the discrimination task (measured as proportion of correct choices) were determined by the choice of reinforcer during the first half of the experiment: regardless of whether bees were trained with water or quinine during the second half of the experiment, bees trained with quinine during the first half learned the task faster and performed better during the whole experiment. Our results confirm that the choice of stimuli used during training affects the rate at which colour discrimination tasks are acquired and show that early contact with a strongly aversive stimulus can be sufficient to maintain high levels of attention during several hours. On the other hand, bees which took more time to decide on which flower to alight were more likely to make correct choices than bees which made fast decisions. This result supports the existence of a trade-off between foraging speed and accuracy, and highlights the importance of measuring choice latencies during behavioural experiments focusing on cognitive abilities. PMID:23951186
DOT National Transportation Integrated Search
1988-01-01
Operational monitoring situations, in contrast to typical laboratory vigilance tasks, generally involve more than just stimulus detection and recognition. They frequently involve complex multidimensional discriminations, interpretations of significan...
Perception of Self-Motion and Regulation of Walking Speed in Young-Old Adults.
Lalonde-Parsi, Marie-Jasmine; Lamontagne, Anouk
2015-07-01
Whether a reduced perception of self-motion contributes to poor walking speed adaptations in older adults is unknown. In this study, speed discrimination thresholds (perceptual task) and walking speed adaptations (walking task) were compared between young (19-27 years) and young-old individuals (63-74 years), and the relationship between performance on the two tasks was examined. Participants were evaluated while viewing a virtual corridor in a helmet-mounted display. Speed discrimination thresholds were determined using a staircase procedure. Walking speed modulation was assessed on a self-paced treadmill while exposed to different self-motion speeds ranging from 0.25 to 2 times the participants' comfortable speed. For each speed, participants were instructed to match the self-motion speed described by the moving corridor. On the walking task, participants displayed smaller walking speed errors at comfortable walking speeds compared with slower or faster speeds. The young-old adults presented larger speed discrimination thresholds (perceptual experiment) and larger walking speed errors (walking experiment) compared with young adults. Larger walking speed errors were associated with higher discrimination thresholds. The enhanced performance on the walking task at comfortable speed suggests that intersensory calibration processes are influenced by experience, hence optimized for frequently encountered conditions. The altered performance of the young-old adults on the perceptual and walking tasks, as well as the relationship observed between the two tasks, suggests that a poor perception of visual motion information may contribute to the poor walking speed adaptations that arise with aging.
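As a rough illustration of how a staircase procedure converges on a discrimination threshold, the sketch below implements a simple two-down/one-up rule driven by a simulated observer. The starting level, step size, and simulated psychometric function are assumptions for the example, not the parameters used in this study.

```python
import numpy as np

def two_down_one_up(respond, start, step, n_reversals=10):
    """Simple 2-down/1-up staircase (converges near 70.7% correct).

    respond(level) -> True for a correct response at speed difference
    `level`. The threshold estimate is the mean of the later reversal
    levels. All parameters are illustrative.
    """
    level, correct_in_row, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_in_row += 1
            if correct_in_row == 2:                 # two correct -> make task harder
                correct_in_row = 0
                if direction == +1:
                    reversals.append(level)         # direction change = reversal
                direction = -1
                level = max(level - step, step)
        else:                                       # one error -> make task easier
            correct_in_row = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return np.mean(reversals[2:])                   # discard the first reversals

# Simulated observer with a "true" threshold of 0.15 (arbitrary speed units).
rng = np.random.default_rng(2)
def simulated(d):
    return rng.random() < 1 - 0.5 * np.exp(-(d / 0.15) ** 2)

print(two_down_one_up(simulated, start=0.5, step=0.05))
```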
A task-irrelevant stimulus attribute affects perception and short-term memory
Huang, Jie; Kahana, Michael J.; Sekuler, Robert
2010-01-01
Selective attention protects cognition against intrusions of task-irrelevant stimulus attributes. This protective function was tested in coordinated psychophysical and memory experiments. Stimuli were superimposed, horizontally and vertically oriented gratings of varying spatial frequency; only one orientation was task relevant. Experiment 1 demonstrated that a task-irrelevant spatial frequency interfered with visual discrimination of the task-relevant spatial frequency. Experiment 2 adopted a two-item Sternberg task, using stimuli that had been scaled to neutralize interference at the level of vision. Despite being visually neutralized, the task-irrelevant attribute strongly influenced recognition accuracy and associated reaction times (RTs). This effect was sharply tuned, with the task-irrelevant spatial frequency having an impact only when the task-relevant spatial frequencies of the probe and study items were highly similar to one another. Model-based analyses of judgment accuracy and RT distributional properties converged on the point that the irrelevant orientation operates at an early stage in memory processing, not at a later one that supports decision making. PMID:19933454
Bett, David; Allison, Elizabeth; Murdoch, Lauren H.; Kaefer, Karola; Wood, Emma R.; Dudchenko, Paul A.
2012-01-01
Vicarious trial-and-errors (VTEs) are back-and-forth movements of the head exhibited by rodents and other animals when faced with a decision. These behaviors have recently been associated with prospective sweeps of hippocampal place cell firing, and thus may reflect a rodent model of deliberative decision-making. The aim of the current study was to test whether the hippocampus is essential for VTEs in a spatial memory task and in a simple visual discrimination (VD) task. We found that lesions of the hippocampus with ibotenic acid produced a significant impairment in the accuracy of choices in a serial spatial reversal (SR) task. In terms of VTEs, whereas sham-lesioned animals engaged in more VTE behavior prior to identifying the location of the reward as opposed to repeated trials after it had been located, the lesioned animals failed to show this difference. In contrast, damage to the hippocampus had no effect on acquisition of a VD or on the VTEs seen in this task. For both lesion and sham-lesion animals, adding an additional choice to the VD increased the number of VTEs and decreased the accuracy of choices. Together, these results suggest that the hippocampus may be specifically involved in VTE behavior during spatial decision making. PMID:23115549
Frontal–Occipital Connectivity During Visual Search
Pantazatos, Spiro P.; Yanagihara, Ted K.; Zhang, Xian; Meitzler, Thomas
2012-01-01
Although expectation- and attention-related interactions between ventral and medial prefrontal cortex and stimulus category-selective visual regions have been identified during visual detection and discrimination, it is not known if similar neural mechanisms apply to other tasks such as visual search. The current work tested the hypothesis that high-level frontal regions, previously implicated in expectation and visual imagery of object categories, interact with visual regions associated with object recognition during visual search. Using functional magnetic resonance imaging, subjects searched for a specific object that varied in size and location within a complex natural scene. A model-free, spatial-independent component analysis isolated multiple task-related components, one of which included visual cortex, as well as a cluster within ventromedial prefrontal cortex (vmPFC), consistent with the engagement of both top-down and bottom-up processes. Analyses of psychophysiological interactions showed increased functional connectivity between vmPFC and object-sensitive lateral occipital cortex (LOC), and results from dynamic causal modeling and Bayesian Model Selection suggested bidirectional connections between vmPFC and LOC that were positively modulated by the task. Using image-guided diffusion-tensor imaging, functionally seeded, probabilistic white-matter tracts between vmPFC and LOC, which presumably underlie this effective interconnectivity, were also observed. These connectivity findings extend previous models of visual search processes to include specific frontal–occipital neuronal interactions during a natural and complex search task. PMID:22708993
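The core of a psychophysiological-interaction (PPI) analysis can be summarised as a regression of a target region's time course on a seed time course, a task regressor, and their product, with the interaction coefficient indexing task-dependent coupling. The sketch below shows only that schematic form with hypothetical names and simulated data; standard implementations additionally centre the variables and form the interaction at the neural level before convolution, and this study's full analysis also involved ICA, DCM, Bayesian model selection, and diffusion imaging.

```python
import numpy as np

def ppi_betas(target, seed, task):
    """Ordinary least-squares fit of the basic PPI model.

    target, seed : BOLD time courses (target region, seed region).
    task         : task regressor (e.g., a search-block indicator).
    The coefficient on seed*task estimates the task-dependent change in
    seed-target coupling (illustrative, simplified model).
    """
    X = np.column_stack([
        np.ones_like(seed),        # intercept
        task,                      # psychological regressor
        seed,                      # physiological regressor
        seed * task,               # PPI interaction term
    ])
    betas, *_ = np.linalg.lstsq(X, target, rcond=None)
    return dict(zip(["intercept", "task", "seed", "ppi"], betas))

# Hypothetical time courses: 300 volumes, coupling doubled during task blocks.
rng = np.random.default_rng(3)
task = np.repeat([0.0, 1.0], 150)
seed = rng.standard_normal(300)
target = 0.4 * seed * task + 0.2 * seed + rng.standard_normal(300)
print(ppi_betas(target, seed, task))
```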
Clery, Stephane; Cumming, Bruce G; Nienborg, Hendrikje
2017-01-18
Fine judgments of stereoscopic depth rely mainly on relative judgments of depth (relative binocular disparity) between objects, rather than judgments of the distance to where the eyes are fixating (absolute disparity). In macaques, visual area V2 is the earliest site in the visual processing hierarchy for which neurons selective for relative disparity have been observed (Thomas et al., 2002). Here, we found that, in macaques trained to perform a fine disparity discrimination task, disparity-selective neurons in V2 were highly selective for the task, and their activity correlated with the animals' perceptual decisions (unexplained by the stimulus). This may partially explain similar correlations reported in downstream areas. Although compatible with a perceptual role of these neurons for the task, the interpretation of such decision-related activity is complicated by the effects of interneuronal "noise" correlations between sensory neurons. Recent work has developed simple predictions to differentiate decoding schemes (Pitkow et al., 2015) without needing measures of noise correlations, and found that data from early sensory areas were compatible with optimal linear readout of populations with information-limiting correlations. In contrast, our data here deviated significantly from these predictions. We additionally tested this prediction for previously reported results of decision-related activity in V2 for a related task, coarse disparity discrimination (Nienborg and Cumming, 2006), thought to rely on absolute disparity. Although these data followed the predicted pattern, they violated the prediction quantitatively. This suggests that optimal linear decoding of sensory signals is not generally a good predictor of behavior in simple perceptual tasks. Activity in sensory neurons that correlates with an animal's decision is widely believed to provide insights into how the brain uses information from sensory neurons. Recent theoretical work developed simple predictions to differentiate decoding schemes, and found support for optimal linear readout of early sensory populations with information-limiting correlations. Here, we observed decision-related activity for neurons in visual area V2 of macaques performing fine disparity discrimination, as yet the earliest site for this task. These findings, and previously reported results from V2 in a different task, deviated from the predictions for optimal linear readout of a population with information-limiting correlations. Our results suggest that optimal linear decoding of early sensory information is not a general decoding strategy used by the brain. Copyright © 2017 the authors 0270-6474/17/370715-11$15.00/0.
Cognitive Load in Voice Therapy Carry-Over Exercises.
Iwarsson, Jenny; Morris, David Jackson; Balling, Laura Winther
2017-01-01
The cognitive load generated by online speech production may vary with the nature of the speech task. This article examines 3 speech tasks used in voice therapy carry-over exercises, in which a patient is required to adopt and automatize new voice behaviors, ultimately in daily spontaneous communication. Twelve subjects produced speech in 3 conditions: rote speech (weekdays), sentences in a set form, and semispontaneous speech. Subjects simultaneously performed a secondary visual discrimination task for which response times were measured. On completion of each speech task, subjects rated their experience on a questionnaire. Response times from the secondary, visual task were found to be shortest for the rote speech, longer for the semispontaneous speech, and longest for the sentences within the set framework. Principal components derived from the subjective ratings were found to be linked to response times on the secondary visual task. Acoustic measures reflecting fundamental frequency distribution and vocal fold compression varied across the speech tasks. The results indicate that consideration should be given to the selection of speech tasks during the process leading to automation of revised speech behavior and that self-reports may be a reliable index of cognitive load.
Ono, T; Tamura, R; Nishijo, H; Nakamura, K; Tabuchi, E
1989-02-01
Visual information processing was investigated in the inferotemporal cortical (ITCx)-amygdalar (AM)-lateral hypothalamic (LHA) axis, which contributes to food-nonfood discrimination. Neuronal activity was recorded from monkey AM and LHA during discrimination of sensory stimuli including sight of food or nonfood. The task had four phases: control, visual, bar press, and ingestion. Of 710 AM neurons tested, 220 (31.0%) responded during the visual phase: 48 to only visual stimulation, 13 (1.9%) to visual plus oral sensory stimulation, 142 (20.0%) to multimodal stimulation and 17 (2.4%) to one affectively significant item. Of 669 LHA neurons tested, 106 (15.8%) responded in the visual phase. Of 80 visual-related neurons tested systematically, 33 (41.2%) responded selectively to the sight of any object predicting the availability of reward, and 47 (58.8%) responded nondifferentially to both food and nonfood. Many AM neuron responses were graded according to the degree of affective significance of sensory stimuli (sensory-affective association), but responses of LHA food-responsive neurons did not depend on the kind of reward indicated by the sensory stimuli (stimulus-reinforcement association). Some AM and LHA food responses were modulated by extinction or reversal. Dynamic information processing in the ITCx-AM-LHA axis was investigated by inducing reversible deficits through bilateral cooling of ITCx or AM. ITCx cooling suppressed discrimination by vision-responding AM neurons (8/17). AM cooling suppressed LHA responses to food (9/22). We suggest deep AM-LHA involvement in food-nonfood discrimination based on AM sensory-affective association and LHA stimulus-reinforcement association.
Wang, Yan; Wu, Wei; Zhang, Xian; Hu, Xu; Li, Yue; Lou, Shihao; Ma, Xiao; An, Xu; Liu, Hui; Peng, Jing; Ma, Danyi; Zhou, Yifeng; Yang, Yupeng
2016-01-01
Visual perceptual learning (VPL) can improve spatial vision in normally sighted and visually impaired individuals. Although previous studies of humans and large animals have explored the neural basis of VPL, elucidation of the underlying cellular and molecular mechanisms remains a challenge. Owing to the advantages of molecular genetic and optogenetic manipulations, the mouse is a promising model for providing a mechanistic understanding of VPL. Here, we thoroughly evaluated the effects and properties of VPL on spatial vision in C57BL/6J mice using a two-alternative, forced-choice visual water task. Briefly, the mice underwent prolonged training at near the individual threshold of contrast or spatial frequency (SF) for pattern discrimination or visual detection for 35 consecutive days. Following training, the contrast-threshold trained mice showed an 87% improvement in contrast sensitivity (CS) and a 55% gain in visual acuity (VA). Similarly, the SF-threshold trained mice exhibited comparable and long-lasting improvements in VA and significant gains in CS over a wide range of SFs. Furthermore, learning largely transferred across eyes and stimulus orientations. Interestingly, learning could transfer from a pattern discrimination task to a visual detection task, but not vice versa. We validated that this VPL fully restored VA in adult amblyopic mice and old mice. Taken together, these data indicate that mice, as a species, exhibit reliable VPL. Intrinsic signal optical imaging revealed that mice with perceptual training had higher cut-off SFs in primary visual cortex (V1) than those without perceptual training. Moreover, perceptual training induced an increase in the dendritic spine density in layer 2/3 pyramidal neurons of V1. These results indicated functional and structural alterations in V1 during VPL. Overall, our VPL mouse model will provide a platform for investigating the neurobiological basis of VPL.
Allon, Ayala S.; Balaban, Halely; Luria, Roy
2014-01-01
In three experiments we manipulated the resolution of novel complex objects in visual working memory (WM) by changing task demands. Previous studies that investigated the trade-off between quantity and resolution in visual WM yielded mixed results for simple familiar stimuli. We used the contralateral delay activity as an electrophysiological marker to directly track the deployment of visual WM resources while participants performed a change-detection task. Across three experiments we presented the same novel complex items but changed the task demands. In Experiment 1 we induced a medium resolution task by using change trials in which a random polygon changed to a different type of polygon and replicated previous findings showing that novel complex objects are represented with higher resolution relative to simple familiar objects. In Experiment 2 we induced a low resolution task that required distinguishing between polygons and other types of stimulus categories, but we failed to find a corresponding decrease in the resolution of the represented item. Finally, in Experiment 3 we induced a high resolution task that required discriminating between highly similar polygons with somewhat different contours. This time, we observed an increase in the item's resolution. Our findings indicate that the resolution for novel complex objects can be increased but not decreased according to task demands, suggesting that minimal resolution is required in order to maintain these items in visual WM. These findings support studies claiming that capacity and resolution in visual WM reflect different mechanisms. PMID:24734026
ERIC Educational Resources Information Center
Knutson, Ashley R.; Hopkins, Ramona O.; Squire, Larry R.
2013-01-01
We tested proposals that medial temporal lobe (MTL) structures support not just memory but certain kinds of visual perception as well. Patients with hippocampal lesions or larger MTL lesions attempted to identify the unique object among twin pairs of objects that had a high degree of feature overlap. Patients were markedly impaired under the more…
Effects of regular aerobic exercise on visual perceptual learning.
Connell, Charlotte J W; Thompson, Benjamin; Green, Hayden; Sullivan, Rachel K; Gant, Nicholas
2017-12-02
This study investigated the influence of five days of moderate intensity aerobic exercise on the acquisition and consolidation of visual perceptual learning using a motion direction discrimination (MDD) task. The timing of exercise relative to learning was manipulated by administering exercise either before or after perceptual training. Within a matched-subjects design, twenty-seven healthy participants (n = 9 per group) completed five consecutive days of perceptual training on a MDD task under one of three interventions: no exercise, exercise before the MDD task, or exercise after the MDD task. MDD task accuracy improved in all groups over the five-day period, but there was a trend for impaired learning when exercise was performed before visual perceptual training. MDD task accuracy (mean ± SD) increased in exercise before by 4.5 ± 6.5%; exercise after by 11.8 ± 6.4%; and no exercise by 11.3 ± 7.2%. All intervention groups displayed similar MDD threshold reductions for the trained and untrained motion axes after training. These findings suggest that moderate daily exercise does not enhance the rate of visual perceptual learning for an MDD task or the transfer of learning to an untrained motion axis. Furthermore, exercise performed immediately prior to a visual perceptual learning task may impair learning. Further research with larger groups is required in order to better understand these effects. Copyright © 2017 Elsevier Ltd. All rights reserved.
Larcombe, Stephanie J; Kennard, Chris; Bridge, Holly
2018-01-01
Repeated practice of a specific task can improve visual performance, but the neural mechanisms underlying this improvement in performance are not yet well understood. Here we trained healthy participants on a visual motion task daily for 5 days in one visual hemifield. Before and after training, we used functional magnetic resonance imaging (fMRI) to measure the change in neural activity. We also imaged a control group of participants on two occasions who did not receive any task training. While in the MRI scanner, all participants completed the motion task in the trained and untrained visual hemifields separately. Following training, participants improved their ability to discriminate motion direction in the trained hemifield and, to a lesser extent, in the untrained hemifield. The amount of task learning correlated positively with the change in activity in the medial superior temporal (MST) area. MST is the anterior portion of the human motion complex (hMT+). MST changes were localized to the hemisphere contralateral to the region of the visual field, where perceptual training was delivered. Visual areas V2 and V3a showed an increase in activity between the first and second scan in the training group, but this was not correlated with performance. The contralateral anterior hippocampus and bilateral dorsolateral prefrontal cortex (DLPFC) and frontal pole showed changes in neural activity that also correlated with the amount of task learning. These findings emphasize the importance of MST in perceptual learning of a visual motion task. Hum Brain Mapp 39:145-156, 2018. © 2017 Wiley Periodicals, Inc. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
Kyllingsbæk, Søren; Sy, Jocelyn L; Giesbrecht, Barry
2011-05-01
The allocation of visual processing capacity is a key topic in studies and theories of visual attention. The load theory of Lavie (1995) proposes that allocation happens in two steps where processing resources are first allocated to task-relevant stimuli and secondly remaining capacity 'spills over' to task-irrelevant distractors. In contrast, the Theory of Visual Attention (TVA) proposed by Bundesen (1990) assumes that allocation happens in a single step where processing capacity is allocated to all stimuli, both task-relevant and task-irrelevant, in proportion to their relative attentional weight. Here we present data from two partial report experiments where we varied the number and discriminability of the task-irrelevant stimuli (Experiment 1) and perceptual load (Experiment 2). The TVA fitted the data of the two experiments well thus favoring the simple explanation with a single step of capacity allocation. We also show that the effects of varying perceptual load can only be explained by a combined effect of allocation of processing capacity as well as limits in visual working memory. Finally, we link the results to processing capacity understood at the neural level based on the neural theory of visual attention by Bundesen et al. (2005). Copyright © 2010 Elsevier Ltd. All rights reserved.
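TVA's single-step allocation is captured by its rate equation, in which each stimulus receives processing capacity in proportion to its attentional weight. A minimal sketch of that equation follows; the numbers are illustrative, not fitted values from these experiments.

```python
import numpy as np

def tva_rates(etas, weights, beta):
    """Processing rates under TVA's rate equation
    v(x, i) = eta(x, i) * beta_i * w_x / sum_z w_z.

    etas    : sensory evidence eta(x, i) for each item x (one category i).
    weights : attentional weights w_x (task-relevant items get high weights,
              distractors low but non-zero weights).
    beta    : decision bias for the category.
    """
    etas, weights = np.asarray(etas, float), np.asarray(weights, float)
    return etas * beta * weights / weights.sum()

# Illustrative display: four task-relevant items and two distractors.
rates = tva_rates(etas=[50, 50, 50, 50, 50, 50],
                  weights=[1.0, 1.0, 1.0, 1.0, 0.2, 0.2],
                  beta=0.9)
print(rates)   # distractors receive capacity in proportion to their weights
```

In this single-step scheme, distractor processing falls out of the same weight normalisation as target processing, rather than from a separate "spill-over" stage.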
Factors influencing self-reported vision-related activity limitation in the visually impaired.
Tabrett, Daryl R; Latham, Keziah
2011-07-15
The use of patient-reported outcome (PRO) measures to assess self-reported difficulty in visual activities is common in patients with impaired vision. This study determines the visual and psychosocial factors influencing patients' responses to self-report measures, to aid in understanding what is being measured. One hundred visually impaired participants completed the Activity Inventory (AI), which assesses self-reported, vision-related activity limitation (VRAL) in the task domains of reading, mobility, visual information, and visual motor tasks. Participants also completed clinical tests of visual function (distance visual acuity and near reading performance both with and without low vision aids [LVAs], contrast sensitivity, visual fields, and depth discrimination), and questionnaires assessing depressive symptoms, social support, adjustment to visual loss, and personality. Multiple regression analyses identified that an acuity measure (distance or near), and, to a lesser extent, near reading performance without LVAs, visual fields, and contrast sensitivity best explained self-reported VRAL (28%-50% variance explained). Significant psychosocial correlates were depression and adjustment, explaining an additional 6% to 19% unique variance. Dependent on task domain, the parameters assessed explained 59% to 71% of the variance in self-reported VRAL. Visual function, most notably acuity without LVAs, is the best predictor of self-reported VRAL assessed by the AI. Depression and adjustment to visual loss also significantly influence self-reported VRAL, largely independent of the severity of visual loss and most notably in the less vision-specific tasks. The results suggest that rehabilitation strategies addressing depression and adjustment could improve perceived visual disability.
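The hierarchical logic of this analysis, with visual-function predictors entered first and psychosocial predictors adding unique variance, can be illustrated with a small ordinary-least-squares sketch. The predictor names, effect sizes, and simulated data below are hypothetical and only mirror the structure of the reported models.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Hypothetical data: 100 participants, z-scored predictors.
rng = np.random.default_rng(4)
acuity, fields, contrast = rng.standard_normal((3, 100))
depression, adjustment = rng.standard_normal((2, 100))
vral = 0.6 * acuity + 0.2 * fields + 0.3 * depression + rng.standard_normal(100)

visual = np.column_stack([acuity, fields, contrast])
full = np.column_stack([visual, depression, adjustment])

r2_visual = r_squared(visual, vral)
r2_full = r_squared(full, vral)
print(f"visual only: {r2_visual:.2f}, + psychosocial: {r2_full:.2f}, "
      f"unique psychosocial variance: {r2_full - r2_visual:.2f}")
```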
The Mechanisms Underlying the ASD Advantage in Visual Search.
Kaldy, Zsuzsa; Giserman, Ivy; Carter, Alice S; Blaser, Erik
2016-05-01
A number of studies have demonstrated that individuals with autism spectrum disorders (ASDs) are faster or more successful than typically developing control participants at various visual-attentional tasks (for reviews, see Dakin and Frith in Neuron 48:497-507, 2005; Simmons et al. in Vis Res 49:2705-2739, 2009). This "ASD advantage" was first identified in the domain of visual search by Plaisted et al. (J Child Psychol Psychiatry 39:777-783, 1998). Here we survey the findings of visual search studies from the past 15 years that contrasted the performance of individuals with and without ASD. Although there are some minor caveats, the overall consensus is that-across development and a broad range of symptom severity-individuals with ASD reliably outperform controls on visual search. The etiology of the ASD advantage has not been formally specified, but has been commonly attributed to 'enhanced perceptual discrimination', a superior ability to visually discriminate between targets and distractors in such tasks (e.g. O'Riordan in Cognition 77:81-96, 2000). As well, there is considerable evidence for impairments of the attentional network in ASD (for a review, see Keehn et al. in J Child Psychol Psychiatry 37:164-183, 2013). We discuss some recent results from our laboratory that support an attentional, rather than perceptual explanation for the ASD advantage in visual search. We speculate that this new conceptualization may offer a better understanding of some of the behavioral symptoms associated with ASD, such as over-focusing and restricted interests.
Dore, Patricia; Dumani, Ardian; Wyatt, Geddes; Shepherd, Alex J
2018-03-16
This study explored associations between local and global shape perception on coloured backgrounds, colour discrimination, and non-verbal IQ (NVIQ). Five background colours were chosen for the local and global shape tasks that were tailored for the cone-opponent pathways early in the visual system (cardinal colour directions: L-M, loosely, reddish-greenish; and S-(L + M), or tritan colours, loosely, blueish-yellowish; where L, M and S refer to the long, middle and short wavelength sensitive cones). Participants also completed the Farnsworth-Munsell 100-hue test (FM100) to determine whether performance on the local and global shape tasks correlated with colour discrimination overall, or with performance on the L-M and tritan subsets of the FM100 test. Overall performance on the local and global shape tasks did correlate with scores on the FM100 tests, despite the colour of the background being irrelevant to the shape tasks. There were also significantly larger associations between scores for the L-M subset of the FM100 test, compared to the tritan subset, and accuracy on some of the shape tasks on the reddish, greenish and neutral backgrounds. Participants also completed the non-verbal components of the WAIS and the SPM+ version of Raven's progressive matrices, to determine whether performance on the FM100 test, and on the local and global shape tasks, correlated with NVIQ. FM100 scores correlated significantly with both WAIS and SPM+ scores. These results extend previous work that has indicated FM100 performance is not purely a measure of colour discrimination, but also involves aspects of each participant's NVIQ, such as the ability to attend to local and global aspects of the test, part-whole relationships, perceptual organisation and good visuomotor skills. Overall performance on the local and global shape tasks correlated only with the WAIS scores, not the SPM+. These results indicate that those aspects of NVIQ that engage spatial comprehension of local-global relationships and manual manipulation (WAIS), rather than more abstract reasoning (SPM+), are related to performance on the local and global shape tasks. Links are presented between various measures of NVIQ and performance on visual tasks, but they are currently seldom addressed in studies of either shape or colour perception. Further studies to explore these issues are recommended. Copyright © 2018 Elsevier Ltd. All rights reserved.
Kato, Shigeki; Kuramochi, Masahito; Kobayashi, Kenta; Fukabori, Ryoji; Okada, Kana; Uchigashima, Motokazu; Watanabe, Masahiko; Tsutsui, Yuji; Kobayashi, Kazuto
2011-11-23
The dorsal striatum receives converging excitatory inputs from diverse brain regions, including the cerebral cortex and the intralaminar/midline thalamic nuclei, and mediates learning processes contributing to instrumental motor actions. However, the roles of each striatal input pathway in these learning processes remain uncertain. We developed a novel strategy to target specific neural pathways and applied this strategy for studying behavioral roles of the pathway originating from the parafascicular nucleus (PF) and projecting to the dorsolateral striatum. A highly efficient retrograde gene transfer vector encoding the recombinant immunotoxin (IT) receptor was injected into the dorsolateral striatum in mice to express the receptor in neurons innervating the striatum. IT treatment into the PF of the vector-injected animals caused a selective elimination of neurons of the PF-derived thalamostriatal pathway. The elimination of this pathway impaired the response selection accuracy and delayed the motor response in the acquisition of a visual cue-dependent discrimination task. When the pathway elimination was induced after learning acquisition, it disturbed the response accuracy in the task performance with no apparent change in the response time. The elimination did not influence spontaneous locomotion, methamphetamine-induced hyperactivity, and motor skill learning that demand the function of the dorsal striatum. These results demonstrate that thalamostriatal projection derived from the PF plays essential roles in the acquisition and execution of discrimination learning in response to sensory stimulus. The temporal difference in the pathway requirement for visual discrimination suggests a stage-specific role of thalamostriatal pathway in the modulation of response time of learned motor actions.
Kaplan, Johanna S; Erickson, Kristine; Luckenbaugh, David A; Weiland-Fiedler, Petra; Geraci, Marilla; Sahakian, Barbara J; Charney, Dennis; Drevets, Wayne C; Neumeister, Alexander
2006-10-01
Neuropsychological studies have provided evidence for deficits in psychiatric disorders, such as schizophrenia and mood disorders. However, neuropsychological function in Panic Disorder (PD) or PD with a comorbid diagnosis of Major Depressive Disorder (MDD) has not been comprehensively studied. The present study investigated neuropsychological functioning in patients with PD and PD + MDD by focusing on tasks that assess attention, psychomotor speed, executive function, decision-making, and affective processing. Twenty-two unmedicated patients with PD, eleven of whom had a secondary diagnosis of MDD, were compared to twenty-two healthy controls, matched for gender, age, and intelligence on tasks of attention, memory, psychomotor speed, executive function, decision-making, and affective processing from the Cambridge Neuropsychological Test Automated Battery (CANTAB), Cambridge Gamble Task, and Affective Go/No-go Task. Relative to matched healthy controls, patients with PD + MDD displayed an attentional bias toward negatively-valenced verbal stimuli (Affective Go/No-go Task) and longer decision-making latencies (Cambridge Gamble Task). Furthermore, the PD + MDD group committed more errors on a task of memory and visual discrimination compared to their controls. In contrast, no group differences were found for PD patients relative to matched control subjects. The sample size was limited; however, all patients were drug-free at the time of testing. The PD + MDD patients demonstrated deficits on a task involving visual discrimination and working memory, and an attentional bias towards negatively-valenced stimuli. In addition, patients with comorbid depression provided qualitatively different responses in the areas of affective and decision-making processes.
McDonald, Robert J; Jones, Jana; Richards, Blake; Hong, Nancy S
2006-09-01
The objectives of this research were to further delineate the neural circuits subserving proposed memory-based behavioural subsystems in the hippocampal formation. These studies were guided by anatomical evidence showing a topographical organization of the hippocampal formation. Briefly, perpendicular to the medial/lateral entorhinal cortex division there is a second system of parallel circuits that separates the dorsal and ventral hippocampus. Recent work from this laboratory has provided evidence that the hippocampus incidentally encodes a context-specific inhibitory association during acquisition of a visual discrimination task. One question that emerges from this dataset is whether the dorsal or ventral hippocampus makes a unique contribution to this newly described function. Rats with neurotoxic lesions of the dorsal or ventral hippocampus were assessed on the acquisition of the visual discrimination task. Following asymptotic performance they were given reversal training in either the same or a different context from the original training. The results showed that the context-specific inhibition effect is mediated by a circuit that includes the ventral but not the dorsal hippocampus. Results from a control procedure showed that rats with either dorso-lateral striatum damage or dorsal hippocampal lesions were impaired on a tactile/spatial discrimination. Taken together, the results represent a double dissociation of learning and memory function between the ventral and dorsal hippocampus. The formation of an incidental inhibitory association was dependent on ventral but not dorsal hippocampal circuitry, and the opposite dependence was found for the spatial component of a tactile/spatial discrimination.
Pavan, Andrea; Boyce, Matthew; Ghin, Filippo
2016-10-01
Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global moving patterns presented in the fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in terms of distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field, rather than response speed. © The Author(s) 2016.
Hindi Attar, Catherine; Müller, Matthias M
2012-01-01
A number of studies have shown that emotionally arousing stimuli are preferentially processed in the human brain. Whether or not this preference persists under increased perceptual load associated with a task at hand remains an open question. Here we manipulated two possible determinants of the attentional selection process, perceptual load associated with a foreground task and the emotional valence of concurrently presented task-irrelevant distractors. As a direct measure of sustained attentional resource allocation in early visual cortex we used steady-state visual evoked potentials (SSVEPs) elicited by distinct flicker frequencies of task and distractor stimuli. Subjects either performed a detection (low load) or discrimination (high load) task at a centrally presented symbol stream that flickered at 8.6 Hz while task-irrelevant neutral or unpleasant pictures from the International Affective Picture System (IAPS) flickered at a frequency of 12 Hz in the background of the stream. As reflected in target detection rates and SSVEP amplitudes to both task and distractor stimuli, unpleasant relative to neutral background pictures more strongly withdrew processing resources from the foreground task. Importantly, this finding was unaffected by the factor 'load' which turned out to be a weak modulator of attentional processing in human visual cortex.
Timescale- and Sensory Modality-Dependency of the Central Tendency of Time Perception.
Murai, Yuki; Yotsumoto, Yuko
2016-01-01
When individuals are asked to reproduce stimulus intervals of various durations presented in intermixed order, longer intervals are often underestimated and shorter intervals overestimated. This phenomenon may be attributed to the central tendency of time perception, and suggests that our brain optimally encodes a stimulus interval based on current stimulus input and prior knowledge of the distribution of stimulus intervals. Two distinct systems are thought to be recruited in the perception of sub- and supra-second intervals. Sub-second timing is subject to local sensory processing, whereas supra-second timing depends on more centralized mechanisms. To clarify the factors that influence time perception, the present study investigated how both sensory modality and timescale affect the central tendency. In Experiment 1, participants were asked to reproduce sub- or supra-second intervals, defined by visual or auditory stimuli. In the sub-second range, the magnitude of the central tendency was significantly larger for visual intervals compared to auditory intervals, while visual and auditory intervals exhibited a correlated and comparable central tendency in the supra-second range. In Experiment 2, the ability to discriminate sub-second intervals in the reproduction task was controlled across modalities by using an interval discrimination task. Even when the ability to discriminate intervals was controlled, visual intervals exhibited a larger central tendency than auditory intervals in the sub-second range. In addition, the magnitude of the central tendency for visual and auditory sub-second intervals was significantly correlated. These results suggest that a common modality-independent mechanism is responsible for the supra-second central tendency, and that both the modality-dependent and modality-independent components of the timing system contribute to the central tendency in the sub-second range.
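The central tendency follows naturally from Bayesian estimation with Gaussian assumptions: the estimate is a weighted average of the noisy measurement and the prior mean, so noisier measurements (for example, visual sub-second intervals) are pulled more strongly toward the prior. The sketch below illustrates this with arbitrary noise values rather than parameters estimated from these experiments.

```python
import numpy as np

def reproduced_interval(duration, prior_mean, prior_sd, sensory_sd):
    """Posterior-mean duration estimate under a Gaussian prior and
    Gaussian measurement noise: a larger sensory_sd pulls the estimate
    more strongly toward the prior mean (stronger central tendency)."""
    w = prior_sd**2 / (prior_sd**2 + sensory_sd**2)   # weight on the measurement
    return w * duration + (1 - w) * prior_mean

durations = np.array([0.4, 0.6, 0.8])                 # seconds, sub-second range
# Illustrative noise levels: vision assumed noisier than audition here.
print(reproduced_interval(durations, 0.6, 0.15, sensory_sd=0.10))  # "visual"
print(reproduced_interval(durations, 0.6, 0.15, sensory_sd=0.05))  # "auditory"
```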
Bag-of-visual-ngrams for histopathology image classification
NASA Astrophysics Data System (ADS)
López-Monroy, A. Pastor; Montes-y-Gómez, Manuel; Escalante, Hugo Jair; Cruz-Roa, Angel; González, Fabio A.
2013-11-01
This paper describes an extension of the Bag-of-Visual-Words (BoVW) representation for image categorization (IC) of histopathology images. This representation is one of the most widely used approaches in several high-level computer vision tasks. However, the BoVW representation has an important limitation: it disregards spatial information among visual words. This information may be useful to capture discriminative visual patterns in specific computer vision tasks. In order to overcome this problem we propose the use of visual n-grams. N-gram-based representations are very popular in the field of natural language processing (NLP), in particular within text mining and information retrieval. We propose building a codebook of n-grams and then representing images by histograms of visual n-grams. We evaluate our proposal in the challenging task of classifying histopathology images. The novelty of our proposal lies in the fact that we use n-grams as attributes for a classification model (together with visual words, i.e., 1-grams). This is common practice within NLP, although, to the best of our knowledge, this idea has not been explored yet within computer vision. We report experimental results in a database of histopathology images where our proposed method outperforms the traditional BoVW formulation.
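A minimal sketch of the general idea follows, under several assumptions about details the abstract leaves open (the descriptor type, codebook size, and that n-grams are formed from consecutive visual words in a raster ordering of image patches): quantize local descriptors into visual words with k-means, then represent an image by concatenated histograms of visual 1-grams and 2-grams, which a standard classifier can consume.

```python
import numpy as np
from sklearn.cluster import KMeans

def visual_word_grid(patch_descriptors, kmeans):
    """Quantize a raster-ordered grid of patch descriptors into visual words."""
    return kmeans.predict(patch_descriptors)           # shape: (n_patches,)

def ngram_histogram(words, vocab_size, n=2):
    """Normalized histogram of visual n-grams over consecutive words."""
    hist = np.zeros(vocab_size ** n)
    for i in range(len(words) - n + 1):
        idx = 0
        for w in words[i:i + n]:
            idx = idx * vocab_size + w                  # encode the n-gram as an index
        hist[idx] += 1
    return hist / max(hist.sum(), 1)

# Hypothetical pipeline: learn the codebook, then encode one image.
rng = np.random.default_rng(5)
train_descriptors = rng.standard_normal((5000, 64))    # e.g., SIFT-like features
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(train_descriptors)

image_descriptors = rng.standard_normal((400, 64))     # a 20 x 20 grid of patches
words = visual_word_grid(image_descriptors, kmeans)
features = np.concatenate([ngram_histogram(words, 50, n=1),   # visual words
                           ngram_histogram(words, 50, n=2)])  # visual bigrams
```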
Hogarth, Lee; Dickinson, Anthony; Duka, Theodora
2003-08-01
Incentive salience theory states that acquired bias in selective attention for stimuli associated with tobacco-smoke reinforcement controls the selective performance of tobacco-seeking and tobacco-taking behaviour. To support this theory, we assessed whether a stimulus that had acquired control of a tobacco-seeking response in a discrimination procedure would command the focus of visual attention in a subsequent test phase. Smokers received discrimination training in which an instrumental key-press response was followed by tobacco-smoke reinforcement when one visual discriminative stimulus (S+) was present, but not when another stimulus (S-) was present. The skin conductance response to the S+ and S- assessed whether Pavlovian conditioning to the S+ had taken place. In a subsequent test phase, the S+ and S- were presented in the dot-probe task and the allocation of the focus of visual attention to these stimuli was measured. Participants learned to perform the instrumental tobacco-seeking response selectively in the presence of the S+ relative to the S-, and showed a greater skin conductance response to the S+ than the S-. In the subsequent test phase, participants allocated the focus of visual attention to the S+ in preference to the S-. Correlation analysis revealed that the visual attentional bias for the S+ was positively associated with the number of times the S+ had been paired with tobacco-smoke in training, with the skin conductance response to the S+, and with subjective craving to smoke. Furthermore, increased exposure to tobacco-smoke in the natural environment was associated with reduced discrimination learning. These data demonstrate that discriminative stimuli that signal that tobacco-smoke reinforcement is available acquire the capacity to command selective attention and to elicit instrumental tobacco-seeking behaviour.
Audio-visual temporal perception in children with restored hearing.
Gori, Monica; Chilosi, Anna; Forli, Francesca; Burr, David
2017-05-01
It is not clear how audio-visual temporal perception develops in children with restored hearing. In this study we measured temporal discrimination thresholds with an audio-visual temporal bisection task in 9 deaf children with restored audition, and 22 typically hearing children. In typically hearing children, audition was more precise than vision, with no gain in multisensory conditions (as previously reported in Gori et al. (2012b)). However, deaf children with restored audition showed similar auditory and visual thresholds and some evidence of gain in audio-visual temporal multisensory conditions. Interestingly, we found a strong correlation between auditory weighting of multisensory signals and quality of language: patients who gave more weight to audition had better language skills. Similarly, auditory thresholds for the temporal bisection task were a good predictor of language skills. This result supports the idea that temporal auditory processing is associated with language development. Copyright © 2017. Published by Elsevier Ltd.
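The "gain" and "auditory weighting" referred to here are usually evaluated against the standard maximum-likelihood integration model, in which each modality is weighted by its inverse variance and the predicted bimodal threshold is at least as good as the better unimodal one. The sketch below computes those predictions from assumed unimodal thresholds; the numbers are illustrative only.

```python
import numpy as np

def mle_prediction(sigma_a, sigma_v):
    """Maximum-likelihood integration of auditory and visual temporal estimates.

    Returns (auditory weight, predicted bimodal discrimination threshold),
    using inverse-variance weights. If cues are combined optimally, the
    bimodal threshold is no worse than the better unimodal threshold."""
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)
    sigma_av = np.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))
    return w_a, sigma_av

# Illustrative unimodal bisection thresholds (ms) for one child.
w_a, sigma_av = mle_prediction(sigma_a=80.0, sigma_v=110.0)
print(f"auditory weight {w_a:.2f}, predicted audio-visual threshold {sigma_av:.1f} ms")
```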
Basic visual function and cortical thickness patterns in posterior cortical atrophy.
Lehmann, Manja; Barnes, Josephine; Ridgway, Gerard R; Wattam-Bell, John; Warrington, Elizabeth K; Fox, Nick C; Crutch, Sebastian J
2011-09-01
Posterior cortical atrophy (PCA) is characterized by a progressive decline in higher-visual object and space processing, but the extent to which these deficits are underpinned by basic visual impairments is unknown. This study aimed to assess basic and higher-order visual deficits in 21 PCA patients. Basic visual skills including form detection and discrimination, color discrimination, motion coherence, and point localization were measured, and associations and dissociations between specific basic visual functions and measures of higher-order object and space perception were identified. All participants showed impairment in at least one aspect of basic visual processing. However, a number of dissociations between basic visual skills indicated a heterogeneous pattern of visual impairment among the PCA patients. Furthermore, basic visual impairments were associated with particular higher-order object and space perception deficits, but not with nonvisual parietal tasks, suggesting the specific involvement of visual networks in PCA. Cortical thickness analysis revealed trends toward lower cortical thickness in occipitotemporal (ventral) and occipitoparietal (dorsal) regions in patients with visuoperceptual and visuospatial deficits, respectively. However, there was also a lot of overlap in their patterns of cortical thinning. These findings suggest that different presentations of PCA represent points in a continuum of phenotypical variation.
Nieuwenstein, Mark; Wyble, Brad
2014-06-01
While studies on visual memory commonly assume that the consolidation of a visual stimulus into working memory is interrupted by a trailing mask, studies on dual-task interference suggest that the consolidation of a stimulus can continue for several hundred milliseconds after a mask. As a result, estimates of the time course of working memory consolidation differ more than an order of magnitude. Here, we contrasted these opposing views by examining if and for how long the processing of a masked display of visual stimuli can be disturbed by a trailing 2-alternative forced choice task (2-AFC; a color discrimination task or a visual or auditory parity judgment task). The results showed that the presence of the 2-AFC task produced a pronounced retroactive interference effect that dissipated across stimulus onset asynchronies of 250-1,000 ms, indicating that the processing elicited by the 2-AFC task interfered with the gradual consolidation of the earlier shown stimuli. Furthermore, this interference effect occurred regardless of whether the to-be-remembered stimuli comprised a string of letters or an unfamiliar complex visual shape, and it occurred regardless of whether these stimuli were masked. Conversely, the interference effect was reduced when the memory load for the 1st task was reduced, or when the 2nd task was a color detection task that did not require decision making. Taken together, these findings show that the formation of a durable and consciously accessible working memory trace for a briefly shown visual stimulus can be disturbed by a trailing 2-AFC task for up to several hundred milliseconds after the stimulus has been masked. By implication, the current findings challenge the common view that working memory consolidation involves an immutable central processing bottleneck, and they also make clear that consolidation does not stop when a stimulus is masked. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Humphries, Colin; Desai, Rutvik H.; Seidenberg, Mark S.; Osmon, David C.; Stengel, Ben C.; Binder, Jeffrey R.
2013-01-01
Although the left posterior occipitotemporal sulcus (pOTS) has been called a visual word form area, debate persists over the selectivity of this region for reading relative to general nonorthographic visual object processing. We used high-resolution functional magnetic resonance imaging to study left pOTS responses to combinatorial orthographic and object shape information. Participants performed naming and visual discrimination tasks designed to encourage or suppress phonological encoding. During the naming task, all participants showed subregions within left pOTS that were more sensitive to combinatorial orthographic information than to object information. This difference disappeared, however, when phonological processing demands were removed. Responses were stronger to pseudowords than to words, but this effect also disappeared when phonological processing demands were removed. Subregions within the left pOTS are preferentially activated when visual input must be mapped to a phonological representation (i.e., a name) and particularly when component parts of the visual input must be mapped to corresponding phonological elements (consonant or vowel phonemes). Results indicate a specialized role for subregions within the left pOTS in the isomorphic mapping of familiar combinatorial visual patterns to phonological forms. This process distinguishes reading from picture naming and accounts for a wide range of previously reported stimulus and task effects in left pOTS. PMID:22505661
How Attention Affects Spatial Resolution
Carrasco, Marisa; Barbot, Antoine
2015-01-01
We summarize and discuss a series of psychophysical studies on the effects of spatial covert attention on spatial resolution, our ability to discriminate fine patterns. Heightened resolution is beneficial in most, but not all, visual tasks. We show how endogenous attention (voluntary, goal driven) and exogenous attention (involuntary, stimulus driven) affect performance on a variety of tasks mediated by spatial resolution, such as visual search, crowding, acuity, and texture segmentation. Exogenous attention is an automatic mechanism that increases resolution regardless of whether it helps or hinders performance. In contrast, endogenous attention flexibly adjusts resolution to optimize performance according to task demands. We illustrate how psychophysical studies can reveal the underlying mechanisms of these effects and allow us to draw linking hypotheses with known neurophysiological effects of attention. PMID:25948640
Atypical Face Perception in Autism: A Point of View?
Morin, Karine; Guy, Jacalyn; Habak, Claudine; Wilson, Hugh R; Pagani, Linda; Mottron, Laurent; Bertone, Armando
2015-10-01
Face perception is the most commonly used visual metric of social perception in autism. However, when found to be atypical, the origin of face perception differences in autism is contentious. One hypothesis proposes that a locally oriented visual analysis, characteristic of individuals with autism, ultimately affects performance on face tasks where a global analysis is optimal. The objective of this study was to evaluate this hypothesis by assessing face identity discrimination with synthetic faces presented with and without changes in viewpoint, with the former condition minimizing access to local face attributes used for identity discrimination. Twenty-eight individuals with autism and 30 neurotypical participants performed a face identity discrimination task. Stimuli were synthetic faces extracted from traditional face photographs in both front and 20° side viewpoints, digitized from 37 points to provide a continuous measure of facial geometry. Face identity discrimination thresholds were obtained using a two-alternative, temporal forced choice match-to-sample paradigm. Analyses revealed an interaction between group and condition, with group differences found only for the viewpoint change condition, where performance in the autism group was decreased compared to that of neurotypical participants. The selective decrease in performance for the viewpoint change condition suggests that face identity discrimination in autism is more difficult when access to local cues is minimized, and/or when dependence on integrative analysis is increased. These results lend support to a perceptual contribution of atypical face perception in autism. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
Stephan, Denise Nadine; Koch, Iring
2016-11-01
The present study was aimed at examining modality-specific influences in task switching. To this end, participants switched either between modality compatible tasks (auditory-vocal and visual-manual) or between modality incompatible spatial discrimination tasks (auditory-manual and visual-vocal). In addition, auditory and visual stimuli were presented simultaneously (i.e., bimodally) in each trial, so that selective attention was required to process the task-relevant stimulus. The inclusion of bimodal stimuli enabled us to assess congruence effects as a converging measure of increased between-task interference. The tasks followed a pre-instructed sequence of double alternations (AABB), so that no explicit task cues were required. The results show that switching between two modality incompatible tasks increases both switch costs and congruence effects compared to switching between two modality compatible tasks. The finding of increased congruence effects in modality incompatible tasks supports our explanation in terms of ideomotor "backward" linkages between anticipated response effects and the stimuli that called for this response in the first place. According to this generalized ideomotor idea, the modality match between response effects and stimuli would prime selection of a response in the compatible modality. This priming would make it harder to ignore the competing stimulus and hence would increase the congruence effect. Moreover, performance would be hindered when switching between modality incompatible tasks and facilitated when switching between modality compatible tasks.
Goodale, M A; Murison, R C
1975-05-02
The effects of bilateral removal of the superior colliculus or visual cortex on visually guided locomotor movements in rats performing a brightness discrimination task were investigated directly with the use of cine film. Rats with collicular lesions showed patterns of locomotion comparable to or more efficient than those of normal animals when approaching one of 5 small doors located at one end of a large open area. In contrast, animals with large but incomplete lesions of visual cortex were distinctly impaired in their visual control of approach responses to the same stimuli. On the other hand, rats with collicular damage showed no orienting reflex or evidence of distraction in the same task when novel visual or auditory stimuli were presented. However, both normal and visual-decorticate rats showed various components of the orienting reflex and disturbance in task performance when the same novel stimuli were presented. These results suggest that although the superior colliculus does not appear to be essential to the visual control of locomotor orientation, this midbrain structure might participate in the mediation of shifts in visual fixation and attention. Visual cortex, while contributing to visuospatial guidance of locomotor movements, might not play a significant role in the control and integration of the orienting reflex.
de Rivera, Christina; Boutet, Isabelle; Zicker, Steven C; Milgram, Norton W
2005-03-01
Tasks requiring visual discrimination are commonly used in assessment of canine cognitive function. However, little is known about canine visual processing, and virtually nothing is known about the effects of age on canine visual function. This study describes a novel behavioural method developed to assess one aspect of canine visual function, namely contrast sensitivity. Four age groups (young, middle aged, old, and senior) were studied. We also included a group of middle aged to old animals that had been maintained for at least 4 years on a specially formulated food containing a broad spectrum of antioxidants and mitochondrial cofactors. Performance of this group was compared with a group in the same age range maintained on a control diet. In the first phase, all animals were trained to discriminate between two high contrast shapes. In the second phase, contrast was progressively reduced by increasing the luminance of the shapes. Performance decreased as a function of age, but the differences did not achieve statistical significance, possibly because of a small sample size in the young group. All age groups were able to acquire the initial discrimination, although the two older age groups showed slower learning. Errors increased with decreasing contrast with the maximal number of errors for the 1% contrast shape. Also, all animals on the antioxidant diet learned the task and had significantly fewer errors at the high contrast compared with the animals on the control diet. The initial results suggest that contrast sensitivity deteriorates with age in the canine while form perception is largely unaffected by age.
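The study lowers contrast by raising the luminance of the shapes toward that of the background, but the abstract does not state which contrast definition was used. Assuming the common Michelson convention (an assumption, not a detail given above), the percentage contrast of a shape against its background would be

    C = \frac{L_{\max} - L_{\min}}{L_{\max} + L_{\min}} \times 100\%,

so the 1% condition corresponds to shape and background luminances that are nearly equal.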
Neonatal Visual Information Processing in Cocaine-Exposed and Non-Exposed Infants
Singer, Lynn T.; Arendt, Robert; Fagan, Joseph; Minnes, Sonia; Salvator, Ann; Bolek, Tina; Becker, Michael
2014-01-01
This study investigated early neonatal visual preferences in 267 polydrug-exposed neonates (131 cocaine-exposed and 136 non-cocaine exposed) whose drug exposure was documented through interviews and urine and meconium drug screens. Infants were given four visual recognition memory tasks comparing looking time to familiarized stimuli of lattices and rectangular shapes to novel stimuli of a schematic face and curved hourglass and bull's eye forms. Cocaine-exposed infants performed more poorly, after consideration of confounding factors, with greater severity of cocaine exposure related to lower novelty scores on both self-report and biologic measures of exposure. Findings support theories which link prenatal cocaine exposure to deficits in information processing entailing attentional and arousal organizational systems. Neonatal visual discrimination and attention tasks should be further explored as potentially sensitive behavioral indicators of teratologic effects. PMID:25717215
Delhey, Kaspar; Hall, Michelle; Kingma, Sjouke A.; Peters, Anne
2013-01-01
Colour signals are expected to match visual sensitivities of intended receivers. In birds, evolutionary shifts from violet-sensitive (V-type) to ultraviolet-sensitive (U-type) vision have been linked to increased prevalence of colours rich in shortwave reflectance (ultraviolet/blue), presumably due to better perception of such colours by U-type vision. Here we provide the first test of this widespread idea using fairy-wrens and allies (Family Maluridae) as a model, a family where shifts in visual sensitivities from V- to U-type eyes are associated with male nuptial plumage rich in ultraviolet/blue colours. Using psychophysical visual models, we compared the performance of both types of visual systems at two tasks: (i) detecting contrast between male plumage colours and natural backgrounds, and (ii) perceiving intraspecific chromatic variation in male plumage. While U-type outperforms V-type vision at both tasks, the crucial test here is whether U-type vision performs better at detecting and discriminating ultraviolet/blue colours when compared with other colours. This was true for detecting contrast between plumage colours and natural backgrounds (i), but not for discriminating intraspecific variability (ii). Our data indicate that selection to maximize conspicuousness to conspecifics may have led to the correlation between ultraviolet/blue colours and U-type vision in this clade of birds. PMID:23118438
Leach, P T; Crawley, J N
2017-12-20
Mutant mouse models of neurodevelopmental disorders with intellectual disabilities provide useful translational research tools, especially in cases where robust cognitive deficits are reproducibly detected. However, motor, sensory and/or health issues consequent to the mutation may introduce artifacts that preclude testing in some standard cognitive assays. Touchscreen learning and memory tasks in small operant chambers have the potential to circumvent these confounds. Here we use touchscreen visual discrimination learning to evaluate performance in the maternally derived Ube3a mouse model of Angelman syndrome, the Ts65Dn trisomy mouse model of Down syndrome, and the Mecp2 Bird mouse model of Rett syndrome. Significant deficits in acquisition of a 2-choice visual discrimination task were detected in both Ube3a and Ts65Dn mice. Procedural control measures showed no genotype differences during pretraining phases or during acquisition. Mecp2 males did not survive long enough for touchscreen training, consistent with previous reports. Most Mecp2 females failed on pretraining criteria. Significant impairments on Morris water maze spatial learning were detected in both Ube3a and Ts65Dn, replicating previous findings. Abnormalities on rotarod in Ube3a, and on open field in Ts65Dn, replicating previous findings, may have contributed to the observed acquisition deficits and swim speed abnormalities during water maze performance. In contrast, these motor phenotypes do not appear to have affected touchscreen procedural abilities during pretraining or visual discrimination training. Our findings of slower touchscreen learning in 2 mouse models of neurodevelopmental disorders with intellectual disabilities indicate that operant tasks offer promising outcome measures for the preclinical discovery of effective pharmacological therapeutics. © 2017 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.
Visual Search Performance in Patients with Vision Impairment: A Systematic Review.
Senger, Cassia; Margarido, Maria Rita Rodrigues Alves; De Moraes, Carlos Gustavo; De Fendi, Ligia Issa; Messias, André; Paula, Jayter Silva
2017-11-01
Patients with visual impairment are constantly facing challenges to achieve an independent and productive life, which depends upon both a good visual discrimination and search capacities. Given that visual search is a critical skill for several daily tasks and could be used as an index of the overall visual function, we investigated the relationship between vision impairment and visual search performance. A comprehensive search was undertaken using electronic PubMed, EMBASE, LILACS, and Cochrane databases from January 1980 to December 2016, applying the following terms: "visual search", "visual search performance", "visual impairment", "visual exploration", "visual field", "hemianopia", "search time", "vision lost", "visual loss", and "low vision". Two hundred seventy six studies from 12,059 electronic database files were selected, and 40 of them were included in this review. Studies included participants of all ages, both sexes, and the sample sizes ranged from 5 to 199 participants. Visual impairment was associated with worse visual search performance in several ophthalmologic conditions, which were either artificially induced, or related to specific eye and neurological diseases. This systematic review details all the described circumstances interfering with visual search tasks, highlights the need for developing technical standards, and outlines patterns for diagnosis and therapy using visual search capabilities.
Bublitz, Alexander; Weinhold, Severine R.; Strobel, Sophia; Dehnhardt, Guido; Hanke, Frederike D.
2017-01-01
Octopuses (Octopus vulgaris) are generally considered to possess extraordinary cognitive abilities including the ability to successfully perform in a serial reversal learning task. During reversal learning, an animal is presented with a discrimination problem and after reaching a learning criterion, the signs of the stimuli are reversed: the former positive becomes the negative stimulus and vice versa. If an animal improves its performance over reversals, it is ascribed advanced cognitive abilities. Reversal learning has been tested in octopus in a number of studies. However, the experimental procedures adopted in these studies involved pre-training on the new positive stimulus after a reversal, strong negative reinforcement or might have enabled secondary cueing by the experimenter. These procedures could have all affected the outcome of reversal learning. Thus, in this study, serial visual reversal learning was revisited in octopus. We trained four common octopuses (O. vulgaris) to discriminate between 2-dimensional stimuli presented on a monitor in a simultaneous visual discrimination task and reversed the signs of the stimuli each time the animals reached the learning criterion of ≥80% in two consecutive sessions. The animals were trained using operant conditioning techniques including a secondary reinforcer, a rod that was pushed up and down the feeding tube, which signaled the correctness of a response and preceded the subsequent primary reinforcement of food. The experimental protocol did not involve negative reinforcement. One animal completed four reversals and showed progressive improvement, i.e., it decreased its errors to criterion the more reversals it experienced. This animal developed a generalized response strategy. In contrast, another animal completed only one reversal, whereas two animals did not learn to reverse during the first reversal. In conclusion, some octopus individuals can learn to reverse in a visual task demonstrating behavioral flexibility even with a refined methodology. PMID:28223940
Surround-Masking Affects Visual Estimation Ability
Jastrzebski, Nicola R.; Hugrass, Laila E.; Crewther, Sheila G.; Crewther, David P.
2017-01-01
Visual estimation of numerosity involves the discrimination of magnitude between two distributions or perceptual sets that vary in number of elements. How performance on such estimation depends on peripheral sensory stimulation is unclear, even in typically developing adults. Here, we varied the central and surround contrast of stimuli that comprised a visual estimation task in order to determine whether mechanisms involved with the removal of unessential visual input functionally contributes toward number acuity. The visual estimation judgments of typically developed adults were significantly impaired for high but not low contrast surround stimulus conditions. The center and surround contrasts of the stimuli also differentially affected the accuracy of numerosity estimation depending on whether fewer or more dots were presented. Remarkably, observers demonstrated the highest mean percentage accuracy across stimulus conditions in the discrimination of more elements when the surround contrast was low and the background luminance of the central region containing the elements was dark (black center). Conversely, accuracy was severely impaired during the discrimination of fewer elements when the surround contrast was high and the background luminance of the central region was mid level (gray center). These findings suggest that estimation ability is functionally related to the quality of low-order filtration of unessential visual information. These surround masking results may help understanding of the poor visual estimation ability commonly observed in developmental dyscalculia. PMID:28360845
Self-motion Perception Training: Thresholds Improve in the Light but not in the Dark
Hartmann, Matthias; Furrer, Sarah; Herzog, Michael H.; Merfeld, Daniel M.; Mast, Fred W.
2014-01-01
We investigated perceptual learning in self-motion perception. Blindfolded participants were displaced leftward or rightward by means of a motion platform, and asked to indicate the direction of motion. A total of eleven participants underwent 3360 practice trials, distributed over twelve (Experiment 1) or six days (Experiment 2). We found no improvement in motion discrimination in either experiment. These results are surprising since perceptual learning has been demonstrated for visual, auditory, and somatosensory discrimination. Improvements in the same task were found when visual input was provided (Experiment 3). The multisensory nature of vestibular information is discussed as a possible explanation of the absence of perceptual learning in darkness. PMID:23392475
Real-Time Strategy Video Game Experience and Visual Perceptual Learning.
Kim, Yong-Hwan; Kang, Dong-Wha; Kim, Dongho; Kim, Hye-Jin; Sasaki, Yuka; Watanabe, Takeo
2015-07-22
Visual perceptual learning (VPL) is defined as long-term improvement in performance on a visual-perception task after visual experiences or training. Early studies have found that VPL is highly specific for the trained feature and location, suggesting that VPL is associated with changes in the early visual cortex. However, the generality of visual skills enhancement attributable to action video-game experience suggests that VPL can result from improvement in higher cognitive skills. If so, experience in real-time strategy (RTS) video-game play, which may heavily involve cognitive skills, may also facilitate VPL. To test this hypothesis, we compared VPL between RTS video-game players (VGPs) and non-VGPs (NVGPs) and elucidated underlying structural and functional neural mechanisms. Healthy young human subjects underwent six training sessions on a texture discrimination task. Diffusion-tensor and functional magnetic resonance imaging were performed before and after training. VGPs performed better than NVGPs in the early phase of training. White-matter connectivity between the right external capsule and visual cortex and neuronal activity in the right inferior frontal gyrus (IFG) and anterior cingulate cortex (ACC) were greater in VGPs than NVGPs and were significantly correlated with RTS video-game experience. In both VGPs and NVGPs, there was task-related neuronal activity in the right IFG, ACC, and striatum, which was strengthened after training. These results indicate that RTS video-game experience, associated with changes in higher-order cognitive functions and connectivity between visual and cognitive areas, facilitates VPL in early phases of training. The results support the hypothesis that VPL is not confined to visual areas alone. Significance statement: Although early studies found that visual perceptual learning (VPL) is associated with involvement of the visual cortex, generality of visual skills enhancement by action video-game experience suggests that higher-order cognition may be involved in VPL. If so, real-time strategy (RTS) video-game experience may facilitate VPL as a result of heavy involvement of cognitive skills. Here, we compared VPL between RTS video-game players (VGPs) and non-VGPs (NVGPs) and investigated the underlying neural mechanisms. VGPs showed better performance in the early phase of training on the texture discrimination task and greater level of neuronal activity in cognitive areas and structural connectivity between visual and cognitive areas than NVGPs. These results support the hypothesis that VPL can occur beyond the visual cortex. Copyright © 2015 the authors.
Visual selective attention biases contribute to the other-race effect among 9-month-old infants.
Markant, Julie; Oakes, Lisa M; Amso, Dima
2016-04-01
During the first year of life, infants maintain their ability to discriminate faces from their own race but become less able to differentiate other-race faces. Though this is likely due to daily experience with own-race faces, the mechanisms linking repeated exposure to optimal face processing remain unclear. One possibility is that frequent experience with own-race faces generates a selective attention bias to these faces. Selective attention elicits enhancement of attended information and suppression of distraction to improve visual processing of attended objects. Thus attention biases to own-race faces may boost processing and discrimination of these faces relative to other-race faces. We used a spatial cueing task to bias attention to own- or other-race faces among Caucasian 9-month-old infants. Infants discriminated faces in the focus of the attention bias, regardless of race, indicating that infants remained sensitive to differences among other-race faces. Instead, efficacy of face discrimination reflected the extent of attention engagement. © 2015 Wiley Periodicals, Inc.
The effect of perceptual load on tactile spatial attention: Evidence from event-related potentials.
Gherri, Elena; Berreby, Fiona
2017-10-15
To investigate whether tactile spatial attention is modulated by perceptual load, behavioural and electrophysiological measures were recorded during two spatial cuing tasks in which the difficulty of the target/non-target discrimination was varied (High and Low load tasks). Moreover, to study whether attentional modulations by load are sensitive to the availability of visual information, the High and Low load tasks were carried out under both illuminated and darkness conditions. ERPs to cued and uncued non-targets were compared as a function of task (High vs. Low load) and illumination condition (Light vs. Darkness). Results revealed that the locus of tactile spatial attention was determined by a complex interaction between perceptual load and illumination conditions during sensory-specific stages of processing. In the Darkness, earlier effects of attention were present in the High load than in the Low load task, while no difference between tasks emerged in the Light. By contrast, increased load was associated with stronger attention effects during later post-perceptual processing stages regardless of illumination conditions. These findings demonstrate that ERP correlates of tactile spatial attention are strongly affected by the perceptual load of the target/non-target discrimination. However, differences between illumination conditions show that the impact of load on tactile attention depends on the presence of visual information. Perceptual load is one of the many factors that contribute to determining the effects of spatial selectivity in touch. Copyright © 2017 Elsevier B.V. All rights reserved.
Visual Network Asymmetry and Default Mode Network Function in ADHD: An fMRI Study
Hale, T. Sigi; Kane, Andrea M.; Kaminsky, Olivia; Tung, Kelly L.; Wiley, Joshua F.; McGough, James J.; Loo, Sandra K.; Kaplan, Jonas T.
2014-01-01
Background: A growing body of research has identified abnormal visual information processing in attention-deficit hyperactivity disorder (ADHD). In particular, slow processing speed and increased reliance on visuo-perceptual strategies have become evident. Objective: The current study used recently developed fMRI methods to replicate and further examine abnormal rightward biased visual information processing in ADHD and to further characterize the nature of this effect; we tested its association with several large-scale distributed network systems. Method: We examined fMRI BOLD response during letter and location judgment tasks, and directly assessed visual network asymmetry and its association with large-scale networks using both a voxelwise and an averaged signal approach. Results: Initial within-group analyses revealed a pattern of left-lateralized visual cortical activity in controls but right-lateralized visual cortical activity in ADHD children. Direct analyses of visual network asymmetry confirmed atypical rightward bias in ADHD children compared to controls. This ADHD characteristic was atypically associated with reduced activation across several extra-visual networks, including the default mode network (DMN). We also found atypical associations between DMN activation and ADHD subjects’ inattentive symptoms and task performance. Conclusion: The current study demonstrated rightward VNA in ADHD during a simple letter discrimination task. This result adds an important novel consideration to the growing literature identifying abnormal visual processing in ADHD. We postulate that this characteristic reflects greater perceptual engagement of task-extraneous content, and that it may be a basic feature of less efficient top-down task-directed control over visual processing. We additionally argue that abnormal DMN function may contribute to this characteristic. PMID:25076915
The effect of acute sleep deprivation on visual evoked potentials in professional drivers.
Jackson, Melinda L; Croft, Rodney J; Owens, Katherine; Pierce, Robert J; Kennedy, Gerard A; Crewther, David; Howard, Mark E
2008-09-01
Previous studies have demonstrated that as little as 18 hours of sleep deprivation can cause deleterious effects on performance. It has also been suggested that sleep deprivation can cause a "tunnel-vision" effect, in which attention is restricted to the center of the visual field. The current study aimed to replicate these behavioral effects and to examine the electrophysiological underpinnings of these changes. Repeated-measures experimental study. University laboratory. Nineteen professional drivers (1 woman; mean age = 45.3 +/- 9.1 years). Two experimental sessions were performed; one following 27 hours of sleep deprivation and the other following a normal night of sleep, with control for circadian effects. A tunnel-vision task (central versus peripheral visual discrimination) and a standard checkerboard-viewing task were performed while 32-channel EEG was recorded. For the tunnel-vision task, sleep deprivation resulted in an overall slowing of reaction times and increased errors of omission for both peripheral and foveal stimuli (P < 0.05). These changes were related to reduced P300 amplitude (indexing cognitive processing) but not measures of early visual processing. No evidence was found for an interaction effect between sleep deprivation and visual-field position, either in terms of behavior or electrophysiological responses. Slower processing of the sustained parvocellular visual pathway was demonstrated. These findings suggest that performance deficits on visual tasks during sleep deprivation are due to higher cognitive processes rather than early visual processing. Sleep deprivation may differentially impair processing of more-detailed visual information. Features of the study design (eg, visual angle, duration of sleep deprivation) may influence whether peripheral visual-field neglect occurs.
The Primary Visual Cortex Is Differentially Modulated by Stimulus-Driven and Top-Down Attention
Bekisz, Marek; Bogdan, Wojciech; Ghazaryan, Anaida; Waleszczyk, Wioletta J.; Kublik, Ewa; Wróbel, Andrzej
2016-01-01
Selective attention can be focused either volitionally, by top-down signals derived from task demands, or automatically, by bottom-up signals from salient stimuli. Because the brain mechanisms that underlie these two attention processes are poorly understood, we recorded local field potentials (LFPs) from primary visual cortical areas of cats as they performed stimulus-driven and anticipatory discrimination tasks. Consistent with our previous observations, in both tasks, we found enhanced beta activity, which we have postulated may serve as an attention carrier. We characterized the functional organization of task-related beta activity by (i) cortical responses (EPs) evoked by electrical stimulation of the optic chiasm and (ii) intracortical LFP correlations. During the anticipatory task, peripheral stimulation that was preceded by high-amplitude beta oscillations evoked large-amplitude EPs compared with EPs that followed low-amplitude beta. In contrast, during the stimulus-driven task, cortical EPs preceded by high-amplitude beta oscillations were, on average, smaller than those preceded by low-amplitude beta. Analysis of the correlations between the different recording sites revealed that beta activation maps were heterogeneous during the bottom-up task and homogeneous for the top-down task. We conclude that bottom-up attention activates cortical visual areas in a mosaic-like pattern, whereas top-down attentional modulation results in spatially homogeneous excitation. PMID:26730705
Makowiecki, Kalina; Hammond, Geoff; Rodger, Jennifer
2012-01-01
In behavioural experiments, motivation to learn can be achieved using food rewards as positive reinforcement in food-restricted animals. Previous studies reduce animal weights to 80–90% of free-feeding body weight as the criterion for food restriction. However, effects of different degrees of food restriction on task performance have not been assessed. We compared learning task performance in mice food-restricted to 80 or 90% body weight (BW). We used adult wildtype (WT; C57Bl/6j) and knockout (ephrin-A2−/−) mice, previously shown to have a reverse learning deficit. Mice were trained in a two-choice visual discrimination task with food reward as positive reinforcement. When mice reached criterion for one visual stimulus (80% correct in three consecutive 10 trial sets) they began the reverse learning phase, where the rewarded stimulus was switched to the previously incorrect stimulus. For the initial learning and reverse phase of the task, mice at 90%BW took almost twice as many trials to reach criterion as mice at 80%BW. Furthermore, WT 80 and 90%BW groups significantly differed in percentage correct responses and learning strategy in the reverse learning phase, whereas no differences between weight restriction groups were observed in ephrin-A2−/− mice. Most importantly, genotype-specific differences in reverse learning strategy were only detected in the 80%BW groups. Our results indicate that increased food restriction not only results in better performance and a shorter training period, but may also be necessary for revealing behavioural differences between experimental groups. This has important ethical and animal welfare implications when deciding extent of diet restriction in behavioural studies. PMID:23144936
Pre-cooling moderately enhances visual discrimination during exercise in the heat.
Clarke, Neil D; Duncan, Michael J; Smith, Mike; Hankey, Joanne
2017-02-01
Pre-cooling has been reported to attenuate the increase in core temperature, although information regarding the effects of pre-cooling on cognitive function is limited. The present study investigated the effects of pre-cooling on visual discrimination during exercise in the heat. Eight male recreational runners completed 90 min of treadmill running at 65% VO2max in the heat [32.4 ± 0.9°C and 46.8 ± 6.4% relative humidity (r.h.)] on two occasions in a randomised, counterbalanced crossover design. Participants underwent pre-cooling by means of water immersion (20.3 ± 0.3°C) for 60 min or remained seated for 60 min in a laboratory (20.2 ± 1.7°C and 60.2 ± 2.5% r.h.). Rectal temperature (Trec) and mean skin temperature (Tskin) were monitored throughout the protocol. At 30-min intervals participants performed a visual discrimination task. Following pre-cooling, Trec (P = 0.040; ηp² = 0.48) was moderately lower at 0 and 30 min and Tskin (P = 0.003; ηp² = 0.75) lower to a large extent at 0 min of exercise. Visual discrimination was moderately more accurate at 60 and 90 min of exercise following pre-cooling (P = 0.067; ηp² = 0.40). Pre-cooling resulted in small improvements in visual discrimination sensitivity (F(1,7) = 2.188; P = 0.183; ηp² = 0.24), criterion (F(1,7) = 1.298; P = 0.292; ηp² = 0.16) and bias (F(1,7) = 2.202; P = 0.181; ηp² = 0.24). Pre-cooling moderately improves visual discrimination accuracy during exercise in the heat.
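The sensitivity, criterion, and bias measures above are standard signal detection quantities derived from the discrimination responses. As a hedged illustration only (the abstract does not give the exact computation, so a yes/no design and equal-variance Gaussian distributions are assumed), a minimal Python sketch:

    from scipy.stats import norm

    def sdt_measures(hits, misses, false_alarms, correct_rejections):
        """Estimate d' (sensitivity), criterion c, and beta (bias) from trial counts.
        A log-linear correction keeps rates of 0 or 1 from producing infinities."""
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
        d_prime = z_hit - z_fa                    # sensitivity
        criterion = -0.5 * (z_hit + z_fa)         # response criterion c
        beta = norm.pdf(z_hit) / norm.pdf(z_fa)   # likelihood-ratio bias (= exp(c * d'))
        return d_prime, criterion, beta

    # Hypothetical counts: 40 hits, 10 misses, 15 false alarms, 35 correct rejections
    print(sdt_measures(40, 10, 15, 35))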
Something worth remembering: visual discrimination in sharks.
Fuss, Theodora; Schluessel, Vera
2015-03-01
This study investigated memory retention capabilities of juvenile gray bamboo sharks (Chiloscyllium griseum) using two-alternative forced-choice experiments. The sharks had previously been trained in a range of visual discrimination tasks, such as distinguishing between squares, triangles and lines, and their corresponding optical illusions (i.e., the Kanizsa figures or Müller-Lyer illusions), and in the present study, we tested them for memory retention. Despite the absence of reinforcement, sharks remembered the learned information for a period of up to 50 weeks, after which testing was terminated. In fish, as in other vertebrates, memory windows vary in duration depending on species and task; while it may seem beneficial to retain some information for a long time or even indefinitely, other information may be forgotten more easily to retain flexibility and save energy. The results of this study indicate that sharks are capable of long-term memory within the framework of selected cognitive skills. These could aid sharks in activities such as food retrieval, predator avoidance, mate choice or habitat selection and therefore be worth being remembered for extended periods of time. As in other cognitive tasks, intraspecific differences reflected the behavioral breadth of the species.
The Pivotal Role of the Right Parietal Lobe in Temporal Attention.
Agosta, Sara; Magnago, Denise; Tyler, Sarah; Grossman, Emily; Galante, Emanuela; Ferraro, Francesco; Mazzini, Nunzia; Miceli, Gabriele; Battelli, Lorella
2017-05-01
The visual system is extremely efficient at detecting events across time even at very fast presentation rates; however, discriminating the identity of those events is much slower and requires attention over time, a mechanism with a much coarser resolution [Cavanagh, P., Battelli, L., & Holcombe, A. O. Dynamic attention. In A. C. Nobre & S. Kastner (Eds.), The Oxford handbook of attention (pp. 652-675). Oxford: Oxford University Press, 2013]. Patients affected by right parietal lesion, including the TPJ, are severely impaired in discriminating events across time in both visual fields [Battelli, L., Cavanagh, P., & Thornton, I. M. Perception of biological motion in parietal patients. Neuropsychologia, 41, 1808-1816, 2003]. One way to test this ability is to use a simultaneity judgment task, whereby participants are asked to indicate whether two events occurred simultaneously or not. We psychophysically varied the frequency rate of four flickering disks, and on most of the trials, one disk (either in the left or right visual field) was flickering out-of-phase relative to the others. We asked participants to report whether two left-or-right-presented disks were simultaneous or not. We tested a total of 23 right and left parietal lesion patients in Experiment 1, and only right parietal patients showed impairment in both visual fields while their low-level visual functions were normal. Importantly, to causally link the right TPJ to the relative timing processing, we ran a TMS experiment on healthy participants. Participants underwent three stimulation sessions and performed the same simultaneity judgment task before and after 20 min of low-frequency inhibitory TMS over right TPJ, left TPJ, or early visual area as a control. rTMS over the right TPJ caused a bilateral impairment in the simultaneity judgment task, whereas rTMS over left TPJ or over early visual area did not affect performance. Altogether, our results directly link the right TPJ to the processing of relative time.
Asymmetric top-down modulation of ascending visual pathways in pigeons.
Freund, Nadja; Valencia-Alfonso, Carlos E; Kirsch, Janina; Brodmann, Katja; Manns, Martina; Güntürkün, Onur
2016-03-01
Cerebral asymmetries are a ubiquitous phenomenon evident in many species, incl. humans, and they display some similarities in their organization across vertebrates. In many species the left hemisphere is associated with the ability to categorize objects based on abstract or experience-based behaviors. Using the asymmetrically organized visual system of pigeons as an animal model, we show that descending forebrain pathways asymmetrically modulate visually evoked responses of single thalamic units. Activity patterns of neurons within the nucleus rotundus, the largest thalamic visual relay structure in birds, were differently modulated by left and right hemispheric descending systems. Thus, visual information ascending towards the left hemisphere was modulated by forebrain top-down systems at thalamic level, while right thalamic units were strikingly less modulated. This asymmetry of top-down control could promote experience-based processes within the left hemisphere, while biasing the right side towards stimulus-bound response patterns. In a subsequent behavioral task we tested the possible functional impact of this asymmetry. Under monocular conditions, pigeons learned to discriminate color pairs, so that each hemisphere was trained on one specific discrimination. Afterwards the animals were presented with stimuli that put the hemispheres in conflict. Response patterns on the conflicting stimuli revealed a clear dominance of the left hemisphere. Transient inactivation of left hemispheric top-down control reduced this dominance while inactivation of right hemispheric top-down control had no effect on response patterns. Functional asymmetries of descending systems that modify visual ascending pathways seem to play an important role in the superiority of the left hemisphere in experience-based visual tasks. Copyright © 2015. Published by Elsevier Ltd.
Measuring pilot workload in a motion base simulator. III - Synchronous secondary task
NASA Technical Reports Server (NTRS)
Kantowitz, Barry H.; Bortolussi, Michael R.; Hart, Sandra G.
1987-01-01
This experiment continues earlier research of Kantowitz et al. (1983) conducted in a GAT-1 motion-base trainer to evaluate choice-reaction secondary tasks as measures of pilot work load. The earlier work used an asynchronous secondary task presented every 22 sec regardless of flying performance. The present experiment uses a synchronous task presented only when a critical event occurred on the flying task. Both two- and four-choice visual secondary tasks were investigated. Analysis of primary flying-task results showed no decrement in error for altitude, indicating that the key assumption necessary for using a choice secondary task was satisfied. Reaction times showed significant differences between 'easy' and 'hard' flight scenarios as well as the ability to discriminate among flight tasks.
Snapp-Childs, Winona; Wilson, Andrew D; Bingham, Geoffrey P
2015-07-01
Under certain conditions, learning can transfer from a trained task to an untrained version of that same task. However, it is as yet unclear what those certain conditions are or why learning transfers when it does. Coordinated rhythmic movement is a valuable model system for investigating transfer because we have a model of the underlying task dynamic that includes perceptual coupling between the limbs being coordinated. The model predicts that (1) coordinated rhythmic movements, both bimanual and unimanual, are organised with respect to relative motion information for relative phase in the coupling function, (2) unimanual is less stable than bimanual coordination because the coupling is unidirectional rather than bidirectional, and (3) learning a new coordination is primarily about learning to perceive and use the relevant information which, with equal perceptual improvement due to training, yields equal transfer of learning from bimanual to unimanual coordination and vice versa [but, given prediction (2), the resulting performance is also conditioned by the intrinsic stability of each task]. In the present study, two groups were trained to produce 90° either unimanually or bimanually, respectively, and tested in respect to learning (namely improved performance in the trained 90° coordination task and improved visual discrimination of 90°) and transfer of learning (to the other, untrained 90° coordination task). Both groups improved in the task condition in which they were trained and in their ability to visually discriminate 90°, and this learning transferred to the untrained condition. When scaled by the relative intrinsic stability of each task, transfer levels were found to be equal. The results are discussed in the context of the perception-action approach to learning and performance.
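The trained 90° pattern above is a relative phase between two rhythmically moving effectors (or between an effector and a display). The study's exact phase measure is not described here; one common way to quantify relative phase from recorded movement signals is via the Hilbert transform, sketched below in Python (the signals and their parameters are hypothetical):

    import numpy as np
    from scipy.signal import hilbert

    def mean_relative_phase(x, y):
        """Mean continuous relative phase between two oscillatory position signals, in degrees."""
        phase_x = np.angle(hilbert(x - x.mean()))
        phase_y = np.angle(hilbert(y - y.mean()))
        dphi = np.angle(np.exp(1j * (phase_x - phase_y)))        # wrap to (-pi, pi]
        return np.degrees(np.angle(np.mean(np.exp(1j * dphi))))  # circular mean

    t = np.linspace(0, 10, 2000)
    x = np.sin(2 * np.pi * 1.0 * t)               # one limb, 1 Hz
    y = np.sin(2 * np.pi * 1.0 * t - np.pi / 2)   # the other limb, lagging by 90 degrees
    print(mean_relative_phase(x, y))              # approximately +90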
2018-01-01
Many individuals with posttraumatic stress disorder (PTSD) report experiencing frequent intrusive memories of the original traumatic event (e.g., flashbacks). These memories can be triggered by situations or stimuli that reflect aspects of the trauma and may reflect basic processes in learning and memory, such as generalization. It is possible that, through increased generalization, non-threatening stimuli that once evoked normal memories become associated with traumatic memories. Previous research has reported increased generalization in PTSD, but the role of visual discrimination processes has not been examined. To investigate visual discrimination in PTSD, 143 participants (Veterans and civilians) self-assessed for symptom severity were grouped according to the presence of severe PTSD symptoms (PTSS) vs. few/no symptoms (noPTSS). Participants were given a visual match-to-sample pattern separation task that varied trials by spatial separation (Low, Medium, High) and temporal delays (5, 10, 20, 30 s). Unexpectedly, the PTSS group demonstrated better discrimination performance than the noPTSS group at the most difficult spatial trials (Low spatial separation). Further assessment of accuracy and reaction time using drift diffusion modeling indicated that the better performance by the PTSS group on the hardest trials was not explained by slower reaction times, but rather by a faster accumulation of evidence during decision making in conjunction with a reduced threshold, indicating a tendency in the PTSS group to decide quickly rather than waiting for additional evidence to support the decision. This result supports the need for future studies examining the precise role of discrimination and generalization in PTSD, and how these cognitive processes might contribute to expression and maintenance of PTSD symptoms. PMID:29736339
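The drift-diffusion analysis above interprets accuracy and reaction time jointly in terms of the rate of evidence accumulation and the decision threshold. The abstract does not say which fitting procedure was used; purely to illustrate how a few summary statistics constrain those parameters, here is the closed-form EZ-diffusion approximation (Wagenmakers and colleagues, 2007), with hypothetical input values:

    import numpy as np

    def ez_diffusion(pc, vrt, mrt, s=0.1):
        """EZ-diffusion: map proportion correct (pc), variance of correct RTs (vrt, s^2),
        and mean correct RT (mrt, s) to drift rate v, boundary separation a, and
        nondecision time ter. Assumes 0.5 < pc < 1 (apply an edge correction otherwise)."""
        L = np.log(pc / (1.0 - pc))                    # logit of accuracy
        x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
        v = np.sign(pc - 0.5) * s * x**0.25            # drift rate (evidence accumulation)
        a = s**2 * L / v                               # boundary separation (threshold)
        y = -v * a / s**2
        mdt = (a / (2.0 * v)) * (1.0 - np.exp(y)) / (1.0 + np.exp(y))  # mean decision time
        ter = mrt - mdt                                # nondecision time
        return v, a, ter

    # Hypothetical summaries: 85% correct, RT variance 0.09 s^2, mean RT 0.65 s
    print(ez_diffusion(0.85, 0.09, 0.65))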
Learning Radiological Appearances of Diseases: Does Comparison Help?
ERIC Educational Resources Information Center
Kok, Ellen M.; de Bruin, Anique B. H.; Robben, Simon G. F.; van Merrienboer, Jeroen J. G.
2013-01-01
Comparison learning is a promising approach for learning complex real-life visual tasks. When medical students study radiological appearances of diseases, comparison of images showing diseases with images showing no abnormalities could help them learn to discriminate relevant, disease-related information. Medical students studied 12 diseases on…
Socio-cognitive profiles for visual learning in young and older adults
Christian, Julie; Goldstone, Aimee; Kuai, Shu-Guang; Chin, Wynne; Abrams, Dominic; Kourtzi, Zoe
2015-01-01
It is common wisdom that practice makes perfect; but why do some adults learn better than others? Here, we investigate individuals’ cognitive and social profiles to test which variables account for variability in learning ability across the lifespan. In particular, we focused on visual learning using tasks that test the ability to inhibit distractors and select task-relevant features. We tested the ability of young and older adults to improve through training in the discrimination of visual global forms embedded in a cluttered background. Further, we used a battery of cognitive tasks and psycho-social measures to examine which of these variables predict training-induced improvement in perceptual tasks and may account for individual variability in learning ability. Using partial least squares regression modeling, we show that visual learning is influenced by cognitive (i.e., cognitive inhibition, attention) and social (strategic and deep learning) factors rather than an individual’s age alone. Further, our results show that independent of age, strong learners rely on cognitive factors such as attention, while weaker learners use more general cognitive strategies. Our findings suggest an important role for higher-cognitive circuits involving executive functions that contribute to our ability to improve in perceptual tasks after training across the lifespan. PMID:26113820
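The partial least squares regression mentioned above relates a set of cognitive and social predictors to training-induced improvement through a small number of latent components. A generic sketch of that kind of analysis (not the authors' code; the predictor and outcome variables below are hypothetical stand-ins), using scikit-learn:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    n_subjects = 60
    # Hypothetical predictors: attention, cognitive inhibition, strategic and deep learning scores
    X = rng.standard_normal((n_subjects, 4))
    # Hypothetical outcome: improvement in form-discrimination performance after training
    y = 0.6 * X[:, 0] + 0.4 * X[:, 2] + 0.3 * rng.standard_normal(n_subjects)

    pls = PLSRegression(n_components=2)  # latent components summarizing the predictors
    pls.fit(X, y)
    print(pls.score(X, y))               # variance in improvement explained by the components
    print(pls.x_loadings_)               # how each predictor loads on each latent component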
Rapid natural scene categorization in the near absence of attention
Li, Fei Fei; VanRullen, Rufin; Koch, Christof; Perona, Pietro
2002-01-01
What can we see when we do not pay attention? It is well known that we can be “blind” even to major aspects of natural scenes when we attend elsewhere. The only tasks that do not need attention appear to be carried out in the early stages of the visual system. Contrary to this common belief, we report that subjects can rapidly detect animals or vehicles in briefly presented novel natural scenes while simultaneously performing another attentionally demanding task. By comparison, they are unable to discriminate large T's from L's, or bisected two-color disks from their mirror images under the same conditions. We conclude that some visual tasks associated with “high-level” cortical areas may proceed in the near absence of attention. PMID:12077298
Sneve, Markus H; Magnussen, Svein; Alnæs, Dag; Endestad, Tor; D'Esposito, Mark
2013-11-01
Visual STM of simple features is achieved through interactions between retinotopic visual cortex and a set of frontal and parietal regions. In the present fMRI study, we investigated effective connectivity between central nodes in this network during the different task epochs of a modified delayed orientation discrimination task. Our univariate analyses demonstrate that the inferior frontal junction (IFJ) is preferentially involved in memory encoding, whereas activity in the putative FEFs and anterior intraparietal sulcus (aIPS) remains elevated throughout periods of memory maintenance. We have earlier reported, using the same task, that areas in visual cortex sustain information about task-relevant stimulus properties during delay intervals [Sneve, M. H., Alnæs, D., Endestad, T., Greenlee, M. W., & Magnussen, S. Visual short-term memory: Activity supporting encoding and maintenance in retinotopic visual cortex. Neuroimage, 63, 166-178, 2012]. To elucidate the temporal dynamics of the IFJ-FEF-aIPS-visual cortex network during memory operations, we estimated Granger causality effects between these regions with fMRI data representing memory encoding/maintenance as well as during memory retrieval. We also investigated a set of control conditions involving active processing of stimuli not associated with a memory task and passive viewing. In line with the developing understanding of IFJ as a region critical for control processes with a possible initiating role in visual STM operations, we observed influence from IFJ to FEF and aIPS during memory encoding. Furthermore, FEF predicted activity in a set of higher-order visual areas during memory retrieval, a finding consistent with its suggested role in top-down biasing of sensory cortex.
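Granger causality, as used above, asks whether the past of one region's time series improves prediction of another region's time series beyond what that region's own past provides. A minimal pairwise sketch with ordinary least squares (the study's actual fMRI implementation is not specified here; the simulated data and lag order are illustrative only):

    import numpy as np

    def granger_causality(x, y, p=2):
        """Does x Granger-cause y? Compare an AR(p) model of y on its own lags (restricted)
        with one that also includes lags of x (full); return the log variance ratio."""
        n = len(y)
        Y = y[p:]
        own_lags = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
        x_lags = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
        restricted = np.column_stack([np.ones(n - p), own_lags])
        full = np.column_stack([restricted, x_lags])
        rss_r = np.sum((Y - restricted @ np.linalg.lstsq(restricted, Y, rcond=None)[0]) ** 2)
        rss_f = np.sum((Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]) ** 2)
        return np.log(rss_r / rss_f)  # > 0 means x's past improves prediction of y

    rng = np.random.default_rng(0)
    x = rng.standard_normal(200)
    y = 0.5 * rng.standard_normal(200)
    y[1:] += 0.8 * x[:-1]                                     # y depends on x's past
    print(granger_causality(x, y), granger_causality(y, x))   # large vs. near-zero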
Wu, Lin; Wang, Yang; Pan, Shirui
2017-12-01
It is now well established that sparse representation models work effectively for many visual recognition tasks, and have pushed forward the success of dictionary learning therein. Recent studies of dictionary learning focus on learning discriminative atoms instead of purely reconstructive ones. However, the existence of intraclass diversities (i.e., data objects within the same category that exhibit large visual dissimilarities) and interclass similarities (i.e., data objects from distinct classes that share much visual similarity) makes it challenging to learn effective recognition models. To this end, a large number of labeled data objects are required to learn models which can effectively characterize these subtle differences. However, labeled data objects are always limited in availability, making it difficult to learn a monolithic dictionary that can be discriminative enough. To address the above limitations, in this paper, we propose a weakly-supervised dictionary learning method to automatically learn a discriminative dictionary by fully exploiting visual attribute correlations rather than label priors. In particular, the intrinsic attribute correlations are deployed as a critical cue to guide the process of object categorization, and then a set of subdictionaries are jointly learned with respect to each category. The resulting dictionary is highly discriminative and leads to intraclass diversity aware sparse representations. Extensive experiments on image classification and object recognition are conducted to show the effectiveness of our approach.
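The sparse representation models discussed above encode each data object as a sparse combination of dictionary atoms. Below is a minimal Python sketch of that generic sparse-coding step (iterative soft-thresholding for the lasso objective); it is not the paper's weakly-supervised, attribute-guided dictionary learning method, and the dictionary and signal are synthetic:

    import numpy as np

    def sparse_code(D, x, lam=0.1, n_iter=200):
        """Approximately solve min_z 0.5*||x - D z||^2 + lam*||z||_1 by ISTA."""
        step = 1.0 / np.linalg.norm(D, ord=2) ** 2   # 1 / Lipschitz constant of the gradient
        z = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ z - x)
            u = z - step * grad
            z = np.sign(u) * np.maximum(np.abs(u) - step * lam, 0.0)  # soft threshold
        return z

    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
    x = D[:, [3, 57]] @ np.array([1.0, -0.5])     # signal built from two atoms
    z = sparse_code(D, x)
    print(np.nonzero(np.abs(z) > 1e-3)[0])        # indices of the recovered active atoms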
The effect of short-term training on cardinal and oblique orientation discrimination: an ERP study.
Song, Yan; Sun, Li; Wang, You; Zhang, Xuemin; Kang, Jing; Ma, Xiaoli; Yang, Bin; Guan, Yijie; Ding, Yulong
2010-03-01
The adult brain shows remarkable plasticity, as demonstrated by the improvement in most visual discrimination tasks after intensive practice. However, previous studies have demonstrated that practice improved the discrimination only around oblique orientations, while performance around cardinal orientations (vertical or horizontal orientations) remained stable despite extensive training. The two experiments described here used event-related potentials (ERPs) to investigate the neural substrates underlying different training effects in the two kinds of orientation. Event-related potentials were recorded from subjects while they were trained on a grating orientation discrimination task. Psychophysical threshold measurements were performed before and after the training. For oblique gratings, psychophysical thresholds decreased significantly across training sessions. ERPs showed larger P2 and P3 amplitudes and smaller N1 amplitudes over the parietal/occipital areas with more practice. In line with the psychophysical thresholds, the training effect on the P2 and P3 was specific to stimulus orientation. However, the N1 effect was generalized over differently oriented grating stimuli. For cardinally oriented gratings, no significant changes were found in the psychophysical thresholds during the training. ERPs still showed a similar generalized N1 effect as for the oblique gratings. However, the amplitudes of P2 and P3 were unchanged during the whole training. Compared with cardinal orientations, more visual processing stages and later ERP components were involved in the training of oblique orientation discrimination. These results contribute to understanding the neural basis of the asymmetry between cardinal and oblique orientation training effects. Copyright 2009 Elsevier B.V. All rights reserved.
Jolij, Jacob; Scholte, H Steven; van Gaal, Simon; Hodgson, Timothy L; Lamme, Victor A F
2011-12-01
Humans largely guide their behavior by their visual representation of the world. Recent studies have shown that visual information can trigger behavior within 150 msec, suggesting that visually guided responses to external events, in fact, precede conscious awareness of those events. However, is such a view correct? By using a texture discrimination task, we show that the brain relies on long-latency visual processing in order to guide perceptual decisions. Decreasing stimulus saliency leads to selective changes in long-latency visually evoked potential components reflecting scene segmentation. These latency changes are accompanied by almost equal changes in simple RTs and points of subjective simultaneity. Furthermore, we find a strong correlation between individual RTs and the latencies of scene segmentation related components in the visually evoked potentials, showing that the processes underlying these late brain potentials are critical in triggering a response. However, using the same texture stimuli in an antisaccade task, we found that reflexive, but erroneous, prosaccades, but not antisaccades, can be triggered by earlier visual processes. In other words: The brain can act quickly, but decides late. Differences between our study and earlier findings suggesting that action precedes conscious awareness can be explained by assuming that task demands determine whether a fast and unconscious, or a slower and conscious, representation is used to initiate a visually guided response.
The Influence of Phonetic Dimensions on Aphasic Speech Perception
ERIC Educational Resources Information Center
Hessler, Dorte; Jonkers, Roel; Bastiaanse, Roelien
2010-01-01
Individuals with aphasia have more problems detecting small differences between speech sounds than larger ones. This paper reports how phonemic processing is impaired and how this is influenced by speechreading. A non-word discrimination task was carried out with "audiovisual", "auditory only" and "visual only" stimulus display. Subjects had to…
Congenital Blindness Leads to Enhanced Vibrotactile Perception
ERIC Educational Resources Information Center
Wan, Catherine Y.; Wood, Amanda G.; Reutens, David C.; Wilson, Sarah J.
2010-01-01
Previous studies have shown that in comparison with the sighted, blind individuals display superior non-visual perceptual abilities and differ in brain organisation. In this study, we investigated the performance of blind and sighted participants on a vibrotactile discrimination task. Thirty-three blind participants were classified into one of…
Concentration of Swiss Elite Orienteers.
ERIC Educational Resources Information Center
Seiler, Roland; Wetzel, Jorg
1997-01-01
A visual discrimination task was used to measure concentration among 43 members of Swiss national orienteering teams. Subjects were above average in the number of target objects dealt with and in duration of continuous concentration. For females only, ranking in orienteering performance was related to quality of concentration (ratio of correct to…
Altered orientation of spatial attention in depersonalization disorder.
Adler, Julia; Beutel, Manfred E; Knebel, Achim; Berti, Stefan; Unterrainer, Josef; Michal, Matthias
2014-05-15
Difficulties with concentration are frequent complaints of patients with depersonalization disorder (DPD). Standard neuropsychological tests have suggested alterations of the attentional and perceptual systems. To investigate this, the well-validated Spatial Cueing paradigm was used with two different tasks, requiring either the detection or the discrimination of visual stimuli. At the start of each trial a cue indicated either the correct (valid) or the incorrect (invalid) position of the upcoming stimulus or was uninformative (neutral). Differences between DPD patients and controls were observed only under the condition of increased task difficulty (the discrimination task). DPD patients showed a smaller total attention-directing effect (RT in valid vs. invalid trials) compared to healthy controls only in the discrimination condition. RT costs (i.e., prolonged RT in neutral vs. invalid trials) mainly accounted for this difference. These results indicate that DPD is associated with altered attentional mechanisms, especially with a stronger responsiveness to unexpected events. From an evolutionary perspective this may be advantageous in a dangerous environment; in daily life it may be experienced as high distractibility. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Stimulus information contaminates summation tests of independent neural representations of features
NASA Technical Reports Server (NTRS)
Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.
2002-01-01
Many models of visual processing assume that visual information is analyzed into separable and independent neural codes, or features. A common psychophysical test of independent features is known as a summation study, which measures performance in a detection, discrimination, or visual search task as the number of proposed features increases. Improvement in human performance with an increasing number of available features is typically attributed to the summation, or combination, of information across independent neural coding of the features. In many instances, however, increasing the number of available features also increases the stimulus information in the task, as assessed by an optimal observer that does not include the independent neural codes. In a visual search task with spatial frequency and orientation as the component features, a particular set of stimuli was chosen so that all searches had equivalent stimulus information, regardless of the number of features. In this case, human performance did not improve with an increasing number of features, implying that the improvement observed with additional features may be due to stimulus information rather than to combination across independent features.
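For readers unfamiliar with the summation logic discussed above: under the standard independent-features account, sensitivities from orthogonal feature channels combine quadratically, so adding a second, equally discriminable feature predicts roughly a sqrt(2) improvement in d'. The sketch below illustrates that textbook prediction; the function name and d' values are illustrative and are not taken from the study.

```python
import numpy as np

def combined_dprime(feature_dprimes):
    """Predicted sensitivity when independent feature codes are summed:
    d'_combined = sqrt(d1'^2 + d2'^2 + ...)."""
    d = np.asarray(feature_dprimes, dtype=float)
    return np.sqrt(np.sum(d ** 2))

# Example: a target defined by one feature (d' = 1.0) vs. by two equally
# discriminable features (e.g., orientation plus spatial frequency).
print(combined_dprime([1.0]))        # 1.00
print(combined_dprime([1.0, 1.0]))   # ~1.41, the predicted summation benefit
```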
Neural cascade of conflict processing: Not just time-on-task.
McKay, Cameron C; van den Berg, Berry; Woldorff, Marty G
2017-02-01
In visual conflict tasks (e.g., Stroop or flanker), response times (RTs) are generally longer on incongruent trials relative to congruent ones. Two event-related-potential (ERP) components classically associated with the processing of stimulus conflict are the fronto-central, incongruency-related negativity (Ninc) and the posterior late-positive complex (LPC), which are derived from the ERP difference waves for incongruent minus congruent trials. It has been questioned, however, whether these effects, or other neural measures of incongruency (e.g., fMRI responses in the anterior cingulate), reflect true conflict processing, or whether such effects derive mainly from differential time-on-task. To address this question, we leveraged high-temporal-resolution ERP measures of brain activity during two behavioral tasks. The first task, a modified Eriksen flanker paradigm (with congruent and incongruent trials), was used to evoke the classic RT and ERP effects associated with conflict. The second was a non-conflict control task in which participants visually discriminated a single stimulus (with easy and hard discrimination conditions). Behaviorally, the parameters were titrated to yield similar RT effects of conflict and difficulty (27 ms). Neurally, both within-task contrasts showed an initial fronto-central negative-polarity wave (N2-latency effect), but they then diverged. In the difficulty difference wave, the initial negativity led directly into the posterior LPC, whereas in the incongruency contrast the initial negativity was followed by a second fronto-central negative peak (Ninc), which was then followed by a considerably longer-latency LPC. These results provide clear evidence that the longer processing for incongruent stimulus inputs does not just reflect time-on-task or difficulty, but includes a true conflict-processing component. Copyright © 2017 Elsevier Ltd. All rights reserved.
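The Ninc and LPC measures described above are read off incongruent-minus-congruent difference waves. A minimal sketch of that computation is given below, assuming trial-by-trial epoched data already stored as NumPy arrays; the array shapes and variable names are illustrative, not the authors' pipeline.

```python
import numpy as np

def difference_wave(incongruent_epochs, congruent_epochs):
    """Incongruent-minus-congruent ERP difference wave.
    Each input has shape (n_trials, n_channels, n_timepoints)."""
    return incongruent_epochs.mean(axis=0) - congruent_epochs.mean(axis=0)

# Illustrative use with random data standing in for epoched EEG
rng = np.random.default_rng(0)
incong = rng.normal(size=(120, 64, 500))
cong = rng.normal(size=(120, 64, 500))
conflict_wave = difference_wave(incong, cong)   # (64 channels, 500 samples)
```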
Voss, Patrice; Gougoux, Frederic; Zatorre, Robert J; Lassonde, Maryse; Lepore, Franco
2008-04-01
Blind individuals do not necessarily receive more auditory stimulation than sighted individuals. However, to interact effectively with their environment, they have to rely on non-visual cues (in particular auditory) to a greater extent. Often benefiting from cerebral reorganization, they not only learn to rely more on such cues but also may process them better and, as a result, demonstrate exceptional abilities in auditory spatial tasks. Here we examine the effects of blindness on brain activity, using positron emission tomography (PET), during a sound-source discrimination task (SSDT) in both early- and late-onset blind individuals. This should not only provide an answer to the question of whether the blind manifest changes in brain activity but also allow a direct comparison of the two subgroups performing an auditory spatial task. The task was presented under two listening conditions: one binaural and one monaural. The binaural task did not show any significant behavioural differences between groups, but it demonstrated striate and extrastriate activation in the early-blind groups. A subgroup of early-blind individuals, on the other hand, performed significantly better than all the other groups during the monaural task, and these enhanced skills were correlated with elevated activity within the left dorsal extrastriate cortex. Surprisingly, activation of the right ventral visual pathway, which was significantly activated in the late-blind individuals during the monaural task, was negatively correlated with performance. This suggests the possibility that not all cross-modal plasticity is beneficial. Overall, our results not only support previous findings showing that occipital cortex of early-blind individuals is functionally engaged in spatial auditory processing but also shed light on the impact the age of onset of blindness can have on the ensuing cross-modal plasticity.
Stimulus novelty, task relevance and the visual evoked potential in man
NASA Technical Reports Server (NTRS)
Courchesne, E.; Hillyard, S. A.; Galambos, R.
1975-01-01
The effect of task relevance on P3 (waveform of human evoked potential) waves and the methodologies used to deal with them are outlined. Visual evoked potentials (VEPs) were recorded from normal adult subjects performing a visual discrimination task. Subjects counted the number of presentations of the numeral 4, which was interposed rarely and randomly within a sequence of tachistoscopically flashed background stimuli (the numeral 2). Intrusive, task-irrelevant (not counted) stimuli were also interspersed rarely and randomly in the sequence of 2s; these stimuli were of two types: simples, which were easily recognizable, and novels, which were completely unrecognizable. It was found that the simples and the counted 4s evoked posteriorly distributed P3 waves while the irrelevant novels evoked large, frontally distributed P3 waves. These large, frontal P3 waves to novels were also found to be preceded by large N2 waves. These findings indicate that the P3 wave is not a unitary phenomenon but should be considered in terms of a family of waves, differing in their brain generators and in their psychological correlates.
Neural activity in cortical area V4 underlies fine disparity discrimination.
Shiozaki, Hiroshi M; Tanabe, Seiji; Doi, Takahiro; Fujita, Ichiro
2012-03-14
Primates are capable of discriminating depth with remarkable precision using binocular disparity. Neurons in area V4 are selective for relative disparity, which is the crucial visual cue for discrimination of fine disparity. Here, we investigated the contribution of V4 neurons to fine disparity discrimination. Monkeys discriminated whether the center disk of a dynamic random-dot stereogram was in front of or behind its surrounding annulus. We first behaviorally tested the reference frame of the disparity representation used for performing this task. After learning the task with a set of surround disparities, the monkeys generalized their responses to untrained surround disparities, indicating that the perceptual decisions were generated from a disparity representation in a relative frame of reference. We then recorded single-unit responses from V4 while the monkeys performed the task. On average, neuronal thresholds were higher than the behavioral thresholds. The most sensitive neurons reached thresholds as low as the psychophysical thresholds. For subthreshold disparities, the monkeys made frequent errors. The variable decisions were predictable from the fluctuation in the neuronal responses. The predictions were based on a decision model in which each V4 neuron transmits the evidence for the disparity it prefers. We finally altered the disparity representation artificially by means of microstimulation of V4. The decisions were systematically biased when microstimulation boosted the V4 responses. The bias was toward the direction predicted from the decision model. We suggest that disparity signals carried by V4 neurons underlie precise discrimination of fine stereoscopic depth.
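Neuronal thresholds of the kind compared with behaviour here are conventionally derived from ROC analysis of single-neuron firing-rate distributions for the two disparity alternatives. The sketch below computes the ROC area from spike counts as an illustration of that standard neurometric approach; it is not the authors' exact analysis, and the spike counts are invented.

```python
import numpy as np

def roc_area(counts_pref, counts_null):
    """Probability that a randomly drawn response to the preferred disparity
    exceeds one to the non-preferred disparity (ties count as 0.5)."""
    pref = np.asarray(counts_pref)[:, None]
    null = np.asarray(counts_null)[None, :]
    return np.mean((pref > null) + 0.5 * (pref == null))

# Illustrative spike counts for "near" vs. "far" trials of one V4 neuron
near = np.array([12, 15, 9, 14, 13, 11, 16])
far = np.array([8, 10, 7, 11, 9, 6, 10])
print(roc_area(near, far))  # ~0.5 = insensitive, ~1.0 = reliable discrimination
```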
Götz, Theresa; Hanke, David; Huonker, Ralph; Weiss, Thomas; Klingner, Carsten; Brodoehl, Stefan; Baumbach, Philipp; Witte, Otto W
2017-06-01
We often close our eyes to improve perception. Recent results have shown a decrease of perception thresholds accompanied by an increase in somatosensory activity after eye closure. However, does somatosensory spatial discrimination also benefit from eye closure? We previously showed that spatial discrimination is accompanied by a reduction of somatosensory activity. Using magnetoencephalography, we analyzed the magnitude of primary somatosensory (somatosensory P50m) and primary auditory activity (auditory P50m) during a one-back discrimination task in 21 healthy volunteers. In complete darkness, participants were requested to pay attention to either the somatosensory or auditory stimulation and asked to open or close their eyes every 6.5 min. The somatosensory P50m was reduced during a task requiring discrimination of changes in stimulus location at the distal phalanges of different fingers. The somatosensory P50m was further reduced, and detection performance was higher, with eyes open. A similar reduction was found for the auditory P50m during a task requiring discrimination of changing tones. The function of eye closure is more than controlling visual input. It might be advantageous for perception because it is an effective way to reduce interference from other modalities, but disadvantageous for spatial discrimination because it requires at least one top-down processing stage. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Visual body perception in anorexia nervosa.
Urgesi, Cosimo; Fornasari, Livia; Perini, Laura; Canalaz, Francesca; Cremaschi, Silvana; Faleschini, Laura; Balestrieri, Matteo; Fabbro, Franco; Aglioti, Salvatore Maria; Brambilla, Paolo
2012-05-01
Disturbance of body perception is a central aspect of anorexia nervosa (AN) and several neuroimaging studies have documented structural and functional alterations of occipito-temporal cortices involved in visual body processing. However, it is unclear whether these perceptual deficits involve more basic aspects of others' body perception. A consecutive sample of 15 adolescent patients with AN was compared with a group of 15 age- and gender-matched controls in delayed matching-to-sample tasks requiring the visual discrimination of the form or of the action of others' bodies. Patients showed better visual discrimination performance than controls in detail-based processing of body forms but not of body actions, which positively correlated with their increased tendency to convert a signal of punishment into a signal of reinforcement (higher persistence scores). The paradoxical advantage of patients with AN in detail-based body processing may be associated with their tendency to routinely explore body parts as a consequence of their obsessive worries about body appearance. Copyright © 2012 Wiley Periodicals, Inc.
Oetjen, Sophie; Ziefle, Martina
2009-01-01
An increasing demand to work with electronic displays and to use mobile computers emphasises the need to compare visual performance while working with different screen types. In the present study, a cathode ray tube (CRT) was compared to an external liquid crystal display (LCD) and a Notebook-LCD. The influence of screen type and viewing angle on discrimination performance was studied. Physical measurements revealed that luminance and contrast values change with varying viewing angles (anisotropy). This is most pronounced in Notebook-LCDs, followed by external LCDs and CRTs. Performance data showed that LCD anisotropy has a negative impact on completing time-critical visual tasks. The best results were achieved when a CRT was used. The largest deterioration of performance resulted when participants worked with a Notebook-LCD. When it is necessary to react quickly and accurately, LCD screens have disadvantages. The anisotropy of LCD-TFTs is therefore considered to be a limiting factor that deteriorates visual performance.
Li, Xuan; Allen, Philip A; Lien, Mei-Ching; Yamamoto, Naohide
2017-02-01
Perceptual learning, the acquisition of a new skill through practice, appears to stimulate brain plasticity and enhance performance (Fiorentini & Berardi, 1981). The present study aimed to determine (a) whether perceptual learning can be used to compensate for age-related declines in perceptual abilities, and (b) whether the effect of perceptual learning can be transferred to untrained stimuli and subsequently improve the capacity of visual working memory (VWM). We tested both healthy younger and older adults in a 3-day training session using an orientation discrimination task. A matching-to-sample psychophysical method was used to measure improvements in orientation discrimination thresholds and reaction times (RTs). Results showed that both younger and older adults improved discrimination thresholds and RTs with similar learning rates and magnitudes. Furthermore, older adults exhibited a generalization of improvements to 3 untrained orientations that were close to the training orientation, and they benefited more than younger adults from the perceptual learning in that they transferred the learning effects to VWM performance. We conclude that through perceptual learning, older adults can partially counteract age-related perceptual declines, generalize the learning effect to other stimulus conditions, and further overcome the limitation of using VWM capacity to perform a perceptual task. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
McElree, Brian; Carrasco, Marisa
2012-01-01
Feature and conjunction searches have been argued to delineate parallel and serial operations in visual processing. The authors evaluated this claim by examining the temporal dynamics of the detection of features and conjunctions. The 1st experiment used a reaction time (RT) task to replicate standard mean RT patterns and to examine the shapes of the RT distributions. The 2nd experiment used the response-signal speed–accuracy trade-off (SAT) procedure to measure discrimination (asymptotic detection accuracy) and detection speed (processing dynamics). Set size affected discrimination in both feature and conjunction searches but affected detection speed only in the latter. Fits of models to the SAT data that included a serial component overpredicted the magnitude of the observed dynamics differences. The authors concluded that both features and conjunctions are detected in parallel. Implications for the role of attention in visual processing are discussed. PMID:10641310
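Asymptotic accuracy and processing dynamics in the response-signal SAT procedure described above are conventionally summarised by fitting a shifted exponential approach to a limit. The sketch below shows that standard function; the parameter values are illustrative and are not the authors' fitted estimates.

```python
import numpy as np

def sat_curve(t, lam, beta, delta):
    """Conventional SAT model: d'(t) = lam * (1 - exp(-beta * (t - delta)))
    for t > delta, else 0.
    lam   = asymptotic accuracy (discriminability),
    beta  = rate of rise, delta = intercept (both index processing speed)."""
    t = np.asarray(t, dtype=float)
    return np.where(t > delta, lam * (1.0 - np.exp(-beta * (t - delta))), 0.0)

# Example: same asymptote but slower dynamics for a larger set size
times = np.linspace(0, 3, 7)   # seconds of processing time after stimulus onset
print(sat_curve(times, lam=2.5, beta=4.0, delta=0.35))
print(sat_curve(times, lam=2.5, beta=2.5, delta=0.45))
```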
Sarabi, Mitra Taghizadeh; Aoki, Ryuta; Tsumura, Kaho; Keerativittayayut, Ruedeerat; Jimura, Koji; Nakahara, Kiyoshi
2018-01-01
The neural mechanisms underlying visual perceptual learning (VPL) have typically been studied by examining changes in task-related brain activation after training. However, the relationship between post-task "offline" processes and VPL remains unclear. The present study examined this question by obtaining resting-state functional magnetic resonance imaging (fMRI) scans of human brains before and after a task-fMRI session involving visual perceptual training. During the task-fMRI session, participants performed a motion coherence discrimination task in which they judged the direction of moving dots with a coherence level that varied between trials (20, 40, and 80%). We found that stimulus-induced activation increased with motion coherence in the middle temporal cortex (MT+), a feature-specific region representing visual motion. On the other hand, stimulus-induced activation decreased with motion coherence in the dorsal anterior cingulate cortex (dACC) and bilateral insula, regions involved in decision making under perceptual ambiguity. Moreover, by comparing pre-task and post-task rest periods, we revealed that resting-state functional connectivity (rs-FC) with the MT+ was significantly increased after training in widespread cortical regions including the bilateral sensorimotor and temporal cortices. In contrast, rs-FC with the MT+ was significantly decreased in subcortical regions including the thalamus and putamen. Importantly, the training-induced change in rs-FC was observed only with the MT+, but not with the dACC or insula. Thus, our findings suggest that perceptual training induces plastic changes in offline functional connectivity specifically in brain regions representing the trained visual feature, emphasising the distinct roles of feature-representation regions and decision-related regions in VPL.
Color categories affect pre-attentive color perception.
Clifford, Alexandra; Holmes, Amanda; Davies, Ian R L; Franklin, Anna
2010-10-01
Categorical perception (CP) of color is the faster and/or more accurate discrimination of colors from different categories than equivalently spaced colors from the same category. Here, we investigate whether color CP at early stages of chromatic processing is independent of top-down modulation from attention. A visual oddball task was employed where frequent and infrequent colored stimuli were either same- or different-category, with chromatic differences equated across conditions. Stimuli were presented peripheral to a central distractor task to elicit an event-related potential (ERP) known as the visual mismatch negativity (vMMN). The vMMN is an index of automatic and pre-attentive visual change detection arising from generating loci in visual cortices. The results revealed a greater vMMN for different-category than same-category change detection when stimuli appeared in the lower visual field, and an absence of attention-related ERP components. The findings provide the first clear evidence for an automatic and pre-attentive categorical code for color. Copyright © 2010 Elsevier B.V. All rights reserved.
Differential contribution of early visual areas to the perceptual process of contour processing.
Schira, Mark M; Fahle, Manfred; Donner, Tobias H; Kraft, Antje; Brandt, Stephan A
2004-04-01
We investigated contour processing and figure-ground detection within human retinotopic areas using event-related functional magnetic resonance imaging (fMRI) in 6 healthy and naïve subjects. A figure (6 degrees side length) was created by a 2nd-order texture contour. An independent and demanding foveal letter-discrimination task prevented subjects from noticing this more peripheral contour stimulus. The contour subdivided our stimulus into a figure and a ground. Using localizers and retinotopic mapping stimuli we were able to subdivide each early visual area into 3 eccentricity regions corresponding to 1) the central figure, 2) the area along the contour, and 3) the background. In these subregions we investigated the hemodynamic responses to our stimuli and compared responses with or without the contour defining the figure. No contour-related blood oxygenation level-dependent modulation in early visual areas V1, V3, VP, and MT+ was found. Significant signal modulation in the contour subregions of V2v, V2d, V3a, and LO occurred. This activation pattern was different from comparable studies, which might be attributable to the letter-discrimination task reducing confounding attentional modulation. In V3a, but not in any other retinotopic area, signal modulation corresponding to the central figure could be detected. Such contextual modulation will be discussed in light of the recurrent processing hypothesis and the role of visual awareness.
Effects of spatial cues on color-change detection in humans
Herman, James P.; Bogadhi, Amarender R.; Krauzlis, Richard J.
2015-01-01
Studies of covert spatial attention have largely used motion, orientation, and contrast stimuli as these features are fundamental components of vision. The feature dimension of color is also fundamental to visual perception, particularly for catarrhine primates, and yet very little is known about the effects of spatial attention on color perception. Here we present results using novel dynamic color stimuli in both discrimination and color-change detection tasks. We find that our stimuli yield comparable discrimination thresholds to those obtained with static stimuli. Further, we find that an informative spatial cue improves performance and speeds response time in a color-change detection task compared with an uncued condition, similar to what has been demonstrated for motion, orientation, and contrast stimuli. Our results demonstrate the use of dynamic color stimuli for an established psychophysical task and show that color stimuli are well suited to the study of spatial attention. PMID:26047359
Navigation performance in virtual environments varies with fractal dimension of landscape.
Juliani, Arthur W; Bies, Alexander J; Boydston, Cooper R; Taylor, Richard P; Sereno, Margaret E
2016-09-01
Fractal geometry has been used to describe natural and built environments, but has yet to be studied in navigational research. In order to establish a relationship between the fractal dimension (D) of a natural environment and humans' ability to navigate such spaces, we conducted two experiments using virtual environments that simulate the fractal properties of nature. In Experiment 1, participants completed a goal-driven search task either with or without a map in landscapes that varied in D. In Experiment 2, participants completed a map-reading and location-judgment task in separate sets of fractal landscapes. In both experiments, task performance was highest at the low-to-mid range of D, which was previously reported as most preferred and discriminable in studies of fractal aesthetics and discrimination, respectively, supporting a theory of visual fluency. The applicability of these findings to architecture, urban planning, and the general design of constructed spaces is discussed.
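The fractal dimension D referred to above is commonly estimated by box counting: count the boxes of side s that intersect the pattern and take D as the slope of log N against log(1/s). The sketch below illustrates that estimator on a binary image; it is not the authors' landscape-generation code, and the random mask is only a stand-in for a real terrain boundary.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate fractal dimension D of a 2-D binary mask by box counting."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        n = 0
        for i in range(0, h, s):          # count boxes of side s that contain
            for j in range(0, w, s):      # at least one "on" pixel
                if mask[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Illustrative use: a random threshold pattern (D close to 2), not a landscape
rng = np.random.default_rng(1)
mask = rng.random((256, 256)) > 0.5
print(box_counting_dimension(mask))
```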
Task relevance modulates the behavioural and neural effects of sensory predictions
Friston, Karl J.; Nobre, Anna C.
2017-01-01
The brain is thought to generate internal predictions to optimize behaviour. However, it is unclear whether prediction signalling is an automatic brain function or depends on task demands. Here, we manipulated the spatial/temporal predictability of visual targets, and the relevance of spatial/temporal information provided by auditory cues. We used magnetoencephalography (MEG) to measure participants’ brain activity during task performance. Task relevance modulated the influence of predictions on behaviour: spatial/temporal predictability improved spatial/temporal discrimination accuracy, but not vice versa. To explain these effects, we used behavioural responses to estimate subjective predictions under an ideal-observer model. Model-based time-series of predictions and prediction errors (PEs) were associated with dissociable neural responses: predictions correlated with cue-induced beta-band activity in auditory regions and alpha-band activity in visual regions, while stimulus-bound PEs correlated with gamma-band activity in posterior regions. Crucially, task relevance modulated these spectral correlates, suggesting that current goals influence PE and prediction signalling. PMID:29206225
NASA Technical Reports Server (NTRS)
Remington, Roger; Williams, Douglas
1986-01-01
Three single-target visual search tasks were used to evaluate a set of cathode-ray tube (CRT) symbols for a helicopter situation display. The search tasks were representative of the information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. Familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols, such as a nearby flashing dot or surrounding square, had a greater disruptive effect on the graphic symbols than did the numeric characters. The results suggest that a symbol set is, in some respects, like a list that must be learned. Factors that affect the time to identify items in a memory task, such as familiarity and visual discriminability, also affect the time to identify symbols. This analogy has broad implications for the design of symbol sets. An attempt was made to model information access with this class of display.
Ebersbach, Mirjam; Nawroth, Christian
2016-01-01
Tracking objects that are hidden and then moved is a crucial ability related to object permanence, which develops across several stages in early childhood. In spatial rotation tasks, children observe a target object that is hidden in one of two or more containers before the containers are rotated around a fixed axis. Usually, 30-month-olds fail to find the hidden object after it was rotated by 180°. We examined whether visual discriminability of the containers improves 30-month-olds’ success in this task and whether children perform better after 90° than after 180° rotations. Two potential hiding containers with same or different colors were placed on a board that was rotated by 90° or 180° in a within-subjects design. Children (N = 29) performed above chance level in all four conditions. Their overall success in finding the object did not improve by differently colored containers. However, different colors prevented children from showing an inhibition bias in 90° rotations, that is, choosing the empty container more often when it was located close to them than when it was farther away: This bias emerged in the same colors condition but not in the different colors condition. Results are discussed in view of particular challenges that might facilitate or deteriorate spatial rotation tasks for young children. PMID:27812346
Recalibration of the Multisensory Temporal Window of Integration Results from Changing Task Demands
Mégevand, Pierre; Molholm, Sophie; Nayak, Ashabari; Foxe, John J.
2013-01-01
The notion of the temporal window of integration, when applied in a multisensory context, refers to the breadth of the interval across which the brain perceives two stimuli from different sensory modalities as synchronous. It maintains a unitary perception of multisensory events despite physical and biophysical timing differences between the senses. The boundaries of the window can be influenced by attention and past sensory experience. Here we examined whether task demands could also influence the multisensory temporal window of integration. We varied the stimulus onset asynchrony between simple, short-lasting auditory and visual stimuli while participants performed two tasks in separate blocks: a temporal order judgment task that required the discrimination of subtle auditory-visual asynchronies, and a reaction time task to the first incoming stimulus irrespective of its sensory modality. We defined the temporal window of integration as the range of stimulus onset asynchronies where performance was below 75% in the temporal order judgment task, as well as the range of stimulus onset asynchronies where responses showed multisensory facilitation (race model violation) in the reaction time task. In 5 of 11 participants, we observed audio-visual stimulus onset asynchronies where reaction time was significantly accelerated (indicating successful integration in this task) while performance was accurate in the temporal order judgment task (indicating successful segregation in that task). This dissociation suggests that in some participants, the boundaries of the temporal window of integration can adaptively recalibrate in order to optimize performance according to specific task demands. PMID:23951203
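The race-model criterion used above to define multisensory facilitation is Miller's inequality: the cumulative RT distribution for audio-visual stimuli may not exceed the sum of the two unisensory distributions; where it does, integration is inferred. A minimal sketch of that test follows, assuming vectors of raw RTs per condition; the simulated RTs are illustrative, not data from the study.

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, quantiles=np.arange(0.05, 1.0, 0.05)):
    """Miller's race-model inequality: G_AV(t) <= G_A(t) + G_V(t).
    Returns, at probe times taken from AV quantiles, how much the audio-visual
    CDF exceeds the race-model bound (positive values indicate violation)."""
    t = np.quantile(rt_av, quantiles)
    cdf = lambda rts, t: np.mean(rts[:, None] <= t[None, :], axis=0)
    bound = np.minimum(cdf(rt_a, t) + cdf(rt_v, t), 1.0)
    return cdf(rt_av, t) - bound

# Illustrative RTs in ms; real data would come from the reaction-time task
rng = np.random.default_rng(2)
rt_a = rng.normal(320, 40, 200)
rt_v = rng.normal(340, 40, 200)
rt_av = rng.normal(280, 35, 200)
print(np.max(race_model_violation(rt_av, rt_a, rt_v)))
```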
Cortical activity patterns predict speech discrimination ability
Engineer, Crystal T; Perez, Claudia A; Chen, YeTing H; Carraway, Ryan S; Reed, Amanda C; Shetake, Jai A; Jakkamsetti, Vikram; Chang, Kevin Q; Kilgard, Michael P
2010-01-01
Neural activity in the cerebral cortex can explain many aspects of sensory perception. Extensive psychophysical and neurophysiological studies of visual motion and vibrotactile processing show that the firing rate of cortical neurons averaged across 50–500 ms is well correlated with discrimination ability. In this study, we tested the hypothesis that primary auditory cortex (A1) neurons use temporal precision on the order of 1–10 ms to represent speech sounds shifted into the rat hearing range. Neural discrimination was highly correlated with behavioral performance on 11 consonant-discrimination tasks when spike timing was preserved and was not correlated when spike timing was eliminated. This result suggests that spike timing contributes to the auditory cortex representation of consonant sounds. PMID:18425123
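Neural discrimination with spike timing preserved versus eliminated, as described above, is often implemented as a nearest-prototype classifier applied to spike trains binned at fine versus coarse resolution. The sketch below illustrates that general idea; the binning scheme, distance metric, and simulated spike trains are assumptions for illustration, not the authors' exact classifier.

```python
import numpy as np

def classify_trials(responses_a, responses_b, bin_ms, duration_ms=400):
    """Leave-one-out nearest-prototype classification of binned spike trains.
    responses_*: lists of spike-time arrays (ms). Fine bins preserve spike
    timing; a single bin spanning the response preserves only spike count."""
    edges = np.arange(0, duration_ms + bin_ms, bin_ms)
    binned = lambda trials: np.array([np.histogram(t, edges)[0] for t in trials])
    a, b = binned(responses_a), binned(responses_b)
    correct = 0
    for data, other in ((a, b), (b, a)):
        for i in range(len(data)):
            rest = np.delete(data, i, axis=0)
            d_same = np.linalg.norm(data[i] - rest.mean(axis=0))
            d_other = np.linalg.norm(data[i] - other.mean(axis=0))
            correct += d_same < d_other
    return correct / (len(a) + len(b))

# Illustrative spike trains (ms) for responses to two consonant sounds
rng = np.random.default_rng(3)
sound1 = [np.sort(rng.uniform(0, 400, 20)) for _ in range(30)]
sound2 = [np.sort(rng.uniform(0, 400, 20) + 10) for _ in range(30)]
print(classify_trials(sound1, sound2, bin_ms=1))    # spike timing preserved
print(classify_trials(sound1, sound2, bin_ms=400))  # spike count only
```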
Mele, Sonia; Ghirardi, Valentina; Craighero, Laila
2017-12-01
A long-standing debate concerns whether the sensorimotor coding carried out during observation of transitive actions reflects low-level movement implementation details or movement goals. In contrast, phonemes and emotional facial expressions are intransitive actions that do not fall into this debate. The investigation of phoneme discrimination has proven to be a good model to demonstrate that the sensorimotor system plays a role in understanding acoustically presented actions. In the present study, we adapted the experimental paradigms already used in phoneme discrimination during face posture manipulation to the discrimination of emotional facial expressions. We submitted participants to a lower or to an upper face posture manipulation during the execution of a four-alternative labelling task on pictures randomly taken from four morphed continua between two emotional facial expressions. The results showed that the implementation of low-level movement details influences the discrimination of ambiguous facial expressions that differ in the specific involvement of those movement details. These findings indicate that facial expression discrimination is a good model to test the role of the sensorimotor system in the perception of visually presented actions.
Monkey pulvinar neurons fire differentially to snake postures.
Le, Quan Van; Isbell, Lynne A; Matsumoto, Jumpei; Le, Van Quang; Hori, Etsuro; Tran, Anh Hai; Maior, Rafael S; Tomaz, Carlos; Ono, Taketoshi; Nishijo, Hisao
2014-01-01
There is growing evidence from both behavioral and neurophysiological approaches that primates are able to rapidly discriminate visually between snakes and innocuous stimuli. Recent behavioral evidence suggests that primates are also able to discriminate the level of threat posed by snakes, by responding more intensely to a snake model poised to strike than to snake models in coiled or sinusoidal postures (Etting and Isbell 2014). In the present study, we examine the potential for an underlying neurological basis for this ability. Previous research indicated that the pulvinar is highly sensitive to snake images. We thus recorded pulvinar neurons in Japanese macaques (Macaca fuscata) while they viewed photos of snakes in striking and non-striking postures in a delayed non-matching-to-sample (DNMS) task. Of 821 neurons recorded, 78 visually responsive neurons were tested with all of the snake images. We found that neurons in the medial and dorsolateral pulvinar responded more strongly to snakes in threat displays poised to strike than to snakes in non-threat-displaying postures, with no significant difference in response latencies. A multidimensional scaling analysis of the 78 visually responsive neurons indicated that threat-displaying and non-threat-displaying snakes were separated into two different clusters in the first epoch of 50 ms after stimulus onset, suggesting bottom-up visual information processing. These results indicate that pulvinar neurons in primates discriminate snakes poised to strike from those in non-threat-displaying postures. This neuronal ability likely facilitates behavioral discrimination and has clear adaptive value. Our results are thus consistent with the Snake Detection Theory, which posits that snakes were instrumental in the evolution of primate visual systems.
The use of visual cues in gravity judgements on parabolic motion.
Jörges, Björn; Hagenfeld, Lena; López-Moliner, Joan
2018-06-21
Evidence suggests that humans rely on an earth-gravity prior for sensory-motor tasks like catching or reaching. Even under earth-discrepant conditions, this prior biases perception and action towards assuming a gravitational downwards acceleration of 9.81 m/s². This can be particularly detrimental in interactions with virtual environments employing earth-discrepant gravity conditions for their visual presentation. The present study thus investigates how well humans discriminate visually presented gravities and which cues they use to extract gravity from the visual scene. To this end, we employed a two-interval forced-choice design. In Experiment 1, participants had to judge which of two presented parabolas had the higher underlying gravity. We used two initial vertical velocities, two horizontal velocities and a constant target size. Experiment 2 added a manipulation of the reliability of the target size. Experiment 1 shows that participants have generally high discrimination thresholds for visually presented gravities, with Weber fractions of 13% to beyond 30%. We identified the rate of change of the elevation angle (ẏ) and the visual angle (θ) as major cues. Experiment 2 suggests furthermore that size variability has a small influence on discrimination thresholds, while at the same time larger size variability increases reliance on ẏ and decreases reliance on θ. All in all, even though we use all available information, humans display low precision when extracting the governing gravity from a visual scene, which might further impact our capabilities of adapting to earth-discrepant gravity conditions with visual information alone. Copyright © 2018. Published by Elsevier Ltd.
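Discrimination thresholds in a two-interval forced-choice design of this kind are typically obtained by fitting a psychometric function to the proportion of "higher gravity" responses and expressing the 75%-correct increment relative to the 9.81 m/s² standard as a Weber fraction. The sketch below follows those standard assumptions; the data points and fitted values are illustrative, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def p_higher(gravity, mu, sigma):
    """Cumulative-Gaussian psychometric function for the 2IFC gravity judgement."""
    return norm.cdf(gravity, loc=mu, scale=sigma)

# Illustrative data: comparison gravities (m/s^2) against a 9.81 m/s^2 standard
# and the proportion of trials judged "higher" at each level.
g = np.array([7.5, 8.5, 9.3, 9.81, 10.3, 11.1, 12.1])
p = np.array([0.08, 0.22, 0.40, 0.52, 0.63, 0.81, 0.95])

(mu, sigma), _ = curve_fit(p_higher, g, p, p0=[9.81, 1.0])
threshold = norm.ppf(0.75) * sigma            # 75%-correct increment in m/s^2
weber_fraction = threshold / 9.81
print(f"Weber fraction ~ {weber_fraction:.2f}")
```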
Evidence of Blocking with Geometric Cues in a Virtual Watermaze
ERIC Educational Resources Information Center
Redhead, Edward S.; Hamilton, Derek A.
2009-01-01
Three computer based experiments, testing human participants in a non-immersive virtual watermaze task, used a blocking design to assess whether two sets of geometric cues would compete in a manner described by associative models of learning. In stage 1, participants were required to discriminate between visually distinct platforms. In stage 2,…
Display size effects in visual search: analyses of reaction time distributions as mixtures.
Reynolds, Ann; Miller, Jeff
2009-05-01
In a reanalysis of data from Cousineau and Shiffrin (2004) and two new visual search experiments, we used a likelihood ratio test to examine the full distributions of reaction time (RT) for evidence that the display size effect is a mixture-type effect that occurs on only a proportion of trials, leaving RT in the remaining trials unaffected, as is predicted by serial self-terminating search models. Experiment 1 was a reanalysis of Cousineau and Shiffrin's data, for which a mixture effect had previously been established by a bimodal distribution of RTs, and the results confirmed that the likelihood ratio test could also detect this mixture. Experiment 2 applied the likelihood ratio test within a more standard visual search task with a relatively easy target/distractor discrimination, and Experiment 3 applied it within a target identification search task within the same types of stimuli. Neither of these experiments provided any evidence for the mixture-type display size effect predicted by serial self-terminating search models. Overall, these results suggest that serial self-terminating search models may generally be applicable only with relatively difficult target/distractor discriminations, and then only for some participants. In addition, they further illustrate the utility of analysing full RT distributions in addition to mean RT.
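The mixture account tested above predicts that large-display RTs are a probabilistic mixture of unaffected trials (distributed like baseline) and slowed trials. One way to set up the comparison is a likelihood-ratio test of a uniform shift against such a mixture. The sketch below uses Gaussian components for simplicity, whereas the published analysis relied on a more elaborate parametric treatment of the RT distributions, so treat it only as an illustration of the logic; all simulated values are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_shift_vs_mixture(rt_small, rt_large):
    """Likelihood-ratio comparison of two accounts of a display-size effect:
    (a) all large-display RTs shift by d; (b) only a proportion 1-p shifts."""
    mu, sd = rt_small.mean(), rt_small.std(ddof=1)   # baseline from small displays

    def nll_shift(params):
        d, = params
        return -np.sum(norm.logpdf(rt_large, mu + d, sd))

    def nll_mixture(params):
        p, d = params
        p = np.clip(p, 1e-6, 1 - 1e-6)
        like = p * norm.pdf(rt_large, mu, sd) + (1 - p) * norm.pdf(rt_large, mu + d, sd)
        return -np.sum(np.log(like))

    shift = minimize(nll_shift, x0=[50.0], method="Nelder-Mead")
    mixture = minimize(nll_mixture, x0=[0.5, 100.0], method="Nelder-Mead")
    return 2 * (shift.fun - mixture.fun)   # large values favour the mixture account

# Illustrative RTs (ms): a true mixture should yield a large statistic
rng = np.random.default_rng(4)
rt_small = rng.normal(450, 60, 300)
rt_large = np.where(rng.random(400) < 0.5,
                    rng.normal(450, 60, 400), rng.normal(600, 60, 400))
print(fit_shift_vs_mixture(rt_small, rt_large))
```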
Microcontroller based fibre-optic visual presentation system for multisensory neuroimaging.
Kurniawan, Veldri; Klemen, Jane; Chambers, Christopher D
2011-10-30
Presenting visual stimuli in physical 3D space during fMRI experiments carries significant technical challenges. Certain types of multisensory visuotactile experiments and visuomotor tasks require presentation of visual stimuli in peripersonal space, which cannot be accommodated by ordinary projection screens or binocular goggles. However, light points produced by a group of LEDs can be transmitted through fibre-optic cables and positioned anywhere inside the MRI scanner. Here we describe the design and implementation of a microcontroller-based programmable digital device for controlling fibre-optically transmitted LED lights from a PC. The main feature of this device is the ability to independently control the colour, brightness, and timing of each LED. Moreover, the device was designed in a modular and extensible way, which enables easy adaptation for various experimental paradigms. The device was tested and validated in three fMRI experiments involving basic visual perception, a simple colour discrimination task, and a blocked multisensory visuo-tactile task. The results revealed significant lateralized activation in occipital cortex of all participants, a reliable response in ventral occipital areas to colour stimuli elicited by the device, and strong activations in multisensory brain regions in the multisensory task. Overall, these findings confirm the suitability of this device for presenting complex fibre-optic visual and cross-modal stimuli inside the scanner. Copyright © 2011 Elsevier B.V. All rights reserved.
Visual perception of fatigued lifting actions.
Fischer, Steven L; Albert, Wayne J; McGarry, Tim
2012-12-01
Fatigue-related changes in lifting kinematics may expose workers to undue injury risks. Early detection of accumulating fatigue offers the prospect of intervention strategies to mitigate such fatigue-related risks. In a first step towards this objective, this study investigated whether fatigue detection was accessible to visual perception and, if so, what key visual information was required for successful fatigue discrimination. Eighteen participants were tasked with identifying fatigued lifts when viewing 24 trials presented using both video and point-light representations. Each trial comprised a pair of lifting actions containing a fresh and a fatigued lift from the same individual presented in counter-balanced sequence. Confidence intervals demonstrated that the frequency of correct responses for both sexes exceeded chance expectations (50%) for both video (68%±12%) and point-light representations (67%±10%), demonstrating that fatigued lifting kinematics are open to visual perception. There were no significant differences between sexes or viewing conditions, the latter result indicating that kinematic information alone is sufficient for successful fatigue discrimination. Moreover, a single-viewer investigation showed fatigue detection (75%) from point-light information describing only the kinematics of the lifted box. These preliminary findings may have important workplace applications if fatigue discrimination rates can be improved upon through future research. Copyright © 2012 Elsevier B.V. All rights reserved.
How task demands shape brain responses to visual food cues.
Pohl, Tanja Maria; Tempelmann, Claus; Noesselt, Toemme
2017-06-01
Several previous imaging studies have aimed at identifying the neural basis of visual food cue processing in humans. However, there is little consistency of the functional magnetic resonance imaging (fMRI) results across studies. Here, we tested the hypothesis that this variability across studies might - at least in part - be caused by the different tasks employed. In particular, we assessed directly the influence of task set on brain responses to food stimuli with fMRI using two tasks (colour vs. edibility judgement, between-subjects design). When participants judged colour, the left insula, the left inferior parietal lobule, occipital areas, the left orbitofrontal cortex and other frontal areas expressed enhanced fMRI responses to food relative to non-food pictures. However, when judging edibility, enhanced fMRI responses to food pictures were observed in the superior and middle frontal gyrus and in medial frontal areas including the pregenual anterior cingulate cortex and ventromedial prefrontal cortex. This pattern of results indicates that task sets can significantly alter the neural underpinnings of food cue processing. We propose that judging low-level visual stimulus characteristics - such as colour - triggers stimulus-related representations in the visual and even in gustatory cortex (insula), whereas discriminating abstract stimulus categories activates higher order representations in both the anterior cingulate and prefrontal cortex. Hum Brain Mapp 38:2897-2912, 2017. © 2017 Wiley Periodicals, Inc.
Kubanek, J; Wang, C; Snyder, L H
2013-11-01
We often look at and sometimes reach for visible targets. Looking at a target is fast and relatively easy. By comparison, reaching for an object is slower and is associated with a larger cost. We hypothesized that, as a result of these differences, abrupt visual onsets may drive the circuits involved in saccade planning more directly and with less intermediate regulation than the circuits involved in reach planning. To test this hypothesis, we recorded discharge activity of neurons in the parietal oculomotor system (area LIP) and in the parietal somatomotor system (area PRR) while monkeys performed a visually guided movement task and a choice task. We found that in the visually guided movement task LIP neurons show a prominent transient response to target onset. PRR neurons also show a transient response, although this response is reduced in amplitude, is delayed, and has a slower rise time compared with LIP. A more striking difference is observed in the choice task. The transient response of PRR neurons is almost completely abolished and replaced with a slow buildup of activity, while the LIP response is merely delayed and reduced in amplitude. Our findings suggest that the oculomotor system is more closely and obligatorily coupled to the visual system, whereas the somatomotor system operates in a more discriminating manner.
Perceptual Learning via Modification of Cortical Top-Down Signals
Schäfer, Roland; Vasilaki, Eleni; Senn, Walter
2007-01-01
The primary visual cortex (V1) is pre-wired to facilitate the extraction of behaviorally important visual features. Collinear edge detectors in V1, for instance, mutually enhance each other to improve the perception of lines against a noisy background. The same pre-wiring that facilitates line extraction, however, is detrimental when subjects have to discriminate the brightness of different line segments. How is it possible to improve in one task by unsupervised practicing, without getting worse in the other task? The classical view of perceptual learning is that practicing modulates the feedforward input stream through synaptic modifications onto or within V1. However, any rewiring of V1 would deteriorate other perceptual abilities different from the trained one. We propose a general neuronal model showing that perceptual learning can modulate top-down input to V1 in a task-specific way while feedforward and lateral pathways remain intact. Consistent with biological data, the model explains how context-dependent brightness discrimination is improved by a top-down recruitment of recurrent inhibition and a top-down induced increase of the neuronal gain within V1. Both the top-down modulation of inhibition and of neuronal gain are suggested to be universal features of cortical microcircuits which enable perceptual learning. PMID:17715996
Visual habit formation in monkeys with neurotoxic lesions of the ventrocaudal neostriatum
Fernandez-Ruiz, Juan; Wang, Jin; Aigner, Thomas G.; Mishkin, Mortimer
2001-01-01
Visual habit formation in monkeys, assessed by concurrent visual discrimination learning with 24-h intertrial intervals (ITI), was found earlier to be impaired by removal of the inferior temporal visual area (TE) but not by removal of either the medial temporal lobe or inferior prefrontal convexity, two of TE's major projection targets. To assess the role in this form of learning of another pair of structures to which TE projects, namely the rostral portion of the tail of the caudate nucleus and the overlying ventrocaudal putamen, we injected a neurotoxin into this neostriatal region of several monkeys and tested them on the 24-h ITI task as well as on a test of visual recognition memory. Compared with unoperated monkeys, the experimental animals were unaffected on the recognition test but showed an impairment on the 24-h ITI task that was highly correlated with the extent of their neostriatal damage. The findings suggest that TE and its projection areas in the ventrocaudal neostriatum form part of a circuit that selectively mediates visual habit formation. PMID:11274442
Words, shape, visual search and visual working memory in 3-year-old children.
Vales, Catarina; Smith, Linda B
2015-01-01
Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.
Wang, Zhengke; Cheng-Lai, Alice; Song, Yan; Cutting, Laurie; Jiang, Yuzheng; Lin, Ou; Meng, Xiangzhi; Zhou, Xiaolin
2014-08-01
Learning to read involves discriminating between different written forms and establishing connections with phonology and semantics. This process may be partially built upon visual perceptual learning, during which the ability to process the attributes of visual stimuli progressively improves with practice. The present study investigated to what extent Chinese children with developmental dyslexia have deficits in perceptual learning by using a texture discrimination task, in which participants were asked to discriminate the orientation of target bars. Experiment 1 demonstrated that, when all of the participants started with the same initial stimulus-to-mask onset asynchrony (SOA) at 300 ms, the threshold SOA, adjusted according to response accuracy to reach 80% accuracy, did not show a decrement over 5 days of training for children with dyslexia, whereas this threshold SOA steadily decreased over the training for the control group. Experiment 2 used an adaptive procedure to determine the threshold SOA for each participant during training. Results showed that both the group with dyslexia and the control group attained perceptual learning over the sessions in 5 days, although the threshold SOAs were significantly higher for the group with dyslexia than for the control group; moreover, across individual participants, the threshold SOA negatively correlated with performance in Chinese character recognition. These findings suggest that deficits in visual perceptual processing and learning might, in part, underpin difficulty in reading Chinese. Copyright © 2014 John Wiley & Sons, Ltd.
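Adaptive threshold procedures of the kind used in Experiment 2 typically take the form of an up-down staircase on the SOA. The sketch below shows a 3-down/1-up rule, which converges near 79% correct, close to but not identical with the 80% criterion described above; the simulated observer, step size, and stopping rule are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def staircase_threshold(simulate_trial, start_soa=300, step=20,
                        n_reversals=10, floor=10):
    """3-down/1-up adaptive staircase: the SOA decreases after 3 consecutive
    correct responses and increases after each error. The threshold estimate
    is the mean SOA over the final reversals."""
    soa, correct_run, last_dir = start_soa, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if simulate_trial(soa):
            correct_run += 1
            if correct_run == 3:
                correct_run = 0
                if last_dir == +1:
                    reversals.append(soa)
                soa, last_dir = max(soa - step, floor), -1
        else:
            correct_run = 0
            if last_dir == -1:
                reversals.append(soa)
            soa, last_dir = soa + step, +1
    return np.mean(reversals[-6:])

# Illustrative observer whose accuracy falls smoothly at shorter SOAs
rng = np.random.default_rng(5)
observer = lambda soa: rng.random() < 1 / (1 + np.exp(-(soa - 120) / 25))
print(staircase_threshold(observer))
```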
The retention and disruption of color information in human short-term visual memory.
Nemes, Vanda A; Parry, Neil R A; Whitaker, David; McKeefry, Declan J
2012-01-27
Previous studies have demonstrated that the retention of information in short-term visual perceptual memory can be disrupted by the presentation of masking stimuli during interstimulus intervals (ISIs) in delayed discrimination tasks (S. Magnussen & W. W. Greenlee, 1999). We have exploited this effect in order to determine to what extent short-term perceptual memory is selective for stimulus color. We employed a delayed hue discrimination paradigm to measure the fidelity with which color information was retained in short-term memory. The task required 5 color normal observers to discriminate between spatially non-overlapping colored reference and test stimuli that were temporally separated by an ISI of 5 s. The points of subjective equality (PSEs) on the resultant psychometric matching functions provided an index of performance. Measurements were made in the presence and absence of mask stimuli presented during the ISI, which varied in hue around the equiluminant plane in DKL color space. For all reference stimuli, we found a consistent mask-induced, hue-dependent shift in PSE compared to the "no mask" conditions. These shifts were found to be tuned in color space, only occurring for a range of mask hues that fell within bandwidths of 29-37 deg. Outside this range, masking stimuli had little or no effect on measured PSEs. The results demonstrate that memory masking for color exhibits selectivity similar to that which has already been demonstrated for other visual attributes. The relatively narrow tuning of these interference effects suggests that short-term perceptual memory for color is based on higher order, non-linear color coding. © ARVO
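The points of subjective equality reported above are conventionally estimated as the 50% point of a cumulative-Gaussian fit to the matching data, and the memory-masking effect is then the shift between masked and unmasked PSEs. A minimal sketch under those assumptions follows; the hue values and response proportions are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Proportion of "test appears shifted in one hue direction relative to the
# remembered reference" responses, as a function of test hue angle (deg).
hue = np.array([-6, -4, -2, 0, 2, 4, 6], dtype=float)
p_no_mask = np.array([0.05, 0.15, 0.35, 0.50, 0.70, 0.90, 0.97])
p_mask = np.array([0.02, 0.08, 0.20, 0.38, 0.60, 0.82, 0.95])

psychometric = lambda x, pse, sigma: norm.cdf(x, loc=pse, scale=sigma)

(pse_clean, _), _ = curve_fit(psychometric, hue, p_no_mask, p0=[0.0, 2.0])
(pse_masked, _), _ = curve_fit(psychometric, hue, p_mask, p0=[0.0, 2.0])
print(f"mask-induced PSE shift: {pse_masked - pse_clean:+.2f} deg")
```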
Audiovisual speech perception development at varying levels of perceptual processing
Lalonde, Kaylah; Holt, Rachael Frush
2016-01-01
This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children. PMID:27106318
Cavanaugh, Matthew R; Barbot, Antoine; Carrasco, Marisa; Huxlin, Krystel R
2017-12-10
Training chronic, cortically-blind (CB) patients on a coarse [left-right] direction discrimination and integration (CDDI) task recovers performance on this task at trained, blind field locations. However, fine direction difference (FDD) thresholds remain elevated at these locations, limiting the usefulness of recovered vision in daily life. Here, we asked if this FDD impairment can be overcome by training CB subjects with endogenous, feature-based attention (FBA) cues. Ten CB subjects were recruited and trained on CDDI and FDD with an FBA cue or FDD with a neutral cue. After completion of each training protocol, FDD thresholds were re-measured with both neutral and FBA cues at trained, blind-field locations and at corresponding, intact-field locations. In intact portions of the visual field, FDD thresholds were lower when tested with FBA than neutral cues. Training subjects in the blind field on the CDDI task improved FDD performance to the point that a threshold could be measured, but these locations remained impaired relative to the intact field. FDD training with neutral cues resulted in better blind field FDD thresholds than CDDI training, but thresholds remained impaired relative to intact field levels, regardless of testing cue condition. Importantly, training FDD in the blind field with FBA lowered FDD thresholds relative to CDDI training, and allowed the blind field to reach thresholds similar to the intact field, even when FBA trained subjects were tested with a neutral rather than FBA cue. Finally, FDD training appeared to also recover normal integration thresholds at trained, blind-field locations, providing an interesting double dissociation with respect to CDDI training. In summary, mechanisms governing FBA appear to function normally in both intact and impaired regions of the visual field following V1 damage. Our results mark the first time that FDD thresholds in CB fields have been seen to reach intact field levels of performance. Moreover, FBA can be leveraged during visual training to recover normal, fine direction discrimination and integration performance at trained, blind-field locations, potentiating visual recovery of more complex and precise aspects of motion perception in cortically-blinded fields. Copyright © 2017 Elsevier Ltd. All rights reserved.
The loss of short-term visual representations over time: decay or temporal distinctiveness?
Mercer, Tom
2014-12-01
There has been much recent interest in the loss of visual short-term memories over the passage of time. According to decay theory, visual representations are gradually forgotten as time passes, reflecting a slow and steady distortion of the memory trace. However, this is controversial and decay effects can be explained in other ways. The present study aimed to reexamine the maintenance and loss of visual information over the short term. Decay and temporal distinctiveness models were tested using a delayed discrimination task, in which participants compared complex and novel objects over unfilled retention intervals of variable length. Experiment 1 found no significant change in the accuracy of visual memory from 2 to 6 s, but the gap separating trials reliably influenced task performance. Experiment 2 found evidence for information loss at a 10-s retention interval, but temporally separating trials restored the fidelity of visual memory, possibly because temporally isolated representations are distinct from older memory traces. In conclusion, visual representations lose accuracy at some point after 6 s, but only within temporally crowded contexts. These findings highlight the importance of temporal distinctiveness within visual short-term memory. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Visual Equivalence and Amodal Completion in Cuttlefish
Lin, I-Rong; Chiao, Chuan-Chin
2017-01-01
Modern cephalopods are notably the most intelligent invertebrates and this is accompanied by keen vision. Despite extensive studies investigating the visual systems of cephalopods, little is known about their visual perception and object recognition. In the present study, we investigated the visual processing of the cuttlefish Sepia pharaonis, including visual equivalence and amodal completion. Cuttlefish were trained to discriminate images of shrimp and fish using the operant conditioning paradigm. After cuttlefish reached the learning criteria, a series of discrimination tasks were conducted. In the visual equivalence experiment, several transformed versions of the training images, such as images reduced in size, images reduced in contrast, sketches of the images, the contours of the images, and silhouettes of the images, were used. In the amodal completion experiment, partially occluded views of the original images were used. The results showed that cuttlefish treated the reduced-size images and sketches as visually equivalent to the training images. Cuttlefish were also capable of recognizing partially occluded versions of the training images. Furthermore, individual differences in performance suggest that some cuttlefish may be able to recognize objects even when visual information is partly removed. These findings support the hypothesis that the visual perception of cuttlefish involves both visual equivalence and amodal completion. The results from this research also provide insights into the visual processing mechanisms used by cephalopods. PMID:28220075
The time course of shape discrimination in the human brain.
Ales, Justin M; Appelbaum, L Gregory; Cottereau, Benoit R; Norcia, Anthony M
2013-02-15
The lateral occipital cortex (LOC) activates selectively to images of intact objects versus scrambled controls, is selective for the figure-ground relationship of a scene, and exhibits at least some degree of invariance for size and position. Because of these attributes, it is considered to be a crucial part of the object recognition pathway. Here we show that human LOC is critically involved in perceptual decisions about object shape. High-density EEG was recorded while subjects performed a threshold-level shape discrimination task on texture-defined figures segmented by either phase or orientation cues. The appearance or disappearance of a figure region from a uniform background generated robust visual evoked potentials throughout retinotopic cortex as determined by inverse modeling of the scalp voltage distribution. Contrasting responses from trials containing shape changes that were correctly detected (hits) with trials in which no change occurred (correct rejects) revealed stimulus-locked, target-selective activity in the occipital visual areas LOC and V4 preceding the subject's response. Activity that was locked to the subjects' reaction time was present in the LOC. Response-locked activity in the LOC was determined to be related to shape discrimination for several reasons: shape-selective responses were silenced when subjects viewed identical stimuli but their attention was directed away from the shapes to a demanding letter discrimination task; shape-selectivity was present across four different stimulus configurations used to define the figure; LOC responses correlated with participants' reaction times. These results indicate that decision-related activity is present in the LOC when subjects are engaged in threshold-level shape discriminations. Copyright © 2012 Elsevier Inc. All rights reserved.
The effect of age upon the perception of 3-D shape from motion.
Norman, J Farley; Cheeseman, Jacob R; Pyles, Jessica; Baxter, Michael W; Thomason, Kelsey E; Calloway, Autum B
2013-12-18
Two experiments evaluated the ability of 50 older, middle-aged, and younger adults to discriminate the 3-dimensional (3-D) shape of curved surfaces defined by optical motion. In Experiment 1, temporal correspondence was disrupted by limiting the lifetimes of the moving surface points. In order to discriminate 3-D surface shape reliably, the younger and middle-aged adults needed a surface point lifetime of approximately 4 views (in the apparent motion sequences). In contrast, the older adults needed a much longer surface point lifetime of approximately 9 views in order to reliably perform the same task. In Experiment 2, the negative effect of age upon 3-D shape discrimination from motion was replicated. In this experiment, however, the participants' abilities to discriminate grating orientation and speed were also assessed. Edden et al. (2009) have recently demonstrated that behavioral grating orientation discrimination correlates with GABA (gamma aminobutyric acid) concentration in human visual cortex. Our results demonstrate that the negative effect of age upon 3-D shape perception from motion is not caused by impairments in the ability to perceive motion per se, but does correlate significantly with grating orientation discrimination. This result suggests that the age-related decline in 3-D shape discrimination from motion is related to decline in GABA concentration in visual cortex. Copyright © 2013 Elsevier B.V. All rights reserved.
A Salient and Task-Irrelevant Collinear Structure Hurts Visual Search
Tseng, Chia-huei; Jingling, Li
2015-01-01
Salient distractors draw our attention spontaneously, even when we intentionally want to ignore them. When this occurs, the real targets close to or overlapping with the distractors benefit from attention capture and thus are detected and discriminated more quickly. However, a puzzling opposite effect was observed in a search display with a column of vertical collinear bars presented as a task-irrelevant distractor [6]. In this case, it was harder to discriminate the targets overlapping with the salient distractor. Here we examined whether this effect originated from factors known to modulate attentional capture: (a) low probability—the probability of the target occurring at the collinear column was much lower (14%) than in the rest of the display (86%), so observers might strategically direct their attention away from the collinear distractor; (b) attentional control setting—the distractor and the target task interfered with each other because they engaged the same attentional set for continuity; and/or (c) lack of time to establish the optimal strategy. We tested these hypotheses by (a) increasing to 60% the trials in which targets overlapped with the same collinear distractor columns, (b) replacing the target task with one irrelevant to continuity (i.e., luminance discrimination), and (c) having our observers practice the same search task for 10 days. Our results speak against all these hypotheses and lead us to conclude that a collinear distractor impairs search at a level that is unaffected by probabilistic information, attentional setting, and learning. PMID:25909986
Spatial vision in older adults: perceptual changes and neural bases.
McKendrick, Allison M; Chan, Yu Man; Nguyen, Bao N
2018-05-17
The number of older adults is rapidly increasing internationally, leading to a significant increase in research on how healthy ageing impacts vision. Most clinical assessments of spatial vision involve simple detection (letter acuity, grating contrast sensitivity, perimetry). However, most natural visual environments are more spatially complicated, requiring contrast discrimination, and the delineation of object boundaries and contours, which are typically present on non-uniform backgrounds. In this review we discuss recent research that reports on the effects of normal ageing on these more complex visual functions, specifically in the context of recent neurophysiological studies. Recent research has concentrated on understanding the effects of healthy ageing on neural responses within the visual pathway in animal models. Such neurophysiological research has led to numerous, subsequently tested, hypotheses regarding the likely impact of healthy human ageing on specific aspects of spatial vision. Healthy normal ageing impacts significantly on spatial visual information processing from the retina through to visual cortex. Some human data validates that obtained from studies of animal physiology, however some findings indicate that rethinking of presumed neural substrates is required. Notably, not all spatial visual processes are altered by age. Healthy normal ageing impacts significantly on some spatial visual processes (in particular centre-surround tasks), but leaves contrast discrimination, contrast adaptation, and orientation discrimination relatively intact. The study of older adult vision contributes to knowledge of the brain mechanisms altered by the ageing process, can provide practical information regarding visual environments that older adults may find challenging, and may lead to new methods of assessing visual performance in clinical environments. © 2018 The Authors Ophthalmic & Physiological Optics © 2018 The College of Optometrists.
Sugimoto, Fumie; Kimura, Motohiro; Takeda, Yuji; Katayama, Jun'ichi
2017-08-16
In a three-stimulus oddball task, the amplitude of P3a elicited by deviant stimuli increases with an increase in the difficulty of discriminating between standard and target stimuli (i.e. task-difficulty effect on P3a), indicating that attentional capture by deviant stimuli is enhanced with an increase in task difficulty. This enhancement of attentional capture may be explained in terms of the modulation of modality-nonspecific temporal attention; that is, the participant's attention directed to the predicted timing of stimulus presentation is stronger when the task difficulty increases, which results in enhanced attentional capture. The present study examined this possibility with a modified three-stimulus oddball task consisting of a visual standard, a visual target, and four types of deviant stimuli defined by a combination of two modalities (visual and auditory) and two presentation timings (predicted and unpredicted). We expected that if the modulation of temporal attention is involved in enhanced attentional capture, then the task-difficulty effect on P3a should be reduced for unpredicted compared with predicted deviant stimuli irrespective of their modality; this is because the influence of temporal attention should be markedly weaker for unpredicted compared with predicted deviant stimuli. The results showed that the task-difficulty effect on P3a was significantly reduced for unpredicted compared with predicted deviant stimuli in both the visual and the auditory modalities. This result suggests that the modulation of modality-nonspecific temporal attention induced by the increase in task difficulty is at least partly involved in the enhancement of attentional capture by deviant stimuli.
Park, Bo Youn; Kim, Sujin; Cho, Yang Seok
2018-02-01
The congruency effect of a task-irrelevant distractor has been found to be modulated by task-relevant set size and display set size. The present study used a psychological refractory period (PRP) paradigm to examine the cognitive loci of the display set size effect (dilution effect) and the task-relevant set size effect (perceptual load effect) on distractor interference. A tone discrimination task (Task 1), in which a response was made to the pitch of the target tone, was followed by a letter discrimination task (Task 2) in which different types of visual target display were used. In Experiment 1, in which display set size was manipulated to examine the nature of the display set size effect on distractor interference in Task 2, the modulation of the congruency effect by display set size was observed at both short and long stimulus-onset asynchronies (SOAs), indicating that the display set size effect occurred after the target was selected for processing in the focused attention stage. In Experiment 2, in which task-relevant set size was manipulated to examine the nature of the task-relevant set size effect on distractor interference in Task 2, the effects of task-relevant set size increased with SOA, suggesting that the target selection efficiency in the preattentive stage was impaired with increasing task-relevant set size. These results suggest that display set size and task-relevant set size modulate distractor processing in different ways.
Hisagi, Miwako; Shafer, Valerie L.; Strange, Winifred; Sussman, Elyse S.
2015-01-01
This study examined automaticity of discrimination of a Japanese length contrast for consonants (miʃi vs. miʃʃi) in native (Japanese) and non-native (American-English) listeners using behavioral measures and the event-related potential (ERP) mismatch negativity (MMN). Attention to the auditory input was manipulated either away from the auditory input via a visual oddball task (Visual Attend), or to the input by asking the listeners to count auditory deviants (Auditory Attend). Results showed a larger MMN when attention was focused on the consonant contrast than away from it for both groups. The MMN was larger for consonant duration increments than decrements. No difference in MMN between the language groups was observed, but the Japanese listeners did show better behavioral discrimination than the American English listeners. In addition, behavioral responses showed a weak, but significant correlation with MMN amplitude. These findings suggest that both acoustic-phonetic properties and phonological experience affect automaticity of speech processing. PMID:26119918
Oscillations during observations: Dynamic oscillatory networks serving visuospatial attention.
Wiesman, Alex I; Heinrichs-Graham, Elizabeth; Proskovec, Amy L; McDermott, Timothy J; Wilson, Tony W
2017-10-01
The dynamic allocation of neural resources to discrete features within a visual scene enables us to react quickly and accurately to salient environmental circumstances. A network of bilateral cortical regions is known to subserve such visuospatial attention functions; however, the oscillatory and functional connectivity dynamics of information coding within this network are not fully understood. Particularly, the coding of information within prototypical attention-network hubs and the subsecond functional connections formed between these hubs have not been adequately characterized. Herein, we use the precise temporal resolution of magnetoencephalography (MEG) to define spectrally specific functional nodes and connections that underlie the deployment of attention in visual space. Twenty-three healthy young adults completed a visuospatial discrimination task designed to elicit multispectral activity in visual cortex during MEG, and the resulting data were preprocessed and reconstructed in the time-frequency domain. Oscillatory responses were projected to the cortical surface using a beamformer, and time series were extracted from peak voxels to examine their temporal evolution. Dynamic functional connectivity was then computed between nodes within each frequency band of interest. We find that visual attention network nodes are defined functionally by oscillatory frequency, that the allocation of attention to the visual space dynamically modulates functional connectivity between these regions on a millisecond timescale, and that these modulations significantly correlate with performance on a spatial discrimination task. We conclude that functional hubs underlying visuospatial attention are segregated not only anatomically but also by oscillatory frequency, and importantly that these oscillatory signatures promote dynamic communication between these hubs. Hum Brain Mapp 38:5128-5140, 2017. © 2017 Wiley Periodicals, Inc.
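The abstract does not detail the connectivity metric used, so the sketch below shows only one generic way to quantify dynamic, band-limited coupling between two source time series: sliding-window amplitude-envelope correlation on simulated signals. The sampling rate, band, window length, and signals are all assumptions for illustration; this is not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 600.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(5)
x = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)        # "node A"
y = np.sin(2 * np.pi * 10 * t + 0.4) + rng.normal(0, 0.5, t.size)  # "node B"

# Band-pass both signals in an example alpha band (8-12 Hz), then take
# the analytic amplitude envelope of each.
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
env_x = np.abs(hilbert(filtfilt(b, a, x)))
env_y = np.abs(hilbert(filtfilt(b, a, y)))

win = int(0.25 * fs)                         # 250 ms sliding window
for start in range(0, t.size - win + 1, win):
    sl = slice(start, start + win)
    r = np.corrcoef(env_x[sl], env_y[sl])[0, 1]
    print(f"{start / fs:.2f}-{(start + win) / fs:.2f} s: envelope correlation {r:.2f}")
```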
Visual Deficit in Albino Rats Following Fetal X Irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van der Elst, Dirk H.; Porter, Paul B.; Sharp, Joseph C.
1963-02-01
To investigate the effect of radiation on visual ability, five groups of rats on the 15th day of gestation received x irradiation in doses of 0, 50, 75, 100, or 150 r at 50 r/min. Two-thirds of the newborn rats died or were killed and eaten during the first postnatal week. The 75- and 50-r groups were lost entirely. The cannibalism occurred in all groups, so that its cause was uncertain. The remaining rats, which as fetuses had received 0, 100, and 150 r, were tested for visual discrimination in a water-flooded T-maze. All 3 groups discriminated a lighted escape ladder from the unlighted arm of the T with near-equal facility. Thereafter, as the light was dimmed progressively, performance declined in relation to dose. With the light turned off, but the bulb and ladder visible in ambient illumination, the 150-r group performed at chance, the 100-r group reliably better, and the control group better still. Thus, in the more precise task the irradiated animals failed. Since irradiation on the 15th day primarily damages the cortex, central blindness seems the most likely explanation. All animals had previously demonstrated their ability to solve the problem conceptually; hence a conclusion of visual deficiency seems justified. The similar performances of all groups during the easiest light discrimination test showed that the heavily irradiated and severely injured animals of the 150-r group were nonetheless able to learn readily. Finally, contrary to earlier studies in which irradiated rats were retarded in discriminating a light in a Skinner box, present tests reveal impairment neither in learning rate nor light discrimination.
The evaluation of display symbology - A chronometric study of visual search [on cathode ray tubes].
NASA Technical Reports Server (NTRS)
Remington, R.; Williams, D.
1984-01-01
Three single-target visual search tasks were used to evaluate a set of CRT symbols for a helicopter traffic display. The search tasks were representative of the kinds of information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. The results show that familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols such as a nearby flashing dot or surrounding square had a greater disruptive effect on the graphic symbols than the alphanumeric characters. The results suggest that a symbol set is like a list that must be learned. Factors that affect the time to respond to items in a list, such as familiarity and visual discriminability, and the division of list items into categories, also affect the time to identify symbols.
Age slowing down in detection and visual discrimination under varying presentation times.
Moret-Tatay, Carmen; Lemus-Zúñiga, Lenin-Guillermo; Tortosa, Diana Abad; Gamermann, Daniel; Vázquez-Martínez, Andrea; Navarro-Pardo, Esperanza; Conejero, J Alberto
2017-08-01
Reaction time has been described as a measure of perception, decision making, and other cognitive processes. The aim of this work was to examine age-related changes in executive functions in terms of demand load under varying presentation times. Two tasks, a signal detection task and a discrimination task, were performed by young and older university students. Furthermore, the response time distributions were characterized with an ex-Gaussian fit. The results indicated that the older participants were slower than the younger ones in both signal detection and discrimination. Moreover, the difference between the two processes was larger for the older participants, who also showed a higher distribution average except at the shortest and longest presentation times. The results suggest a general age-related slowing in both tasks across presentation times, except at the shortest and longest presentation times. Moreover, if these parameters are understood to be a reflection of executive functions, these findings are consistent with the common view that age-related cognitive deficits reflect a decline in this function. © 2017 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
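For readers unfamiliar with the ex-Gaussian characterization of reaction times, a minimal sketch using SciPy's exponentially modified normal distribution is given below; the reaction-time data are simulated, and the only substantive detail is the mapping of SciPy's parameters to mu, sigma, and tau.

```python
# scipy parameterizes the ex-Gaussian as exponnorm(K, loc, scale),
# with mu = loc, sigma = scale, tau = K * scale.
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(0)
# Simulated RTs: Gaussian component plus an exponential tail
rt_ms = rng.normal(420, 45, size=500) + rng.exponential(120, size=500)

K, loc, scale = exponnorm.fit(rt_ms)
mu, sigma, tau = loc, scale, K * scale
print(f"mu = {mu:.0f} ms, sigma = {sigma:.0f} ms, tau = {tau:.0f} ms")
# Group or condition differences would then be tested on mu, sigma, and tau
# rather than on the mean RT alone.
```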
Dye-enhanced visualization of rat whiskers for behavioral studies.
Rigosa, Jacopo; Lucantonio, Alessandro; Noselli, Giovanni; Fassihi, Arash; Zorzin, Erik; Manzino, Fabrizio; Pulecchi, Francesca; Diamond, Mathew E
2017-06-14
Visualization and tracking of the facial whiskers is required in an increasing number of rodent studies. Although many approaches have been employed, only high-speed videography has proven adequate for measuring whisker motion and deformation during interaction with an object. However, whisker visualization and tracking is challenging for multiple reasons, primary among them the low contrast of the whisker against its background. Here, we demonstrate a fluorescent dye method suitable for visualization of one or more rat whiskers. The process makes the dyed whisker(s) easily visible against a dark background. The coloring does not influence the behavioral performance of rats trained on a vibrissal vibrotactile discrimination task, nor does it affect the whiskers' mechanical properties.
The effect of encoding conditions on learning in the prototype distortion task.
Lee, Jessica C; Livesey, Evan J
2017-06-01
The prototype distortion task demonstrates that it is possible to learn about a category of physically similar stimuli through mere observation. However, there have been few attempts to test whether different encoding conditions affect learning in this task. This study compared prototypicality gradients produced under incidental learning conditions in which participants performed a visual search task, with those produced under intentional learning conditions in which participants were required to memorize the stimuli. Experiment 1 showed that similar prototypicality gradients could be obtained for category endorsement and familiarity ratings, but also found (weaker) prototypicality gradients in the absence of exposure. In Experiments 2 and 3, memorization was found to strengthen prototypicality gradients in familiarity ratings in comparison to visual search, but there were no group differences in participants' ability to discriminate between novel and presented exemplars. Although the Search groups in Experiments 2 and 3 produced prototypicality gradients, they were no different in magnitude to those produced in the absence of stimulus exposure in Experiment 1, suggesting that incidental learning during visual search was not conducive to producing prototypicality gradients. This study suggests that learning in the prototype distortion task is not implicit in the sense of resulting automatically from exposure, is affected by the nature of encoding, and should be considered in light of potential learning-at-test effects.
Demanuele, Charmaine; Bähner, Florian; Plichta, Michael M; Kirsch, Peter; Tost, Heike; Meyer-Lindenberg, Andreas; Durstewitz, Daniel
2015-01-01
Multivariate pattern analysis can reveal new information from neuroimaging data to illuminate human cognition and its disturbances. Here, we develop a methodological approach, based on multivariate statistical/machine learning and time series analysis, to discern cognitive processing stages from functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) time series. We apply this method to data recorded from a group of healthy adults whilst performing a virtual reality version of the delayed win-shift radial arm maze (RAM) task. This task has been frequently used to study working memory and decision making in rodents. Using linear classifiers and multivariate test statistics in conjunction with time series bootstraps, we show that different cognitive stages of the task, as defined by the experimenter, namely, the encoding/retrieval, choice, reward and delay stages, can be statistically discriminated from the BOLD time series in brain areas relevant for decision making and working memory. Discrimination of these task stages was significantly reduced during poor behavioral performance in dorsolateral prefrontal cortex (DLPFC), but not in the primary visual cortex (V1). Experimenter-defined dissection of time series into class labels based on task structure was confirmed by an unsupervised, bottom-up approach based on Hidden Markov Models. Furthermore, we show that different groupings of recorded time points into cognitive event classes can be used to test hypotheses about the specific cognitive role of a given brain region during task execution. We found that whilst the DLPFC strongly differentiated between task stages associated with different memory loads, but not between different visual-spatial aspects, the reverse was true for V1. Our methodology illustrates how different aspects of cognitive information processing during one and the same task can be separated and attributed to specific brain regions based on information contained in multivariate patterns of voxel activity.
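A minimal sketch of the core idea, decoding experimenter-defined task stages from multivoxel patterns with a cross-validated linear classifier, is given below; it uses simulated data and scikit-learn rather than the authors' pipeline, and omits the time-series bootstrap and HMM analyses described in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_per_stage, n_voxels = 60, 200
stages = ["encoding", "choice", "reward", "delay"]

# Simulated BOLD patterns: rows are time points, columns are voxels,
# with a small mean offset per task stage.
X = np.vstack([rng.normal(loc=i * 0.2, scale=1.0, size=(n_per_stage, n_voxels))
               for i, _ in enumerate(stages)])
y = np.repeat(stages, n_per_stage)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated stage decoding accuracy:", scores.mean().round(2))
# A permutation test on `scores` would be a simple analogue of the paper's
# bootstrap-based significance testing.
```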
A new metaphor for projection-based visual analysis and data exploration
NASA Astrophysics Data System (ADS)
Schreck, Tobias; Panse, Christian
2007-01-01
In many important application domains such as Business and Finance, Process Monitoring, and Security, huge and quickly increasing volumes of complex data are collected. Strong efforts are underway developing automatic and interactive analysis tools for mining useful information from these data repositories. Many data analysis algorithms require an appropriate definition of similarity (or distance) between data instances to allow meaningful clustering, classification, and retrieval, among other analysis tasks. Projection-based data visualization is highly interesting (a) for visual discrimination analysis of a data set within a given similarity definition, and (b) for comparative analysis of similarity characteristics of a given data set represented by different similarity definitions. We introduce an intuitive and effective novel approach for projection-based similarity visualization for interactive discrimination analysis, data exploration, and visual evaluation of metric space effectiveness. The approach is based on the convex hull metaphor for visually aggregating sets of points in projected space, and it can be used with a variety of different projection techniques. The effectiveness of the approach is demonstrated by application on two well-known data sets. Statistical evidence supporting the validity of the hull metaphor is presented. We advocate the hull-based approach over the standard symbol-based approach to projection visualization, as it allows a more effective perception of similarity relationships and class distribution characteristics.
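A minimal sketch of the hull metaphor, under the assumption of a PCA projection and synthetic data (the paper supports a variety of projection techniques), could look like this: each class of projected points is drawn as a filled convex hull rather than a cloud of individual symbols.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import ConvexHull
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# Three synthetic classes in a 10-dimensional feature space
X = np.vstack([rng.normal(c, 1.0, size=(80, 10)) for c in (0.0, 2.5, 5.0)])
labels = np.repeat([0, 1, 2], 80)

proj = PCA(n_components=2).fit_transform(X)     # 2-D projection

for cls in np.unique(labels):
    pts = proj[labels == cls]
    hull = ConvexHull(pts)
    cycle = np.append(hull.vertices, hull.vertices[0])   # close the polygon
    plt.fill(pts[cycle, 0], pts[cycle, 1], alpha=0.3, label=f"class {cls}")
plt.legend()
plt.title("Convex-hull view of a 2-D projection")
plt.show()
```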
Interest Inventory Items as Reinforcing Stimuli: A Test of the A-R-D Theory.
ERIC Educational Resources Information Center
Staats, Arthur W.; And Others
An experiment was conducted to test the hypothesis that interest inventory items would function as reinforcing stimuli in a visual discrimination task. When previously rated liked and disliked items from the Strong Vocational Interest Blank were differentially presented following one of two responses, subjects learned to respond to the stimulus…
Castillo-Padilla, Diana V; Funke, Klaus
2016-01-01
The early cortical critical period resembles a state of enhanced neuronal plasticity enabling the establishment of specific neuronal connections during first sensory experience. Visual performance with regard to pattern discrimination is impaired if the cortex is deprived of visual input during the critical period. We wondered how unspecific activation of the visual cortex before closure of the critical period using repetitive transcranial magnetic stimulation (rTMS) could affect the critical period and the visual performance of the experimental animals. Would it cause premature closure of the plastic state and thus worsen experience-dependent visual performance, or would it be able to preserve plasticity? Effects of intermittent theta-burst stimulation (iTBS) were compared with those of an enriched environment (EE) during dark-rearing (DR) from birth. Rats dark-reared in a standard cage showed poor improvement in a visual pattern discrimination task, while rats housed in EE or treated with iTBS showed performance indistinguishable from rats reared in a normal light/dark cycle. The behavioral effects were accompanied by correlated changes in the expression of brain-derived neurotrophic factor (BDNF) and atypical PKC (PKCζ/PKMζ), two factors controlling stabilization of synaptic potentiation. It appears that not only nonvisual sensory activity and exercise but also cortical activation induced by rTMS has the potential to alleviate the effects of DR on cortical development, most likely due to stimulation of BDNF synthesis and release. As we showed previously, iTBS reduced the expression of parvalbumin in inhibitory cortical interneurons, indicating that modulation of the activity of fast-spiking interneurons contributes to the observed effects of iTBS. © 2015 Wiley Periodicals, Inc.
Lagas, Alice K.; Black, Joanna M.; Byblow, Winston D.; Fleming, Melanie K.; Goodman, Lucy K.; Kydd, Robert R.; Russell, Bruce R.; Stinear, Cathy M.; Thompson, Benjamin
2016-01-01
The selective serotonin reuptake inhibitor fluoxetine significantly enhances adult visual cortex plasticity in the rat. This effect is related to decreased gamma-aminobutyric acid (GABA) mediated inhibition and identifies fluoxetine as a potential agent for enhancing plasticity in the adult human brain. We tested the hypothesis that fluoxetine would enhance visual perceptual learning of a motion direction discrimination (MDD) task in humans. We also investigated (1) the effect of fluoxetine on visual and motor cortex excitability and (2) the impact of increased GABA-mediated inhibition following a single dose of triazolam on post-training MDD task performance. In a double-blind, placebo-controlled design, 20 healthy adult participants completed a 19-day course of fluoxetine (n = 10, 20 mg per day) or placebo (n = 10). Participants were trained on the MDD task over the final 5 days of fluoxetine administration. Accuracy for the trained MDD stimulus and an untrained MDD stimulus configuration was assessed before and after training, after triazolam and 1 week after triazolam. Motor and visual cortex excitability were measured using transcranial magnetic stimulation. Fluoxetine did not enhance the magnitude or rate of perceptual learning, and full transfer of learning to the untrained stimulus was observed for both groups. After training was complete, triazolam had no effect on trained task performance but significantly impaired untrained task performance. No consistent effects of fluoxetine on cortical excitability were observed. The results do not support the hypothesis that fluoxetine can enhance learning in humans. However, the specific effect of triazolam on MDD task performance for the untrained stimulus suggests that learning and learning transfer rely on dissociable neural mechanisms. PMID:27807412
Monkey Pulvinar Neurons Fire Differentially to Snake Postures
Le, Quan Van; Isbell, Lynne A.; Matsumoto, Jumpei; Le, Van Quang; Hori, Etsuro; Tran, Anh Hai; Maior, Rafael S.; Tomaz, Carlos; Ono, Taketoshi; Nishijo, Hisao
2014-01-01
There is growing evidence from both behavioral and neurophysiological approaches that primates are able to rapidly discriminate visually between snakes and innocuous stimuli. Recent behavioral evidence suggests that primates are also able to discriminate the level of threat posed by snakes, by responding more intensely to a snake model poised to strike than to snake models in coiled or sinusoidal postures (Etting and Isbell 2014). In the present study, we examine the potential for an underlying neurological basis for this ability. Previous research indicated that the pulvinar is highly sensitive to snake images. We thus recorded pulvinar neurons in Japanese macaques (Macaca fuscata) while they viewed photos of snakes in striking and non-striking postures in a delayed non-matching to sample (DNMS) task. Of 821 neurons recorded, 78 visually responsive neurons were tested with all of the snake images. We found that pulvinar neurons in the medial and dorsolateral pulvinar responded more strongly to snakes in threat displays poised to strike than to snakes in non-threat-displaying postures, with no significant difference in response latencies. A multidimensional scaling analysis of the 78 visually responsive neurons indicated that threat-displaying and non-threat-displaying snakes were separated into two different clusters in the first epoch of 50 ms after stimulus onset, suggesting bottom-up visual information processing. These results indicate that pulvinar neurons in primates discriminate snakes poised to strike from those in non-threat-displaying postures. This neuronal ability likely facilitates behavioral discrimination and has clear adaptive value. Our results are thus consistent with the Snake Detection Theory, which posits that snakes were instrumental in the evolution of primate visual systems. PMID:25479158
Colour vision in ADHD: part 1--testing the retinal dopaminergic hypothesis.
Kim, Soyeon; Al-Haj, Mohamed; Chen, Samantha; Fuller, Stuart; Jain, Umesh; Carrasco, Marisa; Tannock, Rosemary
2014-10-24
Our aim was to test the retinal dopaminergic hypothesis, which posits deficient blue color perception in ADHD resulting from hypofunctioning CNS and retinal dopamine, to which blue cones are exquisitely sensitive. Purported sex differences in red color perception were also explored. Thirty young adults diagnosed with ADHD and 30 healthy young adults, matched on age and gender, performed a psychophysical task to measure blue and red color saturation and contrast discrimination ability. Visual function measures, such as the Visual Activities Questionnaire (VAQ) and Farnsworth-Munsell 100 hue test (FMT), were also administered. Females with ADHD were less accurate in discriminating blue and red color saturation relative to controls but did not differ in contrast sensitivity. Female control participants were better at discriminating red saturation than males, but no sex difference was present within the ADHD group. Poorer discrimination of red as well as blue color saturation in the female ADHD group may be partly attributable to a hypo-dopaminergic state in the retina, given that color perception (blue-yellow and red-green) is based on input from S-cones (short wavelength cone system) early in the visual pathway. The origin of female superiority in red perception may be rooted in sex-specific functional specialization in hunter-gatherer societies. The absence of this sexual dimorphism for red colour perception in ADHD females warrants further investigation.
Superior haptic-to-visual shape matching in autism spectrum disorders.
Nakano, Tamami; Kato, Nobumasa; Kitazawa, Shigeru
2012-04-01
A weak central coherence theory in autism spectrum disorder (ASD) proposes that a cognitive bias toward local processing in ASD derives from a weakness in integrating local elements into a coherent whole. Using this theory, we hypothesized that shape perception through active touch, which requires sequential integration of sensorimotor traces of exploratory finger movements into a shape representation, would be impaired in ASD. Contrary to our expectation, adults with ASD showed superior performance in a haptic-to-visual delayed shape-matching task compared to adults without ASD. Accuracy in discriminating haptic lengths or haptic orientations, which lies within the somatosensory modality, did not differ between adults with ASD and adults without ASD. Moreover, this superior ability in inter-modal haptic-to-visual shape matching was not explained by the score in a unimodal visuospatial rotation task. These results suggest that individuals with ASD are not impaired in integrating sensorimotor traces into a global visual shape and that their multimodal shape representations and haptic-to-visual information transfer are more accurate than those of individuals without ASD. Copyright © 2012 Elsevier Ltd. All rights reserved.
The Effect of Acute Sleep Deprivation on Visual Evoked Potentials in Professional Drivers
Jackson, Melinda L.; Croft, Rodney J.; Owens, Katherine; Pierce, Robert J.; Kennedy, Gerard A.; Crewther, David; Howard, Mark E.
2008-01-01
Study Objectives: Previous studies have demonstrated that as little as 18 hours of sleep deprivation can cause deleterious effects on performance. It has also been suggested that sleep deprivation can cause a “tunnel-vision” effect, in which attention is restricted to the center of the visual field. The current study aimed to replicate these behavioral effects and to examine the electrophysiological underpinnings of these changes. Design: Repeated-measures experimental study. Setting: University laboratory. Patients or Participants: Nineteen professional drivers (1 woman; mean age = 45.3 ± 9.1 years). Interventions: Two experimental sessions were performed; one following 27 hours of sleep deprivation and the other following a normal night of sleep, with control for circadian effects. Measurements & Results: A tunnel-vision task (central versus peripheral visual discrimination) and a standard checkerboard-viewing task were performed while 32-channel EEG was recorded. For the tunnel-vision task, sleep deprivation resulted in an overall slowing of reaction times and increased errors of omission for both peripheral and foveal stimuli (P < 0.05). These changes were related to reduced P300 amplitude (indexing cognitive processing) but not measures of early visual processing. No evidence was found for an interaction effect between sleep deprivation and visual-field position, either in terms of behavior or electrophysiological responses. Slower processing of the sustained parvocellular visual pathway was demonstrated. Conclusions: These findings suggest that performance deficits on visual tasks during sleep deprivation are due to higher cognitive processes rather than early visual processing. Sleep deprivation may differentially impair processing of more-detailed visual information. Features of the study design (eg, visual angle, duration of sleep deprivation) may influence whether peripheral visual-field neglect occurs. Citation: Jackson ML; Croft RJ; Owens K; Pierce RJ; Kennedy GA; Crewther D; Howard ME. The effect of acute sleep deprivation on visual evoked potentials in professional drivers. SLEEP 2008;31(9):1261-1269. PMID:18788651
Discrimination and categorization of emotional facial expressions and faces in Parkinson's disease.
Alonso-Recio, Laura; Martín, Pilar; Rubio, Sandra; Serrano, Juan M
2014-09-01
Our objective was to compare the ability to discriminate and categorize emotional facial expressions (EFEs) and facial identity characteristics (age and/or gender) in a group of 53 individuals with Parkinson's disease (PD) and another group of 53 healthy subjects. On the one hand, by means of discrimination and identification tasks, we compared two stages in the visual recognition process that could be selectively affected in individuals with PD. On the other hand, facial expression versus gender and age comparison permits us to contrast whether the emotional or non-emotional content influences the configural perception of faces. In Experiment I, we did not find differences between groups, either with facial expression or age, in discrimination tasks. Conversely, in Experiment II, we found differences between the groups, but only in the EFE identification task. Taken together, our results indicate that configural perception of faces does not seem to be globally impaired in PD. However, this ability is selectively altered when the categorization of emotional faces is required. A deeper assessment of the PD group indicated that decline in facial expression categorization is more evident in a subgroup of patients with higher global impairment (motor and cognitive). Taken together, these results suggest that the problems found in facial expression recognition may be associated with the progressive neuronal loss in frontostriatal and mesolimbic circuits, which characterizes PD. © 2013 The British Psychological Society.
Chan, Louis K H; Hayward, William G
2009-02-01
In feature integration theory (FIT; A. Treisman & S. Sato, 1990), feature detection is driven by independent dimensional modules, and other searches are driven by a master map of locations that integrates dimensional information into salience signals. Although recent theoretical models have largely abandoned this distinction, some observed results are difficult to explain in its absence. The present study measured dimension-specific performance during detection and localization, tasks that require operation of dimensional modules and the master map, respectively. Results showed a dissociation between tasks in terms of both dimension-switching costs and cross-dimension attentional capture, reflecting a dimension-specific nature for detection tasks and a dimension-general nature for localization tasks. In a feature-discrimination task, results precluded an explanation based on response mode. These results are interpreted to support FIT's postulation that different mechanisms are involved in parallel and focal attention searches. This indicates that the FIT architecture should be adopted to explain the current results and that a variety of visual attention findings can be addressed within this framework. Copyright 2009 APA, all rights reserved.
Dichoptic training in adults with amblyopia: Additional stereoacuity gains over monocular training.
Liu, Xiang-Yun; Zhang, Jun-Yun
2017-08-04
Dichoptic training is a recent focus of research on perceptual learning in adults with amblyopia, but whether and how dichoptic training is superior to traditional monocular training is unclear. Here we investigated whether dichoptic training could further boost visual acuity and stereoacuity in monocularly well-trained adult amblyopic participants. During dichoptic training the participants used the amblyopic eye to practice a contrast discrimination task, while a band-filtered noise masker was simultaneously presented in the non-amblyopic fellow eye. Dichoptic learning was indexed by the increase of maximal tolerable noise contrast for successful contrast discrimination in the amblyopic eye. The results showed that practice tripled maximal tolerable noise contrast in 13 monocularly well-trained amblyopic participants. Moreover, the training further improved stereoacuity by 27% beyond the 55% gain from previous monocular training, but left the visual acuity of the amblyopic eyes unchanged. Therefore, our dichoptic training method may produce extra gains in stereoacuity, but not visual acuity, in adults with amblyopia after monocular training. Copyright © 2017 Elsevier Ltd. All rights reserved.
Khan, Adil G; Poort, Jasper; Chadwick, Angus; Blot, Antonin; Sahani, Maneesh; Mrsic-Flogel, Thomas D; Hofer, Sonja B
2018-06-01
How learning enhances neural representations for behaviorally relevant stimuli via activity changes of cortical cell types remains unclear. We simultaneously imaged responses of pyramidal cells (PYR) along with parvalbumin (PV), somatostatin (SOM), and vasoactive intestinal peptide (VIP) inhibitory interneurons in primary visual cortex while mice learned to discriminate visual patterns. Learning increased selectivity for task-relevant stimuli of PYR, PV and SOM subsets but not VIP cells. Strikingly, PV neurons became as selective as PYR cells, and their functional interactions reorganized, leading to the emergence of stimulus-selective PYR-PV ensembles. Conversely, SOM activity became strongly decorrelated from the network, and PYR-SOM coupling before learning predicted selectivity increases in individual PYR cells. Thus, learning differentially shapes the activity and interactions of multiple cell classes: while SOM inhibition may gate selectivity changes, PV interneurons become recruited into stimulus-specific ensembles and provide more selective inhibition as the network becomes better at discriminating behaviorally relevant stimuli.
Aphasic Patients Exhibit a Reversal of Hemispheric Asymmetries in Categorical Color Discrimination
Paluy, Yulia; Gilbert, Aubrey L.; Baldo, Juliana V.; Dronkers, Nina F.; Ivry, Richard B.
2010-01-01
Patients with left hemisphere (LH) or right hemisphere (RH) brain injury due to stroke were tested on a speeded, color discrimination task in which two factors were manipulated: 1) the categorical relationship between the target and the distracters and 2) the visual field in which the target was presented. Similar to controls, the RH patients were faster in detecting targets in the right visual field when the target and distracters had different color names compared to when their names were the same. This effect was absent in the LH patients, consistent with the hypothesis that injury to the left hemisphere handicaps the automatic activation of lexical codes. Moreover, the LH patients showed a reversed effect, such that the advantage of different target-distracter names was now evident for targets in the left visual field. This reversal may suggest a reorganization of the color lexicon in the right hemisphere following left hemisphere brain injury and/or the unmasking of a heightened right hemisphere sensitivity to color categories. PMID:21216454
Vision and visual navigation in nocturnal insects.
Warrant, Eric; Dacke, Marie
2011-01-01
With their highly sensitive visual systems, nocturnal insects have evolved a remarkable capacity to discriminate colors, orient themselves using faint celestial cues, fly unimpeded through a complicated habitat, and navigate to and from a nest using learned visual landmarks. Even though the compound eyes of nocturnal insects are significantly more sensitive to light than those of their closely related diurnal relatives, their photoreceptors absorb photons at very low rates in dim light, even during demanding nocturnal visual tasks. To explain this apparent paradox, it is hypothesized that the necessary bridge between retinal signaling and visual behavior is a neural strategy of spatial and temporal summation at a higher level in the visual system. Exactly where in the visual system this summation takes place, and the nature of the neural circuitry that is involved, is currently unknown but provides a promising avenue for future research.
Processing speed in recurrent visual networks correlates with general intelligence.
Jolij, Jacob; Huisman, Danielle; Scholte, Steven; Hamel, Ronald; Kemner, Chantal; Lamme, Victor A F
2007-01-08
Studies on the neural basis of general fluid intelligence strongly suggest that a smarter brain processes information faster. Different brain areas, however, are interconnected by both feedforward and feedback projections. Whether both types of connections or only one of the two types are faster in smarter brains remains unclear. Here we show, by measuring visual evoked potentials during a texture discrimination task, that general fluid intelligence shows a strong correlation with processing speed in recurrent visual networks, while there is no correlation with speed of feedforward connections. The hypothesis that a smarter brain runs faster may need to be refined: a smarter brain's feedback connections run faster.
Berditchevskaia, A.; Cazé, R. D.; Schultz, S. R.
2016-01-01
In recent years, simple GO/NOGO behavioural tasks have become popular due to the relative ease with which they can be combined with technologies such as in vivo multiphoton imaging. To date, it has been assumed that behavioural performance can be captured by the average performance across a session; however, this neglects the effect of motivation on behaviour within individual sessions. We investigated the effect of motivation on mice performing a GO/NOGO visual discrimination task. Performance within a session tended to follow a stereotypical trajectory on a Receiver Operating Characteristic (ROC) chart, beginning with an over-motivated state with many false positives, and transitioning through a more or less optimal regime to end with a low hit rate after satiation. Our observations are reproduced by a new model, the Motivated Actor-Critic, introduced here. Our results suggest that standard measures of discriminability, obtained by averaging across a session, may significantly underestimate behavioural performance. PMID:27272438
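To make the within-session ROC trajectory concrete, the following sketch computes hit and false-alarm rates in successive trial windows of a simulated GO/NOGO session; the motivation drift, window size, and all numbers are invented for illustration and are not the study's model or data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 400
is_go = rng.random(n_trials) < 0.5
# Simulated motivation drift: liberal responding early, disengagement late
p_respond = np.where(is_go, 0.9, 0.6) * np.linspace(1.2, 0.4, n_trials).clip(0, 1)
responded = rng.random(n_trials) < p_respond

window = 50
for start in range(0, n_trials - window + 1, window):
    sl = slice(start, start + window)
    hits = (responded[sl] & is_go[sl]).sum() / max(is_go[sl].sum(), 1)
    fas = (responded[sl] & ~is_go[sl]).sum() / max((~is_go[sl]).sum(), 1)
    print(f"trials {start:3d}-{start + window - 1}: hit rate {hits:.2f}, "
          f"false-alarm rate {fas:.2f}")
# Plotting (fas, hits) per window traces the session's trajectory in ROC space,
# rather than collapsing it into a single session-average point.
```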
Bogale, Bezawork Afework; Aoyama, Masato; Sugita, Shoei
2011-01-01
We trained jungle crows to discriminate among photographs of human faces according to sex in a simultaneous two-alternative task to study their categorical learning ability. Once the crows reached a discrimination criterion (greater than or equal to 80% correct choices in two consecutive sessions; binomial probability test, p<.05), they next received generalization and transfer tests (i.e., greyscale, contour, and 'full' occlusion) in Experiment 1, followed by a 'partial' occlusion test in Experiment 2 and a random stimuli pair test in Experiment 3. Jungle crows learned the discrimination task in a few trials and successfully generalized to novel stimulus sets. However, all crows failed the greyscale test and half of them the contour test. Neither occlusion of internal features of the face nor random pairing of exemplars affected the discrimination performance of most, if not all, crows. We suggest that jungle crows categorize human face photographs based on perceptual similarities as other non-human animals do, and colour appears to be the most salient feature controlling discriminative behaviour. However, the variability in the use of facial contours among individuals suggests an exploitation of multiple features and individual differences in visual information processing among jungle crows. Copyright © 2010 Elsevier B.V. All rights reserved.
Multi-level discriminative dictionary learning with application to large scale image classification.
Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua
2015-10-01
The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for classification tasks) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture information at different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.
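A heavily simplified sketch of the per-node idea described above is given below: each node learns its own dictionary, stacks it with its parent's atoms, and trains a linear classifier on the resulting sparse codes. This greedy node-by-node version is an assumption-laden illustration only; it does not implement the paper's joint minimization of an overall tree loss, and the scikit-learn components and parameter values are choices made here for brevity.

    # Greedy per-node sketch of hierarchical discriminative dictionary learning.
    # Simplification only: atoms are learnt node by node rather than by jointly
    # minimizing a tree loss, and children simply inherit their parent's atoms.
    import numpy as np
    from sklearn.decomposition import DictionaryLearning, SparseCoder
    from sklearn.linear_model import LogisticRegression

    def fit_node(X, y, parent_atoms=None, n_atoms=64, alpha=1.0):
        """X: (n_samples, n_features) node data; y: class labels at this node."""
        dico = DictionaryLearning(n_components=n_atoms, alpha=alpha,
                                  transform_algorithm='lasso_lars')
        own_atoms = dico.fit(X).components_
        atoms = own_atoms if parent_atoms is None else np.vstack([parent_atoms, own_atoms])
        coder = SparseCoder(dictionary=atoms, transform_algorithm='lasso_lars',
                            transform_alpha=alpha)
        codes = coder.transform(X)               # sparse codes over inherited + own atoms
        clf = LogisticRegression(max_iter=1000).fit(codes, y)
        return atoms, coder, clf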
Visually cued motor synchronization: modulation of fMRI activation patterns by baseline condition.
Cerasa, Antonio; Hagberg, Gisela E; Bianciardi, Marta; Sabatini, Umberto
2005-01-03
A well-known issue in functional neuroimaging studies of motor synchronization is how to design suitable control tasks able to discriminate between the brain structures involved in primary time-keeper functions and those related to other processes such as attentional effort. The aim of this work was to investigate how the predictability of stimulus onsets in the baseline condition modulates the activity in brain structures related to processes involved in time-keeper functions during the performance of a visually cued motor synchronization task (VM). The rationale behind this choice derives from the notion that varying stimulus predictability can alter the subject's attention and, consequently, the neural activity. For this purpose, baseline levels of BOLD activity were obtained from 12 subjects during a conventional-baseline condition: maintained fixation of the visual rhythmic stimuli presented in the VM task, and a random-baseline condition: maintained fixation of visual stimuli occurring randomly. fMRI analysis demonstrated that while brain areas with a documented role in basic time processing are detected independently of the baseline condition (right cerebellum, bilateral putamen, left thalamus, left superior temporal gyrus, left sensorimotor cortex, left dorsal premotor cortex and supplementary motor area), the ventral premotor cortex, caudate nucleus, insula and inferior frontal gyrus exhibited a baseline-dependent activation. We conclude that maintained fixation of unpredictable visual stimuli can be employed in order to reduce or eliminate neural activity related to attentional components present in the synchronization task.
Sustained attention in language production: an individual differences investigation.
Jongman, Suzanne R; Roelofs, Ardi; Meyer, Antje S
2015-01-01
Whereas it has long been assumed that most linguistic processes underlying language production happen automatically, accumulating evidence suggests that these processes do require some form of attention. Here we investigated the contribution of sustained attention: the ability to maintain alertness over time. In Experiment 1, participants' sustained attention ability was measured using auditory and visual continuous performance tasks. Subsequently, employing a dual-task procedure, participants described pictures using simple noun phrases and performed an arrow-discrimination task while their vocal and manual response times (RTs) and the durations of their gazes to the pictures were measured. Earlier research has demonstrated that gaze duration reflects language planning processes up to and including phonological encoding. The speakers' sustained attention ability correlated with the magnitude of the tail of the vocal RT distribution, reflecting the proportion of very slow responses, but not with individual differences in gaze duration. This suggests that sustained attention was most important after phonological encoding. Experiment 2 showed that the involvement of sustained attention was significantly stronger in a dual-task situation (picture naming and arrow discrimination) than in simple naming. Thus, individual differences in maintaining attention on the production processes become especially apparent when a simultaneous second task also requires attentional resources.
Exposure to Organic Solvents Used in Dry Cleaning Reduces Low and High Level Visual Function
Jiménez Barbosa, Ingrid Astrid
2015-01-01
Purpose To investigate whether exposure to occupational levels of organic solvents in the dry cleaning industry is associated with neurotoxic symptoms and visual deficits in the perception of basic visual features such as luminance contrast and colour, higher level processing of global motion and form (Experiment 1), and cognitive function as measured in a visual search task (Experiment 2). Methods The Q16 neurotoxic questionnaire, a commonly used measure of neurotoxicity (by the World Health Organization), was administered to assess the neurotoxic status of a group of 33 dry cleaners exposed to occupational levels of organic solvents (OS) and 35 age-matched non dry-cleaners who had never worked in the dry cleaning industry. In Experiment 1, to assess visual function, contrast sensitivity, colour/hue discrimination (Munsell Hue 100 test), and global motion and form thresholds were assessed using computerised psychophysical tests. Sensitivity to global motion or form structure was quantified by varying the pattern coherence of global dot motion (GDM) and Glass patterns (oriented dot pairs) respectively (i.e., the percentage of dots/dot pairs that contribute to the perception of global structure). In Experiment 2, a letter visual-search task was used to measure reaction times (as a function of the number of elements: 4, 8, 16, 32, 64 and 100) in both parallel and serial search conditions. Results Dry cleaners exposed to organic solvents had significantly higher scores on the Q16 compared to non dry-cleaners, indicating that dry cleaners experienced more neurotoxic symptoms on average. The contrast sensitivity function for dry cleaners was significantly lower at all spatial frequencies relative to non dry-cleaners, which is consistent with previous studies. Poorer colour discrimination performance was also noted in dry cleaners than in non dry-cleaners, particularly along the blue/yellow axis. In a new finding, we report that global form and motion thresholds for dry cleaners were also significantly higher, almost double those obtained from non dry-cleaners. However, reaction time performance on both parallel and serial visual search did not differ between dry cleaners and non dry-cleaners. Conclusions Exposure to occupational levels of organic solvents is associated with neurotoxicity, which is in turn associated with both low level visual deficits (such as the perception of contrast and discrimination of colour) and high level visual deficits such as the perception of global form and motion, but not visual search performance. The latter finding indicates that the deficits in visual function are unlikely to be due to changes in general cognitive performance. PMID:25933026
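The coherence manipulation described in Experiment 1 (the percentage of dots carrying the global signal) can be sketched as follows for a global dot motion stimulus; the dot count, step size and frame-update scheme are illustrative assumptions rather than the stimulus parameters used in the study.

    # One frame-to-frame update of a global dot motion stimulus at a given
    # coherence: a random subset of dots moves in the signal direction, the
    # rest move in random directions. All parameter values are illustrative.
    import numpy as np

    def update_dots(positions, coherence=0.5, signal_dir=0.0, step=2.0, rng=None):
        """positions: (n, 2) dot coordinates; coherence in [0, 1]; signal_dir in radians."""
        rng = np.random.default_rng() if rng is None else rng
        n = len(positions)
        dirs = rng.uniform(0.0, 2.0 * np.pi, size=n)          # noise dots: random headings
        signal_idx = rng.choice(n, size=int(round(coherence * n)), replace=False)
        dirs[signal_idx] = signal_dir                          # signal dots: common heading
        return positions + step * np.column_stack([np.cos(dirs), np.sin(dirs)])

    dots = np.random.default_rng(0).uniform(0, 256, size=(100, 2))
    dots = update_dots(dots, coherence=0.3)                    # 30% coherence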
Caudate clues to rewarding cues.
Platt, Michael L
2002-01-31
Behavioral studies indicate that prior experience can influence discrimination of subsequent stimuli. The mechanisms responsible for highlighting a particular aspect of the stimulus, such as motion or color, as most relevant and thus deserving further scrutiny, however, remain poorly understood. A study in the current issue of Neuron demonstrates that neurons in the caudate nucleus of the basal ganglia signal which dimension of a visual cue, either color or location, is associated with reward in an eye movement task. These findings raise the possibility that this structure participates in the reward-based control of visual attention.
Sakimoto, Yuya; Sakata, Shogo
2014-01-01
It has been shown that solving a simple discrimination task (A+, B−) and a simultaneous feature-negative (FN) task (A+, AB−) relies on a hippocampus-independent strategy. Recently, we showed that the number of sessions required for a rat to completely learn a task differed between the FN and simple discrimination tasks, and that there was a difference in hippocampal theta activity between these tasks. These results suggested that solving the FN task relied on a different strategy than the simple discrimination task. In this study, we provided supportive evidence that solving the FN and simple discrimination tasks involves different strategies by examining changes in performance and hippocampal theta activity in the FN task after transfer from the simple discrimination task (A+, B− → A+, AB−). The results of this study showed that performance on the FN task was impaired and that there was a difference in hippocampal theta activity between the simple discrimination task and the FN task. Thus, we concluded that solving the FN task uses a different strategy than the simple discrimination task. PMID:24917797
Shipstead, Zach; Engle, Randall W
2013-01-01
One approach to understanding working memory (WM) holds that individual differences in WM capacity arise from the amount of information a person can store in WM over short periods of time. This view is especially prevalent in WM research conducted with the visual arrays task. Within this tradition, many researchers have concluded that the average person can maintain approximately 4 items in WM. The present study challenges this interpretation by demonstrating that performance on the visual arrays task is subject to time-related factors that are associated with retrieval from long-term memory. Experiment 1 demonstrates that memory for an array does not decay as a product of absolute time, which is consistent with both maintenance- and retrieval-based explanations of visual arrays performance. Experiment 2 introduced a manipulation of temporal discriminability by varying the relative spacing of trials in time. We found that memory for a target array was significantly influenced by its temporal compression with, or isolation from, a preceding trial. Subsequent experiments extend these effects to sub-capacity set sizes and demonstrate that changes in the size of k are meaningful to prediction of performance on other measures of WM capacity as well as general fluid intelligence. We conclude that performance on the visual arrays task does not reflect a multi-item storage system but instead measures a person's ability to accurately retrieve information in the face of proactive interference.
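For context on the capacity estimate k referred to above, visual arrays (change detection) performance is commonly converted into a capacity score with Cowan's formula; whether this exact formula was the one used in the study is an assumption made here for illustration.

    # Cowan's k for a single-probe visual arrays task:
    # k = set_size * (hit_rate + correct_rejection_rate - 1)
    #   = set_size * (hit_rate - false_alarm_rate), since CR = 1 - FA.
    def cowan_k(set_size, hit_rate, false_alarm_rate):
        return set_size * (hit_rate - false_alarm_rate)

    # Hypothetical example: set size 6, 85% hits, 20% false alarms -> k = 3.9
    print(cowan_k(6, 0.85, 0.20))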
Wills, A J; Lea, Stephen E G; Leaver, Lisa A; Osthaus, Britta; Ryan, Catriona M E; Suret, Mark B; Bryant, Catherine M L; Chapman, Sue J A; Millar, Louise
2009-11-01
Pigeons (Columba livia), gray squirrels (Sciurus carolinensis), and undergraduates (Homo sapiens) learned discrimination tasks involving multiple mutually redundant dimensions. First, pigeons and undergraduates learned conditional discriminations between stimuli composed of three spatially separated dimensions, after first learning to discriminate the individual elements of the stimuli. When subsequently tested with stimuli in which one of the dimensions took an anomalous value, the majority of both species categorized test stimuli by their overall similarity to training stimuli. However, some individuals of both species categorized them according to a single dimension. In a second set of experiments, squirrels, pigeons, and undergraduates learned go/no-go discriminations using multiple simultaneous presentations of stimuli composed of three spatially integrated, highly salient dimensions. The tendency to categorize test stimuli including anomalous dimension values unidimensionally was higher than in the first set of experiments and did not differ significantly between species. The authors conclude that unidimensional categorization of multidimensional stimuli is not diagnostic for analytic cognitive processing, and that any differences between humans' and pigeons' behavior in such tasks are not due to special features of avian visual cognition.
The case of the missing visual details: Occlusion and long-term visual memory.
Williams, Carrick C; Burkle, Kyle A
2017-10-01
To investigate the critical information in long-term visual memory representations of objects, we used occlusion to emphasize 1 type of information or another. By occluding 1 solid side of the object (e.g., top 50%) or by occluding 50% of the object with stripes (like a picket fence), we emphasized visible information about the object, processing the visible details in the former and the object's overall form in the latter. On a token discrimination test, surprisingly, memory for solid or stripe occluded objects at either encoding (Experiment 1) or test (Experiment 2) was the same. In contrast, when occluded objects matched at encoding and test (Experiment 3) or when the occlusion shifted, revealing the entire object piecemeal (Experiment 4), memory was better for solid compared with stripe occluded objects, indicating that objects are represented differently in long-term visual memory. Critically, we also found that when the task emphasized remembering exactly what was shown, memory performance in the more detailed solid occlusion condition exceeded that in the stripe condition (Experiment 5). However, when the task emphasized the whole object form, memory was better in the stripe condition (Experiment 6) than in the solid condition. We argue that long-term visual memory can represent objects flexibly, and task demands can interact with visual information, allowing the viewer to cope with changing real-world visual environments. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Baldwin, C M; Houston, F P; Podgornik, M N; Young, R S; Barnes, C A; Witten, M L
2001-01-01
To determine whether JP-8 jet fuel affects parameters of the Functional Observational Battery (FOB), visual discrimination, or spatial learning and memory, the authors exposed groups of male Fischer Brown Norway hybrid rats for 28 d to aerosol/vapor-delivered JP-8, or to JP-8 followed by 15 min of aerosolized substance P analogue, or to sham-confined fresh room air. Behavioral testing was accomplished with the U.S. Environmental Protection Agency's Functional Observational Battery. The authors used the Morris swim task to test visual discrimination and spatial learning and memory. The spatial test included examination of memory for the original target location following 15 d of JP-8 exposure, as well as a 3-d new target location learning paradigm implemented the day that followed the final day of exposure. Only JP-8-exposed animals had significant weight loss by the 2nd week of exposure, compared with JP-8-with-substance-P and control rats; this finding compares with those of prior studies of JP-8 jet fuel. Rats exposed to JP-8 with or without substance P exhibited significantly greater rearing and less grooming behavior over time than did controls during Functional Observational Battery open-field testing. Exposed rats also swam significantly faster than controls during the new target location training and testing, thus supporting the increased activity noted during Functional Observational Battery testing. There were no significant differences between the exposed and control groups' performances during acquisition, retention, or learning of the new platform location in either the visual discrimination or spatial version of the Morris swim task. The data suggest that although visual discrimination and spatial learning and memory were not disrupted by JP-8 exposure, arousal indices and activity measures were distinctly different in these animals.
Lim, Jongil; Palmer, Christopher J; Busa, Michael A; Amado, Avelino; Rosado, Luis D; Ducharme, Scott W; Simon, Darnell; Van Emmerik, Richard E A
2017-06-01
The pickup of visual information is critical for controlling movement and maintaining situational awareness in dangerous situations. Altered coordination while wearing protective equipment may impact the likelihood of injury or death. This investigation examined the consequences of load magnitude and distribution on situational awareness, segmental coordination and head gaze in several protective equipment ensembles. Twelve soldiers stepped down onto force plates and were instructed to quickly and accurately identify visual information while establishing marksmanship posture in protective equipment. Time to discriminate visual information was extended when additional pack and helmet loads were added, with the small increase in helmet load having the largest effect. Greater head-leading and in-phase trunk-head coordination were found with lighter pack loads, while trunk-leading coordination increased and head gaze dynamics were more disrupted in heavier pack loads. Additional armour load in the vest had no consequences for time to discriminate, coordination or head dynamics. This suggests that the addition of head-borne load should be carefully considered when integrating new technology, and that up-armouring does not necessarily have negative consequences for marksmanship performance. Practitioner Summary: Understanding the trade-space between protection and reductions in task performance continues to challenge those developing personal protective equipment. These methods provide an approach that can help optimise equipment design and loading techniques by quantifying changes in task performance and the emergent coordination dynamics that underlie that performance.
Picture object recognition in an American black bear (Ursus americanus).
Johnson-Ulrich, Zoe; Vonk, Jennifer; Humbyrd, Mary; Crowley, Marilyn; Wojtkowski, Ela; Yates, Florence; Allard, Stephanie
2016-11-01
Many animals have been tested for conceptual discriminations using two-dimensional images as stimuli, and many of these species appear to transfer knowledge from 2D images to analogous real life objects. We tested an American black bear for picture-object recognition using a two alternative forced choice task. She was presented with four unique sets of objects and corresponding pictures. The bear showed generalization from both objects to pictures and pictures to objects; however, her transfer was superior when transferring from real objects to pictures, suggesting that bears can recognize visual features from real objects within photographic images during discriminations.
Stimulus discriminability in visual search.
Verghese, P; Nakayama, K
1994-09-01
We measured the probability of detecting the target in a visual search task, as a function of the following parameters: the discriminability of the target from the distractors, the duration of the display, and the number of elements in the display. We examined the relation between these parameters at criterion performance (80% correct) to determine if the parameters traded off according to the predictions of a limited capacity model. For the three dimensions that we studied, orientation, color, and spatial frequency, the observed relationship between the parameters deviates significantly from a limited capacity model. The data relating discriminability to display duration are better than predicted over the entire range of orientation and color differences that we examined, and are consistent with the prediction for only a limited range of spatial frequency differences (from 12 to 23%). The relation between discriminability and number varies considerably across the three dimensions and is better than the limited capacity prediction for two of the three dimensions that we studied. Orientation discrimination shows a strong number effect, color discrimination shows almost no effect, and spatial frequency discrimination shows an intermediate effect. The different trading relationships in each dimension are more consistent with early filtering in that dimension than with a common limited capacity stage. Our results indicate that higher-level processes that group elements together also play a strong role. Our experiments provide little support for limited capacity mechanisms over the range of stimulus differences that we examined in three different dimensions.
Bratzke, Daniel; Seifried, Tanja; Ulrich, Rolf
2012-08-01
This study assessed possible cross-modal transfer effects of training in a temporal discrimination task from vision to audition as well as from audition to vision. We employed a pretest-training-post-test design including a control group that performed only the pretest and the post-test. Trained participants showed better discrimination performance with their trained interval than the control group. This training effect transferred to the other modality only for those participants who had been trained with auditory stimuli. The present study thus demonstrates for the first time that training on temporal discrimination within the auditory modality can transfer to the visual modality but not vice versa. This finding represents a novel illustration of auditory dominance in temporal processing and is consistent with the notion that time is primarily encoded in the auditory system.
Bimanual proprioceptive performance differs for right- and left-handed individuals.
Han, Jia; Waddington, Gordon; Adams, Roger; Anson, Judith
2013-05-10
It has been proposed that asymmetry between the upper limbs in the utilization of proprioceptive feedback arises from functional differences in the roles of the preferred and non-preferred hands during bimanual tasks. The present study investigated unimanual and bimanual proprioceptive performance in right- and left-handed young adults with an active finger pinch movement discrimination task. With visual information removed, participants were required to make absolute judgments about the extent of pinch movements made to physical stops, either by one hand, or by both hands concurrently, with the sequence of presented movement extents varied randomly. Discrimination accuracy scores were derived from participants' responses using non-parametric signal detection analysis. Consistent with previous findings, a non-dominant hand/hemisphere superiority effect was observed, where the non-dominant hands of right- and left-handed individuals performed overall significantly better than their dominant hands. For all participants, bimanual movement discrimination scores were significantly lower than scores obtained in the unimanual task. However, the magnitude of the performance reduction, from the unimanual to the bimanual task, was significantly greater for left-handed individuals. The effect whereby bimanual proprioception was disproportionately affected in left-handed individuals could be due to enhanced neural communication between hemispheres in left-handed individuals leading to less distinctive separation of information obtained from the two hands in the cerebral cortex. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
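A widely used non-parametric sensitivity index for discrimination data of this kind is A' (Pollack and Norman); whether A' was the specific index computed in this study is an assumption made here for illustration.

    # Non-parametric sensitivity index A' from hit and false-alarm rates.
    # The second branch handles the (less common) case where FA exceeds hits.
    def a_prime(hit_rate, fa_rate):
        h, f = hit_rate, fa_rate
        if h >= f:
            return 0.5 + ((h - f) * (1.0 + h - f)) / (4.0 * h * (1.0 - f))
        return 0.5 - ((f - h) * (1.0 + f - h)) / (4.0 * f * (1.0 - h))

    print(a_prime(0.80, 0.30))   # hypothetical rates -> A' of about 0.83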
Perceptual training yields rapid improvements in visually impaired youth.
Nyquist, Jeffrey B; Lappin, Joseph S; Zhang, Ruyuan; Tadin, Duje
2016-11-30
Visual function demands coordinated responses to information over a wide field of view, involving both central and peripheral vision. Visually impaired individuals often seem to underutilize peripheral vision, even in the absence of obvious peripheral deficits. Motivated by perceptual training studies with typically sighted adults, we examined the effectiveness of perceptual training in improving peripheral perception of visually impaired youth. Here, we evaluated the effectiveness of three training regimens: (1) an action video game, (2) a psychophysical task that combined attentional tracking with a spatially and temporally unpredictable motion discrimination task, and (3) a control video game. Training with both the action video game and modified attentional tracking yielded improvements in visual performance. Training effects were generally larger in the far periphery and appear to be stable 12 months after training. These results indicate that peripheral perception might be underutilized by visually impaired youth and that this underutilization can be reduced with only ~8 hours of perceptual training. Moreover, the similarity of improvements following attentional tracking and action video-game training suggests that well-documented effects of action video-game training might be due to the sustained deployment of attention to multiple dynamic targets while concurrently requiring rapid attending and perception of unpredictable events.
Yokoi, Isao; Komatsu, Hidehiko
2010-09-01
Visual grouping of discrete elements is an important function for object recognition. We recently conducted an experiment to study neural correlates of visual grouping. We recorded neuronal activities while monkeys performed a grouping detection task in which they discriminated visual patterns composed of discrete dots arranged in a cross and detected targets in which dots with the same contrast were aligned horizontally or vertically. We found that some neurons in the lateral bank of the intraparietal sulcus exhibit activity related to visual grouping. In the present study, we analyzed how different types of neurons contribute to visual grouping. We classified the recorded neurons as putative pyramidal neurons or putative interneurons, depending on the duration of their action potentials. We found that putative pyramidal neurons exhibited selectivity for the orientation of the target, and this selectivity was enhanced by attention to a particular target orientation. By contrast, putative interneurons responded more strongly to the target stimuli than to the nontargets, regardless of the orientation of the target. These results suggest that different classes of parietal neurons contribute differently to the grouping of discrete elements.
Davis, Catherine M.; Roma, Peter G.; Armour, Elwood; Gooden, Virginia L.; Brady, Joseph V.; Weed, Michael R.; Hienz, Robert D.
2014-01-01
The present report describes an animal model for examining the effects of radiation on a range of neurocognitive functions in rodents that are similar to a number of basic human cognitive functions. Fourteen male Long-Evans rats were trained to perform an automated intra-dimensional set shifting task that consisted of their learning a basic discrimination between two stimulus shapes followed by more complex discrimination stages (e.g., a discrimination reversal, a compound discrimination, a compound reversal, a new shape discrimination, and an intra-dimensional stimulus discrimination reversal). One group of rats was exposed to head-only X-ray radiation (2.3 Gy at a dose rate of 1.9 Gy/min), while a second group received a sham-radiation exposure using the same anesthesia protocol. The irradiated group responded less, had elevated numbers of omitted trials, increased errors, and greater response latencies compared to the sham-irradiated control group. Additionally, social odor recognition memory was tested after radiation exposure by assessing the degree to which rats explored wooden beads impregnated with either their own odors or with the odors of novel, unfamiliar rats; however, no significant effects of radiation on social odor recognition memory were observed. These data suggest that rodent tasks assessing higher-level human cognitive domains are useful in examining the effects of radiation on the CNS, and may be applicable in approximating CNS risks from radiation exposure in clinical populations receiving whole brain irradiation. PMID:25099152
Representations of temporal information in short-term memory: Are they modality-specific?
Bratzke, Daniel; Quinn, Katrina R; Ulrich, Rolf; Bausenhart, Karin M
2016-10-01
Rattat and Picard (2012) reported that the coding of temporal information in short-term memory is modality-specific, that is, temporal information received via the visual (auditory) modality is stored as a visual (auditory) code. This conclusion was supported by modality-specific interference effects on visual and auditory duration discrimination, which were induced by secondary tasks (visual tracking or articulatory suppression), presented during a retention interval. The present study assessed the stability of these modality-specific interference effects. Our study did not replicate the selective interference pattern but rather indicated that articulatory suppression not only impairs short-term memory for auditory but also for visual durations. This result pattern supports a crossmodal or an abstract view of temporal encoding. Copyright © 2016 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Benard, Julie; Giurfa, Martin
2004-01-01
We asked whether honeybees, "Apis mellifera," could solve a transitive inference problem. Individual free-flying bees were conditioned with four overlapping premise pairs of five visual patterns in a multiple discrimination task (A+ vs. B-, B+ vs. C-, C+ vs. D-, D+ vs. E-, where + and - indicate sucrose reward or absence of it,…
Ensemble perception of emotions in autistic and typical children and adolescents.
Karaminis, Themelis; Neil, Louise; Manning, Catherine; Turi, Marco; Fiorentini, Chiara; Burr, David; Pellicano, Elizabeth
2017-04-01
Ensemble perception, the ability to assess automatically the summary of large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: a) an 'ensemble' emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
Area vs. density: influence of visual variables and cardinality knowledge in early number comparison
Abreu-Mendoza, Roberto A.; Soto-Alba, Elia E.; Arias-Trejo, Natalia
2013-01-01
Current research in the number development field has focused on individual differences regarding the acuity of children's approximate number system (ANS). The most common task for evaluating children's acuity is non-symbolic numerical comparison. Efforts have been made to prevent children from using perceptual cues by controlling the visual properties of the stimuli (e.g., density, contour length, and area); nevertheless, researchers have used these visual controls interchangeably. Studies have also tried to understand the relation between children's cardinality knowledge and their performance in a number comparison task; divergent results may in fact be rooted in the use of different visual controls. The main goal of the present study is to explore how the usage of different visual controls (density, total filled area, and correlated and anti-correlated area) affects children's performance in a number comparison task, and its relationship to children's cardinality knowledge. For that purpose, 77 preschoolers participated in three tasks: (1) counting list elicitation to test whether children could recite the counting list up to ten, (2) give-a-number to evaluate children's cardinality knowledge, and (3) number comparison to evaluate their ability to compare two quantities. During this last task, children were asked to point at the set with more geometric figures when two sets were displayed on a screen. Children were exposed only to one of the three visual controls. Results showed that, overall, children performed above chance in the number comparison task; nonetheless, density was the easiest control, while correlated and anti-correlated area was the most difficult in most cases. Only total filled area was sensitive enough to discriminate cardinal-principle knowers from non-cardinal-principle knowers. How this finding helps to explain conflicting evidence from previous research, and how the present outcome relates to children's number word knowledge, are discussed. PMID:24198803
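As an illustration of the total-filled-area control discussed above, the sketch below computes the dot radius that equates total filled area across sets of different numerosities; the numerosities and target area are hypothetical values, not the stimulus parameters of the study.

    # Equate total filled area across dot sets of different numerosities:
    # total_area = n * pi * r**2  ->  r = sqrt(total_area / (n * pi)).
    # Numerosities and the target area below are hypothetical illustrations.
    import math

    def radius_for_equal_area(n_dots, total_area):
        return math.sqrt(total_area / (n_dots * math.pi))

    total_area = 5000.0                    # shared filled area (e.g., pixels^2)
    for n in (8, 12):                      # e.g., comparing an 8-dot vs a 12-dot set
        print(n, round(radius_for_equal_area(n, total_area), 2))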
No evidence for visual context-dependency of olfactory learning in Drosophila
NASA Astrophysics Data System (ADS)
Yarali, Ayse; Mayerle, Moritz; Nawroth, Christian; Gerber, Bertram
2008-08-01
How is behaviour organised across sensory modalities? Specifically, we ask, for the fruit fly Drosophila melanogaster, how visual context affects olfactory learning and recall and whether information about visual context is integrated into olfactory memory. We find that changing visual context between training and test does not degrade olfactory memory scores, suggesting that these olfactory memories can drive behaviour despite a mismatch of visual context between training and test. Rather, both the establishment and the recall of olfactory memory are generally facilitated by light. In a follow-up experiment, we find no evidence for learning about combinations of odours and visual context as predictors of reinforcement, even after explicit training in a so-called biconditional discrimination task. Thus, a 'true' interaction between visual and olfactory modalities is not evident; instead, light seems to influence olfactory learning and recall unspecifically, for example by altering motor activity, alertness or olfactory acuity.
Behavioural evidence for colour vision in an elasmobranch.
Van-Eyk, Sarah M; Siebeck, Ulrike E; Champ, Connor M; Marshall, Justin; Hart, Nathan S
2011-12-15
Little is known about the sensory abilities of elasmobranchs (sharks, skates and rays) compared with other fishes. Despite their role as apex predators in most marine and some freshwater habitats, interspecific variations in visual function are especially poorly studied. Of particular interest is whether they possess colour vision and, if so, the role(s) that colour may play in elasmobranch visual ecology. The recent discovery of three spectrally distinct cone types in three different species of ray suggests that at least some elasmobranchs have the potential for functional trichromatic colour vision. However, in order to confirm that these species possess colour vision, behavioural experiments are required. Here, we present evidence for the presence of colour vision in the giant shovelnose ray (Glaucostegus typus) through the use of a series of behavioural experiments based on visual discrimination tasks. Our results show that these rays are capable of discriminating coloured reward stimuli from other coloured (unrewarded) distracter stimuli of variable brightness with a success rate significantly different from chance. This study represents the first behavioural evidence for colour vision in any elasmobranch, using a paradigm that incorporates extensive controls for relative stimulus brightness. The ability to discriminate colours may have a strong selective advantage for animals living in an aquatic ecosystem, such as rays, as a means of filtering out surface-wave-induced flicker.
Top-down beta oscillatory signaling conveys behavioral context in early visual cortex.
Richter, Craig G; Coppola, Richard; Bressler, Steven L
2018-05-03
Top-down modulation of sensory processing is a critical neural mechanism subserving numerous important cognitive roles, one of which may be to inform lower-order sensory systems of the current 'task at hand' by conveying behavioral context to these systems. Accumulating evidence indicates that top-down cortical influences are carried by directed interareal synchronization of oscillatory neuronal populations, with recent results pointing to beta-frequency oscillations as particularly important for top-down processing. However, it remains to be determined if top-down beta-frequency oscillations indeed convey behavioral context. We measured spectral Granger Causality (sGC) using local field potentials recorded from microelectrodes chronically implanted in visual areas V1/V2, V4, and TEO of two rhesus macaque monkeys, and applied multivariate pattern analysis to the spatial patterns of top-down sGC. We decoded behavioral context by discriminating patterns of top-down (V4/TEO-to-V1/V2) beta-peak sGC for two different task rules governing correct responses to identical visual stimuli. The results indicate that top-down directed influences are carried to visual cortex by beta oscillations, and differentiate task demands even before visual stimulus processing. They suggest that top-down beta-frequency oscillatory processes coordinate processing of sensory information by conveying global knowledge states to early levels of the sensory cortical hierarchy independently of bottom-up stimulus-driven processing.
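A minimal sketch of the decoding step described above is shown below, assuming the spectral Granger Causality values have already been computed and arranged as a trials-by-site-pair feature matrix; the classifier, cross-validation scheme and placeholder data are illustrative assumptions, not the authors' pipeline.

    # Decode task rule from spatial patterns of top-down beta-peak sGC.
    # 'sgc_features' stands in for a (n_trials, n_site_pairs) matrix of
    # precomputed sGC values; 'rule_labels' codes the two task rules.
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    sgc_features = rng.random((200, 48))          # placeholder for real sGC patterns
    rule_labels = rng.integers(0, 2, size=200)    # placeholder task-rule labels

    clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
    scores = cross_val_score(clf, sgc_features, rule_labels, cv=5)
    print("decoding accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))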
Selective representation of task-relevant objects and locations in the monkey prefrontal cortex.
Everling, Stefan; Tinsley, Chris J; Gaffan, David; Duncan, John
2006-04-01
In the monkey prefrontal cortex (PFC), task context exerts a strong influence on neural activity. We examined different aspects of task context in a temporal search task. On each trial, the monkey (Macaca mulatta) watched a stream of pictures presented to left or right of fixation. The task was to hold fixation until seeing a particular target, and then to make an immediate saccade to it. Sometimes (unilateral task), the attended pictures appeared alone, with a cue at trial onset indicating whether they would be presented to left or right. Sometimes (bilateral task), the attended picture stream (cued side) was accompanied by an irrelevant stream on the opposite side. In two macaques, we recorded responses from a total of 161 cells in the lateral PFC. Many cells (75/161) showed visual responses. Object-selective responses were strongly shaped by task relevance - with stronger responses to targets than to nontargets, failure to discriminate one nontarget from another, and filtering out of information from an irrelevant stimulus stream. Location selectivity occurred rather independently of object selectivity, and independently in visual responses and delay periods between one stimulus and the next. On error trials, PFC activity followed the correct rules of the task, rather than the incorrect overt behaviour. Together, these results suggest a highly programmable system, with responses strongly determined by the rules and requirements of the task performed.
Herrera-Guzmán, I; Peña-Casanova, J; Lara, J P; Gudayol-Ferré, E; Böhm, P
2004-08-01
The assessment of visual perception and cognition forms an important part of any general cognitive evaluation. We studied the possible influence of age, sex, and education on performance in visual perception tasks in a normal elderly Spanish population (90 healthy subjects). To evaluate visual perception and cognition, we used the subjects' performance on the Visual Object and Space Perception Battery (VOSP). The test consists of 8 subtests: 4 measure visual object perception (Incomplete Letters, Silhouettes, Object Decision, and Progressive Silhouettes) while the other 4 measure visual space perception (Dot Counting, Position Discrimination, Number Location, and Cube Analysis). The statistical procedures employed were either simple or multiple linear regression analyses (subtests with normal distribution) and Mann-Whitney tests, followed by ANOVA with Scheffe correction (subtests without normal distribution). Age and sex were found to be significant modifying factors in the Silhouettes, Object Decision, Progressive Silhouettes, Position Discrimination, and Cube Analysis subtests. Educational level was found to be a significant predictor of function for the Silhouettes and Object Decision subtests. The results of the sample were adjusted in line with the differences observed. Our study also offers preliminary normative data for the administration of the VOSP to an elderly Spanish population. The results are discussed and compared with similar studies performed in different cultural backgrounds.
Training in Contrast Detection Improves Motion Perception of Sinewave Gratings in Amblyopia
Hou, Fang; Huang, Chang-bing; Tao, Liming; Feng, Lixia; Zhou, Yifeng; Lu, Zhong-Lin
2011-01-01
Purpose. One critical concern about using perceptual learning to treat amblyopia is whether training with one particular stimulus and task generalizes to other stimuli and tasks. In the spatial domain, it has been found that the bandwidth of contrast sensitivity improvement is much broader in amblyopes than in normals. Because previous studies suggested the local motion deficits in amblyopia are explained by the spatial vision deficits, the hypothesis for this study was that training in the spatial domain could benefit motion perception of sinewave gratings. Methods. Nine adult amblyopes (mean age, 22.1 ± 5.6 years) were trained in a contrast detection task in the amblyopic eye for 10 days. Visual acuity, spatial contrast sensitivity functions, and temporal modulation transfer functions (MTF) for sinewave motion detection and discrimination were measured for each eye before and after training. Eight adult amblyopes (mean age, 22.6 ± 6.7 years) served as control subjects. Results. In the amblyopic eye, training improved (1) contrast sensitivity by 6.6 dB (or 113.8%) across spatial frequencies, with a bandwidth of 4.4 octaves; (2) sensitivity of motion detection and discrimination by 3.2 dB (or 44.5%) and 3.7 dB (or 53.1%) across temporal frequencies, with bandwidths of 3.9 and 3.1 octaves, respectively; (3) visual acuity by 3.2 dB (or 44.5%). The fellow eye also showed a small amount of improvement in contrast sensitivities and no significant change in motion perception. Control subjects who received no training demonstrated no obvious improvement in any measure. Conclusions. The results demonstrate substantial plasticity in the amblyopic visual system, and provide additional empirical support for perceptual learning as a potential treatment for amblyopia. PMID:21693615
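The percentage figures quoted alongside the dB improvements above are consistent with the 20·log10 convention for sensitivity ratios, as the short check below illustrates.

    # Convert a sensitivity improvement in dB to a percentage change, using the
    # 20*log10 convention implied by the figures above
    # (6.6 dB -> ~113.8%, 3.2 dB -> ~44.5%, 3.7 dB -> ~53.1%).
    def db_to_percent(db):
        return (10 ** (db / 20.0) - 1.0) * 100.0

    for db in (6.6, 3.2, 3.7):
        print(f"{db} dB -> {db_to_percent(db):.1f}%")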
Patai, Eva Zita; Buckley, Alice; Nobre, Anna Christina
2013-01-01
A popular model of visual perception states that coarse information (carried by low spatial frequencies) along the dorsal stream is rapidly transmitted to prefrontal and medial temporal areas, activating contextual information from memory, which can in turn constrain detailed input carried by high spatial frequencies arriving at a slower rate along the ventral visual stream, thus facilitating the processing of ambiguous visual stimuli. We were interested in testing whether this model contributes to memory-guided orienting of attention. In particular, we asked whether global, low-spatial frequency (LSF) inputs play a dominant role in triggering contextual memories in order to facilitate the processing of the upcoming target stimulus. We explored this question over four experiments. The first experiment replicated the LSF advantage reported in perceptual discrimination tasks by showing that participants were faster and more accurate at matching a low spatial frequency version of a scene, compared to a high spatial frequency version, to its original counterpart in a forced-choice task. The subsequent three experiments tested the relative contributions of low versus high spatial frequencies during memory-guided covert spatial attention orienting tasks. Replicating the effects of memory-guided attention, pre-exposure to scenes associated with specific spatial memories for target locations (memory cues) led to higher perceptual discrimination and faster response times to identify targets embedded in the scenes. However, either high or low spatial frequency cues were equally effective; LSF signals did not selectively or preferentially contribute to the memory-driven attention benefits to performance. Our results challenge a generalized model that LSFs activate contextual memories, which in turn bias attention and facilitate perception. PMID:23776509
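Low- and high-spatial-frequency versions of a scene, of the kind used as cues in these experiments, can be sketched with simple Gaussian filtering; the cutoff (sigma) and the placeholder image are illustrative assumptions rather than the filtering used in the study.

    # Split a greyscale image into low- and high-spatial-frequency versions
    # with a Gaussian low-pass filter; sigma is an illustrative cutoff.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def split_spatial_frequencies(image, sigma=8.0):
        """Return (low_sf, high_sf) versions of a 2-D greyscale image."""
        img = image.astype(float)
        low_sf = gaussian_filter(img, sigma=sigma)
        high_sf = img - low_sf                    # residual carries the high SFs
        return low_sf, high_sf

    scene = np.random.default_rng(0).random((256, 256))   # placeholder scene
    low, high = split_spatial_frequencies(scene, sigma=8.0)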
Shimansky, Y; Saling, M; Wunderlich, D A; Bracha, V; Stelmach, G E; Bloedel, J R
1997-01-01
This study addresses the issue of the role of the cerebellum in the processing of sensory information by determining the capability of cerebellar patients to acquire and use kinesthetic cues received via the active or passive tracing of an irregular shape while blindfolded. Patients with cerebellar lesions and age-matched healthy controls were tested on four tasks: (1) learning to discriminate a reference shape from three others through the repeated tracing of the reference template; (2) reproducing the reference shape from memory by drawing blindfolded; (3) performing the same task with vision; and (4) visually recognizing the reference shape. The cues used to acquire and then to recognize the reference shape were generated under four conditions: (1) "active kinesthesia," in which cues were acquired by the blindfolded subject while actively tracing a reference template; (2) "passive kinesthesia," in which the tracing was performed while the hand was guided passively through the template; (3) "sequential vision," in which the shape was visualized by the serial exposure of small segments of its outline; and (4) "full vision," in which the entire shape was visualized. The sequential vision condition was employed to emulate the sequential way in which kinesthetic information is acquired while tracing the reference shape. The results demonstrate a substantial impairment of cerebellar patients in their capability to perceive two-dimensional irregular shapes based only on kinesthetic cues. There also is evidence that this deficit in part relates to a reduced capacity to integrate temporal sequences of sensory cues into a complete image useful for shape discrimination tasks or for reproducing the shape through drawing. Consequently, the cerebellum has an important role in this type of sensory information processing even when it is not directly associated with the execution of movements.
Cognitive processing of orientation discrimination in anisometropic amblyopia
Wang, Jianglan; Zhao, Jiao; Wang, Shoujing; Gong, Rui; Zheng, Zhong; Liu, Longqian
2017-01-01
Cognition is central to daily life, and amblyopia is associated with abnormal visual cognition. Physiological changes in the brain during cognitive processing can be indexed with event-related potentials (ERPs). The purpose of this study was therefore to use ERPs to investigate the speed and capacity of resource allocation in visual cognitive processing during an orientation discrimination task, under monocular and binocular viewing conditions, in amblyopes and normal controls, as well as between the corresponding eyes of the two groups. We also sought to investigate whether the speed and capacity of resource allocation vary with target stimuli at different spatial frequencies (3, 6 and 9 cpd) in amblyopia and normal controls, and between the corresponding eyes of the two groups. Fifteen mild to moderate anisometropic amblyopes and ten normal controls were recruited. Three-stimulus oddball paradigms with orientation discrimination tasks at the three spatial frequencies were used under monocular and binocular conditions to elicit the ERPs. Accuracy (ACC), reaction time (RT), and the latency and amplitude of the novelty P300 and P3b were measured. RT was longer for the amblyopic eye than for binocular viewing in amblyopia and for the non-dominant eye in controls. Novelty P300 amplitude was largest for the amblyopic eye, followed by the fellow eye, and smallest under binocular viewing in amblyopia; it was also larger for the amblyopic eye than for the controls' non-dominant eye, and larger for the fellow eye than for the dominant eye. P3b latency was longer for the amblyopic eye than for the fellow eye, binocular viewing in amblyopia, and the controls' non-dominant eye, and was not associated with RT in amblyopia. These results indicate that neural responses of the amblyopic eye are abnormal at the middle and late stages of cognitive processing, suggesting that the amblyopic eye needs more time, or must recruit more resources, to process the same visual task. The fellow eye and binocular viewing in amblyopia differ slightly from the dominant eye and binocular viewing in normal controls at these stages. Moreover, the extent of the amblyopic eye's abnormality did not vary across the three spatial frequencies tested. PMID:29023501
Impaired discrimination learning in interneuronal NMDAR-GluN2B mutant mice.
Brigman, Jonathan L; Daut, Rachel A; Saksida, Lisa; Bussey, Timothy J; Nakazawa, Kazu; Holmes, Andrew
2015-06-17
Previous studies have established a role for N-methyl-D-aspartate receptor (NMDAR) containing the GluN2B subunit in efficient learning behavior on a variety of tasks. Recent findings have suggested that NMDAR on GABAergic interneurons may underlie the modulation of striatal function necessary to balance efficient action with cortical excitatory input. Here we investigated how loss of GluN2B-containing NMDAR on GABAergic interneurons altered corticostriatal-mediated associative learning. Mutant mice (floxed-GluN2B×Ppp1r2-Cre) were generated to produce loss of GluN2B on forebrain interneurons and phenotyped on a touchscreen-based pairwise visual learning paradigm. We found that the mutants showed normal performance during Pavlovian and instrumental pretraining, but were significantly impaired on a discrimination learning task. Detailed analysis of the microstructure of discrimination performance revealed reduced win→stay behavior in the mutants. These results further support the role of NMDAR, and GluN2B in particular, on modulation of striatal function necessary for efficient choice behavior and suggest that NMDAR on interneurons may play a critical role in associative learning.
Picchioni, Dante; Schmidt, Kathleen C; McWhirter, Kelly K; Loutaev, Inna; Pavletic, Adriana J; Speer, Andrew M; Zametkin, Alan J; Miao, Ning; Bishu, Shrinivas; Turetsky, Kate M; Morrow, Anne S; Nadel, Jeffrey L; Evans, Brittney C; Vesselinovitch, Diana M; Sheeler, Carrie A; Balkin, Thomas J; Smith, Carolyn B
2018-05-15
If protein synthesis during sleep is required for sleep-dependent memory consolidation, we might expect rates of cerebral protein synthesis (rCPS) to increase during sleep in the local brain circuits that support performance on a particular task following training on that task. To measure circuit-specific brain protein synthesis during a daytime nap opportunity, we used the L-[1-(11)C]leucine positron emission tomography (PET) method with simultaneous polysomnography. We trained subjects on the visual texture discrimination task (TDT). This was followed by a nap opportunity during the PET scan, and we retested them later in the day after the scan. The TDT is considered retinotopically specific, so we hypothesized that higher rCPS in primary visual cortex would be observed in the trained hemisphere compared to the untrained hemisphere in subjects who were randomized to a sleep condition. Our results indicate that the changes in rCPS in primary visual cortex depended on whether subjects were in the wakefulness or sleep condition but were independent of the side of the visual field trained. That is, only in the subjects randomized to sleep, rCPS in the right primary visual cortex was higher than the left regardless of side trained. Other brain regions examined were not so affected. In the subjects who slept, performance on the TDT improved similarly regardless of the side trained. Results indicate a regionally selective and sleep-dependent effect that occurs with improved performance on the TDT.
Single neuron activity and theta modulation in postrhinal cortex during visual object discrimination
Furtak, Sharon C.; Ahmed, Omar J.; Burwell, Rebecca D.
2012-01-01
Postrhinal cortex, the rodent homolog of the primate parahippocampal cortex, processes spatial and contextual information. Our hypothesis of postrhinal function is that it serves to encode context, in part, by forming representations that link objects to places. We recorded postrhinal neuronal activity and local field potentials (LFPs) in rats trained on a two-choice, visual discrimination task. As predicted, a large proportion of postrhinal neurons signaled object-location conjunctions. In addition, postrhinal LFPs exhibited strong oscillatory rhythms in the theta band, and many postrhinal neurons were phase locked to theta. Although correlated with running speed, theta power was lower than predicted by speed alone immediately before and after choice. However, theta power was significantly increased following incorrect decisions, suggesting a role in signaling error. These findings provide evidence that postrhinal cortex encodes representations that link objects to places and suggest that postrhinal theta modulation extends to cognitive as well as spatial functions. PMID:23217745
Subbotsky, Eugene; Slater, Elizabeth
2011-04-01
Six- and nine-yr.-old children (n=28 of each) were divided into equal experimental and control groups. The experimental groups were shown a film with a magical theme, and the control groups were shown a film with a nonmagical theme. All groups then were presented with a choice task requiring them to discriminate between ordinary and fantastic visual displays on a computer screen. Statistical analyses indicated that mean scores for correctly identifying the ordinary and fantastic displays were significantly different between experimental and control groups. The children in the experimental groups who watched the magical film had significantly higher scores on correct identifications than children in the control groups who watched the nonmagical film for both age groups. The results suggest that watching films with a magical theme might enhance children's sensitivity toward the fantasy/reality distinction.
Influence of Coactors on Saccadic and Manual Responses
Niehorster, Diederick C.; Jarodzka, Halszka; Holmqvist, Kenneth
2017-01-01
Two experiments were conducted to investigate the effects of coaction on saccadic and manual responses. Participants performed the experiments either in a solitary condition or in a group of coactors who performed the same tasks at the same time. In Experiment 1, participants completed a pro- and antisaccade task where they were required to make saccades towards (prosaccades) or away (antisaccades) from a peripheral visual stimulus. In Experiment 2, participants performed a visual discrimination task that required both making a saccade towards a peripheral stimulus and making a manual response in reaction to the stimulus’s orientation. The results showed that performance of stimulus-driven responses was independent of the social context, while volitionally controlled responses were delayed by the presence of coactors. These findings are in line with studies assessing the effect of attentional load on saccadic control during dual-task paradigms. In particular, antisaccades – but not prosaccades – were influenced by the type of social context. Additionally, the number of coactors present in the group had a moderating effect on both saccadic and manual responses. The results support an attentional view of social influences. PMID:28321288
Sung, Kyongje
2008-12-01
Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the results suggested parallel rather than serial processing, even though the tasks produced significant set-size effects. Serial processing was produced only in a condition with a difficult discrimination and a very large set-size effect. The results support C. Bundesen's (1990) claim that an extreme set-size effect leads to serial processing. Implications for parallel models of visual selection are discussed.
Robotic wheelchair commanded by SSVEP, motor imagery and word generation.
Bastos, Teodiano F; Muller, Sandra M T; Benevides, Alessandro B; Sarcinelli-Filho, Mario
2011-01-01
This work presents a robotic wheelchair that can be commanded by a Brain Computer Interface (BCI) through Steady-State Visual Evoked Potentials (SSVEP), Motor Imagery and Word Generation. When using SSVEP, a statistical test is used to extract the evoked response and a decision tree discriminates the stimulus frequency, allowing volunteers to operate the BCI online, with hit rates varying from 60% to 100%, and to guide the robotic wheelchair through an indoor environment. When using motor imagery and word generation, three mental tasks are used: imagination of left-hand movement, imagination of right-hand movement, and imagined generation of words starting with the same random letter. Linear Discriminant Analysis is used to recognize the mental tasks, with feature extraction based on the Power Spectral Density. EEG channels and frequencies are chosen using the symmetric Kullback-Leibler divergence, and a reclassification model is proposed to stabilize the classifier.
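A minimal Python sketch of the kind of pipeline this abstract describes (Welch power spectral density features fed to Linear Discriminant Analysis); this is not the authors' implementation, and the sampling rate, channel count, frequency band and the simulated epochs and labels are assumptions for illustration only.

```python
# Sketch of a PSD + LDA mental-task classifier (illustrative only, not the authors' code).
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 250  # Hz, assumed sampling rate
rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 8, 2 * fs))   # placeholder data: 120 trials, 8 channels, 2 s
labels = rng.integers(0, 3, size=120)            # three mental tasks

def psd_features(trials, fs, fmin=8.0, fmax=30.0):
    """Average band power per channel from the Welch power spectral density."""
    freqs, pxx = welch(trials, fs=fs, nperseg=fs, axis=-1)
    band = (freqs >= fmin) & (freqs <= fmax)
    return pxx[..., band].mean(axis=-1)          # shape (n_trials, n_channels)

X = psd_features(epochs, fs)
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, labels, cv=5)   # chance is about 1/3 for three tasks
print(f"cross-validated accuracy: {scores.mean():.2f}")
```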
McGugin, Rankin Williams; Tanaka, James W.; Lebrecht, Sophie; Tarr, Michael J.; Gauthier, Isabel
2010-01-01
This study explores the effect of individuation training on the acquisition of race-specific expertise. First, we investigated whether practice individuating other-race faces yields improvement in perceptual discrimination for novel faces of that race. Second, we asked whether there was similar improvement for novel faces of a different race for which participants received equal practice, but in an orthogonal task that did not require individuation. Caucasian participants were trained to individuate faces of one race (African American or Hispanic) and to make difficult eye luminance judgments on faces of the other race. By equating these tasks we are able to rule out raw experience, visual attention or performance/success-induced positivity as the critical factors that produce race-specific improvements. These results indicate that individuation practice is one mechanism through which cognitive, perceptual, and/or social processes promote growth of the own-race face recognition advantage. PMID:21429002
Motion perception tasks as potential correlates to driving difficulty in the elderly
NASA Astrophysics Data System (ADS)
Raghuram, A.; Lakshminarayanan, V.
2006-09-01
Changes in demographics indicate that the population older than 65 is on the rise because of the aging of the ‘baby boom’ generation. This aging trend and driving-related accident statistics reveal the need for procedures and tests that would assess the driving ability of older adults and predict whether they would be safe or unsafe drivers. The literature shows that an attention-based test called the useful field of view (UFOV) was a significant predictor of accident rates compared with other visual function tests. The present study qualitatively evaluates motion perception tasks as potential visual perceptual correlates for screening elderly drivers who might have difficulty driving. Data were collected from 15 older subjects with a mean age of 71. Motion perception tasks included speed discrimination with radial and lamellar motion, time to collision using prediction motion, and estimation of the direction of heading. A motion index score, indicative of performance on all of the above-mentioned motion tasks, was calculated. Visual attention was assessed using the UFOV. A driving habit questionnaire was also administered for self-report of driving difficulties and accident rates. A qualitative trend based on frequency distributions shows that thresholds on the motion perception tasks were successful in identifying subjects who reported difficulty in certain aspects of driving and who had had accidents. The correlation between UFOV and motion index scores was not significant, indicating that the two paradigms probably tap different aspects of visual information processing that are crucial to driving behaviour. Together, UFOV and motion perception tasks may be a better predictor for identifying at-risk or safe drivers than either one alone.
Six- and 9-Month-Old Infants Discriminate between Goals Despite Similar Action Patterns
ERIC Educational Resources Information Center
Marsh, Heidi L.; Stavropoulos, Jennifer; Nienhuis, Tom; Legerstee, Maria
2010-01-01
Behne, Carpenter, Call, and Tomasello (2005) showed that 9- to 18-month-olds, but not 6-month-olds, differentiated between people who were unwilling and unable to share toys. As the outcome of the two tasks is the same (i.e., the toy is not shared), the infants must respond to the different goals of the actor. However, visual habituation paradigms…
Intermanual Transfer of Shapes in Preterm Human Infants from 33 to 34 + 6 Weeks Postconceptional Age
ERIC Educational Resources Information Center
Lejeune, Fleur; Marcus, Leila; Berne-Audeoud, Frederique; Streri, Arlette; Debillon, Thierry; Gentaz, Edouard
2012-01-01
This study investigated the ability of preterm infants to learn an object shape with one hand and discriminate a new shape in the opposite hand (without visual control). Twenty-four preterm infants between 33 and 34 + 6 gestational weeks received a tactile habituation task with either their right or left hand followed by a tactile discrimination…
Goard, Michael J; Pho, Gerald N; Woodson, Jonathan; Sur, Mriganka
2016-08-04
Mapping specific sensory features to future motor actions is a crucial capability of mammalian nervous systems. We investigated the role of visual (V1), posterior parietal (PPC), and frontal motor (fMC) cortices for sensorimotor mapping in mice during performance of a memory-guided visual discrimination task. Large-scale calcium imaging revealed that V1, PPC, and fMC neurons exhibited heterogeneous responses spanning all task epochs (stimulus, delay, response). Population analyses demonstrated unique encoding of stimulus identity and behavioral choice information across regions, with V1 encoding stimulus, fMC encoding choice even early in the trial, and PPC multiplexing the two variables. Optogenetic inhibition during behavior revealed that all regions were necessary during the stimulus epoch, but only fMC was required during the delay and response epochs. Stimulus identity can thus be rapidly transformed into behavioral choice, requiring V1, PPC, and fMC during the transformation period, but only fMC for maintaining the choice in memory prior to execution.
Environmental influences on neural systems of relational complexity
Kalbfleisch, M. Layne; deBettencourt, Megan T.; Kopperman, Rebecca; Banasiak, Meredith; Roberts, Joshua M.; Halavi, Maryam
2013-01-01
Constructivist learning theory contends that we construct knowledge by experience and that environmental context influences learning. To explore this principle, we examined the cognitive process of relational complexity (RC), defined as the number of visual dimensions considered during problem solving on a matrix reasoning task, and itself a well-documented measure of mature reasoning capacity. We sought to determine how the visual environment influences RC by examining the influence of color and visual contrast on RC in a neuroimaging task. To specify the contributions of sensory demand and relational integration to reasoning, our participants performed a non-verbal matrix task comprised of color, no-color line, or black-white visual contrast conditions parametrically varied by complexity (relations 0, 1, 2). The use of matrix reasoning is ecologically valid for its psychometric relevance and for its potential to link the processing of psychophysically specific visual properties with various levels of RC during reasoning. The role of these elements is important because matrix tests assess intellectual aptitude based on these seemingly context-less exercises. This experiment is a first step toward examining the psychophysical underpinnings of performance on these types of problems. The importance of this is increased in light of recent evidence that intelligence can be linked to visual discrimination. We submit three main findings. First, color and black-white visual contrast (BWVC) add demand at a basic sensory level, but contributions from color and from BWVC are dissociable in cortex such that color engages a “reasoning heuristic” and BWVC engages a “sensory heuristic.” Second, color supports contextual sense-making by boosting salience, resulting in faster problem solving. Lastly, when visual complexity reaches 2-relations, color and visual contrast relinquish salience to other dimensions of problem solving. PMID:24133465
Dogs can discriminate human smiling faces from blank expressions.
Nagasawa, Miho; Murai, Kensuke; Mogi, Kazutaka; Kikusui, Takefumi
2011-07-01
Dogs have a unique ability to understand visual cues from humans. We investigated whether dogs can discriminate between human facial expressions. Photographs of human faces were used to test nine pet dogs in two-choice discrimination tasks. The training phases involved each dog learning to discriminate between a set of photographs of their owner's smiling and blank face. Of the nine dogs, five fulfilled these criteria and were selected for test sessions. In the test phase, 10 sets of photographs of the owner's smiling and blank face, which had previously not been seen by the dog, were presented. The dogs selected the owner's smiling face significantly more often than expected by chance. In subsequent tests, 10 sets of smiling and blank face photographs of 20 persons unfamiliar to the dogs were presented (10 males and 10 females). There was no statistical difference between the accuracy in the case of the owners and that in the case of unfamiliar persons with the same gender as the owner. However, the accuracy was significantly lower in the case of unfamiliar persons of the opposite gender to that of the owner, than with the owners themselves. These results suggest that dogs can learn to discriminate human smiling faces from blank faces by looking at photographs. Although it remains unclear whether dogs have human-like systems for visual processing of human facial expressions, the ability to learn to discriminate human facial expressions may have helped dogs adapt to human society.
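As a quick illustration (not the authors' analysis) of how two-choice performance like this is typically compared against chance, a binomial test can be applied to the number of correct choices; the trial counts below are invented and the call requires SciPy 1.7 or newer.

```python
# Illustrative check of two-choice performance against chance (50%); placeholder counts.
from scipy.stats import binomtest

n_trials = 20          # assumed number of test trials for one dog
n_correct = 16         # assumed number of correct (smiling-face) choices
result = binomtest(n_correct, n_trials, p=0.5, alternative='greater')
print(f"p = {result.pvalue:.4f}")   # a small p suggests above-chance discrimination
```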
Behavioral demand modulates object category representation in the inferior temporal cortex
Emadi, Nazli
2014-01-01
Visual object categorization is a critical task in our daily life. Many studies have explored category representation in the inferior temporal (IT) cortex at the level of single neurons and populations. However, it is not clear how behavioral demands modulate this category representation. Here, we recorded from single IT neurons in monkeys performing two different tasks with identical visual stimuli: passive fixation and body/object categorization. We found that category selectivity of the IT neurons was improved in the categorization task compared with the passive task, where reward was not contingent on image category. The category improvement was the result of larger rate enhancement for the preferred category and smaller response variability for both preferred and nonpreferred categories. These specific modulations in the responses of IT category neurons enhanced the signal-to-noise ratio of the neural responses to discriminate better between the preferred and nonpreferred categories. Our results provide new insight into the adaptable category representation in the IT cortex, which depends on behavioral demands. PMID:25080572
The surprisingly high human efficiency at learning to recognize faces
Peterson, Matthew F.; Abbey, Craig K.; Eckstein, Miguel P.
2009-01-01
We investigated the ability of humans to optimize face recognition performance through rapid learning of individual relevant features. We created artificial faces with discriminating visual information heavily concentrated in single features (nose, eyes, chin or mouth). In each of 2500 learning blocks a feature was randomly selected and retained over the course of four trials, during which observers identified randomly sampled, noisy face images. Observers learned the discriminating feature through indirect feedback, leading to large performance gains. Performance was compared to a learning Bayesian ideal observer, resulting in unexpectedly high learning compared to previous studies with simpler stimuli. We explore various explanations and conclude that the higher learning measured with faces cannot be driven by adaptive eye movement strategies but can be mostly accounted for by suboptimalities in human face discrimination when observers are uncertain about the discriminating feature. We show that an initial bias of humans to rely on specific features to perform the task, even though they are informed that each of the four features is equally likely to be the discriminatory feature, would lead to seemingly supra-optimal learning. We also examine the possibility of inefficient human integration of visual information across the spatially distributed facial features. Together, the results suggest that humans can show large performance improvement effects in discriminating faces as they learn to identify the feature containing the discriminatory information. PMID:19000918
Verhaeghe, Pieter-Paul; Van der Bracht, Koen; Van de Putte, Bart
2016-04-01
According to the social model of disability, physical 'impairments' become disabilities through exclusion in social relations. An obvious form of social exclusion might be discrimination, for instance on the rental housing market. Although discrimination has detrimental health effects, very few studies have examined discrimination against people with a visual impairment. We aim to study (1) the extent of discrimination against individuals with a visual impairment on the rental housing market and (2) differences in rates of discrimination between landowners and real estate agents. We conducted correspondence tests among 268 properties on the Belgian rental housing market. Using matched tests, we compared reactions by realtors and landowners to tenants with and tenants without a visual impairment. The results show that individuals with a visual impairment are substantially discriminated against in the rental housing market: at least one in three lessors discriminate against individuals with a visual impairment. We further discern differences in the propensity toward discrimination according to the type of lessor. Private landlords are at least twice as likely as real estate agents to discriminate against tenants with a visual impairment. At the same time, realtors still discriminate against one in five tenants with a visual impairment. This study shows the substantial discrimination against people with a visual impairment. Given the important consequences discrimination might have for physical and mental health, further research into this topic is needed. Copyright © 2016 Elsevier Inc. All rights reserved.
Yeari, Menahem; Isser, Michal; Schiff, Rachel
2017-07-01
A controversy has recently developed regarding the hypothesis that developmental dyslexia may be caused, in some cases, by a reduced visual attention span (VAS). To examine this hypothesis, independent of phonological abilities, researchers tested the ability of dyslexic participants to recognize arrays of unfamiliar visual characters. Employing this test, findings were rather equivocal: dyslexic participants exhibited poor performance in some studies but normal performance in others. The present study explored four methodological differences revealed between the two sets of studies that might underlie their conflicting results. Specifically, in two experiments we examined whether a VAS deficit is (a) specific to recognition of multi-character arrays as wholes rather than of individual characters within arrays, (b) specific to characters' position within arrays rather than to characters' identity, or revealed only under a higher attention load due to (c) low-discriminable characters, and/or (d) characters' short exposure. Furthermore, in this study we examined whether pure dyslexic participants who do not have attention disorder exhibit a reduced VAS. Although comorbidity of dyslexia and attention disorder is common and the ability to sustain attention for a long time plays a major role in the visual recognition task, the presence of attention disorder was neither evaluated nor ruled out in previous studies. Findings did not reveal any differences between the performance of dyslexic and control participants on eight versions of the visual recognition task. These findings suggest that pure dyslexic individuals do not present a reduced visual attention span.
Yang, Yi; Tokita, Midori; Ishiguchi, Akira
2018-01-01
A number of studies revealed that our visual system can extract different types of summary statistics, such as the mean and variance, from sets of items. Although the extraction of such summary statistics has been studied well in isolation, the relationship between these statistics remains unclear. In this study, we explored this issue using an individual differences approach. Observers viewed illustrations of strawberries and lollypops varying in size or orientation and performed four tasks in a within-subject design, namely mean and variance discrimination tasks with size and orientation domains. We found that the performances in the mean and variance discrimination tasks were not correlated with each other and demonstrated that extractions of the mean and variance are mediated by different representation mechanisms. In addition, we tested the relationship between performances in size and orientation domains for each summary statistic (i.e. mean and variance) and examined whether each summary statistic has distinct processes across perceptual domains. The results illustrated that statistical summary representations of size and orientation may share a common mechanism for representing the mean and possibly for representing variance. Introspections for each observer performing the tasks were also examined and discussed.
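The individual-differences logic described here reduces to correlating per-observer performance across tasks. The following sketch is only an illustration of that step under fabricated threshold values; the study's actual measures and analysis may differ.

```python
# Sketch of an individual-differences correlation between per-observer thresholds
# from the mean- and variance-discrimination tasks (all values are placeholders).
import numpy as np
from scipy.stats import pearsonr

mean_thresholds = np.array([0.12, 0.18, 0.09, 0.21, 0.15, 0.11, 0.19, 0.14])
var_thresholds  = np.array([0.33, 0.25, 0.41, 0.28, 0.37, 0.30, 0.26, 0.39])

r, p = pearsonr(mean_thresholds, var_thresholds)
print(f"r = {r:.2f}, p = {p:.3f}")   # a weak correlation would suggest distinct mechanisms
```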
Neuronal Assemblies Evidence Distributed Interactions within a Tactile Discrimination Task in Rats
Deolindo, Camila S.; Kunicki, Ana C. B.; da Silva, Maria I.; Lima Brasil, Fabrício; Moioli, Renan C.
2018-01-01
Accumulating evidence suggests that neural interactions are distributed and relate to animal behavior, but many open questions remain. The neural assembly hypothesis, formulated by Hebb, states that synchronously active single neurons may transiently organize into functional neural circuits—neuronal assemblies (NAs)—and that would constitute the fundamental unit of information processing in the brain. However, the formation, vanishing, and temporal evolution of NAs are not fully understood. In particular, characterizing NAs in multiple brain regions over the course of behavioral tasks is relevant to assess the highly distributed nature of brain processing. In the context of NA characterization, active tactile discrimination tasks with rats are elucidative because they engage several cortical areas in the processing of information that are otherwise masked in passive or anesthetized scenarios. In this work, we investigate the dynamic formation of NAs within and among four different cortical regions in long-range fronto-parieto-occipital networks (primary somatosensory, primary visual, prefrontal, and posterior parietal cortices), simultaneously recorded from seven rats engaged in an active tactile discrimination task. Our results first confirm that task-related neuronal firing rate dynamics in all four regions is significantly modulated. Notably, a support vector machine decoder reveals that neural populations contain more information about the tactile stimulus than the majority of single neurons alone. Then, over the course of the task, we identify the emergence and vanishing of NAs whose participating neurons are shown to contain more information about animal behavior than randomly chosen neurons. Taken together, our results further support the role of multiple and distributed neurons as the functional unit of information processing in the brain (NA hypothesis) and their link to active animal behavior. PMID:29375324
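As a rough sketch of the population-versus-single-neuron decoding comparison mentioned above (not the authors' pipeline), a linear support vector machine can be cross-validated on simulated firing rates; the trial counts, neuron counts and rate values are assumptions.

```python
# Compare cross-validated decoding from a whole population vs. single neurons.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_neurons = 200, 30
stimulus = rng.integers(0, 2, size=n_trials)                 # two stimulus categories
rates = rng.poisson(5.0, size=(n_trials, n_neurons)).astype(float)
rates[:, :5] += 2.0 * stimulus[:, None]                       # a few weakly informative neurons

svm = SVC(kernel='linear')
pop_acc = cross_val_score(svm, rates, stimulus, cv=5).mean()
single_acc = [cross_val_score(svm, rates[:, [i]], stimulus, cv=5).mean()
              for i in range(n_neurons)]
print(f"population: {pop_acc:.2f}, best single neuron: {max(single_acc):.2f}")
```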
Brooks, Cassandra J.; Chan, Yu Man; Anderson, Andrew J.; McKendrick, Allison M.
2018-01-01
Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information. PMID:29867415
Vitu, Françoise; Engbert, Ralf; Kliegl, Reinhold
2016-01-01
Saccades to single targets in peripheral vision are typically characterized by an undershoot bias. Putting this bias to a test, Kapoula [1] used a paradigm in which observers were presented with two different sets of target eccentricities that partially overlapped each other. Her data were suggestive of a saccadic range effect (SRE): There was a tendency for saccades to overshoot close targets and undershoot far targets in a block, suggesting that there was a response bias towards the center of eccentricities in a given block. Our Experiment 1 was a close replication of the original study by Kapoula [1]. In addition, we tested whether the SRE is sensitive to top-down requirements associated with the task, and we also varied the target presentation duration. In Experiments 1 and 2, we expected to replicate the SRE for a visual discrimination task. The simple visual saccade-targeting task in Experiment 3, entailing minimal top-down influence, was expected to elicit a weaker SRE. Voluntary saccades to remembered target locations in Experiment 3 were expected to elicit the strongest SRE. Contrary to these predictions, we did not observe a SRE in any of the tasks. Our findings complement the results reported by Gillen et al. [2] who failed to find the effect in a saccade-targeting task with a very brief target presentation. Together, these results suggest that unlike arm movements, saccadic eye movements are not biased towards making saccades of a constant, optimal amplitude for the task. PMID:27658191
Alpha-Band Rhythms in Visual Task Performance: Phase-Locking by Rhythmic Sensory Stimulation
de Graaf, Tom A.; Gross, Joachim; Paterson, Gavin; Rusch, Tessa; Sack, Alexander T.; Thut, Gregor
2013-01-01
Oscillations are an important aspect of neuronal activity. Interestingly, oscillatory patterns are also observed in behaviour, such as in visual performance measures after the presentation of a brief sensory event in the visual or another modality. These oscillations in visual performance cycle at the typical frequencies of brain rhythms, suggesting that perception may be closely linked to brain oscillations. We here investigated this link for a prominent rhythm of the visual system (the alpha-rhythm, 8–12 Hz) by applying rhythmic visual stimulation at alpha-frequency (10.6 Hz), known to lead to a resonance response in visual areas, and testing its effects on subsequent visual target discrimination. Our data show that rhythmic visual stimulation at 10.6 Hz: 1) has specific behavioral consequences, relative to stimulation at control frequencies (3.9 Hz, 7.1 Hz, 14.2 Hz), and 2) leads to alpha-band oscillations in visual performance measures, that 3) correlate in precise frequency across individuals with resting alpha-rhythms recorded over parieto-occipital areas. The most parsimonious explanation for these three findings is entrainment (phase-locking) of ongoing perceptually relevant alpha-band brain oscillations by rhythmic sensory events. These findings are in line with occipital alpha-oscillations underlying periodicity in visual performance, and suggest that rhythmic stimulation at frequencies of intrinsic brain-rhythms can be used to reveal influences of these rhythms on task performance to study their functional roles. PMID:23555873
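One simple way to look for an oscillation in visual performance at the entrained frequency is to fit a fixed-frequency sinusoid to accuracy as a function of the stimulation-to-target interval. The sketch below is only illustrative; the lag grid, accuracy values and noise level are assumptions, and the authors' actual analysis may differ.

```python
# Fit a fixed 10.6 Hz sinusoid to simulated accuracy-by-lag data (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

f_stim = 10.6                                   # entraining frequency (Hz)
lags = np.arange(0.00, 0.40, 0.02)              # target onsets after stimulation (s), assumed grid
rng = np.random.default_rng(2)
accuracy = 0.75 + 0.05 * np.cos(2 * np.pi * f_stim * lags + 1.0) \
           + 0.01 * rng.standard_normal(lags.size)

def fixed_freq_sine(t, amp, phase, offset):
    return offset + amp * np.cos(2 * np.pi * f_stim * t + phase)

(amp, phase, offset), _ = curve_fit(fixed_freq_sine, lags, accuracy, p0=[0.05, 0.0, 0.75])
print(f"fitted amplitude at {f_stim} Hz: {amp:.3f}")   # nonzero amplitude suggests phasic modulation
```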
Hassanshahi, Amin; Shafeie, Seyed Ali; Fatemi, Iman; Hassanshahi, Elham; Allahtavakoli, Mohammad; Shabani, Mohammad; Roohbakhsh, Ali; Shamsizadeh, Ali
2017-06-01
Wireless internet (Wi-Fi) electromagnetic waves (2.45 GHz) have widespread usage almost everywhere, especially in our homes. Considering the recent reports about some hazardous effects of Wi-Fi signals on the nervous system, this study aimed to investigate the effect of 2.4 GHz Wi-Fi radiation on multisensory integration in rats. This experimental study was done on 80 male Wistar rats that were allocated into exposure and sham groups. Wi-Fi exposure to 2.4 GHz microwaves [in Service Set Identifier mode (23.6 dBm and 3% for power and duty cycle, respectively)] was done for 30 days (12 h/day). Cross-modal visual-tactile object recognition (CMOR) task was performed by four variations of spontaneous object recognition (SOR) test including standard SOR, tactile SOR, visual SOR, and CMOR tests. A discrimination ratio was calculated to assess the preference of animal to the novel object. The expression levels of M1 and GAT1 mRNA in the hippocampus were assessed by quantitative real-time RT-PCR. Results demonstrated that rats in Wi-Fi exposure groups could not discriminate significantly between the novel and familiar objects in any of the standard SOR, tactile SOR, visual SOR, and CMOR tests. The expression of M1 receptors increased following Wi-Fi exposure. In conclusion, results of this study showed that chronic exposure to Wi-Fi electromagnetic waves might impair both unimodal and cross-modal encoding of information.
Induced theta oscillations as biomarkers for alcoholism.
Andrew, Colin; Fein, George
2010-03-01
Studies have suggested that non-phase-locked event-related oscillations (ERO) in target stimulus processing might provide biomarkers of alcoholism. This study investigates the discriminatory power of non-phase-locked oscillations in a group of long-term abstinent alcoholics (LTAAs) and non-alcoholic controls (NACs). EEGs were recorded from 48 LTAAs and 48 age and gender comparable NACs during rest with eyes open (EO) and during the performance of a three-condition visual target detection task. The data were analyzed to extract resting power, ERP amplitude and non-phase-locked ERO power measures. Data were analyzed using MANCOVA to determine the discriminatory power of induced theta ERO vs. resting theta power vs. P300 ERP measures in differentiating the LTAA and NAC groups. Both groups showed significantly more theta power in the pre-stimulus reference period of the task vs. the resting EO condition. The resting theta power did not discriminate the groups, while the LTAAs showed significantly less pre-stimulus theta power vs. the NACs. The LTAAs showed a significantly larger theta event-related synchronization (ERS) to the target stimulus vs. the NACs, even after accounting for pre-stimulus theta power levels. ERS to non-target stimuli showed smaller induced oscillations vs. target stimuli with no group differences. Alcohol use variables, a family history of alcohol problems, and the duration of alcohol abstinence were not associated with any theta power measures. While reference theta power in the task and induced theta oscillations to target stimuli both discriminate LTAAs and NACs, induced theta oscillations better discriminate the groups. Induced theta power measures are also more powerful and independent group discriminators than the P3b amplitude. Induced frontal theta oscillations promise to provide biomarkers of alcoholism that complement the well-established P300 ERP discriminators.
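A minimal sketch of one common way to isolate non-phase-locked (induced) power follows: subtract the trial-averaged ERP from each trial, band-pass in the theta range, and take the squared Hilbert envelope. The sampling rate, filter settings and single-channel data are assumptions, not the study's time-frequency pipeline.

```python
# Separate induced (non-phase-locked) theta power from the evoked response.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 256                                            # assumed sampling rate (Hz)
rng = np.random.default_rng(3)
trials = rng.standard_normal((60, fs))              # 60 single trials, 1 s each, one channel

erp = trials.mean(axis=0)                           # phase-locked (evoked) component
induced = trials - erp                              # remove the evoked part from every trial

b, a = butter(4, [4, 8], btype='bandpass', fs=fs)   # theta band, 4-8 Hz
theta = filtfilt(b, a, induced, axis=-1)
induced_power = np.abs(hilbert(theta, axis=-1)) ** 2
print(induced_power.mean())                          # mean induced theta power across trials
```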
Quantifying a threat: Evidence of a numeric processing bias.
Hamamouche, Karina A; Niemi, Laura; Cordes, Sara
2017-06-01
Humans prioritize the processing of threats over neutral stimuli; thus, not surprisingly, the presence of threats has been shown to alter performance on both perceptual and cognitive tasks. Yet whether the quantification process is disrupted in the presence of threat is unknown. In three experiments, we examined numerical estimation and discrimination abilities in adults in the context of threatening (spiders) and non-threatening (e.g., flowers) stimuli. Results of the numerical estimation task (Experiment 1) showed that participants underestimated the number of threatening relative to neutral stimuli. Additionally, numerical discrimination data reveal that participants' abilities to discriminate between the number of entities in two arrays were worsened when the arrays consisted of threatening entities versus neutral entities (Experiment 2). However, discrimination abilities were enhanced when threatening content was presented immediately before neutral dot arrays (Experiment 3). Together, these studies suggest that threats impact our processing of visual numerosity via changes in attention to numerical stimuli, and that the nature of the threat (intrinsic or extrinsic to the stimulus) is vital in determining the direction of this impact. Intrinsic threat content in stimuli impedes its own quantification; yet threat that is extrinsic to the sets to be enumerated enhances numerical processing for subsequently presented neutral stimuli. Copyright © 2017 Elsevier B.V. All rights reserved.
Discrimination of single features and conjunctions by children.
Taylor, M J; Chevalier, H; Lobaugh, N J
2003-12-01
Stimuli that are discriminated by a conjunction of features can show more rapid early processing in adults. To determine how this facilitation effect develops, the processing of visual features and their conjunction was examined in 7-12-year-old children. The children completed a series of tasks in which they made a target-non-target judgement as a function of shape only, colour only or shape and colour features, while event-related potentials were recorded. To assess early stages of feature processing the posteriorly distributed P1 and N1 were analysed. Attentional effects were seen for both components. P1 had a shorter latency and P1 and N1 had larger amplitudes to targets than non-targets. Task effects were driven by the conjunction task. P1 amplitude was largest, while N1 amplitude was smallest for the conjunction targets. In contrast to larger left-sided N1 in adults, N1 had a symmetrical distribution in the children. N1 latency was shortest for the conjunction targets in the 9-10-year olds and 11-12-year olds, demonstrating facilitation in children, but which continued to develop over the pre-teen years. These data underline the sensitivity of early stages of processing to both top-down modulations and the parallel binding of non-spatial features in young children. Furthermore, facilitation effects, increased speed of processing when features need to be conjoined, mature in mid-childhood, arguing against a hierarchical model of visual processing, and supporting a rapid, integrated facilitative model.
Haegens, Saskia; Händel, Barbara F; Jensen, Ole
2011-04-06
The brain receives a rich flow of information which must be processed according to behavioral relevance. How is the state of the sensory system adjusted to up- or downregulate processing according to anticipation? We used magnetoencephalography to investigate whether prestimulus alpha band activity (8-14 Hz) reflects allocation of attentional resources in the human somatosensory system. Subjects performed a tactile discrimination task where a visual cue directed attention to their right or left hand. The strength of attentional modulation was controlled by varying the reliability of the cue in three experimental blocks (100%, 75%, or 50% valid cueing). While somatosensory prestimulus alpha power lateralized strongly with a fully predictive cue (100%), lateralization was decreased with lower cue reliability (75%) and virtually absent if the cue had no predictive value at all (50%). Importantly, alpha lateralization influenced the subjects' behavioral performance positively: both accuracy and speed of response improved with the degree of alpha lateralization. This study demonstrates that prestimulus alpha lateralization in the somatosensory system behaves similarly to posterior alpha activity observed in visual attention tasks. Our findings extend the notion that alpha band activity is involved in shaping the functional architecture of the working brain by determining both the engagement and disengagement of specific regions: the degree of anticipation modulates the alpha activity in sensory regions in a graded manner. Thus, the alpha activity is under top-down control and seems to play an important role for setting the state of sensory regions to optimize processing.
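The lateralization measure described here is commonly expressed as an index contrasting ipsilateral and contralateral prestimulus alpha power. The sketch below shows that index and its across-subject correlation with accuracy under fabricated values; it is not the authors' exact computation.

```python
# Alpha lateralization index (ALI) and its relation to behavioral accuracy (placeholder data).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n_subjects = 14
ipsi_alpha = 1.0 + 0.2 * rng.random(n_subjects)      # prestimulus alpha power, ipsilateral sensors
contra_alpha = 0.8 + 0.2 * rng.random(n_subjects)    # contralateral sensors (typically suppressed)
accuracy = 0.7 + 0.3 * rng.random(n_subjects)

ali = (ipsi_alpha - contra_alpha) / (ipsi_alpha + contra_alpha)
r, p = pearsonr(ali, accuracy)
print(f"ALI vs accuracy: r = {r:.2f}, p = {p:.3f}")
```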
Zeleznikow-Johnston, Ariel; Burrows, Emma L; Renoir, Thibault; Hannan, Anthony J
2017-05-01
Environmental enrichment (EE) is any positive modification of the 'standard housing' (SH) conditions in which laboratory animals are typically held, usually involving increased opportunity for cognitive stimulation and physical activity. EE has been reported to enhance baseline performance of wild-type animals on traditional cognitive behavioural tasks. Recently, touchscreen operant testing chambers have emerged as a way of performing rodent cognitive assays, providing greater reproducibility, translatability and automatability. Cognitive tests in touchscreen chambers are performed over numerous trials and thus experimenters have the power to detect subtle enhancements in performance. We used touchscreens to analyse the effects of EE on reversal learning, visual discrimination and hippocampal-dependent spatial pattern separation and working memory. We hypothesized that EE would enhance the performance of mice on cognitive touchscreen tasks. Our hypothesis was partially supported in that EE induced enhancements in cognitive flexibility as observed in visual discrimination and reversal learning improvements. However, no other significant effects of EE on cognitive performance were observed. EE decreased the activity level of mice in the touchscreen chambers, which may influence the enrichment level of the animals. Although we did not see enhancements on all hypothesized parameters, our testing paradigm is capable of detecting EE-induced improved cognitive flexibility in mice, which has implications for both understanding the mechanisms of EE and improving screening of putative cognitive-enhancing therapeutics. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sound segregation via embedded repetition is robust to inattention.
Masutomi, Keiko; Barascud, Nicolas; Kashino, Makio; McDermott, Josh H; Chait, Maria
2016-03-01
The segregation of sound sources from the mixture of sounds that enters the ear is a core capacity of human hearing, but the extent to which this process is dependent on attention remains unclear. This study investigated the effect of attention on the ability to segregate sounds via repetition. We utilized a dual task design in which stimuli to be segregated were presented along with stimuli for a "decoy" task that required continuous monitoring. The task to assess segregation presented a target sound 10 times in a row, each time concurrent with a different distractor sound. McDermott, Wrobleski, and Oxenham (2011) demonstrated that repetition causes the target sound to be segregated from the distractors. Segregation was queried by asking listeners whether a subsequent probe sound was identical to the target. A control task presented similar stimuli but probed discrimination without engaging segregation processes. We present results from 3 different decoy tasks: a visual multiple object tracking task, a rapid serial visual presentation (RSVP) digit encoding task, and a demanding auditory monitoring task. Load was manipulated by using high- and low-demand versions of each decoy task. The data provide converging evidence of a small effect of attention that is nonspecific, in that it affected the segregation and control tasks to a similar extent. In all cases, segregation performance remained high despite the presence of a concurrent, objectively demanding decoy task. The results suggest that repetition-based segregation is robust to inattention. (c) 2016 APA, all rights reserved).
Temporally flexible feedback signal to foveal cortex for peripheral object recognition
Fan, Xiaoxu; Wang, Lan; Shao, Hanyu; Kersten, Daniel; He, Sheng
2016-01-01
Recent studies have shown that information from peripherally presented images is present in the human foveal retinotopic cortex, presumably because of feedback signals. We investigated this potential feedback signal by presenting noise at the fovea at different object–noise stimulus onset asynchronies (SOAs) while subjects performed a discrimination task on peripheral objects. Results revealed a selective impairment of performance when foveal noise was presented at 250-ms SOA, but only for tasks that required comparing objects’ spatial details, suggesting a task- and stimulus-dependent foveal processing mechanism. Critically, the temporal window of foveal processing was shifted when mental rotation was required for the peripheral objects, indicating that the foveal retinotopic processing is not automatically engaged at a fixed time following peripheral stimulation; rather, it occurs at a stage when detailed information is required. Moreover, fMRI measurements using multivoxel pattern analysis showed that both image and object category-relevant information of peripheral objects was represented in the foveal cortex. Taken together, our results support the hypothesis of a temporally flexible feedback signal to the foveal retinotopic cortex when discriminating objects in the visual periphery. PMID:27671651
Perceptual training yields rapid improvements in visually impaired youth
Nyquist, Jeffrey B.; Lappin, Joseph S.; Zhang, Ruyuan; Tadin, Duje
2016-01-01
Visual function demands coordinated responses to information over a wide field of view, involving both central and peripheral vision. Visually impaired individuals often seem to underutilize peripheral vision, even in absence of obvious peripheral deficits. Motivated by perceptual training studies with typically sighted adults, we examined the effectiveness of perceptual training in improving peripheral perception of visually impaired youth. Here, we evaluated the effectiveness of three training regimens: (1) an action video game, (2) a psychophysical task that combined attentional tracking with a spatially and temporally unpredictable motion discrimination task, and (3) a control video game. Training with both the action video game and modified attentional tracking yielded improvements in visual performance. Training effects were generally larger in the far periphery and appear to be stable 12 months after training. These results indicate that peripheral perception might be under-utilized by visually impaired youth and that this underutilization can be improved with only ~8 hours of perceptual training. Moreover, the similarity of improvements following attentional tracking and action video-game training suggest that well-documented effects of action video-game training might be due to the sustained deployment of attention to multiple dynamic targets while concurrently requiring rapid attending and perception of unpredictable events. PMID:27901026
Feature extraction with deep neural networks by a generalized discriminant analysis.
Stuhlsatz, André; Lippel, Jens; Zielke, Thomas
2012-04-01
We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.
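At the core of LDA-style objectives is a Fisher criterion computed from between- and within-class scatter; GerDA optimizes a related criterion over DNN-transformed features. The sketch below only evaluates such a criterion on placeholder features and is not the authors' training procedure; the regularizer and simulated data are assumptions.

```python
# Fisher discriminant criterion trace(Sw^-1 Sb) on a feature matrix (illustrative only).
import numpy as np

def fisher_criterion(X, y, reg=1e-6):
    """Larger values mean better linear class separation of the features X."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))                       # within-class scatter
    Sb = np.zeros((d, d))                       # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += Xc.shape[0] * (diff @ diff.T)
    return np.trace(np.linalg.solve(Sw + reg * np.eye(d), Sb))

rng = np.random.default_rng(5)
X = rng.standard_normal((300, 10))
y = rng.integers(0, 3, size=300)
X[:, 0] += y                                    # make one dimension class-informative
print(f"Fisher criterion: {fisher_criterion(X, y):.2f}")
```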
Discrimination of face-like patterns in the giant panda (Ailuropoda melanoleuca).
Dungl, Eveline; Schratter, Dagmar; Huber, Ludwig
2008-11-01
The black-and-white pattern of the giant panda's (Ailuropoda melanoleuca) fur is a conspicuous signal and may be used for mate choice and intraspecific communication. Here the authors examined whether giant pandas have the perceptual and cognitive potential to make use of this information. Two juvenile subjects were trained on several discrimination problems in steps of increasing difficulty, whereby the stimuli to be discriminated ranged from geometric figures to pairs of differently orientated ellipses, pairs of ellipses with the same orientation but different angles, and finally panda-like eye-mask patterns that differed only subtly in shape. Not only did both subjects achieve significant levels of discrimination in all these tasks, they also remembered the discriminations for 6 months or even 1 year after the first presentation. Thus this study provided the first solid evidence of sufficient visual and cognitive potential in the giant panda to use the fur pattern or the facial masks for individual recognition, social communication and, perhaps, mate choice. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
Retinotopically specific reorganization of visual cortex for tactile pattern recognition
Cheung, Sing-Hang; Fang, Fang; He, Sheng; Legge, Gordon E.
2009-01-01
Although previous studies have shown that Braille reading and other tactile-discrimination tasks activate the visual cortex of blind and sighted people [1–5], it is not known whether this kind of cross-modal reorganization is influenced by retinotopic organization. We have addressed this question by studying S, a visually impaired adult with the rare ability to read print visually and Braille by touch. S had normal visual development until age six years, and thereafter severe acuity reduction due to corneal opacification, but no evidence of visual-field loss. Functional magnetic resonance imaging (fMRI) revealed that, in S’s early visual areas, tactile information processing activated what would be the foveal representation for normally-sighted individuals, and visual information processing activated what would be the peripheral representation. Control experiments showed that this activation pattern was not due to visual imagery. S’s high-level visual areas which correspond to shape- and object-selective areas in normally-sighted individuals were activated by both visual and tactile stimuli. The retinotopically specific reorganization in early visual areas suggests an efficient redistribution of neural resources in the visual cortex. PMID:19361999
Visual discrimination in an orangutan (Pongo pygmaeus): measuring visual preference.
Hanazuka, Yuki; Kurotori, Hidetoshi; Shimizu, Mika; Midorikawa, Akira
2012-04-01
Although previous studies have confirmed that trained orangutans visually discriminate between mammals and artificial objects, whether orangutans without operant conditioning can discriminate remains unknown. The visual discrimination ability in an orangutan (Pongo pygmaeus) with no experience in operant learning was examined using measures of visual preference. Sixteen color photographs of inanimate objects and of mammals with four legs were randomly presented to an orangutan. The results showed that the mean looking time at photographs of mammals with four legs was longer than that for inanimate objects, suggesting that the orangutan discriminated mammals with four legs from inanimate objects. The results implied that orangutans who have not experienced operant conditioning may possess the ability to discriminate visually.
Visual Learning Alters the Spontaneous Activity of the Resting Human Brain: An fNIRS Study
Niu, Haijing; Li, Hao; Sun, Li; Su, Yongming; Huang, Jing; Song, Yan
2014-01-01
Resting-state functional connectivity (RSFC) has been widely used to investigate spontaneous brain activity that exhibits correlated fluctuations. RSFC has been found to change over the developmental course and after learning. Here, we investigated whether and how visual learning modified the resting oxygenated hemoglobin (HbO) functional brain connectivity by using functional near-infrared spectroscopy (fNIRS). We demonstrate that after five days of training on an orientation discrimination task constrained to the right visual field, resting HbO functional connectivity and directed mutual interaction between high-level visual cortex and frontal/central areas involved in top-down control were significantly modified. Moreover, these changes, which correlated with the degree of perceptual learning, were not limited to the trained left visual cortex. We conclude that resting oxygenated hemoglobin functional connectivity could be used as a predictor of visual learning, supporting the involvement of high-level visual cortex and frontal/central cortex during visual perceptual learning. PMID:25243168
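As a rough illustration of the connectivity analysis this abstract describes, the sketch below computes resting-state functional connectivity as pairwise Pearson correlations between band-pass-filtered HbO channel time courses. The sampling rate, filter band, and channel count are assumptions for illustration, not values taken from the study.

```python
# Minimal sketch: resting-state functional connectivity from HbO time courses.
# The sampling rate, band limits, and channel layout are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(data, fs, low=0.01, high=0.08, order=3):
    """Band-pass filter each channel (rows = channels, columns = samples)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, data, axis=1)

def rsfc_matrix(hbo, fs=10.0):
    """Pairwise Pearson correlations between filtered HbO channels."""
    filtered = bandpass(hbo, fs)
    return np.corrcoef(filtered)

# Example with synthetic data: 20 channels, 5 minutes recorded at 10 Hz.
rng = np.random.default_rng(0)
hbo = rng.standard_normal((20, 3000))
fc = rsfc_matrix(hbo)
print(fc.shape)  # (20, 20) connectivity matrix
```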
Renfroe, Jenna B; Turner, Travis H; Hinson, Vanessa K
2017-02-01
The Judgment of Line Orientation (JOLO) test is widely used in assessing visuospatial deficits in Parkinson's disease (PD). The Neuropsychological Assessment Battery (NAB) offers the Visual Discrimination test, with age and education correction, parallel forms, and a co-normed standardization sample for comparisons within and between domains. However, the NAB Visual Discrimination test has not been validated in PD and may not measure the same construct as the JOLO. A heterogeneous sample of 47 PD patients completed the JOLO and NAB Visual Discrimination within a broader neuropsychological evaluation. Pearson correlations assessed relationships between JOLO and NAB Visual Discrimination performances. Raw and demographically corrected scores from the JOLO and Visual Discrimination were only weakly correlated. The NAB Visual Discrimination subtest was moderately correlated with overall cognitive functioning, whereas the JOLO was not. Despite its apparent virtues, the results do not support NAB Visual Discrimination as an alternative to the JOLO in assessing visuospatial functioning in PD. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Processing Resources in Attention, Dual Task Performance, and Workload Assessment.
1981-07-01
At some levels of processing, discrete attention switching is clearly an identifiable phenomenon (LaBerge, Van Gelder, & Yellott, 1971; Kristofferson, 1967).
Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru
2014-01-01
Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403
Hay-McCutcheon, Marcia J; Peterson, Nathaniel R; Pisoni, David B; Kirk, Karen Iler; Yang, Xin; Parton, Jason
The purpose of this study was to evaluate performance on two challenging listening tasks, talker and regional accent discrimination, and to assess variables that could have affected the outcomes. A prospective study using 35 adults with one cochlear implant (CI) or a CI and a contralateral hearing aid (bimodal hearing) was conducted. Adults completed talker and regional accent discrimination tasks. Two-alternative forced-choice tasks were used to assess talker and accent discrimination in a group of adults who ranged in age from 30 to 81 years. A large amount of performance variability was observed across listeners for both discrimination tasks. Three listeners successfully discriminated on both listening tasks, 14 participants successfully completed one discrimination task, and 18 participants were not able to discriminate between talkers on either task. Some adults who used bimodal hearing benefitted from the addition of acoustic cues provided through a hearing aid (HA), but for others the HA did not help with discrimination. Acoustic speech feature analysis of the test signals indicated that both the talker speaking rate and the fundamental frequency (F0) helped with talker discrimination. For accent discrimination, findings suggested that access to more salient spectral cues was important for better discrimination performance. The ability to perform challenging discrimination tasks successfully likely involves a number of complex interactions between auditory and non-auditory pre- and post-implant factors. To understand why some adults with CIs perform similarly to adults with normal hearing and others experience difficulty discriminating between talkers, further research will be required with larger populations of adults who use unilateral CIs, bilateral CIs and bimodal hearing. Copyright © 2018 Elsevier Inc. All rights reserved.
The mechanisms underlying the ASD advantage in visual search
Kaldy, Zsuzsa; Giserman, Ivy; Carter, Alice S.; Blaser, Erik
2013-01-01
A number of studies have demonstrated that individuals with Autism Spectrum Disorders (ASD) are faster or more successful than typically developing control participants at various visual-attentional tasks (for reviews, see Dakin & Frith, 2005; Simmons, et al., 2009). This “ASD advantage” was first identified in the domain of visual search by Plaisted and colleagues (Plaisted, O’Riordan, & Baron-Cohen, 1998). Here we survey the findings of visual search studies from the past 15 years that contrasted the performance of individuals with and without ASD. Although there are some minor caveats, the overall consensus is that - across development and a broad range of symptom severity - individuals with ASD reliably outperform controls on visual search. The etiology of the ASD advantage has not been formally specified, but has been commonly attributed to ‘enhanced perceptual discrimination’, a superior ability to visually discriminate between targets and distractors in such tasks (e.g. O’Riordan, 2000). As well, there is considerable evidence for impairments of the attentional network in ASD (for a review, see Keehn, Muller, & Townsend, 2013). We discuss some recent results from our laboratory that support an attentional, rather than perceptual explanation for the ASD advantage in visual search. We speculate that this new conceptualization may offer a better understanding of some of the behavioral symptoms associated with ASD, such as over-focusing and restricted interests. PMID:24091470
Parkington, Karisa B; Clements, Rebecca J; Landry, Oriane; Chouinard, Philippe A
2015-10-01
We examined how performance on an associative learning task changes in a sample of undergraduate students as a function of their autism-spectrum quotient (AQ) score. The participants, without any prior knowledge of the Japanese language, learned to associate hiragana characters with button responses. In the novel condition, 50 participants learned visual-motor associations without any prior exposure to the stimuli's visual attributes. In the familiar condition, a different set of 50 participants completed a session in which they first became familiar with the stimuli's visual appearance before completing the visual-motor association learning task. Participants with higher AQ scores had a clear advantage in the novel condition; the amount of training required to reach the learning criterion correlated negatively with AQ. In contrast, participants with lower AQ scores had a clear advantage in the familiar condition; the amount of training required to reach the learning criterion correlated positively with AQ. An examination of how each of the AQ subscales correlated with these learning patterns revealed that abilities in visual discrimination, which is known to depend on the visual ventral-stream system, may have afforded an advantage in the novel condition for the participants with the higher AQ scores, whereas abilities in attention switching, which are known to require mechanisms in the prefrontal cortex, may have afforded an advantage in the familiar condition for the participants with the lower AQ scores.
Toward a hybrid brain-computer interface based on imagined movement and visual attention
NASA Astrophysics Data System (ADS)
Allison, B. Z.; Brunner, C.; Kaiser, V.; Müller-Putz, G. R.; Neuper, C.; Pfurtscheller, G.
2010-04-01
Brain-computer interface (BCI) systems do not work for all users. This article introduces a novel combination of tasks that could inspire BCI systems that are more accurate than conventional BCIs, especially for users who cannot attain accuracy adequate for effective communication. Subjects performed tasks typically used in two BCI approaches, namely event-related desynchronization (ERD) and steady state visual evoked potential (SSVEP), both individually and in a 'hybrid' condition that combines both tasks. Electroencephalographic (EEG) data were recorded across three conditions. Subjects imagined moving the left or right hand (ERD), focused on one of the two oscillating visual stimuli (SSVEP), and then simultaneously performed both tasks. Accuracy and subjective measures were assessed. Offline analyses suggested that half of the subjects did not produce brain patterns that could be accurately discriminated in response to at least one of the two tasks. If these subjects produced comparable EEG patterns when trying to use a BCI, these subjects would not be able to communicate effectively because the BCI would make too many errors. Results also showed that switching to a different task used in BCIs could improve accuracy in some of these users. Switching to a hybrid approach eliminated this problem completely, and subjects generally did not consider the hybrid condition more difficult. Results validate this hybrid approach and suggest that subjects who cannot use a BCI should consider switching to a different BCI approach, especially a hybrid BCI. Subjects proficient with both approaches might combine them to increase information throughput by improving accuracy, reducing selection time, and/or increasing the number of possible commands.
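For readers unfamiliar with how ERD and SSVEP responses are typically turned into classifier features, the hypothetical sketch below concatenates band-power features from the mu band and from assumed flicker frequencies and feeds them to a linear discriminant classifier. The channel count, frequencies, window length, and classifier choice are illustrative assumptions, not the pipeline used in this article.

```python
# Hypothetical sketch of a hybrid ERD + SSVEP feature pipeline (not the authors' code).
# Frequencies, bands, and the LDA classifier are assumptions for illustration.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def band_power(epoch, fs, lo, hi):
    """Mean PSD in [lo, hi] Hz for a channels x samples epoch."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs * 2, axis=-1)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[..., mask].mean(axis=-1)

def hybrid_features(epoch, fs=250):
    """Concatenate ERD (mu-band power) and SSVEP (flicker-frequency power) features."""
    erd = band_power(epoch, fs, 8, 13)  # one value per channel
    ssvep = np.concatenate([band_power(epoch, fs, f - 0.5, f + 0.5)
                            for f in (8.0, 13.0)])  # assumed flicker frequencies
    return np.concatenate([erd, ssvep])

# Synthetic example: 40 epochs, 2 channels, 2 s at 250 Hz.
rng = np.random.default_rng(1)
X = np.array([hybrid_features(rng.standard_normal((2, 500))) for _ in range(40)])
y = rng.integers(0, 2, size=40)  # left vs. right class labels
clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.score(X, y))
```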
Einstein, Michael C; Polack, Pierre-Olivier; Tran, Duy T; Golshani, Peyman
2017-05-17
Low-frequency membrane potential (Vm) oscillations were once thought to only occur in sleeping and anesthetized states. Recently, low-frequency Vm oscillations have been described in inactive awake animals, but it is unclear whether they shape sensory processing in neurons and whether they occur during active awake behavioral states. To answer these questions, we performed two-photon guided whole-cell Vm recordings from primary visual cortex layer 2/3 excitatory and inhibitory neurons in awake mice during passive visual stimulation and performance of visual and auditory discrimination tasks. We recorded stereotyped 3-5 Hz Vm oscillations in which the Vm baseline hyperpolarized as the Vm underwent high-amplitude rhythmic fluctuations lasting 1-2 s. When 3-5 Hz Vm oscillations coincided with visual cues, excitatory neuron responses to preferred cues were significantly reduced. Despite this disruption to sensory processing, visual cues were critical for evoking 3-5 Hz Vm oscillations when animals performed discrimination tasks and passively viewed drifting grating stimuli. Using pupillometry and animal locomotive speed as indicators of arousal, we found that 3-5 Hz oscillations were not restricted to unaroused states and that they occurred equally in aroused and unaroused states. Therefore, low-frequency Vm oscillations play a role in shaping sensory processing in visual cortical neurons, even during active wakefulness and decision making. SIGNIFICANCE STATEMENT A neuron's membrane potential (Vm) strongly shapes how information is processed in sensory cortices of awake animals. Yet, very little is known about how low-frequency Vm oscillations influence sensory processing and whether they occur in aroused awake animals. By performing two-photon guided whole-cell recordings from layer 2/3 excitatory and inhibitory neurons in the visual cortex of awake behaving animals, we found visually evoked stereotyped 3-5 Hz Vm oscillations that disrupt excitatory responsiveness to visual stimuli. Moreover, these oscillations occurred when animals were in high and low arousal states as measured by animal speed and pupillometry. These findings show, for the first time, that low-frequency Vm oscillations can significantly modulate sensory signal processing, even in awake active animals. Copyright © 2017 the authors 0270-6474/17/375084-15$15.00/0.
Bertone, Armando; Mottron, Laurent; Jelenic, Patricia; Faubert, Jocelyn
2005-10-01
Visuo-perceptual processing in autism is characterized by intact or enhanced performance on static spatial tasks and inferior performance on dynamic tasks, suggesting a deficit of dorsal visual stream processing in autism. However, previous findings by Bertone et al. indicate that neuro-integrative mechanisms used to detect complex motion, rather than motion perception per se, may be impaired in autism. We present here the first demonstration of concurrent enhanced and decreased performance in autism on the same visuo-spatial static task, wherein the only factor dichotomizing performance was the neural complexity required to discriminate grating orientation. The ability of persons with autism was found to be superior for identifying the orientation of simple, luminance-defined (or first-order) gratings but inferior for complex, texture-defined (or second-order) gratings. Using a flicker contrast sensitivity task, we demonstrated that this finding is probably not due to abnormal information processing at a sub-cortical level (magnocellular and parvocellular functioning). Together, these findings are interpreted as a clear indication of altered low-level perceptual information processing in autism, and confirm that the deficits and assets observed in autistic visual perception are contingent on the complexity of the neural network required to process a given type of visual stimulus. We suggest that atypical neural connectivity, resulting in enhanced lateral inhibition, may account for both enhanced and decreased low-level information processing in autism.
High contrast sensitivity for visually guided flight control in bumblebees.
Chakravarthi, Aravin; Kelber, Almut; Baird, Emily; Dacke, Marie
2017-12-01
Many insects rely on vision to find food, to return to their nest and to carefully control their flight between these two locations. The amount of information available to support these tasks is, in part, dictated by the spatial resolution and contrast sensitivity of their visual systems. Here, we investigate the absolute limits of these visual properties for visually guided position and speed control in Bombus terrestris. Our results indicate that the limit of spatial vision in the translational motion detection system of B. terrestris lies at 0.21 cycles deg⁻¹ with a peak contrast sensitivity of at least 33. In light of earlier findings, these results indicate that bumblebees have higher contrast sensitivity in the motion detection system underlying position control than in their object discrimination system. This suggests that bumblebees, and most likely also other insects, have different visual thresholds depending on the behavioral context.
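The two quantities reported here, cycles per degree and contrast sensitivity, follow from simple geometry and the reciprocal of the contrast threshold. The helper functions below illustrate the arithmetic with made-up viewing parameters, not the experimental geometry used in the study.

```python
# Back-of-the-envelope helpers for the reported quantities: spatial frequency in
# cycles per degree from a grating period and viewing distance, and contrast
# sensitivity as the reciprocal of the Michelson contrast threshold.
# The example numbers are illustrative assumptions, not the study's setup.
import math

def cycles_per_degree(period_cm, distance_cm):
    """Spatial frequency of a grating seen from a given distance."""
    degrees_per_cycle = math.degrees(2 * math.atan(period_cm / (2 * distance_cm)))
    return 1.0 / degrees_per_cycle

def contrast_sensitivity(threshold_contrast):
    """Sensitivity is the inverse of the lowest detectable Michelson contrast."""
    return 1.0 / threshold_contrast

print(cycles_per_degree(period_cm=2.0, distance_cm=25.0))  # ~0.22 cycles per degree
print(contrast_sensitivity(0.03))                          # ~33
```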
Perceptual learning modifies the functional specializations of visual cortical areas.
Chen, Nihong; Cai, Peng; Zhou, Tiangang; Thompson, Benjamin; Fang, Fang
2016-05-17
Training can improve performance of perceptual tasks. This phenomenon, known as perceptual learning, is strongest for the trained task and stimulus, leading to a widely accepted assumption that the associated neuronal plasticity is restricted to brain circuits that mediate performance of the trained task. Nevertheless, learning does transfer to other tasks and stimuli, implying the presence of more widespread plasticity. Here, we trained human subjects to discriminate the direction of coherent motion stimuli. The behavioral learning effect substantially transferred to noisy motion stimuli. We used transcranial magnetic stimulation (TMS) and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms underlying the transfer of learning. The TMS experiment revealed dissociable, causal contributions of V3A (one of the visual areas in the extrastriate visual cortex) and MT+ (middle temporal/medial superior temporal cortex) to coherent and noisy motion processing. Surprisingly, the contribution of MT+ to noisy motion processing was replaced by V3A after perceptual training. The fMRI experiment complemented and corroborated the TMS finding. Multivariate pattern analysis showed that, before training, among visual cortical areas, coherent and noisy motion was decoded most accurately in V3A and MT+, respectively. After training, both kinds of motion were decoded most accurately in V3A. Our findings demonstrate that the effects of perceptual learning extend far beyond the retuning of specific neural populations for the trained stimuli. Learning could dramatically modify the inherent functional specializations of visual cortical areas and dynamically reweight their contributions to perceptual decisions based on their representational qualities. These neural changes might serve as the neural substrate for the transfer of perceptual learning.
Perceptual Learning in Children With Infantile Nystagmus: Effects on Visual Performance.
Huurneman, Bianca; Boonstra, F Nienke; Goossens, Jeroen
2016-08-01
To evaluate whether computerized training with a crowded or uncrowded letter-discrimination task reduces visual impairment (VI) in 6- to 11-year-old children with infantile nystagmus (IN) who suffer from increased foveal crowding, reduced visual acuity, and reduced stereopsis. Thirty-six children with IN were included. Eighteen had idiopathic IN and 18 had oculocutaneous albinism. These children were divided in two training groups matched on age and diagnosis: a crowded training group (n = 18) and an uncrowded training group (n = 18). Training occurred two times per week during 5 weeks (3500 trials per training). Eleven age-matched children with normal vision were included to assess baseline differences in task performance and test-retest learning. Main outcome measures were task-specific performance, distance and near visual acuity (DVA and NVA), intensity and extent of (foveal) crowding at 5 m and 40 cm, and stereopsis. Training resulted in task-specific improvements. Both training groups also showed uncrowded and crowded DVA improvements (0.10 ± 0.02 and 0.11 ± 0.02 logMAR) and improved stereopsis (670 ± 249″). Crowded NVA improved only in the crowded training group (0.15 ± 0.02 logMAR), which was also the only group showing a reduction in near crowding intensity (0.08 ± 0.03 logMAR). Effects were not due to test-retest learning. Perceptual learning with or without distractors reduces the extent of crowding and improves visual acuity in children with IN. Training with distractors improves near vision more than training with single optotypes. Perceptual learning also transfers to DVA and NVA under uncrowded and crowded conditions and even stereopsis. Learning curves indicated that improvements may be larger after longer training.
Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar
2018-01-01
Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and it was extracted based on the frame difference with Sauvola local adaptive thresholding algorithms. The spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features can be used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. This is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm was applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target under appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and computationally efficient compared to state-of-the-art methods. PMID:29438421
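A minimal sketch of the two saliency stages named in this abstract, frame differencing with Sauvola adaptive thresholding for temporal saliency and SLIC superpixels for the spatial stage, is shown below using scikit-image. The window size, segment count, and synthetic frames are illustrative assumptions rather than the paper's settings.

```python
# Rough sketch of the temporal-saliency step described above (frame difference +
# Sauvola adaptive thresholding), with SLIC superpixels for the spatial stage.
# Window size, segment count, and the synthetic frames are illustrative assumptions.
import numpy as np
from skimage.filters import threshold_sauvola
from skimage.segmentation import slic

def temporal_saliency(prev_frame, curr_frame, window_size=25):
    """Binary map of moving regions from a grayscale frame difference."""
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    thresh = threshold_sauvola(diff, window_size=window_size)
    return diff > thresh

def spatial_segments(rgb_frame, n_segments=300):
    """SLIC superpixels over which spatial saliency features would be computed."""
    return slic(rgb_frame, n_segments=n_segments, start_label=0)

# Synthetic example: two 120x160 grayscale frames and one RGB frame.
rng = np.random.default_rng(2)
prev_f, curr_f = rng.random((120, 160)), rng.random((120, 160))
moving = temporal_saliency(prev_f, curr_f)
labels = spatial_segments(rng.random((120, 160, 3)))
print(moving.mean(), labels.max() + 1)
```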
The contribution of disengagement to temporal discriminability.
Shipstead, Zach; Nespodzany, Ashley
2018-05-01
The present study examines the idea that time-based forgetting of outdated information can lead to better memory of currently relevant information. This was done using the visual arrays task, along with a between-subjects manipulation of both the retention interval (1 s vs. 4 s) and the time between two trials (1 s vs. 4 s). Consistent with prior work [Shipstead, Z., & Engle, R. W. (2013). Interference within the focus of attention: Working memory tasks reflect more than temporary maintenance. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39, 277-289; Experiment 1], longer retention intervals did not lead to diminished memory of currently relevant information. However, we did find that longer periods of time between two trials improved memory for currently relevant information. This replicates findings that indicate proactive interference affects visual arrays performance and extends previous findings to show that reduction of proactive interference can occur in a time-dependent manner.
Motivation, affect, and hemispheric asymmetry: power versus affiliation.
Kuhl, Julius; Kazén, Miguel
2008-08-01
In 4 experiments, the authors examined to what extent information related to different social needs (i.e., power vs. affiliation) is associated with hemispheric laterality. Response latencies to a lateralized dot-probe task following lateralized pictures or verbal labels that were associated with positive or negative episodes related to power, affiliation, or achievement revealed clear-cut laterality effects. These effects were a function of need content rather than of valence: Power-related stimuli were associated with right visual field (left hemisphere) superiority, whereas affiliation-related stimuli were associated with left visual field (right hemisphere) superiority. Additional results demonstrated that in contrast to power, affiliation primes were associated with better discrimination between coherent word triads (e.g., goat, pass, and green, all related to mountain) and noncoherent triads, a remote associate task known to activate areas of the right hemisphere. (c) 2008 APA, all rights reserved
Sczesny-Kaiser, Matthias; Beckhaus, Katharina; Dinse, Hubert R; Schwenkreis, Peter; Tegenthoff, Martin; Höffken, Oliver
2016-01-01
Studies on noninvasive motor cortex stimulation and motor learning demonstrated cortical excitability as a marker for a learning effect. Transcranial direct current stimulation (tDCS) is a non-invasive tool to modulate cortical excitability. It is as yet unknown how tDCS-induced excitability changes and perceptual learning in visual cortex correlate. Our study aimed to examine the influence of tDCS on visual perceptual learning in healthy humans. Additionally, we measured excitability in primary visual cortex (V1). We hypothesized that anodal tDCS would improve and cathodal tDCS would have minor or no effects on visual learning. Anodal, cathodal or sham tDCS were applied over V1 in a randomized, double-blinded design over four consecutive days (n = 30). During 20 min of tDCS, subjects had to learn a visual orientation-discrimination task (ODT). Excitability parameters were measured by analyzing paired-stimulation behavior of visual-evoked potentials (ps-VEP) and by measuring phosphene thresholds (PTs) before and after the stimulation period of 4 days. Compared with sham-tDCS, anodal tDCS led to an improvement of visual discrimination learning (p < 0.003). We found reduced PTs and increased ps-VEP ratios indicating increased cortical excitability after anodal tDCS (PT: p = 0.002, ps-VEP: p = 0.003). Correlation analysis within the anodal tDCS group revealed no significant correlation between PTs and learning effect. For cathodal tDCS, no significant effects on learning or on excitability could be seen. Our results showed that anodal tDCS over V1 resulted in improved visual perceptual learning and increased cortical excitability. tDCS is a promising tool to alter V1 excitability and, hence, perceptual visual learning.
NASA Astrophysics Data System (ADS)
Rohaeti, Eti; Rafi, Mohamad; Syafitri, Utami Dyah; Heryanto, Rudi
2015-02-01
Turmeric (Curcuma longa), java turmeric (Curcuma xanthorrhiza) and cassumunar ginger (Zingiber cassumunar) are widely used in traditional Indonesian medicines (jamu). Their rhizomes are similar in color and they share some uses, so it is possible to substitute one for the other. The identification and discrimination of these closely related plants is a crucial task to ensure the quality of the raw materials. Therefore, a rapid, simple and accurate analytical method for discriminating these species using Fourier transform infrared spectroscopy (FTIR) combined with chemometric methods was developed. FTIR spectra were acquired in the mid-IR region (4000-400 cm⁻¹). Standard normal variate and first- and second-order derivative pre-processing of the spectral data were compared. Principal component analysis (PCA) and canonical variate analysis (CVA) were used for the classification of the three species. Samples could be discriminated by visual analysis of the FTIR spectra using their marker bands. Discrimination of the three species was also possible through the combination of the pre-processed FTIR spectra with PCA and CVA, in which CVA gave clearer discrimination. The developed method could therefore be used for the identification and discrimination of the three closely related plant species.
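As a sketch of the preprocessing and unsupervised classification steps mentioned above, the snippet below applies standard normal variate correction to synthetic spectra and projects them onto two principal components with scikit-learn. The spectral dimensions and component count are assumptions, and CVA is omitted for brevity.

```python
# Illustrative sketch of the preprocessing + PCA step for FTIR spectra
# (standard normal variate followed by principal component analysis).
# The spectra here are synthetic; dimensions and component count are assumptions.
import numpy as np
from sklearn.decomposition import PCA

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (row) individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Synthetic example: 30 samples x 1868 wavenumber points (4000-400 cm^-1 region).
rng = np.random.default_rng(3)
spectra = rng.random((30, 1868))
scores = PCA(n_components=2).fit_transform(snv(spectra))
print(scores.shape)  # (30, 2) scores that could be plotted to inspect species clusters
```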
Maljaars, J P W; Noens, I L J; Scholte, E M; Verpoorten, R A W; van Berckelaer-Onnes, I A
2011-01-01
The ComFor study has indicated that individuals with intellectual disability (ID) and autism spectrum disorder (ASD) show enhanced visual local processing compared with individuals with ID only. Items of the ComFor with meaningless materials provided the best discrimination between the two samples. These results can be explained by the weak central coherence account. The main focus of the present study is to examine whether enhanced visual perception is also present in low-functioning deaf individuals with and without ASD compared with individuals with ID, and to evaluate the underlying cognitive style in deaf and hearing individuals with ASD. Different sorting tasks (selected from the ComFor) were administered from four subsamples: (1) individuals with ID (n = 68); (2) individuals with ID and ASD (n = 72); (3) individuals with ID and deafness (n = 22); and (4) individuals with ID, ASD and deafness (n = 15). Differences in performance on sorting tasks with meaningful and meaningless materials between the four subgroups were analysed. Age and level of functioning were taken into account. Analyses of covariance revealed that results of deaf individuals with ID and ASD are in line with the results of hearing individuals with ID and ASD. Both groups showed enhanced visual perception, especially on meaningless sorting tasks, when compared with hearing individuals with ID, but not compared with deaf individuals with ID. In ASD either with or without deafness, enhanced visual perception for meaningless information can be understood within the framework of the central coherence theory, whereas in deafness, enhancement in visual perception might be due to a more generally enhanced visual perception as a result of auditory deprivation. © 2010 The Authors. Journal of Intellectual Disability Research © 2010 Blackwell Publishing Ltd.
Baymann, Ulrike; Langbein, Jan; Siebert, Katrin; Nürnberg, Gerd; Manteuffel, Gerhard; Mohr, Elmar
2007-01-01
The influence of social rank and social environment on visual discrimination learning in small groups of Nigerian dwarf goats (Capra hircus, n = 79) was studied using a computer-controlled learning device integrated into the animals' home pen. The experiment was divided into three sections (LE1, LE1u, LE2; each 14 d). In LE1 the goats learned a discrimination task in a socially stable environment. In LE1u the animals were mixed, relocated to another pen, and given the same task as in LE1. In LE2 the animals were mixed and relocated again and given a new discrimination task. We used drinking water as the primary reinforcer. The rank category of each goat (alpha, omega or middle-ranking) was analysed for each section of the experiment. Rank category had an influence on daily learning success (percentage of successful trials per day) only in LE1u. Daily learning success decreased after mixing and relocation of the animals in LE1u and LE2 compared to LE1. That resulted in an undersupply of drinking water on the first day of both these tasks. We discuss social stress induced by agonistic interactions after mixing as a reason for this decline. The absolute learning performance (trials to reach the learning criterion) of the omega animals was lower in LE2 than for the other rank categories, and was also lower in LE2 than in LE1. For future application of similar automated learning devices in animal husbandry, we recommend against combining management routines such as mixing and relocation with changes in the learning task, because of the negative effects on learning performance, particularly for the omega animals.
Spatial Frequency Discrimination: Effects of Age, Reward, and Practice.
van den Boomen, Carlijn; Peters, Judith Carolien
2017-01-01
Social interaction starts with perception of the world around you. This study investigated two fundamental issues regarding the development of discrimination of higher spatial frequencies, which are important building blocks of perception. Firstly, it mapped the typical developmental trajectory of higher spatial frequency discrimination. Secondly, it developed and validated a novel design that could be applied to improve atypically developed vision. Specifically, this study examined the effect of age and reward on task performance, practice effects, and motivation (i.e., number of trials completed) in a higher spatial frequency (reference frequency: 6 cycles per degree) discrimination task. We measured discrimination thresholds in children aged between 7 and 12 years and adults (N = 135). Reward was manipulated by presenting either positive reinforcement or punishment. Results showed a decrease in discrimination thresholds with age, thus revealing that higher spatial frequency discrimination continues to develop after 12 years of age. This development continues longer than previously shown for discrimination of lower spatial frequencies. Moreover, thresholds decreased during the run, indicating that discrimination abilities improved. Reward did not affect performance or improvement. However, in an additional group of 5-6 year-olds (N = 28), punishments resulted in the completion of fewer trials compared to reinforcements. In both reward conditions children aged 5-6 years completed only a fourth or half of the run (64 to 128 out of 254 trials) and were not motivated to continue. The design thus needs further adaptation before it can be applied to this age group. Children aged 7-12 years and adults completed the run, suggesting that the design is successful and motivating for children aged 7-12 years. This study thus presents developmental differences in higher spatial frequency discrimination thresholds. Furthermore, it presents a design that can be used in future developmental studies that require multiple stimulus presentations such as visual perceptual learning. PMID:28135272
Odour discrimination and identification are improved in early blindness.
Cuevas, Isabel; Plaza, Paula; Rombaux, Philippe; De Volder, Anne G; Renier, Laurent
2009-12-01
Previous studies showed that early blind humans develop superior abilities in the use of their remaining senses, hypothetically due to a functional reorganization of the deprived visual brain areas. While auditory and tactile functions have long been investigated, little is known about the effects of early visual deprivation on olfactory processing. However, blind humans make extensive use of olfactory information in their daily life. Here we investigated olfactory discrimination and identification abilities in early blind subjects and age-matched sighted controls. Three levels of cuing were used in the identification task, i.e., free identification (no cue), categorization (semantic cues) and multiple choice (semantic and phonological cues). Early blind subjects significantly outperformed the controls in odour discrimination, free identification and categorization. In addition, the largest group difference was observed in the free-identification condition as compared to the categorization and multiple-choice conditions. This indicates that better access to semantic information from odour perception accounted for part of the improved olfactory performance in odour identification in the blind. We conclude that early blind subjects have both improved perceptual abilities and better access to the information stored in semantic memory than sighted subjects.
Effective real-time vehicle tracking using discriminative sparse coding on local patches
NASA Astrophysics Data System (ADS)
Chen, XiangJun; Ye, Feiyue; Ruan, Yaduan; Chen, Qimei
2016-01-01
A visual tracking framework that provides an object detector and tracker, focused on effective and efficient visual tracking in surveillance for real-world intelligent transport system applications, is proposed. The framework casts the tracking task as problems of object detection, feature representation, and classification, which differs from appearance model-matching approaches. Through a feature representation of discriminative sparse coding on local patches, called DSCLP, which trains a dictionary on local clustered patches sampled from both positive and negative datasets, the discriminative power and robustness have been improved remarkably, which makes our method more robust to complex realistic settings with all kinds of degraded image quality. Moreover, by catching objects through one-time background subtraction, along with offline dictionary training, computation time is dramatically reduced, which enables our framework to achieve real-time tracking performance even in a high-definition sequence with heavy traffic. Experimental results show that our work outperforms some state-of-the-art methods in terms of speed, accuracy, and robustness and exhibits increased robustness in a complex real-world scenario with degraded image quality caused by vehicle occlusion, image blur from rain or fog, and changes in viewpoint or scale.
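The following is a hypothetical sketch, not the authors' DSCLP code, of how sparse coding on local patches can serve as a discriminative feature step: a dictionary is learned on patches pooled from positive and negative examples, each image is described by its mean sparse code, and a linear classifier separates the two classes. Patch size, dictionary size, and the classifier are assumptions.

```python
# Hypothetical sketch of "sparse coding on local patches" as a feature step.
# Patch size, dictionary size, and the linear SVM are illustrative assumptions.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC
from skimage.util import view_as_windows

def extract_patches(image, patch=8, step=8):
    """Non-overlapping local patches, flattened to vectors."""
    windows = view_as_windows(image, (patch, patch), step=step)
    return windows.reshape(-1, patch * patch)

rng = np.random.default_rng(4)
pos = [rng.random((32, 32)) for _ in range(10)]  # stand-ins for vehicle crops
neg = [rng.random((32, 32)) for _ in range(10)]  # stand-ins for background crops

# Learn one dictionary on patches pooled from both classes.
patches = np.vstack([extract_patches(im) for im in pos + neg])
dico = MiniBatchDictionaryLearning(n_components=64, transform_algorithm="lasso_lars",
                                   transform_alpha=0.1, random_state=0).fit(patches)

def encode(image):
    """Image descriptor: mean sparse code over its local patches."""
    return dico.transform(extract_patches(image)).mean(axis=0)

X = np.array([encode(im) for im in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))
clf = LinearSVC().fit(X, y)
print(clf.score(X, y))
```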
Artificial faces are harder to remember
Balas, Benjamin; Pacella, Jonathan
2015-01-01
Observers interact with artificial faces in a range of different settings and in many cases must remember and identify computer-generated faces. In general, however, most adults have heavily biased experience favoring real faces over synthetic faces. It is well known that face recognition abilities are affected by experience such that faces belonging to “out-groups” defined by race or age are more poorly remembered and harder to discriminate from one another than faces belonging to the “in-group.” Here, we examine the extent to which artificial faces form an “out-group” in this sense when other perceptual categories are matched. We rendered synthetic faces using photographs of real human faces and compared performance in a memory task and a discrimination task across real and artificial versions of the same faces. We found that real faces were easier to remember, but only slightly more discriminable than artificial faces. Artificial faces were also equally susceptible to the well-known face inversion effect, suggesting that while these patterns are still processed by the human visual system in a face-like manner, artificial appearance does compromise the efficiency of face processing. PMID:26195852
Tibber, Marc S; Greenwood, John A; Dakin, Steven C
2012-06-04
While observers are adept at judging the density of elements (e.g., in a random-dot image), it has recently been proposed that they also have an independent visual sense of number. To test the independence of number and density discrimination, we examined the effects of manipulating stimulus structure (patch size, element size, contrast, and contrast-polarity) and available attentional resources on both judgments. Five observers made a series of two-alternative, forced-choice discriminations based on the relative numerosity/density of two simultaneously presented patches containing 16-1,024 Gaussian blobs. Mismatches of patch size and element size (across reference and test) led to bias and reduced sensitivity in both tasks, whereas manipulations of contrast and contrast-polarity had varied effects on observers, implying differing strategies. Nonetheless, the effects reported were consistent across density and number judgments, the only exception being when luminance cues were made available. Finally, density and number judgment were similarly impaired by attentional load in a dual-task experiment. These results are consistent with a common underlying metric to density and number judgments, with the caveat that additional cues may be exploited when they are available.
The challenges of developing a contrast-based video game for treatment of amblyopia
Hussain, Zahra; Astle, Andrew T.; Webb, Ben S.; McGraw, Paul V.
2014-01-01
Perceptual learning of visual tasks is emerging as a promising treatment for amblyopia, a developmental disorder of vision characterized by poor monocular visual acuity. The tasks tested thus far span the gamut from basic psychophysical discriminations to visually complex video games. One end of the spectrum offers precise control over stimulus parameters, whilst the other delivers the benefits of motivation and reward that sustain practice over long periods. Here, we combined the advantages of both approaches by developing a video game that trains contrast sensitivity, which, in psychophysical experiments, is associated with significant improvements in visual acuity in amblyopia. Target contrast was varied adaptively in the game to derive a contrast threshold for each session. We tested the game on 20 amblyopic subjects (10 children and 10 adults), who played at home using their amblyopic eye for an average of 37 sessions (approximately 11 h). Contrast thresholds from the game improved reliably for adults but not for children. However, logMAR acuity improved for both groups (mean = 1.3 lines; range = 0–3.6 lines). We present the rationale leading to the development of the game and describe the challenges of incorporating psychophysical methods into game-like settings. PMID:25404922
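The abstract states only that target contrast was "varied adaptively"; one common convention for such adaptive tracking is a 3-down-1-up staircase, sketched below, which converges near 79% correct. The rule, step size, and simulated observer are illustrative assumptions, not the game's actual procedure.

```python
# Illustrative 3-down-1-up staircase for estimating a contrast threshold.
# This specific rule and its parameters are a common convention, not necessarily
# the adaptive procedure used in the game described above.
import numpy as np

def run_staircase(respond, start=0.5, step=0.05, n_reversals=8):
    """Lower contrast after 3 correct responses, raise it after 1 error."""
    contrast, correct_streak, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(contrast):              # True if the trial was answered correctly
            correct_streak += 1
            if correct_streak == 3:
                correct_streak = 0
                if direction == +1:        # turning point: was going up, now down
                    reversals.append(contrast)
                direction = -1
                contrast = max(contrast - step, 0.01)
        else:
            correct_streak = 0
            if direction == -1:            # turning point: was going down, now up
                reversals.append(contrast)
            direction = +1
            contrast = min(contrast + step, 1.0)
    return np.mean(reversals)              # threshold estimate (~79% correct)

# Simulated observer whose accuracy falls as contrast decreases.
rng = np.random.default_rng(5)
simulated = lambda c: rng.random() < 0.5 + 0.5 * min(c / 0.2, 1.0)
print(run_staircase(simulated))
```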
Chudasama, Y; Robbins, Trevor W
2003-09-24
To examine possible heterogeneity of function within the ventral regions of the rodent frontal cortex, the present study compared the effects of excitotoxic lesions of the orbitofrontal cortex (OFC) and the infralimbic cortex (ILC) on pavlovian autoshaping and discrimination reversal learning. During the pavlovian autoshaping task, in which rats learn to approach a stimulus predictive of reward [conditional stimulus (CS+)], only the OFC group failed to acquire discriminated approach but was unimpaired when preoperatively trained. In the visual discrimination learning and reversal task, rats were initially required to discriminate a stimulus positively associated with reward. There was no effect of either OFC or ILC lesions on discrimination learning. When the stimulus-reward contingencies were reversed, both groups of animals committed more errors, but only the OFC-lesioned animals were unable to suppress the previously rewarded stimulus-reward association, committing more "stimulus perseverative" errors. In contrast, the ILC group showed a pattern of errors that was more attributable to "learning" than perseveration. These findings suggest two types of dissociation between the effects of OFC and ILC lesions: (1) OFC lesions impaired the learning processes implicated in pavlovian autoshaping but not instrumental simultaneous discrimination learning, whereas ILC lesions were unimpaired at autoshaping and their reversal learning deficit did not reflect perseveration, and (2) OFC lesions induced perseverative responding in reversal learning but did not disinhibit responses to pavlovian CS-. In contrast, the ILC lesion had no effect on response inhibitory control in either of these settings. The findings are discussed in the context of dissociable executive functions in ventral sectors of the rat prefrontal cortex.
Subjective figures and texture perception.
Zucker, S W; Cavanagh, P
1985-01-01
A texture discrimination task using the Ehrenstein illusion demonstrates that subjective brightness effects can play an essential role in early vision. The subjectively bright regions of the Ehrenstein can be organized either as discs or as stripes, depending on orientation. The accuracy of discrimination between variants of the Ehrenstein and control patterns was a direct function of the presence of the illusory brightness stripes, being high when they were present and low otherwise. It is argued that neither receptive field structure nor spatial-frequency content can adequately account for these results. We suggest that the subjective brightness illusions, rather than being a high-level, cognitive aspect of vision, are in fact the result of an early visual process.
Motivation versus aversive processing during perception.
Padmala, Srikanth; Pessoa, Luiz
2014-06-01
Reward boosts cognitive performance across many tasks. At the same time, negative affective stimuli interfere with performance when they are not relevant to the task at hand. Yet, how reward and negative stimuli impact perception and cognition has largely been investigated independently, and how reward and negative emotion simultaneously contribute to behavioral performance is currently poorly understood. The aim of the present study was to investigate how the simultaneous manipulation of positive motivational processing (here manipulated via reward) and aversive processing (here manipulated via negative picture viewing) influences behavior during a perceptual task. We tested 2 competing hypotheses about the impact of reward on negative picture viewing. On the one hand, suggestions about the automaticity of emotional processing predict that negative picture interference would be relatively immune to reward. On the other, if affective visual processing is not obligatory, as we have argued in the past, reward may counteract the deleterious effect of more potent negative pictures. We found that reward counteracted the effect of potent, negative distracters during a visual discrimination task. Thus, when sufficiently motivated, participants were able to reduce the deleterious impact of bodily mutilation stimuli.
Gutierrez, Eduardo de A; Pessoa, Valdir F; Aguiar, Ludmilla M S; Pessoa, Daniel M A
2014-11-01
Bats are known for their well-developed echolocation. However, several experiments focused on the bat visual system have shown evidence of the importance of visual cues under specific luminosity for different aspects of bat biology, including foraging behavior. This study examined the foraging abilities of five female great fruit-eating bats, Artibeus lituratus, under different light intensities. Animals were given a series of tasks to test for discrimination of a food target from an inedible background, under light levels similar to twilight illumination (18 lx), the full moon (2 lx) and complete darkness (0 lx). We found that the bats required a longer time frame to detect targets under a light intensity similar to twilight, possibly due to inhibitory effects present under a more intense light level. Additionally, bats were more efficient at detecting and capturing targets under light conditions similar to the luminosity of a full moon, suggesting that visual cues were important for target discrimination. These results demonstrate that light intensity affects foraging behavior and enables the use of visual cues for food detection in frugivorous bats. This article is part of a Special Issue entitled: Neotropical Behaviour. Copyright © 2014 Elsevier B.V. All rights reserved.
Ogourtsova, Tatiana; Archambault, Philippe S; Lamontagne, Anouk
2018-04-03
Unilateral spatial neglect (USN), a highly prevalent and disabling post-stroke deficit, severely affects functional mobility. Visual perceptual abilities (VPAs) are essential in activities involving mobility. However, whether and to what extent post-stroke USN affects VPAs and how they contribute to mobility impairments remains unclear. To estimate the extent to which VPAs in left and right visual hemispaces are (1) affected in post-stroke USN; and (2) contribute to goal-directed locomotion. Individuals with (USN+, n = 15) and without (USN-, n = 15) post-stroke USN and healthy controls (HC, n = 15) completed (1) psychophysical evaluation of contrast sensitivity, optic flow direction and coherence, and shape discrimination; and (2) goal-directed locomotion tasks. Higher discrimination thresholds were found for all VPAs in the USN+ group compared to USN- and HC groups (p < 0.05). Psychophysical tests showed high sensitivity in detecting deficits in individuals with a history of USN or with no USN on traditional assessments, and were found to be significantly correlated with goal-directed locomotor impairments. Deficits in VPAs may account for the functional difficulties experienced by individuals with post-stroke USN. Psychophysical tests used in the present study offer important advantages and can be implemented to enhance USN diagnostics and rehabilitation.
Yang, Yi; Tokita, Midori; Ishiguchi, Akira
2018-01-01
A number of studies revealed that our visual system can extract different types of summary statistics, such as the mean and variance, from sets of items. Although the extraction of such summary statistics has been studied well in isolation, the relationship between these statistics remains unclear. In this study, we explored this issue using an individual differences approach. Observers viewed illustrations of strawberries and lollypops varying in size or orientation and performed four tasks in a within-subject design, namely mean and variance discrimination tasks with size and orientation domains. We found that the performances in the mean and variance discrimination tasks were not correlated with each other and demonstrated that extractions of the mean and variance are mediated by different representation mechanisms. In addition, we tested the relationship between performances in size and orientation domains for each summary statistic (i.e. mean and variance) and examined whether each summary statistic has distinct processes across perceptual domains. The results illustrated that statistical summary representations of size and orientation may share a common mechanism for representing the mean and possibly for representing variance. Introspections for each observer performing the tasks were also examined and discussed. PMID:29399318
Summary statistics in the attentional blink.
McNair, Nicolas A; Goodbourn, Patrick T; Shone, Lauren T; Harris, Irina M
2017-01-01
We used the attentional blink (AB) paradigm to investigate the processing stage at which extraction of summary statistics from visual stimuli ("ensemble coding") occurs. Experiment 1 examined whether ensemble coding requires attentional engagement with the items in the ensemble. Participants performed two sequential tasks on each trial: gender discrimination of a single face (T1) and estimating the average emotional expression of an ensemble of four faces (or of a single face, as a control condition) as T2. Ensemble coding was affected by the AB when the tasks were separated by a short temporal lag. In Experiment 2, the order of the tasks was reversed to test whether ensemble coding requires more working-memory resources, and therefore induces a larger AB, than estimating the expression of a single face. Each condition produced a similar magnitude AB in the subsequent gender-discrimination T2 task. Experiment 3 additionally investigated whether the previous results were due to participants adopting a subsampling strategy during the ensemble-coding task. Contrary to this explanation, we found different patterns of performance in the ensemble-coding condition and a condition in which participants were instructed to focus on only a single face within an ensemble. Taken together, these findings suggest that ensemble coding emerges automatically as a result of the deployment of attentional resources across the ensemble of stimuli, prior to information being consolidated in working memory.
Development of form similarity as a Gestalt grouping principle in infancy.
Quinn, Paul C; Bhatt, Ramesh S; Brush, Diana; Grimes, Autumn; Sharpnack, Heather
2002-07-01
Given evidence demonstrating that infants 3 months of age and younger can utilize the Gestalt principle of lightness similarity to group visually presented elements into organized percepts, four experiments using the familiarization/novelty-preference procedure were conducted to determine whether infants can also organize visual pattern information in accord with the Gestalt principle of form similarity. In Experiments 1 and 2, 6- to 7-month-olds, but not 3- to 4-month-olds, presented with generalization and discrimination tasks involving arrays of X and O elements responded as if they organized the elements into columns or rows based on form similarity. Experiments 3 and 4 demonstrated that the failure of the young infants to use form similarity was not due to insufficient processing time or the inability to discriminate between the individual X and O elements. The results suggest that different Gestalt principles may become functional over different time courses of development, and that not all principles are automatically deployed in the manner originally proposed by Gestalt theorists.
Peters, Megan A K; Lau, Hakwan
2015-01-01
Many believe that humans can ‘perceive unconsciously’ – that for weak stimuli, briefly presented and masked, above-chance discrimination is possible without awareness. Interestingly, an online survey reveals that most experts in the field recognize the lack of convincing evidence for this phenomenon, and yet they persist in this belief. Using a recently developed bias-free experimental procedure for measuring subjective introspection (confidence), we found no evidence for unconscious perception; participants’ behavior matched that of a Bayesian ideal observer, even though the stimuli were visually masked. This surprising finding suggests that the thresholds for subjective awareness and objective discrimination are effectively the same: if objective task performance is above chance, there is likely conscious experience. These findings shed new light on decades-old methodological issues regarding what it takes to consider a neurobiological or behavioral effect to be 'unconscious,' and provide a platform for rigorously investigating unconscious perception in future studies. DOI: http://dx.doi.org/10.7554/eLife.09651.001 PMID:26433023
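The ideal-observer comparison reported above can be illustrated with a simple signal-detection simulation in which confidence is the posterior probability that the decision is correct. The sketch below is a construction under standard equal-variance Gaussian assumptions, not the authors' procedure; it shows why above-chance objective discrimination is expected to come with above-chance metacognitive sensitivity.

```python
# Sketch of a Bayesian ideal observer for a weak two-alternative stimulus:
# percepts are noisy samples, decisions follow the sign of the sample, and
# confidence is the posterior probability that the decision is correct.
# Whenever objective accuracy exceeds chance, confidence separates correct
# from incorrect trials. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
d_prime = 0.5             # weak, near-threshold stimulus strength
n_trials = 100_000

stimulus = rng.choice([-1, 1], size=n_trials)             # two alternatives
x = stimulus * d_prime / 2 + rng.normal(size=n_trials)    # noisy internal response
decision = np.sign(x)
correct = decision == stimulus

# Posterior probability of the chosen alternative under equal priors and
# unit-variance Gaussian likelihoods centered at +/- d'/2.
confidence = 1.0 / (1.0 + np.exp(-d_prime * np.abs(x)))

print(f"accuracy = {correct.mean():.3f}")
print(f"mean confidence | correct   = {confidence[correct].mean():.3f}")
print(f"mean confidence | incorrect = {confidence[~correct].mean():.3f}")
```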
Goard, Michael J; Pho, Gerald N; Woodson, Jonathan; Sur, Mriganka
2016-01-01
Mapping specific sensory features to future motor actions is a crucial capability of mammalian nervous systems. We investigated the role of visual (V1), posterior parietal (PPC), and frontal motor (fMC) cortices for sensorimotor mapping in mice during performance of a memory-guided visual discrimination task. Large-scale calcium imaging revealed that V1, PPC, and fMC neurons exhibited heterogeneous responses spanning all task epochs (stimulus, delay, response). Population analyses demonstrated unique encoding of stimulus identity and behavioral choice information across regions, with V1 encoding stimulus, fMC encoding choice even early in the trial, and PPC multiplexing the two variables. Optogenetic inhibition during behavior revealed that all regions were necessary during the stimulus epoch, but only fMC was required during the delay and response epochs. Stimulus identity can thus be rapidly transformed into behavioral choice, requiring V1, PPC, and fMC during the transformation period, but only fMC for maintaining the choice in memory prior to execution. DOI: http://dx.doi.org/10.7554/eLife.13764.001 PMID:27490481
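Population analyses of this kind are often implemented as cross-validated linear decoders trained separately on stimulus identity and behavioral choice within each task epoch. The following sketch runs such a decoder on simulated trial data; the activity, behavior, and decoder settings are hypothetical and do not reproduce the authors' pipeline.

```python
# Sketch of epoch-wise population decoding of stimulus identity vs. behavioral
# choice from simulated single-trial activity (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_neurons = 200, 50

stimulus = rng.integers(0, 2, n_trials)      # binary stimulus identity per trial
choice = stimulus.copy()
flip = rng.random(n_trials) < 0.2            # imperfect behavior: ~80% correct
choice[flip] = 1 - choice[flip]

# Simulated population activity for one epoch: these neurons carry stimulus
# information plus noise; a choice-coding region would be simulated analogously.
activity = stimulus[:, None] * rng.normal(1.0, 0.3, n_neurons) \
    + rng.normal(0.0, 1.0, (n_trials, n_neurons))

stim_acc = cross_val_score(LogisticRegression(max_iter=1000), activity, stimulus, cv=5).mean()
choice_acc = cross_val_score(LogisticRegression(max_iter=1000), activity, choice, cv=5).mean()
print(f"stimulus decoding: {stim_acc:.2f}, choice decoding: {choice_acc:.2f}")
```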
Psychophysical estimation of speed discrimination. II. Aging effects
Raghuram, Aparna; Lakshminarayanan, Vasudevan; Khanna, Ritu
2005-10-01
We studied the effects of aging on a speed discrimination task using a pair of first-order drifting luminance gratings. Two reference speeds of 2 and 8 deg/s were presented at stimulus durations of 500 ms and 1000 ms. The choice of stimulus parameters was determined in preliminary experiments and is described in Part I. Thresholds were estimated using a two-alternative forced-choice staircase methodology. Data were collected from 16 younger subjects (mean age 24 years) and 17 older subjects (mean age 71 years). Thresholds for speed discrimination were higher for the older age group, especially at the 500 ms stimulus duration for both the slower and faster reference speeds; this could reflect age-related differences in the temporal integration of speed. Visual acuity and contrast sensitivity did not statistically mediate the age differences in speed discrimination thresholds. Gender differences were observed in the older age group, with older women having higher thresholds.
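For readers unfamiliar with the procedure, a two-alternative forced-choice staircase adjusts the stimulus difference trial by trial and estimates the threshold from the reversal points. The sketch below simulates a 2-down/1-up run with a hypothetical observer; the psychometric function and all parameter values are illustrative and are not taken from the study.

```python
# Minimal sketch of a 2-down/1-up, two-alternative forced-choice staircase for
# a speed-discrimination threshold (converging near 70.7% correct). The
# simulated observer and all parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(3)

def p_correct(delta_speed, true_threshold=0.8, slope=0.3):
    """Hypothetical psychometric function: 0.5 at delta = 0, rising toward 1."""
    return 0.5 + 0.5 / (1.0 + np.exp(-(delta_speed - true_threshold) / slope))

delta, step = 2.0, 0.2          # speed increment (deg/s) relative to the reference
consecutive_correct, last_dir = 0, 0
reversals = []

while len(reversals) < 12:
    correct = rng.random() < p_correct(delta)
    if correct:
        consecutive_correct += 1
        if consecutive_correct == 2:        # two correct in a row -> make it harder
            consecutive_correct = 0
            if last_dir == +1:              # direction change: record a reversal
                reversals.append(delta)
            delta, last_dir = max(delta - step, 0.05), -1
    else:                                   # one error -> make it easier
        consecutive_correct = 0
        if last_dir == -1:
            reversals.append(delta)
        delta, last_dir = delta + step, +1

print(f"threshold estimate ~ {np.mean(reversals[-8:]):.2f} deg/s")
```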
Chromatic Perceptual Learning but No Category Effects without Linguistic Input.
Grandison, Alexandra; Sowden, Paul T; Drivonikou, Vicky G; Notman, Leslie A; Alexander, Iona; Davies, Ian R L
2016-01-01
Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks, but very little research has explored chromatic perceptual learning. Here, we use two low-level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific, and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher-level, lateralized target detection task previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training, but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is stimulus specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low-level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left-hemisphere processing and may require the input of higher-level linguistic coding processes in order to manifest.
Popoviç, M; Biessels, G J; Isaacson, R L; Gispen, W H
2001-08-01
Diabetes mellitus is associated with disturbances of cognitive functioning. The aim of this study was to examine cognitive functioning in diabetic rats using the 'Can test', a novel spatial/object learning and memory task that does not rely on aversive stimuli. Rats were trained to select a single rewarded can from seven cans. Mild water deprivation provided the motivation to obtain the reward (0.3 ml of water). After 5 days of baseline training, in which the rewarded can was marked by its surface and its position in an open field, the animals were divided into two groups. Diabetes was induced in one group by an intravenous injection of streptozotocin. Retention of baseline training was tested at 2-weekly intervals for 10 weeks. Next, two adapted versions of the task were used, with 4 days of training in each version. The rewarded can was a soft-drink can with coloured print. In a 'simple visual task' the soft-drink can was placed among six white cans, whereas in a 'complex visual task' it was placed among six soft-drink cans of different brands with distinct prints. In the various versions of the test, diabetic rats made fewer correct responses and more reference and working memory errors than controls. Switches between tasks and increases in task complexity accentuated these performance deficits, which may reflect an inability of diabetic rats to adapt behavioural strategies to the demands of the tasks.
Perceptual Learning Improves Adult Amblyopic Vision Through Rule-Based Cognitive Compensation
Zhang, Jun-Yun; Cong, Lin-Juan; Klein, Stanley A.; Levi, Dennis M.; Yu, Cong
2014-01-01
Purpose. We investigated whether perceptual learning in adults with amblyopia could be enabled to transfer completely to an orthogonal orientation, which would suggest that amblyopic perceptual learning results mainly from high-level cognitive compensation, rather than plasticity in the amblyopic early visual brain. Methods. Nineteen adults (mean age = 22.5 years) with anisometropic and/or strabismic amblyopia were trained following a training-plus-exposure (TPE) protocol. The amblyopic eyes practiced contrast, orientation, or Vernier discrimination at one orientation for six to eight sessions. Then the amblyopic or nonamblyopic eyes were exposed to an orthogonal orientation via practicing an irrelevant task. Training was first performed at a lower spatial frequency (SF), then at a higher SF near the cutoff frequency of the amblyopic eye. Results. Perceptual learning was initially orientation specific. However, after exposure to the orthogonal orientation, learning transferred to an orthogonal orientation completely. Reversing the exposure and training order failed to produce transfer. Initial lower SF training led to broad improvement of contrast sensitivity, and later higher SF training led to more specific improvement at high SFs. Training improved visual acuity by 1.5 to 1.6 lines (P < 0.001) in the amblyopic eyes with computerized tests and a clinical E acuity chart. It also improved stereoacuity by 53% (P < 0.001). Conclusions. The complete transfer of learning suggests that perceptual learning in amblyopia may reflect high-level learning of rules for performing a visual discrimination task. These rules are applicable to new orientations to enable learning transfer. Therefore, perceptual learning may improve amblyopic vision mainly through rule-based cognitive compensation. PMID:24550359
Wallmeier, Ludwig; Kish, Daniel; Wiegrebe, Lutz; Flanagin, Virginia L
2015-03-01
Some blind humans have developed the remarkable ability to detect and localize objects through the auditory analysis of self-generated tongue clicks. These echolocation experts show a corresponding increase in 'visual' cortex activity when listening to echo-acoustic sounds. Echolocation in real-life settings involves multiple reflections as well as active sound production, neither of which has been systematically addressed. We developed a virtualization technique that allows participants to actively perform such biosonar tasks in virtual echo-acoustic space during magnetic resonance imaging (MRI). Tongue clicks, emitted in the MRI scanner, are picked up by a microphone, convolved in real time with the binaural impulse responses of a virtual space, and presented via headphones as virtual echoes. In this manner, we investigated the brain activity during active echo-acoustic localization tasks. Our data show that, in blind echolocation experts, activations in the calcarine cortex are dramatically enhanced when a single reflector is introduced into otherwise anechoic virtual space. A pattern-classification analysis revealed that, in the blind, calcarine cortex activation patterns could discriminate left-side from right-side reflectors. This was found in both blind experts, but the effect was significant for only one of them. In sighted controls, 'visual' cortex activations were insignificant, but activation patterns in the planum temporale were sufficient to discriminate left-side from right-side reflectors. Our data suggest that blind and echolocation-trained, sighted subjects may recruit different neural substrates for the same active-echolocation task. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
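Conceptually, the virtualization reduces to convolving each emitted click with the left- and right-ear impulse responses of the simulated space and presenting the result over headphones. The offline sketch below illustrates that idea with a synthetic click and single-reflector impulse responses; it does not reproduce the real-time, low-latency implementation used in the study.

```python
# Offline sketch of virtual echo-acoustics: convolve a click with left/right
# impulse responses of a simulated space and stack the result for binaural
# playback. The click and impulse responses are synthetic placeholders.
import numpy as np
from scipy.signal import fftconvolve

fs = 48_000                                    # sampling rate (Hz)
t = np.arange(int(0.005 * fs)) / fs
click = np.sin(2 * np.pi * 3000 * t) * np.exp(-t / 0.001)   # short click-like burst

def reflector_ir(delay_s, gain, length_s=0.05):
    """Impulse response containing the direct path plus one delayed reflection."""
    ir = np.zeros(int(length_s * fs))
    ir[0] = 1.0                                # direct path (the click itself)
    ir[int(delay_s * fs)] = gain               # single echo from the reflector
    return ir

# A reflector slightly to the left: earlier, stronger echo at the left ear.
ir_left = reflector_ir(delay_s=0.0058, gain=0.4)
ir_right = reflector_ir(delay_s=0.0064, gain=0.3)

echo_left = fftconvolve(click, ir_left)
echo_right = fftconvolve(click, ir_right)
binaural = np.stack([echo_left, echo_right], axis=1)   # ready for headphone playback
print(binaural.shape)
```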
Air-Track: a real-world floating environment for active sensing in head-fixed mice.
Nashaat, Mostafa A; Oraby, Hatem; Sachdev, Robert N S; Winter, York; Larkum, Matthew E
2016-10-01
Natural behavior occurs in multiple sensory and motor modalities and in particular is dependent on sensory feedback that constantly adjusts behavior. To investigate the underlying neuronal correlates of natural behavior, it is useful to have access to state-of-the-art recording equipment (e.g., 2-photon imaging, patch recordings, etc.) that frequently requires head fixation. This limitation has been addressed with various approaches such as virtual reality/air ball or treadmill systems. However, achieving multimodal realistic behavior in these systems can be challenging. These systems are often also complex and expensive to implement. Here we present "Air-Track," an easy-to-build head-fixed behavioral environment that requires only minimal computational processing. The Air-Track is a lightweight physical maze floating on an air table that has all the properties of the "real" world, including multiple sensory modalities tightly coupled to motor actions. To test this system, we trained mice in Go/No-Go and two-alternative forced choice tasks in a plus maze. Mice chose lanes and discriminated apertures or textures by moving the Air-Track back and forth and rotating it around themselves. Mice rapidly adapted to moving the track and used visual, auditory, and tactile cues to guide them in performing the tasks. A custom-controlled camera system monitored animal location and generated data that could be used to calculate reaction times in the visual and somatosensory discrimination tasks. We conclude that the Air-Track system is ideal for eliciting natural behavior in concert with virtually any system for monitoring or manipulating brain activity. Copyright © 2016 the American Physiological Society.