Sofer, Imri; Crouzet, Sébastien M.; Serre, Thomas
2015-01-01
Observers can rapidly perform a variety of visual tasks such as categorizing a scene as open, as outdoor, or as a beach. Although we know that different tasks are typically associated with systematic differences in behavioral responses, to date, little is known about the underlying mechanisms. Here, we implemented a single integrated paradigm that links perceptual processes with categorization processes. Using a large image database of natural scenes, we trained machine-learning classifiers to derive quantitative measures of task-specific perceptual discriminability based on the distance between individual images and different categorization boundaries. We showed that the resulting discriminability measure accurately predicts variations in behavioral responses across categorization tasks and stimulus sets. We further used the model to design an experiment, which challenged previous interpretations of the so-called “superordinate advantage.” Overall, our study suggests that observed differences in behavioral responses across rapid categorization tasks reflect natural variations in perceptual discriminability. PMID:26335683
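A minimal sketch of the distance-to-boundary measure described in the abstract above, assuming a linear classifier over generic image features; the feature matrix, labels, and the use of scikit-learn's LinearSVC are illustrative placeholders, not the authors' actual pipeline.

```python
# Sketch: per-image discriminability as signed distance from a learned category boundary.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))                               # stand-in image feature vectors
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)   # stand-in category labels

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
clf.fit(X, y)

scores = clf.decision_function(X)                  # signed classifier score per image
w = clf.named_steps["linearsvc"].coef_.ravel()
distances = scores / np.linalg.norm(w)             # distance to the boundary (in scaled feature space)
print(distances[:5])                               # larger |distance| ~ more discriminable image
```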
Dyslexic Participants Show Intact Spontaneous Categorization Processes
ERIC Educational Resources Information Center
Nikolopoulos, Dimitris S.; Pothos, Emmanuel M.
2009-01-01
We examine the performance of dyslexic participants on an unsupervised categorization task against that of matched non-dyslexic control participants. Unsupervised categorization is a cognitive process critical for conceptual development. Existing research in dyslexia has emphasized perceptual tasks and supervised categorization tasks (for which…
Feature saliency and feedback information interactively impact visual category learning
Hammer, Rubi; Sloutsky, Vladimir; Grill-Spector, Kalanit
2015-01-01
Visual category learning (VCL) involves detecting which features are most relevant for categorization. VCL relies on attentional learning, which enables effectively redirecting attention to the object’s features most relevant for categorization, while ‘filtering out’ irrelevant features. When features relevant for categorization are not salient, VCL also relies on perceptual learning, which enables becoming more sensitive to subtle yet important differences between objects. Little is known about how attentional learning and perceptual learning interact when VCL relies on both processes at the same time. Here we tested this interaction. Participants performed VCL tasks in which they learned to categorize novel stimuli by detecting the feature dimension relevant for categorization. Tasks varied both in feature saliency (low-saliency tasks that required perceptual learning vs. high-saliency tasks), and in feedback information (tasks with mid-information, moderately ambiguous feedback that increased attentional load, vs. tasks with high-information non-ambiguous feedback). We found that mid-information and high-information feedback were similarly effective for VCL in high-saliency tasks. This suggests that an increased attentional load, associated with the processing of moderately ambiguous feedback, has little effect on VCL when features are salient. In low-saliency tasks, VCL relied on slower perceptual learning; but when the feedback was highly informative, participants were able to ultimately attain the same performance as during the high-saliency VCL tasks. However, VCL was significantly compromised in the low-saliency mid-information feedback task. We suggest that such low-saliency mid-information learning scenarios are characterized by a ‘cognitive loop paradox’ where two interdependent learning processes have to take place simultaneously. PMID:25745404
Perceptual Biases in Relation to Paranormal and Conspiracy Beliefs
van Elk, Michiel
2015-01-01
Previous studies have shown that one’s prior beliefs have a strong effect on perceptual decision-making and attentional processing. The present study extends these findings by investigating how individual differences in paranormal and conspiracy beliefs are related to perceptual and attentional biases. Two field studies were conducted in which visitors of a paranormal fair conducted a perceptual decision making task (i.e. the face/house categorization task; Experiment 1) or a visual attention task (i.e. the global/local processing task; Experiment 2). In the first experiment it was found that skeptics compared to believers more often incorrectly categorized ambiguous face stimuli as representing a house, indicating that disbelief rather than belief in the paranormal is driving the bias observed for the categorization of ambiguous stimuli. In the second experiment, it was found that skeptics showed a classical ‘global-to-local’ interference effect, whereas believers in conspiracy theories were characterized by a stronger ‘local-to-global interference effect’. The present study shows that individual differences in paranormal and conspiracy beliefs are associated with perceptual and attentional biases, thereby extending the growing body of work in this field indicating effects of cultural learning on basic perceptual processes. PMID:26114604
Lexical Categorization Modalities in Pre-School Children: Influence of Perceptual and Verbal Tasks
ERIC Educational Resources Information Center
Tallandini, Maria Anna; Roia, Anna
2005-01-01
This study investigates how categorical organization functions in pre-school children, focusing on the dichotomy between living and nonliving things. The variables of familiarity, frequency of word use and perceptual complexity were controlled. Sixty children aged between 4 years and 5 years 10 months were investigated. Three tasks were used: a…
Effects of Talker Variability on Perceptual Learning of Dialects
Clopper, Cynthia G.; Pisoni, David B.
2012-01-01
Two groups of listeners learned to categorize a set of unfamiliar talkers by dialect region using sentences selected from the TIMIT speech corpus. One group learned to categorize a single talker from each of six American English dialect regions. A second group learned to categorize three talkers from each dialect region. Following training, both groups were asked to categorize new talkers using the same categorization task. While the single-talker group was more accurate during initial training and test phases when familiar talkers produced the sentences, the three-talker group performed better on the generalization task with unfamiliar talkers. This cross-over effect in dialect categorization suggests that while talker variation during initial perceptual learning leads to more difficult learning of specific exemplars, exposure to intertalker variability facilitates robust perceptual learning and promotes better categorization performance of unfamiliar talkers. The results suggest that listeners encode and use acoustic-phonetic variability in speech to reliably perceive the dialect of unfamiliar talkers. PMID:15697151
Neurophysiological Evidence for Categorical Perception of Color
ERIC Educational Resources Information Center
Holmes, Amanda; Franklin, Anna; Clifford, Alexandra; Davies, Ian
2009-01-01
The aim of this investigation was to examine the time course and the relative contributions of perceptual and post-perceptual processes to categorical perception (CP) of color. A visual oddball task was used with standard and deviant stimuli from same (within-category) or different (between-category) categories, with chromatic separations for…
ERIC Educational Resources Information Center
Berger, Carole; Donnadieu, Sophie
2006-01-01
This research explores the way in which young children (5 years of age) and adults use perceptual and conceptual cues for categorizing objects processed by vision or by audition. Three experiments were carried out using forced-choice categorization tasks that allowed responses based on taxonomic relations (e.g., vehicles) or on schema category…
Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception
Bejjanki, Vikranth Rao; Clayards, Meghan; Knill, David C.; Aslin, Richard N.
2011-01-01
Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues, during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one and participants' performance is consistent with an optimal model in which environmental, within category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue-combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks. PMID:21637344
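A hedged numerical illustration of the weighting rule described above: under Gaussian assumptions, each cue's effective variance is its sensory variance plus the within-category (environmental) variance along that cue's dimension, and weights are proportional to the resulting reliabilities. The variance values below are invented for illustration, not estimates from the study.

```python
# Normative cue weights when both sensory noise and within-category variability matter.
def cue_weights(sigma2_aud, sigma2_vis, cat2_aud, cat2_vis):
    """Return (w_aud, w_vis): weights proportional to 1 / (sensory variance + category variance)."""
    r_aud = 1.0 / (sigma2_aud + cat2_aud)
    r_vis = 1.0 / (sigma2_vis + cat2_vis)
    return r_aud / (r_aud + r_vis), r_vis / (r_aud + r_vis)

# Classic reliability-only weighting is the special case of zero category variance.
print(cue_weights(1.0, 4.0, 0.0, 0.0))   # -> (0.8, 0.2)
# Large within-category variability on the auditory dimension shifts weight toward vision.
print(cue_weights(1.0, 4.0, 3.0, 0.5))   # -> roughly (0.53, 0.47)
```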
Decoding task-based attentional modulation during face categorization.
Chiu, Yu-Chin; Esterman, Michael; Han, Yuefeng; Rosen, Heather; Yantis, Steven
2011-05-01
Attention is a neurocognitive mechanism that selects task-relevant sensory or mnemonic information to achieve current behavioral goals. Attentional modulation of cortical activity has been observed when attention is directed to specific locations, features, or objects. However, little is known about how high-level categorization task set modulates perceptual representations. In the current study, observers categorized faces by gender (male vs. female) or race (Asian vs. White). Each face was perceptually ambiguous in both dimensions, such that categorization of one dimension demanded selective attention to task-relevant information within the face. We used multivoxel pattern classification to show that task-specific modulations evoke reliably distinct spatial patterns of activity within three face-selective cortical regions (right fusiform face area and bilateral occipital face areas). This result suggests that patterns of activity in these regions reflect not only stimulus-specific (i.e., faces vs. houses) responses but also task-specific (i.e., race vs. gender) attentional modulation. Furthermore, exploratory whole-brain multivoxel pattern classification (using a searchlight procedure) revealed a network of dorsal fronto-parietal regions (left middle frontal gyrus and left inferior and superior parietal lobule) that also exhibit distinct patterns for the two task sets, suggesting that these regions may represent abstract goals during high-level categorization tasks.
Perceptual, Categorical, and Affective Processing of Ambiguous Smiling Facial Expressions
ERIC Educational Resources Information Center
Calvo, Manuel G.; Fernandez-Martin, Andres; Nummenmaa, Lauri
2012-01-01
Why is a face with a smile but non-happy eyes likely to be interpreted as happy? We used blended expressions in which a smiling mouth was incongruent with the eyes (e.g., angry eyes), as well as genuine expressions with congruent eyes and mouth (e.g., both happy or angry). Tasks involved detection of a smiling mouth (perceptual), categorization of…
Perceptual advantage for category-relevant perceptual dimensions: the case of shape and motion.
Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel
2014-01-01
Category learning facilitates perception along relevant stimulus dimensions, even when tested in a discrimination task that does not require categorization. While this general phenomenon has been demonstrated previously, perceptual facilitation along dimensions has been documented by measuring different specific phenomena in different studies using different kinds of objects. Across several object domains, there is support for acquired distinctiveness, the stretching of a perceptual dimension relevant to learned categories. Studies using faces and studies using simple separable visual dimensions have also found evidence of acquired equivalence, the shrinking of a perceptual dimension irrelevant to learned categories, and categorical perception, the local stretching across the category boundary. These latter two effects are rarely observed with complex non-face objects. Failures to find these effects with complex non-face objects may have been because the dimensions tested previously were perceptually integrated. Here we tested effects of category learning with non-face objects categorized along dimensions that have been found to be processed by different areas of the brain, shape and motion. While we replicated acquired distinctiveness, we found no evidence for acquired equivalence or categorical perception.
ERIC Educational Resources Information Center
Gijsel, Martine A. R.; Ormel, Ellen A.; Hermans, Daan; Verhoeven, L.; Bosman, Anna M. T.
2011-01-01
In the present study, the development of semantic categorization and its relationship with reading was investigated across Dutch primary grade students. Three Exemplar-level tasks (Experiment 1) and two Superordinate-level tasks (Experiment 2) with different types of distracters (phonological, semantic and perceptual) were administered to assess…
ERIC Educational Resources Information Center
Harel, Assaf; Bentin, Shlomo
2009-01-01
The type of visual information needed for categorizing faces and nonface objects was investigated by manipulating spatial frequency scales available in the image during a category verification task addressing basic and subordinate levels. Spatial filtering had opposite effects on faces and airplanes that were modulated by categorization level. The…
Differences in perceptual learning transfer as a function of training task.
Green, C Shawn; Kattner, Florian; Siegel, Max H; Kersten, Daniel; Schrater, Paul R
2015-01-01
A growing body of research--including results from behavioral psychology, human structural and functional imaging, single-cell recordings in nonhuman primates, and computational modeling--suggests that perceptual learning effects are best understood as a change in the ability of higher-level integration or association areas to read out sensory information in the service of particular decisions. Work in this vein has argued that, depending on the training experience, the "rules" for this read-out can either be applicable to new contexts (thus engendering learning generalization) or can apply only to the exact training context (thus resulting in learning specificity). Here we contrast learning tasks designed to promote either stimulus-specific or stimulus-general rules. Specifically, we compare learning transfer across visual orientation following training on three different tasks: an orientation categorization task (which permits an orientation-specific learning solution), an orientation estimation task (which requires an orientation-general learning solution), and an orientation categorization task in which the relevant category boundary shifts on every trial (which lies somewhere between the two tasks above). While the simple orientation-categorization training task resulted in orientation-specific learning, the estimation and moving categorization tasks resulted in significant orientation learning generalization. The general framework tested here--that task specificity or generality can be predicted via an examination of the optimal learning solution--may be useful in building future training paradigms with certain desired outcomes.
Free classification of regional dialects of American English.
Clopper, Cynthia G; Pisoni, David B
2007-07-01
Recent studies have found that naïve listeners perform poorly in forced-choice dialect categorization tasks. However, the listeners' error patterns in these tasks reveal systematic confusions between phonologically similar dialects. In the present study, a free classification procedure was used to measure the perceptual similarity structure of regional dialect variation in the United States. In two experiments, participants listened to a set of short English sentences produced by male talkers only (Experiment 1) and by male and female talkers (Experiment 2). The listeners were instructed to group the talkers by regional dialect into as many groups as they wanted with as many talkers in each group as they wished. Multidimensional scaling analyses of the data revealed three primary dimensions of perceptual similarity (linguistic markedness, geography, and gender). In addition, a comparison of the results obtained from the free classification task to previous results using the same stimulus materials in six-alternative forced-choice categorization tasks revealed that response biases in the six-alternative task were reduced or eliminated in the free classification task. Thus, the results obtained with the free classification task in the current study provided further evidence that the underlying structure of perceptual dialect category representations reflects important linguistic and sociolinguistic factors. PMID:21423862
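A rough sketch of one standard way to analyze free classification data of this kind: convert each listener's groupings into a talker-by-talker co-occurrence matrix, average across listeners to obtain similarities, and submit the corresponding dissimilarities to multidimensional scaling. The toy groupings and the scikit-learn MDS call are assumptions for illustration, not the authors' exact procedure.

```python
# Sketch: free-classification groupings -> pairwise similarity -> 2-D MDS solution.
import numpy as np
from sklearn.manifold import MDS

n_talkers = 6
listener_groups = [            # each row: one listener's self-chosen group label per talker
    [0, 0, 1, 1, 2, 2],
    [0, 0, 0, 1, 1, 2],
    [0, 1, 1, 2, 2, 2],
]

sim = np.zeros((n_talkers, n_talkers))
for groups in listener_groups:
    g = np.asarray(groups)
    sim += (g[:, None] == g[None, :]).astype(float)   # 1 if this listener grouped the pair together
sim /= len(listener_groups)                           # proportion of listeners grouping each pair

dist = 1.0 - sim                                      # dissimilarity matrix for MDS
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dist)                      # one 2-D point per talker
print(coords)
```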
Fleischhauer, Monika; Strobel, Alexander; Diers, Kersten; Enge, Sören
2014-02-01
The Implicit Association Test (IAT) is a widely used latency-based categorization task that indirectly measures the strength of automatic associations between target and attribute concepts. So far, little is known about the perceptual and cognitive processes underlying personality IATs. Thus, the present study examined event-related potential indices during the execution of an IAT measuring neuroticism (N = 70). The IAT effect was strongly modulated by the P1 component indicating early facilitation of relevant visual input and by a P3b-like late positive component reflecting the efficacy of stimulus categorization. Both components covaried, and larger amplitudes led to faster responses. The results suggest a relationship between early perceptual and semantic processes operating at a more automatic, implicit level and later decision-related categorization of self-relevant stimuli contributing to the IAT effect. Copyright © 2013 Society for Psychophysiological Research.
Color categories and color appearance
Webster, Michael A.; Kay, Paul
2011-01-01
We examined categorical effects in color appearance in two tasks, which in part differed in the extent to which color naming was explicitly required for the response. In one, we measured the effects of color differences on perceptual grouping for hues that spanned the blue–green boundary, to test whether chromatic differences across the boundary were perceptually exaggerated. This task did not require overt judgments of the perceived colors, and the tendency to group showed only a weak and inconsistent categorical bias. In a second case, we analyzed results from two prior studies of hue scaling of chromatic stimuli (De Valois, De Valois, Switkes, & Mahon, 1997; Malkoc, Kay, & Webster, 2005), to test whether color appearance changed more rapidly around the blue–green boundary. In this task, observers directly judged the perceived color of the stimuli, and these judgments tended to show much stronger categorical effects. The differences between these tasks could arise either because different signals mediate color grouping and color appearance, or because linguistic categories might differentially intrude on the response to color and/or on the perception of color. Our results suggest that the interaction between language and color processing may be highly dependent on the specific task and cognitive demands and strategies of the observer, and also highlight pronounced individual differences in the tendency to exhibit categorical responses. PMID:22176751
Chromatic Perceptual Learning but No Category Effects without Linguistic Input
Grandison, Alexandra; Sowden, Paul T.; Drivonikou, Vicky G.; Notman, Leslie A.; Alexander, Iona; Davies, Ian R. L.
2016-01-01
Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks but very little research has explored chromatic perceptual learning. Here, we use two low level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is category specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher level linguistic coding processes in order to manifest. PMID:27252669
Computer-assisted design in perceptual-motor skills research
NASA Technical Reports Server (NTRS)
Rogers, C. A., Jr.
1974-01-01
A categorization was made of independent variables previously found to be potent in simple perceptual-motor tasks. A computer was then used to generate hypothetical factorial designs. These were evaluated in terms of literature trends and pragmatic criteria. Potential side-effects of machine-assisted research strategy were discussed.
Effects of age on cognitive control during semantic categorization.
Mudar, Raksha A; Chiang, Hsueh-Sheng; Maguire, Mandy J; Spence, Jeffrey S; Eroh, Justin; Kraut, Michael A; Hart, John
2015-01-01
We used event-related potentials (ERPs) to study age effects of perceptual (basic-level) vs. perceptual-semantic (superordinate-level) categorization on cognitive control using the go/nogo paradigm. Twenty-two younger (11 M; 21 ± 2.2 years) and 22 older adults (9 M; 63 ± 5.8 years) completed two visual go/nogo tasks. In the single-car task (SiC) (basic), go/nogo responses were made based on single exemplars of a car (go) and a dog (nogo). In the object animal task (ObA) (superordinate), responses were based on multiple exemplars of objects (go) and animals (nogo). Each task consisted of 200 trials: 160 (80%) 'go' trials that required a response through button pressing and 40 (20%) 'nogo' trials that required inhibition/withholding of a response. ERP data revealed significantly reduced nogo-N2 and nogo-P3 amplitudes in older compared to younger adults, whereas go-N2 and go-P3 amplitudes were comparable in both groups during both categorization tasks. Although the effects of categorization levels on behavioral data and P3 measures were similar in both groups with longer response times, lower accuracy scores, longer P3 latencies, and lower P3 amplitudes in ObA compared to SiC, N2 latency revealed age group differences moderated by the task. Older adults had longer N2 latency for ObA compared to SiC, in contrast, younger adults showed no N2 latency difference between SiC and ObA. Overall, these findings suggest that age differentially affects neural processing related to cognitive control during semantic categorization. Furthermore, in older adults, unlike in younger adults, levels of categorization modulate neural processing related to cognitive control even at the early stages (N2). Published by Elsevier B.V.
Adaptive Criterion Setting in Perceptual Decision Making
ERIC Educational Resources Information Center
Stuttgen, Maik C.; Yildiz, Ali; Gunturkun, Onur
2011-01-01
Pigeons responded in a perceptual categorization task with six different stimuli (shades of gray), three of which were to be classified as "light" or "dark", respectively. Reinforcement probability for correct responses was varied from 0.2 to 0.6 across blocks of sessions and was unequal for correct light and dark responses. Introduction of a new…
Vanmarcke, Steven; Wagemans, Johan
2017-04-01
Adolescents with and without autism spectrum disorder (ASD) performed two priming experiments in which they implicitly processed a prime stimulus, containing high and/or low spatial frequency information, and then explicitly categorized a target face either as male/female (gender task) or as positive/negative (valence task). Adolescents with ASD made more categorization errors than typically developing adolescents. They also showed an age-dependent improvement in categorization speed and had more difficulties with categorizing facial expressions than gender. However, in neither of the categorization tasks did we find group differences in the processing of coarse versus fine prime information. This contradicted our expectations and indicated that the perceptual differences between adolescents with and without ASD critically depended on the processing time available for the primes.
Categorization skills and recall in brain damaged children: a multiple case study.
Mello, Claudia Berlim de; Muszkat, Mauro; Xavier, Gilberto Fernando; Bueno, Orlando Francisco Amodeo
2009-09-01
During development, children become capable of categorically associating stimuli and of using these relationships for memory recall. Brain damage in childhood can interfere with this development. This study investigated categorical association of stimuli and recall in four children with brain damage. The etiology, topography and timing of the lesions were diverse. Tasks included naming and immediate recall of 30 perceptually and semantically related figures, free sorting, delayed recall, and cued recall of the same material. Traditional neuropsychological tests were also employed. Two children with brain damage sustained in middle childhood relied on perceptual rather than on categorical associations in making associations between figures and showed deficits in delayed or cued recall, in contrast to those with perinatal lesions. One child exhibited normal performance in recall despite categorical association deficits. The present results suggest that brain damaged children show deficits in categorization and recall that are not usually identified in traditional neuropsychological tests.
What is automatized during perceptual categorization?
Roeder, Jessica L.; Ashby, F. Gregory
2016-01-01
An experiment is described that tested whether stimulus-response associations or an abstract rule are automatized during extensive practice at perceptual categorization. Twenty-seven participants each completed 12,300 trials of perceptual categorization, either on rule-based (RB) categories that could be learned explicitly or information-integration (II) categories that required procedural learning. Each participant practiced predominantly on a primary category structure, but every third session they switched to a secondary structure that used the same stimuli and responses. Half the stimuli retained their same response on the primary and secondary categories (the congruent stimuli) and half switched responses (the incongruent stimuli). Several results stood out. First, performance on the primary categories met the standard criteria of automaticity by the end of training. Second, for the primary categories in the RB condition, accuracy and response time (RT) were identical on congruent and incongruent stimuli. In contrast, for the primary II categories, accuracy was higher and RT was lower for congruent than for incongruent stimuli. These results are consistent with the hypothesis that rules are automatized in RB tasks, whereas stimulus-response associations are automatized in II tasks. A cognitive neuroscience theory is proposed that accounts for these results. PMID:27232521
Relationship between listeners' nonnative speech recognition and categorization abilities
Atagi, Eriko; Bent, Tessa
2015-01-01
Enhancement of the perceptual encoding of talker characteristics (indexical information) in speech can facilitate listeners' recognition of linguistic content. The present study explored this indexical-linguistic relationship in nonnative speech processing by examining listeners' performance on two tasks: nonnative accent categorization and nonnative speech-in-noise recognition. Results indicated substantial variability across listeners in their performance on both the accent categorization and nonnative speech recognition tasks. Moreover, listeners' accent categorization performance correlated with their nonnative speech-in-noise recognition performance. These results suggest that having more robust indexical representations for nonnative accents may allow listeners to more accurately recognize the linguistic content of nonnative speech. PMID:25618098
ERIC Educational Resources Information Center
Gao, Zaifeng; Bentin, Shlomo
2011-01-01
Face perception studies investigated how spatial frequencies (SF) are extracted from retinal display while forming a perceptual representation, or their selective use during task-imposed categorization. Here we focused on the order of encoding low-spatial frequencies (LSF) and high-spatial frequencies (HSF) from perceptual representations into…
Shifting Attention Between Visual Dimensions as a Source of Switch Costs.
Elchlepp, Heike; Best, Maisy; Lavric, Aureliu; Monsell, Stephen
2017-04-01
Task-switching experiments have documented a puzzling phenomenon: Advance warning of the switch reduces but does not eliminate the switch cost. Theoretical accounts have posited that the residual switch cost arises when one selects the relevant stimulus-response mapping, leaving earlier perceptual processes unaffected. We put this assumption to the test by seeking electrophysiological markers of encoding a perceptual dimension. Participants categorized a colored letter as a vowel or consonant or its color as "warm" or "cold." Orthogonally to the color manipulation, some colors were eight times more frequent than others, and the letters were in upper- or lowercase. Color frequency modulated the electroencephalogram amplitude at around 150 ms when participants repeated the color-classification task. When participants switched from the letter task to the color task, this effect was significantly delayed. Thus, even when prepared for, a task switch delays or prolongs encoding of the relevant perceptual dimension.
A Neurobiological Theory of Automaticity in Perceptual Categorization
ERIC Educational Resources Information Center
Ashby, F. Gregory; Ennis, John M.; Spiering, Brian J.
2007-01-01
A biologically detailed computational model is described of how categorization judgments become automatic in tasks that depend on procedural learning. The model assumes 2 neural pathways from sensory association cortex to the premotor area that mediates response selection. A longer and slower path projects to the premotor area via the striatum,…
Tsai, Chen-Gia; Chen, Chien-Chung; Wen, Ya-Chien; Chou, Tai-Li
2015-01-01
In human cultures, the perceptual categorization of musical pitches relies on pitch-naming systems. A sung pitch name concurrently holds the information of fundamental frequency and pitch name. These two aspects may be either congruent or incongruent with regard to pitch categorization. The present study aimed to compare the neuromagnetic responses to musical and verbal stimuli for congruency judgments, for example a congruent pair for the pitch C4 sung with the pitch name do in a C-major context (the pitch-semantic task) or for the meaning of a word to match the speaker’s identity (the voice-semantic task). Both the behavioral data and neuromagnetic data showed that congruency detection of the speaker’s identity and word meaning was slower than that of the pitch and pitch name. Congruency effects of musical stimuli revealed that pitch categorization and semantic processing of pitch information were associated with P2m and N400m, respectively. For verbal stimuli, P2m and N400m did not show any congruency effect. In both the pitch-semantic task and the voice-semantic task, we found that incongruent stimuli evoked stronger slow waves with the latency of 500–600 ms than congruent stimuli. These findings shed new light on the neural mechanisms underlying pitch-naming processes. PMID:26347638
Evidence of a Transition from Perceptual to Category Induction in 3- to 9-Year-Old Children
ERIC Educational Resources Information Center
Badger, Julia R.; Shapiro, Laura R.
2012-01-01
We examined whether inductive reasoning development is better characterized by accounts assuming an early category bias versus an early perceptual bias. We trained 264 children aged 3 to 9 years to categorize novel insects using a rule that directly pitted category membership against appearance. This was followed by an induction task with…
The activity in the anterior insulae is modulated by perceptual decision-making difficulty.
Lamichhane, Bidhan; Adhikari, Bhim M; Dhamala, Mukesh
2016-07-07
Previous neuroimaging studies provide evidence for the involvement of the anterior insulae (INSs) in perceptual decision-making processes. However, how the insular cortex is involved in integration of degraded sensory information to create a conscious percept of environment and to drive our behaviors still remains a mystery. In this study, using functional magnetic resonance imaging (fMRI) and four different perceptual categorization tasks in visual and audio-visual domains, we measured blood oxygen level dependent (BOLD) signals and examined the roles of INSs in easy and difficult perceptual decision-making. We created a varying degree of degraded stimuli by manipulating the task-specific stimuli in these four experiments to examine the effects of task difficulty on insular cortex response. We hypothesized that significantly higher BOLD response would be associated with the ambiguity of the sensory information and decision-making difficulty. In all of our experimental tasks, we found the INS activity consistently increased with task difficulty and participants' behavioral performance changed with the ambiguity of the presented sensory information. These findings support the hypothesis that the anterior insulae are involved in sensory-guided, goal-directed behaviors and their activities can predict perceptual load and task difficulty. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Fisher, Anna V.
2011-01-01
Is processing of conceptual information as robust as processing of perceptual information early in development? Existing empirical evidence is insufficient to answer this question. To examine this issue, 3- to 5-year-old children were presented with a flexible categorization task, in which target items (e.g., an open red umbrella) shared category…
Out of sight, out of mind: Categorization learning and normal aging.
Schenk, Sabrina; Minda, John P; Lech, Robert K; Suchan, Boris
2016-10-01
The present combined EEG and eye tracking study examined the process of categorization learning at different age ranges and aimed to investigate to which degree categorization learning is mediated by visual attention and perceptual strategies. Seventeen young subjects and ten elderly subjects had to perform a visual categorization task with two abstract categories. Each category consisted of prototypical stimuli and an exception. The categorization of prototypical stimuli was learned very early during the experiment, while the learning of exceptions was delayed. The categorization of exceptions was accompanied by higher P150, P250 and P300 amplitudes. In contrast to younger subjects, elderly subjects had problems in the categorization of exceptions, but showed an intact categorization performance for prototypical stimuli. Moreover, elderly subjects showed higher fixation rates for important stimulus features and higher P150 amplitudes, which were positively correlated with the categorization performances. These results indicate that elderly subjects compensate for cognitive decline through enhanced perceptual and attentional processing of individual stimulus features. Additionally, a computational approach has been applied and showed a transition away from purely abstraction-based learning to an exemplar-based learning in the middle block for both groups. However, the calculated models provide a better fit for younger subjects than for elderly subjects. The current study demonstrates that human categorization learning is based on early abstraction-based processing followed by an exemplar-memorization stage. This strategy combination facilitates the learning of real world categories with a nuanced category structure. In addition, the present study suggests that categorization learning is affected by normal aging and modulated by perceptual processing and visual attention. Copyright © 2016 Elsevier Ltd. All rights reserved.
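The computational comparison mentioned above (abstraction-based versus exemplar-based learning) is commonly framed as a prototype model against an exemplar (GCM-style) model. The sketch below shows only the core evidence computations under standard assumptions (exponential similarity over city-block distance), with made-up binary stimuli; it is not the authors' fitted model.

```python
# Sketch: prototype vs. exemplar evidence for one category, using toy binary-feature stimuli.
import numpy as np

def similarity(x, y, c=1.0):
    """Exponential similarity decreasing with city-block distance (a common GCM choice)."""
    return np.exp(-c * np.abs(x - y).sum())

category_members = np.array([[1, 1, 1, 0],
                             [1, 1, 0, 1],
                             [1, 0, 1, 1]])   # stored category exemplars
probe = np.array([1, 1, 1, 1])

# Prototype (abstraction-based) evidence: similarity to the category average.
prototype = category_members.mean(axis=0)
proto_evidence = similarity(probe, prototype)

# Exemplar evidence: summed similarity to every stored category member.
exemplar_evidence = sum(similarity(probe, m) for m in category_members)

print(proto_evidence, exemplar_evidence)
```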
Vanmarcke, Steven; Wagemans, Johan
2015-01-01
In everyday life, we are generally able to dynamically understand and adapt to socially (ir)relevant encounters, and to make appropriate decisions about these. All of this requires an impressive ability to directly filter and obtain the most informative aspects of a complex visual scene. Such rapid gist perception can be assessed in multiple ways. In the ultrafast categorization paradigm developed by Simon Thorpe et al. (1996), participants get a clear categorization task in advance and succeed at detecting the target object of interest (animal) almost perfectly (even with 20 ms exposures). Since this pioneering work, follow-up studies consistently reported population-level reaction time differences on different categorization tasks, indicating a superordinate advantage (animal versus dog) and effects of perceptual similarity (animals versus vehicles) and object category size (natural versus animal versus dog). In this study, we replicated and extended these separate findings by using a systematic collection of different categorization tasks (varying in presentation time, task demands, and stimuli) and focusing on individual differences in terms of, e.g., gender and intelligence. In addition to replicating the main findings from the literature, we find subtle, yet consistent gender differences (women faster than men). PMID:26034569
Semantic, perceptual and number space: relations between category width and spatial processing.
Brugger, Peter; Loetscher, Tobias; Graves, Roger E; Knoch, Daria
2007-05-17
Coarse semantic encoding and broad categorization behavior are the hallmarks of the right cerebral hemisphere's contribution to language processing. We correlated 40 healthy subjects' breadth of categorization as assessed with Pettigrew's category width scale with lateral asymmetries in perceptual and representational space. Specifically, we hypothesized broader category width to be associated with larger leftward spatial biases. For the 20 men, but not the 20 women, this hypothesis was confirmed both in a lateralized tachistoscopic task with chimeric faces and a random digit generation task; the higher a male participant's score on category width, the more pronounced were his left-visual field bias in the judgement of chimeric faces and his small-number preference in digit generation ("small" is to the left of "large" in number space). Subjects' category width was unrelated to lateral displacements in a blindfolded tactile-motor rod centering task. These findings indicate that visual-spatial functions of the right hemisphere should not be considered independent of the same hemisphere's contribution to language. Linguistic and spatial cognition may be more tightly interwoven than is currently assumed.
Effects of spatial frequency bands on perceptual decision: it is not the stimuli but the comparison.
Rotshtein, Pia; Schofield, Andrew; Funes, María J; Humphreys, Glyn W
2010-08-24
Observers performed three between- and two within-category perceptual decisions with hybrid stimuli comprising low and high spatial frequency (SF) images. We manipulated (a) attention to, and (b) congruency of information in the two SF bands. Processing difficulty of the different SF bands varied across different categorization tasks: house-flower, face-house, and valence decisions were easier when based on high SF bands, while flower-face and gender categorizations were easier when based on low SF bands. Larger interference also arose from response relevant distracters that were presented in the "preferred" SF range of the task. Low SF effects were facilitated by short exposure durations. The results demonstrate that decisions are affected by an interaction of task and SF range and that the information from the non-attended SF range interfered at the decision level. A further analysis revealed that overall differences in the statistics of image features, in particular differences of orientation information between two categories, were associated with decision difficulty. We concluded that the advantage of using information from one SF range over another depends on the specific task requirements that built on the differences of the statistical properties between the compared categories.
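A rough sketch of how hybrid spatial-frequency stimuli of this general kind are often constructed: low-pass filter one image, high-pass filter another, and sum the two. The Gaussian filtering and the cutoff value are illustrative assumptions, not the stimulus parameters used in the study.

```python
# Sketch: build a hybrid image from the low SF content of one image and the high SF content of another.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
img_low_source = rng.random((256, 256))     # stand-in for, e.g., a grayscale face image
img_high_source = rng.random((256, 256))    # stand-in for, e.g., a grayscale house image

sigma = 6.0                                  # illustrative cutoff (pixels), not from the study
low_pass = gaussian_filter(img_low_source, sigma=sigma)                       # keep coarse structure
high_pass = img_high_source - gaussian_filter(img_high_source, sigma=sigma)   # keep fine detail

hybrid = low_pass + high_pass                # different content carried in different SF bands
print(hybrid.shape, hybrid.mean())
```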
Seeing is not stereotyping: the functional independence of categorization and stereotype activation.
Ito, Tiffany A; Tomelleri, Silvia
2017-05-01
Social categorization has been viewed as necessarily resulting in stereotyping, yet extant research suggests the two processes are differentially sensitive to task manipulations. Here, we simultaneously test the degree to which race perception and stereotyping are conditionally automatic. Participants performed a sequential priming task while either explicitly attending to the race of face primes or directing attention away from their semantic nature. We find a dissociation between the perceptual encoding of race and subsequent activation of associated stereotypes, with race perception occurring in both task conditions, but implicit stereotyping occurring only when attention is directed to the race of the face primes. These results support a clear conceptual distinction between categorization and stereotyping and show that the encoding of racial category need not result in stereotype activation. © The Author (2017). Published by Oxford University Press. PMID:28338829
Eye tracking measures of uncertainty during perceptual decision making.
Brunyé, Tad T; Gardony, Aaron L
2017-10-01
Perceptual decision making involves gathering and interpreting sensory information to effectively categorize the world and inform behavior. For instance, a radiologist distinguishing the presence versus absence of a tumor, or a luggage screener categorizing objects as threatening or non-threatening. In many cases, sensory information is not sufficient to reliably disambiguate the nature of a stimulus, and resulting decisions are done under conditions of uncertainty. The present study asked whether several oculomotor metrics might prove sensitive to transient states of uncertainty during perceptual decision making. Participants viewed images with varying visual clarity and were asked to categorize them as faces or houses, and rate the certainty of their decisions, while we used eye tracking to monitor fixations, saccades, blinks, and pupil diameter. Results demonstrated that decision certainty influenced several oculomotor variables, including fixation frequency and duration, the frequency, peak velocity, and amplitude of saccades, and phasic pupil diameter. Whereas most measures tended to change linearly along with decision certainty, pupil diameter revealed more nuanced and dynamic information about the time course of perceptual decision making. Together, results demonstrate robust alterations in eye movement behavior as a function of decision certainty and attention demands, and suggest that monitoring oculomotor variables during applied task performance may prove valuable for identifying and remediating transient states of uncertainty. Published by Elsevier B.V.
Yan, Xiaoqian; Andrews, Timothy J; Young, Andrew W
2016-03-01
The ability to recognize facial expressions of basic emotions is often considered a universal human ability. However, recent studies have suggested that this commonality has been overestimated and that people from different cultures use different facial signals to represent expressions (Jack, Blais, Scheepers, Schyns, & Caldara, 2009; Jack, Caldara, & Schyns, 2012). We investigated this possibility by examining similarities and differences in the perception and categorization of facial expressions between Chinese and white British participants using whole-face and partial-face images. Our results showed no cultural difference in the patterns of perceptual similarity of expressions from whole-face images. When categorizing the same expressions, however, both British and Chinese participants were slightly more accurate with whole-face images of their own ethnic group. To further investigate potential strategy differences, we repeated the perceptual similarity and categorization tasks with presentation of only the upper or lower half of each face. Again, the perceptual similarity of facial expressions was similar between Chinese and British participants for both the upper and lower face regions. However, participants were slightly better at categorizing facial expressions of their own ethnic group for the lower face regions, indicating that the way in which culture shapes the categorization of facial expressions is largely driven by differences in information decoding from this part of the face. (c) 2016 APA, all rights reserved.
Post-decision biases reveal a self-consistency principle in perceptual inference.
Luu, Long; Stocker, Alan A
2018-05-15
Making a categorical judgment can systematically bias our subsequent perception of the world. We show that these biases are well explained by a self-consistent Bayesian observer whose perceptual inference process is causally conditioned on the preceding choice. We quantitatively validated the model and its key assumptions with a targeted set of three psychophysical experiments, focusing on a task sequence where subjects first had to make a categorical orientation judgment before estimating the actual orientation of a visual stimulus. Subjects exhibited a high degree of consistency between categorical judgment and estimate, which is difficult to reconcile with alternative models in the face of late, memory related noise. The observed bias patterns resemble the well-known changes in subjective preferences associated with cognitive dissonance, which suggests that the brain's inference processes may be governed by a universal self-consistency constraint that avoids entertaining 'dissonant' interpretations of the evidence. © 2018, Luu et al.
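A hedged numerical sketch of the self-consistency principle described above: after the categorical judgment, the estimation step conditions on that choice by removing posterior mass on the unchosen side of the boundary, which pushes the estimate away from the boundary. The Gaussian noise model, flat prior, and parameter values are assumptions for illustration, not the authors' fitted observer model.

```python
# Sketch: orientation estimate conditioned on a preceding "clockwise vs. counterclockwise" choice.
import numpy as np

theta = np.linspace(-45.0, 45.0, 901)      # orientation grid in degrees; category boundary at 0
sigma = 8.0                                # illustrative sensory noise (deg)
measurement = 3.0                          # sensory measurement on this trial

posterior = np.exp(-0.5 * ((measurement - theta) / sigma) ** 2)   # flat prior assumed
posterior /= posterior.sum()

unconditioned = (theta * posterior).sum()                 # standard posterior-mean estimate
choice_cw = np.where(theta > 0, posterior, 0.0)           # condition on the choice "theta > 0"
choice_cw /= choice_cw.sum()
conditioned = (theta * choice_cw).sum()

print(round(unconditioned, 2), round(conditioned, 2))     # conditioned estimate is repelled from 0
```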
Vanmarcke, Steven; Calders, Filip; Wagemans, Johan
2016-01-01
Although categorization can take place at different levels of abstraction, classic studies on semantic labeling identified the basic level, for example, dog, as entry point for categorization. Ultrarapid categorization tasks have contradicted these findings, indicating that participants are faster at detecting superordinate-level information, for example, animal, in a complex visual image. We argue that both seemingly contradictive findings can be reconciled within the framework of parallel distributed processing and its successor Leabra (Local, Error-driven and Associative, Biologically Realistic Algorithm). The current study aimed at verifying this prediction in an ultrarapid categorization task with a dynamically changing presentation time (PT) for each briefly presented object, followed by a perceptual mask. Furthermore, we manipulated two defining task variables: level of categorization (basic vs. superordinate categorization) and object presentation mode (object-in-isolation vs. object-in-context). In contradiction with previous ultrarapid categorization research, focusing on reaction time, we used accuracy as our main dependent variable. Results indicated a consistent superordinate processing advantage, coinciding with an overall improvement in performance with longer PT and a significantly more accurate detection of objects in isolation, compared with objects in context, at lower stimulus PT. This contextual disadvantage disappeared when PT increased, indicating that figure-ground separation with recurrent processing is vital for meaningful contextual processing to occur. PMID:27803794
Lupyan, Gary
2009-08-01
In addition to its communicative functions, language has been argued to have a variety of extracommunicative functions, as assessed by its causal involvement in putatively nonlinguistic tasks. In the present work, I argue that language may be critically involved in the ability of human adults to categorize objects on a specific dimension (e.g., color) while abstracting over other dimensions (e.g., size). This ability is frequently impaired in aphasic patients. The present work demonstrates that normal participants placed under conditions of verbal interference show a pattern of deficits strikingly similar to that of aphasic patients: impaired taxonomic categorization along perceptual dimensions, and preserved thematic categorization. A control experiment using a visuospatial-interference task failed to find this selective pattern of deficits. The present work has implications for understanding the online role of language in normal cognition and supports the claim that language is causally involved in nonverbal cognition.
Rule-Based Category Learning in Children: The Role of Age and Executive Functioning
Rabi, Rahel; Minda, John Paul
2014-01-01
Rule-based category learning was examined in 4–11 year-olds and adults. Participants were asked to learn a set of novel perceptual categories in a classification learning task. Categorization performance improved with age, with younger children showing the strongest rule-based deficit relative to older children and adults. Model-based analyses provided insight regarding the type of strategy being used to solve the categorization task, demonstrating that the use of the task appropriate strategy increased with age. When children and adults who identified the correct categorization rule were compared, the performance deficit was no longer evident. Executive functions were also measured. While both working memory and inhibitory control were related to rule-based categorization and improved with age, working memory specifically was found to marginally mediate the age-related improvements in categorization. When analyses focused only on the sample of children, results showed that working memory ability and inhibitory control were associated with categorization performance and strategy use. The current findings track changes in categorization performance across childhood, demonstrating at which points performance begins to mature and resemble that of adults. Additionally, findings highlight the potential role that working memory and inhibitory control may play in rule-based category learning. PMID:24489658
Shankar, Swetha; Kayser, Andrew S
2017-06-01
To date it has been unclear whether perceptual decision making and rule-based categorization reflect activation of similar cognitive processes and brain regions. On one hand, both map potentially ambiguous stimuli to a smaller set of motor responses. On the other hand, decisions about perceptual salience typically concern concrete sensory representations derived from a noisy stimulus, while categorization is typically conceptualized as an abstract decision about membership in a potentially arbitrary set. Previous work has primarily examined these types of decisions in isolation. Here we independently varied salience in both the perceptual and categorical domains in a random dot-motion framework by manipulating dot-motion coherence and motion direction relative to a category boundary, respectively. Behavioral and modeling results suggest that categorical (more abstract) information, which is more relevant to subjects' decisions, is weighted more strongly than perceptual (more concrete) information, although they also have significant interactive effects on choice. Within the brain, BOLD activity within frontal regions strongly differentiated categorical salience and weakly differentiated perceptual salience; however, the interaction between these two factors activated similar frontoparietal brain networks. Notably, explicitly evaluating feature interactions revealed a frontal-parietal dissociation: parietal activity varied strongly with both features, but frontal activity varied with the combined strength of the information that defined the motor response. Together, these data demonstrate that frontal regions are driven by decision-relevant features and argue that perceptual decisions and rule-based categorization reflect similar cognitive processes and activate similar brain networks to the extent that they define decision-relevant stimulus-response mappings. NEW & NOTEWORTHY Here we study the behavioral and neural dynamics of perceptual categorization when decision information varies in multiple domains at different levels of abstraction. Behavioral and modeling results suggest that categorical (more abstract) information is weighted more strongly than perceptual (more concrete) information but that perceptual and categorical domains interact to influence decisions. Frontoparietal brain activity during categorization flexibly represents decision-relevant features and highlights significant dissociations in frontal and parietal activity during decision making. Copyright © 2017 the American Physiological Society.
Meuwese, Julia D I; van Loon, Anouk M; Lamme, Victor A F; Fahrenfort, Johannes J
2014-05-01
Perceptual decisions seem to be made automatically and almost instantly. Constructing a unitary subjective conscious experience takes more time. For example, when trying to avoid a collision with a car on a foggy road you brake or steer away in a reflex, before realizing you were in a near accident. This subjective aspect of object recognition has been given little attention. We used metacognition (assessed with confidence ratings) to measure subjective experience during object detection and object categorization for degraded and masked objects, while objective performance was matched. Metacognition was equal for degraded and masked objects, but categorization led to higher metacognition than did detection. This effect turned out to be driven by a difference in metacognition for correct rejection trials, which seemed to be caused by an asymmetry of the distractor stimulus: It does not contain object-related information in the detection task, whereas it does contain such information in the categorization task. Strikingly, this asymmetry selectively impacted metacognitive ability when objective performance was matched. This finding reveals a fundamental difference in how humans reflect versus act on information: When matching the amount of information required to perform two tasks at some objective level of accuracy (acting), metacognitive ability (reflecting) is still better in tasks that rely on positive evidence (categorization) than in tasks that rely more strongly on an absence of evidence (detection).
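Metacognitive ability of the kind assessed above is often summarized by how well confidence ratings discriminate correct from incorrect trials; one simple, model-free index is the area under the type-2 ROC. The sketch below computes that index from trial-wise accuracy and confidence using simulated data; the data and confidence distributions are illustrative assumptions, not the study's measures.

```python
import numpy as np

def type2_auroc(correct, confidence):
    """Area under the type-2 ROC: probability that a randomly chosen correct
    trial carries higher confidence than a randomly chosen error (ties = 0.5)."""
    correct = np.asarray(correct, dtype=bool)
    conf_c = np.asarray(confidence)[correct]       # confidence on correct trials
    conf_i = np.asarray(confidence)[~correct]      # confidence on error trials
    greater = (conf_c[:, None] > conf_i[None, :]).mean()
    ties = (conf_c[:, None] == conf_i[None, :]).mean()
    return greater + 0.5 * ties

# illustrative data: objective accuracy is matched, but confidence is more
# diagnostic in the "categorization-like" condition than the "detection-like" one
rng = np.random.default_rng(1)
acc = rng.random(2000) < 0.75
conf_detect = np.where(acc, rng.integers(2, 5, 2000), rng.integers(1, 4, 2000))
conf_categ = np.where(acc, rng.integers(3, 6, 2000), rng.integers(1, 4, 2000))
print("type-2 AUROC, detection-like:", round(type2_auroc(acc, conf_detect), 3))
print("type-2 AUROC, categorization-like:", round(type2_auroc(acc, conf_categ), 3))
```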
Neuroanatomical correlates of encoding in episodic memory: levels of processing effect.
Kapur, S; Craik, F I; Tulving, E; Wilson, A A; Houle, S; Brown, G M
1994-01-01
Cognitive studies of memory processes demonstrate that memory for stimuli is a function of how they are encoded; stimuli processed semantically are better remembered than those processed in a perceptual or shallow fashion. This study investigates the neural correlates of this cognitive phenomenon. Twelve subjects performed two different cognitive tasks on a series of visually presented nouns. In one task, subjects detected the presence or absence of the letter a; in the other, subjects categorized each noun as living or nonliving. Positron emission tomography (PET) scans using 15O-labeled water were obtained during both tasks. Subjects showed substantially better recognition memory for nouns seen in the living/nonliving task, compared to nouns seen in the a-checking task. Comparison of the PET images between the two cognitive tasks revealed a significant activation in the left inferior prefrontal cortex (Brodmann's areas 45, 46, 47, and 10) in the semantic task as compared to the perceptual task. We propose that memory processes are subserved by a wide neurocognitive network and that encoding processes involve preferential activation of the structures in the left inferior prefrontal cortex. PMID:8134340
Sakaki, Michiko; Gorlick, Marissa A; Mather, Mara
2011-12-01
Past studies have revealed that encountering negative events interferes with cognitive processing of subsequent stimuli. The present study investigates whether negative events affect semantic and perceptual processing differently. Presentation of negative pictures produced slower reaction times than neutral or positive pictures in tasks that require semantic processing, such as natural or man-made judgments about drawings of objects, commonness judgments about objects, and categorical judgments about pairs of words. In contrast, negative picture presentation did not slow down judgments in subsequent perceptual processing (e.g., color judgments about words, size judgments about objects). The subjective arousal level of negative pictures did not modulate the interference effects on semantic or perceptual processing. These findings indicate that encountering negative emotional events interferes with semantic processing of subsequent stimuli more strongly than perceptual processing, and that not all types of subsequent cognitive processing are impaired by negative events. (c) 2011 APA, all rights reserved.
Evidence of Metacognitive Control by Humans and Monkeys in a Perceptual Categorization Task
ERIC Educational Resources Information Center
Redford, Joshua S.
2010-01-01
Metacognition research has focused on the degree to which nonhuman primates share humans' capacity to monitor their cognitive processes. Convincing evidence now exists that monkeys can engage in metacognitive monitoring. By contrast, few studies have explored metacognitive control in monkeys, and the available evidence of metacognitive control…
Color Categories and Color Appearance
ERIC Educational Resources Information Center
Webster, Michael A.; Kay, Paul
2012-01-01
We examined categorical effects in color appearance in two tasks, which in part differed in the extent to which color naming was explicitly required for the response. In one, we measured the effects of color differences on perceptual grouping for hues that spanned the blue-green boundary, to test whether chromatic differences across the boundary…
Categorical clustering of the neural representation of color.
Brouwer, Gijs Joost; Heeger, David J
2013-09-25
Cortical activity was measured with functional magnetic resonance imaging (fMRI) while human subjects viewed 12 stimulus colors and performed either a color-naming or diverted attention task. A forward model was used to extract lower dimensional neural color spaces from the high-dimensional fMRI responses. The neural color spaces in two visual areas, human ventral V4 (V4v) and VO1, exhibited clustering (greater similarity between activity patterns evoked by stimulus colors within a perceptual category, compared to between-category colors) for the color-naming task, but not for the diverted attention task. Response amplitudes and signal-to-noise ratios were higher in most visual cortical areas for color naming compared to diverted attention. But only in V4v and VO1 did the cortical representation of color change to a categorical color space. A model is presented that induces such a categorical representation by changing the response gains of subpopulations of color-selective neurons.
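A forward (channel encoding) model of this general kind treats each voxel's response as a weighted sum of a small set of color-tuned channels, estimates the weights from training data by least squares, and then inverts the weights to recover channel responses, the low-dimensional "neural color space", for held-out data. The sketch below uses simulated responses; the basis functions, noise level, and dimensions are illustrative assumptions and this is not the authors' exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_chan, n_col = 50, 6, 12
hues = np.linspace(0, 2 * np.pi, n_col, endpoint=False)        # 12 stimulus colors
centers = np.linspace(0, 2 * np.pi, n_chan, endpoint=False)    # channel preferred hues

# channel basis: half-rectified cosines raised to a power (illustrative tuning)
C = np.maximum(0, np.cos(hues[None, :] - centers[:, None])) ** 5   # channels x colors

# simulate voxel data: unknown channel weights plus noise, for train and test runs
W_true = rng.normal(size=(n_vox, n_chan))
B_train = W_true @ C + 0.5 * rng.normal(size=(n_vox, n_col))
B_test = W_true @ C + 0.5 * rng.normal(size=(n_vox, n_col))

# step 1: estimate channel weights per voxel from the training run (B = W C)
W_hat = np.linalg.lstsq(C.T, B_train.T, rcond=None)[0].T           # voxels x channels
# step 2: invert the weights to get channel responses for the test run
C_test = np.linalg.lstsq(W_hat, B_test, rcond=None)[0]             # channels x colors

print("recovered neural color space, one column per stimulus color:")
print(np.round(C_test, 2))
```

Clustering of the columns of the recovered space (e.g., greater similarity within a perceptual color category) can then be assessed with ordinary distance measures.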
Flaisch, Tobias; Imhof, Martin; Schmälzle, Ralf; Wentz, Klaus-Ulrich; Ibach, Bernd; Schupp, Harald T.
2015-01-01
The present study utilized functional magnetic resonance imaging (fMRI) to examine the neural processing of concurrently presented emotional stimuli under varying explicit and implicit attention demands. Specifically, in separate trials, participants indicated the category of either pictures or words. The words were placed over the center of the pictures and the picture-word compound-stimuli were presented for 1500 ms in a rapid event-related design. The results reveal pronounced main effects of task and emotion: the picture categorization task prompted strong activations in visual, parietal, temporal, frontal, and subcortical regions; the word categorization task evoked increased activation only in left extrastriate cortex. Furthermore, beyond replicating key findings regarding emotional picture and word processing, the results point to a dissociation of semantic-affective and sensory-perceptual processes for words: while emotional words engaged semantic-affective networks of the left hemisphere regardless of task, the increased activity in left extrastriate cortex associated with explicitly attending to words was diminished when the word was overlaid over an erotic image. Finally, we observed a significant interaction between Picture Category and Task within dorsal visual-associative regions, inferior parietal, and dorsolateral, and medial prefrontal cortices: during the word categorization task, activation was increased in these regions when the words were overlaid over erotic as compared to romantic pictures. During the picture categorization task, activity in these areas was relatively decreased when categorizing erotic as compared to romantic pictures. Thus, the emotional intensity of the pictures strongly affected brain regions devoted to the control of task-related word or picture processing. These findings are discussed with respect to the interplay of obligatory stimulus processing with task-related attentional control mechanisms. PMID:26733895
Adaptive categorization of ART networks in robot behavior learning using game-theoretic formulation.
Fung, Wai-keung; Liu, Yun-hui
2003-12-01
Adaptive Resonance Theory (ART) networks are employed in robot behavior learning. Online robot behavior learning faces two difficulties: (1) memory requirements that grow exponentially with time, and (2) the difficulty for operators of specifying learning-task accuracy and controlling learning attention before learning begins. To remedy these difficulties, this paper introduces an adaptive categorization mechanism into ART networks for categorizing perceptual and action patterns. A game-theoretic formulation of adaptive categorization is proposed in which the vigilance parameter is adapted to control the size of the categories formed. The proposed vigilance-parameter update rule helps improve categorization performance in terms of category-number stability and removes the need to select an initial vigilance parameter prior to pattern categorization, as required in traditional ART networks. Behavior learning with a physical robot demonstrates the effectiveness of the proposed adaptive categorization mechanism.
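For readers unfamiliar with the role of vigilance in ART, the sketch below implements a minimal ART-1-style categorizer for binary patterns: an input joins an existing category only if its match ratio clears the vigilance threshold; otherwise a new category is created. The game-theoretic vigilance update proposed in the paper is not reproduced here; vigilance is simply a fixed parameter in this illustration, and the toy patterns are assumptions.

```python
import numpy as np

def art1_categorize(patterns, vigilance=0.7, beta=1.0):
    """Minimal ART-1-style clustering of binary patterns (illustrative sketch)."""
    categories = []                                   # prototype weight vectors
    labels = []
    for p in patterns:
        p = np.asarray(p, dtype=float)
        # rank candidate categories by choice function |p AND w| / (beta + |w|)
        order = sorted(range(len(categories)),
                       key=lambda j: -np.minimum(p, categories[j]).sum()
                                      / (beta + categories[j].sum()))
        for j in order:
            match = np.minimum(p, categories[j]).sum() / p.sum()
            if match >= vigilance:                    # vigilance test passed: resonate
                categories[j] = np.minimum(p, categories[j])   # fast-learning update
                labels.append(j)
                break
        else:                                         # no category matched: create one
            categories.append(p.copy())
            labels.append(len(categories) - 1)
    return labels, categories

data = [[1, 1, 0, 0, 1], [1, 1, 0, 0, 0], [0, 0, 1, 1, 1], [0, 0, 1, 1, 0]]
labels, cats = art1_categorize(data, vigilance=0.6)
print("category assignments:", labels)   # lower vigilance yields fewer, coarser categories
print("number of categories:", len(cats))
```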
"Are You Looking at Me?" How Children's Gaze Judgments Improve with Age
ERIC Educational Resources Information Center
Mareschal, Isabelle; Otsuka, Yumiko; Clifford, Colin W. G.; Mareschal, Denis
2016-01-01
Adults' judgments of another person's gaze reflect both sensory (e.g., perceptual) and nonsensory (e.g., decisional) processes. We examined how children's performance on a gaze categorization task develops over time by varying uncertainty in the stimulus presented to 6- to 11-year-olds (n = 57). We found that younger children responded…
Learning task affects ERP-correlates of the own-race bias, but not recognition memory performance.
Stahl, Johanna; Wiese, Holger; Schweinberger, Stefan R
2010-06-01
People are generally better in recognizing faces from their own ethnic group as opposed to faces from another ethnic group, a finding which has been interpreted in the context of two opposing theories. Whereas perceptual expertise theories stress the role of long-term experience with one's own ethnic group, race feature theories assume that the processing of an other-race-defining feature triggers inferior coding and recognition of faces. The present study tested these hypotheses by manipulating the learning task in a recognition memory test. At learning, one group of participants categorized faces according to ethnicity, whereas another group rated facial attractiveness. Subsequent recognition tests indicated clear and similar own-race biases for both groups. However, ERPs from learning and test phases demonstrated an influence of learning task on neurophysiological processing of own- and other-race faces. While both groups exhibited larger N170 responses to Asian as compared to Caucasian faces, task-dependent differences were seen in a subsequent P2 ERP component. Whereas the P2 was more pronounced for Caucasian faces in the categorization group, this difference was absent in the attractiveness rating group. The learning task thus influences early face encoding. Moreover, comparison with recent research suggests that this attractiveness rating task influences the processes reflected in the P2 in a similar manner as perceptual expertise for other-race faces does. By contrast, the behavioural own-race bias suggests that long-term expertise is required to increase other-race face recognition and hence attenuate the own-race bias. Copyright 2010 Elsevier Ltd. All rights reserved.
Categorization of common sounds by cochlear implanted and normal hearing adults.
Collett, E; Marx, M; Gaillard, P; Roby, B; Fraysse, B; Deguine, O; Barone, P
2016-05-01
Auditory categorization involves grouping of acoustic events along one or more shared perceptual dimensions which can relate to both semantic and physical attributes. This process involves both high level cognitive processes (categorization) and low-level perceptual encoding of the acoustic signal, both of which are affected by the use of a cochlear implant (CI) device. The goal of this study was twofold: I) compare the categorization strategies of CI users and normal hearing listeners (NHL) II) investigate if any characteristics of the raw acoustic signal could explain the results. 16 experienced CI users and 20 NHL were tested using a Free-Sorting Task of 16 common sounds divided into 3 predefined categories of environmental, musical and vocal sounds. Multiple Correspondence Analysis (MCA) and Hierarchical Clustering based on Principal Components (HCPC) show that CI users followed a similar categorization strategy to that of NHL and were able to discriminate between the three different types of sounds. However results for CI users were more varied and showed less inter-participant agreement. Acoustic analysis also highlighted the average pitch salience and average autocorrelation peak as being important for the perception and categorization of the sounds. The results therefore show that on a broad level of categorization CI users may not have as many difficulties as previously thought in discriminating certain kinds of sound; however the perception of individual sounds remains challenging. Copyright © 2016 Elsevier B.V. All rights reserved.
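One common way to analyze free-sorting data of this kind is to build a dissimilarity matrix from how often two sounds were placed in different groups across participants and then apply hierarchical clustering. The sketch below uses toy data and standard SciPy routines; it is a simplified stand-in, not the study's MCA/HCPC pipeline.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# toy free-sorting data: each row is one participant's partition of 6 sounds
# (integers are group labels within that participant's sort)
sorts = np.array([
    [0, 0, 1, 1, 2, 2],
    [0, 0, 0, 1, 2, 2],
    [0, 0, 1, 1, 1, 2],
])

n_items = sorts.shape[1]
# dissimilarity = proportion of participants who put the two items in different groups
diss = np.zeros((n_items, n_items))
for s in sorts:
    diss += (s[:, None] != s[None, :]).astype(float)
diss /= len(sorts)

Z = linkage(squareform(diss, checks=False), method="average")
clusters = fcluster(Z, t=3, criterion="maxclust")
print("recovered clusters:", clusters)   # items sorted together across participants cluster together
```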
Corina, David P.; Grosvald, Michael
2011-01-01
In this paper, we compare responses of deaf signers and hearing non-signers engaged in a categorization task of signs and non-linguistic human actions. We examine the time it takes to make such categorizations under conditions of 180-degree stimulus inversion and as a function of repetition priming, in an effort to understand whether the processing of sign language forms draws upon special processing mechanisms or makes use of mechanisms used in recognition of non-linguistic human actions. Our data show that deaf signers were much faster in the categorization of both linguistic and non-linguistic actions, and relative to hearing non-signers, show evidence that they were more sensitive to the configural properties of signs. Our study suggests that sign expertise may lead to modifications of a general-purpose human action recognition system rather than evoking a qualitatively different mode of processing, and supports the contention that signed languages make use of perceptual systems through which humans understand or parse human actions and gestures more generally. PMID:22153323
Probability assessment with response times and confidence in perception and knowledge.
Petrusic, William M; Baranski, Joseph V
2009-02-01
In both a perceptual and a general-knowledge comparison task, participants categorized the time they took to decide, selecting one of six categories ordered from "Slow" to "Fast". Subsequently, they rated confidence on a six-category scale ranging from "50%" to "100%". Participants were able to accurately scale their response times, thus enabling the treatment of the response time (RT) categories as potential confidence categories. Probability assessment analyses of RTs revealed indices of over/underconfidence, calibration, and resolution, each subject to the "hard-easy" effect, comparable to those obtained with the actual confidence ratings. However, in both the perceptual and knowledge domains, resolution (i.e., the ability to use the confidence categories to distinguish correct from incorrect decisions) was significantly better with confidence ratings than with RT categorization. Generally, comparable results were obtained with scaling of the objective RTs, although subjective categorization of RTs provided probability assessment indices superior to those obtained from objective RTs. Taken together, the findings do not support the view that confidence arises from a scaling of decision time.
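The probability-assessment indices mentioned here have standard definitions: over/underconfidence is mean confidence minus proportion correct, while calibration and resolution correspond to the reliability and resolution terms of the Murphy decomposition of the Brier score computed over confidence categories. The paper's exact formulas may differ; the sketch below uses made-up data purely to show the computation.

```python
import numpy as np

def assessment_indices(confidence, correct):
    """Over/underconfidence, calibration and resolution from categorical confidence.
    confidence: subjective probability correct per trial (e.g., 0.5 ... 1.0)
    correct:    1 if the decision was correct, else 0."""
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=float)
    overall = correct.mean()
    over_under = confidence.mean() - overall
    calibration = 0.0            # reliability term: smaller is better calibrated
    resolution = 0.0             # resolution term: larger means better discrimination
    for c in np.unique(confidence):
        idx = confidence == c
        p_correct = correct[idx].mean()
        w = idx.mean()
        calibration += w * (c - p_correct) ** 2
        resolution += w * (p_correct - overall) ** 2
    return over_under, calibration, resolution

conf = [0.5, 0.6, 0.6, 0.7, 0.8, 0.9, 1.0, 1.0]   # illustrative ratings
corr = [0,   1,   0,   1,   1,   1,   1,   1  ]
ou, cal, res = assessment_indices(conf, corr)
print(f"over/underconfidence={ou:+.3f}  calibration={cal:.3f}  resolution={res:.3f}")
```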
Zhang, Xiaobin; Li, Qiong; Eskine, Kendall J.; Zuo, Bin
2014-01-01
The current studies extend perceptual symbol systems theory to the processing of gender categorization by revealing that gender categorization recruits perceptual simulations of spatial height and size dimensions. In Study 1, categorization of male faces was faster when the faces were in the “up” (i.e., higher on the vertical axis) rather than the “down” (i.e., lower on the vertical axis) position and vice versa for female face categorization. Study 2 found that responses to male names depicted in larger font were faster than male names depicted in smaller font, whereas opposite response patterns were given for female names. Study 3 confirmed that the effect in Study 2 was not due to metaphoric relationships between gender and social power. Together, these findings suggest that representation of gender (social categorization) also involves processes of perceptual simulation. PMID:24587022
Cerebro-cerebellar interactions underlying temporal information processing.
Aso, Kenji; Hanakawa, Takashi; Aso, Toshihiko; Fukuyama, Hidenao
2010-12-01
The neural basis of temporal information processing remains unclear, but it is proposed that the cerebellum plays an important role through its internal clock or feed-forward computation functions. In this study, fMRI was used to investigate the brain networks engaged in perceptual and motor aspects of subsecond temporal processing without accompanying coprocessing of spatial information. Direct comparison between perceptual and motor aspects of time processing was made with a categorical-design analysis. The right lateral cerebellum (lobule VI) was active during a time discrimination task, whereas the left cerebellar lobule VI was activated during a timed movement generation task. These findings were consistent with the idea that the cerebellum contributed to subsecond time processing in both perceptual and motor aspects. The feed-forward computational theory of the cerebellum predicted increased cerebro-cerebellar interactions during time information processing. In fact, a psychophysiological interaction analysis identified the supplementary motor and dorsal premotor areas, which had a significant functional connectivity with the right cerebellar region during a time discrimination task and with the left lateral cerebellum during a timed movement generation task. The involvement of cerebro-cerebellar interactions may provide supportive evidence that temporal information processing relies on the simulation of timing information through feed-forward computation in the cerebellum.
The Timing of Visual Object Categorization
Mack, Michael L.; Palmeri, Thomas J.
2011-01-01
An object can be categorized at different levels of abstraction: as natural or man-made, animal or plant, bird or dog, or as a Northern Cardinal or Pyrrhuloxia. There has been growing interest in understanding how quickly categorizations at different levels are made and how the timing of those perceptual decisions changes with experience. We specifically contrast two perspectives on the timing of object categorization at different levels of abstraction. By one account, the relative timing implies a relative timing of stages of visual processing that are tied to particular levels of object categorization: Fast categorizations are fast because they precede other categorizations within the visual processing hierarchy. By another account, the relative timing reflects when perceptual features are available over time and the quality of perceptual evidence used to drive a perceptual decision process: Fast simply means fast, it does not mean first. Understanding the short-term and long-term temporal dynamics of object categorizations is key to developing computational models of visual object recognition. We briefly review a number of models of object categorization and outline how they explain the timing of visual object categorization at different levels of abstraction. PMID:21811480
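The second account, that fast categorizations simply reflect higher-quality perceptual evidence feeding a common decision process, is often illustrated with a sequential-sampling (drift-diffusion) model, in which raising the drift rate shortens decision times without positing an earlier processing stage. A minimal simulation follows; all parameter values are illustrative assumptions, not fits to any of the reviewed models.

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_rt(drift, threshold=1.0, noise=1.0, dt=0.001, max_t=5.0):
    """Simulate one decision time from a simple drift-diffusion process."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return t

# higher drift rate stands in for better perceptual evidence at that level of abstraction
rts_strong = [diffusion_rt(drift=2.0) for _ in range(500)]
rts_weak = [diffusion_rt(drift=1.0) for _ in range(500)]
print(f"mean RT, strong evidence: {np.mean(rts_strong):.3f} s")
print(f"mean RT, weaker evidence: {np.mean(rts_weak):.3f} s")
# same single decision process; the faster categorization is fast, not "first"
```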
ERIC Educational Resources Information Center
Sloutsky, Vladimir M.; Fisher, Anna V.
2012-01-01
Noles and Gelman (2012) attempt to critically reevaluate the claim that linguistic labels affect children's judgments of visual similarity. They report results of an experiment that used a modified version of Sloutsky and Fisher's (2004) task and conclude that "labels do not generally affect children's perceptual similarity judgments; rather,…
Auditory working memory predicts individual differences in absolute pitch learning.
Van Hedger, Stephen C; Heald, Shannon L M; Koch, Rachelle; Nusbaum, Howard C
2015-07-01
Absolute pitch (AP) is typically defined as the ability to label an isolated tone as a musical note in the absence of a reference tone. At first glance, the acquisition of AP note categories seems like a perceptual learning task, since individuals must assign a category label to a stimulus based on a single perceptual dimension (pitch) while ignoring other perceptual dimensions (e.g., loudness, octave, instrument). AP, however, is rarely discussed in terms of domain-general perceptual learning mechanisms. This is because AP is typically assumed to depend on a critical period of development, in which early exposure to pitches and musical labels is thought to be necessary for the development of AP, precluding the possibility of adult acquisition of AP. Despite this view of AP, several previous studies have found evidence that absolute pitch category learning is, to an extent, trainable in a post-critical period adult population, even if the performance typically achieved by this population is below the performance of a "true" AP possessor. The current studies attempt to understand the individual differences in learning to categorize notes using absolute pitch cues by testing a specific prediction regarding cognitive capacity related to categorization: to what extent does an individual's general auditory working memory capacity (WMC) predict the success of absolute pitch category acquisition? Since WMC has been shown to predict performance on a wide variety of other perceptual and category learning tasks, we predict that individuals with higher WMC should be better at learning absolute pitch note categories than individuals with lower WMC. Across two studies, we demonstrate that auditory WMC predicts the efficacy of learning absolute pitch note categories. These results suggest that a higher general auditory WMC might underlie the formation of absolute pitch categories for post-critical period adults. Implications for understanding the mechanisms that underlie the phenomenon of AP are also discussed. Copyright © 2015. Published by Elsevier B.V.
Sevinc, Gunes; Spreng, R Nathan
2014-01-01
Human morality has been investigated using a variety of tasks ranging from judgments of hypothetical dilemmas to viewing morally salient stimuli. These experiments have provided insight into neural correlates of moral judgments and emotions, yet these approaches reveal important differences in moral cognition. Moral reasoning tasks require active deliberation while moral emotion tasks involve the perception of stimuli with moral implications. We examined convergent and divergent brain activity associated with these experimental paradigms taking a quantitative meta-analytic approach. A systematic search of the literature yielded 40 studies. Studies involving explicit decisions in a moral situation were categorized as active (n = 22); studies evoking moral emotions were categorized as passive (n = 18). We conducted a coordinate-based meta-analysis using the Activation Likelihood Estimation to determine reliable patterns of brain activity. Results revealed a convergent pattern of reliable brain activity for both task categories in regions of the default network, consistent with the social and contextual information processes supported by this brain network. Active tasks revealed more reliable activity in the temporoparietal junction, angular gyrus and temporal pole. Active tasks demand deliberative reasoning and may disproportionately involve the retrieval of social knowledge from memory, mental state attribution, and construction of the context through associative processes. In contrast, passive tasks reliably engaged regions associated with visual and emotional information processing, including lingual gyrus and the amygdala. A laterality effect was observed in dorsomedial prefrontal cortex, with active tasks engaging the left, and passive tasks engaging the right. While overlapping activity patterns suggest a shared neural network for both tasks, differential activity suggests that processing of moral input is affected by task demands. The results provide novel insight into distinct features of moral cognition, including the generation of moral context through associative processes and the perceptual detection of moral salience.
Blair, Mark R; Watson, Marcus R; Walshe, R Calen; Maj, Fillip
2009-09-01
Humans have an extremely flexible ability to categorize regularities in their environment, in part because of attentional systems that allow them to focus on important perceptual information. In formal theories of categorization, attention is typically modeled with weights that selectively bias the processing of stimulus features. These theories make differing predictions about the degree of flexibility with which attention can be deployed in response to stimulus properties. Results from 2 eye-tracking studies show that humans can rapidly learn to differently allocate attention to members of different categories. These results provide the first unequivocal demonstration of stimulus-responsive attention in a categorization task. Furthermore, the authors found clear temporal patterns in the shifting of attention within trials that follow from the informativeness of particular stimulus features. These data provide new insights into the attention processes involved in categorization. (c) 2009 APA, all rights reserved.
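The attention weights referred to here are usually formalized as in exemplar models such as the Generalized Context Model, where each stimulus dimension's contribution to similarity is scaled by a weight, and the weights sum to one across dimensions. The sketch below is a generic GCM-style computation with invented stimuli and parameters, not the specific model fit in the study.

```python
import numpy as np

def gcm_category_prob(probe, exemplars, labels, w, c=2.0):
    """Generalized Context Model sketch: attention-weighted similarity to exemplars.
    w: attention weights over stimulus dimensions (non-negative, sum to 1)."""
    probe, exemplars, w = map(np.asarray, (probe, exemplars, w))
    dist = np.abs(exemplars - probe) @ w            # attention-weighted city-block distance
    sim = np.exp(-c * dist)                         # exponential similarity gradient
    labels = np.asarray(labels)
    return sim[labels == "A"].sum() / sim.sum()     # probability of responding "A"

exemplars = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.2], [0.8, 0.1]]
labels = ["A", "A", "B", "B"]
probe = [0.3, 0.6]
# shifting attention between the two dimensions changes the evidence for category A
print("P(A), attention on dim 1:", round(gcm_category_prob(probe, exemplars, labels, [0.9, 0.1]), 3))
print("P(A), attention on dim 2:", round(gcm_category_prob(probe, exemplars, labels, [0.1, 0.9]), 3))
```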
Atypical Categorization in Children with High Functioning Autism Spectrum Disorder
Church, Barbara A.; Krauss, Maria S.; Lopata, Christopher; Toomey, Jennifer A.; Thomeer, Marcus L.; Coutinho, Mariana V.; Volker, Martin A.; Mercado, Eduardo
2010-01-01
Children with autism spectrum disorder process many perceptual and social events differently from typically developing children, suggesting that they may also form and recognize categories differently. We used a dot pattern categorization task and prototype comparison modeling to compare categorical processing in children with high functioning autism spectrum disorder and matched typical controls. We were interested in whether there were differences in how children with autism use average similarity information about a category to make decisions. During testing, the group with autism spectrum disorder endorsed prototypes less and was seemingly less sensitive to differences between to-be-categorized items and the prototype. The findings suggest that individuals with high functioning autism spectrum disorder are less likely to use overall average similarity when forming categories or making categorical decisions. Such differences in category formation and use may negatively impact processing of socially relevant information, such as facial expressions. PMID:21169581
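Prototype-comparison modeling of dot-pattern endorsement typically assumes that the probability of endorsing an item as a category member falls off with its distance from the category prototype; reduced sensitivity to that distance then appears as a flatter endorsement gradient. The toy sketch below illustrates the shape of such a model with assumed sensitivity values, not the study's fitted parameters.

```python
import numpy as np

def endorsement_prob(distance_to_prototype, sensitivity):
    """Probability of endorsing an item as a category member, decreasing
    with its distance from the prototype."""
    return np.exp(-sensitivity * np.asarray(distance_to_prototype, dtype=float))

distances = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # prototype through far distortions
print("high sensitivity (steep gradient):  ", np.round(endorsement_prob(distances, 1.5), 2))
print("low sensitivity (flatter gradient): ", np.round(endorsement_prob(distances, 0.5), 2))
```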
Prototype learning and dissociable categorization systems in Alzheimer's disease.
Heindel, William C; Festa, Elena K; Ott, Brian R; Landy, Kelly M; Salmon, David P
2013-08-01
Recent neuroimaging studies suggest that prototype learning may be mediated by at least two dissociable memory systems depending on the mode of acquisition, with A/Not-A prototype learning dependent upon a perceptual representation system located within posterior visual cortex and A/B prototype learning dependent upon a declarative memory system associated with medial temporal and frontal regions. The degree to which patients with Alzheimer's disease (AD) can acquire new categorical information may therefore critically depend upon the mode of acquisition. The present study examined A/Not-A and A/B prototype learning in AD patients using procedures that allowed direct comparison of learning across tasks. Despite impaired explicit recall of category features in all tasks, patients showed differential patterns of category acquisition across tasks. First, AD patients demonstrated impaired prototype induction along with intact exemplar classification under incidental A/Not-A conditions, suggesting that the loss of functional connectivity within visual cortical areas disrupted the integration processes supporting prototype induction within the perceptual representation system. Second, AD patients demonstrated intact prototype induction but impaired exemplar classification during A/B learning under observational conditions, suggesting that this form of prototype learning is dependent on a declarative memory system that is disrupted in AD. Third, the surprisingly intact classification of both prototypes and exemplars during A/B learning under trial-and-error feedback conditions suggests that AD patients shifted control from their deficient declarative memory system to a feedback-dependent procedural memory system when training conditions allowed. Taken together, these findings serve to not only increase our understanding of category learning in AD, but to also provide new insights into the ways in which different memory systems interact to support the acquisition of categorical knowledge. Copyright © 2013 Elsevier Ltd. All rights reserved.
Forms of Memory for Representation of Visual Objects
1991-04-15
No abstract is indexed for this report; the retrieved text consists only of fragmentary reference-list entries on implicit memory and perceptual representation systems (e.g., Schacter et al.; studies of alcoholic Korsakoff patients in Brain and Cognition 7: 145-156 and Cortex 9: 165).
Active sensing in the categorization of visual patterns
Yang, Scott Cheng-Hsin; Lengyel, Máté; Wolpert, Daniel M
2016-01-01
Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations. DOI: http://dx.doi.org/10.7554/eLife.12215.001 PMID:26880546
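The notion of a Bayesian active sensor can be sketched as greedy expected-information-gain sampling: given the locations revealed so far, compute the posterior over category, then fixate the location whose expected outcome most reduces posterior entropy. The example below uses tiny binary patterns with assumed per-location category statistics; it illustrates the principle only and is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix = 8
# assumed probability of observing a "1" at each location under each category
p_pix = {"patchy": np.array([0.9, 0.9, 0.1, 0.1, 0.9, 0.9, 0.1, 0.1]),
         "stripy": np.array([0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1])}

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log2(p)).sum()

def posterior(revealed):
    """Posterior over the two categories given revealed {location: value} pairs."""
    logp = np.zeros(2)
    for k, cat in enumerate(p_pix):
        for i, v in revealed.items():
            q = p_pix[cat][i]
            logp[k] += np.log(q if v == 1 else 1 - q)
    p = np.exp(logp - logp.max())
    return p / p.sum()

def best_next_fixation(revealed):
    """Pick the unrevealed location with lowest expected posterior entropy."""
    post = posterior(revealed)
    scores = {}
    for i in set(range(n_pix)) - set(revealed):
        p1 = sum(post[k] * p_pix[cat][i] for k, cat in enumerate(p_pix))  # P(value = 1)
        h = 0.0
        for v, pv in ((1, p1), (0, 1 - p1)):
            h += pv * entropy(posterior({**revealed, i: v}))
        scores[i] = h
    return min(scores, key=scores.get)

true = (rng.random(n_pix) < p_pix["stripy"]).astype(int)   # hidden stimulus from "stripy"
revealed = {}
for _ in range(3):                                          # three active fixations
    i = best_next_fixation(revealed)
    revealed[i] = int(true[i])
    print(f"fixate location {i}, posterior =", np.round(posterior(revealed), 3))
```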
Hue distinctiveness overrides category in determining performance in multiple object tracking.
Sun, Mengdan; Zhang, Xuemin; Fan, Lingxia; Hu, Luming
2018-02-01
The visual distinctiveness between targets and distractors can significantly facilitate performance in multiple object tracking (MOT), in which color is a feature that has been commonly used. However, the processing of color can be more than "visual." Color is continuous in chromaticity, while it is commonly grouped into discrete categories (e.g., red, green). Evidence from color perception suggested that color categories may have a unique role in visual tasks independent of its chromatic appearance. Previous MOT studies have not examined the effect of chromatic and categorical distinctiveness on tracking separately. The current study aimed to reveal how chromatic (hue) and categorical distinctiveness of color between the targets and distractors affects tracking performance. With four experiments, we showed that tracking performance was largely facilitated by the increasing hue distance between the target set and the distractor set, suggesting that perceptual grouping was formed based on hue distinctiveness to aid tracking. However, we found no color categorical effect, because tracking performance was not significantly different when the targets and distractors were from the same or different categories. It was concluded that the chromatic distinctiveness of color overrides category in determining tracking performance, suggesting a dominant role of perceptual feature in MOT.
Monkeys show recognition without priming in a classification task
Basile, Benjamin M.; Hampton, Robert R.
2012-01-01
Humans show visual perceptual priming by identifying degraded images faster and more accurately if they have seen the original images, while simultaneously failing to recognize the same images. Such priming is commonly thought, with little evidence, to be widely distributed phylogenetically. Following Brodbeck (1997), we trained rhesus monkeys (Macaca mulatta) to categorize photographs according to content (e.g., birds, fish, flowers, people). In probe trials, we tested whether monkeys were faster or more accurate at categorizing degraded versions of previously seen images (primed) than degraded versions of novel images (unprimed). Monkeys categorized reliably, but showed no benefit from having previously seen the images. This finding was robust across manipulations of image quality (color, grayscale, line drawings), type of image degradation (occlusion, blurring), levels of processing, and number of repetitions of the prime. By contrast, in probe matching-to-sample trials, monkeys recognized the primes, demonstrating that they remembered the primes and could discriminate them from other images in the same category under the conditions used to test for priming. Two experiments that replicated Brodbeck’s (1997) procedures also produced no evidence of priming. This inability to find priming in monkeys under perceptual conditions sufficient for recognition presents a puzzle. PMID:22975587
Signal detection theory and methods for evaluating human performance in decision tasks
NASA Technical Reports Server (NTRS)
Obrien, Kevin; Feldman, Evan M.
1993-01-01
Signal Detection Theory (SDT) can be used to assess decision making performance in tasks that are not commonly thought of as perceptual. SDT takes into account both the sensitivity and biases in responding when explaining the detection of external events. In the standard SDT tasks, stimuli are selected in order to reveal the sensory capabilities of the observer. SDT can also be used to describe performance when decisions must be made as to the classification of easily and reliably sensed stimuli. Numbers are stimuli that are minimally affected by sensory processing and can belong to meaningful categories that overlap. Multiple studies have shown that the task of categorizing numbers from overlapping normal distributions produces performance predictable by SDT. These findings are particularly interesting in view of the similarity between the task of the categorizing numbers and that of determining the status of a mechanical system based on numerical values that represent sensor readings. Examples of the use of SDT to evaluate performance in decision tasks are reviewed. The methods and assumptions of SDT are shown to be effective in the measurement, evaluation, and prediction of human performance in such tasks.
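For reference, the two standard SDT indices, sensitivity (d') and response criterion (c), are computed from the hit rate H and false-alarm rate F as d' = z(H) - z(F) and c = -(z(H) + z(F))/2; for the number-categorization case, the predicted sensitivity for two overlapping normal distributions is simply the difference in means divided by their common standard deviation. The counts and distribution parameters below are illustrative assumptions.

```python
from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Classic equal-variance SDT sensitivity and criterion from response counts."""
    h = hits / (hits + misses)
    f = false_alarms / (false_alarms + correct_rejections)
    d_prime = norm.ppf(h) - norm.ppf(f)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))
    return d_prime, criterion

# e.g., an observer categorizing numbers drawn from two overlapping distributions
d, c = sdt_indices(hits=80, misses=20, false_alarms=30, correct_rejections=70)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")

# predicted sensitivity if the two number distributions have means 45 and 55, sd 10
print("predicted d' =", (55 - 45) / 10)
```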
Mercado, Eduardo; Church, Barbara A.; Coutinho, Mariana V. C.; Dovgopoly, Alexander; Lopata, Christopher J.; Toomey, Jennifer A.; Thomeer, Marcus L.
2015-01-01
Previous research suggests that high functioning (HF) children with autism spectrum disorder (ASD) sometimes have problems learning categories, but often appear to perform normally in categorization tasks. The deficits that individuals with ASD show when learning categories have been attributed to executive dysfunction, general deficits in implicit learning, atypical cognitive strategies, or abnormal perceptual biases and abilities. Several of these psychological explanations for category learning deficits have been associated with neural abnormalities such as cortical underconnectivity. The present study evaluated how well existing neurally based theories account for atypical perceptual category learning shown by HF children with ASD across multiple category learning tasks involving novel, abstract shapes. Consistent with earlier results, children’s performances revealed two distinct patterns of learning and generalization associated with ASD: one was indistinguishable from performance in typically developing children; the other revealed dramatic impairments. These two patterns were evident regardless of training regimen or stimulus set. Surprisingly, some children with ASD showed both patterns. Simulations of perceptual category learning could account for the two observed patterns in terms of differences in neural plasticity. However, no current psychological or neural theory adequately explains why a child with ASD might show such large fluctuations in category learning ability across training conditions or stimulus sets. PMID:26157368
Iconic memory requires attention
Persuh, Marjan; Genzer, Boris; Melara, Robert D.
2012-01-01
Two experiments investigated whether attention plays a role in iconic memory, employing either a change detection paradigm (Experiment 1) or a partial-report paradigm (Experiment 2). In each experiment, attention was taxed during initial display presentation, focusing the manipulation on consolidation of information into iconic memory, prior to transfer into working memory. Observers were able to maintain high levels of performance (accuracy of change detection or categorization) even when concurrently performing an easy visual search task (low load). However, when the concurrent search was made difficult (high load), observers' performance dropped to almost chance levels, while search accuracy held at single-task levels. The effects of attentional load remained the same across paradigms. The results suggest that, without attention, participants consolidate in iconic memory only gross representations of the visual scene, information too impoverished for successful detection of perceptual change or categorization of features. PMID:22586389
Explaining prompts children to privilege inductively rich properties.
Walker, Caren M; Lombrozo, Tania; Legare, Cristine H; Gopnik, Alison
2014-11-01
Four experiments with preschool-aged children test the hypothesis that engaging in explanation promotes inductive reasoning on the basis of shared causal properties as opposed to salient (but superficial) perceptual properties. In Experiments 1a and 1b, 3- to 5-year-old children prompted to explain during a causal learning task were more likely to override a tendency to generalize according to perceptual similarity and instead extend an internal feature to an object that shared a causal property. Experiment 2 replicated this effect of explanation in a case of label extension (i.e., categorization). Experiment 3 demonstrated that explanation improves memory for clusters of causally relevant (non-perceptual) features, but impairs memory for superficial (perceptual) features, providing evidence that effects of explanation are selective in scope and apply to memory as well as inference. In sum, our data support the proposal that engaging in explanation influences children's reasoning by privileging inductively rich, causal properties. Copyright © 2014 Elsevier B.V. All rights reserved.
Implicit and explicit categorization of natural scenes.
Codispoti, Maurizio; Ferrari, Vera; De Cesarei, Andrea; Cardinale, Rossella
2006-01-01
Event-related potential (ERP) studies have consistently found that emotionally arousing (pleasant and unpleasant) pictures elicit a larger late positive potential (LPP) than neutral pictures in a window from 400 to 800 ms after picture onset. In addition, an early ERP component has been reported to vary with emotional arousal in a window from about 150 to 300 ms, with affective, compared to neutral, stimuli prompting significantly less positivity over occipito-temporal sites. Similar early and late ERP components have been found in explicit categorization tasks, suggesting that selective attention to target features results in similar cortical changes. Several studies have shown that the affective modulation of the LPP persists even when the same pictures are repeated several times, when they are presented as distractors, or when participants are engaged in a competing task. These results indicate that categorization of affective stimuli is an obligatory process. On the other hand, perceptual factors (e.g., stimulus size) seem to affect the early ERP component but not the affective modulation of the LPP. Although early and late ERP components vary with stimulus relevance, given that they are differentially affected by stimulus and task manipulations, they appear to index different facets of picture processing.
Use of evidence in a categorization task: analytic and holistic processing modes.
Greco, Alberto; Moretti, Stefania
2017-11-01
Category learning performance can be influenced by many contextual factors, but the effects of these factors are not the same for all learners. The present study suggests that these differences can be due to the different ways evidence is used, according to two main basic modalities of processing information, analytically or holistically. In order to test the impact of the information provided, an inductive rule-based task was designed, in which feature salience and comparison informativeness between examples of two categories were manipulated during the learning phases, by introducing and progressively reducing some perceptual biases. To gather data on processing modalities, we devised the Active Feature Composition task, a production task that does not require classifying new items but reproducing them by combining features. At the end, an explicit rating task was performed, which entailed assessing the accuracy of a set of possible categorization rules. A combined analysis of the data collected with these two different tests enabled profiling participants in regard to the kind of processing modality, the structure of representations, and the quality of categorical judgments. Results showed that although the information provided was the same for all participants, those who adopted analytic processing better exploited the evidence and performed more accurately, whereas with holistic processing categorization was still possible but less accurate. Finally, the cognitive implications of the proposed procedure, with regard to the processes and representations involved, are discussed.
Multiple systems of category learning.
Smith, Edward E; Grossman, Murray
2008-01-01
We review neuropsychological and neuroimaging evidence for the existence of three qualitatively different categorization systems. These categorization systems are themselves based on three distinct memory systems: working memory (WM), explicit long-term memory (explicit LTM), and implicit long-term memory (implicit LTM). We first contrast categorization based on WM with that based on explicit LTM, where the former typically involves applying rules to a test item and the latter involves determining the similarity between stored exemplars or prototypes and a test item. Neuroimaging studies show differences between brain activity in normal participants as a function of whether they are instructed to categorize novel test items by rule or by similarity to known category members. Rule instructions typically lead to more activation in frontal or parietal areas, associated with WM and selective attention, whereas similarity instructions may activate parietal areas associated with the integration of perceptual features. Studies with neurological patients in the same paradigms provide converging evidence, e.g., patients with Alzheimer's disease, who have damage in prefrontal regions, are more impaired with rule than similarity instructions. Our second contrast is between categorization based on explicit LTM with that based on implicit LTM. Neuropsychological studies with patients with medial-temporal lobe damage show that patients are impaired on tasks requiring explicit LTM, but perform relatively normally on an implicit categorization task. Neuroimaging studies provide converging evidence: whereas explicit categorization is mediated by activation in numerous frontal and parietal areas, implicit categorization is mediated by a deactivation in posterior cortex.
Handwriting generates variable visual output to facilitate symbol learning.
Li, Julia X; James, Karin H
2016-03-01
Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing 2 hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then changes neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: 3 involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and 3 involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception both of variable and similar forms. Comparisons across the 6 conditions (N = 72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change through brain-body-environment interactions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Expertise with unfamiliar objects is flexible to changes in task but not changes in class
Tangen, Jason M.
2017-01-01
Perceptual expertise is notoriously specific and bound by familiarity; generalizing to novel or unfamiliar images, objects, identities, and categories often comes at some cost to performance. In forensic and security settings, however, examiners are faced with the task of discriminating unfamiliar images of unfamiliar objects within their general domain of expertise (e.g., fingerprints, faces, or firearms). The job of a fingerprint expert, for instance, is to decide whether two unfamiliar fingerprint images were left by the same unfamiliar finger (e.g., Smith’s left thumb), or two different unfamiliar fingers (e.g., Smith and Jones’s left thumb). Little is known about the limits of this kind of perceptual expertise. Here, we examine fingerprint experts’ and novices’ ability to distinguish fingerprints compared to inverted faces in two different tasks. Inverted face images serve as an ideal comparison because they vary naturally between and within identities, as do fingerprints, and people tend to be less accurate or more novice-like at distinguishing faces when they are presented in an inverted or unfamiliar orientation. In Experiment 1, fingerprint experts outperformed novices in locating categorical fingerprint outliers (i.e., a loop pattern in an array of whorls), but not inverted face outliers (i.e., an inverted male face in an array of inverted female faces). In Experiment 2, fingerprint experts were more accurate than novices at discriminating matching and mismatching fingerprints that were presented very briefly, but not so for inverted faces. Our data show that perceptual expertise with fingerprints can be flexible to changing task demands, but there can also be abrupt limits: fingerprint expertise did not generalize to an unfamiliar class of stimuli. We interpret these findings as evidence that perceptual expertise with unfamiliar objects is highly constrained by one’s experience. PMID:28574998
Dimension-Based Statistical Learning Affects Both Speech Perception and Production.
Lehet, Matthew; Holt, Lori L
2017-04-01
Multiple acoustic dimensions signal speech categories. However, dimensions vary in their informativeness; some are more diagnostic of category membership than others. Speech categorization reflects these dimensional regularities such that diagnostic dimensions carry more "perceptual weight" and more effectively signal category membership to native listeners. Yet perceptual weights are malleable. When short-term experience deviates from long-term language norms, such as in a foreign accent, the perceptual weight of acoustic dimensions in signaling speech category membership rapidly adjusts. The present study investigated whether rapid adjustments in listeners' perceptual weights in response to speech that deviates from the norms also affects listeners' own speech productions. In a word recognition task, the correlation between two acoustic dimensions signaling consonant categories, fundamental frequency (F0) and voice onset time (VOT), matched the correlation typical of English, and then shifted to an "artificial accent" that reversed the relationship, and then shifted back. Brief, incidental exposure to the artificial accent caused participants to down-weight perceptual reliance on F0, consistent with previous research. Throughout the task, participants were intermittently prompted with pictures to produce these same words. In the block in which listeners heard the artificial accent with a reversed F0 × VOT correlation, F0 was a less robust cue to voicing in listeners' own speech productions. The statistical regularities of short-term speech input affect both speech perception and production, as evidenced via shifts in how acoustic dimensions are weighted. Copyright © 2016 Cognitive Science Society, Inc.
Bernstein, Michael J; Young, Steven G; Hugenberg, Kurt
2007-08-01
Although the cross-race effect (CRE) is a well-established phenomenon, both perceptual-expertise and social-categorization models have been proposed to explain the effect. The two studies reported here investigated the extent to which categorizing other people as in-group versus out-group members is sufficient to elicit a pattern of face recognition analogous to that of the CRE, even when perceptual expertise with the stimuli is held constant. In Study 1, targets were categorized as members of real-life in-groups and out-groups (based on university affiliation), whereas in Study 2, targets were categorized into experimentally created minimal groups. In both studies, recognition performance was better for targets categorized as in-group members, despite the fact that perceptual expertise was equivalent for in-group and out-group faces. These results suggest that social-cognitive mechanisms of in-group and out-group categorization are sufficient to elicit performance differences for in-group and out-group face recognition.
Jiang, Xiong; Chevillet, Mark A; Rauschecker, Josef P; Riesenhuber, Maximilian
2018-04-18
Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain. Copyright © 2018 Elsevier Inc. All rights reserved.
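The abstract above relies on multi-voxel pattern analysis (MVPA), i.e., testing whether a classifier can decode stimulus category from distributed activation patterns. The sketch below is a generic illustration of that logic on simulated voxel data; it is not the pipeline or data used in the study, and all names and numbers are illustrative.

```python
# Schematic multi-voxel pattern analysis (MVPA): can a classifier decode
# stimulus category from (here, simulated) voxel activation patterns?
# Synthetic data only; this is not the analysis pipeline used in the study above.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)                    # two call categories
signal = np.outer(labels, rng.normal(size=n_voxels)) * 0.4   # category-specific pattern
patterns = signal + rng.normal(size=(n_trials, n_voxels))    # add trial-by-trial noise

acc = cross_val_score(LinearSVC(dual=False), patterns, labels, cv=5)
print("mean decoding accuracy:", acc.mean())  # above 0.5 suggests category information
```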
Winn, Matthew B.; Won, Jong Ho; Moon, Il Joon
2016-01-01
Objectives: This study was conducted to measure auditory perception by cochlear implant users in the spectral and temporal domains, using tests of either categorization (using speech-based cues) or discrimination (using conventional psychoacoustic tests). We hypothesized that traditional nonlinguistic tests assessing spectral and temporal auditory resolution would correspond to speech-based measures assessing specific aspects of phonetic categorization assumed to depend on spectral and temporal auditory resolution. We further hypothesized that speech-based categorization performance would ultimately be a superior predictor of speech recognition performance, because of the fundamental nature of speech recognition as categorization. Design: Nineteen CI listeners and 10 listeners with normal hearing (NH) participated in a suite of tasks that included spectral ripple discrimination (SRD), temporal modulation detection (TMD), and syllable categorization, which was split into a spectral-cue-based task (targeting the /ba/-/da/ contrast) and a timing-cue-based task (targeting the /b/-/p/ and /d/-/t/ contrasts). Speech sounds were manipulated in order to contain specific spectral or temporal modulations (formant transitions or voice onset time, respectively) that could be categorized. Categorization responses were quantified using logistic regression in order to assess perceptual sensitivity to acoustic phonetic cues. Word recognition testing was also conducted for CI listeners. Results: CI users were generally less successful at utilizing both spectral and temporal cues for categorization compared to listeners with normal hearing. For the CI listener group, SRD was significantly correlated with the categorization of formant transitions; both were correlated with better word recognition. TMD using 100 Hz and 10 Hz modulated noise was not correlated with the CI subjects' categorization of VOT, nor with word recognition. Word recognition was correlated more closely with categorization of the controlled speech cues than with performance on the psychophysical discrimination tasks. Conclusions: When evaluating people with cochlear implants, controlled speech-based stimuli are feasible to use in tests of auditory cue categorization, to complement traditional measures of auditory discrimination. Stimuli based on specific speech cues correspond to counterpart non-linguistic measures of discrimination, but potentially show better correspondence with speech perception more generally. The ubiquity of the spectral (formant transition) and temporal (VOT) stimulus dimensions across languages highlights the potential to use this testing approach even in cases where English is not the native language. PMID:27438871
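The design above quantifies categorization responses with logistic regression, where the slope of the fitted function indexes sensitivity to the acoustic cue and the 50% point gives the category boundary. The sketch below illustrates that kind of analysis on simulated responses along a hypothetical voice-onset-time (VOT) continuum; the continuum steps, slope, and boundary values are invented for illustration.

```python
# Sketch: quantify sensitivity to a voice-onset-time (VOT) cue by fitting a
# logistic regression to binary /b/ vs /p/ responses. Illustrative data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

vot_ms = np.tile(np.linspace(0, 60, 7), 20).reshape(-1, 1)   # 7-step continuum, 20 reps
true_slope, boundary = 0.15, 30.0                             # hypothetical listener
p_voiceless = 1 / (1 + np.exp(-true_slope * (vot_ms.ravel() - boundary)))
responses = (np.random.default_rng(1).random(p_voiceless.size) < p_voiceless).astype(int)

model = LogisticRegression().fit(vot_ms, responses)
print("slope (cue sensitivity):", model.coef_[0][0])
print("category boundary (ms):", -model.intercept_[0] / model.coef_[0][0])
```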
Weight and See: Loading Working Memory Improves Incidental Identification of Irrelevant Faces
Carmel, David; Fairnie, Jake; Lavie, Nilli
2012-01-01
Are task-irrelevant stimuli processed to a level enabling individual identification? This question is central both for perceptual processing models and for applied settings (e.g., eye-witness testimony). Lavie’s load theory proposes that working memory actively maintains attentional prioritization of relevant over irrelevant information. Loading working memory thus impairs attentional prioritization, leading to increased processing of task-irrelevant stimuli. Previous research has shown that increased working memory load leads to greater interference effects from response-competing distractors. Here we test the novel prediction that increased processing of irrelevant stimuli under high working memory load should lead to a greater likelihood of incidental identification of entirely irrelevant stimuli. To test this, we asked participants to perform a word-categorization task while ignoring task-irrelevant images. The categorization task was performed during the retention interval of a working memory task with either low or high load (defined by memory set size). Following the final experimental trial, a surprise question assessed incidental identification of the irrelevant image. Loading working memory was found to improve identification of task-irrelevant faces, but not of building stimuli (shown in a separate experiment to be less distracting). These findings suggest that working memory plays a critical role in determining whether distracting stimuli will be subsequently identified. PMID:22912623
Monkeys show recognition without priming in a classification task.
Basile, Benjamin M; Hampton, Robert R
2013-02-01
Humans show visual perceptual priming by identifying degraded images faster and more accurately if they have seen the original images, while simultaneously failing to recognize the same images. Such priming is commonly thought, with little evidence, to be widely distributed phylogenetically. Following Brodbeck (1997), we trained rhesus monkeys (Macaca mulatta) to categorize photographs according to content (e.g., birds, fish, flowers, people). In probe trials, we tested whether monkeys were faster or more accurate at categorizing degraded versions of previously seen images (primed) than degraded versions of novel images (unprimed). Monkeys categorized reliably, but showed no benefit from having previously seen the images. This finding was robust across manipulations of image quality (color, grayscale, line drawings), type of image degradation (occlusion, blurring), levels of processing, and number of repetitions of the prime. By contrast, in probe matching-to-sample trials, monkeys recognized the primes, demonstrating that they remembered the primes and could discriminate them from other images in the same category under the conditions used to test for priming. Two experiments that replicated Brodbeck's (1997) procedures also produced no evidence of priming. This inability to find priming in monkeys under perceptual conditions sufficient for recognition presents a puzzle. Copyright © 2012 Elsevier B.V. All rights reserved.
Effects of category-specific costs on neural systems for perceptual decision-making.
Fleming, Stephen M; Whiteley, Louise; Hulme, Oliver J; Sahani, Maneesh; Dolan, Raymond J
2010-06-01
Perceptual judgments are often biased by prospective losses, leading to changes in decision criteria. Little is known about how and where sensory evidence and cost information interact in the brain to influence perceptual categorization. Here we show that prospective losses systematically bias the perception of noisy face-house images. Asymmetries in category-specific cost were associated with enhanced blood-oxygen-level-dependent signal in a frontoparietal network. We observed selective activation of parahippocampal gyrus for changes in category-specific cost in keeping with the hypothesis that loss functions enact a particular task set that is communicated to visual regions. Across subjects, greater shifts in decision criteria were associated with greater activation of the anterior cingulate cortex (ACC). Our results support a hypothesis that costs bias an intermediate representation between perception and action, expressed via general effects on frontal cortex, and selective effects on extrastriate cortex. These findings indicate that asymmetric costs may affect a neural implementation of perceptual decision making in a similar manner to changes in category expectation, constituting a step toward accounting for how prospective losses are flexibly integrated with sensory evidence in the brain.
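The criterion shifts described above can be related to the textbook signal-detection prescription for an optimal observer: asymmetric payoffs move the likelihood-ratio criterion beta away from 1, which moves the criterion location c away from the neutral point. The sketch below computes that shift for a hypothetical face-versus-house judgment; it is a standard SDT illustration under assumed costs, not the model fitted in the study.

```python
# Textbook signal detection illustration (not the authors' model): how asymmetric
# payoffs shift the optimal decision criterion for a face-vs-house judgment.
import math

def optimal_criterion(d_prime, p_house, cost_miss_face, cost_fa_face,
                      value_hit_face=0.0, value_cr_house=0.0):
    """Optimal likelihood-ratio criterion beta and criterion location c."""
    beta = (p_house / (1 - p_house)) * \
           ((value_cr_house + cost_fa_face) / (value_hit_face + cost_miss_face))
    c = math.log(beta) / d_prime   # distance of the criterion from the neutral point
    return beta, c

# Equal costs -> neutral criterion; costly false "face" alarms -> conservative shift.
print(optimal_criterion(d_prime=1.5, p_house=0.5, cost_miss_face=1.0, cost_fa_face=1.0))
print(optimal_criterion(d_prime=1.5, p_house=0.5, cost_miss_face=1.0, cost_fa_face=3.0))
```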
Basic-Level and Superordinate-like Categorical Representations in Early Infancy.
ERIC Educational Resources Information Center
Behl-Chadha, Gundeep
1996-01-01
Examined three- to four-month-old infants' ability to form perceptually based categorical representation in the domains of natural kinds and artifacts. By showing the availability of perceptually driven basic and superordinate-like representations in early infancy that closely correspond to adult conceptual categories, findings underscored the…
Cheetham, Marcus; Suter, Pascal; Jancke, Lutz
2014-01-01
The Uncanny Valley Hypothesis (UVH) predicts that greater difficulty perceptually discriminating between categorically ambiguous human and humanlike characters (e.g., a highly realistic robot) evokes negatively valenced (i.e., uncanny) affect. An ABX perceptual discrimination task and signal detection analysis were used to examine the profile of perceptual discrimination (PD) difficulty along the UVH's dimension of human likeness (DHL). This was represented using avatar-to-human morph continua. Rejecting the implicitly assumed profile of PD difficulty underlying the UVH's prediction, Experiment 1 showed that PD difficulty was reduced for categorically ambiguous faces but, notably, enhanced for human faces. Rejecting the UVH's predicted relationship between PD difficulty and negative affect (assessed in terms of the UVH's familiarity dimension), Experiment 2 demonstrated that greater PD difficulty correlates with more positively valenced affect. Critically, this effect was strongest for the ambiguous faces, suggesting a correlative relationship between PD difficulty and feelings of familiarity more consistent with the metaphor of a happy valley. This relationship is also consistent with a fluency amplification account rather than the hitherto proposed hedonic fluency account of affect along the DHL. Experiment 3 found no evidence that the asymmetry in the profile of PD along the DHL is attributable to a differential processing bias (cf. other-race effect), i.e., processing avatars at a category level but human faces at an individual level. In conclusion, the present data for static faces show clear effects that, however, strongly challenge the UVH's implicitly assumed profile of PD difficulty along the DHL and the predicted relationship between this and feelings of familiarity. PMID:25477829
Kim, Jejoong; Park, Sohee; Blake, Randolph
2011-01-01
Background: Anomalous visual perception is a common feature of schizophrenia plausibly associated with impaired social cognition that, in turn, could affect social behavior. Past research suggests impairment in biological motion perception in schizophrenia. Behavioral and functional magnetic resonance imaging (fMRI) experiments were conducted to verify the existence of this impairment, to clarify its perceptual basis, and to identify accompanying neural concomitants of those deficits. Methodology/Findings: In Experiment 1, we measured ability to detect biological motion portrayed by point-light animations embedded within masking noise. Experiment 2 measured discrimination accuracy for pairs of point-light biological motion sequences differing in the degree of perturbation of the kinematics portrayed in those sequences. Experiment 3 measured BOLD signals using event-related fMRI during a biological motion categorization task. Compared to healthy individuals, schizophrenia patients performed significantly worse on both the detection (Experiment 1) and discrimination (Experiment 2) tasks. Consistent with the behavioral results, the fMRI study revealed that healthy individuals exhibited strong activation to biological motion, but not to scrambled motion, in the posterior portion of the superior temporal sulcus (STSp). Interestingly, strong STSp activation was also observed for scrambled or partially scrambled motion when the healthy participants perceived it as normal biological motion. On the other hand, STSp activation in schizophrenia patients was not selective to biological or scrambled motion. Conclusion: Schizophrenia is accompanied by difficulties discriminating biological from non-biological motion, and associated with those difficulties are altered patterns of neural responses within brain area STSp. The perceptual deficits exhibited by schizophrenia patients may be an exaggerated manifestation of neural events within STSp associated with perceptual errors made by healthy observers on these same tasks. The present findings fit within the context of theories of delusion involving perceptual and cognitive processes. PMID:21625492
Newborn infants' sensitivity to perceptual cues to lexical and grammatical words.
Shi, R; Werker, J F; Morgan, J L
1999-09-30
In our study newborn infants were presented with lists of lexical and grammatical words prepared from natural maternal speech. The results show that newborns are able to categorically discriminate these sets of words based on a constellation of perceptual cues that distinguish them. This general ability to detect and categorically discriminate sets of words on the basis of multiple acoustic and phonological cues may provide a perceptual base that can help older infants bootstrap into the acquisition of grammatical categories and syntactic structure.
Mercado, Eduardo; Church, Barbara A
2016-08-01
Children with autism spectrum disorder (ASD) sometimes have difficulties learning categories. Past computational work suggests that such deficits may result from atypical representations in cortical maps. Here we use neural networks to show that idiosyncratic transformations of inputs can result in the formation of feature maps that impair category learning for some inputs, but not for other closely related inputs. These simulations suggest that large inter- and intra-individual variations in learning capacities shown by children with ASD across similar categorization tasks may similarly result from idiosyncratic perceptual encoding that is resistant to experience-dependent changes. If so, then both feedback- and exposure-based category learning should lead to heterogeneous, stimulus-dependent deficits in children with ASD.
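To make the feature-map idea above concrete, the toy sketch below trains a one-dimensional self-organizing map on two input categories, once on the raw inputs and once after a hypothetical idiosyncratic transformation that compresses the category-diagnostic dimension. It is only a schematic of the mechanism described, not the network, parameters, or stimuli used in the study.

```python
# Toy sketch of a feature-map account (not the authors' model): a 1-D
# self-organizing map is trained on two categories of 2-D inputs, either raw
# or after an idiosyncratic transform that squashes the diagnostic dimension.
import numpy as np

def train_som(data, n_units=20, epochs=30, lr=0.3, sigma=2.0, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(n_units, data.shape[1]))
    idx = np.arange(n_units)
    for _ in range(epochs):
        for x in rng.permutation(data):
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
            h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))  # neighborhood kernel
            weights += lr * h[:, None] * (x - weights)
    return weights

def category_separation(weights, data, labels):
    """Label each unit by the majority category it wins, then score how well
    that unit-level labeling reproduces the true category of each input."""
    bmus = np.array([np.argmin(((weights - x) ** 2).sum(axis=1)) for x in data])
    unit_label = {u: round(labels[bmus == u].mean()) for u in np.unique(bmus)}
    return np.mean([unit_label[b] == l for b, l in zip(bmus, labels)])

rng = np.random.default_rng(1)
cat_a = rng.normal([-1.0, 0.0], 0.3, size=(100, 2))   # dimension 0 is diagnostic
cat_b = rng.normal([+1.0, 0.0], 0.3, size=(100, 2))
data = np.vstack([cat_a, cat_b])
labels = np.repeat([0, 1], 100)

warped = data * np.array([0.05, 1.0])   # idiosyncratic encoding: diagnostic axis squashed
print("typical encoding:", category_separation(train_som(data), data, labels))
print("warped encoding: ", category_separation(train_som(warped), warped, labels))
```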
Perceptual Categorization of Cat and Dog Silhouettes by 3- to 4-Month-Old Infants.
ERIC Educational Resources Information Center
Quinn, Paul C.; Eimas, Peter D.; Tarr, Michael
2001-01-01
Four experiments utilizing the familiarization-novelty preference procedure examined whether 3- and 4-month-olds could form categorical representations for cats versus dogs from the perceptual information available in silhouettes. Findings indicated that general shape or external contour information centered about the head was sufficient for…
Kéri, Szabolcs; Kiss, Imre; Kelemen, Oguz; Benedek, György; Janka, Zoltán
2005-10-01
Schizophrenia is associated with impaired visual information processing. The aim of this study was to investigate the relationship between anomalous perceptual experiences, positive and negative symptoms, perceptual organization, rapid categorization of natural images, and magnocellular (M) and parvocellular (P) visual pathway functioning. Thirty-five unmedicated patients with schizophrenia and 20 matched healthy control volunteers participated. Anomalous perceptual experiences were assessed with the Bonn Scale for the Assessment of Basic Symptoms (BSABS). General intellectual functions were evaluated with the revised version of the Wechsler Adult Intelligence Scale. The 1-9 version of the Continuous Performance Test (CPT) was used to investigate sustained attention. The following psychophysical tests were used: detection of Gabor patches with collinear and orthogonal flankers (perceptual organization), categorization of briefly presented natural scenes (rapid visual processing), low-contrast and frequency-doubling vernier threshold (M pathway functioning), and isoluminant colour vernier threshold and high spatial frequency discrimination (P pathway functioning). The patients with schizophrenia were impaired on tests of perceptual organization, rapid visual processing, and M pathway functioning. There was a significant correlation between BSABS scores, negative symptoms, perceptual organization, rapid visual processing, and M pathway functioning. Positive symptoms, IQ, CPT, and P pathway measures did not correlate with these parameters. The best predictor of the BSABS score was the perceptual organization deficit. These results raise the possibility that multiple facets of visual information processing deficits can be explained by M pathway dysfunctions in schizophrenia, resulting in impaired attentional modulation of perceptual organization and of natural image categorization.
Little, Daniel R; Wang, Tony; Nosofsky, Robert M
2016-09-01
Among the most fundamental results in the area of perceptual classification are the "correlated facilitation" and "filtering interference" effects observed in Garner's (1974) speeded categorization tasks: In the case of integral-dimension stimuli, relative to a control task, single-dimension classification is faster when there is correlated variation along a second dimension, but slower when there is orthogonal variation that cannot be filtered out (e.g., by attention). These fundamental effects may result from participants' use of a trial-by-trial bypass strategy in the control and correlated tasks: The observer changes the previous category response whenever the stimulus changes, and maintains responses if the stimulus repeats. Here we conduct modified versions of the Garner tasks that eliminate the availability of a pure bypass strategy. The fundamental facilitation and interference effects remain, but are still largely explainable in terms of pronounced sequential effects in all tasks. We develop sequence-sensitive versions of exemplar-retrieval and decision-bound models aimed at capturing the detailed, trial-by-trial response-time distribution data. The models combine assumptions involving: (i) strengthened perceptual/memory representations of stimuli that repeat across consecutive trials, and (ii) a bias to change category responses on trials in which the stimulus changes. These models can predict our observed effects and provide a more complete account of the underlying bases of performance in our modified Garner tasks. Copyright © 2016 Elsevier Inc. All rights reserved.
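A minimal sketch of the exemplar-retrieval idea referenced above is given below: category evidence is the summed similarity of the probe to stored exemplars, and sequence sensitivity is approximated by giving the just-presented exemplar extra weight. It is a schematic in the spirit of the models described, not the fitted response-time models themselves; the exemplar coordinates, sensitivity, and boost values are invented.

```python
# Minimal exemplar-retrieval sketch (in the spirit of, not identical to, the
# models described above): category choice probability from summed similarity,
# with the previous trial's stimulus given extra weight ("strengthening").
import numpy as np

def choice_prob(probe, exemplars, labels, prev_stimulus=None,
                sensitivity=3.0, repeat_boost=2.0):
    """P(category A) for a 2-D probe under a summed-similarity (GCM-like) rule."""
    d = np.abs(exemplars - probe).sum(axis=1)          # city-block distance
    s = np.exp(-sensitivity * d)                       # exponential similarity
    if prev_stimulus is not None:                      # sequence sensitivity:
        repeated = np.all(exemplars == prev_stimulus, axis=1)
        s = np.where(repeated, repeat_boost * s, s)    # boost the just-seen exemplar
    sim_a, sim_b = s[labels == 0].sum(), s[labels == 1].sum()
    return sim_a / (sim_a + sim_b)

exemplars = np.array([[0.2, 0.2], [0.3, 0.1], [0.8, 0.9], [0.7, 0.8]], float)
labels = np.array([0, 0, 1, 1])
probe = np.array([0.5, 0.5])
print(choice_prob(probe, exemplars, labels))                              # baseline
print(choice_prob(probe, exemplars, labels, prev_stimulus=exemplars[0]))  # after a repeat
```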
Harel, Assaf; Bentin, Shlomo
2013-01-01
A much-debated question in object recognition is whether expertise for faces and expertise for non-face objects utilize common perceptual information. We investigated this issue by assessing the diagnostic information required for different types of expertise. Specifically, we asked whether face categorization and expert car categorization at the subordinate level rely on the same spatial frequency (SF) scales. Fifteen car experts and fifteen novices performed a category verification task with spatially filtered images of faces, cars, and airplanes. Images were categorized based on their basic (e.g. “car”) and subordinate level (e.g. “Japanese car”) identity. The effect of expertise was not evident when objects were categorized at the basic level. However, when the car experts categorized faces and cars at the subordinate level, the two types of expertise required different kinds of SF information. Subordinate categorization of faces relied on low SFs more than on high SFs, whereas subordinate expert car categorization relied on high SFs more than on low SFs. These findings suggest that expertise in object recognition and expertise in face recognition do not utilize the same type of information. Rather, different types of expertise require different types of diagnostic visual information. PMID:23826188
Children's use of comparison and function in novel object categorization.
Kimura, Katherine; Hunley, Samuel B; Namy, Laura L
2018-06-01
Although young children often rely on salient perceptual cues, such as shape, when categorizing novel objects, children eventually shift towards deeper relational reasoning about category membership. This study investigates what information young children use to classify novel instances of familiar categories. Specifically, we investigated two sources of information that have the potential to facilitate the classification of novel exemplars: (1) comparison of familiar category instances, and (2) attention to function information that might direct children's attention to functionally relevant perceptual features. Across two experiments, we found that comparing two perceptually similar category members-particularly when function information was also highlighted-led children to discover non-obvious relational features that supported their categorization of novel category instances. Together, these findings demonstrate that comparison may aid in novel object categorization by heightening the salience of less obvious, yet functionally relevant, relational structures that support conceptual reasoning. Copyright © 2018. Published by Elsevier Inc.
ERIC Educational Resources Information Center
Quinn, Paul C.; Schyns, Philippe G.; Goldstone, Robert L.
2006-01-01
The relation between perceptual organization and categorization processes in 3- and 4-month-olds was explored. The question was whether an invariant part abstracted during category learning could interfere with Gestalt organizational processes. A 2003 study by Quinn and Schyns had reported that an initial category familiarization experience in…
Differential effect of visual masking in perceptual categorization.
Hélie, Sébastien; Cousineau, Denis
2015-06-01
This article explores the visual information used to categorize stimuli drawn from a common stimulus space into verbal and nonverbal categories using 2 experiments. Experiment 1 explores the effect of target duration on verbal and nonverbal categorization using backward masking to interrupt visual processing. With categories equated for difficulty for long and short target durations, intermediate target duration shows an advantage for verbal categorization over nonverbal categorization. Experiment 2 tests whether the results of Experiment 1 can be explained by shorter target duration resulting in a smaller signal-to-noise ratio of the categorization stimulus. To test for this possibility, Experiment 2 used integration masking with the same stimuli, categories, and masks as Experiment 1 with a varying level of mask opacity. As predicted, low mask opacity yielded similar results to long target duration while high mask opacity yielded similar results to short target duration. Importantly, intermediate mask opacity produced an advantage for verbal categorization over nonverbal categorization, similar to intermediate target duration. These results suggest that verbal and nonverbal categorization are affected differently by manipulations affecting the signal-to-noise ratio of the stimulus, consistent with multiple-system theories of categorization. The results further suggest that verbal categorization may be more digital (and more robust to low signal-to-noise ratio) while the information used in nonverbal categorization may be more analog (and less robust to lower signal-to-noise ratio). This article concludes with a discussion of how these new results affect the use of masking in perceptual categorization and multiple-system theories of perceptual category learning. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
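The integration-masking manipulation above treats mask opacity as a knob on the signal-to-noise ratio of the stimulus: the target is attenuated by (1 - opacity) while the mask contributes noise scaled by opacity. The toy calculation below makes that relationship explicit; the variances and opacity values are illustrative, not the actual stimulus parameters used in the experiments.

```python
# Toy illustration of integration masking: alpha-blending a noise mask over a
# target attenuates the signal by (1 - opacity) and adds mask variance, so the
# effective signal-to-noise ratio falls as opacity rises. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, size=10_000)   # stand-in target contrast values
mask = rng.normal(0.0, 1.0, size=10_000)     # independent mask noise

for opacity in (0.2, 0.5, 0.8):
    signal_power = ((1 - opacity) ** 2) * target.var()
    noise_power = (opacity ** 2) * mask.var()
    print(f"mask opacity {opacity:.1f}: SNR ~ {signal_power / noise_power:.2f}")
```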
Furley, Philip; Memmert, Daniel; Schmid, Simone
2013-03-01
In two experiments, we transferred perceptual load theory to the dynamic field of team sports and tested the predictions derived from the theory using a novel task and stimuli. We tested a group of college students (N = 33) and a group of expert team sport players (N = 32) on a general perceptual load task and a complex, soccer-specific perceptual load task in order to extend the understanding of the applicability of perceptual load theory and to further investigate whether distractor interference may differ between the groups, as the sport-specific processing task may not exhaust the processing capacity of the expert participants. In both the general and the sport-specific task, the pattern of results supported perceptual load theory and demonstrated that the predictions of the theory also transfer to more complex, unstructured situations. Further, perceptual load was the only determinant of distractor processing, as we found expertise effects neither in the general perceptual load task nor in the sport-specific task. We discuss the heuristic utility of using response-competition paradigms for studying both general and domain-specific perceptual-cognitive adaptations.
Mattys, Sven L; Scharenborg, Odette
2014-03-01
This study investigates the extent to which age-related language processing difficulties are due to a decline in sensory processes or to a deterioration of cognitive factors, specifically, attentional control. Two facets of attentional control were examined: inhibition of irrelevant information and divided attention. Younger and older adults were asked to categorize the initial phoneme of spoken syllables ("Was it m or n?"), trying to ignore the lexical status of the syllables. The phonemes were manipulated to range in eight steps from m to n. Participants also did a discrimination task on syllable pairs ("Were the initial sounds the same or different?"). Categorization and discrimination were performed under either divided attention (concurrent visual-search task) or focused attention (no visual task). The results showed that even when the younger and older adults were matched on their discrimination scores: (1) the older adults had more difficulty inhibiting lexical knowledge than did younger adults, (2) divided attention weakened lexical inhibition in both younger and older adults, and (3) divided attention impaired sound discrimination more in older than younger listeners. The results confirm the independent and combined contribution of sensory decline and deficit in attentional control to language processing difficulties associated with aging. The relative weight of these variables and their mechanisms of action are discussed in the context of theories of aging and language. (c) 2014 APA, all rights reserved.
Summerfield, Christopher; Tsetsos, Konstantinos
2012-01-01
Investigation into the neural and computational bases of decision-making has proceeded in two parallel but distinct streams. Perceptual decision-making (PDM) is concerned with how observers detect, discriminate, and categorize noisy sensory information. Economic decision-making (EDM) explores how options are selected on the basis of their reinforcement history. Traditionally, the sub-fields of PDM and EDM have employed different paradigms, proposed different mechanistic models, explored different brain regions, and disagreed about whether decisions approach optimality. Nevertheless, we argue that there is a common framework for understanding decisions made in both tasks, under which an agent has to combine sensory information (what is the stimulus) with value information (what is it worth). We review computational models of the decision process typically used in PDM, based around the idea that decisions involve a serial integration of evidence, and assess their applicability to decisions between goods and gambles. Subsequently, we consider the contribution of three key brain regions – the parietal cortex, the basal ganglia, and the orbitofrontal cortex (OFC) – to PDM and EDM, with a focus on the mechanisms by which sensory and reward information are integrated during choice. We find that although the parietal cortex is often implicated in the integration of sensory evidence, there is evidence for its role in encoding the expected value of a decision. Similarly, although much research has emphasized the role of the striatum and OFC in value-guided choices, they may play an important role in categorization of perceptual information. In conclusion, we consider how findings from the two fields might be brought together, in order to move toward a general framework for understanding decision-making in humans and other primates. PMID:22654730
Szabo, Miruna; Deco, Gustavo; Fusi, Stefano; Del Giudice, Paolo; Mattia, Maurizio; Stetter, Martin
2006-05-01
Recent experiments on behaving monkeys have shown that learning a visual categorization task makes the neurons in infero-temporal cortex (ITC) more selective to the task-relevant features of the stimuli (Sigala & Logothetis, Nature 415: 318-320, 2002). We hypothesize that such a selectivity modulation emerges from the interaction between ITC and another cortical area, presumably the prefrontal cortex (PFC), where the previously learned stimulus categories are encoded. We propose a biologically inspired model of excitatory and inhibitory spiking neurons with plastic synapses, modified according to a reward-based Hebbian learning rule, to explain the experimental results and test the validity of our hypothesis. We assume that the ITC neurons, receiving feature-selective inputs, form stronger connections with the category-specific neurons to which they are consistently associated in rewarded trials. After learning, the top-down influence of PFC neurons enhances the selectivity of the ITC neurons encoding the behaviorally relevant features of the stimuli, as observed in the experiments. We conclude that the perceptual representation in visual areas like ITC can be strongly affected by the interaction with other areas that are devoted to higher cognitive functions.
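A rate-based caricature of the reward-gated Hebbian rule described above is sketched below: weights from feature-selective units onto the category unit that won a rewarded trial are strengthened, with decay and clipping keeping them bounded. It is a schematic of the learning principle only, not the spiking model used in the paper; the learning rate, decay, and unit counts are arbitrary choices.

```python
# Schematic reward-gated Hebbian learning (rate-based, not the spiking model in
# the study above): connections from feature-selective units to the category
# unit active on a rewarded trial are strengthened and kept bounded.
import numpy as np

def reward_hebbian_update(W, pre_rates, post_rates, reward, lr=0.05):
    """W[i, j]: weight from presynaptic feature unit j to category unit i."""
    if reward:                                   # only rewarded trials drive plasticity
        W = W + lr * np.outer(post_rates, pre_rates)
    W = W * 0.995                                # slow decay keeps weights bounded
    return np.clip(W, 0.0, 1.0)

rng = np.random.default_rng(0)
W = rng.uniform(0.0, 0.1, size=(2, 4))           # 2 category units, 4 feature units
pre = np.array([1.0, 0.8, 0.0, 0.1])             # task-relevant features active
post = np.array([1.0, 0.0])                      # category unit 0 won and was rewarded
for _ in range(50):
    W = reward_hebbian_update(W, pre, post, reward=True)
print(W)  # weights from the active features onto category unit 0 grow strongest
```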
Association between educational status and dual-task performance in young adults.
Voos, Mariana Callil; Pimentel Piemonte, Maria Elisa; Castelli, Lilian Zanchetta; Andrade Machado, Mariane Silva; Dos Santos Teixeira, Patrícia Pereira; Caromano, Fátima Aparecida; Ribeiro Do Valle, Luiz Eduardo
2015-04-01
The influence of educational status on perceptual-motor performance has not been investigated. The single- and dual-task performances of 15 low-education adults (9 men, 6 women; M age=24.1 yr.; 6-9 yr. of education) and 15 high-education adults (8 men, 7 women; M age=24.7 yr.; 10-13 yr. of education) were compared. The perceptual task consisted of verbally classifying two figures (equal or different). The motor task consisted of alternating steps from the floor to a stool. Tasks were assessed individually and simultaneously. Two analyses of variance (2 groups×4 blocks) compared the errors and steps. The low-education group committed more errors and showed less improvement on the perceptual task than the high-education group. During and after the perceptual-motor task, errors increased only in the low-education group. Education correlated with both perceptual and motor performance. The low-education group also showed more errors and fewer step alternations on the perceptual-motor task than the high-education group, and this difference in the number of errors persisted after the dual task, when the perceptual task was performed alone.
The role of perceptual load in inattentional blindness.
Cartwright-Finch, Ula; Lavie, Nilli
2007-03-01
Perceptual load theory offers a resolution to the long-standing early vs. late selection debate over whether task-irrelevant stimuli are perceived, suggesting that irrelevant perception depends upon the perceptual load of task-relevant processing. However, previous evidence for this theory has relied on RTs and neuroimaging. Here we tested the effects of load on conscious perception using the "inattentional blindness" paradigm. As predicted by load theory, awareness of a task-irrelevant stimulus was significantly reduced by higher perceptual load (with increased numbers of search items, or a harder discrimination vs. detection task). These results demonstrate that conscious perception of task-irrelevant stimuli critically depends upon the level of task-relevant perceptual load rather than intentions or expectations, thus enhancing the resolution to the early vs. late selection debate offered by the perceptual load theory.
Bogale, Bezawork Afework; Aoyama, Masato; Sugita, Shoei
2011-01-01
We trained jungle crows to discriminate among photographs of human faces according to sex in a simultaneous two-alternative task to study their categorical learning ability. Once the crows reached a discrimination criterion (greater than or equal to 80% correct choices in two consecutive sessions; binomial probability test, p<.05), they next received generalization and transfer tests (i.e., greyscale, contour, and 'full' occlusion) in Experiment 1, followed by a 'partial' occlusion test in Experiment 2 and a random stimulus-pair test in Experiment 3. Jungle crows learned the discrimination task in a few trials and successfully generalized to novel stimulus sets. However, all crows failed the greyscale test and half of them the contour test. Neither occlusion of internal features of the face nor random pairing of exemplars affected the discrimination performance of most, if not all, crows. We suggest that jungle crows categorize human face photographs based on perceptual similarities, as other non-human animals do, and colour appears to be the most salient feature controlling discriminative behaviour. However, the variability in the use of facial contours among individuals suggests an exploitation of multiple features and individual differences in visual information processing among jungle crows. Copyright © 2010 Elsevier B.V. All rights reserved.
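The learning criterion reported above (at least 80% correct choices in two consecutive sessions, binomial probability test, p < .05) can be checked with a short sketch; the 40-trial session length and .5 chance level are assumptions, not values taken from the study:

from scipy.stats import binom

n_trials, chance = 40, 0.5               # assumed session length and chance level
correct_per_session = [33, 34]           # hypothetical counts of correct choices (>= 80%)

def above_chance(k, n, p=chance, alpha=0.05):
    """One-tailed binomial test: probability of k or more correct choices under chance."""
    p_value = binom.sf(k - 1, n, p)
    return p_value < alpha

criterion_met = all(k / n_trials >= 0.80 and above_chance(k, n_trials)
                    for k in correct_per_session)
print(criterion_met)   # True when both consecutive sessions satisfy the criterion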
Decoupling Object Detection and Categorization
ERIC Educational Resources Information Center
Mack, Michael L.; Palmeri, Thomas J.
2010-01-01
We investigated whether there exists a behavioral dependency between object detection and categorization. Previous work (Grill-Spector & Kanwisher, 2005) suggests that object detection and basic-level categorization may be the very same perceptual mechanism: As objects are parsed from the background they are categorized at the basic level. In…
Group benefits in joint perceptual tasks-a review.
Wahn, Basil; Kingstone, Alan; König, Peter
2018-05-12
In daily life, humans often perform perceptual tasks together to reach a shared goal. In these situations, individuals may collaborate (e.g., by distributing task demands) to perform the task better than when the task is performed alone (i.e., attain a group benefit). In this review, we identify the factors influencing if, and to what extent, a group benefit is attained and provide a framework of measures to assess group benefits in perceptual tasks. In particular, we integrate findings from two frequently investigated joint perceptual tasks: visuospatial tasks and decision-making tasks. For both task types, we find that an exchange of information between coactors is critical to improve joint performance. Yet, the type of exchanged information and how coactors collaborate differs between tasks. In visuospatial tasks, coactors exchange information about the performed actions to distribute task demands. In perceptual decision-making tasks, coactors exchange their confidence on their individual perceptual judgments to negotiate a joint decision. We argue that these differences can be explained by the task structure: coactors distribute task demands if a joint task allows for a spatial division and stimuli can be accurately processed by one individual. Otherwise, they perform the task individually and then integrate their individual judgments. © 2018 New York Academy of Sciences.
Implicit and Explicit Contributions to Object Recognition: Evidence from Rapid Perceptual Learning
Hassler, Uwe; Friese, Uwe; Gruber, Thomas
2012-01-01
The present study investigated implicit and explicit recognition processes of rapidly perceptually learned objects by means of steady-state visual evoked potentials (SSVEP). Participants were initially exposed to object pictures within an incidental learning task (living/non-living categorization). Subsequently, degraded versions of some of these learned pictures were presented together with degraded versions of unlearned pictures, and participants had to judge whether they recognized an object or not. During this test phase, stimuli were presented at 15 Hz, eliciting an SSVEP at the same frequency. Source localizations of SSVEP effects revealed overlapping activations for implicit and explicit processes in orbito-frontal and temporal regions. Correlates of explicit object recognition were additionally found in the superior parietal lobe. These findings are discussed as reflecting facilitation of object-specific processing areas within the temporal lobe by an orbito-frontal top-down signal, as proposed by bi-directional accounts of object recognition. PMID:23056558
Transfer of perceptual learning between different visual tasks
McGovern, David P.; Webb, Ben S.; Peirce, Jonathan W.
2012-01-01
Practice in most sensory tasks substantially improves perceptual performance. A hallmark of this ‘perceptual learning' is its specificity for the basic attributes of the trained stimulus and task. Recent studies have challenged the specificity of learned improvements, although transfer between substantially different tasks has yet to be demonstrated. Here, we measure the degree of transfer between three distinct perceptual tasks. Participants trained on an orientation discrimination, a curvature discrimination, or a ‘global form' task, all using stimuli comprised of multiple oriented elements. Before and after training they were tested on all three and a contrast discrimination control task. A clear transfer of learning was observed, in a pattern predicted by the relative complexity of the stimuli in the training and test tasks. Our results suggest that sensory improvements derived from perceptual learning can transfer between very different visual tasks. PMID:23048211
Decoding stimulus features in primate somatosensory cortex during perceptual categorization
Alvarez, Manuel; Zainos, Antonio; Romo, Ranulfo
2015-01-01
Neurons of the primary somatosensory cortex (S1) respond as functions of the frequency or amplitude of a vibrotactile stimulus. However, whether S1 neurons encode both frequency and amplitude of the vibrotactile stimulus, or whether each sensory feature is encoded by separate populations of S1 neurons, is not known. To address these questions, we recorded S1 neurons while trained monkeys categorized only one sensory feature of the vibrotactile stimulus: frequency, amplitude, or duration. The results suggest a hierarchical encoding scheme in S1: from neurons that encode all sensory features of the vibrotactile stimulus to neurons that encode only one sensory feature. We hypothesize that the dynamic representation of each sensory feature in S1 might serve further downstream processing that leads to the monkey’s psychophysical behavior observed in these tasks. PMID:25825711
Blank, Helen; Biele, Guido; Heekeren, Hauke R; Philiastides, Marios G
2013-02-27
Perceptual decision making is the process by which information from sensory systems is combined and used to influence our behavior. In addition to the sensory input, this process can be affected by other factors, such as reward and punishment for correct and incorrect responses. To investigate the temporal dynamics of how monetary punishment influences perceptual decision making in humans, we collected electroencephalography (EEG) data during a perceptual categorization task whereby the punishment level for incorrect responses was parametrically manipulated across blocks of trials. Behaviorally, we observed improved accuracy for high relative to low punishment levels. Using multivariate linear discriminant analysis of the EEG, we identified multiple punishment-induced discriminating components with spatially distinct scalp topographies. Compared with components related to sensory evidence, components discriminating punishment levels appeared later in the trial, suggesting that punishment affects primarily late postsensory, decision-related processing. Crucially, the amplitude of these punishment components across participants was predictive of the size of the behavioral improvements induced by punishment. Finally, trial-by-trial changes in prestimulus oscillatory activity in the alpha and gamma bands were good predictors of the amplitude of these components. We discuss these findings in the context of increased motivation/attention, resulting from increases in punishment, which in turn yields improved decision-related processing.
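A rough sketch of the multivariate linear discriminant approach described above, using simulated single-trial data and scikit-learn's LDA rather than the authors' pipeline (the number of trials, channels, and the analysis window are assumptions):

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels = 200, 32
labels = rng.integers(0, 2, n_trials)                   # 0 = low, 1 = high punishment (simulated)
eeg_window = rng.normal(size=(n_trials, n_channels))    # mean amplitude per channel in one time window
eeg_window[labels == 1] += 0.4                          # inject a weak class difference

# Cross-validated accuracy of a linear discriminant indicates whether this window
# carries punishment-related information.
lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, eeg_window, labels, cv=5).mean()

# The trial-wise discriminant output plays the role of the component amplitude that
# was related to behavioral improvement across participants.
lda.fit(eeg_window, labels)
component_amplitude = lda.decision_function(eeg_window)
print(round(accuracy, 2), np.round(component_amplitude[:3], 2))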
Evaluative pressure overcomes perceptual load effects.
Normand, Alice; Autin, Frédérique; Croizet, Jean-Claude
2015-06-01
Perceptual load has been found to be a powerful bottom-up determinant of distractibility, with high perceptual load preventing distraction by any irrelevant information. However, when under evaluative pressure, individuals exert top-down attentional control by giving greater weight to task-relevant features, making them more distractible by task-relevant distractors. In one study, we tested whether this top-down modulation of attention under evaluative pressure overcomes the beneficial bottom-up effect of high perceptual load on distraction. Using a response-competition task, we replicated previous findings that high levels of perceptual load suppress response interference from task-relevant distractors, but only for participants in a control condition. Participants under evaluative pressure (i.e., who believed their intelligence was being assessed) showed interference from the task-relevant distractor at all levels of perceptual load. This research challenges the assumptions of perceptual load theory and sheds light on a neglected determinant of distractibility: the self-relevance of the performance situation in which attentional control is solicited.
The Influence of Flankers on Race Categorization of Faces
Sun, Hsin-Mei; Balas, Benjamin
2012-01-01
Context affects multiple cognitive and perceptual processes. In the present study, we asked how the context of a set of faces affected the perception of a target face’s race in two distinct tasks. In Experiments 1 and 2, participants categorized target faces according to perceived racial category (Black or White). In Experiment 1, the target face was presented alone, or with Black or White flanker faces. The orientation of the flanker faces was also manipulated to investigate how the face inversion effect interacts with the influence of flanker faces on the target face. The results showed that participants were more likely to categorize the target face as White when it was surrounded by inverted White faces (an assimilation effect). Experiment 2 further examined how different aspects of the visual context affect the perception of the target face by manipulating the flanker faces’ shape and pigmentation as well as their orientation. The results showed that the flanker faces’ shape and pigmentation affected the perception of the target face differently: while shape elicited a contrast effect, pigmentation appeared to be assimilative. These novel findings suggest that the perceived race of a face is modulated by the appearance of other faces and their distinct shape and pigmentation properties. However, the contrast and assimilation effects elicited by flanker faces’ shape and pigmentation may be specific to race categorization, since the same stimuli used in a delayed matching task (Experiment 3) revealed that flanker pigmentation induced a contrast effect on the perception of target pigmentation. PMID:22825930
Perceptual processing affects conceptual processing.
Van Dantzig, Saskia; Pecher, Diane; Zeelenberg, René; Barsalou, Lawrence W
2008-04-05
According to the Perceptual Symbols Theory of cognition (Barsalou, 1999), modality-specific simulations underlie the representation of concepts. A strong prediction of this view is that perceptual processing affects conceptual processing. In this study, participants performed a perceptual detection task and a conceptual property-verification task in alternation. Responses on the property-verification task were slower for those trials that were preceded by a perceptual trial in a different modality than for those that were preceded by a perceptual trial in the same modality. This finding of a modality-switch effect across perceptual processing and conceptual processing supports the hypothesis that perceptual and conceptual representations are partially based on the same systems. 2008 Cognitive Science Society, Inc.
Semantic activation in the absence of perceptual awareness.
Ortells, Juan J; Daza, María Teresa; Fox, Elaine
2003-11-01
Participants performed a semantic categorization task on a target that was preceded by a prime word belonging either to the same category (20% of trials) or to a different category (80% of trials). The prime was presented for 33 msec and followed either immediately or after a delay by a pattern mask. With the immediate mask, reaction times (RTs) were shorter on related than on unrelated trials. This facilitatory priming reached significance at prime-target stimulus onset asynchronies (SOAs) of 400 msec or less and remained unaffected by task practice. With the delayed mask, RTs were longer on related than on unrelated trials. This reversed (strategic) semantic priming proved to be significant (1) only at a prime-target SOA of 400 msec or longer and (2) after the participants had some practice with the task. The present findings provide further evidence that perceiving a stimulus with and without phenomenological awareness can lead to qualitatively different behavioral consequences.
The Relationship Between Perceptual Development and the Acquisition of Reading Skill. Final Report.
ERIC Educational Resources Information Center
Gibson, Eleanor J.
The work described in this report is aimed at understanding the role of cognitive development, especially perceptual development, in the reading process and its acquisition. The papers included describe: (1) a theory of perceptual learning, (2) an investigation of the perception of morphological information, (3) the role of categorical semantic…
The influence of schizotypal traits on attention under high perceptual load.
Stotesbury, Hanne; Gaigg, Sebastian B; Kirhan, Saim; Haenschel, Corinna
2018-03-01
Schizophrenia Spectrum Disorders (SSD) are known to be characterised by abnormalities in attentional processes, but there are inconsistencies in the literature that remain unresolved. This article considers whether perceptual resource limitations play a role in moderating attentional abnormalities in SSD. According to perceptual load theory, perceptual resource limitations can lead to attenuated or superior performance on dual-task paradigms depending on whether participants are required to process, or attempt to ignore, secondary stimuli. If SSD is associated with perceptual resource limitations, and if it represents the extreme end of an otherwise normally distributed neuropsychological phenotype, schizotypal traits in the general population should lead to disproportionate performance costs on dual-task paradigms as a function of the perceptual task demands. To test this prediction, schizotypal traits were quantified via the Schizotypal Personality Questionnaire (SPQ) in 74 healthy volunteers, who also completed a dual-task signal detection paradigm that required participants to detect central and peripheral stimuli across conditions that varied in the overall number of stimuli presented. The results confirmed decreasing performance as the perceptual load of the task increased. More importantly, significant correlations between SPQ scores and task performance confirmed that increased schizotypal traits, particularly in the cognitive-perceptual domain, are associated with greater performance decrements under increasing perceptual load. These results confirm that attentional difficulties associated with SSD extend sub-clinically into the general population and suggest that cognitive-perceptual schizotypal traits may represent a risk factor for difficulties in the regulation of attention under increasing perceptual load.
Perceptual grouping enhances visual plasticity.
Mastropasqua, Tommaso; Turatto, Massimo
2013-01-01
Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (an oriented grating), while two similar task-irrelevant stimuli were presented at adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity.
Timing of repetition suppression of event-related potentials to unattended objects.
Stefanics, Gabor; Heinzle, Jakob; Czigler, István; Valentini, Elia; Stephan, Klaas Enno
2018-05-26
Current theories of object perception emphasize the automatic nature of perceptual inference. Repetition suppression (RS), the successive decrease of brain responses to repeated stimuli, is thought to reflect the optimization of perceptual inference through neural plasticity. While functional imaging studies have revealed brain regions that show suppressed responses to the repeated presentation of an object, little is known about the intra-trial time course of repetition effects for everyday objects. Here we recorded event-related potentials (ERPs) elicited by task-irrelevant line-drawn objects, while participants engaged in a distractor task. We quantified changes in ERPs over repetitions using three general linear models (GLM) that modelled RS by an exponential, linear, or categorical "change detection" function in each subject. Our aim was to select the model with the highest evidence and to determine the within-trial time course and scalp distribution of repetition effects using that model. Model comparison revealed the superiority of the exponential model, indicating that repetition effects are observable for trials beyond the first repetition. Model parameter estimates revealed a sequence of RS effects in three time windows (86-140 ms, 322-360 ms, and 400-446 ms) with occipital, temporo-parietal, and fronto-temporal distributions, respectively. An interval of repetition enhancement (RE) was also observed (320-340 ms) over occipito-temporal sensors. Our results show that automatic processing of task-irrelevant objects involves multiple intervals of RS with distinct scalp topographies. These sequential intervals of RS and RE might reflect the short-term plasticity required for optimization of perceptual inference and the associated changes in prediction errors (PE) and predictions, respectively, over stimulus repetitions during automatic object processing. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
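The per-subject model comparison described above can be sketched by fitting exponential, linear, and categorical "change detection" regressors to repetition-wise ERP amplitudes and comparing the fits with an information criterion; the simulated amplitudes and the fixed decay constant are assumptions:

import numpy as np

rng = np.random.default_rng(2)
reps = np.arange(1, 9)                                  # presentation number 1..8 (assumed)
amplitude = 5.0 * np.exp(-0.6 * (reps - 1)) + rng.normal(0, 0.3, reps.size)   # simulated ERP amplitude

def fit_ols(regressor, y):
    """Ordinary least squares with an intercept; returns residual sum of squares and parameter count."""
    X = np.column_stack([np.ones_like(y), regressor])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    return float(resid @ resid), X.shape[1]

models = {
    "exponential": np.exp(-0.6 * (reps - 1)),           # assumed fixed decay constant
    "linear": reps.astype(float),
    "categorical": (reps == 1).astype(float),           # first presentation vs. all repetitions
}

n = reps.size
for name, regressor in models.items():
    rss, k = fit_ols(regressor, amplitude)
    aic = n * np.log(rss / n) + 2 * k                   # Gaussian AIC up to a constant
    print(f"{name:12s} AIC = {aic:6.1f}")

With data generated by an exponential decay, the exponential regressor wins the comparison, which is the pattern reported above.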
Hartfield, Kia N; Conture, Edward G
2006-01-01
The purpose of this study was to investigate the influence of conceptual and perceptual properties of words on the speed and accuracy of lexical retrieval of children who do (CWS) and do not stutter (CWNS) during a picture-naming task. Participants consisted of 13 3-5-year-old CWS and the same number of CWNS. All participants had speech, language, and hearing development within normal limits, with the exception of stuttering for CWS. Both talker groups participated in a picture-naming task where they named, one at a time, computer-presented, black-on-white drawings of common age-appropriate objects. These pictures were named during four auditory priming conditions: (a) a neutral prime consisting of a tone, (b) a word prime physically related to the target word, (c) a word prime functionally related to the target word, and (d) a word prime categorically related to the target word. Speech reaction time (SRT) was measured from the offset of presentation of the picture target to the onset of participant's verbal speech response. Results indicated that CWS were slower than CWNS across priming conditions (i.e., neutral, physical, function, category) and that the speed of lexical retrieval of CWS was more influenced by functional than perceptual aspects of target pictures named. Findings were taken to suggest that CWS tend to organize lexical information functionally more so than physically and that this tendency may relate to difficulties establishing normally fluent speech and language. The reader will learn about and be able to (1) communicate the relevance of examining lexical retrieval in relation to childhood stuttering and (2) describe the method of measuring speech reaction times of accurate and fluent responses during a picture-naming task as a means of assessing lexical retrieval skills.
Object-Based Attention Overrides Perceptual Load to Modulate Visual Distraction
ERIC Educational Resources Information Center
Cosman, Joshua D.; Vecera, Shaun P.
2012-01-01
The ability to ignore task-irrelevant information and overcome distraction is central to our ability to efficiently carry out a number of tasks. One factor shown to strongly influence distraction is the perceptual load of the task being performed; as the perceptual load of task-relevant information processing increases, the likelihood that…
A Neural Signature Encoding Decisions under Perceptual Ambiguity.
Sun, Sai; Yu, Rongjun; Wang, Shuo
2017-01-01
People often make perceptual decisions with ambiguous information, but it remains unclear whether the brain has a common neural substrate that encodes various forms of perceptual ambiguity. Here, we used three types of perceptually ambiguous stimuli as well as task instructions to examine the neural basis for both stimulus-driven and task-driven perceptual ambiguity. We identified a neural signature, the late positive potential (LPP), that encoded a general form of stimulus-driven perceptual ambiguity. In addition to stimulus-driven ambiguity, the LPP was also modulated by ambiguity in task instructions. To further specify the functional role of the LPP and elucidate the relationship between stimulus ambiguity, behavioral response, and the LPP, we employed regression models and found that the LPP was specifically associated with response latency and confidence rating, suggesting that the LPP encoded decisions under perceptual ambiguity. Finally, direct behavioral ratings of stimulus and task ambiguity confirmed our neurophysiological findings, which could not be attributed to differences in eye movements either. Together, our findings argue for a common neural signature that encodes decisions under perceptual ambiguity but is subject to the modulation of task ambiguity. Our results represent an essential first step toward a complete neural understanding of human perceptual decision making.
Perceptual load interacts with stimulus processing across sensory modalities.
Klemen, J; Büchel, C; Rose, M
2009-06-01
According to perceptual load theory, processing of task-irrelevant stimuli is limited by the perceptual load of a parallel attended task if both the task and the irrelevant stimuli are presented to the same sensory modality. However, it remains a matter of debate whether the same principles apply to cross-sensory perceptual load and, more generally, what form cross-sensory attentional modulation in early perceptual areas takes in humans. Here we addressed these questions using functional magnetic resonance imaging. Participants undertook an auditory one-back working memory task of low or high perceptual load, while concurrently viewing task-irrelevant images at one of three object visibility levels. The processing of the visual and auditory stimuli was measured in the lateral occipital cortex (LOC) and auditory cortex (AC), respectively. Cross-sensory interference with sensory processing was observed in both the LOC and AC, in accordance with previous results of unisensory perceptual load studies. The present neuroimaging results therefore warrant the extension of perceptual load theory from a unisensory to a cross-sensory context: a validation of this cross-sensory interference effect through behavioural measures would consolidate the findings.
Parks, Nathan A; Hilimire, Matthew R; Corballis, Paul M
2011-05-01
The perceptual load theory of attention posits that attentional selection occurs early in processing when a task is perceptually demanding but occurs late in processing otherwise. We used a frequency-tagged steady-state evoked potential paradigm to investigate the modality specificity of perceptual load-induced distractor filtering and the nature of neural-competitive interactions between task and distractor stimuli. EEG data were recorded while participants monitored a stream of stimuli occurring in rapid serial visual presentation (RSVP) for the appearance of previously assigned targets. Perceptual load was manipulated by assigning targets that were identifiable by color alone (low load) or by the conjunction of color and orientation (high load). The RSVP task was performed alone and in the presence of task-irrelevant visual and auditory distractors. The RSVP stimuli, visual distractors, and auditory distractors were "tagged" by modulating each at a unique frequency (2.5, 8.5, and 40.0 Hz, respectively), which allowed each to be analyzed separately in the frequency domain. We report three important findings regarding the neural mechanisms of perceptual load. First, we replicated previous findings of within-modality distractor filtering and demonstrated a reduction in visual distractor signals with high perceptual load. Second, auditory steady-state distractor signals were unaffected by manipulations of visual perceptual load, consistent with the idea that perceptual load-induced distractor filtering is modality specific. Third, analysis of task-related signals revealed that visual distractors competed with task stimuli for representation and that increased perceptual load appeared to resolve this competition in favor of the task stimulus.
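The frequency-tagging logic used above can be sketched as follows: each stream is modulated at its own frequency, and the amplitude spectrum of the recorded signal is read out at the tagging frequencies (2.5, 8.5, and 40.0 Hz). The simulated signal, sampling rate, and epoch length are assumptions:

import numpy as np

fs, duration = 500, 4.0                      # assumed sampling rate (Hz) and epoch length (s)
t = np.arange(0, duration, 1 / fs)
tags = {"RSVP stream": 2.5, "visual distractor": 8.5, "auditory distractor": 40.0}

rng = np.random.default_rng(3)
# Simulated recording: each tagged stream contributes a sinusoid at its frequency, plus noise.
signal = sum(a * np.sin(2 * np.pi * f * t)
             for a, f in zip([1.0, 0.6, 0.3], tags.values()))
signal = signal + rng.normal(0, 0.5, t.size)

spectrum = np.abs(np.fft.rfft(signal)) * 2 / t.size    # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for name, f in tags.items():
    amp = spectrum[np.argmin(np.abs(freqs - f))]       # read out amplitude at the tagging frequency
    print(f"{name:19s} {f:5.1f} Hz  amplitude {amp:.2f}")

Comparing these tagged amplitudes across load conditions is the logic behind the within-modality and cross-modality distractor effects reported above.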
Reetzke, Rachel; Xie, Zilong; Llanos, Fernando; Chandrasekaran, Bharath
2018-05-07
Although challenging, adults can learn non-native phonetic contrasts with extensive training [1, 2], indicative of perceptual learning beyond an early sensitivity period [3, 4]. Training can alter low-level sensory encoding of newly acquired speech sound patterns [5]; however, the time-course, behavioral relevance, and long-term retention of such sensory plasticity is unclear. Some theories argue that sensory plasticity underlying signal enhancement is immediate and critical to perceptual learning [6, 7]. Others, like the reverse hierarchy theory (RHT), posit a slower time-course for sensory plasticity [8]. RHT proposes that higher-level categorical representations guide immediate, novice learning, while lower-level sensory changes do not emerge until expert stages of learning [9]. We trained 20 English-speaking adults to categorize a non-native phonetic contrast (Mandarin lexical tones) using a criterion-dependent sound-to-category training paradigm. Sensory and perceptual indices were assayed across operationally defined learning phases (novice, experienced, over-trained, and 8-week retention) by measuring the frequency-following response, a neurophonic potential that reflects fidelity of sensory encoding, and the perceptual identification of a tone continuum. Our results demonstrate that while robust changes in sensory encoding and perceptual identification of Mandarin tones emerged with training and were retained, such changes followed different timescales. Sensory changes were evidenced, and related to behavioral performance, only when participants were over-trained. In contrast, changes in perceptual identification reflecting improvement in the categorical percept emerged relatively earlier. Individual differences in perceptual identification, and not sensory encoding, related to faster learning. Our findings support the RHT: sensory plasticity accompanies, rather than drives, expert levels of non-native speech learning. Copyright © 2018 Elsevier Ltd. All rights reserved.
The perception of visual emotion: comparing different measures of awareness.
Szczepanowski, Remigiusz; Traczyk, Jakub; Wierzchoń, Michał; Cleeremans, Axel
2013-03-01
Here, we explore the sensitivity of different awareness scales in revealing conscious reports on visual emotion perception. Participants were exposed to a backward masking task involving fearful faces and asked to rate their conscious awareness in perceiving emotion in facial expression using three different subjective measures: confidence ratings (CRs), with the conventional taxonomy of certainty, the perceptual awareness scale (PAS), through which participants categorize "raw" visual experience, and post-decision wagering (PDW), which involves economic categorization. Our results show that the CR measure was the most exhaustive and the most graded. In contrast, the PAS and PDW measures suggested instead that consciousness of emotional stimuli is dichotomous. Possible explanations of the inconsistency were discussed. Finally, our results also indicate that PDW biases awareness ratings by enhancing first-order accuracy of emotion perception. This effect was possibly a result of higher motivation induced by monetary incentives. Copyright © 2012 Elsevier Inc. All rights reserved.
Accurate expectancies diminish perceptual distraction during visual search
Sy, Jocelyn L.; Guerin, Scott A.; Stegman, Anna; Giesbrecht, Barry
2014-01-01
The load theory of visual attention proposes that efficient selective perceptual processing of task-relevant information during search is determined automatically by the perceptual demands of the display. If the perceptual demands required to process task-relevant information are not enough to consume all available capacity, then the remaining capacity automatically and exhaustively “spills-over” to task-irrelevant information. The spill-over of perceptual processing capacity increases the likelihood that task-irrelevant information will impair performance. In two visual search experiments, we tested the automaticity of the allocation of perceptual processing resources by measuring the extent to which the processing of task-irrelevant distracting stimuli was modulated by both perceptual load and top-down expectations using behavior, functional magnetic resonance imaging, and electrophysiology. Expectations were generated using a trial-by-trial cue that provided information about the likely load of the upcoming visual search task. When the cues were valid, behavioral interference was eliminated and the influence of load on frontoparietal and visual cortical responses was attenuated relative to when the cues were invalid. In conditions in which task-irrelevant information interfered with performance and modulated visual activity, individual differences in mean blood oxygenation level dependent responses measured from the left intraparietal sulcus were negatively correlated with individual differences in the severity of distraction. These results are consistent with the interpretation that a top-down biasing mechanism interacts with perceptual load to support filtering of task-irrelevant information. PMID:24904374
‘If you are good, I get better’: the role of social hierarchy in perceptual decision-making
Pannunzi, Mario; Ayneto, Alba; Deco, Gustavo; Sebastián-Gallés, Nuria
2014-01-01
Until now, it has been unclear whether social hierarchy can influence sensory or perceptual cognitive processes. We evaluated the effects of social hierarchy on these processes using a basic visual perceptual decision task. We constructed a social hierarchy in which participants performed the perceptual task separately with two covertly simulated players (superior, inferior). Participants were faster (better) when performing the discrimination task with the superior player. We examined the time course of social hierarchy processing using event-related potentials and observed hierarchical effects even in early stages of sensory-perceptual processing, suggesting early top-down modulation by social hierarchy. Moreover, in a parallel analysis, we fitted a drift-diffusion model (DDM) to the results to evaluate the decision-making process of this perceptual task in the context of a social hierarchy. Consistently, the DDM pointed to nondecision time (probably perceptual encoding) as the principal period influenced by social hierarchy. PMID:23946003
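A drift-diffusion model of the kind fitted above can be illustrated by simulation: evidence accumulates with drift v and Gaussian noise until it reaches a boundary, and a nondecision time Ter is added to the first-passage time. The parameter values, and the assumption that only Ter differs between the superior- and inferior-co-actor conditions, are illustrative, not estimates from the study:

import numpy as np

def simulate_ddm(v, a, ter, n_trials=500, dt=0.001, noise=1.0, seed=4):
    """Simulate first-passage times of a drift-diffusion process between boundaries 0 and a."""
    rng = np.random.default_rng(seed)
    rts, upper = [], []
    for _ in range(n_trials):
        x, t = a / 2, 0.0                      # start midway between the two boundaries
        while 0.0 < x < a:
            x += v * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t + ter)                    # add nondecision time (encoding + motor)
        upper.append(x >= a)                   # upper boundary taken as the correct response
    return np.mean(rts), np.mean(upper)

# Same drift and boundary; only nondecision time differs between the (assumed) conditions.
for label, ter in [("superior co-actor", 0.30), ("inferior co-actor", 0.36)]:
    mean_rt, accuracy = simulate_ddm(v=1.5, a=1.0, ter=ter)
    print(f"{label:17s} mean RT = {mean_rt:.3f} s, accuracy = {accuracy:.2f}")

Shortening only Ter speeds responses without changing accuracy, which matches the interpretation above that social hierarchy acts mainly on perceptual encoding rather than on evidence accumulation.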
Perceptual fluency and judgments of vocal aesthetics and stereotypicality.
Babel, Molly; McGuire, Grant
2015-05-01
Research has shown that processing dynamics on the perceiver's end determine aesthetic pleasure. Specifically, typical objects, which are processed more fluently, are perceived as more attractive. We extend this notion of perceptual fluency to judgments of vocal aesthetics. Vocal attractiveness has traditionally been examined with respect to sexual dimorphism and the apparent size of a talker, as reconstructed from the acoustic signal, despite evidence that gender-specific speech patterns are learned social behaviors. In this study, we report on a series of three experiments using 60 voices (30 females) to compare the relationship between judgments of vocal attractiveness, stereotypicality, and gender categorization fluency. Our results indicate that attractiveness and stereotypicality are highly correlated for female and male voices. Stereotypicality and categorization fluency were also correlated for male voices, but not female voices. Crucially, stereotypicality and categorization fluency interacted to predict attractiveness, suggesting the role of perceptual fluency is present, but nuanced, in judgments of human voices. © 2014 Cognitive Science Society, Inc.
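The interaction reported above (stereotypicality and categorization fluency jointly predicting attractiveness) can be sketched as an ordinary least-squares model with a product term; the simulated ratings and effect sizes are assumptions:

import numpy as np

rng = np.random.default_rng(7)
n = 60                                   # voices, as above
stereotypicality = rng.normal(0, 1, n)   # z-scored stereotypicality ratings (simulated)
fluency = rng.normal(0, 1, n)            # z-scored gender-categorization speed (simulated)

# Simulated attractiveness ratings in which part of the effect is carried by the interaction.
attractiveness = (0.5 * stereotypicality + 0.2 * fluency
                  + 0.3 * stereotypicality * fluency + rng.normal(0, 0.5, n))

# Design matrix with an intercept, both main effects, and their product term.
X = np.column_stack([np.ones(n), stereotypicality, fluency, stereotypicality * fluency])
beta = np.linalg.lstsq(X, attractiveness, rcond=None)[0]
for name, b in zip(["intercept", "stereotypicality", "fluency", "interaction"], beta):
    print(f"{name:16s} beta = {b:+.2f}")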
Lim, Sung-joo; Holt, Lori L
2011-01-01
Although speech categories are defined by multiple acoustic dimensions, some are perceptually weighted more than others and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distribution characteristics of experienced sounds influence perceptual cue weights: Increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this among native Japanese learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information, and players' responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5h across 5 days exhibited improvements in /r/-/l/ perception on par with 2-4 weeks of explicit categorization training in previous research and exhibited a shift toward more native-like perceptual cue weights. Copyright © 2011 Cognitive Science Society, Inc.
How we categorize objects is related to how we remember them: The shape bias as a memory bias
Vlach, Haley A.
2016-01-01
The “shape bias” describes the phenomenon that, after a certain point in development, children and adults generalize object categories based upon shape to a greater degree than other perceptual features. The focus of research on the shape bias has been to examine the types of information that learners attend to in one moment in time. The current work takes a different approach by examining whether learners' categorical biases are related to their retention of information across time. In three experiments, children's (N = 72) and adults' (N = 240) memory performance for features of objects was examined in relation to their categorical biases. The results of these experiments demonstrated that the number of shape matches chosen during the shape bias task significantly predicted shape memory. Moreover, children and adults with a shape bias were more likely to remember the shape of objects than they were the color and size of objects. Taken together, this work suggests the development of a shape bias may engender better memory for shape information. PMID:27454236
Ecological dynamics of continuous and categorical decision-making: the regatta start in sailing.
Araújo, Duarte; Davids, Keith; Diniz, Ana; Rocha, Luis; Santos, João Coelho; Dias, Gonçalo; Fernandes, Orlando
2015-01-01
The ecological dynamics of decision-making in the sport of sailing exemplify emergent, conditionally coupled, co-adaptive behaviours. In this study, observation of the coupling dynamics of paired boats during competitive sailing showed that decision-making can be modelled as a self-sustained, co-adapting system of informationally coupled oscillators (boats). By tracing the spatial-temporal displacements of the boats, time series analyses (autocorrelations, periodograms, and running correlations) revealed that the trajectories of match-racing boats are coupled more than 88% of the time during a pre-start race, via continuous, competing co-adaptations between boats. Results showed that both the continuously selected trajectories of the sailors (12 years of age) and their categorical starting-point locations were examples of emergent decisions. In this dynamical conception of decision-making behaviours, strategic positioning (categorical) and continuous displacement of a boat over the course in match-race sailing emerged as a function of interacting task, personal, and environmental constraints. Results suggest how key interacting constraints could be manipulated in practice to enhance sailors' perceptual attunement to them in competition.
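The running-correlation analysis used above to quantify coupling between the two boats can be sketched as a windowed Pearson correlation between their displacement time series; the simulated trajectories, 50-sample window, and coupling threshold are assumptions:

import numpy as np

rng = np.random.default_rng(5)
n, window = 600, 50                      # samples in the pre-start period and window length (assumed)
lead = np.cumsum(rng.normal(0, 1, n))    # simulated displacement of boat A
follow = np.roll(lead, 10) + rng.normal(0, 2, n)   # boat B loosely tracks boat A with a short lag

def running_correlation(x, y, win):
    """Pearson correlation of x and y within a sliding window ending at each sample."""
    out = np.full(x.size, np.nan)
    for i in range(win, x.size):
        out[i] = np.corrcoef(x[i - win:i], y[i - win:i])[0, 1]
    return out

r = running_correlation(lead, follow, window)
coupled = np.mean(np.abs(r[window:]) > 0.7)        # fraction of windows above a coupling threshold
print(f"boats coupled in {coupled:.0%} of the windows")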
ERIC Educational Resources Information Center
Busey, Thomas; Yu, Chen; Wyatte, Dean; Vanderkolk, John
2013-01-01
Perceptual tasks such as object matching, mammogram interpretation, mental rotation, and satellite imagery change detection often require the assignment of correspondences to fuse information across views. We apply techniques developed for machine translation to the gaze data recorded from a complex perceptual matching task modeled after…
Categorical Representation of Facial Expressions in the Infant Brain
ERIC Educational Resources Information Center
Leppanen, Jukka M.; Richmond, Jenny; Vogel-Farley, Vanessa K.; Moulson, Margaret C.; Nelson, Charles A.
2009-01-01
Categorical perception, demonstrated as reduced discrimination of within-category relative to between-category differences in stimuli, has been found in a variety of perceptual domains in adults. To examine the development of categorical perception in the domain of facial expression processing, we used behavioral and event-related potential (ERP)…
Audiovisual speech perception development at varying levels of perceptual processing
Lalonde, Kaylah; Holt, Rachael Frush
2016-01-01
This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children. PMID:27106318
Human Non-linguistic Vocal Repertoire: Call Types and Their Meaning.
Anikin, Andrey; Bååth, Rasmus; Persson, Tomas
2018-01-01
Recent research on human nonverbal vocalizations has led to considerable progress in our understanding of vocal communication of emotion. However, in contrast to studies of animal vocalizations, this research has focused mainly on the emotional interpretation of such signals. The repertoire of human nonverbal vocalizations as acoustic types, and the mapping between acoustic and emotional categories, thus remain underexplored. In a cross-linguistic naming task (Experiment 1), verbal categorization of 132 authentic (non-acted) human vocalizations by English-, Swedish- and Russian-speaking participants revealed the same major acoustic types: laugh, cry, scream, moan, and possibly roar and sigh. The association between call type and perceived emotion was systematic but non-redundant: listeners associated every call type with a limited, but in some cases relatively wide, range of emotions. The speed and consistency of naming the call type predicted the speed and consistency of inferring the caller's emotion, suggesting that acoustic and emotional categorizations are closely related. However, participants preferred to name the call type before naming the emotion. Furthermore, nonverbal categorization of the same stimuli in a triad classification task (Experiment 2) was more compatible with classification by call type than by emotion, indicating the former's greater perceptual salience. These results suggest that acoustic categorization may precede attribution of emotion, highlighting the need to distinguish between the overt form of nonverbal signals and their interpretation by the perceiver. Both within- and between-call acoustic variation can then be modeled explicitly, bringing research on human nonverbal vocalizations more in line with the work on animal communication.
Autism-specific covariation in perceptual performances: "g" or "p" factor?
Meilleur, Andrée-Anne S; Berthiaume, Claude; Bertone, Armando; Mottron, Laurent
2014-01-01
Autistic perception is characterized by atypical and sometimes exceptional performance in several low- (e.g., discrimination) and mid-level (e.g., pattern matching) tasks in both visual and auditory domains. A factor that specifically affects perceptive abilities in autistic individuals should manifest as an autism-specific association between perceptual tasks. The first purpose of this study was to explore how perceptual performances are associated within or across processing levels and/or modalities. The second purpose was to determine if general intelligence, the major factor that accounts for covariation in task performances in non-autistic individuals, equally controls perceptual abilities in autistic individuals. We asked 46 autistic individuals and 46 typically developing controls to perform four tasks measuring low- or mid-level visual or auditory processing. Intelligence was measured with the Wechsler Intelligence Scale (FSIQ) and Raven Progressive Matrices (RPM). We conducted linear regression models to compare task performances between groups and patterns of covariation between tasks. The addition of either Wechsler FSIQ or RPM to the regression models controlled for the effects of intelligence. In typically developing individuals, most perceptual tasks were associated with intelligence measured either by RPM or Wechsler FSIQ. The residual covariation between unimodal tasks, i.e., covariation not explained by intelligence, could be explained by a modality-specific factor. In the autistic group, residual covariation revealed the presence of a plurimodal factor specific to autism. Autistic individuals show exceptional performance in some perceptual tasks. Here, we demonstrate the existence of specific, plurimodal covariation that does not depend on general intelligence (or "g" factor). Instead, this residual covariation is accounted for by a common perceptual process (or "p" factor), which may drive perceptual abilities differently in autistic and non-autistic individuals.
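The residual-covariation logic described above can be sketched as follows: each task score is regressed on the intelligence measure, and correlations among the residuals index covariation that intelligence does not explain. The simulated scores and the strength of the shared perceptual factor are assumptions:

import numpy as np

rng = np.random.default_rng(6)
n = 46                                            # participants per group, as above
iq = rng.normal(100, 15, n)

# Simulated task scores: every task loads on IQ; a shared perceptual ("p") factor adds
# extra covariation that IQ does not explain.
p_factor = rng.normal(0, 1, n)
tasks = {name: 0.4 * (iq - 100) / 15 + 0.5 * p_factor + rng.normal(0, 1, n)
         for name in ["visual_low", "visual_mid", "auditory_low", "auditory_mid"]}

def residualize(y, x):
    """Residuals of y after linear regression on x (removes the intelligence effect)."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return y - X @ beta

residuals = np.column_stack([residualize(score, iq) for score in tasks.values()])
print(np.round(np.corrcoef(residuals, rowvar=False), 2))   # residual covariation across tasks

Off-diagonal correlations that remain after removing the IQ effect correspond to the "p" factor interpretation offered above.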
Autism-Specific Covariation in Perceptual Performances: “g” or “p” Factor?
Meilleur, Andrée-Anne S.; Berthiaume, Claude; Bertone, Armando; Mottron, Laurent
2014-01-01
Background Autistic perception is characterized by atypical and sometimes exceptional performance in several low- (e.g., discrimination) and mid-level (e.g., pattern matching) tasks in both visual and auditory domains. A factor that specifically affects perceptual abilities in autistic individuals should manifest as an autism-specific association between perceptual tasks. The first purpose of this study was to explore how perceptual performances are associated within or across processing levels and/or modalities. The second purpose was to determine whether general intelligence, the major factor that accounts for covariation in task performances in non-autistic individuals, equally controls perceptual abilities in autistic individuals. Methods We asked 46 autistic individuals and 46 typically developing controls to perform four tasks measuring low- or mid-level visual or auditory processing. Intelligence was measured with the Wechsler Intelligence Scale (FSIQ) and Raven Progressive Matrices (RPM). We fitted linear regression models to compare task performances between groups and patterns of covariation between tasks. The addition of either Wechsler's FSIQ or RPM in the regression models controlled for the effects of intelligence. Results In typically developing individuals, most perceptual tasks were associated with intelligence measured either by RPM or Wechsler FSIQ. The residual covariation between unimodal tasks, i.e., covariation not explained by intelligence, could be explained by a modality-specific factor. In the autistic group, residual covariation revealed the presence of a plurimodal factor specific to autism. Conclusions Autistic individuals show exceptional performance in some perceptual tasks. Here, we demonstrate the existence of specific, plurimodal covariation that does not depend on general intelligence (or “g” factor). Instead, this residual covariation is accounted for by a common perceptual process (or “p” factor), which may drive perceptual abilities differently in autistic and non-autistic individuals. PMID:25117450
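To make the residual-covariation logic above concrete, here is a minimal illustrative sketch (not the authors' analysis code): it assumes simulated scores for four perceptual tasks and an IQ measure, regresses each task on IQ, and then inspects the correlations among the residuals, the quantity interpreted in the study as modality-specific or plurimodal ("p"-factor) covariation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 46  # participants per group, as in the study

# Hypothetical simulated data: an IQ score plus four task scores that share
# both an IQ-related component and a residual "p"-like component.
iq = rng.normal(100, 15, n)
p_factor = rng.normal(0, 1, n)
tasks = {}
for name in ["visual_low", "visual_mid", "auditory_low", "auditory_mid"]:
    tasks[name] = 0.05 * iq + 0.8 * p_factor + rng.normal(0, 1, n)

def residualize(y, x):
    """Return residuals of y after a simple linear regression on x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Residual covariation: correlations among tasks after removing the IQ effect.
residuals = np.vstack([residualize(score, iq) for score in tasks.values()])
R = np.corrcoef(residuals)
names = list(tasks)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} ~ {names[j]}: residual r = {R[i, j]:.2f}")
```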
Aust, Ulrike; Braunöder, Elisabeth
2015-02-01
The present experiment investigated pigeons' and humans' processing styles (local or global) in an exemplar-based visual categorization task, in which category membership of every stimulus had to be learned individually, and in a rule-based task, in which category membership was defined by a perceptual rule. Group Intact was trained with the original pictures (providing both intact local and global information), Group Scrambled was trained with scrambled versions of the same pictures (impairing global information), and Group Blurred was trained with blurred versions (impairing local information). Subsequently, all subjects were tested for transfer to the two untrained presentation modes. Humans outperformed pigeons regarding learning speed and accuracy as well as transfer performance and showed good learning irrespective of group assignment, whereas the pigeons of Group Blurred needed longer to learn the training tasks than the pigeons of Groups Intact and Scrambled. Also, whereas humans generalized equally well to any novel presentation mode, pigeons' transfer from and to blurred stimuli was impaired. Both species showed faster learning and, for the most part, better transfer in the rule-based than in the exemplar-based task, but there was no evidence that the processing mode used depended on the type of task (exemplar- or rule-based). Whereas pigeons relied on local information throughout, humans did not show a preference for either processing level. Additional tests with grayscale versions of the training stimuli, with versions that were both blurred and scrambled, and with novel instances of the rule-based task confirmed and further extended these findings. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Near-optimal integration of facial form and motion.
Dobs, Katharina; Ma, Wei Ji; Reddy, Leila
2017-09-08
Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it is fairly well established that humans integrate low-level cues optimally, weighting each cue in proportion to its relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
Rapid Presentation of Emotional Expressions Reveals New Emotional Impairments in Tourette’s Syndrome
Mermillod, Martial; Devaux, Damien; Derost, Philippe; Rieu, Isabelle; Chambres, Patrick; Auxiette, Catherine; Legrand, Guillaume; Galland, Fabienne; Dalens, Hélène; Coulangeon, Louise Marie; Broussolle, Emmanuel; Durif, Franck; Jalenques, Isabelle
2013-01-01
Objective: Based on a variety of empirical evidence obtained within the theoretical framework of embodiment theory, we considered it likely that motor disorders in Tourette’s syndrome (TS) would have emotional consequences for TS patients. However, previous research using emotional facial categorization tasks suggests that these consequences are limited to TS patients with obsessive-compulsive behaviors (OCB). Method: These studies used long stimulus presentations, which allowed the participants to categorize the different emotional facial expressions (EFEs) on the basis of a perceptual analysis that might potentially hide a lack of emotional feeling for certain emotions. In order to reduce this perceptual bias, we used a rapid visual presentation procedure. Results: Using this new experimental method, we revealed different and surprising impairments on several EFEs in TS patients compared to matched healthy control participants. Moreover, a spatial frequency analysis of the visual signal processed by the patients suggests that these impairments may be located at a cortical level. Conclusion: The current study indicates that the rapid visual presentation paradigm makes it possible to identify various potential emotional disorders that were not revealed by the standard visual presentation procedures previously reported in the literature. Moreover, the spatial frequency analysis performed in our study suggests that the emotional deficit in TS might lie at the level of temporal cortical areas dedicated to the processing of high spatial frequency (HSF) visual information. PMID:23630481
Fific, Mario; Little, Daniel R; Nosofsky, Robert M
2010-04-01
We formalize and provide tests of a set of logical-rule models for predicting perceptual classification response times (RTs) and choice probabilities. The models are developed by synthesizing mental-architecture, random-walk, and decision-bound approaches. According to the models, people make independent decisions about the locations of stimuli along a set of component dimensions. Those independent decisions are then combined via logical rules to determine the overall categorization response. The time course of the independent decisions is modeled via random-walk processes operating along individual dimensions. Alternative mental architectures are used as mechanisms for combining the independent decisions to implement the logical rules. We derive fundamental qualitative contrasts for distinguishing among the predictions of the rule models and major alternative models of classification RT. We also use the models to predict detailed RT-distribution data associated with individual stimuli in tasks of speeded perceptual classification. PsycINFO Database Record (c) 2010 APA, all rights reserved.
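As a rough illustration of this class of models (a sketch under assumed parameter values, not the authors' implementation), the snippet below simulates independent random walks along two stimulus dimensions and combines the two decisions with a conjunctive logical rule under a serial, self-terminating architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_walk_decision(drift, threshold=20.0, step_sd=1.0, max_steps=5000):
    """Simulate one random-walk decision along a single stimulus dimension.

    Returns (decision, n_steps); decision is True if the upper boundary is
    reached first (e.g., the stimulus falls on the target side of the bound)."""
    x, t = 0.0, 0
    while abs(x) < threshold and t < max_steps:
        x += drift + rng.normal(0.0, step_sd)
        t += 1
    return x >= threshold, t

def serial_self_terminating_and(drift_dim1, drift_dim2, base_time=200):
    """Conjunction rule ('A AND B') under serial, self-terminating processing:
    stop as soon as one dimension yields a 'no'."""
    d1, t1 = random_walk_decision(drift_dim1)
    if not d1:
        return False, base_time + t1          # early termination on a 'no'
    d2, t2 = random_walk_decision(drift_dim2)
    return d2, base_time + t1 + t2            # both dimensions processed

# Illustrative stimuli: one satisfying the rule on both dimensions,
# one failing on the first dimension processed.
rts_both_yes = [serial_self_terminating_and(0.4, 0.4)[1] for _ in range(2000)]
rts_first_no = [serial_self_terminating_and(-0.4, 0.4)[1] for _ in range(2000)]
print("mean RT (arbitrary steps), rule satisfied:", np.mean(rts_both_yes))
print("mean RT (arbitrary steps), early 'no':", np.mean(rts_first_no))
```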
ERIC Educational Resources Information Center
Mulligan, Neil W.; Dew, Ilana T. Z.
2009-01-01
The generation manipulation has been critical in delineating differences between implicit and explicit memory. In contrast to past research, the present experiments indicate that generating from a rhyme cue produces as much perceptual priming as does reading. This is demonstrated for 3 visual priming tasks: perceptual identification, word-fragment…
'If you are good, I get better': the role of social hierarchy in perceptual decision-making.
Santamaría-García, Hernando; Pannunzi, Mario; Ayneto, Alba; Deco, Gustavo; Sebastián-Gallés, Nuria
2014-10-01
Until now, it has been unclear whether social hierarchy can influence sensory or perceptual cognitive processes. We evaluated the effects of social hierarchy on these processes using a basic visual perceptual decision task. We constructed a social hierarchy in which participants performed the perceptual task separately with two covertly simulated players (superior, inferior). Participants were faster (better) when performing the discrimination task with the superior player. We studied the time course of social hierarchy processing using event-related potentials and observed hierarchical effects even in early stages of sensory-perceptual processing, suggesting early top-down modulation by social hierarchy. Moreover, in a parallel analysis, we fitted a drift-diffusion model (DDM) to the results to evaluate the decision-making process in this perceptual task in the context of a social hierarchy. Consistently, the DDM pointed to nondecision time (probably perceptual encoding) as the principal period influenced by social hierarchy. © The Author (2013). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
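The drift-diffusion account can be sketched as follows. This is a simplified forward simulation with assumed parameter values (not the authors' fitting procedure), in which only the nondecision time differs between the "superior" and "inferior" partner conditions, the component the study identifies as the locus of the hierarchy effect.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ddm(drift=0.25, boundary=1.0, nondecision=0.35,
                 dt=0.005, noise_sd=1.0, n_trials=1000):
    """Simulate response times (in seconds) from a basic drift-diffusion model
    with symmetric boundaries at +/- `boundary` and a start point of zero."""
    rts = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise_sd * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(nondecision + t)
    return np.array(rts)

# Assumed parameter values: identical drift and boundary in both conditions,
# but a shorter nondecision time when paired with the 'superior' player.
rt_superior = simulate_ddm(nondecision=0.30)
rt_inferior = simulate_ddm(nondecision=0.37)
print(f"mean RT with superior partner: {rt_superior.mean():.3f} s")
print(f"mean RT with inferior partner: {rt_inferior.mean():.3f} s")
```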
Ecker, Ullrich K H; Arend, Anna M; Bergström, Kirstin; Zimmer, Hubert D
2009-09-01
Research on the effects of perceptual manipulations on recognition memory has suggested that (a) recollection is selectively influenced by task-relevant information and (b) familiarity can be considered perceptually specific. The present experiment tested divergent assumptions that (a) perceptual features can influence conscious object recollection via verbal code despite being task-irrelevant and that (b) perceptual features do not influence object familiarity if study is verbal-conceptual. At study, subjects named objects and their presentation colour; this was followed by an old/new object recognition test. Event-related potentials (ERP) showed that a study-test manipulation of colour impacted selectively on the ERP effect associated with recollection, while a size manipulation showed no effect. It is concluded that (a) verbal predicates generated at study are potent episodic memory agents that modulate recollection even if the recovered feature information is task-irrelevant and (b) commonly found perceptual match effects on familiarity critically depend on perceptual processing at study.
Task-relevant perceptual features can define categories in visual memory too.
Antonelli, Karla B; Williams, Carrick C
2017-11-01
Although Konkle, Brady, Alvarez, and Oliva (2010, Journal of Experimental Psychology: General, 139(3), 558) claim that visual long-term memory (VLTM) is organized on underlying conceptual, not perceptual, information, visual memory results from visual search tasks are not well explained by this theory. We hypothesized that when viewing an object, any task-relevant visual information is critical to the organizational structure of VLTM. In two experiments, we examined the organization of VLTM by measuring the amount of retroactive interference created by objects possessing different combinations of task-relevant features. Based on task instructions, only the conceptual category was task relevant or both the conceptual category and a perceptual object feature were task relevant. Findings indicated that when made task relevant, perceptual object feature information, along with conceptual category information, could affect memory organization for objects in VLTM. However, when perceptual object feature information was task irrelevant, it did not contribute to memory organization; instead, memory defaulted to being organized around conceptual category information. These findings support the theory that a task-defined organizational structure is created in VLTM based on the relevance of particular object features and information.
The fate of task-irrelevant visual motion: perceptual load versus feature-based attention.
Taya, Shuichiro; Adams, Wendy J; Graf, Erich W; Lavie, Nilli
2009-11-18
We tested contrasting predictions derived from perceptual load theory and from recent feature-based selection accounts. Observers viewed moving, colored stimuli and performed low or high load tasks associated with one stimulus feature, either color or motion. The resultant motion aftereffect (MAE) was used to evaluate attentional allocation. We found that task-irrelevant visual features received less attention than co-localized task-relevant features of the same objects. Moreover, when color and motion features were co-localized yet perceived to belong to two distinct surfaces, feature-based selection was further increased at the expense of object-based co-selection. Load theory predicts that the MAE for task-irrelevant motion would be reduced with a higher load color task. However, this was not seen for co-localized features; perceptual load only modulated the MAE for task-irrelevant motion when this was spatially separated from the attended color location. Our results suggest that perceptual load effects are mediated by spatial selection and do not generalize to the feature domain. Feature-based selection operates to suppress processing of task-irrelevant, co-localized features, irrespective of perceptual load.
NASA Technical Reports Server (NTRS)
Kole, James A.; Schneider, Vivian I.; Healy, Alice F.; Barshi, Immanuel
2017-01-01
Subjects trained in a standard data entry task, which involved typing numbers (e.g., 5421) using their right hands. At test (6 months post-training), subjects completed the standard task, followed by a left-hand variant (typing with their left hands) that involved the same perceptual, but different motoric, processes as the standard task. At a second test (8 months post-training), subjects completed the standard task, followed by a code variant (translating letters into digits, then typing the digits with their right hands) that involved different perceptual, but the same motoric, processes as the standard task. For each of the three tasks, half the trials were trained numbers (old) and half were new. Repetition priming (faster response times to old than new numbers) was found for each task. Repetition priming for the standard task reflects retention of trained numbers; for the left-hand variant, it reflects transfer of perceptual processes; and for the code variant, transfer of motoric processes. There was thus evidence for both specificity and generalizability of the perceptual and motoric processes trained in data entry over very long retention intervals.
ERIC Educational Resources Information Center
Heit, Evan; Hayes, Brett K.
2005-01-01
V. M. Sloutsky and A. V. Fisher reported 5 experiments documenting relations among categorization, induction, recognition, and similarity in children as well as adults and proposed a new model of induction, SINC (similarity, induction, categorization). Those authors concluded that induction depends on perceptual similarity rather than conceptual…
The neural basis for novel semantic categorization.
Koenig, Phyllis; Smith, Edward E; Glosser, Guila; DeVita, Chris; Moore, Peachie; McMillan, Corey; Gee, Jim; Grossman, Murray
2005-01-15
We monitored regional cerebral activity with BOLD fMRI during acquisition of a novel semantic category and subsequent categorization of test stimuli by a rule-based strategy or a similarity-based strategy. We observed different patterns of activation in direct comparisons of rule- and similarity-based categorization. During rule-based category acquisition, subjects recruited anterior cingulate, thalamic, and parietal regions to support selective attention to perceptual features, and left inferior frontal cortex to help maintain rules in working memory. Subsequent rule-based categorization revealed anterior cingulate and parietal activation while judging stimuli whose conformity with the rules was readily apparent, and left inferior frontal recruitment during judgments of stimuli whose conformity was less apparent. By comparison, similarity-based category acquisition showed recruitment of anterior prefrontal and posterior cingulate regions, presumably to support successful retrieval of previously encountered exemplars from long-term memory, and bilateral temporal-parietal activation for perceptual feature integration. Subsequent similarity-based categorization revealed temporal-parietal, posterior cingulate, and anterior prefrontal activation. These findings suggest that large-scale networks support relatively distinct categorization processes during the acquisition and judgment of semantic category knowledge.
Effects of regular aerobic exercise on visual perceptual learning.
Connell, Charlotte J W; Thompson, Benjamin; Green, Hayden; Sullivan, Rachel K; Gant, Nicholas
2017-12-02
This study investigated the influence of five days of moderate intensity aerobic exercise on the acquisition and consolidation of visual perceptual learning using a motion direction discrimination (MDD) task. The timing of exercise relative to learning was manipulated by administering exercise either before or after perceptual training. Within a matched-subjects design, twenty-seven healthy participants (n = 9 per group) completed five consecutive days of perceptual training on a MDD task under one of three interventions: no exercise, exercise before the MDD task, or exercise after the MDD task. MDD task accuracy improved in all groups over the five-day period, but there was a trend for impaired learning when exercise was performed before visual perceptual training. MDD task accuracy (mean ± SD) increased in exercise before by 4.5 ± 6.5%; exercise after by 11.8 ± 6.4%; and no exercise by 11.3 ± 7.2%. All intervention groups displayed similar MDD threshold reductions for the trained and untrained motion axes after training. These findings suggest that moderate daily exercise does not enhance the rate of visual perceptual learning for an MDD task or the transfer of learning to an untrained motion axis. Furthermore, exercise performed immediately prior to a visual perceptual learning task may impair learning. Further research with larger groups is required in order to better understand these effects. Copyright © 2017 Elsevier Ltd. All rights reserved.
Fast transfer of crossmodal time interval training.
Chen, Lihan; Zhou, Xiaolin
2014-06-01
Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.
Harnessing the wandering mind: the role of perceptual load.
Forster, Sophie; Lavie, Nilli
2009-06-01
Perceptual load is a key determinant of distraction by task-irrelevant stimuli (e.g., Lavie, N. (2005). Distracted and confused?: Selective attention under load. Trends in Cognitive Sciences, 9, 75-82). Here we establish the role of perceptual load in determining an internal form of distraction by task-unrelated thoughts (TUTs or "mind-wandering"). Four experiments demonstrated reduced frequency of TUTs with high compared to low perceptual load in a visual-search task. Alternative accounts in terms of increased demands on responses, verbal working memory or motivation were ruled out and clear effects of load were found for unintentional TUTs. Individual differences in load effects on internal (TUTs) and external (response-competition) distractors were correlated. These results suggest that exhausting attentional capacity in task-relevant processing under high perceptual load can reduce processing of task-irrelevant information from external and internal sources alike.
Sadeh, Naomi; Bredemeier, Keith
2010-01-01
Attentional control theory (Eysenck et al., 2007) posits that taxing attentional resources impairs performance efficiency in anxious individuals. This theory, however, does not explicitly address if or how the relation between anxiety and attentional control depends upon the perceptual demands of the task at hand. Consequently, the present study examined the relation between trait anxiety and task performance using a perceptual load task (Maylor & Lavie, 1998). Sixty-eight male college students completed a visual search task that indexed processing of irrelevant distractors systematically across four levels of perceptual load. Results indicated that anxiety was related to difficulty suppressing the behavioral effects of irrelevant distractors (i.e., decreased reaction time efficiency) under high, but not low, perceptual loads. In contrast, anxiety was not associated with error rates on the task. These findings are consistent with the prediction that anxiety is associated with impairments in performance efficiency under conditions that tax attentional resources. PMID:21547776
Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B
2012-07-16
Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.
Parks, Colleen M
2013-07-01
Research examining the importance of surface-level information to familiarity in recognition memory tasks is mixed: Sometimes it affects recognition and sometimes it does not. One potential explanation of the inconsistent findings comes from the ideas of dual process theory of recognition and the transfer-appropriate processing framework, which suggest that the extent to which perceptual fluency matters on a recognition test depends in large part on the task demands. A test that recruits perceptual processing for discrimination should show greater perceptual effects and smaller conceptual effects than standard recognition, similar to the pattern of effects found in perceptual implicit memory tasks. This idea was tested in the current experiment by crossing a levels of processing manipulation with a modality manipulation on a series of recognition tests that ranged from conceptual (standard recognition) to very perceptually demanding (a speeded recognition test with degraded stimuli). Results showed that the levels of processing effect decreased and the effect of modality increased when tests were made perceptually demanding. These results support the idea that surface-level features influence performance on recognition tests when they are made salient by the task demands. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Shared mechanisms of perceptual learning and decision making.
Law, Chi-Tat; Gold, Joshua I
2010-04-01
Perceptual decisions require the brain to weigh noisy evidence from sensory neurons to form categorical judgments that guide behavior. Here we review behavioral and neurophysiological findings suggesting that at least some forms of perceptual learning do not appear to affect the response properties of neurons that represent the sensory evidence. Instead, improved perceptual performance results from changes in how the sensory evidence is selected and weighed to form the decision. We discuss the implications of this idea for possible sites and mechanisms of training-induced improvements in perceptual processing in the brain. Copyright © 2009 Cognitive Science Society, Inc.
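A toy version of the reweighting idea reviewed above can be sketched as follows (illustrative only, not the specific model from this literature): a fixed bank of noisy, direction-tuned sensory units feeds a linear decision unit, and only the read-out weights are updated by a simple error-driven rule, so performance improves while the sensory representation itself never changes.

```python
import numpy as np

rng = np.random.default_rng(3)

n_units = 36
preferred = np.linspace(0, 360, n_units, endpoint=False)  # tuning centres in degrees

def sensory_response(direction, gain=1.0):
    """Fixed sensory layer: noisy, direction-tuned responses (never re-trained)."""
    tuning = np.exp((np.cos(np.deg2rad(direction - preferred)) - 1.0) / 0.3)
    return gain * tuning + rng.normal(0.0, 1.0, n_units)

weights = np.zeros(n_units)   # decision read-out weights: the only thing that learns
lr = 0.05

accuracy = []
for trial in range(4000):
    direction = rng.choice([0, 180])            # two-alternative motion task
    r = sensory_response(direction)
    decision_var = weights @ r
    choice = 0 if decision_var >= 0 else 180    # linear read-out of the fixed pool
    accuracy.append(choice == direction)
    # Error-driven update of the read-out weights only
    target = 1.0 if direction == 0 else -1.0
    weights += lr * (target - np.tanh(decision_var)) * r

print("accuracy over first 500 trials:", np.mean(accuracy[:500]))
print("accuracy over last 500 trials: ", np.mean(accuracy[-500:]))
```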
Mid-level perceptual features distinguish objects of different real-world sizes.
Long, Bria; Konkle, Talia; Cohen, Michael A; Alvarez, George A
2016-01-01
Understanding how perceptual and conceptual representations are connected is a fundamental goal of cognitive science. Here, we focus on a broad conceptual distinction that constrains how we interact with objects: real-world size. Although there appear to be clear perceptual correlates for basic-level categories (apples look like other apples, oranges look like other oranges), the perceptual correlates of broader categorical distinctions are largely unexplored, i.e., do small objects look like other small objects? Because there are many kinds of small objects (e.g., cups, keys), there may be no reliable perceptual features that distinguish them from big objects (e.g., cars, tables). Contrary to this intuition, we demonstrated that big and small objects have reliable perceptual differences that can be extracted by early stages of visual processing. In a series of visual search studies, participants found target objects faster when the distractor objects differed in real-world size. These results held when we broadly sampled big and small objects, when we controlled for low-level features and image statistics, and when we reduced objects to texforms (unrecognizable textures that loosely preserve an object's form). However, this effect was absent when we used more basic textures. These results demonstrate that big and small objects have reliably different mid-level perceptual features, and suggest that early perceptual information about broad-category membership may influence downstream object perception, recognition, and categorization processes. (c) 2015 APA, all rights reserved.
Visual search and autism symptoms: What young children search for and co-occurring ADHD matter.
Doherty, Brianna R; Charman, Tony; Johnson, Mark H; Scerif, Gaia; Gliga, Teodora
2018-05-03
Superior visual search is one of the most common findings in the autism spectrum disorder (ASD) literature. Here, we ascertain how generalizable these findings are across task and participant characteristics, in light of recent replication failures. We tested 106 3-year-old children at familial risk for ASD, a sample that presents high ASD and ADHD symptoms, and 25 control participants, in three multi-target search conditions: easy exemplar search (look for cats amongst artefacts), difficult exemplar search (look for dogs amongst chairs/tables perceptually similar to dogs), and categorical search (look for animals amongst artefacts). Performance was related to dimensional measures of ASD and ADHD, in agreement with current research domain criteria (RDoC). We found that ASD symptom severity did not associate with enhanced performance in search, but did associate with poorer categorical search in particular, consistent with literature describing impairments in categorical knowledge in ASD. Furthermore, ASD and ADHD symptoms were both associated with more disorganized search paths across all conditions. Thus, ASD traits do not always convey an advantage in visual search; on the contrary, ASD traits may be associated with difficulties in search depending upon the nature of the stimuli (e.g., exemplar vs. categorical search) and the presence of co-occurring symptoms. © 2018 John Wiley & Sons Ltd.
Lambert, Anthony J; Wootton, Adrienne
2017-08-01
Different patterns of high-density EEG activity were elicited by the same peripheral stimuli, in the context of Landmark Cueing and Perceptual Discrimination tasks. The C1 component of the visual event-related potential (ERP) at parietal-occipital electrode sites was larger in the Landmark Cueing task, and source localisation suggested greater activation in the superior parietal lobule (SPL) in this task, compared to the Perceptual Discrimination task, indicating stronger early recruitment of the dorsal visual stream. In the Perceptual Discrimination task, source localisation suggested widespread activation of the inferior temporal gyrus (ITG) and fusiform gyrus (FFG), structures associated with the ventral visual stream, during the early phase of the P1 ERP component. Moreover, during a later epoch (171-270 ms after stimulus onset), increased temporal-occipital negativity and stronger recruitment of ITG and FFG were observed in the Perceptual Discrimination task. These findings illuminate the contrasting functions of the dorsal and ventral visual streams, which support rapid shifts of attention in response to contextual landmarks and conscious discrimination, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kim, Junsuk; Chung, Yoon Gi; Chung, Soon-Cheol; Bülthoff, Heinrich H; Kim, Sung-Phil
2016-01-01
As the use of wearable haptic devices with vibrating alert features is commonplace, an understanding of the perceptual categorization of vibrotactile frequencies has become important. This understanding can be substantially enhanced by unveiling how neural activity represents vibrotactile frequency information. Using functional magnetic resonance imaging (fMRI), this study investigated categorical clustering patterns of the frequency-dependent neural activity evoked by vibrotactile stimuli with gradually changing frequencies from 20 to 200 Hz. First, a searchlight multi-voxel pattern analysis (MVPA) was used to find brain regions exhibiting neural activities associated with frequency information. We found that the contralateral postcentral gyrus (S1) and the supramarginal gyrus (SMG) carried frequency-dependent information. Next, we applied multidimensional scaling (MDS) to find low-dimensional neural representations of different frequencies obtained from the multi-voxel activity patterns within these regions. The clustering analysis on the MDS results showed that neural activity patterns of 20-100 Hz and 120-200 Hz were divided into two distinct groups. Interestingly, this neural grouping conformed to the perceptual frequency categories found in the previous behavioral studies. Our findings therefore suggest that neural activity patterns in the somatosensory cortical regions may provide a neural basis for the perceptual categorization of vibrotactile frequency.
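To illustrate the representational analysis (a hedged sketch using scikit-learn on invented data, not the authors' pipeline): multidimensional scaling embeds the pairwise dissimilarities between hypothetical multi-voxel patterns for each frequency in a low-dimensional space, and a two-cluster solution then asks whether the frequencies split into the low (20-100 Hz) and high (120-200 Hz) groups.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

frequencies = np.arange(20, 201, 20)   # 20-200 Hz in 20-Hz steps
n_voxels = 100

# Hypothetical multi-voxel patterns: assume frequencies up to 100 Hz and from
# 120 Hz upwards evoke patterns scattered around two different prototypes.
low_proto = rng.normal(0, 1, n_voxels)
high_proto = rng.normal(0, 1, n_voxels)
patterns = np.array([
    (low_proto if f <= 100 else high_proto) + rng.normal(0, 0.5, n_voxels)
    for f in frequencies
])

# Low-dimensional embedding of the pattern dissimilarities (Euclidean here).
embedding = MDS(n_components=2, random_state=0).fit_transform(patterns)

# Two-cluster solution: does it recover the low/high perceptual grouping?
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
for f, lab in zip(frequencies, labels):
    print(f"{f:3d} Hz -> cluster {lab}")
```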
Visual short-term memory load strengthens selective attention.
Roper, Zachary J J; Vecera, Shaun P
2014-04-01
Perceptual load theory accounts for many attentional phenomena; however, its mechanism remains elusive because it invokes underspecified attentional resources. Recent dual-task evidence has revealed that a concurrent visual short-term memory (VSTM) load slows visual search and reduces contrast sensitivity, but it is unknown whether a VSTM load also constricts attention in a canonical perceptual load task. If attentional selection draws upon VSTM resources, then distraction effects (which measure attentional "spill-over") will be reduced as competition for resources increases. Observers performed a low perceptual load flanker task during the delay period of a VSTM change detection task. We observed a reduction of the flanker effect in the perceptual load task as a function of increasing concurrent VSTM load. These findings were not due to perceptual-level interactions between the physical displays of the two tasks. Our findings suggest that perceptual representations of distractor stimuli compete with the maintenance of visual representations held in memory. We conclude that access to VSTM determines the degree of attentional selectivity; when VSTM is not completely taxed, it is more likely for task-irrelevant items to be consolidated and, consequently, affect responses. The "resources" hypothesized by load theory are at least partly mnemonic in nature, due to the strong correspondence they share with VSTM capacity.
Jackson, Jade; Rich, Anina N; Williams, Mark A; Woolgar, Alexandra
2017-02-01
Human cognition is characterized by astounding flexibility, enabling us to select appropriate information according to the objectives of our current task. A circuit of frontal and parietal brain regions, often referred to as the frontoparietal attention network or multiple-demand (MD) regions, are believed to play a fundamental role in this flexibility. There is evidence that these regions dynamically adjust their responses to selectively process information that is currently relevant for behavior, as proposed by the "adaptive coding hypothesis" [Duncan, J. An adaptive coding model of neural function in prefrontal cortex. Nature Reviews Neuroscience, 2, 820-829, 2001]. Could this provide a neural mechanism for feature-selective attention, the process by which we preferentially process one feature of a stimulus over another? We used multivariate pattern analysis of fMRI data during a perceptually challenging categorization task to investigate whether the representation of visual object features in the MD regions flexibly adjusts according to task relevance. Participants were trained to categorize visually similar novel objects along two orthogonal stimulus dimensions (length/orientation) and performed short alternating blocks in which only one of these dimensions was relevant. We found that multivoxel patterns of activation in the MD regions encoded the task-relevant distinctions more strongly than the task-irrelevant distinctions: The MD regions discriminated between stimuli of different lengths when length was relevant and between the same objects according to orientation when orientation was relevant. The data suggest a flexible neural system that adjusts its representation of visual objects to preferentially encode stimulus features that are currently relevant for behavior, providing a neural mechanism for feature-selective attention.
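A minimal sketch of the decoding logic on invented data (not the authors' analysis): cross-validated classification of one stimulus dimension from simulated multi-voxel patterns, run once for a condition in which the coding of that dimension is assumed to be strong (task-relevant) and once for a condition in which it is assumed to be weak (task-irrelevant).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_voxels = 200, 60

def simulate_patterns(signal_strength):
    """Simulate trial-by-trial voxel patterns in which a binary stimulus
    dimension (e.g., short vs. long) is encoded with a given strength."""
    labels = rng.integers(0, 2, n_trials)
    coding_pattern = rng.normal(0, 1, n_voxels)   # hypothetical coding direction
    X = rng.normal(0, 1, (n_trials, n_voxels)) \
        + signal_strength * np.outer(labels, coding_pattern)
    return X, labels

# Assumption for illustration: stronger coding when the dimension is task-relevant.
X_rel, y_rel = simulate_patterns(signal_strength=0.5)
X_irr, y_irr = simulate_patterns(signal_strength=0.1)

clf = LogisticRegression(max_iter=1000)
acc_rel = cross_val_score(clf, X_rel, y_rel, cv=5).mean()
acc_irr = cross_val_score(clf, X_irr, y_irr, cv=5).mean()
print(f"decoding accuracy, dimension task-relevant:   {acc_rel:.2f}")
print(f"decoding accuracy, dimension task-irrelevant: {acc_irr:.2f}")
```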
How "mere" is the mere ownership effect in memory? Evidence for semantic organization processes.
Englert, Julia; Wentura, Dirk
2016-11-01
Memory is better for items arbitrarily assigned to the self than for items assigned to another person (mere ownership effect, MOE). In a series of six experiments, we investigated the role of semantic processes for the MOE. Following successful replication, we investigated whether the MOE was contingent upon semantic processing: For meaningless stimuli, there was no MOE. Testing for a potential role of semantic elaboration using meaningful stimuli in an encoding task without verbal labels, we found evidence of spontaneous semantic processing irrespective of self- or other-assignment. When semantic organization was manipulated, the MOE vanished if a semantic classification task was added to the self/other assignment but persisted for a perceptual classification task. Furthermore, we found greater clustering of self-assigned than of other-assigned items in free recall. Taken together, these results suggest that the MOE could be based on the organizational principle of a "me" versus "not-me" categorization. Copyright © 2016 Elsevier Inc. All rights reserved.
The neural basis of metacognitive ability
Fleming, Stephen M.; Dolan, Raymond J.
2012-01-01
Ability in various cognitive domains is often assessed by measuring task performance, such as the accuracy of a perceptual categorization. A similar analysis can be applied to metacognitive reports about a task to quantify the degree to which an individual is aware of his or her success or failure. Here, we review the psychological and neural underpinnings of metacognitive accuracy, drawing on research in memory and decision-making. These data show that metacognitive accuracy is dissociable from task performance and varies across individuals. Convergent evidence indicates that the function of the rostral and dorsal aspect of the lateral prefrontal cortex (PFC) is important for the accuracy of retrospective judgements of performance. In contrast, prospective judgements of performance may depend upon medial PFC. We close with a discussion of how metacognitive processes relate to concepts of cognitive control, and propose a neural synthesis in which dorsolateral and anterior prefrontal cortical subregions interact with interoceptive cortices (cingulate and insula) to promote accurate judgements of performance. PMID:22492751
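One common way to quantify the metacognitive accuracy discussed here, separately from first-order task performance, is the area under the type-2 ROC curve: how well trial-by-trial confidence discriminates the observer's own correct from incorrect responses. The sketch below computes it for simulated data; the generative assumptions are illustrative only.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
n_trials = 1000

# Hypothetical observer: the same noisy internal evidence drives both the
# perceptual choice (type-1) and the confidence rating (type-2).
stimulus = rng.choice([-1, 1], n_trials)             # two stimulus classes
evidence = stimulus + rng.normal(0, 1, n_trials)     # noisy internal evidence
choice = np.where(evidence >= 0, 1, -1)
correct = (choice == stimulus).astype(int)
confidence = np.abs(evidence) + rng.normal(0, 0.5, n_trials)  # metacognitive noise

# Type-1 performance (task accuracy) vs. type-2 performance (metacognition).
print("task accuracy:", correct.mean())
print("type-2 AUROC:", round(roc_auc_score(correct, confidence), 2))
```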
Action video game play facilitates the development of better perceptual templates
Bejjanki, Vikranth R.; Zhang, Ruyuan; Li, Renjie; Pouget, Alexandre; Green, C. Shawn; Lu, Zhong-Lin; Bavelier, Daphne
2014-01-01
The field of perceptual learning has identified changes in perceptual templates as a powerful mechanism mediating the learning of statistical regularities in our environment. By measuring threshold-vs.-contrast curves using an orientation identification task under varying levels of external noise, the perceptual template model (PTM) allows one to disentangle various sources of signal-to-noise changes that can alter performance. We use the PTM approach to elucidate the mechanism that underlies the wide range of improvements noted after action video game play. We show that action video game players make use of improved perceptual templates compared with nonvideo game players, and we confirm a causal role for action video game play in inducing such improvements through a 50-h training study. Then, by adapting a recent neural model to this task, we demonstrate how such improved perceptual templates can arise from reweighting the connectivity between visual areas. Finally, we establish that action gamers do not enter the perceptual task with improved perceptual templates. Instead, although performance in action gamers is initially indistinguishable from that of nongamers, action gamers more rapidly learn the proper template as they experience the task. Taken together, our results establish for the first time to our knowledge the development of enhanced perceptual templates following action game play. Because such an improvement can facilitate the inference of the proper generative model for the task at hand, unlike perceptual learning that is quite specific, it thus elucidates a general learning mechanism that can account for the various behavioral benefits noted after action game play. PMID:25385590
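For readers unfamiliar with the perceptual template model mentioned above, the sketch below writes down one common parameterization of its discriminability function and numerically solves for the threshold contrast at a fixed d' criterion across external-noise levels, producing a threshold versus external-noise curve of the kind described. The functional form and parameter values are textbook-style assumptions, not those fitted in this study.

```python
import numpy as np
from scipy.optimize import brentq

def d_prime(c, n_ext, beta=1.2, gamma=2.0, n_mul=0.2, n_add=0.01):
    """One common parameterization of perceptual-template-model discriminability
    as a function of signal contrast c and external-noise contrast n_ext."""
    signal = (beta * c) ** (2 * gamma)
    noise = (n_ext ** (2 * gamma)
             + n_mul ** 2 * (signal + n_ext ** (2 * gamma))
             + n_add ** 2)
    return np.sqrt(signal / noise)

def threshold_contrast(n_ext, criterion=1.5):
    """Contrast needed to reach the criterion d' at a given external-noise level."""
    return brentq(lambda c: d_prime(c, n_ext) - criterion, 1e-4, 10.0)

for n_ext in [0.0, 0.02, 0.04, 0.08, 0.16, 0.33]:
    print(f"external noise {n_ext:.2f} -> threshold contrast "
          f"{threshold_contrast(n_ext):.3f}")
```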
To hear or not to hear: Voice processing under visual load.
Zäske, Romi; Perlich, Marie-Christin; Schweinberger, Stefan R
2016-07-01
Adaptation to female voices causes subsequent voices to be perceived as more male, and vice versa. This contrastive aftereffect disappears under spatial inattention to adaptors, suggesting that voices are not encoded automatically. According to Lavie, Hirst, de Fockert, and Viding (2004), the processing of task-irrelevant stimuli during selective attention depends on perceptual resources and working memory. Possibly due to their social significance, faces may be an exceptional domain: That is, task-irrelevant faces can escape perceptual load effects. Here we tested voice processing, to study whether voice gender aftereffects (VGAEs) depend on low or high perceptual (Exp. 1) or working memory (Exp. 2) load in a relevant visual task. Participants adapted to irrelevant voices while either searching digit displays for a target (Exp. 1) or recognizing studied digits (Exp. 2). We found that the VGAE was unaffected by perceptual load, indicating that task-irrelevant voices, like faces, can also escape perceptual-load effects. Intriguingly, the VGAE was increased under high memory load. Therefore, visual working memory load, but not general perceptual load, determines the processing of task-irrelevant voices.
ERIC Educational Resources Information Center
Blair, Mark R.; Watson, Marcus R.; Walshe, R. Calen; Maj, Fillip
2009-01-01
Humans have an extremely flexible ability to categorize regularities in their environment, in part because of attentional systems that allow them to focus on important perceptual information. In formal theories of categorization, attention is typically modeled with weights that selectively bias the processing of stimulus features. These theories…
Perceiving Age and Gender in Unfamiliar Faces: An fMRI Study on Face Categorization
ERIC Educational Resources Information Center
Wiese, Holger; Kloth, Nadine; Güllmar, Daniel; Reichenbach, Jürgen R.; Schweinberger, Stefan R.
2012-01-01
Efficient processing of unfamiliar faces typically involves their categorization (e.g., into old vs. young or male vs. female). However, age and gender categorization may pose different perceptual demands. In the present study, we employed functional magnetic resonance imaging (fMRI) to compare the activity evoked during age vs. gender…
ERIC Educational Resources Information Center
Diesendruck, Gil; Peretz, Shimon
2013-01-01
Visual appearance is one of the main cues children rely on when categorizing novel objects. In 3 studies, testing 128 3-year-olds and 192 5-year-olds, we investigated how various kinds of information may differentially lead children to overlook visual appearance in their categorization decisions across domains. Participants saw novel animals or…
Ashley, Mark J; Ashley, Jessica; Kreber, Lisa
2012-01-01
Traumatic brain injury (TBI) results in disruption of information processing via damage to primary, secondary, and tertiary cortical regions, as well as subcortical pathways supporting information flow within and between cortical structures. TBI predominantly affects the anterior frontal poles, anterior temporal poles, white matter tracts and medial temporal structures. Fundamental information processing skills such as attention, perceptual processing, categorization and cognitive distance are concentrated within these same regions and are frequently disrupted following injury. Information processing skills improve in accordance with the extent to which residual frontal and temporal neurons can be encouraged to recruit and bias neuronal networks, or the degree to which the functional connectivity of neural networks can be re-established, resulting in re-emergence or regeneration of specific cognitive skills. Higher-order cognitive processes, i.e., memory, reasoning, problem solving and other executive functions, are dependent upon the integrity of attention, perceptual processing, categorization, and cognitive distance. A therapeutic construct for treatment of attention, perceptual processing, categorization and cognitive distance deficits is presented, along with an interventional model for encouraging the re-emergence or regeneration of these fundamental information processing skills.
Attentional capture under high perceptual load.
Cosman, Joshua D; Vecera, Shaun P
2010-12-01
Attentional capture by abrupt onsets can be modulated by several factors, including the complexity, or perceptual load, of a scene. We have recently demonstrated that observers are less likely to be captured by abruptly appearing, task-irrelevant stimuli when they perform a search that is high, as opposed to low, in perceptual load (Cosman & Vecera, 2009), consistent with perceptual load theory. However, recent results indicate that onset frequency can influence stimulus-driven capture, with infrequent onsets capturing attention more often than did frequent onsets. Importantly, in our previous task, an abrupt onset was present on every trial, and consequently, attentional capture might have been affected by both onset frequency and perceptual load. In the present experiment, we examined whether onset frequency influences attentional capture under conditions of high perceptual load. When onsets were presented frequently, we replicated our earlier results; attentional capture by onsets was modulated under conditions of high perceptual load. Importantly, however, when onsets were presented infrequently, we observed robust capture effects. These results conflict with a strong form of load theory and, instead, suggest that exposure to the elements of a task (e.g., abrupt onsets) combines with high perceptual load to modulate attentional capture by task-irrelevant information.
McMurray, Bob; Jongman, Allard
2012-01-01
Most theories of categorization emphasize how continuous perceptual information is mapped to categories. However, equally important are the informational assumptions of a model, that is, the type of information subserving this mapping. This is crucial in speech perception, where the signal is variable and context-dependent. This study assessed the informational assumptions of several models of speech categorization, in particular, the number of cues that are the basis of categorization and whether these cues represent the input veridically or have undergone compensation. We collected a corpus of 2880 fricative productions (Jongman, Wayland & Wong, 2000) spanning many talker- and vowel-contexts and measured 24 cues for each. A subset was also presented to listeners in an 8AFC phoneme categorization task. We then trained a common classification model based on logistic regression to categorize the fricative from the cue values, and manipulated the information in the training set to contrast (1) models based on a small number of invariant cues, (2) models using all cues without compensation, and (3) models in which cues underwent compensation for contextual factors. Compensation was modeled by Computing Cues Relative to Expectations (C-CuRE), a new approach to compensation that preserves fine-grained detail in the signal. Only the compensation model achieved accuracy similar to that of listeners and showed the same effects of context. Thus, even simple categorization metrics can overcome the variability in speech when sufficient information is available and compensation schemes like C-CuRE are employed. PMID:21417542
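The classification scheme can be illustrated with a small sketch on invented data (not the study's corpus or code): each acoustic cue is regressed on context factors (talker and vowel) and replaced by its residual, which is the spirit of computing cues relative to expectations (C-CuRE), and a multinomial logistic regression is then trained on either the raw or the residualized cues.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 2000

# Invented corpus: a fricative category, talker and vowel context, and two cues
# that carry category information plus strong talker/vowel influences.
category = rng.integers(0, 8, n)      # 8 fricative categories
talker = rng.integers(0, 20, n)
vowel = rng.integers(0, 6, n)
cues = np.column_stack([
    1.0 * category + 4.0 * talker / 20 + rng.normal(0, 1, n),
    0.5 * category + 3.0 * vowel / 6 + rng.normal(0, 1, n),
])

def one_hot(values):
    """Simple one-hot coding of an integer-coded factor."""
    values = np.asarray(values)
    return np.eye(values.max() + 1)[values]

# C-CuRE-style compensation: keep the residual of each cue after regressing
# it on the contextual factors (talker, vowel).
context = np.column_stack([one_hot(talker), one_hot(vowel)])
residual_cues = cues - LinearRegression().fit(context, cues).predict(context)

clf = LogisticRegression(max_iter=2000)
acc_raw = cross_val_score(clf, cues, category, cv=5).mean()
acc_ccure = cross_val_score(clf, residual_cues, category, cv=5).mean()
print(f"categorization accuracy, raw cues:          {acc_raw:.2f}")
print(f"categorization accuracy, residualized cues: {acc_ccure:.2f}")
```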
Fabi, Sarah; Leuthold, Hartmut
2018-06-01
The aim of this study was to identify influences of racial bias on empathic processing, from early stimulus encoding, through categorization, to late motor processing stages, by comparing brain responses (electroencephalogram) to pictures of fair- and dark-colored hands in painful or neutral daily-life situations. Participants performed a pain judgment task and a skin color judgment task. Event-related brain potentials (ERPs) substantiated former findings of automatic empathic influences on stimulus encoding, reflected by the early posterior negativity (EPN), and late controlled influences on stimulus categorization, reflected by the late posterior positivity (P3b). Concerning the racial bias in empathy (RBE) effect, more positive amplitudes in the 280-340 ms time window over frontocentral electrodes in the painful than in the neutral condition for fair- but not dark-colored hands indicate an early influence of racial bias. This was further supported by correlations with empathic concern scores for fair- but not dark-colored stimuli. Additionally, P3b amplitude differences between fair- and dark-colored hands to painful stimuli increased with participants' implicit racial attitude, suggesting that the categorization stage is not completely immune to racial bias. Regarding the motor processing stages, power change values in the upper beta-band (19-30 Hz) revealed larger facilitation of sensorimotor activity before the response and larger inhibition after the response for painful compared to neutral stimuli, independent of skin color. In conclusion, the present findings indicate an influence of the RBE on early perceptual encoding, and also on the late categorization stage, which depends on the participant's implicit attitude towards racial outgroups. Copyright © 2018 Elsevier Ltd. All rights reserved.
Causal Relations Drive Young Children's Induction, Naming, and Categorization
ERIC Educational Resources Information Center
Opfer, John E.; Bulloch, Megan J.
2007-01-01
A number of recent models and experiments have suggested that evidence of early category-based induction is an artifact of perceptual cues provided by experimenters. We tested these accounts against the prediction that different relations (causal versus non-causal) determine the types of perceptual similarity by which children generalize. Young…
Representing and Inferring Visual Perceptual Skills in Dermatological Image Understanding
ERIC Educational Resources Information Center
Li, Rui
2013-01-01
Experts have a remarkable capability of locating, perceptually organizing, identifying, and categorizing objects in images specific to their domains of expertise. Eliciting and representing their visual strategies and some aspects of domain knowledge will benefit a wide range of studies and applications. For example, image understanding may be…
The benefits of sensorimotor knowledge: body-object interaction facilitates semantic processing.
Siakaluk, Paul D; Pexman, Penny M; Sears, Christopher R; Wilson, Kim; Locheed, Keri; Owen, William J
2008-04-05
This article examined the effects of body-object interaction (BOI) on semantic processing. BOI measures perceptions of the ease with which a human body can physically interact with a word's referent. In Experiment 1, BOI effects were examined in 2 semantic categorization tasks (SCT) in which participants decided if words are easily imageable. Responses were faster and more accurate for high BOI words (e.g., mask) than for low BOI words (e.g., ship). In Experiment 2, BOI effects were examined in a semantic lexical decision task (SLDT), which taps both semantic feedback and semantic processing. The BOI effect was larger in the SLDT than in the SCT, suggesting that BOI facilitates both semantic feedback and semantic processing. The findings are consistent with the embodied cognition perspective (e.g., Barsalou's, 1999, Perceptual Symbols Theory), which proposes that sensorimotor interactions with the environment are incorporated in semantic knowledge. 2008 Cognitive Science Society, Inc.
The idiosyncratic nature of confidence
Navajas, Joaquin; Hindocha, Chandni; Foda, Hebah; Keramati, Mehdi; Latham, Peter E; Bahrami, Bahador
2017-01-01
Confidence is the ‘feeling of knowing’ that accompanies decision making. Bayesian theory proposes that confidence is a function solely of the perceived probability of being correct. Empirical research has suggested, however, that different individuals may perform different computations to estimate confidence from uncertain evidence. To test this hypothesis, we collected confidence reports in a task where subjects made categorical decisions about the mean of a sequence. We found that for most individuals, confidence did indeed reflect the perceived probability of being correct. However, in approximately half of them, confidence also reflected a different probabilistic quantity: the perceived uncertainty in the estimated variable. We found that the contribution of both quantities was stable over weeks. We also observed that the influence of the perceived probability of being correct was stable across two tasks, one perceptual and one cognitive. Overall, our findings provide a computational interpretation of individual differences in human confidence. PMID:29152591
Odour discrimination and identification are improved in early blindness.
Cuevas, Isabel; Plaza, Paula; Rombaux, Philippe; De Volder, Anne G; Renier, Laurent
2009-12-01
Previous studies showed that early blind humans develop superior abilities in the use of their remaining senses, hypothetically due to a functional reorganization of the deprived visual brain areas. While auditory and tactile functions have long been investigated, little is known about the effects of early visual deprivation on olfactory processing, even though blind humans make extensive use of olfactory information in their daily life. Here we investigated olfactory discrimination and identification abilities in early blind subjects and age-matched sighted controls. Three levels of cuing were used in the identification task, i.e., free identification (no cue), categorization (semantic cues) and multiple choice (semantic and phonological cues). Early blind subjects significantly outperformed the controls in odour discrimination, free identification and categorization. In addition, the largest group difference was observed in the free-identification condition as compared to the categorization and multiple-choice conditions. This indicates that better access to semantic information from odour perception accounted for part of the blind subjects' improved odour identification. We concluded that early blind subjects have both improved perceptual abilities and better access to information stored in semantic memory compared with sighted subjects.
Network Configurations in the Human Brain Reflect Choice Bias during Rapid Face Processing.
Tu, Tao; Schneck, Noam; Muraskin, Jordan; Sajda, Paul
2017-12-13
Network interactions are likely to be instrumental in processes underlying rapid perception and cognition. Specifically, high-level and perceptual regions must interact to balance pre-existing models of the environment with new incoming stimuli. Simultaneous electroencephalography (EEG) and fMRI (EEG/fMRI) enables temporal characterization of brain-network interactions combined with improved anatomical localization of regional activity. In this paper, we use simultaneous EEG/fMRI and multivariate dynamical systems (MDS) analysis to characterize network relationships between constituent brain areas that reflect a subject's choice for a face versus nonface categorization task. Our simultaneous EEG and fMRI analysis on 21 human subjects (12 males, 9 females) identifies early perceptual and late frontal subsystems that are selective to the categorical choice of faces versus nonfaces. We analyze the interactions between these subsystems using an MDS in the space of the BOLD signal. Our main findings show that differences between face-choice and house-choice networks are seen in the network interactions between the early and late subsystems, and that the magnitude of the difference in network interaction positively correlates with the behavioral false-positive rate of face choices. We interpret this to reflect the role of saliency and expectations likely encoded in frontal "late" regions on perceptual processes occurring in "early" perceptual regions. SIGNIFICANCE STATEMENT Our choices are affected by our biases. In visual perception and cognition such biases can be commonplace and quite curious; e.g., we see a human face when staring up at a cloud formation or down at a piece of toast at the breakfast table. Here we use multimodal neuroimaging and dynamical systems analysis to measure whole-brain spatiotemporal dynamics while subjects make decisions regarding the type of object they see in rapidly flashed images. We find that the degree of interaction in these networks accounts for a substantial fraction of our bias to see faces. In general, our findings illustrate how the properties of spatiotemporal networks yield insight into the mechanisms of how we form decisions. Copyright © 2017 the authors.
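The multivariate dynamical systems analysis referred to above is a specific state-space model; as a simplified, hypothetical stand-in, the sketch below fits a first-order linear dynamical system to the subsystem time courses of each choice condition and compares the coupling matrices, which is the spirit of contrasting face-choice and house-choice network interactions. The variable names and data are placeholders, not the authors' implementation.

    import numpy as np

    def fit_coupling(X):
        """X: (T, k) array of k subsystem time courses; returns the k-by-k coupling
        matrix A of the model x[t+1] = A x[t] + noise, estimated by least squares."""
        past, future = X[:-1], X[1:]
        A, *_ = np.linalg.lstsq(past, future, rcond=None)   # solves past @ A ~ future
        return A.T                                           # row i: inputs driving subsystem i

    rng = np.random.default_rng(1)
    face_ts = rng.standard_normal((200, 4))                  # simulated BOLD-like time courses
    house_ts = rng.standard_normal((200, 4))
    interaction_difference = np.abs(fit_coupling(face_ts) - fit_coupling(house_ts)).sum()
    print(interaction_difference)   # a magnitude that could be related to false-positive rates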
The Role of Perceptual Load in Inattentional Blindness
ERIC Educational Resources Information Center
Cartwright-Finch, Ula; Lavie, Nilli
2007-01-01
Perceptual load theory offers a resolution to the long-standing early vs. late selection debate over whether task-irrelevant stimuli are perceived, suggesting that irrelevant perception depends upon the perceptual load of task-relevant processing. However, previous evidence for this theory has relied on RTs and neuroimaging. Here we tested the…
Stimulus recognition occurs under high perceptual load: Evidence from correlated flankers.
Cosman, Joshua D; Mordkoff, J Toby; Vecera, Shaun P
2016-12-01
A dominant account of selective attention, perceptual load theory, proposes that when attentional resources are exhausted, task-irrelevant information receives little attention and goes unrecognized. However, the flanker effect, typically used to assay stimulus identification, requires an arbitrary mapping between a stimulus and a response. We looked for failures of flanker identification by using a more sensitive measure that does not require arbitrary stimulus-response mappings: the correlated flankers effect. We found that flanking items that were task-irrelevant but that correlated with target identity produced a correlated flanker effect. Participants were faster on trials in which the irrelevant flanker had previously correlated with the target than when it did not. Of importance, this correlated flanker effect appeared regardless of perceptual load, occurring even in high-load displays that should have abolished flanker identification. Findings from a standard flanker task replicated the basic perceptual load effect, with flankers not affecting response times under high perceptual load. Our results indicate that task-irrelevant information can be processed to a high level (identification), even under high perceptual load. This challenges a strong account of high perceptual load effects that hypothesizes complete failures of stimulus identification under high perceptual load. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
The Effects of Goal Relevance and Perceptual Features on Emotional Items and Associative Memory
Mao, Wei B.; An, Shu; Yang, Xiao F.
2017-01-01
Showing an emotional item in a neutral background scene often leads to enhanced memory for the emotional item and impaired associative memory for background details. Meanwhile, both top–down goal relevance and bottom–up perceptual features play important roles in memory binding. We conducted two experiments to further examine the effects of goal relevance and perceptual features on memory for emotional items and on associative memory. By manipulating goal relevance (asking participants to categorize only each item image as living or non-living, or to categorize each whole composite picture, consisting of an item image and a background scene, as a natural or a manufactured scene) and perceptual features (controlling visual contrast and visual familiarity) in two experiments, we found that both high goal relevance and salient perceptual features (high salience of items vs. high familiarity of items) could promote emotional item memory, but they had different effects on associative memory for emotional items and neutral backgrounds. Specifically, high goal relevance and high perceptual salience of items could jointly impair the associative memory for emotional items and neutral backgrounds, while the effect of item familiarity on associative memory for emotional items was modulated by goal relevance. High familiarity of items could increase associative memory for negative items and neutral backgrounds only in the low goal relevance condition. These findings suggest that the effect of emotion on associative memory is not only related to attentional capture elicited by emotion but can also be affected by the goal relevance and perceptual features of the stimuli. PMID:28790943
Perceptual Learning in the Absence of Task or Stimulus Specificity
Webb, Ben S.; Roach, Neil W.; McGraw, Paul V.
2007-01-01
Performance on most sensory tasks improves with practice. When making particularly challenging sensory judgments, perceptual improvements in performance are tightly coupled to the trained task and stimulus configuration. The form of this specificity is believed to provide a strong indication of which neurons are solving the task or encoding the learned stimulus. Here we systematically decouple task- and stimulus-mediated components of trained improvements in perceptual performance and show that neither provides an adequate description of the learning process. Twenty-four human subjects trained on a unique combination of task (three-element alignment or bisection) and stimulus configuration (vertical or horizontal orientation). Before and after training, we measured subjects' performance on all four task-configuration combinations. What we demonstrate for the first time is that learning does actually transfer across both task and configuration provided there is a common spatial axis to the judgment. The critical factor underlying the transfer of learning effects is not the task or stimulus arrangements themselves, but rather the recruitment of common sets of neurons most informative for making each perceptual judgment. PMID:18094748
Catching Audiovisual Interactions With a First-Person Fisherman Video Game.
Sun, Yile; Hickey, Timothy J; Shinn-Cunningham, Barbara; Sekuler, Robert
2017-07-01
The human brain is excellent at integrating information from different sources across multiple sensory modalities. To examine one particularly important form of multisensory interaction, we manipulated the temporal correlation between visual and auditory stimuli in a first-person fisherman video game. Subjects saw rapidly swimming fish whose size oscillated, either at 6 or 8 Hz. Subjects categorized each fish according to its rate of size oscillation, while trying to ignore a concurrent broadband sound seemingly emitted by the fish. In three experiments, categorization was faster and more accurate when the rate at which a fish oscillated in size matched the rate at which the accompanying, task-irrelevant sound was amplitude modulated. Control conditions showed that the difference between responses to matched and mismatched audiovisual signals reflected a performance gain in the matched condition, rather than a cost from the mismatched condition. The performance advantage with matched audiovisual signals was remarkably robust over changes in task demands between experiments. Performance with matched or unmatched audiovisual signals improved over successive trials at about the same rate, emblematic of perceptual learning in which visual oscillation rate becomes more discriminable with experience. Finally, analysis at the level of individual subjects' performance pointed to differences in the rates at which subjects can extract information from audiovisual stimuli.
Common angle plots as perception-true visualizations of categorical associations.
Hofmann, Heike; Vendettuoli, Marie
2013-12-01
Visualizations are great tools of communication: they summarize findings and quickly convey main messages to our audience. As designers of charts we have to make sure that information is shown with a minimum of distortion. We also have to consider illusions and other perceptual limitations of our audience. In this paper we discuss the effect and strength of the line-width illusion, a Müller-Lyer-type illusion, on designs related to displaying associations between categorical variables. Parallel sets and hammock plots are both affected by line-width illusions. We introduce the common-angle plot as an alternative method for displaying categorical data in a manner that minimizes the effect from perceptual illusions. Results from user studies both highlight the need for addressing line-width illusions in displays and provide evidence that common-angle charts successfully resolve this issue.
The effects of attention on perceptual implicit memory.
Rajaram, S; Srinivas, K; Travers, S
2001-10-01
Reports on the effects of dividing attention at study on subsequent perceptual priming suggest that perceptual priming is generally unaffected by attentional manipulations as long as word identity is processed. We tested this hypothesis in three experiments by using the implicit word fragment completion and word stem completion tasks. Division of attention was instantiated with the Stroop task in order to ensure the processing of word identity even when the participant's attention was directed to a stimulus attribute other than the word itself. Under these conditions, we found that even though perceptual priming was significant, it was significantly reduced in magnitude. A stem cued recall test in Experiment 2 confirmed a more deleterious effect of divided attention on explicit memory. Taken together, our findings delineate the relative contributions of perceptual analysis and attentional processes in mediating perceptual priming on two ubiquitously used tasks of word fragment completion and word stem completion.
Wolf, Christian; Schütz, Alexander C
2017-06-01
Saccades bring objects of interest onto the fovea for high-acuity processing. Saccades to rewarded targets show shorter latencies that correlate negatively with expected motivational value. Shorter latencies are also observed when the saccade target is relevant for a perceptual discrimination task. Here we tested whether saccade preparation is influenced by informational value to the same extent as by motivational value. We defined informational value as the probability that information is task-relevant times the ratio between postsaccadic foveal and presaccadic peripheral discriminability. Using a gaze-contingent display, we independently manipulated peripheral and foveal discriminability of the saccade target. Latencies of saccades with a perceptual task were reduced by 36 ms in general, but they were not modulated by the information the saccades provide (Experiments 1 and 2). However, latencies showed a clear negative linear correlation with the probability that the target is task-relevant (Experiment 3). We replicated the finding that the facilitation by a perceptual task is spatially specific and not due to generally heightened arousal (Experiment 4). Finally, the facilitation only emerged when the perceptual task was in the visual but not the auditory modality (Experiment 5). Taken together, these results suggest that saccade latencies are not modulated by informational value to the same extent as by motivational value. The facilitation by a perceptual task only arises when task-relevant visual information is foveated, irrespective of whether the foveation is useful or not.
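The definition of informational value given in the abstract above is a simple product, paraphrased below as a one-line helper (hypothetical names, not the authors' code):

    def informational_value(p_task_relevant, d_foveal, d_peripheral):
        # probability that the information is task-relevant, times the ratio of
        # postsaccadic foveal to presaccadic peripheral discriminability
        return p_task_relevant * (d_foveal / d_peripheral)

    print(informational_value(0.8, 2.0, 1.0))   # relevance-weighted foveal gain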
Categorical dimensions of human odor descriptor space revealed by non-negative matrix factorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chennubhotla, Chakra; Castro, Jason
2013-01-01
In contrast to most other sensory modalities, the basic perceptual dimensions of olfaction remain unclear. Here, we use non-negative matrix factorization (NMF), a dimensionality reduction technique, to uncover structure in a panel of odor profiles, with each odor defined as a point in multi-dimensional descriptor space. The properties of NMF are favorable for the analysis of such lexical and perceptual data, and lead to a high-dimensional account of odor space. We further provide evidence that odor dimensions apply categorically. That is, odor space is not occupied homogeneously, but rather in a discrete and intrinsically clustered manner. We discuss the potential implications of these results for the neural coding of odors, as well as for developing classifiers on larger datasets that may be useful for predicting perceptual qualities from chemical structures.
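A minimal sketch of the NMF step described above, using a hypothetical odor-by-descriptor matrix in place of the study's descriptor panel (the data, rank and parameter choices here are placeholders):

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(42)
    profiles = rng.random((100, 30))                 # 100 odors rated on 30 descriptors (non-negative)

    model = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
    odor_loadings = model.fit_transform(profiles)    # odors expressed in the reduced space
    dimensions = model.components_                   # basis "perceptual dimensions"

    # Categorical use of the dimensions: assign each odor to its dominant basis dimension
    dominant_dim = odor_loadings.argmax(axis=1)
    print(np.bincount(dominant_dim, minlength=10))   # cluster-like occupancy of odor space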
Harnessing the Wandering Mind: The Role of Perceptual Load
ERIC Educational Resources Information Center
Forster, Sophie; Lavie, Nilli
2009-01-01
Perceptual load is a key determinant of distraction by task-irrelevant stimuli (e.g., Lavie, N. (2005). "Distracted and confused?: Selective attention under load." "Trends in Cognitive Sciences," 9, 75-82). Here we establish the role of perceptual load in determining an internal form of distraction by task-unrelated thoughts (TUTs or…
Perceptual learning: toward a comprehensive theory.
Watanabe, Takeo; Sasaki, Yuka
2015-01-03
Visual perceptual learning (VPL) is a long-term performance increase resulting from visual perceptual experience. Task-relevant VPL of a feature results from training on a task for which that feature is relevant. Task-irrelevant VPL arises as a result of exposure to a feature that is irrelevant to the trained task. At least two serious problems exist in current accounts of VPL. First, there is controversy over which stage of information processing is changed in association with task-relevant VPL. Second, no model has ever explained both task-relevant and task-irrelevant VPL. Here we propose a dual plasticity model in which feature-based plasticity is a change in the representation of the learned feature, and task-based plasticity is a change in the processing of the trained task. Although both types of plasticity underlie task-relevant VPL, only feature-based plasticity underlies task-irrelevant VPL. This model provides a new comprehensive framework in which apparently contradictory results can be explained.
Koshino, Hideya
2017-01-01
Working memory and attention are closely related. Recent research has shown that working memory can be viewed as internally directed attention. Working memory can affect attention in at least two ways. One is the effect of working memory load on attention, and the other is the effect of working memory contents on attention. In the present study, an interaction between working memory contents and perceptual load in distractor processing was investigated. Participants performed a perceptual load task in a standard form in one condition (Single task). In the other condition, a response-related distractor was maintained in working memory, rather than presented in the same stimulus display as a target (Dual task). For the Dual task condition, a significant compatibility effect was found under high perceptual load; however, there was no compatibility effect under low perceptual load. These results suggest that the way the contents of working memory affect visual search depends on perceptual load. Copyright © 2016 Elsevier B.V. All rights reserved.
A systems neurophysiology approach to voluntary event coding.
Petruo, Vanessa A; Stock, Ann-Kathrin; Münchau, Alexander; Beste, Christian
2016-07-15
Mechanisms responsible for the integration of perceptual events and appropriate actions (sensorimotor processes) have been subject to intense research. Different theoretical frameworks have been put forward, with the "Theory of Event Coding (TEC)" being one of the most influential. In the current study, we focus on the concept of 'event files' within TEC and examine which sub-processes, dissociable by means of cognitive-neurophysiological methods, are involved in voluntary event coding. This was combined with EEG source localization. We also introduce reward manipulations to delineate the neurophysiological sub-processes most relevant for performance variations during event coding. The results show that voluntary event coding predominantly involved stimulus categorization, feature unbinding and response selection, which were reflected by distinct neurophysiological processes (the P1, N2 and P3 ERPs). At the systems neurophysiology level, voluntary event-file coding is thus related to widely distributed parietal-medial frontal networks. Attentional selection processes (the N1 ERP) turned out to be less important. Reward modulated stimulus categorization in parietal regions, likely reflecting aspects of perceptual decision making, but did not modulate the other processes. The perceptual categorization stage appears central for voluntary event-file coding. Copyright © 2016 Elsevier Inc. All rights reserved.
Perceptual sensitivity to spectral properties of earlier sounds during speech categorization.
Stilp, Christian E; Assgari, Ashley A
2018-02-28
Speech perception is heavily influenced by surrounding sounds. When spectral properties differ between earlier (context) and later (target) sounds, this can produce spectral contrast effects (SCEs) that bias perception of later sounds. For example, when context sounds have more energy in low-F1 frequency regions, listeners report more high-F1 responses to a target vowel, and vice versa. SCEs have been reported using various approaches for a wide range of stimuli, but most often, large spectral peaks were added to the context to bias speech categorization. This obscures the lower limit of perceptual sensitivity to spectral properties of earlier sounds, i.e., when SCEs begin to bias speech categorization. Listeners categorized vowels (/ɪ/-/ɛ/, Experiment 1) or consonants (/d/-/g/, Experiment 2) following a context sentence with little spectral amplification (+1 to +4 dB) in frequency regions known to produce SCEs. In both experiments, +3 and +4 dB amplification in key frequency regions of the context produced SCEs, but lesser amplification was insufficient to bias performance. This establishes a lower limit of perceptual sensitivity where spectral differences across sounds can bias subsequent speech categorization. These results are consistent with proposed adaptation-based mechanisms that potentially underlie SCEs in auditory perception. Recent sounds can change what speech sounds we hear later. This can occur when the average frequency composition of earlier sounds differs from that of later sounds, biasing how they are perceived. These "spectral contrast effects" are widely observed when sounds' frequency compositions differ substantially. We reveal the lower limit of these effects, as +3 dB amplification of key frequency regions in earlier sounds was enough to bias categorization of the following vowel or consonant sound. Speech categorization being biased by very small spectral differences across sounds suggests that spectral contrast effects occur frequently in everyday speech perception.
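The core manipulation described above, amplifying one frequency region of a context signal by a fixed amount in dB, can be sketched with a simple FFT-based filter; the band edges and the noise signal below are placeholders, not the study's stimuli:

    import numpy as np

    def amplify_band(signal, fs, f_lo, f_hi, gain_db):
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        gain = 10 ** (gain_db / 20.0)                # +3 dB is roughly a 1.41x amplitude gain
        band = (freqs >= f_lo) & (freqs <= f_hi)
        spectrum[band] *= gain
        return np.fft.irfft(spectrum, n=len(signal))

    fs = 16000
    context = np.random.default_rng(0).standard_normal(fs)   # 1 s of noise as a stand-in context
    amplified = amplify_band(context, fs, f_lo=100, f_hi=400, gain_db=3.0)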
Perceptual and Conceptual Priming of Cue Encoding in Task Switching
ERIC Educational Resources Information Center
Schneider, Darryl W.
2016-01-01
Transition effects in task-cuing experiments can be partitioned into task switching and cue repetition effects by using multiple cues per task. In the present study, the author shows that cue repetition effects can be partitioned into perceptual and conceptual priming effects. In 2 experiments, letters or numbers in their uppercase/lowercase or…
Dilution: A Theoretical Burden or Just Load? A Reply to Tsal and Benoni (2010)
ERIC Educational Resources Information Center
Lavie, Nilli; Torralbo, Ana
2010-01-01
Load theory of attention proposes that distractor processing is reduced in tasks with high perceptual load that exhaust attentional capacity within task-relevant processing. In contrast, tasks of low perceptual load leave spare capacity that spills over, resulting in the perception of task-irrelevant, potentially distracting stimuli. Tsal and…
Xue, Linyan; Huang, Dan; Wang, Tong; Hu, Qiyi; Chai, Xinyu; Li, Liming; Chen, Yao
2017-11-28
Selective spatial attention enhances task performance at restricted regions within the visual field. The magnitude of this effect depends on the level of attentional load, which determines the efficiency of distractor rejection. Mechanisms of attentional load include perceptual selection and/or cognitive control involving working memory. Recent studies have provided evidence that microsaccades are influenced by spatial attention. Therefore, microsaccade activities may be exploited to help understand the dynamic control of selective attention under different load levels. However, previous reports in humans on the effect of attentional load on microsaccades are inconsistent, and it is not clear to what extent these results and the dynamic changes of microsaccade activities are similar in monkeys. We trained monkeys to perform a color detection task in which the perceptual load was manipulated by task difficulty with limited involvement of working memory. Our results indicate that during the task with high perceptual load, the rate and amplitude of microsaccades immediately before the target color change were significantly suppressed. We also found that the occurrence of microsaccades before the monkeys' detection response deteriorated their performance, especially in the hard task. We propose that the activity of microsaccades might be an efficacious indicator of the perceptual load.
Ongoing behavior predicts perceptual report of interval duration
Gouvêa, Thiago S.; Monteiro, Tiago; Soares, Sofia; Atallah, Bassam V.; Paton, Joseph J.
2014-01-01
The ability to estimate the passage of time is essential for adaptive behavior in complex environments. Yet, it is not known how the brain encodes time over the durations necessary to explain animal behavior. Under temporally structured reinforcement schedules, animals tend to develop temporally structured behavior, and interval timing has been suggested to be accomplished by learning sequences of behavioral states. If this is true, trial-to-trial fluctuations in behavioral sequences should be predictive of fluctuations in time estimation. We trained rodents in a duration categorization task while continuously monitoring their behavior with a high-speed camera. Animals developed highly reproducible behavioral sequences during the interval being timed. Moreover, those sequences were often predictive of perceptual report from early in the trial, providing support to the idea that animals may use learned behavioral patterns to estimate the duration of time intervals. To better resolve the issue, we propose that continuous and simultaneous behavioral and neural monitoring will enable identification of neural activity related to time perception that is not explained by ongoing behavior. PMID:24672473
Neural Patterns of the Implicit Association Test
Healy, Graham F.; Boran, Lorraine; Smeaton, Alan F.
2015-01-01
The Implicit Association Test (IAT) is a reaction-time based categorization task that measures the differential associative strength between bipolar targets and evaluative attribute concepts as an approach to indexing implicit beliefs or biases. An open question exists as to what exactly the IAT measures, and here EEG (electroencephalography) has been used to investigate the time course of ERP (event-related potential) indices and implicated brain regions in the IAT. IAT-EEG research identifies a number of early (250–450 ms) negative ERPs indexing early (pre-response) processing stages of the IAT. ERP activity in this time range is known to index processes related to cognitive control and semantic processing. A central focus of these efforts has been to use IAT-ERPs to delineate the implicit and explicit factors contributing to measured IAT effects. Increasing evidence indicates that cognitive control (and related top-down modulation of attention/perceptual processing) may be components in the effective measurement of IAT effects, as factors such as physical setting or task instruction can change an IAT measurement. In this study we further implicate the role of proactive cognitive control and top-down modulation of attention/perceptual processing in the IAT-EEG. We find statistically significant relationships between the D-score (a reaction-time based measure of the IAT effect) and early ERP time windows, indicating that where the more rapid word categorizations driving the IAT effect are present, they are at least partly explained by neural activity that is not significantly correlated with the IAT measurement itself. Using LORETA, we identify a number of brain regions driving these ERP-IAT relationships, notably involving left-temporal, insular, cingulate, medial frontal and parietal cortex in time regions corresponding to the N2- and P3-related activity. The identified brain regions involved with reduced reaction times on congruent blocks coincide with those of previous studies. PMID:26635570
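For reference, a simplified version of the reaction-time D-score mentioned above is the mean latency difference between incongruent and congruent blocks divided by the pooled standard deviation; published scoring algorithms add trial exclusions and error penalties, which are omitted in this sketch (data are simulated):

    import numpy as np

    def d_score(rt_congruent, rt_incongruent):
        pooled_sd = np.std(np.concatenate([rt_congruent, rt_incongruent]), ddof=1)
        return (np.mean(rt_incongruent) - np.mean(rt_congruent)) / pooled_sd

    rng = np.random.default_rng(3)
    congruent = rng.normal(700, 120, size=60)        # hypothetical block latencies in ms
    incongruent = rng.normal(820, 150, size=60)
    print(d_score(congruent, incongruent))           # positive values index the IAT effect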
Listeners are maximally flexible in updating phonetic beliefs over time.
Saltzman, David; Myers, Emily
2018-04-01
Perceptual learning serves as a mechanism for listeners to adapt to novel phonetic information. Distributional tracking theories posit that this adaptation occurs as a result of listeners accumulating talker-specific distributional information about the phonetic category in question (Kleinschmidt & Jaeger, 2015, Psychological Review, 122). What is not known is how listeners build these talker-specific distributions; that is, whether they aggregate all information received over a certain time period, or whether they rely more heavily upon the most recent information received and down-weight older, consolidated information. In the present experiment, listeners were exposed to four interleaved blocks of a lexical decision task and a phonetic categorization task in which the lexical decision blocks were designed to bias perception in opposite directions along an "s"-"sh" continuum. Listeners returned several days later and completed the identical task again. Evidence was consistent with listeners using a relatively short temporal window of integration at the individual session level. Namely, in each individual session, listeners' perception of an "s"-"sh" contrast was biased by the information in the immediately preceding lexical decision block, and there was no evidence that listeners summed their experience with the talker over the entire session. Similarly, the magnitude of the bias effect did not change between sessions, consistent with the idea that talker-specific information remains flexible, even after consolidation. In general, results suggest that listeners are maximally flexible when considering how to categorize speech from a novel talker.
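The contrast the abstract draws, aggregating all exposure versus down-weighting older evidence, can be illustrated with a toy tracker of a talker-specific cue statistic; this is a sketch of the two hypotheses, not the authors' model, and the forgetting rate is arbitrary:

    import numpy as np

    def running_estimates(cue_values, forgetting=0.9):
        # Hypothesis 1: average everything heard so far
        full_average = np.cumsum(cue_values) / np.arange(1, len(cue_values) + 1)
        # Hypothesis 2: leaky integration that down-weights older evidence
        recency_weighted = np.zeros(len(cue_values))
        est = cue_values[0]
        for i, x in enumerate(cue_values):
            est = forgetting * est + (1 - forgetting) * x
            recency_weighted[i] = est
        return full_average, recency_weighted

    # Exposure biased toward one category early and the other late
    cues = np.concatenate([np.full(40, -1.0), np.full(40, 1.0)])
    full, recent = running_estimates(cues)
    print(full[-1], recent[-1])   # the recency-weighted estimate tracks the latest block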
Vakil, E; Sigal, J
1997-07-01
Twenty-four closed-head-injured (CHI) and 24 control participants studied two word lists under shallow (i.e., nonsemantic) and deep (i.e., semantic) encoding conditions. They were then tested on free recall, perceptual priming (i.e., perceptual partial word identification) and conceptual priming (i.e., category production) tasks. Previous findings have demonstrated that memory in CHI is characterized by inefficient conceptual processing of information. It was thus hypothesized that the CHI participants would perform more poorly than the control participants on the explicit and on the conceptual priming tasks. On these tasks the CHI group was expected to benefit to a lesser degree from prior deep encoding, as compared to controls. The groups were not expected to significantly differ from each other on the perceptual priming task. Prior deep encoding was not expected to improve the perceptual priming performance of either group. All findings were as predicted, with the exception that a significant effect was not found between groups for deep encoding in the conceptual priming task. The results are discussed (1) in terms of their theoretical contribution in further validating the dissociation between perceptual and conceptual priming; and (2) in terms of the contribution in differentiating between amnesic and CHI patients. Conceptual priming is preserved in amnesics but not in CHI patients.
Pretraining Cortical Thickness Predicts Subsequent Perceptual Learning Rate in a Visual Search Task.
Frank, Sebastian M; Reavis, Eric A; Greenlee, Mark W; Tse, Peter U
2016-03-01
We report that preexisting individual differences in the cortical thickness of brain areas involved in a perceptual learning task predict the subsequent perceptual learning rate. Participants trained in a motion-discrimination task involving visual search for a "V"-shaped target motion trajectory among inverted "V"-shaped distractor trajectories. Motion-sensitive area MT+ (V5) was functionally identified as critical to the task: after 3 weeks of training, activity increased in MT+ during task performance, as measured by functional magnetic resonance imaging. We computed the cortical thickness of MT+ from anatomical magnetic resonance imaging volumes collected before training started, and found that it significantly predicted subsequent perceptual learning rates in the visual search task. Participants with thicker neocortex in MT+ before training learned faster than those with thinner neocortex in that area. A similar association between cortical thickness and training success was also found in posterior parietal cortex (PPC). © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Color Categories: Evidence for the Cultural Relativity Hypothesis
ERIC Educational Resources Information Center
Roberson, D.; Davidoff, J.; Davies, I.R.L.; Shapiro, L.R.
2005-01-01
The question of whether language affects our categorization of perceptual continua is of particular interest for the domain of color where constraints on categorization have been proposed both within the visual system and in the visual environment. Recent research (Roberson, Davies, & Davidoff, 2000; Roberson et al., in press) found…
Tuma, J M
1975-02-01
The present investigation tested the hypothesis advanced by J. Inglis (1961) that perceptual defense and perceptual vigilance result from an interaction between personality differences and degrees of experimental stress. The design, which controlled for questionable procedures used in previous studies, utilized 32 introverts and 32 extraverts, half male and half female, in an experiment with a visual recognition task. Results indicated that under low-stress conditions introverts and extraverts identified by their response to a thematic apperception task reacted to threatening stimuli with perceptual defense and perceptual vigilance, respectively. Under high-stress conditions, the type of avoidance activity reversed; extraverts reacted with perceptual defense and introverts with perceptual vigilance. It was suggested that, when both personality and stress variables are controlled, results of the perceptual defense paradigm are predictable and consistent, in support of Inglis' hypothesis.
Lundqvist, Daniel; Bruce, Neil; Öhman, Arne
2015-01-01
In this article, we examine how emotional and perceptual stimulus factors influence visual search efficiency. In an initial task, we run a visual search task using a large number of target/distractor emotion combinations. In two subsequent tasks, we then assess measures of perceptual (rated and computational distances) and emotional (rated valence, arousal and potency) stimulus properties. In a series of regression analyses, we then explore the degree to which target salience (the size of target/distractor dissimilarities) on these emotional and perceptual measures predicts the outcome on search efficiency measures (response times and accuracy) from the visual search task. The results show that both emotional and perceptual stimulus salience contribute to visual search efficiency and that, among the emotional measures, salience on arousal measures was more influential than valence salience. The importance of the arousal factor may help explain the contradictory history of results within this field.
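The regression step described above can be sketched as an ordinary least-squares model predicting response times from salience measures; the simulated predictors and coefficients below are placeholders:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 120
    perceptual = rng.random(n)                       # target/distractor perceptual dissimilarity
    arousal = rng.random(n)                          # dissimilarity on rated arousal
    valence = rng.random(n)                          # dissimilarity on rated valence
    rt = 900 - 150 * perceptual - 120 * arousal - 30 * valence + rng.normal(0, 40, n)

    X = sm.add_constant(np.column_stack([perceptual, arousal, valence]))
    fit = sm.OLS(rt, X).fit()
    print(fit.params)                                # which salience measures predict search efficiency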
ERIC Educational Resources Information Center
Wood, Milton E.
The purpose of the effort was to determine the benefits to be derived from the adaptive training technique of automatically adjusting task difficulty as a function of a student skill during early learning of a complex perceptual motor task. A digital computer provided the task dynamics, scoring, and adaptive control of a second-order, two-axis,…
Category learning increases discriminability of relevant object dimensions in visual cortex.
Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel
2013-04-01
Learning to categorize objects can transform how they are perceived, causing relevant perceptual dimensions predictive of object category to become enhanced. For example, an expert mycologist might become attuned to species-specific patterns of spacing between mushroom gills but learn to ignore cap textures attributable to varying environmental conditions. These selective changes in perception can persist beyond the act of categorizing objects and influence our ability to discriminate between them. Using functional magnetic resonance imaging adaptation, we demonstrate that such category-specific perceptual enhancements are associated with changes in the neural discriminability of object representations in visual cortex. Regions within the anterior fusiform gyrus became more sensitive to small variations in shape that were relevant during prior category learning. In addition, extrastriate occipital areas showed heightened sensitivity to small variations in shape that spanned the category boundary. Visual representations in cortex, just like our perception, are sensitive to an object's history of categorization.
Ren, Xuezhu; Altmeyer, Michael; Reiss, Siegbert; Schweizer, Karl
2013-02-01
Perceptual attention and executive attention represent two higher-order types of attention and are associated with distinctly different ways of information processing. It is hypothesized that these two types of attention implicate different cognitive processes, which are assumed to account for the differential effects of perceptual attention and executive attention on fluid intelligence. Specifically, an encoding process is assumed to be crucial in completing the tasks of perceptual attention, while two executive processes, updating and shifting, are stimulated in completing the tasks of executive attention. The proposed hypothesis was tested by means of an integrative approach combining experimental manipulations and psychometric modeling. In a sample of 210 participants the encoding process proved indispensable in completing the tasks of perceptual attention, and this process accounted for a considerable part of the variance in fluid intelligence, which was assessed by two figural reasoning tests. In contrast, the two executive processes, updating and shifting, turned out to be necessary for performance on the tasks of executive attention, and these processes accounted for a larger part of the variance in fluid intelligence than did the processes underlying perceptual attention. Copyright © 2012 Elsevier B.V. All rights reserved.
The effect of perceptual load on tactile spatial attention: Evidence from event-related potentials.
Gherri, Elena; Berreby, Fiona
2017-10-15
To investigate whether tactile spatial attention is modulated by perceptual load, behavioural and electrophysiological measures were recorded during two spatial cuing tasks in which the difficulty of the target/non-target discrimination was varied (High and Low load tasks). Moreover, to study whether attentional modulations by load are sensitive to the availability of visual information, the High and Low load tasks were carried out under both illuminated and darkness conditions. ERPs to cued and uncued non-targets were compared as a function of task (High vs. Low load) and illumination condition (Light vs. Darkness). Results revealed that the locus of tactile spatial attention was determined by a complex interaction between perceptual load and illumination conditions during sensory-specific stages of processing. In the Darkness, earlier effects of attention were present in the High load than in the Low load task, while no difference between tasks emerged in the Light. By contrast, increased load was associated with stronger attention effects during later post-perceptual processing stages regardless of illumination conditions. These findings demonstrate that ERP correlates of tactile spatial attention are strongly affected by the perceptual load of the target/non-target discrimination. However, differences between illumination conditions show that the impact of load on tactile attention depends on the presence of visual information. Perceptual load is one of the many factors that contribute to determine the effects of spatial selectivity in touch. Copyright © 2017 Elsevier B.V. All rights reserved.
Task-Set Reconfiguration and Perceptual Processing: Behavioral and Electrophysiological Evidence
ERIC Educational Resources Information Center
Mackenzie, Ian G.; Leuthold, Hartmut
2011-01-01
Oriet and Jolicoeur (2003) proposed that an endogenous task-set reconfiguration process acts as a hard bottleneck during which even early perceptual processing is impossible. We examined this assumption using a psychophysiological approach. Participants were required to switch between magnitude and parity judgment tasks within a predictable task…
How learning to abstract shapes neural sound representations
Ley, Anke; Vroomen, Jean; Formisano, Elia
2014-01-01
The transformation of acoustic signals into abstract perceptual representations is the essence of the efficient and goal-directed neural processing of sounds in complex natural environments. While the human and animal auditory system is perfectly equipped to process the spectrotemporal sound features, adequate sound identification and categorization require neural sound representations that are invariant to irrelevant stimulus parameters. Crucially, what is relevant and irrelevant is not necessarily intrinsic to the physical stimulus structure but needs to be learned over time, often through integration of information from other senses. This review discusses the main principles underlying categorical sound perception with a special focus on the role of learning and neural plasticity. We examine the role of different neural structures along the auditory processing pathway in the formation of abstract sound representations with respect to hierarchical as well as dynamic and distributed processing models. Whereas most fMRI studies on categorical sound processing employed speech sounds, the emphasis of the current review lies on the contribution of empirical studies using natural or artificial sounds that enable separating acoustic and perceptual processing levels and avoid interference with existing category representations. Finally, we discuss the opportunities of modern analysis techniques such as multivariate pattern analysis (MVPA) in studying categorical sound representations. With their increased sensitivity to distributed activation changes—even in the absence of changes in overall signal level—these analysis techniques provide a promising tool to reveal the neural underpinnings of perceptually invariant sound representations. PMID:24917783
Multisensory perceptual learning is dependent upon task difficulty.
De Niear, Matthew A; Koo, Bonhwang; Wallace, Mark T
2016-11-01
There has been a growing interest in developing behavioral tasks to enhance temporal acuity as recent findings have demonstrated changes in temporal processing in a number of clinical conditions. Prior research has demonstrated that perceptual training can enhance temporal acuity both within and across different sensory modalities. Although certain forms of unisensory perceptual learning have been shown to be dependent upon task difficulty, this relationship has not been explored for multisensory learning. The present study sought to determine the effects of task difficulty on multisensory perceptual learning. Prior to and following a single training session, participants completed a simultaneity judgment (SJ) task, which required them to judge whether a visual stimulus (flash) and auditory stimulus (beep) presented in synchrony or at various stimulus onset asynchronies (SOAs) occurred synchronously or asynchronously. During the training session, participants completed the same SJ task but received feedback regarding the accuracy of their responses. Participants were randomly assigned to one of three levels of difficulty during training: easy, moderate, and hard, which were distinguished based on the SOAs used during training. We report that only the most difficult (i.e., hard) training protocol enhanced temporal acuity. We conclude that perceptual training protocols for enhancing multisensory temporal acuity may be optimized by employing audiovisual stimuli for which it is difficult to discriminate temporal synchrony from asynchrony.
Action video games do not improve the speed of information processing in simple perceptual tasks.
van Ravenzwaaij, Don; Boekel, Wouter; Forstmann, Birte U; Ratcliff, Roger; Wagenmakers, Eric-Jan
2014-10-01
Previous research suggests that playing action video games improves performance on sensory, perceptual, and attentional tasks. For instance, Green, Pouget, and Bavelier (2010) used the diffusion model to decompose data from a motion detection task and estimate the contribution of several underlying psychological processes. Their analysis indicated that playing action video games leads to faster information processing, reduced response caution, and no difference in motor responding. Because perceptual learning is generally thought to be highly context-specific, this transfer from gaming is surprising and warrants corroborative evidence from a large-scale training study. We conducted 2 experiments in which participants practiced either an action video game or a cognitive game in 5 separate, supervised sessions. Prior to each session and following the last session, participants performed a perceptual discrimination task. In the second experiment, we included a third condition in which no video games were played at all. Behavioral data and diffusion model parameters showed similar practice effects for the action gamers, the cognitive gamers, and the nongamers and suggest that, in contrast to earlier reports, playing action video games does not improve the speed of information processing in simple perceptual tasks.
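The diffusion-model decomposition referred to above separates the speed of evidence accumulation (drift rate) from response caution (boundary separation); the toy simulation below illustrates why the two are identifiable, and is not the fitting procedure used in the study:

    import numpy as np

    def simulate_ddm(v, a, ter=0.3, dt=0.001, noise=1.0, n_trials=500, seed=0):
        rng = np.random.default_rng(seed)
        rts, correct = [], []
        for _ in range(n_trials):
            x, t = a / 2.0, 0.0                      # start midway between the two boundaries
            while 0.0 < x < a:
                x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
                t += dt
            rts.append(t + ter)                      # add non-decision time
            correct.append(x >= a)                   # upper boundary = correct response
        return np.mean(rts), np.mean(correct)

    print(simulate_ddm(v=1.5, a=1.0))                # lower caution: faster, less accurate
    print(simulate_ddm(v=1.5, a=2.0))                # same drift, more caution: slower, more accurate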
Acquisition and production of skilled behavior in dynamic decision-making tasks
NASA Technical Reports Server (NTRS)
Kirlik, Alex
1990-01-01
Ongoing research investigating perceptual and contextual influences on skilled human performance in dynamic decision making environments is discussed. The research is motivated by two general classes of findings in recent decision making research. First, many studies suggest that the concrete context in which a task is presented has strong influences on the psychological processes used to perform the task and on subsequent performance. Second, studies of skilled behavior in a wide variety of task environments typically implicate the perceptual system as an important contributor to decision-making performance, either in its role as a mediator between the current decision context and stored knowledge, or as a mechanism capable of directly initiating activity through the development of a 'trained eye.' Both contextual and perceptual influences place limits on the ability of traditional utility-theoretic accounts of decision-making to guide display design, as variance in behavior due to contextual factors or the development of a perceptual skill is left unexplained. The author outlines a framework in which to view questions of perceptual and contextual influences on behavior and describe an experimental task and analysis technique which will be used to diagnose the possible role of perception in skilled decision making performance.
Learning and transfer of category knowledge in an indirect categorization task.
Helie, Sebastien; Ashby, F Gregory
2012-05-01
Knowledge representations acquired during category learning experiments are 'tuned' to the task goal. A useful paradigm for studying category representations is indirect category learning. In the present article, we propose a new indirect categorization task called the "same"-"different" categorization task. The same-different categorization task is a regular same-different task, but the question asked of the participants concerns stimulus category membership rather than stimulus identity. Experiment 1 explores the possibility of indirectly learning rule-based and information-integration category structures using the new paradigm. The results suggest that there is little learning about the category structures resulting from an indirect categorization task unless the categories can be separated by a one-dimensional rule. Experiment 2 explores whether a category representation learned indirectly can be used in a direct classification task (and vice versa). The results suggest that previous categorical knowledge acquired during a direct classification task can be expressed in the same-different categorization task only when the categories can be separated by a rule that is easily verbalized. Implications of these results for categorization research are discussed.
Moehler, Tobias; Fiehler, Katja
2015-11-01
Saccade curvature represents a sensitive measure of oculomotor inhibition with saccades curving away from covertly attended locations. Here we investigated whether and how saccade curvature depends on movement preparation time when a perceptual task is performed during or before saccade preparation. Participants performed a dual-task including a visual discrimination task at a cued location and a saccade task to the same location (congruent) or to a different location (incongruent). Additionally, we varied saccade preparation time (time between saccade cue and Go-signal) and the occurrence of the discrimination task (during saccade preparation=simultaneous vs. before saccade preparation=sequential). We found deteriorated perceptual performance in incongruent trials during simultaneous task performance while perceptual performance was unaffected during sequential task performance. Saccade accuracy and precision were deteriorated in incongruent trials during simultaneous and, to a lesser extent, also during sequential task performance. Saccades consistently curved away from covertly attended non-saccade locations. Saccade curvature was unaffected by movement preparation time during simultaneous task performance but decreased and finally vanished with increasing movement preparation time during sequential task performance. Our results indicate that the competing saccade plan to the covertly attended non-saccade location is maintained during simultaneous task performance until the perceptual task is solved while in the sequential condition, in which the discrimination task is solved prior to the saccade task, oculomotor inhibition decays gradually with movement preparation time. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
French, Robert M.; Mareschal, Denis; Mermillod, Martial; Quinn, Paul C.
2004-01-01
Disentangling bottom-up and top-down processing in adult category learning is notoriously difficult. Studying category learning in infancy provides a simple way of exploring category learning while minimizing the contribution of top-down information. Three- to 4-month-old infants presented with cat or dog images will form a perceptual category…
ERIC Educational Resources Information Center
Chang, Seung-Eun
2013-01-01
The perception of lexical tones is addressed through research on South Kyungsang Korean, spoken in the southeastern part of Korea. Based on an earlier production study (Chang, 2008a, 2008b), a categorization experiment was conducted to determine the perceptually salient aspects of a high tone and a rising tone. The…
Baudouin, Alexia; Clarys, David; Vanneste, Sandrine; Isingrini, Michel
2009-12-01
The aim of the present study was to examine executive dysfunctioning and decreased processing speed as potential mediators of age-related differences in episodic memory. We compared the performance of young and elderly adults in a free-recall task. Participants were also given tests to measure executive functions and perceptual processing speed, as well as a coding task (the Digit Symbol Substitution Test, DSST). More precisely, we tested the hypothesis that executive functions would mediate the age-related differences observed in the free-recall task better than perceptual speed. We also tested the assumption that a coding task, assumed to involve both executive processes and perceptual speed, would be the best mediator of age-related differences in memory. Findings first confirmed that the DSST combines executive processes and perceptual speed. Second, they showed that executive functions are a significant mediator of age-related differences in memory, and that DSST performance is the best predictor.
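The mediation logic described above (does executive functioning or processing speed account for the age-recall relation?) can be sketched generically as a product-of-coefficients indirect effect with a bootstrap interval; the simulated data below are placeholders, not the authors' analysis:

    import numpy as np

    def indirect_effect(age, mediator, recall):
        a = np.polyfit(age, mediator, 1)[0]                      # age -> mediator slope
        X = np.column_stack([np.ones_like(age), mediator, age])
        b = np.linalg.lstsq(X, recall, rcond=None)[0][1]         # mediator -> recall, controlling age
        return a * b

    rng = np.random.default_rng(11)
    n = 200
    age = rng.uniform(20, 80, n)
    executive = -0.03 * age + rng.normal(0, 0.5, n)
    recall = 10 + 2.0 * executive - 0.01 * age + rng.normal(0, 1, n)

    boot = []
    for _ in range(1000):
        idx = rng.integers(0, n, n)
        boot.append(indirect_effect(age[idx], executive[idx], recall[idx]))
    print(indirect_effect(age, executive, recall), np.percentile(boot, [2.5, 97.5]))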
Does Task-Set Reconfiguration Create Cognitive Slack?
ERIC Educational Resources Information Center
Gilbert, Sam J.
2005-01-01
C. Oriet and P. Jolicoeur (see record 2003-08747-016) reported 2 experiments in which the perceptual contrast of stimuli was manipulated in a task-switching paradigm. They failed to observe an interaction in the reaction time data between task switching, perceptual contrast, and response-stimulus interval. Using the locus of slack logic, they…
On the Perception of Religious Group Membership from Faces
Rule, Nicholas O.; Garrett, James V.; Ambady, Nalini
2010-01-01
Background The study of social categorization has largely been confined to examining groups distinguished by perceptually obvious cues. Yet many ecologically important group distinctions are less clear, permitting insights into the general processes involved in person perception. Although religious group membership is thought to be perceptually ambiguous, folk beliefs suggest that Mormons and non-Mormons can be categorized from their appearance. We tested whether Mormons could be distinguished from non-Mormons and investigated the basis for this effect to gain insight into how subtle perceptual cues can support complex social categorizations. Methodology/Principal Findings Participants categorized Mormons' and non-Mormons' faces or facial features according to their group membership. Individuals could distinguish between the two groups significantly better than chance guessing from their full faces and faces without hair, with eyes and mouth covered, without outer face shape, and inverted 180°; but not from isolated features (i.e., eyes, nose, or mouth). Perceivers' estimations of their accuracy did not match their actual accuracy. Exploration of the remaining features showed that Mormons and non-Mormons significantly differed in perceived health and that these perceptions were related to perceptions of skin quality, as demonstrated in a structural equation model representing the contributions of skin color and skin texture. Other judgments related to health (facial attractiveness, facial symmetry, and structural aspects related to body weight) did not differ between the two groups. Perceptions of health were also responsible for differences in perceived spirituality, explaining folk hypotheses that Mormons are distinct because they appear more spiritual than non-Mormons. Conclusions/Significance Subtle markers of group membership can influence how others are perceived and categorized. Perceptions of health from non-obvious and minimal cues distinguished individuals according to their religious group membership. These data illustrate how the non-conscious detection of very subtle differences in others' appearances supports cognitively complex judgments such as social categorization. PMID:21151864
Neural correlates of concreteness in semantic categorization.
Pexman, Penny M; Hargreaves, Ian S; Edwards, Jodi D; Henry, Luke C; Goodyear, Bradley G
2007-08-01
In some contexts, concrete words (CARROT) are recognized and remembered more readily than abstract words (TRUTH). This concreteness effect has historically been explained by two theories of semantic representation: dual-coding [Paivio, A. Dual coding theory: Retrospect and current status. Canadian Journal of Psychology, 45, 255-287, 1991] and context-availability [Schwanenflugel, P. J. Why are abstract concepts hard to understand? In P. J. Schwanenflugel (Ed.), The psychology of word meanings (pp. 223-250). Hillsdale, NJ: Erlbaum, 1991]. Past efforts to adjudicate between these theories using functional magnetic resonance imaging have produced mixed results. Using event-related functional magnetic resonance imaging, we reexamined this issue with a semantic categorization task that allowed for uniform semantic judgments of concrete and abstract words. The participants were 20 healthy adults. Functional analyses contrasted activation associated with concrete and abstract meanings of ambiguous and unambiguous words. Results showed that for both ambiguous and unambiguous words, abstract meanings were associated with more widespread cortical activation than concrete meanings in numerous regions associated with semantic processing, including temporal, parietal, and frontal cortices. These results are inconsistent with both dual-coding and context-availability theories, as these theories propose that the representations of abstract concepts are relatively impoverished. Our results suggest, instead, that semantic retrieval of abstract concepts involves a network of association areas. We argue that this finding is compatible with a theory of semantic representation such as Barsalou's [Barsalou, L. W. Perceptual symbol systems. Behavioral & Brain Sciences, 22, 577-660, 1999] perceptual symbol systems, whereby concrete and abstract concepts are represented by similar mechanisms but with differences in focal content.
The Role of Age and Executive Function in Auditory Category Learning
Reetzke, Rachel; Maddox, W. Todd; Chandrasekaran, Bharath
2015-01-01
Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence, into early adulthood; and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing, 20 children (age range, 7–12), 21 adolescents (age range, 13–19), and 19 young adults (age range, 20–23), learned to categorize novel dynamic ripple sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied Gabor patches in the visual domain. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e. a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning. PMID:26491987
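The "conjunctive rule-based strategy" referred to above can be illustrated with a small sketch (hypothetical stimulus dimensions and criteria; the authors' own computational modelling is not reproduced here): a stimulus is assigned to the target category only if it exceeds a criterion on both dimensions, and the two criteria are fit by a simple grid search.

import numpy as np

def conjunctive_rule(stimuli, c1, c2):
    # A stimulus is labelled 1 only if it exceeds the criterion on BOTH dimensions.
    return ((stimuli[:, 0] > c1) & (stimuli[:, 1] > c2)).astype(int)

def fit_conjunctive(stimuli, labels, grid):
    # Grid-search the pair of criteria that maximizes classification accuracy.
    best = (None, None, -1.0)
    for c1 in grid:
        for c2 in grid:
            acc = float(np.mean(conjunctive_rule(stimuli, c1, c2) == labels))
            if acc > best[2]:
                best = (c1, c2, acc)
    return best

# Hypothetical ripple sounds described by two standardized dimensions (e.g., spectral density, temporal rate).
rng = np.random.default_rng(1)
stimuli = rng.normal(size=(300, 2))
labels = ((stimuli[:, 0] > 0) & (stimuli[:, 1] > 0)).astype(int)  # ground truth is itself conjunctive
print(fit_conjunctive(stimuli, labels, grid=np.linspace(-1, 1, 21)))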
Priming and the guidance by visual and categorical templates in visual search.
Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L
2014-01-01
Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if the priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.
Attention and implicit memory in the category-verification and lexical decision tasks.
Mulligan, Neil W; Peterson, Daniel
2008-05-01
Prior research on implicit memory appeared to support 3 generalizations: Conceptual tests are affected by divided attention, perceptual tasks are affected by certain divided-attention manipulations, and all types of priming are affected by selective attention. These generalizations are challenged in experiments using the implicit tests of category verification and lexical decision. First, both tasks were unaffected by divided-attention tasks known to impact other priming tasks. Second, both tasks were unaffected by a manipulation of selective attention in which colored words were either named or their colors identified. Thus, category verification, unlike other conceptual tasks, appears unaffected by divided attention and some selective-attention tasks; and lexical decision, unlike other perceptual tasks, appears unaffected by a difficult divided-attention task and some selective-attention tasks. Finally, both tasks were affected by a selective-attention task in which attention was manipulated across objects (rather than within objects), indicating some susceptibility to selective attention. The results contradict an analysis on the basis of the conceptual-perceptual distinction and other more specific hypotheses but are consistent with the distinction between production and identification priming.
Perceptual training effects on anticipation of direct and deceptive 7-m throws in handball.
Alsharji, Khaled E; Wade, Michael G
2016-01-01
We examined the effectiveness of perceptual training on the performance of handball goalkeepers when anticipating the direction of both direct and deceptive 7-m throws. Skilled goalkeepers were assigned equally to three matched-ability groups based on their pre-test performance: a perceptual training group (n = 14) received video-based perceptual training, a placebo training group (n = 14) received video-based regular training and a control group received no training. Participants in the perceptual training group significantly improved their performance compared to both placebo and control groups; however, anticipation of deceptive throws improved less than for direct throws. The results confirm that although anticipating deception in handball is a challenging task for goalkeepers, task-specific perceptual training can minimise its effect and improve performance.
True or false: do 5-year-olds understand belief?
Fabricius, William V; Boyer, Ty W; Weimer, Amy A; Carroll, Kathleen
2010-11-01
In 3 studies (N = 188) we tested the hypothesis that children use a perceptual access approach to reason about mental states before they understand beliefs. The perceptual access hypothesis predicts a U-shaped developmental pattern of performance in true belief tasks, in which 3-year-olds who reason about reality should succeed, 4- to 5-year-olds who use perceptual access reasoning should fail, and older children who use belief reasoning should succeed. The results of Study 1 revealed the predicted pattern in 2 different true belief tasks. The results of Study 2 disconfirmed several alternate explanations based on possible pragmatic and inhibitory demands of the true belief tasks. In Study 3, we compared 2 methods of classifying individuals according to which 1 of the 3 reasoning strategies (reality reasoning, perceptual access reasoning, belief reasoning) they used. The 2 methods gave converging results. Both methods indicated that the majority of children used the same approach across tasks and that it was not until after 6 years of age that most children reasoned about beliefs. We conclude that because most prior studies have failed to detect young children's use of perceptual access reasoning, they have overestimated their understanding of false beliefs. We outline several theoretical implications that follow from the perceptual access hypothesis.
Perceptual Processing Affects Conceptual Processing
ERIC Educational Resources Information Center
van Dantzig, Saskia; Pecher, Diane; Zeelenberg, Rene; Barsalou, Lawrence W.
2008-01-01
According to the Perceptual Symbols Theory of cognition (Barsalou, 1999), modality-specific simulations underlie the representation of concepts. A strong prediction of this view is that perceptual processing affects conceptual processing. In this study, participants performed a perceptual detection task and a conceptual property-verification task…
High perceptual load leads to both reduced gain and broader orientation tuning
Stolte, Moritz; Bahrami, Bahador; Lavie, Nilli
2014-01-01
Due to its limited capacity, visual perception depends on the allocation of attention. The resultant phenomena of inattentional blindness, accompanied by reduced sensory visual cortex response to unattended stimuli in conditions of high perceptual load in the attended task, are now well established (Lavie, 2005; Lavie, 2010, for reviews). However, the underlying mechanisms for these effects remain to be elucidated. Specifically, is reduced perceptual processing under high perceptual load a result of reduced sensory signal gain, broader tuning, or both? We examined this question with psychophysical measures of orientation tuning under different levels of perceptual load in the task performed. Our results show that increased perceptual load leads to both reduced sensory signal and broadening of tuning. These results clarify the effects of attention on elementary visual perception and suggest that high perceptual load is critical for attentional effects on sensory tuning. PMID:24610952
Neural correlates of auditory scene analysis and perception
Cohen, Yale E.
2014-01-01
The auditory system is designed to transform acoustic information from low-level sensory representations into perceptual representations. These perceptual representations are the computational result of the auditory system's ability to group and segregate spectral, spatial and temporal regularities in the acoustic environment into stable perceptual units (i.e., sounds or auditory objects). Current evidence suggests that the cortex--specifically, the ventral auditory pathway--is responsible for the computations most closely related to perceptual representations. Here, we discuss how the transformations along the ventral auditory pathway relate to auditory percepts, with special attention paid to the processing of vocalizations and categorization, and explore recent models of how these areas may carry out these computations. PMID:24681354
Fitting perception in and to cognition.
Goldstone, Robert L; de Leeuw, Joshua R; Landy, David H
2015-02-01
Perceptual modules adapt at evolutionary, lifelong, and moment-to-moment temporal scales to better serve the informational needs of cognizers. Perceptual learning is a powerful way for an individual to become tuned to frequently recurring patterns in its specific local environment that are pertinent to its goals without requiring costly executive control resources to be deployed. Mechanisms like predictive coding, categorical perception, and action-informed vision allow our perceptual systems to interface well with cognition by generating perceptual outputs that are systematically guided by how they will be used. In classic conceptions of perceptual modules, people have access to the modules' outputs but no ability to adjust their internal workings. However, humans routinely and strategically alter their perceptual systems via training regimes that have predictable and specific outcomes. In fact, employing a combination of strategic and automatic devices for adapting perception is one of the most promising approaches to improving cognition. Copyright © 2014 Elsevier B.V. All rights reserved.
Chen, Wei-Ying; Wu, Sheng K; Song, Tai-Fen; Chou, Kuei-Ming; Wang, Kuei-Yuan; Chang, Yao-Ching; Goodbourn, Patrick T
2016-12-07
The specific demands of a combat-sport discipline may be reflected in the perceptual-motor performance of its athletes. Taekwondo, which emphasizes kicking, might require faster perceptual processing to compensate for longer latencies to initiate lower-limb movements and to give rapid visual feedback for dynamic postural control, while Karate, which emphasizes both striking with the hands and kicking, might require exceptional eye-hand coordination and fast perceptual processing. In samples of 38 Taekwondo athletes (16 females, 22 males; mean age = 19.9 years, SD = 1.2), 24 Karate athletes (9 females, 15 males; mean age = 18.9 years, SD = 0.9), and 35 Nonathletes (20 females, 15 males; mean age = 20.6 years, SD = 1.5), we measured eye-hand coordination with the Finger-Nose-Finger task, and both perceptual-processing speed and attentional control with the Covert Orienting of Visual Attention (COVAT) task. Eye-hand coordination was significantly better for Karate athletes than for Taekwondo athletes and Nonathletes, but reaction times for the upper extremities in the COVAT task (indicative of perceptual-processing speed) were faster for Taekwondo athletes than for Karate athletes and Nonathletes. In addition, we found no significant difference among groups in attentional control, as indexed by the reaction-time cost of an invalid cue in the COVAT task. The results suggest that athletes in different combat sports exhibit distinct profiles of perceptual-motor performance. © The Author(s) 2016.
Neurofeedback training of gamma band oscillations improves perceptual processing.
Salari, Neda; Büchel, Christian; Rose, Michael
2014-10-01
In this study, a noninvasive electroencephalography-based neurofeedback method is applied to train volunteers to deliberately increase gamma band oscillations (40 Hz) in the visual cortex. Gamma band oscillations in the visual cortex play a functional role in perceptual processing. In a previous study, we were able to demonstrate that gamma band oscillations prior to stimulus presentation have a significant influence on perceptual processing of visual stimuli. In the present study, we aimed to investigate longer lasting effects of gamma band neurofeedback training on perceptual processing. For this purpose, a feedback group was trained to modulate oscillations in the gamma band, while a control group participated in a task with an identical design setting but without gamma band feedback. Before and after training, both groups participated in a perceptual object detection task and a spatial attention task. Our results clearly revealed that only the feedback group but not the control group exhibited a visual processing advantage and an increase in oscillatory gamma band activity in the pre-stimulus period of the processing of the visual object stimuli after the neurofeedback training. Results of the spatial attention task showed no difference between the groups, which underlines the specific role of gamma band oscillations for perceptual processing. In summary, our results show that modulation of gamma band activity selectively affects perceptual processing and therefore supports the relevant role of gamma band activity for this specific process. Furthermore, our results demonstrate the eligibility of gamma band oscillations as a valuable tool for neurofeedback applications.
When action is not enough: tool-use reveals tactile-dependent access to Body Schema.
Cardinali, L; Brozzoli, C; Urquizar, C; Salemme, R; Roy, A C; Farnè, A
2011-11-01
Proper motor control of our own body implies a reliable representation of body parts. This information is supposed to be stored in the Body Schema (BS), a body representation that appears separate from a more perceptual body representation, the Body Image (BI). The dissociation between BS for action and BI for perception, originally based on neuropsychological evidence, has recently become the focus of behavioural studies in physiological conditions. By inducing the rubber hand illusion in healthy participants, Kammers et al. (2009) showed perceptual changes attributable to the BI to which the BS, as indexed via motor tasks, was immune. To more definitively support the existence of dissociable body representations in physiological conditions, here we tested for the opposite dissociation, namely, whether a tool-use paradigm would induce a functional update of the BS (via a motor localization task) without affecting the BI (via a perceptual localization task). Healthy subjects were required to localize three anatomical landmarks on their right arm, before and after using the same arm to control a tool. In addition to this classical task-dependency approach, we assessed whether preferential access to the BS could also depend upon the way positional information about forearm targets is provided, to subsequently execute the same task. To this aim, participants performed either verbally or tactually driven versions of the motor and perceptual localization tasks. Results showed that both the motor and perceptual tasks were sensitive to the update of the forearm representation, but only when the localization task (perceptual or motor) was driven by a tactile input. This pattern reveals that the motor output is not sufficient per se, but has to be coupled with tactually mediated information to guarantee access to the BS. These findings shed new light on the action-perception models of body representations and underline how functional plasticity may be a useful tool to clarify their operational definition. Copyright © 2011 Elsevier Ltd. All rights reserved.
Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity
Gibney, Kyla D.; Aligbe, Enimielen; Eggleston, Brady A.; Nunes, Sarah R.; Kerkhoff, Willa G.; Dean, Cassandra L.; Kwakye, Leslie D.
2017-01-01
The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller’s inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information. PMID:28163675
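The race-model test mentioned above (Miller's inequality) has a standard form: at every time t the multisensory cumulative RT distribution may not exceed the sum of the two unisensory distributions, F_AV(t) <= F_A(t) + F_V(t). A minimal sketch of the violation measure, using hypothetical RT data rather than the study's, might look as follows.

import numpy as np

def ecdf(rts, times):
    # Empirical cumulative RT distribution evaluated at the given time points.
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, times, side="right") / len(rts)

def race_model_violation(rt_av, rt_a, rt_v, times):
    # Positive area where F_AV(t) exceeds the race-model bound F_A(t) + F_V(t) (Miller, 1982).
    excess = ecdf(rt_av, times) - np.minimum(1.0, ecdf(rt_a, times) + ecdf(rt_v, times))
    return np.sum(np.clip(excess, 0, None)) * (times[1] - times[0])

# Hypothetical reaction times in ms; multisensory trials are faster than either unisensory condition.
rng = np.random.default_rng(2)
rt_a = rng.normal(420, 60, 200)
rt_v = rng.normal(430, 60, 200)
rt_av = rng.normal(350, 50, 200)
times = np.linspace(200, 700, 101)
print(f"race-model violation area: {race_model_violation(rt_av, rt_a, rt_v, times):.2f}")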
The role of features and configural processing in face-race classification
Zhao, Lun; Bentin, Shlomo
2011-01-01
We explored perceptual factors that might account for the other-race classification advantage (ORCA) in classifying faces by race. Testing Chinese participants in China and Israeli participants in Israel we show that: (a) The distinction between Chinese and Israeli faces is highly accurate even on the basis of isolated eyes or faces with eyes concealed, but full faces are categorized faster. (b) The ORCA is similarly robust for full faces and for face parts. (c) The ORCA was larger when the configuration of the inner-face components was distorted, reflecting delayed categorization of own-race distorted faces relative to own-race normally configured faces but no conspicuous distortion effect on other-race faces. These data demonstrate that perceptual factors can account for the ORCA independently of social bias. We suggest that one source of the ORCA in race categorization is the configural analysis applied by default while processing own-race but not other-race faces. PMID:22008980
Perceptual and conceptual information processing in schizophrenia and depression.
Dreben, E K; Fryer, J H; McNair, D M
1995-04-01
Schizophrenic patients (n = 20), depressive patients (n = 20), and normal adults (n = 20) were compared on global vs local analyses of perceptual information using tachistoscopic tasks and on top-down vs bottom-up conceptual processing using card-sort tasks. The schizophrenic group performed more poorly on tasks requiring either global analyses (counting lines when distracting circles were present) or top-down conceptual processing (rule learning) than they did on tasks requiring local analyses (counting heterogeneous lines) or bottom-up processing (attribute identification). The schizophrenic group appeared not to use conceptually guided processing. Normal adults showed the reverse pattern. The depressive group performed similarly to the schizophrenic group on perceptual tasks but closer to the normal group on conceptual tasks, thereby appearing to be less dependent on a particular information-processing strategy. These deficits in organizational strategy may be related to the use of available processing resources as well as the allocation of attention.
Subthalamic nucleus stimulation impairs emotional conflict adaptation in Parkinson's disease.
Irmen, Friederike; Huebl, Julius; Schroll, Henning; Brücke, Christof; Schneider, Gerd-Helge; Hamker, Fred H; Kühn, Andrea A
2017-10-01
The subthalamic nucleus (STN) occupies a strategic position in the motor network, slowing down responses in situations with conflicting perceptual input. Recent evidence suggests a role of the STN in emotion processing through strong connections with emotion recognition structures. As deep brain stimulation (DBS) of the STN in patients with Parkinson's disease (PD) inhibits monitoring of perceptual and value-based conflict, STN DBS may also interfere with emotional conflict processing. To assess a possible interference of STN DBS with emotional conflict processing, we used an emotional Stroop paradigm. Subjects categorized face stimuli according to their emotional expression while ignoring emotionally congruent or incongruent superimposed word labels. Eleven PD patients ON and OFF STN DBS and eleven age-matched healthy subjects conducted the task. We found conflict-induced response slowing in healthy controls and PD patients OFF DBS, but not ON DBS, suggesting that STN DBS decreases adaptation to within-trial conflict. OFF DBS, patients showed more conflict-induced slowing for negative conflict stimuli, which was diminished by STN DBS. Computational modelling of the STN's influence on conflict adaptation indicated that DBS interferes by increasing baseline activity. © The Author (2017). Published by Oxford University Press.
Perceptual and affective mechanisms in facial expression recognition: An integrative review.
Calvo, Manuel G; Nummenmaa, Lauri
2016-09-01
Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.
Neural correlates of context-dependent feature conjunction learning in visual search tasks.
Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U
2016-06-01
Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.
The Competitive Influences of Perceptual Load and Working Memory Guidance on Selective Attention
Tan, Jinfeng; Zhao, Yuanfang; Wang, Lijun; Tian, Xia; Cui, Yan; Yang, Qian; Pan, Weigang; Zhao, Xiaoyue; Chen, Antao
2015-01-01
The perceptual load theory in selective attention literature proposes that the interference from task-irrelevant distractor is eliminated when perceptual capacity is fully consumed by task-relevant information. However, the biased competition model suggests that the contents of working memory (WM) can guide attentional selection automatically, even when this guidance is detrimental to visual search. An intriguing but unsolved question is what will happen when selective attention is influenced by both perceptual load and WM guidance. To study this issue, behavioral performances and event-related potentials (ERPs) were recorded when participants were presented with a cue to either identify or hold in memory and had to perform a visual search task subsequently, under conditions of low or high perceptual load. Behavioural data showed that high perceptual load eliminated the attentional capture by WM. The ERP results revealed an obvious WM guidance effect in P1 component with invalid trials eliciting larger P1 than neutral trials, regardless of the level of perceptual load. The interaction between perceptual load and WM guidance was significant for the posterior N1 component. The memory guidance effect on N1 was eliminated by high perceptual load. Standardized Low Resolution Electrical Tomography Analysis (sLORETA) showed that the WM guidance effect and the perceptual load effect on attention can be localized into the occipital area and parietal lobe, respectively. Merely identifying the cue produced no effect on the P1 or N1 component. These results suggest that in selective attention, the information held in WM could capture attention at the early stage of visual processing in the occipital cortex. Interestingly, this initial capture of attention by WM could be modulated by the level of perceptual load and the parietal lobe mediates target selection at the discrimination stage. PMID:26098079
Francis, Alexander L
2010-02-01
Perception of speech in competing speech is facilitated by spatial separation of the target and distracting speech, but this benefit may arise at either a perceptual or a cognitive level of processing. Load theory predicts different effects of perceptual and cognitive (working memory) load on selective attention in flanker task contexts, suggesting that this paradigm may be used to distinguish levels of interference. Two experiments examined interference from competing speech during a word recognition task under different perceptual and working memory loads in a dual-task paradigm. Listeners identified words produced by a talker of one gender while ignoring a talker of the other gender. Perceptual load was manipulated using a nonspeech response cue, with response conditional upon either one or two acoustic features (pitch and modulation). Memory load was manipulated with a secondary task consisting of one or six visually presented digits. In the first experiment, the target and distractor were presented at different virtual locations (0 degrees and 90 degrees, respectively), whereas in the second, all the stimuli were presented from the same apparent location. Results suggest that spatial cues improve resistance to distraction in part by reducing working memory demand.
The Role of Response Bias in Perceptual Learning
2015-01-01
Sensory judgments improve with practice. Such perceptual learning is often thought to reflect an increase in perceptual sensitivity. However, it may also represent a decrease in response bias, with unpracticed observers acting in part on a priori hunches rather than sensory evidence. To examine whether this is the case, 55 observers practiced making a basic auditory judgment (yes/no amplitude-modulation detection or forced-choice frequency/amplitude discrimination) over multiple days. With all tasks, bias was present initially, but decreased with practice. Notably, this was the case even on supposedly “bias-free,” 2-alternative forced-choice, tasks. In those tasks, observers did not favor the same response throughout (stationary bias), but did favor whichever response had been correct on previous trials (nonstationary bias). Means of correcting for bias are described. When applied, these showed that at least 13% of perceptual learning on a forced-choice task was due to reduction in bias. In other situations, changes in bias were shown to obscure the true extent of learning, with changes in estimated sensitivity increasing once bias was corrected for. The possible causes of bias and the implications for our understanding of perceptual learning are discussed. PMID:25867609
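One standard way to separate sensitivity from response bias in a yes/no detection task, in the spirit of the corrections discussed above (the abstract does not specify the authors' exact method, so this is only an illustrative sketch), is the signal detection decomposition into d' and criterion c.

from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    # Sensitivity d' and criterion c from a yes/no detection table (standard SDT formulas).
    # A small log-linear correction keeps the rates away from 0 and 1 so the z-transform stays finite.
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(h) - norm.ppf(f)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))
    return d_prime, criterion

# Hypothetical counts: an unpracticed observer with a liberal "yes" bias (negative criterion).
print(sdt_indices(hits=45, misses=5, false_alarms=30, correct_rejections=20))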
Neurally Constrained Modeling of Perceptual Decision Making
ERIC Educational Resources Information Center
Purcell, Braden A.; Heitz, Richard P.; Cohen, Jeremiah Y.; Schall, Jeffrey D.; Logan, Gordon D.; Palmeri, Thomas J.
2010-01-01
Stochastic accumulator models account for response time in perceptual decision-making tasks by assuming that perceptual evidence accumulates to a threshold. The present investigation mapped the firing rate of frontal eye field (FEF) visual neurons onto perceptual evidence and the firing rate of FEF movement neurons onto evidence accumulation to…
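The core idea of a stochastic accumulator, noisy evidence summed over time until a response threshold is crossed, can be sketched in a few lines. This is a deliberately simplified, hypothetical single-accumulator version, not the neurally constrained FEF model described in the article.

import numpy as np

def accumulate_to_threshold(drift, threshold, noise_sd, dt=0.001, max_t=2.0, rng=None):
    # One trial: noisy evidence is integrated each time step until it reaches the threshold;
    # the crossing time is the model's predicted response time (capped at max_t).
    if rng is None:
        rng = np.random.default_rng()
    evidence, t = 0.0, 0.0
    while evidence < threshold and t < max_t:
        evidence += drift * dt + rng.normal(0.0, noise_sd * np.sqrt(dt))
        t += dt
    return t

rng = np.random.default_rng(3)
rts = [accumulate_to_threshold(drift=1.5, threshold=1.0, noise_sd=0.8, rng=rng) for _ in range(500)]
print(f"mean simulated RT: {np.mean(rts) * 1000:.0f} ms")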
Cui, Xiaoyu; Gao, Chuanji; Zhou, Jianshe; Guo, Chunyan
2016-09-28
It has been widely shown that recognition memory includes two distinct retrieval processes: familiarity and recollection. Many studies have shown that recognition memory can be facilitated when there is a perceptual match between the studied and the tested items. Most event-related potential studies have explored the perceptual match effect on familiarity on the basis of the hypothesis that the specific event-related potential component associated with familiarity is the FN400 (300-500 ms mid-frontal effect). However, it is currently unclear whether the FN400 indexes familiarity or conceptual implicit memory. In addition, on the basis of the findings of a previous study, the so-called perceptual manipulations in previous studies may also involve some conceptual alterations. Therefore, we sought to determine the influence of perceptual manipulation by color changes on recognition memory when the perceptual or the conceptual processes were emphasized. Specifically, different instructions (perceptually or conceptually oriented) were provided to the participants. The results showed that color changes may significantly affect overall recognition memory behaviorally and that congruent items were recognized with a higher accuracy rate than incongruent items in both tasks, but no corresponding neural changes were found. Despite the evident familiarity shown in the two tasks (the behavioral performance of recognition memory was much higher than at the chance level), the FN400 effect was found in conceptually oriented tasks, but not perceptually oriented tasks. It is thus notable that no FN400 effect was induced even though the color manipulation affected recognition memory behaviorally, as in previous studies. Our findings of the FN400 effect for the conceptual but not perceptual condition support the explanation that the FN400 effect indexes conceptual implicit memory.
Ginat-Frolich, Rivkah; Klein, Zohar; Katz, Omer; Shechner, Tomer
2017-06-01
Generalization is an adaptive learning mechanism, but it can be maladaptive when it occurs in excess. A novel perceptual discrimination training task was therefore designed to moderate fear overgeneralization. We hypothesized that improvement in basic perceptual discrimination would translate into lower fear overgeneralization in affective cues. Seventy adults completed a fear-conditioning task prior to being allocated into training or placebo groups. Predesignated geometric shape pairs were constructed for the training task. A target shape from each pair was presented. Thereafter, participants in the training group were shown both shapes and asked to identify the image that differed from the target. Placebo task participants only indicated the location of each shape on the screen. All participants then viewed new geometric pairs and indicated whether they were identical or different. Finally, participants completed a fear generalization test consisting of perceptual morphs ranging from the CS + to the CS-. Fear-conditioning was observed through physiological and behavioural measures. Furthermore, the training group performed better than the placebo group on the assessment task and exhibited decreased fear generalization in response to threat/safety cues. The findings offer evidence for the effectiveness of the novel discrimination training task, setting the stage for future research with clinical populations. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
McMurray, Bob; Jongman, Allard
2011-01-01
Most theories of categorization emphasize how continuous perceptual information is mapped to categories. However, equally important are the informational assumptions of a model, the type of information subserving this mapping. This is crucial in speech perception where the signal is variable and context dependent. This study assessed the…
Perceptual Visual Grouping under Inattention: Electrophysiological Functional Imaging
ERIC Educational Resources Information Center
Razpurker-Apfeld, Irene; Pratt, Hillel
2008-01-01
Two types of perceptual visual grouping, differing in complexity of shape formation, were examined under inattention. Fourteen participants performed a similarity judgment task concerning two successive briefly presented central targets surrounded by task-irrelevant simple and complex grouping patterns. Event-related potentials (ERPs) were…
Identification and discrimination of Spanish front vowels
NASA Astrophysics Data System (ADS)
Castellanos, Isabel; Lopez-Bascuas, Luis E.
2004-05-01
The idea that vowels are perceived less categorically than consonants is widely accepted. Ades [Psychol. Rev. 84, 524-530 (1977)] tried to explain this fact on the basis of the Durlach and Braida [J. Acoust. Soc. Am. 46, 372-383 (1969)] theory of intensity resolution. Since vowels seem to cover a broader perceptual range, context-coding noise for vowels should be greater than for consonants, leading to less categorical performance on the vocalic segments. However, relatively recent work by Macmillan et al. [J. Acoust. Soc. Am. 84, 1262-1280 (1988)] has cast doubt on the assumption of different perceptual ranges for vowels and consonants even though context variance is acknowledged to be greater for the former. A possibility is that context variance increases as the number of long-term phonemic categories increases. To test this hypothesis we focused on Spanish as the target language. Spanish has fewer vowel categories than English and the implication is that Spanish vowels will be more categorically perceived. Identification and discrimination experiments were conducted on a synthetic /i/-/e/ continuum and the obtained functions were studied to assess whether Spanish vowels are more categorically perceived than English vowels. The results are discussed in the context of different theories of speech perception.
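For readers unfamiliar with the Durlach and Braida framework cited above, its central claim can be summarized (in simplified notation; the original papers should be consulted for the exact formulation) as discriminability being limited by two independent noise sources:

    d'_effective = (mu_2 - mu_1) / sqrt(sigma_sensory^2 + sigma_context^2)

where the context-coding noise sigma_context grows with the perceptual range spanned by the stimulus set. On this reading, a wider range (or, as hypothesized in the abstract above, a larger long-term category inventory) inflates sigma_context and flattens identification and discrimination functions, yielding less categorical-looking perception.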
Francis, Alexander L.; Kaganovich, Natalya; Driscoll-Huber, Courtney
2008-01-01
In English, voiced and voiceless syllable-initial stop consonants differ in both fundamental frequency at the onset of voicing (onset F0) and voice onset time (VOT). Although both correlates, alone, can cue the voicing contrast, listeners weight VOT more heavily when both are available. Such differential weighting may arise from differences in the perceptual distance between voicing categories along the VOT versus onset F0 dimensions, or it may arise from a bias to pay more attention to VOT than to onset F0. The present experiment examines listeners’ use of these two cues when classifying stimuli in which perceptual distance was artificially equated along the two dimensions. Listeners were also trained to categorize stimuli based on one cue at the expense of another. Equating perceptual distance eliminated the expected bias toward VOT before training, but successfully learning to base decisions more on VOT and less on onset F0 was easier than vice versa. Perceptual distance along both dimensions increased for both groups after training, but only VOT-trained listeners showed a decrease in Garner interference. Results lend qualified support to an attentional model of phonetic learning in which learning involves strategic redeployment of selective attention across integral acoustic cues. PMID:18681610
Tactile perceptual learning: learning curves and transfer to the contralateral finger.
Kaas, Amanda L; van de Ven, Vincent; Reithler, Joel; Goebel, Rainer
2013-02-01
Tactile perceptual learning has been shown to improve performance on tactile tasks, but there is no agreement about the extent of transfer to untrained skin locations. The lack of such transfer is often seen as a behavioral index of the contribution of early somatosensory brain regions. Moreover, the time course of improvements has never been described explicitly. Sixteen subjects were trained on the Ludvigh task (a tactile vernier task) on four subsequent days. On the fifth day, transfer of learning to the non-trained contralateral hand was tested. In five subjects, we explored to what extent training effects were retained approximately 1.5 years after the final training session, expecting to find long-term retention of learning effects after training. Results showed that tactile perceptual learning mainly occurred offline, between sessions. Training effects did not transfer initially, but became fully available to the untrained contralateral hand after a few additional training runs. After 1.5 years, training effects were not fully washed out and could be recuperated within a single training session. Interpreted in the light of theories of visual perceptual learning, these results suggest that tactile perceptual learning is not fundamentally different from visual perceptual learning, but might proceed at a slower pace due to procedural and task differences, thus explaining the apparent divergence in the amount of transfer and long-term retention.
A Cognitive Modeling Approach to Strategy Formation in Dynamic Decision Making
Prezenski, Sabine; Brechmann, André; Wolff, Susann; Russwinkel, Nele
2017-01-01
Decision-making is a high-level cognitive process based on cognitive processes like perception, attention, and memory. Real-life situations require series of decisions to be made, with each decision depending on previous feedback from a potentially changing environment. To gain a better understanding of the underlying processes of dynamic decision-making, we applied the method of cognitive modeling on a complex rule-based category learning task. Here, participants first needed to identify the conjunction of two rules that defined a target category and later adapt to a reversal of feedback contingencies. We developed an ACT-R model for the core aspects of this dynamic decision-making task. An important aim of our model was that it provides a general account of how such tasks are solved and, with minor changes, is applicable to other stimulus materials. The model was implemented as a mixture of an exemplar-based and a rule-based approach which incorporates perceptual-motor and metacognitive aspects as well. The model solves the categorization task by first trying out one-feature strategies and then, as a result of repeated negative feedback, switching to two-feature strategies. Overall, this model solves the task in a similar way as participants do, including generally successful initial learning as well as reversal learning after the change of feedback contingencies. Moreover, the fact that not all participants were successful in the two learning phases is also reflected in the modeling data. However, we found a larger variance and a lower overall performance of the modeling data as compared to the human data which may relate to perceptual preferences or additional knowledge and rules applied by the participants. In a next step, these aspects could be implemented in the model for a better overall fit. In view of the large interindividual differences in decision performance between participants, additional information about the underlying cognitive processes from behavioral, psychobiological and neurophysiological data may help to optimize future applications of this model such that it can be transferred to other domains of comparable dynamic decision tasks. PMID:28824512
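The control strategy described above, trying one-feature rules first, moving to a conjunction after repeated negative feedback, and re-adapting once the feedback contingencies reverse, can be illustrated with a short sketch. This is not ACT-R code; it is a minimal, hypothetical Python rendering of that switching logic with binary-feature stimuli.

import random

def run_task(n_trials=400, window=10, threshold=0.8, seed=0):
    # The agent cycles through candidate rules, ordered from simple to complex, and abandons
    # the current rule whenever its recent accuracy drops below the threshold (e.g., after a reversal).
    rng = random.Random(seed)
    rules = [
        lambda s: s[0] == 1, lambda s: s[1] == 1,        # one-feature rules
        lambda s: s == (1, 1), lambda s: s != (1, 1),    # conjunction and its negation
    ]
    current, recent, correct_total = 0, [], 0
    for trial in range(n_trials):
        stimulus = (rng.randint(0, 1), rng.randint(0, 1))
        in_target = (stimulus == (1, 1))
        if trial >= n_trials // 2:                        # reversal: feedback contingencies flip
            in_target = not in_target
        correct = rules[current](stimulus) == in_target
        correct_total += correct
        recent = (recent + [correct])[-window:]
        if len(recent) == window and sum(recent) / window < threshold:
            current = (current + 1) % len(rules)          # repeated negative feedback: try the next rule
            recent = []
    return correct_total / n_trials

print(f"overall accuracy across both phases: {run_task():.2f}")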
Trzcinski, Natalie K; Gomez-Ramirez, Manuel; Hsiao, Steven S
2016-09-01
Continuous training enhances perceptual discrimination and promotes neural changes in areas encoding the experienced stimuli. This type of experience-dependent plasticity has been demonstrated in several sensory and motor systems. Particularly, non-human primates trained to detect consecutive tactile bar indentations across multiple digits showed expanded excitatory receptive fields (RFs) in somatosensory cortex. However, the perceptual implications of these anatomical changes remain undetermined. Here, we trained human participants for 9 days on a tactile task that promoted expansion of multi-digit RFs. Participants were required to detect consecutive indentations of bar stimuli spanning multiple digits. Throughout the training regime we tracked participants' discrimination thresholds on spatial (grating orientation) and temporal tasks on the trained and untrained hands in separate sessions. We hypothesized that training on the multi-digit task would decrease perceptual thresholds on tasks that require stimulus processing across multiple digits, while also increasing thresholds on tasks requiring discrimination on single digits. We observed an increase in orientation thresholds on a single digit. Importantly, this effect was selective for the stimulus orientation and hand used during multi-digit training. We also found that temporal acuity between digits improved across trained digits, suggesting that discriminating the temporal order of multi-digit stimuli can transfer to temporal discrimination of other tactile stimuli. These results suggest that experience-dependent plasticity following perceptual learning improves and interferes with tactile abilities in manners predictive of the task and stimulus features used during training. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Trzcinski, Natalie K; Gomez-Ramirez, Manuel; Hsiao, Steven S.
2016-01-01
Continuous training enhances perceptual discrimination and promotes neural changes in areas encoding the experienced stimuli. This type of experience-dependent plasticity has been demonstrated in several sensory and motor systems. Particularly, non-human primates trained to detect consecutive tactile bar indentations across multiple digits showed expanded excitatory receptive fields (RFs) in somatosensory cortex. However, the perceptual implications of these anatomical changes remain undetermined. Here, we trained human participants for nine days on a tactile task that promoted expansion of multi-digit RFs. Participants were required to detect consecutive indentations of bar stimuli spanning multiple digits. Throughout the training regime we tracked participants’ discrimination thresholds on spatial (grating orientation) and temporal tasks on the trained and untrained hands in separate sessions. We hypothesized that training on the multi-digit task would decrease perceptual thresholds on tasks that require stimulus processing across multiple digits, while also increasing thresholds on tasks requiring discrimination on single digits. We observed an increase in orientation thresholds on a single digit. Importantly, this effect was selective for the stimulus orientation and hand used during multi-digit training. We also found that temporal acuity between digits improved across trained digits, suggesting that discriminating the temporal order of multi-digit stimuli can transfer to temporal discrimination of other tactile stimuli. These results suggest that experience-dependent plasticity following perceptual learning improves and interferes with tactile abilities in manners predictive of the task and stimulus features used during training. PMID:27422224
Wong, Becky; Szücs, Dénes
2013-11-01
We investigated whether the mere presentation of single-digit Arabic numbers activates their magnitude representations using a visually-presented symbolic same-different task for 20 adults and 15 children. Participants saw two single-digit Arabic numbers on a screen and judged whether the numbers were the same or different. We examined whether reaction time in this task was primarily driven by (objective or subjective) perceptual similarity, or by the numerical difference between the two digits. We reasoned that, if Arabic numbers automatically activate magnitude representations, a numerical function would best predict reaction time; but if Arabic numbers do not automatically activate magnitude representations, a perceptual function would best predict reaction time. Linear regressions revealed that a perceptual function, specifically, subjective visual similarity, was the best and only significant predictor of reaction time in adults and in children. These data strongly suggest that, in this task, single-digit Arabic numbers do not necessarily automatically activate magnitude representations in adults or in children. As the first study to date to explicitly study the developmental importance of perceptual factors in the symbolic same-different task, we found no significant differences between adults and children in their reliance on perceptual information in this task. Based on our findings, we propose that visual properties may play a key role in symbolic number judgements. © 2013. Published by Elsevier B.V. All rights reserved.
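The regression comparison described in this abstract can be illustrated with a short Python sketch. The data below are invented and the helper r_squared is a hypothetical convenience function; the point is only to show how the fit of a perceptual predictor (subjective similarity) and a numerical predictor (digit distance) to reaction times might be compared.

    import numpy as np

    def r_squared(x, y):
        # fit y = a*x + b by least squares and return the variance explained
        a, b = np.polyfit(x, y, 1)
        resid = y - (a * x + b)
        return 1 - resid.var() / y.var()

    # hypothetical per-pair predictors and mean reaction times (ms)
    similarity = np.array([0.9, 0.7, 0.6, 0.4, 0.3, 0.2])  # subjective visual similarity
    distance   = np.array([1,   2,   3,   4,   6,   8])    # numerical difference
    rt         = np.array([620, 600, 585, 560, 555, 550])

    print("R^2 (perceptual predictor):", r_squared(similarity, rt))
    print("R^2 (numerical predictor):", r_squared(distance, rt))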
Sarabi, Mitra Taghizadeh; Aoki, Ryuta; Tsumura, Kaho; Keerativittayayut, Ruedeerat; Jimura, Koji; Nakahara, Kiyoshi
2018-01-01
The neural mechanisms underlying visual perceptual learning (VPL) have typically been studied by examining changes in task-related brain activation after training. However, the relationship between post-task "offline" processes and VPL remains unclear. The present study examined this question by obtaining resting-state functional magnetic resonance imaging (fMRI) scans of human brains before and after a task-fMRI session involving visual perceptual training. During the task-fMRI session, participants performed a motion coherence discrimination task in which they judged the direction of moving dots with a coherence level that varied between trials (20, 40, and 80%). We found that stimulus-induced activation increased with motion coherence in the middle temporal cortex (MT+), a feature-specific region representing visual motion. On the other hand, stimulus-induced activation decreased with motion coherence in the dorsal anterior cingulate cortex (dACC) and bilateral insula, regions involved in decision making under perceptual ambiguity. Moreover, by comparing pre-task and post-task rest periods, we revealed that resting-state functional connectivity (rs-FC) with the MT+ was significantly increased after training in widespread cortical regions including the bilateral sensorimotor and temporal cortices. In contrast, rs-FC with the MT+ was significantly decreased in subcortical regions including the thalamus and putamen. Importantly, the training-induced change in rs-FC was observed only with the MT+, but not with the dACC or insula. Thus, our findings suggest that perceptual training induces plastic changes in offline functional connectivity specifically in brain regions representing the trained visual feature, emphasising the distinct roles of feature-representation regions and decision-related regions in VPL.
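A minimal sketch of seed-based resting-state functional connectivity, assuming simulated region-by-time data rather than the study's fMRI pipeline: rs-FC with a seed region (here a stand-in for MT+) is the correlation of its time series with every other region's time series, and the training-induced change is the post-minus-pre difference.

    import numpy as np

    rng = np.random.default_rng(0)
    n_timepoints, n_regions = 200, 10
    pre  = rng.standard_normal((n_timepoints, n_regions))
    post = rng.standard_normal((n_timepoints, n_regions))
    post[:, 1:4] += 0.5 * post[:, [0]]       # inject coupling with the seed (column 0)

    def seed_fc(data, seed=0):
        # Pearson correlation of the seed column with each region
        return np.corrcoef(data, rowvar=False)[seed]

    delta_fc = seed_fc(post) - seed_fc(pre)  # training-induced change per region
    print(np.round(delta_fc, 2))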
Escaping the recent past: Which stimulus dimensions influence proactive interference?
Craig, Kimberly S.; Berman, Marc G.; Jonides, John; Lustig, Cindy
2013-01-01
Proactive interference occurs when information from the past disrupts current processing and is a major source of confusion and errors in short-term memory (Wickens, Born & Allen, 1963). The present investigation examines potential boundary conditions for interference, testing the hypothesis that potential competitors must be similar along task-relevant dimensions to influence proactive interference effects. We manipulated both the type of task being completed (Experiments 1, 2 and 3) and dimensions of similarity irrelevant to the current task (Experiments 4 and 5) to determine how the recent presentation of a probe item would affect the speed with which participants could reject that item. Experiments 1, 2 and 3 contrasted short-term memory judgments, which require temporal information, with semantic and perceptual judgments, for which temporal information is irrelevant. In Experiments 4 and 5, task-irrelevant information (perceptual similarity) was manipulated within the recent probes task. We found that interference from past items affected short-term memory (STM) task performance but did not affect performance in semantic or perceptual judgment tasks. Conversely, similarity along a nominally-irrelevant perceptual dimension did not affect the magnitude of interference in STM tasks. Results are consistent with the view that items in STM are represented by noisy codes consisting of multiple dimensions, and that interference occurs when items are similar to each other and thus compete along the dimensions relevant to target selection. PMID:23297049
Escaping the recent past: which stimulus dimensions influence proactive interference?
Craig, Kimberly S; Berman, Marc G; Jonides, John; Lustig, Cindy
2013-07-01
Proactive interference occurs when information from the past disrupts current processing and is a major source of confusion and errors in short-term memory (STM; Wickens, Born, & Allen, Journal of Verbal Learning and Verbal Behavior, 2:440-445, 1963). The present investigation examines potential boundary conditions for interference, testing the hypothesis that potential competitors must be similar along task-relevant dimensions to influence proactive interference effects. We manipulated both the type of task being completed (Experiments 1, 2, and 3) and dimensions of similarity irrelevant to the current task (Experiments 4 and 5) to determine how the recent presentation of a probe item would affect the speed with which participants could reject that item. Experiments 1, 2, and 3 contrasted STM judgments, which require temporal information, with semantic and perceptual judgments, for which temporal information is irrelevant. In Experiments 4 and 5, task-irrelevant information (perceptual similarity) was manipulated within the recent probes task. We found that interference from past items affected STM task performance but did not affect performance in semantic or perceptual judgment tasks. Conversely, similarity along a nominally irrelevant perceptual dimension did not affect the magnitude of interference in STM tasks. Results are consistent with the view that items in STM are represented by noisy codes consisting of multiple dimensions and that interference occurs when items are similar to each other and, thus, compete along the dimensions relevant to target selection.
Categorization difficulty modulates the mediated route for response selection in task switching.
Schneider, Darryl W
2017-12-22
Conflict during response selection in task switching is indicated by the response congruency effect: worse performance for incongruent targets (requiring different responses across tasks) than for congruent targets (requiring the same response). The effect can be explained by dual-task processing in a mediated route for response selection, whereby targets are categorized with respect to both tasks. In the present study, the author tested predictions for the modulation of response congruency effects by categorization difficulty derived from a relative-speed-of-processing hypothesis. Categorization difficulty was manipulated for the relevant and irrelevant task dimensions in a novel spatial task-switching paradigm that involved judging the locations of target dots in a grid, without repetition of dot configurations. Response congruency effects were observed and they varied systematically with categorization difficulty (e.g., being larger when irrelevant categorization was easy than when it was hard). These results are consistent with the relative-speed-of-processing hypothesis and suggest that task-switching models that implement variations of the mediated route for response selection need to address the time course of categorization.
McBride, Dawn M; Abney, Drew H
2012-01-01
We examined multi-process (MP) and transfer-appropriate processing descriptions of prospective memory (PM). Three conditions were compared that varied the overlap in processing type (perceptual/conceptual) between the ongoing and PM tasks such that two conditions involved a match of perceptual processing and one condition involved a mismatch in processing (conceptual ongoing task/perceptual PM task). One of the matched processing conditions also created a focal PM task, whereas the other two conditions were considered non-focal (Einstein & McDaniel, 2005). PM task accuracy and ongoing task completion speed in baseline and PM task conditions were measured. Accuracy results indicated a higher PM task completion rate for the focal condition than the non-focal conditions, a finding that is consistent with predictions made by the MP view. However, reaction time (RT) analyses indicated that PM task cost did not differ across conditions once practice effects were considered. Thus, the PM accuracy results are consistent with an MP description of PM, but RT results did not support the MP view predictions regarding PM cost.
Cong, Lin-Juan; Wang, Ru-Jie; Yu, Cong; Zhang, Jun-Yun
2016-01-01
Visual perceptual learning is known to be specific to the trained retinal location, feature, and task. However, location and feature specificity can be eliminated by double-training or TPE training protocols, in which observers receive additional exposure to the transfer location or feature dimension via an irrelevant task besides the primary learning task. Here we tested whether these new training protocols could even make learning transfer across different tasks involving discrimination of basic visual features (e.g., orientation and contrast). Observers practiced a near-threshold orientation (or contrast) discrimination task. Following a TPE training protocol, they also received exposure to the transfer task via performing suprathreshold contrast (or orientation) discrimination in alternating blocks of trials in the same sessions. The results showed no evidence for significant learning transfer to the untrained near-threshold contrast (or orientation) discrimination task after discounting the pretest effects and the suprathreshold practice effects. These results thus do not support a hypothetical task-independent component in perceptual learning of basic visual features. They also set the boundary of the new training protocols in their capability to enable learning transfer.
Perceptual learning: top to bottom.
Amitay, Sygal; Zhang, Yu-Xuan; Jones, Pete R; Moore, David R
2014-06-01
Perceptual learning has traditionally been portrayed as a bottom-up phenomenon that improves encoding or decoding of the trained stimulus. Cognitive skills such as attention and memory are thought to drive, guide and modulate learning but are, with notable exceptions, not generally considered to undergo changes themselves as a result of training with simple perceptual tasks. Moreover, shifts in threshold are interpreted as shifts in perceptual sensitivity, with no consideration for non-sensory factors (such as response bias) that may contribute to these changes. Accumulating evidence from our own research and others shows that perceptual learning is a conglomeration of effects, with training-induced changes ranging from the lowest (noise reduction in the phase locking of auditory signals) to the highest (working memory capacity) level of processing, and includes contributions from non-sensory factors that affect decision making even on a "simple" auditory task such as frequency discrimination. We discuss our emerging view of learning as a process that increases the signal-to-noise ratio associated with perceptual tasks by tackling noise sources and inefficiencies that cause performance bottlenecks, and present some implications for training populations other than young, smart, attentive and highly-motivated college students. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
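For readers unfamiliar with the sensitivity/bias distinction raised here, the following are the standard signal-detection computations (not code from this paper): d' indexes perceptual sensitivity separately from the response criterion c, so a threshold shift can in principle reflect either.

    from statistics import NormalDist

    def dprime(hit_rate, fa_rate):
        # sensitivity: separation of signal and noise distributions in z-units
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    def criterion(hit_rate, fa_rate):
        # response bias: positive values indicate a conservative criterion
        z = NormalDist().inv_cdf
        return -0.5 * (z(hit_rate) + z(fa_rate))

    # example with invented hit and false-alarm rates
    print(dprime(0.85, 0.20), criterion(0.85, 0.20))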
Gong, Liang; Wang, JiHua; Yang, XuDong; Feng, Lei; Li, Xiu; Gu, Cui; Wang, MeiHong; Hu, JiaYun; Cheng, Huaidong
2016-01-01
Recent neuroimaging studies of implicit memory (IM) have revealed that different types of IM may be processed by different parts of the brain. However, studies have rarely examined which subtypes of IM processes are affected in patients with various brain injuries. Twenty patients with frontal lobe injury, 25 patients with occipital lobe injury, and 29 healthy controls (HC) were recruited for the study. Two subtypes of IM were investigated by using structurally parallel perceptual (picture identification task) and conceptual (category exemplar generation task) IM tests in the three groups, as well as explicit memory (EM) tests. The results indicated that the priming of conceptual IM and EM tasks in patients with frontal lobe injury was poorer than that observed in HC, while perceptual IM was identical between the two groups. By contrast, the priming of perceptual IM in patients with occipital lobe injury was poorer than that in HC, whereas the priming of conceptual IM and EM was similar to that in HC. This double dissociation between perceptual and conceptual IM across the brain areas implies that occipital lobes may participate in perceptual IM, while frontal lobes may be involved in processing conceptual memory. PMID:26793093
Integrated approaches to perceptual learning.
Jacobs, Robert A
2010-04-01
New technologies and new ways of thinking have recently led to rapid expansions in the study of perceptual learning. We describe three themes shared by many of the nine articles included in this topic on Integrated Approaches to Perceptual Learning. First, perceptual learning cannot be studied on its own because it is closely linked to other aspects of cognition, such as attention, working memory, decision making, and conceptual knowledge. Second, perceptual learning is sensitive to both the stimulus properties of the environment in which an observer exists and to the properties of the tasks that the observer needs to perform. Moreover, the environmental and task properties can be characterized through their statistical regularities. Finally, the study of perceptual learning has important implications for society, including implications for science education and medical rehabilitation. Contributed articles relevant to each theme are summarized. Copyright © 2010 Cognitive Science Society, Inc.
Wong, Becky; Szücs, Dénes
2013-01-01
We investigated whether the mere presentation of single-digit Arabic numbers activates their magnitude representations using a visually-presented symbolic same–different task for 20 adults and 15 children. Participants saw two single-digit Arabic numbers on a screen and judged whether the numbers were the same or different. We examined whether reaction time in this task was primarily driven by (objective or subjective) perceptual similarity, or by the numerical difference between the two digits. We reasoned that, if Arabic numbers automatically activate magnitude representations, a numerical function would best predict reaction time; but if Arabic numbers do not automatically activate magnitude representations, a perceptual function would best predict reaction time. Linear regressions revealed that a perceptual function, specifically, subjective visual similarity, was the best and only significant predictor of reaction time in adults and in children. These data strongly suggest that, in this task, single-digit Arabic numbers do not necessarily automatically activate magnitude representations in adults or in children. As the first study to date to explicitly study the developmental importance of perceptual factors in the symbolic same–different task, we found no significant differences between adults and children in their reliance on perceptual information in this task. Based on our findings, we propose that visual properties may play a key role in symbolic number judgements. PMID:24076332
[Perceptual categorization of emotional expression cued by one's back posture].
Sogon, S; Doi, K
1986-04-01
Subjects viewed 8 mm motion pictures showing, from a rear-view perspective, the bodily movements of male and female communicators who faced emotionally toned scenes. If the subjects detected some sign of emotional expression, they rated the relevance of the expression on a five-point scale. Varimax-rotated factor analysis yielded three factors: F1 rejection-acceptance, F2 avoidance-approach, and F3 sadness. Rejection comprised expressions of anger, disgust, and contempt: anger was categorized when a clenched fist with a forward, extended arm was observed, whereas disgust and contempt were categorized when a stationary posture was observed. Acceptance was categorized when signs of affection, anticipation, and acceptance were observed. Avoidance was categorized when signs of fear and surprise were observed. Typical fear was categorized when signs of freezing were observed, surprise was categorized by stepping back, and sadness by crouching and self-attachment.
Characterizing Perceptual Learning with External Noise
ERIC Educational Resources Information Center
Gold, Jason M.; Sekuler, Allison B.; Bennett, Patrick J.
2004-01-01
Performance in perceptual tasks often improves with practice. This effect is known as "perceptual learning," and it has been the source of a great deal of interest and debate over the course of the last century. Here, we consider the effects of perceptual learning within the context of signal detection theory. According to signal detection theory,…
Perry, Anat; Aviezer, Hillel; Goldstein, Pavel; Palgi, Sharon; Klein, Ehud; Shamay-Tsoory, Simone G
2013-11-01
The neuropeptide oxytocin (OT) has been repeatedly reported to play an essential role in the regulation of social cognition in humans in general, and specifically in enhancing the recognition of emotions from facial expressions. The latter was assessed in different paradigms that rely primarily on isolated and decontextualized emotional faces. However, recent evidence has indicated that the perception of basic facial expressions is not context invariant and can be categorically altered by context, especially body context, at early perceptual levels. Body context has a strong effect on our perception of emotional expressions, especially when the actual target face and the contextually expected face are perceptually similar. To examine whether and how OT affects emotion recognition, we investigated the role of OT in categorizing facial expressions in incongruent body contexts. Our results show that in the combined process of deciphering emotions from facial expressions and from context, OT gives an advantage to the face. This advantage is most evident when the target face and the contextually expected face are perceptually similar. Copyright © 2013 Elsevier Ltd. All rights reserved.
Analysis of a mammography teaching program based on an affordance design model.
Luo, Ping; Eikman, Edward A; Kealy, William; Qian, Wei
2006-12-01
The wide use of computer technology in education, particularly in mammogram reading, calls for evaluation of e-learning. Existing media-comparison studies, learner attitude evaluations, and performance tests are problematic. Based on an affordance design model, this study examined an existing e-learning program on mammogram reading. The selection criteria included content relatedness, representativeness, e-learning orientation, image quality, program completeness, and accessibility. A case study was conducted to examine the affordance features, functions, and presentations of the selected software. Data collection and analysis methods included interviews, protocol-based document analysis, and usability tests and inspection; some statistics were also calculated. The examination of PBE showed that this educational software provides a set of purpose-built tools. The learner can use these tools to optimize displays, scan images, compare different projections, mark regions of interest, construct a descriptive report, assess learning outcomes, and compare one's decisions with the experts' decisions. Further, PBE provides resources for the learner to construct knowledge and skills, including a categorized image library, a term-searching function, and teaching links. In addition, users found it easy to navigate and carry out tasks, and they reacted positively toward PBE's navigation system, instructional aids, layout, pace and flow of information, graphics, and other aspects of the presentation design. The software provides learners with cognitive tools that support their perceptual problem-solving processes and extend their capabilities. Learners can internalize the mental models used in mammogram reading through multiple perceptual triangulations, sensitization to relevant features, semantic description of mammogram findings, and expert-guided semantic report construction. The design of these cognitive tools and of the software interface matches findings and principles from human learning and instructional design. Working with PBE's case-based simulations and categorized gallery, learners can enrich their experience and transfer it to their jobs.
Neurological evidence linguistic processes precede perceptual simulation in conceptual processing.
Louwerse, Max; Hutchinson, Sterling
2012-01-01
There is increasing evidence from response time experiments that language statistics and perceptual simulations both play a role in conceptual processing. In an EEG experiment we compared neural activity in cortical regions commonly associated with linguistic processing and visual perceptual processing to determine to what extent symbolic and embodied accounts of cognition applied. Participants were asked to determine the semantic relationship of word pairs (e.g., sky - ground) or to determine their iconic relationship (i.e., if the presentation of the pair matched their expected physical relationship). A linguistic bias was found toward the semantic judgment task and a perceptual bias was found toward the iconicity judgment task. More importantly, conceptual processing involved activation in brain regions associated with both linguistic and perceptual processes. When comparing the relative activation of linguistic cortical regions with perceptual cortical regions, the effect sizes for linguistic cortical regions were larger than those for the perceptual cortical regions early in a trial with the reverse being true later in a trial. These results map upon findings from other experimental literature and provide further evidence that processing of concept words relies both on language statistics and on perceptual simulations, whereby linguistic processes precede perceptual simulation processes.
Neurological Evidence Linguistic Processes Precede Perceptual Simulation in Conceptual Processing
Louwerse, Max; Hutchinson, Sterling
2012-01-01
There is increasing evidence from response time experiments that language statistics and perceptual simulations both play a role in conceptual processing. In an EEG experiment we compared neural activity in cortical regions commonly associated with linguistic processing and visual perceptual processing to determine to what extent symbolic and embodied accounts of cognition applied. Participants were asked to determine the semantic relationship of word pairs (e.g., sky – ground) or to determine their iconic relationship (i.e., if the presentation of the pair matched their expected physical relationship). A linguistic bias was found toward the semantic judgment task and a perceptual bias was found toward the iconicity judgment task. More importantly, conceptual processing involved activation in brain regions associated with both linguistic and perceptual processes. When comparing the relative activation of linguistic cortical regions with perceptual cortical regions, the effect sizes for linguistic cortical regions were larger than those for the perceptual cortical regions early in a trial with the reverse being true later in a trial. These results map upon findings from other experimental literature and provide further evidence that processing of concept words relies both on language statistics and on perceptual simulations, whereby linguistic processes precede perceptual simulation processes. PMID:23133427
Perceptual other-race training reduces implicit racial bias.
Lebrecht, Sophie; Pierce, Lara J; Tarr, Michael J; Tanaka, James W
2009-01-01
Implicit racial bias denotes socio-cognitive attitudes towards other-race groups that are exempt from conscious awareness. In parallel, other-race faces are more difficult to differentiate relative to own-race faces--the "Other-Race Effect." To examine the relationship between these two biases, we trained Caucasian subjects to better individuate other-race faces and measured implicit racial bias for those faces both before and after training. Two groups of Caucasian subjects were exposed equally to the same African American faces in a training protocol run over 5 sessions. In the individuation condition, subjects learned to discriminate between African American faces. In the categorization condition, subjects learned to categorize faces as African American or not. For both conditions, both pre- and post-training we measured the Other-Race Effect using old-new recognition and implicit racial biases using a novel implicit social measure--the "Affective Lexical Priming Score" (ALPS). Subjects in the individuation condition, but not in the categorization condition, showed improved discrimination of African American faces with training. Concomitantly, subjects in the individuation condition, but not the categorization condition, showed a reduction in their ALPS. Critically, for the individuation condition only, the degree to which an individual subject's ALPS decreased was significantly correlated with the degree of improvement that subject showed in their ability to differentiate African American faces. Our results establish a causal link between the Other-Race Effect and implicit racial bias. We demonstrate that training that ameliorates the perceptual Other-Race Effect also reduces socio-cognitive implicit racial bias. These findings suggest that implicit racial biases are multifaceted, and include malleable perceptual skills that can be modified with relatively little training.
Wasserman, Edward A.; Brooks, Daniel I.; McMurray, Bob
2014-01-01
Might there be parallels between category learning in animals and word learning in children? To examine this possibility, we devised a new associative learning technique for teaching pigeons to sort 128 photographs of objects into 16 human language categories. We found that pigeons learned all 16 categories in parallel, they perceived the perceptual coherence of the different object categories, and they generalized their categorization behavior to novel photographs from the training categories. More detailed analyses of the factors that predict trial-by-trial learning implicated a number of factors that may shape learning. First, we found considerable trial-by-trial dependency of pigeons’ categorization responses, consistent with several recent studies that invoke this dependency to claim that humans acquire words via symbolic or inferential mechanisms; this finding suggests that such dependencies may also arise in associative systems. Second, our trial-by-trial analyses divulged seemingly irrelevant aspects of the categorization task, like the spatial location of the report responses, which influenced learning. Third, those trial-by-trial analyses also supported the possibility that learning may be determined both by strengthening correct stimulus-response associations and by weakening incorrect stimulus-response associations. The parallel between all these findings and important aspects of human word learning suggests that associative learning mechanisms may play a much stronger part in complex human behavior than is commonly believed. PMID:25497520
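The strengthening/weakening account mentioned in the final analysis can be illustrated with a toy associative learner in Python. The stimuli, learning rate, and noise level are invented; the sketch shows only the update rule, not the authors' trial-by-trial analysis.

    import random

    def train(stimulus_category, categories, n_trials=2000, lr=0.1, seed=1):
        """Strengthen the chosen S-R link after a correct response; weaken it after an error."""
        random.seed(seed)
        w = {s: {c: 0.0 for c in categories} for s in stimulus_category}
        for _ in range(n_trials):
            s = random.choice(list(stimulus_category))
            # noisy choice of the currently strongest response
            choice = max(categories, key=lambda c: w[s][c] + random.gauss(0, 0.1))
            if choice == stimulus_category[s]:
                w[s][choice] += lr * (1 - w[s][choice])  # strengthen correct association
            else:
                w[s][choice] -= lr * w[s][choice]        # weaken incorrect association
        return w

    weights = train({"photo_a": "cat", "photo_b": "dog", "photo_c": "dog"}, ["cat", "dog"])
    print(weights)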
Limited transfer of long-term motion perceptual learning with double training.
Liang, Ju; Zhou, Yifeng; Fahle, Manfred; Liu, Zili
2015-01-01
A significant recent development in visual perceptual learning research is the double training technique. With this technique, Xiao, Zhang, Wang, Klein, Levi, and Yu (2008) have found complete transfer in tasks that had previously been shown to be stimulus specific. The significance of this finding is that this technique has since been successful in all tasks tested, including motion direction discrimination. Here, we investigated whether or not this technique could generalize to longer-term learning, using the method of constant stimuli. Our task was learning to discriminate motion directions of random dots. The second leg of training was contrast discrimination along a new average direction of the same moving dots. We found that, although exposure of moving dots along a new direction facilitated motion direction discrimination, this partial transfer was far from complete. We conclude that, although perceptual learning is transferrable under certain conditions, stimulus specificity also remains an inherent characteristic of motion perceptual learning.
Perceptual learning effect on decision and confidence thresholds.
Solovey, Guillermo; Shalom, Diego; Pérez-Schuster, Verónica; Sigman, Mariano
2016-10-01
Practice can enhance perceptual sensitivity, a well-known phenomenon called perceptual learning. However, the effect of practice on subjective perception has received little attention. We approach this problem from a visual psychophysics and computational modeling perspective. In a sequence of visual search experiments, subjects significantly increased the ability to detect a "trained target". Before and after training, subjects performed two psychophysical protocols that parametrically vary the visibility of the "trained target": an attentional blink and a visual masking task. We found that confidence increased after learning only in the attentional blink task. Despite large differences in some observables and task settings, we identify common mechanisms for decision-making and confidence. Specifically, our behavioral results and computational model suggest that perceptual ability is independent of processing time, indicating that changes in early cortical representations are effective, and learning changes decision criteria to convey choice and confidence. Copyright © 2016 Elsevier Inc. All rights reserved.
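A schematic sketch of the kind of criterion-based account suggested here, with assumed parameters rather than the authors' fitted model: a single internal evidence sample is compared against a decision criterion for the choice and a higher confidence criterion for the confidence report, so learning can shift reported confidence without changing sensitivity.

    import random

    def trial(signal_present, sensitivity=1.0, decision_c=0.5, confidence_c=1.2):
        # one noisy evidence sample; choice and confidence use separate criteria
        evidence = random.gauss(sensitivity if signal_present else 0.0, 1.0)
        choice = evidence > decision_c               # "target seen"
        high_confidence = evidence > confidence_c    # confidence report
        return choice, high_confidence

    # lowering the confidence criterion after training raises reported confidence
    # on signal trials even though sensitivity is unchanged
    random.seed(0)
    before = sum(trial(True, confidence_c=1.2)[1] for _ in range(10000)) / 10000
    after  = sum(trial(True, confidence_c=0.8)[1] for _ in range(10000)) / 10000
    print(before, after)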
Developmental study of visual perception of handwriting movement: influence of motor competencies?
Bidet-Ildei, Christel; Orliaguet, Jean-Pierre
2008-07-25
This paper investigates the influence of motor competencies on the visual perception of human movements in 6- to 10-year-old children. To this end, we compared the kinematics of actually performed and perceptually preferred handwriting movements. The children performed two tasks: (1) writing the letter e on a digitizer (handwriting task) and (2) adjusting the velocity of an e displayed on a screen so that it corresponded to "their preferred velocity" (perceptive task). In both tasks, the size of the letter (from 3.4 to 54.02 cm) differed on each trial. Results showed that, irrespective of age and task, total movement time conformed to the isochrony principle, i.e., the tendency to maintain a constant movement duration across changes in amplitude. However, for movement speed there was no developmental correspondence between results obtained in the motor and perceptive tasks: in the handwriting task, movement time decreased with age, but no effect of age was observed in the perceptive task. Therefore, the perceptual preference for handwriting movement in children cannot be strictly interpreted in terms of motor-perceptual coupling.
Perceptual Decision Making in Rodents, Monkeys, and Humans.
Hanks, Timothy D; Summerfield, Christopher
2017-01-04
Perceptual decision making is the process by which animals detect, discriminate, and categorize information from the senses. Over the past two decades, understanding how perceptual decisions are made has become a central theme in the neurosciences. Exceptional progress has been made by recording from single neurons in the cortex of the macaque monkey and using computational models from mathematical psychology to relate these neural data to behavior. More recently, however, the range of available techniques and paradigms has dramatically broadened, and researchers have begun to harness new approaches to explore how rodents and humans make perceptual decisions. The results have illustrated some striking convergences with findings from the monkey, but also raised new questions and provided new theoretical insights. In this review, we summarize key findings, and highlight open challenges, for understanding perceptual decision making in rodents, monkeys, and humans. Copyright © 2017 Elsevier Inc. All rights reserved.
Butterfill, Stephen A
2015-11-01
What evidence could bear on questions about whether humans ever perceptually experience any of another's mental states, and how might those questions be made precise enough to test experimentally? This paper focusses on emotions and their expression. It is proposed that research on perceptual experiences of physical properties provides one model for thinking about what evidence concerning expressions of emotion might reveal about perceptual experiences of others' mental states. This proposal motivates consideration of the hypothesis that categorical perception of expressions of emotion occurs, can be facilitated by information about agents' emotions, and gives rise to phenomenal expectations. It is argued that the truth of this hypothesis would support a modest version of the claim that humans sometimes perceptually experience some of another's mental states. Much available evidence is consistent with, but insufficient to establish, the truth of the hypothesis. We are probably not yet in a position to know whether humans ever perceptually experience others' mental states. Copyright © 2015 Elsevier Inc. All rights reserved.
Illusory conjunctions and perceptual grouping in a visual search task in schizophrenia.
Carr, V J; Dewis, S A; Lewin, T J
1998-07-27
This report describes part of a series of experiments, conducted within the framework of feature integration theory, to determine whether patients with schizophrenia show deficits in preattentive processing. Thirty subjects with a DSM-III-R diagnosis of schizophrenia and 30 age-, gender-, and education-matched normal control subjects completed two computerized experimental tasks, a visual search task assessing the frequency of illusory conjunctions (i.e. false perceptions) under conditions of divided attention (Experiment 3) and a task which examined the effects of perceptual grouping on illusory conjunctions (Experiment 4). We also assessed current symptomatology and its relationship to task performance. Contrary to our hypotheses, schizophrenia subjects did not show higher rates of illusory conjunctions, and the influence of perceptual grouping on the frequency of illusory conjunctions was similar for schizophrenia and control subjects. Nonetheless, specific predictions from feature integration theory about the impact of different target types (Experiment 3) and perceptual groups (Experiment 4) on the likelihood of forming an illusory conjunction were strongly supported, thereby confirming the integrity of the experimental procedures. Overall, these studies revealed no firm evidence that schizophrenia is associated with a preattentive abnormality in visual search using stimuli that differ on the basis of physical characteristics.
Perceptual load corresponds with factors known to influence visual search
Roper, Zachary J. J.; Cosman, Joshua D.; Vecera, Shaun P.
2014-01-01
One account of the early versus late selection debate in attention proposes that perceptual load determines the locus of selection. Attention selects stimuli at a late processing level under low-load conditions but selects stimuli at an early level under high-load conditions. Despite the successes of perceptual load theory, a non-circular definition of perceptual load remains elusive. We investigated the factors that influence perceptual load by using manipulations that have been studied extensively in visual search, namely target-distractor similarity and distractor-distractor similarity. Consistent with previous work, search was most efficient when targets and distractors were dissimilar and the displays contained homogeneous distractors; search became less efficient when target-distractor similarity increased irrespective of display heterogeneity. Importantly, we used these same stimuli in a typical perceptual load task that measured attentional spill-over to a task-irrelevant flanker. We found a strong correspondence between search efficiency and perceptual load; stimuli that generated efficient searches produced flanker interference effects, suggesting that such displays involved low perceptual load. Flanker interference effects were reduced in displays that produced less efficient searches. Furthermore, our results demonstrate that search difficulty, as measured by search intercept, has little bearing on perceptual load. These results suggest that perceptual load might be defined in part by well-characterized, continuous factors that influence visual search. PMID:23398258
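Search efficiency in this literature is commonly summarized by the slope and intercept of reaction time against set size; the Python sketch below uses invented numbers purely to show that computation, not the study's data.

    import numpy as np

    set_sizes = np.array([4, 8, 12, 16])
    rt_easy   = np.array([520, 528, 534, 541])  # dissimilar distractors: shallow slope
    rt_hard   = np.array([540, 610, 675, 742])  # similar distractors: steep slope

    for label, rts in [("easy", rt_easy), ("hard", rt_hard)]:
        # slope (ms/item) indexes search efficiency; intercept indexes overall difficulty
        slope, intercept = np.polyfit(set_sizes, rts, 1)
        print(f"{label}: {slope:.1f} ms/item, intercept {intercept:.0f} ms")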
Marsh, Rachel; Alexander, Gerianne M; Packard, Mark G; Zhu, Hongtu; Peterson, Bradley S
2005-01-01
Procedural learning and memory systems likely comprise several skills that are differentially affected by various illnesses of the central nervous system, suggesting their relative functional independence and reliance on differing neural circuits. Gilles de la Tourette syndrome (GTS) is a movement disorder that involves disturbances in the structure and function of the striatum and related circuitry. Recent studies suggest that patients with GTS are impaired in performance of a probabilistic classification task that putatively involves the acquisition of stimulus-response (S-R)-based habits. Assessing the learning of perceptual-motor skills and probabilistic classification in the same samples of GTS and healthy control subjects may help to determine whether these various forms of procedural (habit) learning rely on the same or differing neuroanatomical substrates and whether those substrates are differentially affected in persons with GTS. Therefore, we assessed perceptual-motor skill learning using the pursuit-rotor and mirror tracing tasks in 50 patients with GTS and 55 control subjects who had previously been compared at learning a task of probabilistic classifications. The GTS subjects did not differ from the control subjects in performance of either the pursuit rotor or mirror-tracing tasks, although they were significantly impaired in the acquisition of a probabilistic classification task. In addition, learning on the perceptual-motor tasks was not correlated with habit learning on the classification task in either the GTS or healthy control subjects. These findings suggest that the differing forms of procedural learning are dissociable both functionally and neuroanatomically. The specific deficits in the probabilistic classification form of habit learning in persons with GTS are likely to be a consequence of disturbances in specific corticostriatal circuits, but not the same circuits that subserve the perceptual-motor form of habit learning.
Memory and learning with rapid audiovisual sequences
Keller, Arielle S.; Sekuler, Robert
2015-01-01
We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193
Memory and learning with rapid audiovisual sequences.
Keller, Arielle S; Sekuler, Robert
2015-01-01
We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed.
Camilleri, Rebecca; Pavan, Andrea; Campana, Gianluca
2016-08-01
It has recently been demonstrated that perceptual learning, that is, an improvement in a sensory/perceptual task with practice, can be boosted by concurrent high-frequency transcranial random noise stimulation (hf-tRNS). It has also been shown that perceptual learning can generalize and produce an improvement of visual functions in participants with mild refractive defects. By using three different groups of participants (single-blind study), we tested the efficacy of a short training (8 sessions) using a single Gabor contrast-detection task with concurrent hf-tRNS in comparison with the same training with sham stimulation or hf-tRNS with no concurrent training, in improving visual acuity (VA) and contrast sensitivity (CS) of individuals with uncorrected mild myopia. A short training with a contrast detection task is able to improve VA and CS only if coupled with hf-tRNS, whereas no effect on VA and marginal effects on CS are seen with the sole administration of hf-tRNS. Our results support the idea that, by boosting the rate of perceptual learning via the modulation of neuronal plasticity, hf-tRNS can be successfully used to reduce the duration of the perceptual training and/or to increase its efficacy in producing perceptual learning and generalization to improved VA and CS in individuals with uncorrected mild myopia. Copyright © 2016 Elsevier Ltd. All rights reserved.
Holistic face perception is modulated by experience-dependent perceptual grouping.
Curby, Kim M; Entenman, Robert J; Fleming, Justin T
2016-07-01
What role do general-purpose, experience-sensitive perceptual mechanisms play in producing characteristic features of face perception? We previously demonstrated that different-colored, misaligned framing backgrounds, designed to disrupt perceptual grouping of face parts appearing upon them, disrupt holistic face perception. In the current experiments, a similar part-judgment task with composite faces was performed: face parts appeared in either misaligned, different-colored rectangles or aligned, same-colored rectangles. To investigate whether experience can shape impacts of perceptual grouping on holistic face perception, a pre-task fostered the perception of either (a) the misaligned, differently colored rectangle frames as parts of a single, multicolored polygon or (b) the aligned, same-colored rectangle frames as a single square shape. Faces appearing in the misaligned, differently colored rectangles were processed more holistically by those in the polygon-, compared with the square-, pre-task group. Holistic effects for faces appearing in aligned, same-colored rectangles showed the opposite pattern. Experiment 2, which included a pre-task condition fostering the perception of the aligned, same-colored frames as pairs of independent rectangles, provided converging evidence that experience can modulate impacts of perceptual grouping on holistic face perception. These results are surprising given the proposed impenetrability of holistic face perception and provide insights into the elusive mechanisms underlying holistic perception.
Viola, Vanda; Tosoni, Annalisa; Brizi, Ambra; Salvato, Ilaria; Kruglanski, Arie W; Galati, Gaspare; Mannetti, Lucia
2015-01-01
The aim of this study was to assess the extent to which Need for Cognitive Closure (NCC), an individual-level epistemic motivation, can explain inter-individual variability in the cognitive effort invested on a perceptual decision making task (the random motion task). High levels of NCC are manifested in a preference for clarity, order and structure and a desire for firm and stable knowledge. The study evaluated how NCC moderates the impact of two variables known to increase the amount of cognitive effort invested on a task, namely task ambiguity (i.e., the difficulty of the perceptual discrimination) and outcome relevance (i.e., the monetary gain associated with a correct discrimination). Based on previous work and current design, we assumed that reaction times (RTs) on our motion discrimination task represent a valid index of effort investment. Task ambiguity was associated with increased cognitive effort in participants with low or medium NCC but, interestingly, it did not affect the RTs of participants with high NCC. A different pattern of association was observed for outcome relevance; high outcome relevance increased cognitive effort in participants with moderate or high NCC, but did not affect the performance of low NCC participants. In summary, the performance of individuals with low NCC was affected by task difficulty but not by outcome relevance, whereas individuals with high NCC were influenced by outcome relevance but not by task difficulty; only participants with medium NCC were affected by both task difficulty and outcome relevance. These results suggest that perceptual decision making is influenced by the interaction between context and NCC.
Selective attention to perceptual dimensions and switching between dimensions.
Meiran, Nachshon; Dimov, Eduard; Ganel, Tzvi
2013-02-01
In the present experiments, we asked whether switching attention between perceptual dimensions and selective attention to dimensions are processes that compete for a common resource. Attention to perceptual dimensions is usually studied by requiring participants to ignore a never-relevant dimension. Selection failure (Garner's Interference, GI) is indicated by poorer performance in the filtering condition (when this dimension varies) as compared with baseline (when it is fixed). Switching between perceptual dimensions is usually studied with the task switching paradigm. Here, attention switching was manipulated by using single-task blocks and blocks in which participants switched between tasks or dimensions in reaction to task cues, and attention to dimensions was assessed by including a third, never-relevant dimension that was either fixed or varied randomly. In Experiments 1 (long cue-target interval, CTI) and 2 (short CTI), the tasks involved shape and color and the never-relevant dimension (texture) was chosen to be separable from them. In Experiments 3 (long CTI) and 4 (short CTI), the tasks involved shape and brightness and the never-relevant dimension, saturation, was chosen to be separable from shape and integral with brightness. Task switching did not generate GI but a short CTI did. Thus, switching and filtering generally do not compete for limited central resources unless under tight time pressure. Experiment 3 shows GI in the brightness task but not in the shape task, suggesting that participants switched their attention between brightness and shape when they switched tasks. PsycINFO Database Record (c) 2013 APA, all rights reserved.
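Garner's interference as defined in this abstract reduces to a simple difference score. The sketch below uses invented mean reaction times to show the computation and the reported pattern (GI in the brightness task, little or none in the shape task).

    # GI = filtering-condition RT minus baseline RT, computed per task
    mean_rt = {
        # (task, condition): mean reaction time in ms, invented for illustration
        ("shape",      "baseline"):  610, ("shape",      "filtering"): 615,
        ("brightness", "baseline"):  630, ("brightness", "filtering"): 668,
    }

    for task in ("shape", "brightness"):
        gi = mean_rt[(task, "filtering")] - mean_rt[(task, "baseline")]
        print(f"Garner interference, {task} task: {gi} ms")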
ERIC Educational Resources Information Center
Essock, Edward A.; Sinai, Michael J.; DeFord, Kevin; Hansen, Bruce C.; Srinivasan, Narayanan
2004-01-01
In this study the authors address the issue of how the perceptual usefulness of nonliteral imagery should be evaluated. Perceptual performance with nonliteral imagery of natural scenes obtained at night from infrared and image-intensified sensors and from multisensor fusion methods was assessed to relate performance on 2 basic perceptual tasks to…
Is Hand Selection Modulated by Cognitive-perceptual Load?
Liang, Jiali; Wilkinson, Krista; Sainburg, Robert L
2018-01-15
Previous studies proposed that selecting which hand to use for a reaching task appears to be modulated by a factor described as "task difficulty". However, what features of a task might contribute to greater or lesser "difficulty" in the context of hand selection decisions has yet to be determined. There has been evidence that biomechanical and kinematic factors such as movement smoothness and work can predict patterns of selection across the workspace, suggesting a role of predictive cost analysis in hand-selection. We hypothesize that this type of prediction for hand-selection should recruit substantial cognitive resources and thus should be influenced by cognitive-perceptual loading. We test this hypothesis by assessing the role of cognitive-perceptual loading on hand selection decisions, using a visual search task that presents different levels of difficulty (cognitive-perceptual load), as established in previous studies on overall response time and efficiency of visual search. Although the data are necessarily preliminary due to small sample size, our data suggested an influence of cognitive-perceptual load on hand selection, such that the dominant hand was selected more frequently as cognitive load increased. Interestingly, cognitive-perceptual loading also increased cross-midline reaches with both hands. Because crossing midline is more costly in terms of kinematic and kinetic factors, our findings suggest that cognitive processes are normally engaged to avoid costly actions, and that the choice not-to-cross midline requires cognitive resources. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
A new taxonomy for perceptual filling-in
Weil, Rimona S.; Rees, Geraint
2011-01-01
Perceptual filling-in occurs when structures of the visual system interpolate information across regions of visual space where that information is physically absent. It is a ubiquitous and heterogeneous phenomenon, which takes place in different forms almost every time we view the world around us, such as when objects are occluded by other objects or when they fall behind the blind spot. Yet, to date, there is no clear framework for relating these various forms of perceptual filling-in. Similarly, whether these and other forms of filling-in share common mechanisms is not yet known. Here we present a new taxonomy to categorize the different forms of perceptual filling-in. We then examine experimental evidence for the processes involved in each type of perceptual filling-in. Finally, we use established theories of general surface perception to show how contextualizing filling-in using this framework broadens our understanding of the possible shared mechanisms underlying perceptual filling-in. In particular, we consider the importance of the presence of boundaries in determining the phenomenal experience of perceptual filling-in. PMID:21059374
Central mechanisms of odour object perception
Gottfried, Jay A.
2013-01-01
The stimulus complexity of naturally occurring odours presents unique challenges for central nervous systems that are aiming to internalize the external olfactory landscape. One mechanism by which the brain encodes perceptual representations of behaviourally relevant smells is through the synthesis of different olfactory inputs into a unified perceptual experience — an odour object. Recent evidence indicates that the identification, categorization and discrimination of olfactory stimuli rely on the formation and modulation of odour objects in the piriform cortex. Convergent findings from human and rodent models suggest that distributed piriform ensemble patterns of olfactory qualities and categories are crucial for maintaining the perceptual constancy of ecologically inconstant stimuli. PMID:20700142
Vélez-Coto, María; Rodríguez-Fórtiz, María José; Rodriguez-Almendros, María Luisa; Cabrera-Cuevas, Marcelino; Rodríguez-Domínguez, Carlos; Ruiz-López, Tomás; Burgos-Pulido, Ángeles; Garrido-Jiménez, Inmaculada; Martos-Pérez, Juan
2017-05-01
People with low-functioning ASD and other disabilities often find it difficult to understand the symbols traditionally used in educational materials during the learning process. Technology-based interventions are becoming increasingly common, helping children with cognitive disabilities to perform academic tasks and improve their abilities and knowledge. Such children often find it difficult to perform certain tasks contained in educational materials because they lack necessary skills such as abstract reasoning. In order to help these children, the authors designed and created SIGUEME to train attention and the perceptual and visual cognitive skills required to work with and understand graphic materials and objects. A pre-test/post-test design was implemented to test SIGUEME. Seventy-four children with low-functioning ASD (mean age = 13.47 years, SD = 8.74) were trained with SIGUEME over twenty-five sessions and compared with twenty-eight children (mean age = 12.61 years, SD = 2.85) who had not received any intervention. There was a statistically significant improvement in the experimental group in Attention (W = -5.497, p < 0.001). There was also a significant change in Association and Categorization (W = 2.721, p = 0.007) and in Interaction (W = -3.287, p = 0.001). SIGUEME is an effective tool for improving attention, categorization and interaction in low-functioning children with ASD. It is also a useful and powerful instrument for teachers, parents and educators, as it increases the child's motivation and autonomy. Copyright © 2017 Elsevier Ltd. All rights reserved.
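The pre/post comparisons above are reported as Wilcoxon signed-rank statistics. The following is a minimal sketch of such a paired, non-parametric comparison; the scores, group size, and effect are invented for illustration and do not reproduce the SIGUEME data.

```python
# Minimal sketch of a pre-test/post-test Wilcoxon signed-rank comparison.
# All numbers are hypothetical; this does not reproduce the study's data.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
pre = rng.normal(50, 10, size=74)           # hypothetical pre-training attention scores
post = pre + rng.normal(5, 8, size=74)      # hypothetical post-training scores

stat, p = wilcoxon(post, pre)               # paired, non-parametric test
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")
```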
Late Maturation of Auditory Perceptual Learning
ERIC Educational Resources Information Center
Huyck, Julia Jones; Wright, Beverly A.
2011-01-01
Adults can improve their performance on many perceptual tasks with training, but when does the response to training become mature? To investigate this question, we trained 11-year-olds, 14-year-olds and adults on a basic auditory task (temporal-interval discrimination) using a multiple-session training regimen known to be effective for adults. The…
Responding to a Challenging Perceptual-Motor Task as a Function of Level of Experiential Avoidance
ERIC Educational Resources Information Center
Zettle, Robert D.; Petersen, Connie L.; Hocker, Tanya R.; Provines, Jessica L.
2007-01-01
Participants displaying high versus low levels of experiential avoidance as assessed by the Acceptance and Action Questionnaire (Hayes, Strosahl, et al., 2004) were compared in their reactions to and performance on a challenging perceptual-motor task. Participants were offered incentives for sorting colored straws into different colored containers…
Associative priming in a masked perceptual identification task: evidence for automatic processes.
Pecher, Diane; Zeelenberg, René; Raaijmakers, Jeroen G W
2002-10-01
Two experiments investigated the influence of automatic and strategic processes on associative priming effects in a perceptual identification task in which prime-target pairs are briefly presented and masked. In this paradigm, priming is defined as a higher percentage of correctly identified targets for related pairs than for unrelated pairs. In Experiment 1, priming was obtained for mediated word pairs. This mediated priming effect was affected neither by the presence of direct associations nor by the presentation time of the primes, indicating that automatic priming effects play a role in perceptual identification. Experiment 2 showed that the priming effect was not affected by the proportion (.90 vs. .10) of related pairs if primes were presented briefly to prevent their identification. However, a large proportion effect was found when primes were presented for 1000 ms so that they were clearly visible. These results indicate that priming in a masked perceptual identification task is the result of automatic processes and is not affected by strategies. The present paradigm provides a valuable alternative to more commonly used tasks such as lexical decision.
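In this paradigm, the priming effect is simply the difference in identification accuracy between related and unrelated prime-target pairs. The sketch below illustrates that computation on invented trial outcomes, not the published data.

```python
# Priming effect in a masked perceptual identification task:
# proportion of targets identified correctly on related trials minus
# the proportion on unrelated trials. Trial outcomes are invented.
related_correct = [1, 1, 0, 1, 1, 0, 1, 1]     # 1 = target identified correctly
unrelated_correct = [1, 0, 0, 1, 0, 0, 1, 0]

p_related = sum(related_correct) / len(related_correct)
p_unrelated = sum(unrelated_correct) / len(unrelated_correct)
print(f"priming effect = {p_related - p_unrelated:.2f}")
```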
The involvement of central attention in visual search is determined by task demands.
Han, Suk Won
2017-04-01
Attention, the mechanism by which a subset of sensory inputs is prioritized over others, operates at multiple processing stages. Specifically, attention enhances weak sensory signals at the perceptual stage, while it serves to select appropriate responses or consolidate sensory representations into short-term memory at the central stage. This study investigated the independence of and interaction between perceptual and central attention. To do so, I used a dual-task paradigm, pairing a four-alternative choice task with a visual search task. The results showed that central attention for response selection was engaged in perceptual processing for visual search when the number of search items increased, thereby increasing the demand for serial allocation of focal attention. By contrast, central attention and perceptual attention remained independent as long as the demand for serial shifting of focal attention remained constant; decreasing stimulus contrast or increasing the set size of a parallel search did not evoke the involvement of central attention in visual search. These results suggest that the nature of the concurrent visual search process plays a crucial role in the functional interaction between the two different types of attention.
Perceptual learning modifies the functional specializations of visual cortical areas.
Chen, Nihong; Cai, Peng; Zhou, Tiangang; Thompson, Benjamin; Fang, Fang
2016-05-17
Training can improve performance of perceptual tasks. This phenomenon, known as perceptual learning, is strongest for the trained task and stimulus, leading to a widely accepted assumption that the associated neuronal plasticity is restricted to brain circuits that mediate performance of the trained task. Nevertheless, learning does transfer to other tasks and stimuli, implying the presence of more widespread plasticity. Here, we trained human subjects to discriminate the direction of coherent motion stimuli. The behavioral learning effect substantially transferred to noisy motion stimuli. We used transcranial magnetic stimulation (TMS) and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms underlying the transfer of learning. The TMS experiment revealed dissociable, causal contributions of V3A (one of the visual areas in the extrastriate visual cortex) and MT+ (middle temporal/medial superior temporal cortex) to coherent and noisy motion processing. Surprisingly, the contribution of MT+ to noisy motion processing was replaced by V3A after perceptual training. The fMRI experiment complemented and corroborated the TMS finding. Multivariate pattern analysis showed that, before training, among visual cortical areas, coherent and noisy motion was decoded most accurately in V3A and MT+, respectively. After training, both kinds of motion were decoded most accurately in V3A. Our findings demonstrate that the effects of perceptual learning extend far beyond the retuning of specific neural populations for the trained stimuli. Learning could dramatically modify the inherent functional specializations of visual cortical areas and dynamically reweight their contributions to perceptual decisions based on their representational qualities. These neural changes might serve as the neural substrate for the transfer of perceptual learning.
Perceptual Differences between Hippies and College Students
ERIC Educational Resources Information Center
Brothers, Robert; Gaines, Rosslyn
1973-01-01
Perceptual differences were investigated between 50 college students who were non-drug users and 50 hippies who used LSD. The major hypothesis was that hippies would score differently from college students in a specific direction on each of the perceptual tasks. (Author)
Perceptual processing during trauma, priming and the development of intrusive memories
Sündermann, Oliver; Hauschildt, Marit; Ehlers, Anke
2013-01-01
Background: Intrusive reexperiencing in posttraumatic stress disorder (PTSD) is commonly triggered by stimuli with perceptual similarity to those present during the trauma. Information processing theories suggest that perceptual processing during the trauma and enhanced perceptual priming contribute to the easy triggering of intrusive memories by these cues. Methods: Healthy volunteers (N = 51) watched neutral and trauma picture stories on a computer screen. Neutral objects that were unrelated to the content of the stories briefly appeared in the interval between the pictures. Dissociation and data-driven processing (as indicators of perceptual processing) and state anxiety during the stories were assessed with self-report questionnaires. After filler tasks, participants completed a blurred object identification task to assess priming and a recognition memory task. Intrusive memories were assessed with telephone interviews 2 weeks and 3 months later. Results: Neutral objects were more strongly primed if they occurred in the context of trauma stories than if they occurred during neutral stories, although the effect size was only moderate (ηp² = .08) and only significant when trauma stories were presented first. Regardless of story order, enhanced perceptual priming predicted intrusive memories at 2-week follow-up (N = 51), but not at 3 months (n = 40). Data-driven processing, dissociation and anxiety increases during the trauma stories also predicted intrusive memories. Enhanced perceptual priming and data-driven processing were associated with lower verbal intelligence. Limitations: It is unclear to what extent these findings generalize to real-life traumatic events and whether they are specific to negative emotional events. Conclusions: The results provide some support for the role of perceptual processing and perceptual priming in reexperiencing symptoms. PMID:23207970
Cong, Lin-Juan; Wang, Ru-Jie; Yu, Cong; Zhang, Jun-Yun
2016-01-01
Visual perceptual learning is known to be specific to the trained retinal location, feature, and task. However, location and feature specificity can be eliminated by double-training or TPE training protocols, in which observers receive additional exposure to the transfer location or feature dimension via an irrelevant task besides the primary learning task. Here we tested whether these new training protocols could even make learning transfer across different tasks involving discrimination of basic visual features (e.g., orientation and contrast). Observers practiced a near-threshold orientation (or contrast) discrimination task. Following a TPE training protocol, they also received exposure to the transfer task via performing suprathreshold contrast (or orientation) discrimination in alternating blocks of trials in the same sessions. The results showed no evidence for significant learning transfer to the untrained near-threshold contrast (or orientation) discrimination task after discounting the pretest effects and the suprathreshold practice effects. These results thus do not support a hypothetical task-independent component in perceptual learning of basic visual features. They also set the boundary of the new training protocols in their capability to enable learning transfer. PMID:26873777
Oculomotor responses and visuospatial perceptual judgments compete for common limited resources
Tibber, Marc S.; Grant, Simon; Morgan, Michael J.
2010-01-01
While there is evidence for multiple spatial and attentional maps in the brain it is not clear to what extent visuoperceptual and oculomotor tasks rely on common neural representations and attentional mechanisms. Using a dual-task interference paradigm we tested the hypothesis that eye movements and perceptual judgments made to simultaneously presented visuospatial information compete for shared limited resources. Observers undertook judgments of stimulus collinearity (perceptual extrapolation) using a pointer and Gabor patch and/or performed saccades to a peripheral dot target while their eye movements were recorded. In addition, observers performed a non-spatial control task (contrast discrimination), matched for task difficulty and stimulus structure, which on the basis of previous studies was expected to represent a lesser load on putative shared resources. Greater mutual interference was indeed found between the saccade and extrapolation task pair than between the saccade and contrast discrimination task pair. These data are consistent with visuoperceptual and oculomotor responses competing for common limited resources as well as spatial tasks incurring a relatively high attentional cost. PMID:20053112
Using spoken words to guide open-ended category formation.
Chauhan, Aneesh; Seabra Lopes, Luís
2011-11-01
Naming is a powerful cognitive tool that facilitates categorization by forming an association between words and their referents. There is evidence in child development literature that strong links exist between early word-learning and conceptual development. A growing view is also emerging that language is a cultural product created and acquired through social interactions. Inspired by these studies, this paper presents a novel learning architecture for category formation and vocabulary acquisition in robots through active interaction with humans. This architecture is open-ended and is capable of acquiring new categories and category names incrementally. The process can be compared to language grounding in children at single-word stage. The robot is embodied with visual and auditory sensors for world perception. A human instructor uses speech to teach the robot the names of the objects present in a visually shared environment. The robot uses its perceptual input to ground these spoken words and dynamically form/organize category descriptions in order to achieve better categorization. To evaluate the learning system at word-learning and category formation tasks, two experiments were conducted using a simple language game involving naming and corrective feedback actions from the human user. The obtained results are presented and discussed in detail.
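The abstract above describes an open-ended architecture that grounds spoken category names in perceptual input and refines categories through naming and corrective feedback. The sketch below is not the authors' system; it only illustrates the general idea with a hypothetical nearest-prototype learner over numeric feature vectors.

```python
# Illustrative open-ended category learner (not the paper's architecture):
# each category keeps a running prototype; teaching adds or updates a category,
# and prediction returns the name of the nearest prototype.
import numpy as np

class PrototypeLearner:
    def __init__(self):
        self.prototypes = {}   # name -> (mean feature vector, count)

    def teach(self, name, features):
        """A human names an object; update (or create) that category's prototype."""
        features = np.asarray(features, dtype=float)
        if name in self.prototypes:
            mean, n = self.prototypes[name]
            self.prototypes[name] = ((mean * n + features) / (n + 1), n + 1)
        else:
            self.prototypes[name] = (features, 1)

    def predict(self, features):
        """Return the known category name whose prototype is closest."""
        features = np.asarray(features, dtype=float)
        if not self.prototypes:
            return None
        return min(self.prototypes,
                   key=lambda name: np.linalg.norm(self.prototypes[name][0] - features))

learner = PrototypeLearner()
learner.teach("cup", [0.9, 0.1])
learner.teach("ball", [0.1, 0.8])
print(learner.predict([0.8, 0.2]))   # -> "cup"
```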
van Weelden, Lisanne; Schilperoord, Joost; Swerts, Marc; Pecher, Diane
2015-01-01
Visual information contributes fundamentally to the process of object categorization. The present study investigated whether the degree of activation of visual information in this process is dependent on the contextual relevance of this information. We used the Proactive Interference (PI-release) paradigm. In four experiments, we manipulated the information by which objects could be categorized and subsequently be retrieved from memory. The pattern of PI-release showed that if objects could be stored and retrieved both by (non-perceptual) semantic and (perceptual) shape information, then shape information was overruled by semantic information. If, however, semantic information could not be (satisfactorily) used to store and retrieve objects, then objects were stored in memory in terms of their shape. The latter effect was found to be strongest for objects from identical semantic categories.
Li, Xuan; Allen, Philip A; Lien, Mei-Ching; Yamamoto, Naohide
2017-02-01
Previous studies suggest that perceptual learning, the acquisition of a new skill through practice, stimulates brain plasticity and enhances performance (Fiorentini & Berardi, 1981). The present study aimed to determine (a) whether perceptual learning can be used to compensate for age-related declines in perceptual abilities, and (b) whether the effect of perceptual learning can be transferred to untrained stimuli and subsequently improve the capacity of visual working memory (VWM). We tested both healthy younger and older adults over a 3-day training protocol using an orientation discrimination task. A matching-to-sample psychophysical method was used to measure improvements in orientation discrimination thresholds and reaction times (RTs). Results showed that both younger and older adults improved discrimination thresholds and RTs with similar learning rates and magnitudes. Furthermore, older adults exhibited a generalization of improvements to 3 untrained orientations that were close to the training orientation, and they benefited more than younger adults from the perceptual learning by transferring learning effects to VWM performance. We conclude that through perceptual learning, older adults can partially counteract age-related perceptual declines, generalize the learning effect to other stimulus conditions, and further overcome the limitation of using VWM capacity to perform a perceptual task. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Shape equivalence under perspective and projective transformations.
Wagemans, J; Lamote, C; Van Gool, L
1997-06-01
When a planar shape is viewed obliquely, it is deformed by a perspective deformation. If the visual system were to pick up geometrical invariants from such projections, these would necessarily be invariant under the wider class of projective transformations. To what extent can the visual system tell the difference between perspective and nonperspective but still projective deformations of shapes? To investigate this, observers were asked to indicate which of two test patterns most resembled a standard pattern. The test patterns were related to the standard pattern by a perspective or projective transformation, or they were completely unrelated. Performance was slightly better in a matching task with perspective and unrelated test patterns (92.6%) than in a projective-random matching task (88.8%). In a direct comparison, participants had a small preference (58.5%) for the perspectively related patterns over the projectively related ones. Preferences were based on the values of the transformation parameters (slant and shear). Hence, perspective and projective transformations yielded perceptual differences, but they were not treated in a categorically different manner by the human visual system.
Anterior Temporal Lobe Morphometry Predicts Categorization Ability.
Garcin, Béatrice; Urbanski, Marika; Thiebaut de Schotten, Michel; Levy, Richard; Volle, Emmanuelle
2018-01-01
Categorization is the mental operation by which the brain classifies objects and events. It is classically assessed using semantic and non-semantic matching or sorting tasks. These tasks show high variability in performance across healthy controls, and the cerebral bases supporting this variability remain unknown. Here we used voxel-based morphometry to explore the relationships between semantic and shape categorization tasks and brain morphometric differences in 50 controls. We found a significant correlation between categorization performance and the volume of the gray matter in the right anterior middle and inferior temporal gyri. Semantic categorization tasks were associated with more rostral temporal regions than shape categorization tasks. A significant relationship was also shown between white matter volume in the right temporal lobe and performance in the semantic tasks. Tractography revealed that this white matter region involved several projection and association fibers, including the arcuate fasciculus, inferior fronto-occipital fasciculus, uncinate fasciculus, and inferior longitudinal fasciculus. These results suggest that categorization abilities are supported by the anterior portion of the right temporal lobe and its interaction with other areas.
An information theory account of late frontoparietal ERP positivities in cognitive control.
Barceló, Francisco; Cooper, Patrick S
2018-03-01
ERP research on task switching has revealed distinct transient and sustained positive waveforms (latency circa 300-900 ms) while shifting task rules or stimulus-response (S-R) mappings. However, it remains unclear whether such switch-related positivities show similar scalp topography and index context-updating mechanisms akin to those proposed for domain-general (i.e., classic P300) positivities in many task domains. To examine this question, ERPs were recorded from 31 young adults (18-30 years) while they were intermittently cued to switch or repeat their perceptual categorization of Gabor gratings varying in color and thickness (switch task), or else they performed two visually identical control tasks (go/no-go and oddball). Our task cueing paradigm examined two temporally distinct stages of proactive rule updating and reactive rule execution. A simple information theory model helped us gauge cognitive demands under distinct temporal and task contexts in terms of low-level S-R pathways and higher-order rule updating operations. Task demands modulated both domain-general positivities (indexed by the classic oddball P3) and switch positivities, the latter indexed by a cue-locked late positive complex and a sustained positivity following task transitions. Topographic scalp analyses confirmed subtle yet significant split-second changes in the configuration of neural sources for both domain-general P3s and switch positivities as a function of both the temporal and task context. These findings partly meet predictions from information estimates, and are compatible with a family of P3-like potentials indexing functionally distinct neural operations within a common frontoparietal "multiple demand" system during the preparation and execution of simple task rules. © 2016 Society for Psychophysiological Research.
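As a generic illustration of the kind of information estimate mentioned above (not the authors' model), Shannon surprise, -log2 p(cue), quantifies the demand a switch or repeat cue places on rule updating; the cue probabilities below are assumptions.

```python
# Generic Shannon information estimates for task cues.
# Cue probabilities are hypothetical, not taken from the reported paradigm.
import math

cue_probabilities = {"repeat": 0.67, "switch": 0.33}

for cue, p in cue_probabilities.items():
    surprise = -math.log2(p)     # information conveyed by one cue, in bits
    print(f"{cue}: p = {p:.2f}, surprise = {surprise:.2f} bits")

entropy = -sum(p * math.log2(p) for p in cue_probabilities.values())
print(f"average information per cue (entropy) = {entropy:.2f} bits")
```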
NASA Technical Reports Server (NTRS)
Clarke, M. M.; Garin, J.
1981-01-01
Operator perceptual-cognitive styles were identified as predictors of remote task performance. The remote tasks examined require the use of servo-controlled master/slave manipulators and closed-circuit television for teleoperator repair and maintenance of nuclear fuel recycling systems. A useful procedure for identifying such perceptual styles is described.
Perceptual load corresponds with factors known to influence visual search.
Roper, Zachary J J; Cosman, Joshua D; Vecera, Shaun P
2013-10-01
One account of the early versus late selection debate in attention proposes that perceptual load determines the locus of selection. Attention selects stimuli at a late processing level under low-load conditions but selects stimuli at an early level under high-load conditions. Despite the successes of perceptual load theory, a noncircular definition of perceptual load remains elusive. We investigated the factors that influence perceptual load by using manipulations that have been studied extensively in visual search, namely target-distractor similarity and distractor-distractor similarity. Consistent with previous work, search was most efficient when targets and distractors were dissimilar and the displays contained homogeneous distractors; search became less efficient when target-distractor similarity increased irrespective of display heterogeneity. Importantly, we used these same stimuli in a typical perceptual load task that measured attentional spillover to a task-irrelevant flanker. We found a strong correspondence between search efficiency and perceptual load; stimuli that generated efficient searches produced flanker interference effects, suggesting that such displays involved low perceptual load. Flanker interference effects were reduced in displays that produced less efficient searches. Furthermore, our results demonstrate that search difficulty, as measured by search intercept, has little bearing on perceptual load. We conclude that rather than be arbitrarily defined, perceptual load might be defined by well-characterized, continuous factors that influence visual search. PsycINFO Database Record (c) 2013 APA, all rights reserved.
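Search efficiency in studies like this one is conventionally summarized by the slope of the response-time-by-set-size function, with the intercept reflecting set-size-independent difficulty. A minimal sketch, using invented mean response times:

```python
# Search slope (ms/item) and intercept from mean response times across set sizes.
# The RT values are invented for illustration only.
import numpy as np

set_sizes = np.array([4, 8, 12, 16])
mean_rts = np.array([520.0, 560.0, 610.0, 650.0])   # hypothetical mean RTs in ms

slope, intercept = np.polyfit(set_sizes, mean_rts, 1)
print(f"search slope = {slope:.1f} ms/item (efficiency), intercept = {intercept:.0f} ms")
```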
Perceptual context and individual differences in the language proficiency of preschool children.
Banai, Karen; Yifat, Rachel
2016-02-01
Although the contribution of perceptual processes to language skills during infancy is well recognized, the role of perception in linguistic processing beyond infancy is not well understood. In the experiments reported here, we asked whether manipulating the perceptual context in which stimuli are presented across trials influences how preschool children perform visual (shape-size identification; Experiment 1) and auditory (syllable identification; Experiment 2) tasks. Another goal was to determine whether the sensitivity to perceptual context can explain part of the variance in oral language skills in typically developing preschool children. Perceptual context was manipulated by changing the relative frequency with which target visual (Experiment 1) and auditory (Experiment 2) stimuli were presented in arrays of fixed size, and identification of the target stimuli was tested. Oral language skills were assessed using vocabulary, word definition, and phonological awareness tasks. Changes in perceptual context influenced the performance of the majority of children on both identification tasks. Sensitivity to perceptual context accounted for 7% to 15% of the variance in language scores. We suggest that context effects are an outcome of a statistical learning process. Therefore, the current findings demonstrate that statistical learning can facilitate both visual and auditory identification processes in preschool children. Furthermore, consistent with previous findings in infants and in older children and adults, individual differences in statistical learning were found to be associated with individual differences in language skills of preschool children. Copyright © 2015 Elsevier Inc. All rights reserved.
The Relationship Between Perceptual Egocentrism and Field Dependence in Early Childhood
ERIC Educational Resources Information Center
Bowd, Alan D.
1975-01-01
Kindergarten children were administered tests of inductive reasoning and field dependence and a series of perceptual egocentrism tasks. Results confirm a positive relation between field dependence and perceptual egocentrism; they also question the validity of the field-dependence construct in early childhood. (GO)
Enhancement and suppression in the visual field under perceptual load.
Parks, Nathan A; Beck, Diane M; Kramer, Arthur F
2013-01-01
The perceptual load theory of attention proposes that the degree to which visual distractors are processed is a function of the attentional demands of a task: greater demands increase the filtering of irrelevant distractors. The spatial configuration of such filtering is unknown. Here, we used steady-state visual evoked potentials (SSVEPs) in conjunction with time-domain event-related potentials (ERPs) to investigate the distribution of load-induced distractor suppression and task-relevant enhancement in the visual field. The electroencephalogram (EEG) was recorded while subjects performed a foveal go/no-go task that varied in perceptual load. Load-dependent distractor suppression was assessed by presenting a contrast-reversing ring at one of three eccentricities (2, 6, or 11°) during performance of the go/no-go task. Rings contrast-reversed at 8.3 Hz, allowing load-dependent changes in distractor processing to be tracked in the frequency domain. ERPs were calculated to the onset of stimuli in the load task to examine load-dependent modulation of task-relevant processing. Results showed that the amplitude of the distractor SSVEP (8.3 Hz) was attenuated under high perceptual load (relative to low load) at the most proximal (2°) eccentricity but not at more eccentric locations (6 or 11°). Task-relevant ERPs revealed a significant increase in N1 amplitude under high load. These results are consistent with a center-surround configuration of load-induced enhancement and suppression in the visual field.
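Frequency tagging of this kind isolates the distractor response by reading out EEG amplitude at the flicker frequency (8.3 Hz here). Below is a minimal sketch on simulated data; the sampling rate, duration, and signal composition are assumptions for illustration only.

```python
# Read out SSVEP amplitude at a tagging frequency from a (simulated) EEG trace.
# Sampling rate, duration, and signal composition are illustrative assumptions.
import numpy as np

fs = 250.0                      # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)    # 10 s of data
tag_freq = 8.3                  # distractor flicker frequency (Hz)

# Simulated signal: an 8.3 Hz SSVEP component plus noise
signal = 2.0 * np.sin(2 * np.pi * tag_freq * t) \
         + np.random.default_rng(1).normal(0, 1, t.size)

spectrum = np.abs(np.fft.rfft(signal)) / t.size * 2   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

amp = spectrum[np.argmin(np.abs(freqs - tag_freq))]   # amplitude at the nearest FFT bin
print(f"SSVEP amplitude near {tag_freq} Hz: {amp:.2f} (arbitrary units)")
```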
Adaptive History Biases Result from Confidence-weighted Accumulation of Past Choices.
Braun, Anke; Urai, Anne E; Donner, Tobias H
2018-01-25
Perceptual decision-making is biased by previous events, including the history of preceding choices: Observers tend to repeat (or alternate) their judgments of the sensory environment more often than expected by chance. Computational models postulate that these so-called choice history biases result from the accumulation of internal decision signals across trials. Here, we provide psychophysical evidence for such a mechanism and its adaptive utility. Male and female human observers performed different variants of a challenging visual motion discrimination task near psychophysical threshold. In a first experiment, we decoupled categorical perceptual choices and motor responses on a trial-by-trial basis. Choice history bias was explained by previous perceptual choices, not motor responses, highlighting the importance of internal decision signals in action-independent formats. In a second experiment, observers performed the task in stimulus environments containing different levels of auto-correlation and providing no external feedback about choice correctness. Despite performing under overall high levels of uncertainty, observers adjusted both the strength and the sign of their choice history biases to these environments. When stimulus sequences were dominated by either repetitions or alternations, the individual degree of this adjustment of history bias was about as good a predictor of individual performance as individual perceptual sensitivity. The history bias adjustment scaled with two proxies for observers' confidence about their previous choices (accuracy and reaction time). Taken together, our results are consistent with the idea that action-independent, confidence-modulated decision variables are accumulated across choices in a flexible manner that depends on decision-makers' model of their environment. Significance statement: Decisions based on sensory input are often influenced by the history of one's preceding choices, manifesting as a bias to systematically repeat (or alternate) choices. We here provide support for the idea that such choice history biases arise from the context-dependent accumulation of a quantity referred to as the decision variable: the variable's sign dictates the choice and its magnitude the confidence about choice correctness. We show that choices are accumulated in an action-independent format and a context-dependent manner, weighted by the confidence about their correctness. This confidence-weighted accumulation of choices enables decision-makers to flexibly adjust their behavior to different sensory environments. The bias adjustment can be as important for optimizing performance as one's sensitivity to the momentary sensory input. Copyright © 2018 Braun et al.
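A first-pass index of the choice history bias described above is how far the probability of repeating the previous choice departs from 0.5. The sketch below computes that index on an invented binary choice sequence; it is not the authors' accumulation model.

```python
# Repetition probability as a simple index of choice history bias.
# The binary choice sequence is invented for illustration.
import numpy as np

choices = np.array([0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0])  # hypothetical choices

repeats = choices[1:] == choices[:-1]
p_repeat = repeats.mean()
print(f"P(repeat previous choice) = {p_repeat:.2f}  (> 0.5 indicates a repetition bias)")
```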
Macdonald, James S P; Lavie, Nilli
2008-10-01
Although the perceptual load theory of attention has stimulated a great deal of research, evidence for the role of perceptual load in determining perception has typically relied on indirect measures that infer perception from distractor effects on reaction times or neural activity (see N. Lavie, 2005, for a review). Here we varied the level of perceptual load in a letter-search task and assessed its effect on the conscious perception of a search-irrelevant shape stimulus appearing in the periphery, using a direct measure of awareness (present/absent reports). Detection sensitivity (d') was consistently reduced with high, compared to low, perceptual load but was unaffected by the level of working memory load. Because alternative accounts in terms of expectation, memory, response bias, and goal neglect due to the more strenuous high-load task were ruled out, these experiments clearly demonstrate that high perceptual load determines conscious perception, impairing the ability to merely detect the presence of a stimulus, a phenomenon of load-induced blindness.
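Detection sensitivity (d') for present/absent reports is the difference between the z-transformed hit and false-alarm rates. A minimal sketch with hypothetical rates (not the published values):

```python
# d' from hit and false-alarm rates in a present/absent detection task.
# Rates below are hypothetical, not the published values.
from scipy.stats import norm

hit_rate = 0.80          # P("present" | stimulus present), assumed
fa_rate = 0.15           # P("present" | stimulus absent), assumed

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
print(f"d' = {d_prime:.2f}")
```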
Park, Hame; Lueckmann, Jan-Matthis; von Kriegstein, Katharina; Bitzer, Sebastian; Kiebel, Stefan J.
2016-01-01
Decisions in everyday life are prone to error. Standard models typically assume that errors during perceptual decisions are due to noise. However, it is unclear how noise in the sensory input affects the decision. Here we show that there are experimental tasks for which one can analyse the exact spatio-temporal details of dynamic sensory noise and better understand variability in human perceptual decisions. Using a new experimental visual tracking task and a novel Bayesian decision-making model, we found that the spatio-temporal noise fluctuations in the input of single trials explain a significant part of the observed responses. Our results show that modelling the precise internal representations of human participants helps predict when perceptual decisions go wrong. Furthermore, by modelling the stimuli precisely at the single-trial level, we were able to identify the underlying mechanism of perceptual decision making in more detail than standard models do. PMID:26752272
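As a generic illustration of Bayesian decision making on noisy sensory samples (not the specific tracking model used in the study), the sketch below accumulates the log-likelihood ratio between two stimulus hypotheses across samples and reports the posterior for one of them; the means and noise level are assumptions.

```python
# Generic sequential Bayesian (log-likelihood ratio) accumulation for a
# two-alternative perceptual decision; parameters are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
mu_a, mu_b, sigma = 0.2, -0.2, 1.0          # hypothesized stimulus means and sensory noise

samples = rng.normal(mu_a, sigma, size=50)  # noisy samples generated under hypothesis A

# Accumulate evidence: log likelihood ratio of hypothesis A vs. B, summed over samples
log_lr = np.sum(norm.logpdf(samples, mu_a, sigma) - norm.logpdf(samples, mu_b, sigma))
posterior_a = 1 / (1 + np.exp(-log_lr))     # posterior P(A) under a flat prior

print(f"log likelihood ratio = {log_lr:.2f}, P(A | data) = {posterior_a:.3f}")
print("decision:", "A" if log_lr > 0 else "B")
```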
Wang, Rui; Zhang, Jun-Yun; Klein, Stanley A.; Levi, Dennis M.; Yu, Cong
2014-01-01
Perceptual learning, a process in which training improves visual discrimination, is often specific to the trained retinal location, and this location specificity is frequently regarded as an indication of neural plasticity in the retinotopic visual cortex. However, our previous studies have shown that “double training” enables location-specific perceptual learning, such as Vernier learning, to completely transfer to a new location where an irrelevant task is practiced. Here we show that Vernier learning can be actuated by less location-specific orientation or motion-direction learning to transfer to completely untrained retinal locations. This “piggybacking” effect occurs even if both tasks are trained at the same retinal location. However, piggybacking does not occur when the Vernier task is paired with a more location-specific contrast-discrimination task. This previously unknown complexity challenges the current understanding of perceptual learning and its specificity/transfer. Orientation and motion-direction learning, but not contrast and Vernier learning, appears to activate a global process that allows learning transfer to untrained locations. Moreover, when paired with orientation or motion-direction learning, Vernier learning may be “piggybacked” by the activated global process to transfer to other untrained retinal locations. How this task-specific global activation process is achieved is as yet unknown. PMID:25398974
Zhou, Zhe Charles; Yu, Chunxiu; Sellers, Kristin K.; Fröhlich, Flavio
2016-01-01
Visual discrimination requires sensory processing followed by a perceptual decision. Despite a growing understanding of visual areas in this behavior, it is unclear what role top-down signals from prefrontal cortex play, in particular as a function of perceptual difficulty. To address this gap, we investigated how neurons in dorso-lateral frontal cortex (dl-FC) of freely-moving ferrets encode task variables in a two-alternative forced choice visual discrimination task with high- and low-contrast visual input. About two-thirds of all recorded neurons in dl-FC were modulated by at least one of the two task variables, task difficulty and target location. More neurons in dl-FC preferred the hard trials; no such preference bias was found for target location. In individual neurons, this preference for specific task types was limited to brief epochs. Finally, optogenetic stimulation confirmed the functional role of the activity in dl-FC before target touch; suppression of activity in pyramidal neurons with the ArchT silencing opsin resulted in a decrease in reaction time to touch the target but not to retrieve reward. In conclusion, dl-FC activity is differentially recruited for high perceptual difficulty in the freely-moving ferret and the resulting signal may provide top-down behavioral inhibition. PMID:27025995
Cardillo, Ramona; Mammarella, Irene C; Garcia, Ricardo Basso; Cornoldi, Cesare
2017-05-01
Visuo-constructive and perceptual abilities have been poorly investigated in children with learning disabilities. The present study focused on local or global visuospatial processing in children with nonverbal learning disability (NLD) and dyslexia compared with typically developing (TD) controls. Participants were presented with a modified block design task (BDT), in both a typical visuo-constructive version that involves reconstructing figures from blocks, and a perceptual version in which respondents must rapidly match unfragmented figures with a corresponding fragmented target figure. The figures used in the tasks were devised by manipulating two variables, perceptual cohesiveness and task uncertainty, stimulating global or local processes. Our results confirmed that children with NLD had more problems with the visuo-constructive version of the task, whereas those with dyslexia showed only a slight difficulty with the visuo-constructive version but had greater difficulty with the perceptual version, especially in terms of response times. These findings are interpreted in relation to the slower visual processing speed of children with dyslexia, and to the visuo-constructive problems and difficulty in flexibly using experienced global vs. local processes of children with NLD. The clinical and educational implications of these findings are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
How does reading direction modulate perceptual asymmetry effects?
Chung, Harry K S; Liu, Joyce Y W; Hsiao, Janet H
2017-08-01
Left-side bias effects refer to a bias towards the left side of the stimulus/space in perceptual/visuospatial judgments, and are argued to reflect dominance of right hemisphere processing. It remains unclear whether reading direction can also account for the bias effect. Previous studies comparing readers of languages read from left to right with readers of languages read from right to left (e.g., French vs. Hebrew) have obtained inconsistent results. As a language that can be read from left to right or from right to left, Chinese provides a unique opportunity for a within-culture examination of reading direction effects. Chinese participants performed a perceptual judgment task (with both face and Chinese character stimuli; Experiment 1) and two visuospatial attention tasks (the greyscales and line bisection tasks; Experiment 2) once before and once after a reading task, in which they read Chinese passages either from left to right or from right to left for about 20 min. After reading from right to left, participants showed significantly reduced left-side bias in Chinese character perceptual judgments but not in the other three tasks. This effect suggests that the influence of reading direction on different forms of left-side bias may differ, and its modulation may be stimulus-specific.
Hindi Attar, Catherine; Müller, Matthias M
2012-01-01
A number of studies have shown that emotionally arousing stimuli are preferentially processed in the human brain. Whether or not this preference persists under increased perceptual load associated with a task at hand remains an open question. Here we manipulated two possible determinants of the attentional selection process, perceptual load associated with a foreground task and the emotional valence of concurrently presented task-irrelevant distractors. As a direct measure of sustained attentional resource allocation in early visual cortex we used steady-state visual evoked potentials (SSVEPs) elicited by distinct flicker frequencies of task and distractor stimuli. Subjects either performed a detection (low load) or discrimination (high load) task at a centrally presented symbol stream that flickered at 8.6 Hz while task-irrelevant neutral or unpleasant pictures from the International Affective Picture System (IAPS) flickered at a frequency of 12 Hz in the background of the stream. As reflected in target detection rates and SSVEP amplitudes to both task and distractor stimuli, unpleasant relative to neutral background pictures more strongly withdrew processing resources from the foreground task. Importantly, this finding was unaffected by the factor 'load' which turned out to be a weak modulator of attentional processing in human visual cortex.
Qi, Geqi; Li, Xiujun; Yan, Tianyi; Wang, Bin; Yang, Jiajia; Wu, Jinglong; Guo, Qiyong
2014-04-30
Visual word expertise is typically associated with enhanced ventral occipito-temporal (vOT) cortex activation in response to written words. A previous study utilized a passive viewing task and found that the vOT response to written words was significantly stronger in literate than in illiterate subjects. However, recent neuroimaging findings have suggested that vOT response properties are highly dependent upon the task demand. Thus, it is unknown whether literate adults would show a stronger vOT response to written words than illiterate adults during other cognitive tasks, such as perceptual matching. We addressed this issue by comparing vOT activations between literate and illiterate adults during a Chinese character and simple figure matching task. Unlike passive viewing, a perceptual matching task requires active shape comparison, thereby minimizing automatic word processing bias. We found that although the literate group performed better at the Chinese character matching task, the two subject groups showed similarly strong vOT responses during this task. Overall, the findings indicate that the vOT response to written words is not affected by expertise during a perceptual matching task, suggesting that the association between visual word expertise and vOT response may depend on the task demand. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Social anxiety under load: the effects of perceptual load in processing emotional faces.
Soares, Sandra C; Rocha, Marta; Neiva, Tiago; Rodrigues, Paulo; Silva, Carlos F
2015-01-01
Previous studies in the social anxiety arena have shown an impaired attentional control system, similar to that found in trait anxiety. However, the effect of task demands on the processing of socially threatening stimuli, such as angry faces, in social anxiety remains unexplored. In the present study, 54 university students scoring high or low on the Social Interaction and Performance Anxiety and Avoidance Scale (SIPAAS) questionnaire participated in a target letter discrimination task while task-irrelevant face stimuli (angry, disgust, happy, and neutral) were simultaneously presented. The results showed that high (compared to low) socially anxious individuals were more prone to distraction by task-irrelevant stimuli, particularly under high perceptual load conditions. More importantly, for such individuals, the accuracy proportions for angry faces significantly differed between the low and high perceptual load conditions, which is discussed in light of current evolutionary models of social anxiety.
Motor demands impact speed of information processing in Autism Spectrum Disorders
Kenworthy, Lauren; Yerys, Benjamin E.; Weinblatt, Rachel; Abrams, Danielle N.; Wallace, Gregory L.
2015-01-01
Objective: The apparent contradiction between preserved or even enhanced perceptual processing speed on inspection time tasks in autism spectrum disorders (ASD) and impaired performance on complex processing speed tasks that require motor output (e.g. Wechsler Processing Speed Index) has not yet been systematically investigated. This study investigates whether adding motor output demands to an inspection time task impairs ASD performance compared to that of typically developing control (TDC) children. Method: The performance of children with ASD (n=28; mean FSIQ=115) and TDC children (n=25; mean FSIQ=122) was compared on processing speed tasks with increasing motor demand. Correlations were run between ASD task performance and Autism Diagnostic Observation Schedule (ADOS) Communication scores. Results: Performance by the ASD and TDC groups on a simple perceptual processing speed task with minimal motor demand was equivalent, though it diverged (ASD worse than TDC) on two tasks with the same stimuli but increased motor output demands. ASD performance on the moderate but not the high speeded motor output demand task was negatively correlated with ADOS communication symptoms. Conclusions: These data address the apparent contradiction between preserved inspection time and slowed "processing speed" in ASD. They show that processing speed is preserved when motor demands are minimized, but that increased motor output demands interfere with the ability to act on perceptual processing of simple stimuli. Reducing motor demands (e.g. through the use of computers) may increase the capacity of people with ASD to demonstrate good perceptual processing in a variety of educational, vocational and social settings. PMID:23937483
Time course influences transfer of visual perceptual learning across spatial location.
Larcombe, S J; Kennard, C; Bridge, H
2017-06-01
Visual perceptual learning describes the improvement of visual perception with repeated practice. Previous research has established that the learning effects of perceptual training may be transferable to untrained stimulus attributes such as spatial location under certain circumstances. However, the mechanisms involved in transfer have not yet been fully elucidated. Here, we investigated the effect of altering training time course on the transferability of learning effects. Participants were trained on a motion direction discrimination task or a sinusoidal grating orientation discrimination task in a single visual hemifield. The 4000 training trials were either condensed into one day, or spread evenly across five training days. When participants were trained over a five-day period, there was transfer of learning to both the untrained visual hemifield and the untrained task. In contrast, when the same amount of training was condensed into a single day, participants did not show any transfer of learning. Thus, learning time course may influence the transferability of perceptual learning effects. Copyright © 2017 Elsevier Ltd. All rights reserved.
Theory of mind and perceptual context-processing in schizophrenia.
Uhlhaas, Peter J; Phillips, William A; Schenkel, Lindsay S; Silverstein, Steven M
2006-07-01
A series of studies have suggested that schizophrenia patients are deficient in theory of mind (ToM). However, the cognitive mechanisms underlying ToM deficits in schizophrenia are largely unknown. The present study examined the hypothesis that impaired ToM in schizophrenia can be understood as a deficit in context processing. Disorganised schizophrenia patients (N = 12), nondisorganised schizophrenia patients (N = 36), and nonpsychotic psychiatric patients (N = 26) were tested on three ToM tasks and a visual size perception task, a measure of perceptual context processing. In addition, statistical analyses were carried out which compared chronic, treatment-refractory schizophrenia patients (N = 28) to those with an episodic course of illness (N = 20). Overall, ToM performance was linked to deficits in context processing in schizophrenia patients. Statistical comparisons showed that disorganised as well as chronic schizophrenia patients were more impaired in ToM but more accurate in a visual size perception task where perceptual context is misleading. This pattern of results is interpreted as indicating a possible link between deficits in ToM and perceptual context processing, which together with deficits in perceptual grouping, are part of a broader dysfunction in cognitive coordination in schizophrenia.
Effects of lorazepam on visual perceptual abilities.
Pompéia, S; Pradella-Hallinan, M; Manzano, G M; Bueno, O F A
2008-04-01
The aim was to evaluate the effects of an acute dose of the benzodiazepine (BZ) lorazepam on five distinguishable visual perception abilities, determined by previous factor-analytic studies, in young healthy volunteers. This was a double-blind, cross-over design study of acute oral doses of lorazepam (2 mg) and placebo in young healthy volunteers. We focused on a set of paper-and-pencil tests of visual perceptual abilities that load on five correlated but distinguishable factors (Spatial Visualization, Spatial Relations, Perceptual Speed, Closure Speed, and Closure Flexibility). Some other tests (DSST, immediate and delayed recall of prose; measures of subjective mood alterations) were used to control for the classic BZ-induced effects. Lorazepam impaired performance in the DSST and delayed recall of prose, increased subjective sedation and impaired tasks of all abilities except Spatial Visualization and Closure Speed. Only impairment in Perceptual Speed (Identical Pictures task) and delayed recall of prose were not explained by sedation. Acute administration of lorazepam, in a dose that impaired episodic memory, selectively affected different visual perceptual abilities before and after controlling for sedation. Central executive demands and sedation did not account for the results, so impairment in the Identical Pictures task may be attributed to lorazepam's visual processing alterations. Copyright © 2008 John Wiley & Sons, Ltd.
Enhanced Perceptual Functioning in Autism: An Update, and Eight Principles of Autistic Perception
ERIC Educational Resources Information Center
Mottron, Laurent; Dawson, Michelle; Soulieres, Isabelle; Hubert, Benedicte; Burack, Jake
2006-01-01
We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception…
Krans, Julie; Langner, Oliver; Reinecke, Andrea; Pearson, David G
2013-12-01
The present study addressed the role of context information and dual-task interference during the encoding of negative pictures on intrusion development and voluntary recall. Healthy participants were shown negative pictures with or without context information. Pictures were either viewed alone or concurrently with a visuospatial or verbal task. Participants reported their intrusive images of the pictures in a diary. At follow-up, perceptual and contextual memory was tested. Participants in the context group reported more intrusive images and perceptual voluntary memory than participants in the no context group. No effects of the concurrent tasks were found on intrusive image frequency, but perceptual and contextual memory was affected according to the cognitive load of the task. The analogue method cannot be generalized to real-life trauma and the secondary tasks may differ in cognitive load. The findings challenge a dual memory model of PTSD but support an account in which retrieval strategy, rather than encoding processes, accounts for the experience of involuntary versus voluntary recall. Copyright © 2013 Elsevier Ltd. All rights reserved.
Dilution: a theoretical burden or just load? A reply to Tsal and Benoni (2010).
Lavie, Nilli; Torralbo, Ana
2010-12-01
Load theory of attention proposes that distractor processing is reduced in tasks with high perceptual load that exhaust attentional capacity within task-relevant processing. In contrast, tasks of low perceptual load leave spare capacity that spills over, resulting in the perception of task-irrelevant, potentially distracting stimuli. Tsal and Benoni (2010) find that distractor response competition effects can be reduced under conditions with a high search set size but low perceptual load (due to a singleton color target). They claim that the usual effect of search set size on distractor processing is not due to attentional load but instead attribute this to lower level visual interference. Here, we propose an account for their findings within load theory. We argue that in tasks of low perceptual load but high set size, an irrelevant distractor competes with the search nontargets for remaining capacity. Thus, distractor processing is reduced under conditions in which the search nontargets receive the spillover of capacity instead of the irrelevant distractor. We report a new experiment testing this prediction. Our new results demonstrate that, when peripheral distractor processing is reduced, it is the search nontargets nearest to the target that are perceived instead. Our findings provide new evidence for the spare capacity spillover hypothesis made by load theory and rule out accounts in terms of lower level visual interference (or mere "dilution") for cases of reduced distractor processing under low load in displays of high set size. We also discuss additional evidence that discounts the viability of Tsal and Benoni's dilution account as an alternative to perceptual load.
Visual Task Demands and the Auditory Mismatch Negativity: An Empirical Study and a Meta-Analysis
Wiens, Stefan; Szychowska, Malina; Nilsson, Mats E.
2016-01-01
Because the auditory system is particularly useful in monitoring the environment, previous research has examined whether task-irrelevant, auditory distracters are processed even if subjects focus their attention on visual stimuli. This research suggests that attentionally demanding visual tasks decrease the auditory mismatch negativity (MMN) to simultaneously presented auditory distractors. Because a recent behavioral study found that high visual perceptual load decreased detection sensitivity of simultaneous tones, we used a similar task (n = 28) to determine if high visual perceptual load would reduce the auditory MMN. Results suggested that perceptual load did not decrease the MMN. At face value, these nonsignificant findings may suggest that effects of perceptual load on the MMN are smaller than those of other demanding visual tasks. If so, effect sizes should differ systematically between the present and previous studies. We conducted a selective meta-analysis of published studies in which the MMN was derived from the EEG, the visual task demands were continuous and varied between high and low within the same task, and the task-irrelevant tones were presented in a typical oddball paradigm simultaneously with the visual stimuli. Because the meta-analysis suggested that the present (null) findings did not differ systematically from previous findings, the available evidence was combined. Results of this meta-analysis confirmed that demanding visual tasks reduce the MMN to auditory distracters. However, because the meta-analysis was based on small studies and because of the risk for publication biases, future studies should be preregistered with large samples (n > 150) to provide confirmatory evidence for the results of the present meta-analysis. These future studies should also use control conditions that reduce confounding effects of neural adaptation, and use load manipulations that are defined independently from their effects on the MMN. PMID:26741815
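For readers unfamiliar with how such evidence is pooled, the sketch below shows a generic fixed-effect, inverse-variance combination of standardized effect sizes. The effect sizes and standard errors are invented placeholders, not the studies or effects analyzed in the meta-analysis above.

```python
import numpy as np

# Hypothetical standardized effect sizes (e.g., high-load minus low-load MMN
# amplitude differences) and their standard errors; invented placeholders only.
effects = np.array([-0.45, -0.30, -0.60, -0.10, -0.25])
ses = np.array([0.20, 0.25, 0.30, 0.22, 0.18])

weights = 1.0 / ses ** 2                         # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
z = pooled / pooled_se

print(f"pooled effect = {pooled:.3f}, SE = {pooled_se:.3f}, z = {z:.2f}")
```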
Perceptual Organization and Operative Thought: A Study of Coherence in Memory.
ERIC Educational Resources Information Center
Heindel, Patricia; Kose, Gary
Examined in three studies were the influence of perceptual organization on children's memory and the relationship between operational thought and memory performance. In the first study, 72 children at 5, 7, and 9 years of age were given a series of Piagetian tasks and a memory task. Subjects were presented with 10 color-shape pairs depicted in…
Perceptual Averaging in Individuals with Autism Spectrum Disorder.
Corbett, Jennifer E; Venuti, Paola; Melcher, David
2016-01-01
There is mounting evidence that observers rely on statistical summaries of visual information to maintain stable and coherent perception. Sensitivity to the mean (or other prototypical value) of a visual feature (e.g., mean size) appears to be a pervasive process in human visual perception. Previous studies in individuals diagnosed with Autism Spectrum Disorder (ASD) have uncovered characteristic patterns of visual processing that suggest they may rely more on enhanced local representations of individual objects instead of computing such perceptual averages. To further explore the fundamental nature of abstract statistical representation in visual perception, we investigated perceptual averaging of mean size in a group of 12 high-functioning individuals diagnosed with ASD using simplified versions of two identification and adaptation tasks that elicited characteristic perceptual averaging effects in a control group of neurotypical participants. In Experiment 1, participants performed with above chance accuracy in recalling the mean size of a set of circles (mean task) despite poor accuracy in recalling individual circle sizes (member task). In Experiment 2, their judgments of single circle size were biased by mean size adaptation. Overall, these results suggest that individuals with ASD perceptually average information about sets of objects in the surrounding environment. Our results underscore the fundamental nature of perceptual averaging in vision, and further our understanding of how autistic individuals make sense of the external environment.
Effect of perceptual load on semantic access by speech in children.
Jerger, Susan; Damian, Markus F; Mills, Candice; Bartlett, James; Tye-Murray, Nancy; Abdi, Hervé
2013-04-01
To examine whether semantic access by speech requires attention in children. Children (N = 200) named pictures and ignored distractors on a cross-modal (distractors: auditory-no face) or multimodal (distractors: auditory-static face and audiovisual-dynamic face) picture word task. The cross-modal task had a low load, and the multimodal task had a high load (i.e., respectively naming pictures displayed on a blank screen vs. below the talker's face on his T-shirt). Semantic content of distractors was manipulated to be related vs. unrelated to the picture (e.g., picture "dog" with distractors "bear" vs. "cheese"). If irrelevant semantic content manipulation influences naming times on both tasks despite variations in loads, Lavie's (2005) perceptual load model proposes that semantic access is independent of capacity-limited attentional resources; if, however, irrelevant content influences naming only on the cross-modal task (low load), the perceptual load model proposes that semantic access is dependent on attentional resources exhausted by the higher load task. Irrelevant semantic content affected performance for both tasks in 6- to 9-year-olds but only on the cross-modal task in 4- to 5-year-olds. The addition of visual speech did not influence results on the multimodal task. Younger and older children differ in dependence on attentional resources for semantic access by speech.
Metacognitive deficits in categorization tasks in a population with impaired inner speech.
Langland-Hassan, Peter; Gauker, Christopher; Richardson, Michael J; Dietz, Aimee; Faries, Frank R
2017-11-01
This study examines the relation of language use to a person's ability to perform categorization tasks and to assess their own abilities in those categorization tasks. A silent rhyming task was used to confirm that a group of people with post-stroke aphasia (PWA) had corresponding covert language production (or "inner speech") impairments. The performance of the PWA was then compared to that of age- and education-matched healthy controls on three kinds of categorization tasks and on metacognitive self-assessments of their performance on those tasks. The PWA showed no deficits in their ability to categorize objects for any of the three trial types (visual, thematic, and categorial). However, on the categorial trials, their metacognitive assessments of whether they had categorized correctly were less reliable than those of the control group. The categorial trials were distinguished from the others by the fact that the categorization could not be based on some immediately perceptible feature or on the objects' being found together in a type of scenario or setting. This result offers preliminary evidence for a link between covert language use and a specific form of metacognition. Copyright © 2017 Elsevier B.V. All rights reserved.
Cognitive load effects on early visual perceptual processing.
Liu, Ping; Forte, Jason; Sewell, David; Carter, Olivia
2018-05-01
Contrast-based early visual processing has largely been considered to involve autonomous processes that do not need the support of cognitive resources. However, as spatial attention is known to modulate early visual perceptual processing, we explored whether cognitive load could similarly impact contrast-based perception. We used a dual-task paradigm to assess the impact of a concurrent working memory task on the performance of three different early visual tasks. The results from Experiment 1 suggest that cognitive load can modulate early visual processing. No effects of cognitive load were seen in Experiments 2 or 3. Together, the findings provide evidence that under some circumstances cognitive load effects can penetrate the early stages of visual processing and that higher cognitive function and early perceptual processing may not be as independent as was once thought.
Gomes, Hilary; Barrett, Sophia; Duff, Martin; Barnhardt, Jack; Ritter, Walter
2008-03-01
We examined the impact of perceptual load by manipulating interstimulus interval (ISI) in two auditory selective attention studies that varied in the difficulty of the target discrimination. In the paradigm, channels were separated by frequency and target/deviant tones were softer in intensity. Three ISI conditions were presented: fast (300 ms), medium (600 ms), and slow (900 ms). Behavioral (accuracy and RT) and electrophysiological measures (Nd, P3b) were observed. In both studies, participants showed poorer accuracy during the fast ISI condition than during the slow condition, suggesting that ISI affected task difficulty. However, none of the three measures of processing examined, Nd amplitude, P3b amplitude elicited by unattended deviant stimuli, or false alarms to unattended deviants, were impacted by ISI in the manner predicted by perceptual load theory. The prediction based on perceptual load theory, that there would be more processing of irrelevant stimuli under conditions of low as compared to high perceptual load, was not supported in these auditory studies. Task difficulty/perceptual load impacts the processing of irrelevant stimuli in the auditory modality differently than predicted by perceptual load theory, and perhaps differently than in the visual modality.
Perceptual load-dependent neural correlates of distractor interference inhibition.
Xu, Jiansong; Monterosso, John; Kober, Hedy; Balodis, Iris M; Potenza, Marc N
2011-01-18
The load theory of selective attention hypothesizes that distractor interference is suppressed after perceptual processing (i.e., in the later stage of central processing) at low perceptual load of the central task, but in the early stage of perceptual processing at high perceptual load. Consistently, studies on the neural correlates of attention have found a smaller distractor-related activation in the sensory cortex at high relative to low perceptual load. However, it is not clear whether the distractor-related activation in brain regions linked to later stages of central processing (e.g., in the frontostriatal circuits) is also smaller at high rather than low perceptual load, as might be predicted based on the load theory. We studied 24 healthy participants using functional magnetic resonance imaging (fMRI) during a visual target identification task with two perceptual loads (low vs. high). Participants showed distractor-related increases in activation in the midbrain, striatum, occipital and medial and lateral prefrontal cortices at low load, but distractor-related decreases in activation in the midbrain ventral tegmental area and substantia nigra (VTA/SN), striatum, thalamus, and extensive sensory cortices at high load. Multiple levels of central processing involving midbrain and frontostriatal circuits participate in suppressing distractor interference at either low or high perceptual load. For suppressing distractor interference, the processing of sensory inputs in both early and late stages of central processing are enhanced at low load but inhibited at high load.
Some factors underlying individual differences in speech recognition on PRESTO: a first report.
Tamati, Terrin N; Gilbert, Jaimie L; Pisoni, David B
2013-01-01
Previous studies investigating speech recognition in adverse listening conditions have found extensive variability among individual listeners. However, little is currently known about the core underlying factors that influence speech recognition abilities. To investigate sensory, perceptual, and neurocognitive differences between good and poor listeners on the Perceptually Robust English Sentence Test Open-set (PRESTO), a new high-variability sentence recognition test under adverse listening conditions. Participants who fell in the upper quartile (HiPRESTO listeners) or lower quartile (LoPRESTO listeners) on key word recognition on sentences from PRESTO in multitalker babble completed a battery of behavioral tasks and self-report questionnaires designed to investigate real-world hearing difficulties, indexical processing skills, and neurocognitive abilities. Young, normal-hearing adults (N = 40) from the Indiana University community participated in the current study. Participants' assessment of their own real-world hearing difficulties was measured with a self-report questionnaire on situational hearing and hearing health history. Indexical processing skills were assessed using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task. Neurocognitive abilities were measured with the Auditory Digit Span Forward (verbal short-term memory) and Digit Span Backward (verbal working memory) tests, the Stroop Color and Word Test (attention/inhibition), the WordFam word familiarity test (vocabulary size), the Behavioral Rating Inventory of Executive Function-Adult Version (BRIEF-A) self-report questionnaire on executive function, and two performance subtests of the Wechsler Abbreviated Scale of Intelligence (WASI) Performance Intelligence Quotient (IQ; nonverbal intelligence). Scores on self-report questionnaires and behavioral tasks were tallied and analyzed by listener group (HiPRESTO and LoPRESTO). The extreme groups did not differ overall on self-reported hearing difficulties in real-world listening environments. However, an item-by-item analysis of questions revealed that LoPRESTO listeners reported significantly greater difficulty understanding speakers in a public place. HiPRESTO listeners were significantly more accurate than LoPRESTO listeners at gender discrimination and regional dialect categorization, but they did not differ on talker discrimination accuracy or response time, or gender discrimination response time. HiPRESTO listeners also had longer forward and backward digit spans, higher word familiarity ratings on the WordFam test, and lower (better) scores for three individual items on the BRIEF-A questionnaire related to cognitive load. The two groups did not differ on the Stroop Color and Word Test or either of the WASI performance IQ subtests. HiPRESTO listeners and LoPRESTO listeners differed in indexical processing abilities, short-term and working memory capacity, vocabulary size, and some domains of executive functioning. These findings suggest that individual differences in the ability to encode and maintain highly detailed episodic information in speech may underlie the variability observed in speech recognition performance in adverse listening conditions using high-variability PRESTO sentences in multitalker babble. American Academy of Audiology.
Herzmann, Grit
2016-07-01
The N250 and N250r (r for repetition, signaling a difference measure of priming) have been proposed to reflect the activation of perceptual memory representations for individual faces. Increased N250r and N250 amplitudes have been associated with higher levels of familiarity and expertise, respectively. In contrast to these observations, the N250 amplitude has been found to be larger for other-race than own-race faces in recognition memory tasks. This study investigated whether these findings were due to increased identity-specific processing demands for other-race relative to own-race faces and whether or not similar results would be obtained for the N250 in a repetition priming paradigm. Only Caucasian participants were available for testing and completed two tasks with Caucasian, African-American, and Chinese faces. In a repetition priming task, participants decided whether or not sequentially presented faces were of the same identity (individuation task) or same race (categorization task). Increased N250 amplitudes were found for African-American and Chinese faces relative to Caucasian faces, replicating previous results in recognition memory tasks. Contrary to the expectation that increased N250 amplitudes for other-race faces would be confined to the individuation task, both tasks showed similar results. This could be due to the fact that face identity information needed to be maintained across the sequential presentation of prime and target in both tasks. Increased N250 amplitudes for other-race faces are taken to represent increased neural demands on the identity-specific processing of other-race faces, which are typically processed less holistically and less on the level of the individual. Copyright © 2016 Elsevier B.V. All rights reserved.
Auditory Perceptual Abilities Are Associated with Specific Auditory Experience
Zaltz, Yael; Globerson, Eitan; Amir, Noam
2017-01-01
The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks, in which musicians had been found superior. Results showed superior abilities on the DLF task for guitar players, though no difference was found between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels), and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels), were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant discrimination, demonstrating highly specific effects for auditory linguistic experience as well. Overall, results suggest that auditory superiority is associated with the specific auditory exposure. PMID:29238318
Subjective Confidence in Perceptual Judgments: A Test of the Self-Consistency Model
ERIC Educational Resources Information Center
Koriat, Asher
2011-01-01
Two questions about subjective confidence in perceptual judgments are examined: the bases for these judgments and the reasons for their accuracy. Confidence in perceptual judgments has been claimed to rest on qualitatively different processes than confidence in memory tasks. However, predictions from a self-consistency model (SCM), which had been…
Verbal Counting Moderates Perceptual Biases Found in Children's Cardinality Judgments
ERIC Educational Resources Information Center
Posid, Tasha; Cordes, Sara
2015-01-01
A crucial component of numerical understanding is one's ability to abstract numerical properties regardless of varying perceptual attributes. Evidence from numerical match-to-sample tasks suggests that children find it difficult to match sets based on number in the face of varying perceptual attributes, yet it is unclear whether these findings are…
Connell, Louise; Lynott, Dermot
2014-04-01
How does the meaning of a word affect how quickly we can recognize it? Accounts of visual word recognition allow semantic information to facilitate performance but have neglected the role of modality-specific perceptual attention in activating meaning. We predicted that modality-specific semantic information would differentially facilitate lexical decision and reading aloud, depending on how perceptual attention is implicitly directed by each task. Large-scale regression analyses showed the perceptual modalities involved in representing a word's referent concept influence how easily that word is recognized. Both lexical decision and reading-aloud tasks direct attention toward vision, and are faster and more accurate for strongly visual words. Reading aloud additionally directs attention toward audition and is faster and more accurate for strongly auditory words. Furthermore, the overall semantic effects are as large for reading aloud as lexical decision and are separable from age-of-acquisition effects. These findings suggest that implicitly directing perceptual attention toward a particular modality facilitates representing modality-specific perceptual information in the meaning of a word, which in turn contributes to the lexical decision or reading-aloud response.
Disentangling perceptual from motor implicit sequence learning with a serial color-matching task.
Gheysen, Freja; Gevers, Wim; De Schutter, Erik; Van Waelvelde, Hilde; Fias, Wim
2009-08-01
This paper contributes to the domain of implicit sequence learning by presenting a new version of the serial reaction time (SRT) task that allows unambiguously separating perceptual from motor learning. Participants matched the colors of three small squares with the color of a subsequently presented large target square. An identical sequential structure was tied to the colors of the target square (perceptual version, Experiment 1) or to the manual responses (motor version, Experiment 2). Short blocks of sequenced and randomized trials alternated and hence provided continuous monitoring of the learning process. Reaction time measurements demonstrated clear evidence of independent learning of perceptual and motor serial information, though they revealed different time courses for the two learning processes. No explicit awareness of the serial structure was needed for either of the two types of learning to occur. The paradigm introduced in this paper showed that perceptual learning can occur with SRT measurements and opens important perspectives for future imaging studies addressing the ongoing question of which brain areas are involved in the implicit learning of modality-specific (motor vs. perceptual) or general serial order.
Hamamé, Carlos M; Cosmelli, Diego; Henriquez, Rodrigo; Aboitiz, Francisco
2011-04-26
Humans and other animals change the way they perceive the world due to experience. This process has been labeled as perceptual learning, and implies that adult nervous systems can adaptively modify the way in which they process sensory stimulation. However, the mechanisms by which the brain modifies this capacity have not been sufficiently analyzed. We studied the neural mechanisms of human perceptual learning by combining electroencephalographic (EEG) recordings of brain activity and the assessment of psychophysical performance during training in a visual search task. All participants improved their perceptual performance as reflected by an increase in sensitivity (d') and a decrease in reaction time. The EEG signal was acquired throughout the entire experiment revealing amplitude increments, specific and unspecific to the trained stimulus, in event-related potential (ERP) components N2pc and P3 respectively. P3 unspecific modification can be related to context or task-based learning, while N2pc may be reflecting a more specific attentional-related boosting of target detection. Moreover, bell and U-shaped profiles of oscillatory brain activity in gamma (30-60 Hz) and alpha (8-14 Hz) frequency bands may suggest the existence of two phases for learning acquisition, which can be understood as distinctive optimization mechanisms in stimulus processing. We conclude that there are reorganizations in several neural processes that contribute differently to perceptual learning in a visual search task. We propose an integrative model of neural activity reorganization, whereby perceptual learning takes place as a two-stage phenomenon including perceptual, attentional and contextual processes.
Perceptual and Gaze Biases during Face Processing: Related or Not?
Samson, Hélène; Fiori-Duharcourt, Nicole; Doré-Mazars, Karine; Lemoine, Christelle; Vergilino-Perez, Dorine
2014-01-01
Previous studies have demonstrated a left perceptual bias while looking at faces, due to the fact that observers mainly use information from the left side of a face (from the observer's point of view) to perform a judgment task. Such a bias is consistent with the right hemisphere dominance for face processing and has sometimes been linked to a left gaze bias, i.e., more and/or longer fixations on the left side of the face. Here, we recorded eye movements in two different experiments during a gender judgment task, using normal and chimeric faces that were presented above, below, right, or left of the central fixation point, or on it (central position). Participants performed the judgment task by remaining fixated on the fixation point or after executing several saccades (up to three). A left perceptual bias was not systematically found, as it depended on the number of allowed saccades and on face position. Moreover, the gaze bias clearly depended on face position, as the initial fixation was guided by face position and landed on the closest half-face, toward the center of gravity of the face. The analysis of the subsequent fixations revealed that observers move their eyes from one side to the other. More importantly, no apparent link between gaze and perceptual biases was found here. This implies that we do not necessarily look toward the side of the face that we use to perform the gender judgment. Despite the fact that these results may be limited by the absence of perceptual and gaze biases in some conditions, we emphasized the inter-individual differences observed in terms of perceptual bias, hinting at the importance of performing individual analyses and drawing attention to the influence of the method used to study this bias. PMID:24454927
Distinguishing bias from sensitivity effects in multialternative detection tasks.
Sridharan, Devarajan; Steinmetz, Nicholas A; Moore, Tirin; Knudsen, Eric I
2014-08-21
Studies investigating the neural bases of cognitive phenomena increasingly employ multialternative detection tasks that seek to measure the ability to detect a target stimulus or changes in some target feature (e.g., orientation or direction of motion) that could occur at one of many locations. In such tasks, it is essential to distinguish the behavioral and neural correlates of enhanced perceptual sensitivity from those of increased bias for a particular location or choice (choice bias). However, making such a distinction is not possible with established approaches. We present a new signal detection model that decouples the behavioral effects of choice bias from those of perceptual sensitivity in multialternative (change) detection tasks. By formulating the perceptual decision in a multidimensional decision space, our model quantifies the respective contributions of bias and sensitivity to multialternative behavioral choices. With a combination of analytical and numerical approaches, we demonstrate an optimal, one-to-one mapping between model parameters and choice probabilities even for tasks involving arbitrarily large numbers of alternatives. We validated the model with published data from two ternary choice experiments: a target-detection experiment and a length-discrimination experiment. The results of this validation provided novel insights into perceptual processes (sensory noise and competitive interactions) that can accurately and parsimoniously account for observers' behavior in each task. The model will find important application in identifying and interpreting the effects of behavioral manipulations (e.g., cueing attention) or neural perturbations (e.g., stimulation or inactivation) in a variety of multialternative tasks of perception, attention, and decision-making. © 2014 ARVO.
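The abstract above describes a model that separates sensitivity from choice bias by casting multialternative detection in a multidimensional decision space. The snippet below is not the authors' model, only a minimal Monte Carlo sketch of that general idea under stated assumptions: per-location sensitivities (d) and per-location criteria (c) are specified independently, and the resulting choice probabilities are simulated. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def choice_probs(d, c, n_trials=200_000):
    """Monte Carlo sketch of a multialternative (change) detection decision.

    d : per-location sensitivities (signal strength at the changed location)
    c : per-location criteria (lower criterion = stronger bias toward that location)
    Each trial, unit-variance Gaussian evidence is drawn at every location; the
    signal adds d[i] at the location that actually changed. The observer reports
    the location whose evidence exceeds its criterion by the largest margin, or
    'no change' if none does.
    """
    d, c = np.asarray(d, float), np.asarray(c, float)
    k = len(d)
    probs = np.zeros((k, k + 1))            # rows: true location, cols: responses (+ no-change)
    for true_loc in range(k):
        evidence = rng.standard_normal((n_trials, k))
        evidence[:, true_loc] += d[true_loc]  # add signal at the changed location
        margin = evidence - c                 # evidence relative to each criterion
        best = margin.argmax(axis=1)
        detected = margin[np.arange(n_trials), best] > 0
        resp = np.where(detected, best, k)    # k codes the 'no change' response
        probs[true_loc] = np.bincount(resp, minlength=k + 1) / n_trials
    return probs

# Equal sensitivity at two locations, but a more liberal criterion (bias) at location 0:
print(choice_probs(d=[1.5, 1.5], c=[0.3, 1.0]).round(3))
```

With equal d but unequal c, the simulated choice probabilities shift toward location 0 even though sensitivity is identical, which is the kind of dissociation the model above is designed to quantify.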
Vakil, Eli; Lev-Ran Galon, Carmit
2014-01-01
Existing literature presents a complex and inconsistent picture of the specific deficiencies involved in skill learning following traumatic brain injury (TBI). In an attempt to address this difficulty, individuals with moderate to severe TBI (n = 29) and a control group (n = 29) were tested with two different skill-learning tasks: conceptual (i.e., Tower of Hanoi Puzzle, TOHP) and perceptual (i.e., mirror reading, MR). Based on previous studies of the effect of divided attention on these tasks and findings regarding the effect of TBI on conceptual and perceptual priming tasks, it was predicted that the group with TBI would show impaired baseline performance compared to controls in the TOHP task though their learning rate would be maintained, while both baseline performance and learning rate on the MR task would be maintained. Consistent with our predictions, overall baseline performance of the group with TBI was impaired in the TOHP test, while the learning rate was not. The learning rate on the MR task was preserved but, contrary to our prediction, response time of the group with TBI was slower than that of controls. The pattern of results observed in the present study was interpreted to possibly reflect an impairment of both the frontal lobes as well as that of diffuse axonal injury, which is well documented as being affected by TBI. The former impairment affects baseline performance of the conceptual learning skill, while the latter affects the overall slower performance of the perceptual learning skill.
Perceptual context effects of speech and nonspeech sounds: The role of auditory categories
Aravamudhan, Radhika; Lotto, Andrew J.; Hawks, John W.
2008-01-01
Williams [(1986). “Role of dynamic information in the perception of coarticulated vowels,” Ph.D. thesis, University of Connecticut, Standford, CT] demonstrated that nonspeech contexts had no influence on pitch judgments of nonspeech targets, whereas context effects were obtained when instructed to perceive the sounds as speech. On the other hand, Holt et al. [(2000). “Neighboring spectral content influences vowel identification,” J. Acoust. Soc. Am. 108, 710–722] showed that nonspeech contexts were sufficient to elicit context effects in speech targets. The current study was to test a hypothesis that could explain the varying effectiveness of nonspeech contexts: Context effects are obtained only when there are well-established perceptual categories for the target stimuli. Experiment 1 examined context effects in speech and nonspeech signals using four series of stimuli: steady-state vowels that perceptually spanned from ∕ʊ∕-∕ɪ∕ in isolation and in the context of ∕w∕ (with no steady-state portion) and two nonspeech sine-wave series that mimicked the acoustics of the speech series. In agreement with previous work context effects were obtained for speech contexts and targets but not for nonspeech analogs. Experiment 2 tested predictions of the hypothesis by testing for nonspeech context effects after the listeners had been trained to categorize the sounds. Following training, context-dependent categorization was obtained for nonspeech stimuli in the training group. These results are presented within a general perceptual-cognitive framework for speech perception research. PMID:19045660
Collins, Heather R; Zhu, Xun; Bhatt, Ramesh S; Clark, Jonathan D; Joseph, Jane E
2012-12-01
The degree to which face-specific brain regions are specialized for different kinds of perceptual processing is debated. This study parametrically varied demands on featural, first-order configural, or second-order configural processing of faces and houses in a perceptual matching task to determine the extent to which the process of perceptual differentiation was selective for faces regardless of processing type (domain-specific account), specialized for specific types of perceptual processing regardless of category (process-specific account), engaged in category-optimized processing (i.e., configural face processing or featural house processing), or reflected generalized perceptual differentiation (i.e., differentiation that crosses category and processing type boundaries). ROIs were identified in a separate localizer run or with a similarity regressor in the face-matching runs. The predominant principle accounting for fMRI signal modulation in most regions was generalized perceptual differentiation. Nearly all regions showed perceptual differentiation for both faces and houses for more than one processing type, even if the region was identified as face-preferential in the localizer run. Consistent with process specificity, some regions showed perceptual differentiation for first-order processing of faces and houses (right fusiform face area and occipito-temporal cortex and right lateral occipital complex), but not for featural or second-order processing. Somewhat consistent with domain specificity, the right inferior frontal gyrus showed perceptual differentiation only for faces in the featural matching task. The present findings demonstrate that the majority of regions involved in perceptual differentiation of faces are also involved in differentiation of other visually homogenous categories.
Soto, Fabian A.; Waldschmidt, Jennifer G.; Helie, Sebastien; Ashby, F. Gregory
2013-01-01
Previous evidence suggests that relatively separate neural networks underlie initial learning of rule-based and information-integration categorization tasks. With the development of automaticity, categorization behavior in both tasks becomes increasingly similar and exclusively related to activity in cortical regions. The present study uses multi-voxel pattern analysis to directly compare the development of automaticity in different categorization tasks. Each of three groups of participants received extensive training in a different categorization task: either an information-integration task, or one of two rule-based tasks. Four training sessions were performed inside an MRI scanner. Three different analyses were performed on the imaging data from a number of regions of interest (ROIs). The common patterns analysis had the goal of revealing ROIs with similar patterns of activation across tasks. The unique patterns analysis had the goal of revealing ROIs with dissimilar patterns of activation across tasks. The representational similarity analysis aimed at exploring (1) the similarity of category representations across ROIs and (2) how those patterns of similarities compared across tasks. The results showed that common patterns of activation were present in motor areas and basal ganglia early in training, but only in the former later on. Unique patterns were found in a variety of cortical and subcortical areas early in training, but they were dramatically reduced with training. Finally, patterns of representational similarity between brain regions became increasingly similar across tasks with the development of automaticity. PMID:23333700
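As a concrete illustration of the representational similarity analysis mentioned above (not the authors' exact pipeline), the sketch below builds a representational dissimilarity matrix (RDM) for each region of interest from toy voxel patterns and compares the two RDMs with a Spearman correlation. The ROI names and patterns are invented placeholders.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation between
    activation patterns (rows = conditions/stimuli, columns = voxels)."""
    return 1.0 - np.corrcoef(patterns)

def rdm_similarity(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    ranks = lambda x: np.argsort(np.argsort(x))
    return np.corrcoef(ranks(rdm_a[iu]), ranks(rdm_b[iu]))[0, 1]

# Toy example: random 'voxel patterns' for 8 conditions in two hypothetical ROIs,
# where the second ROI partly shares the representational geometry of the first.
rng = np.random.default_rng(1)
roi_motor = rng.standard_normal((8, 50))
roi_basal_ganglia = 0.7 * roi_motor + 0.3 * rng.standard_normal((8, 50))
print(rdm_similarity(rdm(roi_motor), rdm(roi_basal_ganglia)).round(2))
```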
Distance-dependent processing of pictures and words.
Amit, Elinor; Algom, Daniel; Trope, Yaacov
2009-08-01
A series of 8 experiments investigated the association between pictorial and verbal representations and the psychological distance of the referent objects from the observer. The results showed that people better process pictures that represent proximal objects and words that represent distal objects than pictures that represent distal objects and words that represent proximal objects. These results were obtained with various psychological distance dimensions (spatial, temporal, and social), different tasks (classification and categorization), and different measures (speed of processing and selective attention). The authors argue that differences in the processing of pictures and words emanate from the physical similarity of pictures, but not words, to the referents. Consequently, perceptual analysis is commonly applied to pictures but not to words. Pictures thus impart a sense of closeness to the referent objects and are preferably used to represent such objects, whereas words do not convey proximity and are preferably used to represent distal objects in space, time, and social perspective.
Influence of musical expertise on segmental and tonal processing in Mandarin Chinese.
Marie, Céline; Delogu, Franco; Lampis, Giulia; Belardinelli, Marta Olivetti; Besson, Mireille
2011-10-01
A same-different task was used to test the hypothesis that musical expertise improves the discrimination of tonal and segmental (consonant, vowel) variations in a tone language, Mandarin Chinese. Two four-word sequences (prime and target) were presented to French musicians and nonmusicians unfamiliar with Mandarin, and event-related brain potentials were recorded. Musicians detected both tonal and segmental variations more accurately than nonmusicians. Moreover, tonal variations were associated with higher error rate than segmental variations and elicited an increased N2/N3 component that developed 100 msec earlier in musicians than in nonmusicians. Finally, musicians also showed enhanced P3b components to both tonal and segmental variations. These results clearly show that musical expertise influenced the perceptual processing as well as the categorization of linguistic contrasts in a foreign language. They show positive music-to-language transfer effects and open new perspectives for the learning of tone languages.
Rapid race perception despite individuation and accuracy goals.
Kubota, Jennifer T; Ito, Tiffany
2017-08-01
Perceivers rapidly process social category information and form stereotypic impressions of unfamiliar others. However, a goal to individuate a target or to accurately predict their behavior can result in individuated impressions. It is unknown how the combination of both accuracy and individuation goals affects perceptual category processing. To explore this, participants were given both the goal to individuate targets and accurately predict behavior. We then recorded event-related brain potentials while participants viewed photos of black and white males along with four pieces of individuating information in the form of descriptions of past behavior. Even with explicit individuation and accuracy task goals, participants rapidly differentiated targets by race within 200 ms. Importantly, this rapid categorical processing did not influence behavioral outcomes as participants made individuated predictions. These findings indicate that individuals engage in category processing even when provided with individuation and accuracy goals, but that this processing does not necessarily result in category-based judgments.
Dambacher, Michael; Hübner, Ronald; Schlösser, Jan
2011-01-01
The influence of monetary incentives on performance has been widely investigated among various disciplines. While the results reveal positive incentive effects only under specific conditions, the exact nature, and the contribution of mediating factors are largely unexplored. The present study examined influences of payoff schemes as one of these factors. In particular, we manipulated penalties for errors and slow responses in a speeded categorization task. The data show improved performance for monetary over symbolic incentives when (a) penalties are higher for slow responses than for errors, and (b) neither slow responses nor errors are punished. Conversely, payoff schemes with stronger punishment for errors than for slow responses resulted in worse performance under monetary incentives. The findings suggest that an emphasis of speed is favorable for positive influences of monetary incentives, whereas an emphasis of accuracy under time pressure has the opposite effect. PMID:21980316
Decision making in recurrent neuronal circuits.
Wang, Xiao-Jing
2008-10-23
Decision making has recently emerged as a central theme in neurophysiological studies of cognition, and experimental and computational work has led to the proposal of a cortical circuit mechanism of elemental decision computations. This mechanism depends on slow recurrent synaptic excitation balanced by fast feedback inhibition, which not only instantiates attractor states for forming categorical choices but also long transients for gradually accumulating evidence in favor of or against alternative options. Such a circuit endowed with reward-dependent synaptic plasticity is able to produce adaptive choice behavior. While decision threshold is a core concept for reaction time tasks, it can be dissociated from a general decision rule. Moreover, perceptual decisions and value-based economic choices are described within a unified framework in which probabilistic choices result from irregular neuronal activity as well as iterative interactions of a decision maker with an uncertain environment or other unpredictable decision makers in a social group.
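The circuit mechanism summarized above is biophysically detailed (slow NMDA-dominated recurrent excitation balanced by fast feedback inhibition). The snippet below is only a heavily reduced firing-rate caricature of the core idea, competition through self-excitation and mutual inhibition that integrates noisy evidence until one population reaches a categorical choice; all parameter values are arbitrary and not taken from the published model.

```python
import numpy as np

rng = np.random.default_rng(2)

def race(coherence=0.1, threshold=25.0, dt=1e-3, tau=0.1, T=2.0):
    """Reduced firing-rate caricature of a two-population decision circuit:
    each population excites itself, inhibits its competitor, and integrates
    noisy sensory evidence until one crosses the decision threshold."""
    r = np.zeros(2)                        # firing rates of the two selective populations
    w_self, w_inh = 0.6, 0.8               # recurrent excitation and cross-inhibition (arbitrary)
    inp = 15.0 * np.array([1 + coherence, 1 - coherence])
    for step in range(int(T / dt)):
        noise = 3.0 * rng.standard_normal(2)
        drive = inp + w_self * r - w_inh * r[::-1] + noise
        r += dt / tau * (-r + np.maximum(drive, 0.0))   # leaky integration, rectified
        if r.max() > threshold:
            return int(r.argmax()), (step + 1) * dt     # choice, reaction time (s)
    return None, T                                      # no decision within T

choices = [race()[0] for _ in range(200)]
print("P(choose option 0):", np.mean([c == 0 for c in choices if c is not None]))
```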
Gender and Prior Science Achievement Affect Categorization on a Procedural Learning Task
ERIC Educational Resources Information Center
Hong, Jon-Chao; Lu, Chow-Chin; Wang, Jen-Lian; Liao, Shin; Wu, Ming-Ray; Hwang, Ming-Yueh; Lin, Pei-Hsin
2013-01-01
Categorization is one of the main mental processes by which perception and conception develop. Nevertheless, categorization receives little attention in the development of critical thinking in Taiwanese elementary schools. Thus, the present study investigates the effect that individual differences have on performing categorization tasks.…
ERIC Educational Resources Information Center
Lopez, Beatriz; Leekam, Susan R.; Arts, Gerda R. J.
2008-01-01
This study aimed to test the assumption drawn from weak central coherence theory that a central cognitive mechanism is responsible for integrating information at both conceptual and perceptual levels. A visual semantic memory task and a face recognition task measuring use of holistic information were administered to 15 children with autism and 16…
The organization of perception and action in complex control skills
NASA Technical Reports Server (NTRS)
Miller, Richard A.; Jagacinski, Richard J.
1989-01-01
An attempt was made to describe the perceptual, cognitive, and action processes that account for highly skilled human performance in complex task environments. In order to study such performance in a controlled setting, a laboratory task was constructed and three experiments were performed using human subjects. A general framework was developed for describing the organization of perceptual, cognitive, and action processes.
ERIC Educational Resources Information Center
Rodenborn, Leo V., Jr.
The project's purpose was to determine whether attention to the task during testing was a confounding variable in measures of visual perception ability. Samples of 30 perceptually handicapped (PH) and 30 normal subjects (N) were randomly selected from children so classified on the Frostig DTVP, providing they had IQ scores between 85 and 115 on…
Saneyoshi, Ayako; Michimata, Chikashi
2009-12-01
Participants performed two object-matching tasks for novel, non-nameable objects consisting of geons. For each original stimulus, two transformations were applied to create comparison stimuli. In the categorical transformation, a geon connected to geon A was moved to geon B. In the coordinate transformation, a geon connected to geon A was moved to a different position on geon A. The Categorical task consisted of the original and the categorically transformed objects. The Coordinate task consisted of the original and the coordinately transformed objects. The original object was presented to the central visual field, followed by a comparison object presented to the right or left visual half-fields (RVF and LVF). The results showed an RVF advantage for the Categorical task and an LVF advantage for the Coordinate task. The possibility that categorical and coordinate spatial processing subsystems would be basic computational elements for between- and within-category object recognition was discussed.
Jensen, Greg; Terrace, Herbert
2017-01-01
Humans are highly adept at categorizing visual stimuli, but studies of human categorization are typically validated by verbal reports. This makes it difficult to perform comparative studies of categorization using non-human animals. Interpretation of comparative studies is further complicated by the possibility that animal performance may merely reflect reinforcement learning, whereby discrete features act as discriminative cues for categorization. To assess and compare how humans and monkeys classified visual stimuli, we trained 7 rhesus macaques and 41 human volunteers to respond, in a specific order, to four simultaneously presented stimuli at a time, each belonging to a different perceptual category. These exemplars were drawn at random from large banks of images, such that the stimuli presented changed on every trial. Subjects nevertheless identified and ordered these changing stimuli correctly. Three monkeys learned to order naturalistic photographs; four others, close-up sections of paintings with distinctive styles. Humans learned to order both types of stimuli. All subjects classified stimuli at levels substantially greater than that predicted by chance or by feature-driven learning alone, even when stimuli changed on every trial. However, humans more closely resembled monkeys when classifying the more abstract painting stimuli than the photographic stimuli. This points to a common classification strategy in both species, one that humans can rely on in the absence of linguistic labels for categories. PMID:28961270
Motivation alters response bias and neural activation patterns in a perceptual decision-making task.
Reckless, G E; Bolstad, I; Nakstad, P H; Andreassen, O A; Jensen, J
2013-05-15
Motivation has been demonstrated to affect individuals' response strategies in economic decision-making, however, little is known about how motivation influences perceptual decision-making behavior or its related neural activity. Given the important role motivation plays in shaping our behavior, a better understanding of this relationship is needed. A block-design, continuous performance, perceptual decision-making task where participants were asked to detect a picture of an animal among distractors was used during functional magnetic resonance imaging (fMRI). The effect of positive and negative motivation on sustained activity within regions of the brain thought to underlie decision-making was examined by altering the monetary contingency associated with the task. In addition, signal detection theory was used to investigate the effect of motivation on detection sensitivity, response bias and response time. While both positive and negative motivation resulted in increased sustained activation in the ventral striatum, fusiform gyrus, left dorsolateral prefrontal cortex (DLPFC) and ventromedial prefrontal cortex, only negative motivation resulted in the adoption of a more liberal, closer to optimal response bias. This shift toward a liberal response bias correlated with increased activation in the left DLPFC, but did not result in improved task performance. The present findings suggest that motivation alters aspects of the way perceptual decisions are made. Further, this altered response behavior is reflected in a change in left DLPFC activation, a region involved in the computation of perceptual decisions. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
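For reference, the signal detection indices referred to above, detection sensitivity and response bias, are conventionally computed as d' = z(hit rate) - z(false-alarm rate) and c = -0.5[z(hit rate) + z(false-alarm rate)]. The sketch below applies these standard formulas to hypothetical counts, not to the study's data.

```python
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Standard signal detection indices: sensitivity d' and criterion c.
    A small correction (+0.5 to each count) guards against hit or
    false-alarm rates of exactly 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))   # c < 0: liberal, c > 0: conservative
    return d_prime, criterion

# Hypothetical counts for one participant in one motivation condition:
print(sdt_indices(hits=78, misses=22, false_alarms=30, correct_rejections=70))
```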
ERIC Educational Resources Information Center
Fize, Denis; Fabre-Thorpe, Michele; Richard, Ghislaine; Doyon, Bernard; Thorpe, Simon J.
2005-01-01
Humans are fast and accurate at performing an animal categorization task with natural photographs briefly flashed centrally. Here, this central categorization task is compared to a three position task in which photographs could appear randomly either centrally, or at 3.6 [degrees] eccentricity (right or left) of the fixation point. A mild…
Racial Categorization Predicts Implicit Racial Bias in Preschool Children.
Setoh, Peipei; Lee, Kristy J J; Zhang, Lijun; Qian, Miao K; Quinn, Paul C; Heyman, Gail D; Lee, Kang
2017-06-12
This research investigated the relation between racial categorization and implicit racial bias in majority and minority children. Chinese and Indian 3- to 7-year-olds from Singapore (N = 158) categorized Chinese and Indian faces by race and had their implicit and explicit racial biases measured. Majority Chinese children, but not minority Indian children, showed implicit bias favoring own race. Regardless of ethnicity, children's racial categorization performance correlated positively with implicit racial bias. Also, Chinese children, but not Indian children, displayed explicit bias favoring own race. Furthermore, children's explicit bias was unrelated to racial categorization performance and implicit bias. The findings support a perceptual-social linkage in the emergence of implicit racial bias and have implications for designing programs to promote interracial harmony. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.
Visual texture perception via graph-based semi-supervised learning
NASA Astrophysics Data System (ADS)
Zhang, Qin; Dong, Junyu; Zhong, Guoqiang
2018-04-01
Perceptual features such as direction, contrast, and repetitiveness are important visual factors in how humans perceive a texture. However, quantifying the scale of these perceptual features requires psychophysical experiments, which demand a large amount of human labor and time. This paper focuses on the task of estimating the perceptual-feature scales of textures from a small number of textures whose scales were obtained through a rating psychophysical experiment (what we call labeled textures) together with a large set of unlabeled textures. This is a scenario for which semi-supervised learning is naturally suited. Solving it is meaningful for texture perception research and helpful for expanding perceptual texture databases. A graph-based semi-supervised learning method called random multi-graphs (RMG for short) is proposed to deal with this task. We evaluate different kinds of features, including LBP, Gabor, and unsupervised deep features extracted by a PCA-based deep network. The experimental results show that our method achieves satisfactory performance regardless of which texture features are used.
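The sketch below is not the random multi-graphs (RMG) method itself, but a minimal illustration of the underlying graph-based propagation idea: known perceptual scales are spread over a k-nearest-neighbor similarity graph to the unlabeled textures (label propagation in the style of Zhou et al., 2004). The feature vectors and scales are random placeholders, not texture data.

```python
import numpy as np

def propagate_scales(features, scales, labeled_idx, k=5, alpha=0.99, n_iter=200):
    """Graph-based semi-supervised regression: build a k-NN similarity graph over
    all items and iteratively propagate the known scales to the unlabeled ones."""
    n = len(features)
    # Gaussian similarity from pairwise squared distances, restricted to k neighbors
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (np.median(d2) + 1e-12))
    np.fill_diagonal(W, 0.0)
    keep = np.argsort(-W, axis=1)[:, :k]
    mask = np.zeros_like(W, dtype=bool)
    mask[np.arange(n)[:, None], keep] = True
    W = np.where(mask | mask.T, W, 0.0)                  # symmetric k-NN graph
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(1) + 1e-12)
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]    # normalized affinity
    y = np.zeros(n)
    y[labeled_idx] = scales[labeled_idx]
    f = y.copy()
    for _ in range(n_iter):
        f = alpha * S @ f + (1 - alpha) * y              # propagate + clamp toward labels
    return f

# Toy data: 100 'textures' with 8-D features whose true perceptual scale is a
# hidden function of the features; only 10 of them carry a rated scale.
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 8))
true_scale = X @ rng.standard_normal(8)
labeled = rng.choice(100, size=10, replace=False)
pred = propagate_scales(X, true_scale, labeled)
print("correlation with true scales:", np.corrcoef(pred, true_scale)[0, 1].round(2))
```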
Perceptual Learning via Modification of Cortical Top-Down Signals
Schäfer, Roland; Vasilaki, Eleni; Senn, Walter
2007-01-01
The primary visual cortex (V1) is pre-wired to facilitate the extraction of behaviorally important visual features. Collinear edge detectors in V1, for instance, mutually enhance each other to improve the perception of lines against a noisy background. The same pre-wiring that facilitates line extraction, however, is detrimental when subjects have to discriminate the brightness of different line segments. How is it possible to improve in one task by unsupervised practicing, without getting worse in the other task? The classical view of perceptual learning is that practicing modulates the feedforward input stream through synaptic modifications onto or within V1. However, any rewiring of V1 would deteriorate other perceptual abilities different from the trained one. We propose a general neuronal model showing that perceptual learning can modulate top-down input to V1 in a task-specific way while feedforward and lateral pathways remain intact. Consistent with biological data, the model explains how context-dependent brightness discrimination is improved by a top-down recruitment of recurrent inhibition and a top-down induced increase of the neuronal gain within V1. Both the top-down modulation of inhibition and of neuronal gain are suggested to be universal features of cortical microcircuits which enable perceptual learning. PMID:17715996
Beyond perceptual load and dilution: a review of the role of working memory in selective attention
de Fockert, Jan W.
2013-01-01
The perceptual load and dilution models differ fundamentally in terms of the proposed mechanism underlying variation in distractibility during different perceptual conditions. However, both models predict that distracting information can be processed beyond perceptual processing under certain conditions, a prediction that is well-supported by the literature. Load theory proposes that in such cases, where perceptual task aspects do not allow for sufficient attentional selectivity, the maintenance of task-relevant processing depends on cognitive control mechanisms, including working memory. The key prediction is that working memory plays a role in keeping clear processing priorities in the face of potential distraction, and the evidence reviewed and evaluated in a meta-analysis here supports this claim, by showing that the processing of distracting information tends to be enhanced when load on a concurrent task of working memory is high. Low working memory capacity is similarly associated with greater distractor processing in selective attention, again suggesting that the unavailability of working memory during selective attention leads to an increase in distractibility. Together, these findings suggest that selective attention against distractors that are processed beyond perception depends on the availability of working memory. Possible mechanisms for the effects of working memory on selective attention are discussed. PMID:23734139
Ellenbogen, Ravid; Meiran, Nachshon
2011-02-01
The backward-compatibility effect (BCE) is a major index of parallel processing in dual tasks and is related to the dependency of Task 1 performance on Task 2 response codes (Hommel, 1998). The results of four dual-task experiments showed that a BCE occurs when the stimuli of both tasks are included in the same visual object (Experiments 1 and 2) or belong to the same perceptual event (Experiments 3 and 4). Thus, the BCE may be modulated by factors that influence whether both task stimuli are included in the same perceptual event (objects, as studied in cognitive experiments, being special cases of events). As with objects, drawing attention to a (selected) event results in the processing of its irrelevant features and may interfere with task execution. (c) 2010 APA, all rights reserved.
Evidence for perceptual deficits in associative visual (prosop)agnosia: a single-case study.
Delvenne, Jean François; Seron, Xavier; Coyette, Françoise; Rossion, Bruno
2004-01-01
Associative visual agnosia is classically defined as normal visual perception stripped of its meaning [Archiv für Psychiatrie und Nervenkrankheiten 21 (1890) 22/English translation: Cognitive Neuropsychol. 5 (1988) 155]: these patients cannot access their stored visual memories to categorize objects that are nonetheless perceived correctly. However, according to an influential theory of visual agnosia [Farah, Visual Agnosia: Disorders of Object Recognition and What They Tell Us about Normal Vision, MIT Press, Cambridge, MA, 1990], visual associative agnosics necessarily present perceptual deficits that are the cause of their impairment in object recognition. Here we report a detailed investigation of a patient (NS) with bilateral occipito-temporal lesions who was strongly impaired at object and face recognition. NS shows normal drawing copy and normal performance on object and face matching tasks as used in classical neuropsychological tests. However, when tested with several computer tasks using carefully controlled visual stimuli, and taking both his accuracy rate and response times into account, NS was found to perform abnormally on high-level visual processing of objects and faces. Although presenting a different pattern of deficits than previously described in integrative agnosic patients such as HJA and LH, his deficits were characterized by an inability to integrate individual parts into a whole percept, as suggested by his failure to process structurally impossible three-dimensional (3D) objects, an absence of face inversion effects, and an advantage at detecting and matching single parts. Taken together, these observations question the idea of separate visual representations for object/face perception and object/face knowledge derived from investigations of visual associative (prosop)agnosia, and they raise methodological issues in the analysis of single-case studies of (prosop)agnosic patients.
Mills, Caitlin; Raffaelli, Quentin; Irving, Zachary C; Stan, Dylan; Christoff, Kalina
2018-02-01
Mind wandering is frequently defined as task-unrelated or perceptually decoupled thought. However, these definitions may not capture the dynamic features of a wandering mind, such as its tendency to 'move freely'. Here we test the relationship between three theoretically dissociable dimensions of thought: freedom of movement in thought, task-relatedness, and perceptual decoupling (i.e., lack of awareness of surroundings). Using everyday life experience sampling, thought probes were randomly delivered to participants' phones for ten days. Results revealed weak intra-individual correlations between freedom of movement in thought and task-unrelatedness, as well as perceptual decoupling. Within our dataset, over 40% of thoughts would have been misclassified under the assumption that off-task thought is inherently freely moving. Overall, freedom of movement appears to be an independent dimension of thought that is not captured by the two most common measures of mind wandering. Future work focusing on the dynamics of thought may be crucial for improving our understanding of the wandering mind. Copyright © 2017 Elsevier Inc. All rights reserved.
Transfer of motor and perceptual skills from basketball to darts
Rienhoff, Rebecca; Hopwood, Melissa J.; Fischer, Lennart; Strauss, Bernd; Baker, Joseph; Schorer, Jörg
2013-01-01
The quiet eye is a perceptual skill associated with expertise and superior performance; however, little is known about the transfer of quiet eye across domains. We attempted to replicate previous skill-based differences in quiet eye and investigated whether transfer of motor and perceptual skills occurs between similar tasks. Throwing accuracy and quiet eye duration for skilled and less-skilled basketball players were examined in basketball free throw shooting and the transfer task of dart throwing. Skilled basketball players showed significantly higher throwing accuracy and longer quiet eye duration in the basketball free throw task compared to their less-skilled counterparts. Further, skilled basketball players showed positive transfer from basketball to dart throwing in accuracy but not in quiet eye duration. Our results raise interesting questions regarding the measurement of transfer between skills. PMID:24062703
Not just for consumers: context effects are fundamental to decision making.
Trueblood, Jennifer S; Brown, Scott D; Heathcote, Andrew; Busemeyer, Jerome R
2013-06-01
Context effects--preference changes that depend on the availability of other options--have attracted a great deal of attention among consumer researchers studying high-level decision tasks. In the experiments reported here, we showed that these effects also arise in simple perceptual-decision-making tasks. This finding casts doubt on explanations limited to consumer choice and high-level decisions, and it indicates that context effects may be amenable to a general explanation at the level of the basic decision process. We demonstrated for the first time that three important context effects from the preferential-choice literature--similarity, attraction, and compromise effects--all occurred within a single perceptual-decision task. Not only do our results challenge previous explanations for context effects proposed by consumer researchers, but they also challenge the choice rules assumed in theories of perceptual decision making.
ERIC Educational Resources Information Center
Hicks, J.L.; Starns, J.J.
2005-01-01
We used implicit measures of memory to ascertain whether false memories for critical nonpresented items in the DRM paradigm (Deese, 1959; Roediger & McDermott, 1995) contain structural and perceptual detail. In Experiment 1, we manipulated presentation modality in a visual word-stem-completion task. Critical item priming was significant and…
A Model for the Transfer of Perceptual-Motor Skill Learning in Human Behaviors
ERIC Educational Resources Information Center
Rosalie, Simon M.; Muller, Sean
2012-01-01
This paper presents a preliminary model that outlines the mechanisms underlying the transfer of perceptual-motor skill learning in sport and everyday tasks. Perceptual-motor behavior is motivated by performance demands and evolves over time to increase the probability of success through adaptation. Performance demands at the time of an event…
ERIC Educational Resources Information Center
Ratcliff, Roger; Smith, Philip L.
2010-01-01
The authors report 9 new experiments and reanalyze 3 published experiments that investigate factors affecting the time course of perceptual processing and its effects on subsequent decision making. Stimuli in letter-discrimination and brightness-discrimination tasks were degraded with static and dynamic noise. The onset and the time course of…
Basic-level and superordinate-like categorical representations in early infancy.
Behl-Chadha, G
1996-08-01
A series of experiments using the paired-preference procedure examined 3- to 4-month-old infants' ability to form perceptually based categorical representations in the domains of natural kinds and artifacts and probed the underlying organizational structure of these representations. Experiments 1 and 2 found that infants could form categorical representations for chairs that excluded exemplars of couches, beds, and tables and also for couches that excluded exemplars of chairs, beds, and tables. Thus, the adult-like exclusivity shown by infants in the categorization of various animal pictures at the basic-level extends to the domain of artifacts as well--an ecologically significant ability given the numerous artifacts that populate the human environment. Experiments 3 and 4 examined infants' ability to form superordinate-like or global categorical representations for mammals and furniture. It was found that infants could form a global representation for mammals that included novel mammals and excluded other non-mammalian animals such as birds and fish as well as items from cross-ontological categories such as furniture. In addition, it was found that infants formed a representation for furniture that included novel categories of furniture and excluded exemplars from the cross-ontological category of mammals; however, it was less clear if infants' global representation for furniture also excluded other artifacts such as vehicles and thus the category of furniture may have been less exclusively represented. Overall, the present findings, by showing the availability of perceptually driven basic and superordinate-like representations in early infancy that closely correspond to adult conceptual categories, underscore the importance of these early representations for later conceptual representations.
More than meets the eye: context effects in word identification.
Masson, M E; Borowsky, R
1998-11-01
The influence of semantic context on word identification was examined using masked target displays. Related prime words enhanced a signal detection measure of sensitivity in making lexical decisions and in determining whether a probe word matched the target word. When line drawings were used as primes, a similar benefit was obtained with the probe task. Although these results suggest that contextual information affects perceptual encoding, this conclusion is questioned on the grounds that sensitivity in these tasks may be determined by independent contributions of perceptual and contextual information. The plausibility of this view is supported by a simulation of the experiments using a connectionist model in which perceptual and semantic information make independent contributions to word identification. The model also predicts results with two other analytic methods that have been used to argue for priming effects on perceptual encoding.
Prolonged maturation of auditory perception and learning in gerbils
Sarro, Emma C.; Sanes, Dan H.
2011-01-01
In humans, auditory perception reaches maturity over a broad age range, extending through adolescence. Despite this slow maturation, children are considered to be outstanding learners, suggesting that immature perceptual skills might actually be advantageous to improvement on an acoustic task as a result of training (perceptual learning). Previous non-human studies have not employed an identical task when comparing perceptual performance of young and mature subjects, making it difficult to assess learning. Here, we used an identical procedure on juvenile and adult gerbils to examine the perception of amplitude modulation (AM), a stimulus feature that is an important component of most natural sounds. On average, Adult animals could detect smaller fluctuations in amplitude (i.e. smaller modulation depths) than Juveniles, indicating immature perceptual skills in Juveniles. However, the population variance was much greater for Juveniles, a few animals displaying adult-like AM detection. To determine whether immature perceptual skills facilitated learning, we compared naïve performance on the AM detection task with the amount of improvement following additional training. The amount of improvement in Adults correlated with naïve performance: those with the poorest naïve performance improved the most. In contrast, the naïve performance of Juveniles did not predict the amount of learning. Those Juveniles with immature AM detection thresholds did not display greater learning than Adults. Furthermore, for several of the Juveniles with adult-like thresholds, AM detection deteriorated with repeated testing. Thus, immature perceptual skills in young animals were not associated with greater learning. PMID:20506133
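For readers unfamiliar with the stimulus dimension involved, the sketch below generates a sinusoidally amplitude-modulated tone at a chosen modulation depth, the quantity the detection thresholds above refer to; the carrier frequency, modulation rate, and depths are illustrative and not the study's parameters.

```python
# Minimal sketch of the stimulus class described above: a carrier whose
# amplitude is sinusoidally modulated at a given depth. Parameter values
# are illustrative, not those used in the gerbil study.
import numpy as np

def am_tone(carrier_hz=1000.0, mod_rate_hz=5.0, depth=0.5, dur_s=1.0, fs=44100):
    """Return a sinusoidally amplitude-modulated tone; depth in [0, 1]."""
    t = np.arange(int(dur_s * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * mod_rate_hz * t)
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

shallow = am_tone(depth=0.05)   # small fluctuation: harder to detect
deep = am_tone(depth=0.8)       # large fluctuation: easy to detect
```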
Schizotypal Perceptual Aberrations of Time: Correlation between Score, Behavior and Brain Activity
Arzy, Shahar; Mohr, Christine; Molnar-Szakacs, Istvan; Blanke, Olaf
2011-01-01
A fundamental trait of the human self is its continuous experience of space and time. Perceptual aberrations of this spatial and temporal continuity are a major characteristic of schizophrenia spectrum disturbances – including schizophrenia, schizotypal personality disorder and schizotypy. We have previously found the classical Perceptual Aberration Scale (PAS) scores, related to body and space, to be positively correlated with both behavior and temporo-parietal activation in healthy participants performing a task involving self-projection in space. However, not much is known about the relationship between temporal perceptual aberration, behavior and brain activity. To this end, we composed a temporal Perceptual Aberration Scale (tPAS) similar to the traditional PAS. Testing on 170 participants suggested similar performance for PAS and tPAS. We then correlated tPAS and PAS scores with participants' performance and neural activity in a task of self-projection in time. tPAS scores correlated positively with reaction times across task conditions, as did PAS scores. Evoked potential mapping and electrical neuroimaging showed self-projection in time to recruit a network of brain regions at the left anterior temporal cortex, right temporo-parietal junction, and occipito-temporal cortex, and duration of activation in this network positively correlated with tPAS and PAS scores. These data demonstrate that schizotypal perceptual aberrations of both time and space, as reflected by tPAS and PAS scores, are positively correlated with performance and brain activation during self-projection in time in healthy individuals along the schizophrenia spectrum. PMID:21267456
Wiese, Holger; Schweinberger, Stefan R; Neumann, Markus F
2008-11-01
We used repetition priming to investigate implicit and explicit processes of unfamiliar face categorization. During prime and test phases, participants categorized unfamiliar faces according to either age or gender. Faces presented at test were either new or primed in a task-congruent (same task during priming and test) or incongruent (different tasks) condition. During age categorization, reaction times revealed significant priming for both priming conditions, and event-related potentials yielded an increased N170 over the left hemisphere as a result of priming. During gender categorization, congruent faces elicited priming and a latency decrease in the right N170. Accordingly, information about age is extracted irrespective of processing demands, and priming facilitates the extraction of feature information reflected in the left N170 effect. By contrast, priming of gender categorization may depend on whether the task at initial presentation requires configural processing.
Perceptual Load-Dependent Neural Correlates of Distractor Interference Inhibition
Xu, Jiansong; Monterosso, John; Kober, Hedy; Balodis, Iris M.; Potenza, Marc N.
2011-01-01
Background The load theory of selective attention hypothesizes that distractor interference is suppressed after perceptual processing (i.e., in the later stage of central processing) at low perceptual load of the central task, but in the early stage of perceptual processing at high perceptual load. Consistently, studies on the neural correlates of attention have found a smaller distractor-related activation in the sensory cortex at high relative to low perceptual load. However, it is not clear whether the distractor-related activation in brain regions linked to later stages of central processing (e.g., in the frontostriatal circuits) is also smaller at high rather than low perceptual load, as might be predicted based on the load theory. Methodology/Principal Findings We studied 24 healthy participants using functional magnetic resonance imaging (fMRI) during a visual target identification task with two perceptual loads (low vs. high). Participants showed distractor-related increases in activation in the midbrain, striatum, occipital and medial and lateral prefrontal cortices at low load, but distractor-related decreases in activation in the midbrain ventral tegmental area and substantia nigra (VTA/SN), striatum, thalamus, and extensive sensory cortices at high load. Conclusions Multiple levels of central processing involving midbrain and frontostriatal circuits participate in suppressing distractor interference at either low or high perceptual load. For suppressing distractor interference, the processing of sensory inputs in both early and late stages of central processing are enhanced at low load but inhibited at high load. PMID:21267080
Perceptual learning in children with visual impairment improves near visual acuity.
Huurneman, Bianca; Boonstra, F Nienke; Cox, Ralf F A; van Rens, Ger; Cillessen, Antonius H N
2013-09-17
This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. Participants were 45 children with visual impairment and 29 children with normal vision. Children with visual impairment were divided into three groups: a magnifier group (n = 12), a crowded perceptual learning group (n = 18), and an uncrowded perceptual learning group (n = 15). Children with normal vision were also divided into three groups, but were measured only at baseline. Dependent variables were single near visual acuity (NVA), crowded NVA, LH line 50% crowding NVA, number of trials, accuracy, performance time, number of small errors, and number of large errors. Children with visual impairment trained for six weeks, twice per week, for 30 minutes per session (12 training sessions). After training, children showed significant improvement of NVA in addition to specific improvements on the training task. The crowded perceptual learning group showed the largest acuity improvements (1.7 logMAR lines on the crowded chart, P < 0.001). Only the children in the crowded perceptual learning group showed improvements on all NVA charts. Children with visual impairment benefit from perceptual training. While task-specific improvements were observed in all training groups, transfer to crowded NVA was largest in the crowded perceptual learning group. To our knowledge, this is the first study to provide evidence for the improvement of NVA by perceptual learning in children with visual impairment. (http://www.trialregister.nl number, NTR2537.)
The World Hypotheses: Implications for Intercultural Communication Research.
ERIC Educational Resources Information Center
Ting-Toomey, Stella
The "sense making" process structures humans' categorization, perceptual, and expressive processes. These "sense making" references are ultimately derived from four distinct "root metaphors": mechanism or mechanistic thinking (machine), formism or formistic thinking (similarity), organicism or organistic thinking (organic process), and…
Direct real-time neural evidence for task-set inertia.
Evans, Lisa H; Herron, Jane E; Wilding, Edward L
2015-03-01
One influential explanation for the costs incurred when switching between tasks is that they reflect interference arising from completing the previous task-known as task-set inertia. We report a novel approach for assessing task-set inertia in a memory experiment using event-related potentials (ERPs). After a study phase, participants completed a test block in which they switched between a memory task (retrieving information from the study phase) and a perceptual task. These tasks alternated every two trials. An ERP index of the retrieval of study information was evident in the memory task. It was also present on the first trial of the perceptual task but was markedly attenuated on the second. Moreover, this task-irrelevant ERP activity was positively correlated with a behavioral cost associated with switching between tasks. This real-time measure of neural activity thus provides direct evidence of task-set inertia, its duration, and the functional role it plays in switch costs. © The Author(s) 2015.
Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania
2014-12-01
A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children and adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as a comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level than the control group with regard to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not rule out an underlying deficit in some of the socio-perceptual tasks for some patients. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies need to be designed to evaluate the results of appropriate re-education programs in children presenting with deficits in social cue processing.
Perceptual Load Affects Eyewitness Accuracy and Susceptibility to Leading Questions.
Murphy, Gillian; Greene, Ciara M
2016-01-01
Load Theory (Lavie, 1995, 2005) states that the level of perceptual load in a task (i.e., the amount of information involved in processing task-relevant stimuli) determines the efficiency of selective attention. There is evidence that perceptual load affects distractor processing, with increased inattentional blindness under high load. Given that high load can result in individuals failing to report seeing obvious objects, it is conceivable that load may also impair memory for the scene. The current study is the first to assess the effect of perceptual load on eyewitness memory. Across three experiments (two video-based and one in a driving simulator), the effect of perceptual load on eyewitness memory was assessed. The results showed that eyewitnesses were less accurate under high load, in particular for peripheral details. For example, memory for the central character in the video was not affected by load but memory for a witness who passed by the window at the edge of the scene was significantly worse under high load. High load memories were also more open to suggestion, showing increased susceptibility to leading questions. High visual perceptual load also affected recall for auditory information, illustrating a possible cross-modal perceptual load effect on memory accuracy. These results have implications for eyewitness memory researchers and forensic professionals.
Zooming in on the cause of the perceptual load effect in the go/no-go paradigm.
Chen, Zhe; Cave, Kyle R
2016-08-01
Perceptual load theory (Lavie, 2005) claims that attentional capacity that is not used for the current task is allocated to irrelevant distractors. It predicts that if the attentional demands of the current task are high, distractor interference will be low. One particularly powerful demonstration of perceptual load effects on distractor processing relies on a go/no-go cue that is interpreted by either simple feature detection or feature conjunction (Lavie, 1995). However, a possible alternative interpretation of these effects is that the differential degree of distractor processing is caused by how broadly attention is allocated (attentional zoom) rather than to perceptual load. In 4 experiments, we show that when stimuli are arranged to equalize the extent of spatial attention across conditions, distractor interference varies little whether cues are defined by a simple feature or a conjunction, and that the typical perceptual load effect emerges only when attentional zoom can covary with perceptual load. These results suggest that attentional zoom can account for the differential degree of distractor processing traditionally attributed to perceptual load in the go/no-go paradigm. They also provide new insight into how different factors interact to control distractor interference. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Pietraszewski, David; Cosmides, Leda; Tooby, John
2014-01-01
Humans in all societies form and participate in cooperative alliances. To successfully navigate an alliance-laced world, the human mind needs to detect new coalitions and alliances as they emerge, and predict which of many potential alliance categories are currently organizing an interaction. We propose that evolution has equipped the mind with cognitive machinery that is specialized for performing these functions: an alliance detection system. In this view, racial categories do not exist because skin color is perceptually salient; they are constructed and regulated by the alliance system in environments where race predicts social alliances and divisions. Early tests using adversarial alliances showed that the mind spontaneously detects which individuals are cooperating against a common enemy, implicitly assigning people to rival alliance categories based on patterns of cooperation and competition. But is social antagonism necessary to trigger the categorization of people by alliance--that is, do we cognitively link A and B into an alliance category only because they are jointly in conflict with C and D? We report new studies demonstrating that peaceful cooperation can trigger the detection of new coalitional alliances and make race fade in relevance. Alliances did not need to be marked by team colors or other perceptually salient cues. When race did not predict the ongoing alliance structure, behavioral cues about cooperative activities up-regulated categorization by coalition and down-regulated categorization by race, sometimes eliminating it. Alliance cues that sensitively regulated categorization by coalition and race had no effect on categorization by sex, eliminating many alternative explanations for the results. The results support the hypothesis that categorizing people by their race is a reversible product of a cognitive system specialized for detecting alliance categories and regulating their use. Common enemies are not necessary to erase important social boundaries; peaceful cooperation can have the same effect.
Schertz, Jessamyn; Cho, Taehong; Lotto, Andrew; Warner, Natasha
2015-01-01
The current work examines native Korean speakers’ perception and production of stop contrasts in their native language (L1, Korean) and second language (L2, English), focusing on three acoustic dimensions that are all used, albeit to different extents, in both languages: voice onset time (VOT), f0 at vowel onset, and closure duration. Participants used all three cues to distinguish the L1 Korean three-way stop distinction in both production and perception. Speakers’ productions of the L2 English contrasts were reliably distinguished using both VOT and f0 (even though f0 is only a very weak cue to the English contrast), and, to a lesser extent, closure duration. In contrast to the relative homogeneity of the L2 productions, group patterns on a forced-choice perception task were less clear-cut, due to considerable individual differences in perceptual categorization strategies, with listeners using either primarily VOT duration, primarily f0, or both dimensions equally to distinguish the L2 English contrast. Differences in perception, which were stable across experimental sessions, were not predicted by individual variation in production patterns. This work suggests that reliance on multiple cues in representation of a phonetic contrast can form the basis for distinct individual cue-weighting strategies in phonetic categorization. PMID:26644630
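One common way to quantify the kind of individual cue-weighting strategies described above is to regress forced-choice responses on the standardized acoustic dimensions and compare the resulting coefficients. The sketch below illustrates that approach with simulated data; it is not the study's actual analysis pipeline, and the simulated listener is hypothetical.

```python
# Sketch: estimating perceptual cue weights (VOT vs. onset f0) from simulated
# forced-choice categorization responses via logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 400
vot = rng.uniform(0, 80, n)        # voice onset time (ms)
f0 = rng.uniform(180, 280, n)      # f0 at vowel onset (Hz)

# Hypothetical listener who relies mostly on VOT and only weakly on f0.
p_voiceless = 1 / (1 + np.exp(-(0.15 * (vot - 40) + 0.01 * (f0 - 230))))
response = rng.random(n) < p_voiceless

X = StandardScaler().fit_transform(np.column_stack([vot, f0]))
model = LogisticRegression().fit(X, response)
w_vot, w_f0 = model.coef_[0]
print(f"relative cue weights: VOT = {abs(w_vot):.2f}, f0 = {abs(w_f0):.2f}")
```

Listeners who weight f0 more heavily than VOT would simply show the reverse ordering of the two standardized coefficients.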
Fuggetta, Giorgio; Rizzo, Silvia; Pobric, Gorana; Lavidor, Michal; Walsh, Vincent
2009-02-01
Transcranial magnetic stimulation (TMS) over the left hemisphere has been shown to disrupt semantic processing but, to date, there has been no direct demonstration of the electrophysiological correlates of this interference. To gain insight into the neural basis of semantic systems, and in particular, study the temporal and functional organization of object categorization processing, we combined repetitive TMS (rTMS) and ERPs. Healthy volunteers performed a picture-word matching task in which Snodgrass drawings of natural (e.g., animal) and artifactual (e.g., tool) categories were associated with a word. When short trains of high-frequency rTMS were applied over Wernicke's area (in the region of the CP5 electrode) immediately before the stimulus onset, we observed delayed response times to artifactual items, and thus, an increased dissociation between natural and artifactual domains. This behavioral effect had a direct ERP correlate. In the response period, the stimuli from the natural domain elicited a significant larger late positivity complex than those from the artifactual domain. These differences were significant over the centro-parietal region of the right hemisphere. These findings demonstrate that rTMS interferes with post-perceptual categorization processing of natural and artifactual stimuli that involve separate subsystems in distinct cortical areas.
Perceptual-motor regulation in locomotor pointing while approaching a curb.
Andel, Steven van; Cole, Michael H; Pepping, Gert-Jan
2018-02-01
Locomotor pointing is a task that has been the focus of research in the context of sport (e.g. long jumping and cricket) as well as normal walking. Collectively, these studies have produced a broad understanding of locomotor pointing, but generalizability has been limited to laboratory-type tasks and/or tasks with high spatial demands. The current study aimed to generalize previous findings in locomotor pointing to the common daily task of approaching and stepping onto a curb. Sixteen people completed 33 repetitions of a task that required them to walk up to and step onto a curb. Information about their foot placement was collected using a combination of measures derived from a pressure-sensitive walkway and video data. Variables related to perceptual-motor regulation were analyzed at the inter-trial, intra-step and inter-step level. Consistent with previous studies, analysis of the foot placements showed that variability in foot placement decreased as the participants drew closer to the curb. Regulation seemed to be initiated earlier in this study than in previous studies, as shown by a decrease in foot placement variability as early as eight steps before reaching the curb. Furthermore, when walking up to the curb, most people regulated their gait so as to minimize variability in the foot placement on top of the curb, rather than in the placement in front of the curb. Combined, these results showed a strong perceptual-motor coupling in the task of approaching and stepping up a curb, rendering this task a suitable test for perceptual-motor regulation in walking. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Casali, J. G.; Wierwille, W. W.
1984-01-01
A flight simulator-based study was conducted to examine fourteen distinct mental workload estimation measures, including opinion, secondary task, physiological, and primary task measures. Both the relative sensitivity of the measures to changes in mental workload and the differential intrusion of the changes on primary task performance were assessed. The flight task was varied in difficulty by manipulation of the presentation rate and complexity of a hazard-perception task that required each of 48 licensed pilots to rely heavily on their perceptual abilities. Three rating scales (Modified Cooper-Harper, Multi-descriptor, and Workload-Compensation-Interference/Technical Effectiveness), two secondary task measures (time estimation and tapping regularity), one physiological measure (respiration frequency), and one primary task measure (danger-condition response time) were reliable indicants of workload changes. Recommendations for applying the workload measures are presented.
Cerebellar tDCS Does Not Enhance Performance in an Implicit Categorization Learning Task.
Verhage, Marie C; Avila, Eric O; Frens, Maarten A; Donchin, Opher; van der Geest, Jos N
2017-01-01
Background: Transcranial Direct Current Stimulation (tDCS) is a form of non-invasive electrical stimulation that changes neuronal excitability in a polarity- and site-specific manner. In cognitive tasks related to prefrontal and cerebellar learning, cortical tDCS arguably facilitates learning, but the few studies investigating cerebellar tDCS are inconsistent. Objective: We investigate the effect of cerebellar tDCS on performance of an implicit categorization learning task. Methods: Forty participants performed a computerized version of an implicit categorization learning task where squares had to be sorted into two categories, according to an unknown but fixed rule that integrated both the size and luminance of the square. Participants did one round of categorization to familiarize themselves with the task and to provide a baseline of performance. After that, 20 participants received anodal tDCS (20 min, 1.5 mA) over the right cerebellum, and 19 participants received sham stimulation, and both groups simultaneously started a second session of the categorization task using a new rule. Results: As expected, subjects performed better in the second session than in the first, baseline session, showing increased accuracy scores and reduced reaction times. Over trials, participants learned the categorization rule, improving their accuracy and reaction times. However, we observed no effect of anodal tDCS on overall performance or on learning, compared to sham stimulation. Conclusion: These results suggest that cerebellar tDCS does not modulate performance and learning on an implicit categorization task.
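A minimal sketch of an information-integration category rule of the kind described in the Methods: membership depends on a weighted combination of size and luminance, so neither dimension alone suffices. The weights and boundary below are hypothetical, not the rule used in the study.

```python
# Sketch of a two-dimensional information-integration category rule:
# z-score both stimulus dimensions and classify by their (equally weighted) sum.
# The specific boundary is hypothetical.
import numpy as np

rng = np.random.default_rng(2)
size = rng.uniform(20, 100, 200)       # square size, arbitrary units
luminance = rng.uniform(0, 1, 200)     # normalized luminance

def zscore(x):
    return (x - x.mean()) / x.std()

category = (zscore(size) + zscore(luminance) > 0).astype(int)  # 1 = "A", 0 = "B"
print("proportion of category A stimuli:", category.mean())
```

Because the boundary is diagonal in the size-luminance space, a learner cannot succeed by attending to either dimension in isolation, which is what makes the rule implicit and integration-based.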
Interference from familiar natural distractors is not eliminated by high perceptual load.
He, Chunhong; Chen, Antao
2010-05-01
A crucial prediction of perceptual load theory is that high perceptual load can eliminate interference from distractors. However, Lavie et al. (Psychol Sci 14:510-515, 2003) found that high perceptual load did not eliminate interference when the distractor was a face. The current experiments examined the interaction between familiarity and perceptual load in modulating interference in a name search task. The data reveal that high perceptual load eliminated the interference effect for unfamiliar distractors that were faces or objects, but did not eliminate the interference for familiar distractors that were faces or objects. Based on these results, we proposed that the processing of familiar and natural stimuli may be immune to the effect of perceptual load.
Perceptual switch rates with ambiguous structure-from-motion figures in bipolar disorder.
Krug, Kristine; Brunskill, Emma; Scarna, Antonina; Goodwin, Guy M; Parker, Andrew J
2008-08-22
Slowing of the rate at which a rivalrous percept switches from one configuration to another has been suggested as a potential trait marker for bipolar disorder. We measured perceptual alternations for a bistable, rotating, structure-from-motion cylinder in bipolar and control participants. In a control task, binocular depth rendered the direction of cylinder rotation unambiguous to monitor participants' performance and attention during the experimental task. A particular direction of rotation was perceptually stable, on average, for 33.5s in participants without psychiatric diagnosis. Euthymic, bipolar participants showed a slightly slower rate of switching between the two percepts (percept duration 42.3s). Under a parametric analysis of the best-fitting model for individual participants, this difference was statistically significant. However, the variability within groups was high, so this difference in average switch rates was not big enough to serve as a trait marker for bipolar disorder. We also found that low-level visual capacities, such as stereo threshold, influence perceptual switch rates. We suggest that there is no single brain location responsible for perceptual switching in all different ambiguous figures and that perceptual switching is generated by the actions of local cortical circuitry.
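Percept durations in bistable perception are commonly summarized with a parametric fit to each participant's distribution of dominance durations, often a gamma distribution. The sketch below illustrates that kind of fit on simulated durations; it is not necessarily the model used in the study, and the simulated means merely echo the group averages reported above.

```python
# Sketch: fitting a gamma distribution to simulated percept (dominance)
# durations per group and comparing the fitted means. Illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
control_durations = rng.gamma(shape=2.0, scale=17.0, size=60)   # ~34 s mean
bipolar_durations = rng.gamma(shape=2.0, scale=21.0, size=60)   # ~42 s mean

for label, durations in [("control", control_durations), ("bipolar", bipolar_durations)]:
    shape, loc, scale = stats.gamma.fit(durations, floc=0)
    print(f"{label}: fitted mean percept duration ~ {shape * scale:.1f} s")
```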
A study of perceptual and verbal skills of disabled readers in grades 4, 5 and 6.
Solan, H A; Ficarra, A P
1990-08-01
This investigation addresses the role of the optometrist in diagnosing and treating children in grades 4, 5, and 6 who have been identified as reading disabled. Fifty-one subjects with average intelligence, but whose reading comprehension skills were below the 31st percentile (mean, 20th percentile), were evaluated using verbal and perceptual tests. When the performance of this experimental group was compared with the mean scores from standardized test norms for each of the various tasks, the disabled readers scored significantly lower in seven of the eight perceptual and five of the six verbal tasks. These results lend support to the hypothesis that both perceptual and verbal deficits are related to reading comprehension. In a stepwise multiple correlation analysis, three perceptual factors (eye movements, the Auditory-Visual Integration Test (AVIT), and the grooved pegboard) contributed 38 percent of the variance, whereas the addition of two verbal factors (digit span and the token test) provided just 2 percent. That is, 38 percent of the variation in reading comprehension could be accounted for by variation in perceptual skills in the disabled readers. The results were interpreted in terms of spatial-simultaneous and verbal-successive processing skills.
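The variance-partitioning logic reported above (perceptual predictors entered first, verbal predictors added second) can be illustrated by comparing R² for nested regression models, as in the sketch below; the data and predictor names are simulated placeholders, not the study's measures.

```python
# Sketch: incremental variance explained by adding a second block of predictors.
# All data are simulated; the column names are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 51
perceptual = rng.normal(size=(n, 3))   # stand-ins for eye movements, AVIT, pegboard
verbal = rng.normal(size=(n, 2))       # stand-ins for digit span, token test
reading = perceptual @ np.array([0.5, 0.4, 0.3]) + 0.1 * verbal[:, 0] \
          + rng.normal(scale=1.0, size=n)

r2_perceptual = LinearRegression().fit(perceptual, reading).score(perceptual, reading)
both = np.hstack([perceptual, verbal])
r2_both = LinearRegression().fit(both, reading).score(both, reading)
print(f"R^2 perceptual only: {r2_perceptual:.2f}; "
      f"added by verbal block: {r2_both - r2_perceptual:.2f}")
```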
Production-perception relationships during speech development
NASA Astrophysics Data System (ADS)
Menard, Lucie; Schwartz, Jean-Luc; Boe, Louis-Jean; Aubin, Jerome
2005-04-01
It has been shown that nonuniform growth of the supraglottal cavities, motor control development, and perceptual refinement shape the vowel systems during speech development. In this talk, we propose to investigate the role of perceptual constraints as a guide to the speaker's task from birth to adulthood. Simulations with an articulatory-to-acoustic model, acoustic analyses of natural vowels, and results of perceptual tests provide evidence that the production-perception relationships evolve with age. At the perceptual level, results show that (i) linear combinations of spectral peaks are good predictors of vowel targets, and (ii) focalization, defined as an acoustic pattern with close neighboring formants [J.-L. Schwartz, L.-J. Boe, N. Vallee, and C. Abry, J. Phonetics 25, 255-286 (1997)], is part of the speech task. At the production level, we propose that (i) frequently produced vowels in the baby's early sound inventory can in part be explained by perceptual templates, and (ii) the achievement of these perceptual templates may require adaptive articulatory strategies for the child, compared with adults, to cope with morphological differences. Results are discussed in the light of a perception-for-action-control theory. [Work supported by the Social Sciences and Humanities Research Council of Canada.]
Recalibration in functional perceptual-motor tasks: A systematic review.
Brand, Milou Tessa; de Oliveira, Rita Ferraz
2017-12-01
Skilled actions are the result of a perceptual-motor system being well-calibrated to the appropriate information variables. Changes to the perceptual or motor system initiate recalibration, which is the rescaling of the perceptual-motor system to informational variables. For example, a professional baseball player may need to rescale their throws due to fatigue. The aim of this systematic review is to analyse how recalibration can be, and has been, measured and also to evaluate the literature on recalibration. Five databases were systematically screened to identify literature that reported experiments where a disturbance was applied to the perceptual-motor system in functional perceptual-motor tasks. Each of the 91 experiments reported the immediate effects of a disturbance and/or the effects of removing that disturbance after recalibration. The results showed that experiments applied disturbances to either perception or action, and used either direct or indirect measures of recalibration. In contrast with previous conclusions, active exploration was only sufficient for fast recalibration when the relevant information source was available. Further research into recalibration mechanisms should include the study of information sources as well as skill expertise. Copyright © 2017 Elsevier B.V. All rights reserved.
Classification images reveal decision variables and strategies in forced choice tasks
Pritchett, Lisa M.; Murray, Richard F.
2015-01-01
Despite decades of research, there is still uncertainty about how people make simple decisions about perceptual stimuli. Most theories assume that perceptual decisions are based on decision variables, which are internal variables that encode task-relevant information. However, decision variables are usually considered to be theoretical constructs that cannot be measured directly, and this often makes it difficult to test theories of perceptual decision making. Here we show how to measure decision variables on individual trials, and we use these measurements to test theories of perceptual decision making more directly than has previously been possible. We measure classification images, which are estimates of templates that observers use to extract information from stimuli. We then calculate the dot product of these classification images with the stimuli to estimate observers' decision variables. Finally, we reconstruct each observer's “decision space,” a map that shows the probability of the observer’s responses for all values of the decision variables. We use this method to examine decision strategies in two-alternative forced choice (2AFC) tasks, for which there are several competing models. In one experiment, the resulting decision spaces support the difference model, a classic theory of 2AFC decisions. In a second experiment, we find unexpected decision spaces that are not predicted by standard models of 2AFC decisions, and that suggest intrinsic uncertainty or soft thresholding. These experiments give new evidence regarding observers’ strategies in 2AFC tasks, and they show how measuring decision variables can answer long-standing questions about perceptual decision making. PMID:26015584
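The pipeline described above can be sketched in a few lines: estimate a classification image from the noise fields sorted by the observer's responses, then take its dot product with each stimulus to recover a trial-by-trial decision variable. The simulation below uses a simplified single-interval detection setup rather than the 2AFC design of the experiments, and the template, noise, and responses are all synthetic.

```python
# Sketch of the classification-image / decision-variable logic described above,
# in a simplified single-interval setting with a simulated linear observer.
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_pix = 5000, 64
signal = np.zeros(n_pix)
signal[28:36] = 1.0                               # simple target profile
template = signal / np.linalg.norm(signal)        # observer's internal template

noise = rng.normal(scale=1.0, size=(n_trials, n_pix))
present = rng.random(n_trials) < 0.5
stimuli = noise + np.where(present[:, None], 0.6 * signal, 0.0)

decision_var = stimuli @ template                 # dot-product decision variable
responses = decision_var > np.median(decision_var)  # simulated "yes" responses

# Classification image: mean noise on "yes" trials minus mean noise on "no" trials.
class_image = noise[responses].mean(0) - noise[~responses].mean(0)
estimated_dv = stimuli @ (class_image / np.linalg.norm(class_image))
print("correlation with true decision variable:",
      np.corrcoef(decision_var, estimated_dv)[0, 1])
```

In the 2AFC case, the same dot-product step is applied to both stimuli on each trial, and plotting response probability against the two resulting decision variables yields the "decision space" the authors describe.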
Dalla Bella, Simone; Sowiński, Jakub
2015-03-16
A set of behavioral tasks for assessing perceptual and sensorimotor timing abilities in the general population (i.e., non-musicians) is presented here with the goal of uncovering rhythm disorders, such as beat deafness. Beat deafness is characterized by poor performance in perceiving durations in auditory rhythmic patterns or poor synchronization of movement with auditory rhythms (e.g., with musical beats). These tasks include the synchronization of finger tapping to the beat of simple and complex auditory stimuli and the detection of rhythmic irregularities (anisochrony detection task) embedded in the same stimuli. These tests, which are easy to administer, include an assessment of both perceptual and sensorimotor timing abilities under different conditions (e.g., beat rates and types of auditory material) and are based on the same auditory stimuli, ranging from a simple metronome to a complex musical excerpt. The analysis of synchronized tapping data is performed with circular statistics, which provide reliable measures of synchronization accuracy (e.g., the difference between the timing of the taps and the timing of the pacing stimuli) and consistency. Circular statistics on tapping data are particularly well-suited for detecting individual differences in the general population. Synchronized tapping and anisochrony detection are sensitive measures for identifying profiles of rhythm disorders and have been used with success to uncover cases of poor synchronization with spared perceptual timing. This systematic assessment of perceptual and sensorimotor timing can be extended to populations of patients with brain damage, neurodegenerative diseases (e.g., Parkinson's disease), and developmental disorders (e.g., Attention Deficit Hyperactivity Disorder).
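The circular statistics referred to above reduce to two quantities computed from tap-to-beat relative phases: the length of the mean resultant vector (synchronization consistency) and its angle (mean asynchrony, i.e., accuracy). The sketch below shows the computation on simulated tap times; the metronome rate and tap variability are illustrative.

```python
# Minimal sketch: circular statistics for synchronized tapping.
# Consistency = length of the mean resultant vector; accuracy = its angle.
import numpy as np

ioi = 0.6                                    # metronome inter-onset interval (s)
beats = np.arange(20) * ioi
rng = np.random.default_rng(6)
taps = beats + rng.normal(loc=-0.03, scale=0.02, size=beats.size)  # slight anticipation

phases = 2 * np.pi * ((taps - beats) / ioi)           # relative phase of each tap (radians)
R = np.abs(np.mean(np.exp(1j * phases)))              # consistency: 1 = perfectly regular
mean_phase = np.angle(np.mean(np.exp(1j * phases)))   # signed mean asynchrony (radians)
print(f"consistency R = {R:.2f}, "
      f"mean asynchrony = {mean_phase / (2 * np.pi) * ioi * 1000:.0f} ms")
```

Low R with an otherwise normal anisochrony-detection score is the profile of poor synchronization with spared perceptual timing mentioned above.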
Dimension-Based Statistical Learning Affects Both Speech Perception and Production
ERIC Educational Resources Information Center
Lehet, Matthew; Holt, Lori L.
2017-01-01
Multiple acoustic dimensions signal speech categories. However, dimensions vary in their informativeness; some are more diagnostic of category membership than others. Speech categorization reflects these dimensional regularities such that diagnostic dimensions carry more "perceptual weight" and more effectively signal category membership…
Quantifying transfer after perceptual-motor sequence learning: how inflexible is implicit learning?
Sanchez, Daniel J; Yarnik, Eric N; Reber, Paul J
2015-03-01
Studies of implicit perceptual-motor sequence learning have often shown learning to be inflexibly tied to the training conditions during learning. Since sequence learning is seen as a model task of skill acquisition, limits on the ability to transfer knowledge from the training context to a performance context indicates important constraints on skill learning approaches. Lack of transfer across contexts has been demonstrated by showing that when task elements are changed following training, this leads to a disruption in performance. These results have typically been taken as suggesting that the sequence knowledge relies on integrated representations across task elements (Abrahamse, Jiménez, Verwey, & Clegg, Psychon Bull Rev 17:603-623, 2010a). Using a relatively new sequence learning task, serial interception sequence learning, three experiments are reported that quantify this magnitude of performance disruption after selectively manipulating individual aspects of motor performance or perceptual information. In Experiment 1, selective disruption of the timing or order of sequential actions was examined using a novel response manipulandum that allowed for separate analysis of these two motor response components. In Experiments 2 and 3, transfer was examined after selective disruption of perceptual information that left the motor response sequence intact. All three experiments provided quantifiable estimates of partial transfer to novel contexts that suggest some level of information integration across task elements. However, the ability to identify quantifiable levels of successful transfer indicates that integration is not all-or-none and that measurement sensitivity is a key in understanding sequence knowledge representations.
Evidence of different underlying processes in pattern recall and decision-making.
Gorman, Adam D; Abernethy, Bruce; Farrow, Damian
2015-01-01
The visual search characteristics of expert and novice basketball players were recorded during pattern recall and decision-making tasks to determine whether the two tasks shared common visual-perceptual processing strategies. The order in which participants entered the pattern elements in the recall task was also analysed to further examine the nature of the visual-perceptual strategies and the relative emphasis placed upon particular pattern features. The experts demonstrated superior performance across the recall and decision-making tasks [see also Gorman, A. D., Abernethy, B., & Farrow, D. (2012). Classical pattern recall tests and the prospective nature of expert performance. The Quarterly Journal of Experimental Psychology, 65, 1151-1160; Gorman, A. D., Abernethy, B., & Farrow, D. (2013a). Is the relationship between pattern recall and decision-making influenced by anticipatory recall? The Quarterly Journal of Experimental Psychology, 66, 2219-2236], but a number of significant differences in the visual search data highlighted disparities in the processing strategies, suggesting that recall skill may utilize different underlying visual-perceptual processes than those required for accurate decision-making performance in the natural setting. Performance on the recall task was characterized by a proximal-to-distal order of entry of the pattern elements, with participants tending to enter the players located closest to the ball carrier earlier than those located more distal to the ball carrier. The results provide further evidence of the underlying perceptual processes employed by experts when extracting visual information from complex and dynamic patterns.
Linguistic and Perceptual Mapping in Spatial Representations: An Attentional Account.
Valdés-Conroy, Berenice; Hinojosa, José A; Román, Francisco J; Romero-Ferreiro, Verónica
2018-03-01
Building on evidence for embodied representations, we investigated whether Spanish spatial terms map onto the NEAR/FAR perceptual division of space. Using a long horizontal display, we measured congruency effects during the processing of spatial terms presented in NEAR or FAR space. Across three experiments, we manipulated the task demands in order to investigate the role of endogenous attention in linguistic and perceptual space mapping. We predicted congruency effects only when spatial properties were relevant for the task (reaching estimation task, Experiment 1) but not when attention was allocated to other features (lexical decision, Experiment 2; and color, Experiment 3). Results showed faster responses for words presented in NEAR space in all experiments. Consistent with our hypothesis, congruency effects were observed only when a reaching estimate was requested. Our results add important evidence for the role of top-down processing in congruency effects from embodied representations of spatial terms. Copyright © 2017 Cognitive Science Society, Inc.
Infant perceptual development for faces and spoken words: An integrated approach
Watson, Tamara L; Robbins, Rachel A; Best, Catherine T
2014-01-01
There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception. PMID:25132626
Eustache, F; Desgranges, B; Petit-Taboué, M C; de la Sayette, V; Piot, V; Sablé, C; Marchal, G; Baron, J C
1997-09-01
The aim was to assess explicit memory and two components of implicit memory--that is, perceptual-verbal skill learning and lexical-semantic priming effects--as well as resting cerebral blood flow (CBF) and oxygen metabolism (CMRO2) during the acute phase of transient global amnesia. In a 59-year-old woman, whose amnestic episode fulfilled all current criteria for transient global amnesia, a neuropsychological protocol was administered, including word learning, story recall, categorical fluency, mirror reading, and word stem completion tasks. PET was performed using the (15)O steady-state inhalation method while the patient still exhibited severe anterograde amnesia and was interleaved with the cognitive tests. There was a clear-cut dissociation between impaired long-term episodic memory and preserved implicit memory for both of its components. Categorical fluency was significantly altered, suggesting impairment of word retrieval strategy rather than of semantic memory. The PET study disclosed a reduced CMRO2 with relatively or fully preserved CBF in the left prefrontotemporal cortex and lentiform nucleus, and the reverse pattern over the left occipital cortex. The PET alterations, with patchy CBF-CMRO2 uncoupling, would be compatible with a migraine-like phenomenon and indicate that the isolated assessment of perfusion in transient global amnesia may be misleading. The pattern of metabolic depression, with sparing of the hippocampal area, is one among the distinct patterns of brain dysfunction that underlie the (apparently) uniform clinical presentation of transient global amnesia. The finding of left prefrontal hypometabolism in the face of impaired episodic memory and altered verbal fluency would fit present-day concepts from PET activation studies about the role of this area in episodic and semantic memory encoding and retrieval. Likewise, the changes affecting the lenticular nucleus but sparing the caudate would be consistent with the normal performance in perceptual-verbal skill learning. Finally, unaltered lexical-semantic priming effects, despite left temporal cortex hypometabolism, suggest that these processes are subserved by a more distributed neocortical network.
Hollingworth, Andrew; Hwang, Seongmin
2013-10-19
We examined the conditions under which a feature value in visual working memory (VWM) recruits visual attention to matching stimuli. Previous work has suggested that VWM supports two qualitatively different states of representation: an active state that interacts with perceptual selection and a passive (or accessory) state that does not. An alternative hypothesis is that VWM supports a single form of representation, with the precision of feature memory controlling whether or not the representation interacts with perceptual selection. The results of three experiments supported the dual-state hypothesis. We established conditions under which participants retained a relatively precise representation of a particular colour. If the colour was immediately task relevant, it reliably recruited attention to matching stimuli. However, if the colour was not immediately task relevant, it failed to interact with perceptual selection. Feature maintenance in VWM is not necessarily equivalent to feature-based attentional selection.
Baumgarten, Thomas J; Königs, Sara; Schnitzler, Alfons; Lange, Joachim
2017-03-09
Although perception is experienced as continuous, there is an ongoing debate about whether it is an intrinsically discrete process, with incoming sensory information treated as a succession of single perceptual cycles. Here, we provide causal evidence that somatosensory perception is composed of discrete perceptual cycles. In humans, we used an electrotactile temporal discrimination task preceded by a subliminal (i.e., below perceptual threshold) stimulus. Although not consciously perceived, subliminal stimuli are known to elicit neuronal activity in early sensory areas and to modulate the phase of ongoing neuronal oscillations. We hypothesized that the subliminal stimulus indirectly but systematically modulates the ongoing oscillatory phase in S1, thereby rhythmically shaping perception. The present results confirm that, without being consciously perceived, the subliminal stimulus critically influenced perception in the discrimination task. Importantly, perception was modulated rhythmically, in cycles corresponding to the beta band (13-18 Hz). This pattern can be compellingly explained by a model of discrete perceptual cycles.
Altering sensorimotor feedback disrupts visual discrimination of facial expressions.
Wood, Adrienne; Lupyan, Gary; Sherrin, Steven; Niedenthal, Paula
2016-08-01
Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory and motor processes involving the face contribute to the visual perceptual, and not just conceptual, processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.
Williams, A Mark; Ericsson, K Anders
2005-06-01
The number of researchers studying perceptual-cognitive expertise in sport is increasing. The intention in this paper is to review the currently accepted framework for studying expert performance and to consider implications for undertaking research in the area of perceptual-cognitive expertise in sport. The expert performance approach presents a descriptive and inductive approach for the systematic study of expert performance. The nature of expert performance is initially captured in the laboratory using representative tasks that identify reliably superior performance. Process-tracing measures are then employed to determine the mechanisms that mediate expert performance on the task. Finally, the specific types of activities that lead to the acquisition and development of these mediating mechanisms are identified. General principles and mechanisms may be discovered and then validated by more traditional experimental designs. The relevance of this approach to the study of perceptual-cognitive expertise in sport is discussed, and suggestions for future work are highlighted.
Dopaminergic stimulation in unilateral neglect
Geminiani, G.; Bottini, G.; Sterzi, R.
1998-01-01
OBJECTIVE—To explore the hypothesis that dopaminergic circuits play a part in the premotor components of the unilateral neglect syndrome, the effects of acute dopaminergic stimulation in patients with neglect were studied. METHODS—Two tasks were evaluated before and after subcutaneous administration of apomorphine and placebo: a circle crossing test and a test of target exploration (a modified version of the bell test), performed both in perceptual (counting) and in perceptual-motor (pointing) conditions. SUBJECTS—Four patients with left neglect. RESULTS—After dopaminergic stimulation, a significant improvement in the performance of the two tests was found compared with placebo administration and baseline evaluation. Three of the patients showed a more marked improvement in the perceptual-motor condition (pointing) of the task than in the perceptual condition (counting). CONCLUSIONS—The findings suggest that dopaminergic neuronal networks may mediate, in different ways, both the perceptual and the premotor components of the unilateral neglect syndrome. PMID:9728946
Gestalt Reasoning with Conjunctions and Disjunctions
Dumitru, Magda L.; Joergensen, Gitte H.
2016-01-01
Reasoning, solving mathematical equations, or planning written and spoken sentences must all factor in the perceptual properties of stimuli. Indeed, thinking processes are inspired by and subsequently fitted to concrete objects and situations. It is therefore reasonable to expect that the mental representations evoked when people solve these seemingly abstract tasks should interact with the properties of the manipulated stimuli. Here, we investigated the mental representations evoked by conjunction and disjunction expressions in language-picture matching tasks. We hypothesised that, if these representations have been derived using key Gestalt principles, reasoners should use perceptual compatibility to gauge the goodness of fit between conjunction/disjunction descriptions (e.g., the purple and/or the green) and corresponding binary visual displays. Indeed, the results of three experimental studies demonstrate that reasoners associate conjunction descriptions with perceptually-dependent stimuli and disjunction descriptions with perceptually-independent stimuli, where visual dependency status follows the key Gestalt principles of common fate, proximity, and similarity. PMID:26986760
Angular relation of axes in perceptual space
NASA Technical Reports Server (NTRS)
Bucher, Urs
1992-01-01
The geometry of perceptual space needs to be known to model spatial orientation constancy or to create virtual environments. To examine one main aspect of this geometry, the angular relation between the three spatial axes was measured. Experiments consisted of a perceptual task in which subjects were asked to independently set their apparent vertical and horizontal planes. The visual background provided no other stimuli to serve as optical direction cues. The task was performed in a number of different body tilt positions, with pitches and rolls varied in steps of 30 deg. The results clearly show the distortion of orthogonality of perceptual space for nonupright body positions. Large interindividual differences were found. Deviations from orthogonality of up to 25 deg were detected in the pitch as well as in the roll direction. Implications of this nonorthogonality for further studies of spatial perception and for the construction of virtual environments for human interaction are also discussed.
Processing of unattended, simple negative pictures resists perceptual load.
Sand, Anders; Wiens, Stefan
2011-05-11
As researchers debate whether emotional pictures can be processed irrespective of spatial attention and perceptual load, negative and neutral pictures of simple figure-ground composition were shown at fixation and were surrounded by one, two, or three letters. When participants performed a picture discrimination task, there was evidence for motivated attention; that is, an early posterior negativity (EPN) and late positive potential (LPP) to negative versus neutral pictures. When participants performed a letter discrimination task, the EPN was unaffected whereas the LPP was reduced. Although performance decreased substantially with the number of letters (one to three), the LPP did not decrease further. Therefore, attention to simple, negative pictures at fixation seems to resist manipulations of perceptual load.
EFFECTS OF TALKER GENDER ON DIALECT CATEGORIZATION
CLOPPER, CYNTHIA G.; CONREY, BRIANNA; PISONI, DAVID B.
2011-01-01
The identification of the gender of an unfamiliar talker is an easy and automatic process for naïve adult listeners. Sociolinguistic research has consistently revealed gender differences in the production of linguistic variables. Research on the perception of dialect variation, however, has been limited almost exclusively to male talkers. In the present study, naïve participants were asked to categorize unfamiliar talkers by dialect using sentence-length utterances under three presentation conditions: male talkers only, female talkers only, and a mixed gender condition. The results revealed no significant differences in categorization performance across the three presentation conditions. However, a clustering analysis of the listeners’ categorization errors revealed significant effects of talker gender on the underlying perceptual similarity spaces. The present findings suggest that naïve listeners are sensitive to gender differences in speech production and are able to use those differences to reliably categorize unfamiliar male and female talkers by dialect. PMID:21423866
Categorization = Decision Making + Generalization
Seger, Carol A; Peterson, Erik J.
2013-01-01
We rarely, if ever, repeatedly encounter exactly the same situation. This makes generalization crucial for real-world decision making. We argue that categorization, the study of generalizable representations, is a type of decision making, and that categorization learning research would benefit from approaches developed to study the neuroscience of decision making. Similarly, methods developed to examine generalization and learning within the field of categorization may enhance decision making research. We first discuss perceptual information processing and integration, with an emphasis on accumulator models. We then examine learning the value of different decision making choices via experience, emphasizing reinforcement learning modeling approaches. Next we discuss how value is combined with other factors in decision making, emphasizing the effects of uncertainty. Finally, we describe how a final decision is selected via thresholding processes implemented by the basal ganglia and related regions. We also consider how memory-related functions in the hippocampus may be integrated with decision making mechanisms and contribute to categorization. PMID:23548891
Conceptual Structure within and between Modalities
Dilkina, Katia; Lambon Ralph, Matthew A.
2012-01-01
Current views of semantic memory share the assumption that conceptual representations are based on multimodal experience, which activates distinct modality-specific brain regions. This proposition is widely accepted, yet little is known about how each modality contributes to conceptual knowledge and how the structure of this contribution varies across these multiple information sources. We used verbal feature lists, features from drawings, and verbal co-occurrence statistics from latent semantic analysis to examine the informational structure in four domains of knowledge: perceptual, functional, encyclopedic, and verbal. The goals of the analysis were three-fold: (1) to assess the structure within individual modalities; (2) to compare structures between modalities; and (3) to assess the degree to which concepts organize categorically or randomly. Our results indicated significant and unique structure in all four modalities: perceptually, concepts organize based on prominent features such as shape, size, color, and parts; functionally, they group based on use and interaction; encyclopedically, they arrange based on commonality in location or behavior; and verbally, they group associatively or relationally. Visual/perceptual knowledge gives rise to the strongest hierarchical organization and is closest to classic taxonomic structure. Information is organized somewhat similarly in the perceptual and encyclopedic domains, which differs significantly from the structure in the functional and verbal domains. Notably, the verbal modality has the most unique organization, which is not at all categorical but also not random. The idiosyncrasy and complexity of conceptual structure across modalities raise the question of how all of these modality-specific experiences are fused together into coherent, multifaceted yet unified concepts. Accordingly, both methodological and theoretical implications of the present findings are discussed. PMID:23293593
Conceptual and perceptual encoding instructions differently affect event recall.
García-Bajos, Elvira; Migueles, Malen; Aizpurua, Alaitz
2014-11-01
When recalling an event, people usually retrieve the main facts and a reduced proportion of specific details. The objective of this experiment was to study the effects of conceptually and perceptually driven encoding on the recall of conceptual and perceptual information about an event. The materials selected for the experiment were two movie trailers. To enhance the encoding instructions, after watching the first trailer participants answered conceptual or perceptual questions about the event, while a control group answered general knowledge questions. After watching the second trailer, all of the participants completed a closed-ended recall task consisting of conceptual and perceptual items. Conceptual information was better recalled than perceptual details, and participants made more perceptual than conceptual commission errors. Conceptually driven processing enhanced the recall of conceptual information, whereas perceptually driven processing not only failed to improve the recall of descriptive details but also disrupted the standard conceptual/perceptual recall relationship.
Visual Perceptual Learning and Models.
Dosher, Barbara; Lu, Zhong-Lin
2017-09-15
Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.
He, Xun; Witzel, Christoph; Forder, Lewis; Clifford, Alexandra; Franklin, Anna
2014-04-01
Prior claims that color categories affect color perception are confounded by inequalities in the color space used to equate same- and different-category colors. Here, we equate same- and different-category colors in the number of just-noticeable differences, and measure event-related potentials (ERPs) to these colors on a visual oddball task to establish if color categories affect perceptual or post-perceptual stages of processing. Category effects were found from 200 ms after color presentation, only in ERP components that reflect post-perceptual processes (e.g., N2, P3). The findings suggest that color categories affect post-perceptual processing, but do not affect the perceptual representation of color.
Effect of perceptual load on conceptual processing: an extension of Vermeulen's theory.
Xie, Jiushu; Wang, Ruiming; Sun, Xun; Chang, Song
2013-10-01
The effects of color and shape load on conceptual processing were studied. Perceptual load effects have been found in visual and auditory conceptual processing, supporting the theory of embodied cognition. However, it is unknown whether different types of visual concepts, such as color and shape, share the same perceptual load effects. In the current experiment, 32 participants were administered simultaneous perceptual and conceptual tasks to assess the relation between perceptual load and conceptual processing. Holding a color load in mind interfered with color conceptual processing. Hence, perceptual and conceptual processing drew on the same resources, consistent with embodied cognition. Color conceptual processing was not affected by shape pictures, indicating that different types of properties within vision are processed separately.
How Fast is Famous Face Recognition?
Barragan-Jason, Gladys; Lachat, Fanny; Barbeau, Emmanuel J.
2012-01-01
The rapid recognition of familiar faces is crucial for social interactions. However, the actual speed with which recognition can be achieved remains largely unknown, as most studies have been carried out without any speed constraints. Different paradigms have been used, leading to conflicting results, and although many authors suggest that face recognition is fast, the speed of face recognition has not been directly compared to that of “fast” visual tasks. In this study, we sought to overcome these limitations. Subjects performed three tasks, a familiarity categorization task (famous faces among unknown faces), a superordinate categorization task (human faces among animal ones), and a gender categorization task. All tasks were performed under speed constraints. The results show that, despite the use of speed constraints, subjects were slow when they had to categorize famous faces: minimum reaction time was 467 ms, which is 180 ms more than during superordinate categorization and 160 ms more than in the gender condition. Our results are compatible with a hierarchy of face processing from the superordinate level to the familiarity level. The processes taking place between detection and recognition need to be investigated in detail. PMID:23162503
The Role of Task-Related Learned Representations in Explaining Asymmetries in Task Switching
Barutchu, Ayla; Becker, Stefanie I.; Carter, Olivia; Hester, Robert; Levy, Neil L.
2013-01-01
Task switch costs often show an asymmetry, with switch costs being larger when switching from a difficult task to an easier task. This asymmetry has been explained by difficult tasks being represented more strongly and consequently requiring more inhibition prior to switching to the easier task. The present study shows that switch cost asymmetries observed in arithmetic tasks (addition vs. subtraction) do not depend on task difficulty: Switch costs of similar magnitudes were obtained when participants were presented with unsolvable pseudo-equations that did not differ in task difficulty. Further experiments showed that neither task switch costs nor switch cost asymmetries were due to perceptual factors (e.g., perceptual priming effects). These findings suggest that asymmetrical switch costs can be brought about by the association of some tasks with greater difficulty than others. Moreover, the finding that asymmetrical switch costs were observed (1) in the absence of a task switch proper and (2) without differences in task difficulty, suggests that present theories of task switch costs and switch cost asymmetries are in important ways incomplete and need to be modified. PMID:23613919
Kapatsinski, Vsevolod; Olejarczuk, Paul; Redford, Melissa A
2017-03-01
We report on rapid perceptual learning of intonation contour categories in adults and 9- to 11-year-old children. Intonation contours are temporally extended patterns, whose perception requires temporal integration and therefore poses significant working memory challenges. Both children and adults form relatively abstract representations of intonation contours: Previously encountered and novel exemplars are categorized together equally often, as long as distance from the prototype is controlled. However, age-related differences in categorization performance also exist. Given the same experience, adults form narrower categories than children. In addition, adults pay more attention to the end of the contour, while children appear to pay equal attention to the beginning and the end. The age range we examine appears to capture the tail-end of the developmental trajectory for learning intonation contour categories: There is a continuous effect of age on category breadth within the child group, but the oldest children (older than 10;3) are adult-like. Copyright © 2016 Cognitive Science Society, Inc.
Rossi-Pool, Román; Salinas, Emilio; Zainos, Antonio; Alvarez, Manuel; Vergara, José; Parga, Néstor; Romo, Ranulfo
2016-01-01
The problem of neural coding in perceptual decision making revolves around two fundamental questions: (i) How are the neural representations of sensory stimuli related to perception, and (ii) what attributes of these neural responses are relevant for downstream networks, and how do they influence decision making? We studied these two questions by recording neurons in primary somatosensory (S1) and dorsal premotor (DPC) cortex while trained monkeys reported whether the temporal pattern structure of two sequential vibrotactile stimuli (of equal mean frequency) was the same or different. We found that S1 neurons coded the temporal patterns in a literal way and only during the stimulation periods and did not reflect the monkeys’ decisions. In contrast, DPC neurons coded the stimulus patterns as broader categories and signaled them during the working memory, comparison, and decision periods. These results show that the initial sensory representation is transformed into an intermediate, more abstract categorical code that combines past and present information to ultimately generate a perceptually informed choice. PMID:27872293
Kapatsinski, Vsevolod; Olejarczuk, Paul; Redford, Melissa A.
2015-01-01
We report on rapid perceptual learning of intonation contour categories in adults and 9- to 11-year-old children. Intonation contours are temporally extended patterns whose perception requires temporal integration and therefore poses significant working memory challenges. Both children and adults form relatively abstract representations of intonation contours: previously encountered and novel exemplars are categorized together equally often, as long as distance from the prototype is controlled. However, age-related differences in categorization performance also exist. Given the same experience, adults form narrower categories than children. In addition, adults pay more attention to the end of the contour while children appear to pay equal attention to the beginning and the end. The age range we examine appears to capture the tail-end of the developmental trajectory for learning intonation contour categories: there is a continuous effect of age on category breadth within the child group, but the oldest children (older than 10;3) are adult-like. PMID:26901251
Pinal, Diego; Zurrón, Montserrat; Díaz, Fernando
2014-01-01
Working memory (WM) involves information encoding, maintenance, and retrieval; these processes are supported by brain activity in a network of frontal, parietal and temporal regions. Manipulation of WM load and of the duration of the maintenance period can modulate this activity. Although such modulations have been widely studied using the event-related potential (ERP) technique, a precise description of the time course of brain activity during encoding and retrieval is still required. Here, we used this technique and principal component analysis to assess the time course of brain activity during encoding and retrieval in a delayed match-to-sample task. We also investigated the effects of memory load and duration of the maintenance period on ERP activity. Brain activity was similar during information encoding and retrieval and comprised six temporal factors, which closely matched the latency and scalp distribution of some ERP components: P1, N1, P2, N2, P300, and a slow wave. Changes in memory load modulated task performance and yielded variations in frontal lobe activation. Moreover, the P300 amplitude was smaller in the high than in the low load condition during encoding and retrieval. Conversely, the slow wave amplitude was higher in the high than in the low load condition during encoding, and the same was true for the N2 amplitude during retrieval. Thus, during encoding, memory load appears to modulate the processing resources for context updating and post-categorization processes, whereas during retrieval it modulates resources for stimulus classification and context updating. In addition, despite the lack of differences in task performance related to the duration of the maintenance period, larger N2 amplitudes and stronger activation of the left temporal lobe were found during information retrieval after long than after short maintenance periods. Thus, the results regarding the duration of the maintenance period were complex, and future work is required to test the predictions of time-based decay theory.
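The temporal principal component analysis step can be pictured with a minimal sketch. The data below are random and purely illustrative; the reshaping convention and the choice of six components are assumptions, and the study's actual preprocessing and any factor rotation are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
epochs = rng.normal(size=(200, 32, 300))   # hypothetical trials x channels x time samples

# Temporal PCA: each time point is a variable, each trial/channel waveform an observation
waveforms = epochs.reshape(-1, epochs.shape[-1])
pca = PCA(n_components=6)                  # six temporal factors, as in the abstract
scores = pca.fit_transform(waveforms)      # factor scores per waveform
loadings = pca.components_                 # factor x time loadings (cf. ERP components)

print(pca.explained_variance_ratio_.round(3))
```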
Perceptual judgments made better by indirect interactions: Evidence from a joint localization task
Sebanz, Natalie; Knoblich, Günther
2017-01-01
Others’ perceptual judgments tend to have strong effects on our own and can improve perceptual judgments when task partners engage in communication. The present study investigated whether individuals benefit from others’ perceptual judgments in indirect interactions, where the outcomes of individual decisions can be observed in a shared environment. Participants located a target in a 2D projection of a 3D container either from two complementary viewpoints (Experiment 1) or from a single viewpoint (Experiment 2). Uncertainty about the target location was high on the front-back dimension and low on the left-right dimension. The results showed that pairs of participants benefitted from taking turns in providing judgments. When each member of the pair had access to one complementary perspective, the pair achieved the same level of accuracy as when the two individuals had access to both complementary perspectives, and better performance than when the two individuals had access to only one perspective. These findings demonstrate the important role of a shared environment for the successful integration of perceptual information, while highlighting limitations in assigning appropriate weights to others’ judgments. PMID:29095951
Mirams, Laura; Poliakoff, Ellen; Brown, Richard J; Lloyd, Donna M
2012-01-01
Evidence suggests that interoceptive and exteroceptive attention might have different perceptual effects. However, the effects of these different types of body-focused attention have never been directly compared. The current research investigated how interoceptive and exteroceptive attention affect subsequent performance on the somatic signal detection task (SSDT). In Experiment 1, 37 participants completed the SSDT under usual testing conditions and after performing an interoceptive heartbeat perception task. This task induced a more liberal response criterion, increasing touch reports in the presence and absence of a target vibration. This finding is consistent with suggestions that attending internally contributes to physical symptom reporting in patients with medically unexplained symptoms (MUS). In Experiment 2, 40 participants completed the SSDT before and after an exteroceptive grating orientation task. This task induced a more stringent response criterion, decreasing touch reports in the presence and absence of the target, possibly via a reduction in sensory noise. This work demonstrates that internal and external body-focused attention can have opposite effects on subsequent somatic perceptual decision making and suggests that attentional training could be useful for patients reporting MUS.
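The criterion shifts described here are standard signal detection theory quantities. A minimal sketch of how sensitivity (d') and criterion (c) could be computed from yes/no counts follows; the counts and the 0.5 cell correction are illustrative assumptions, not the study's data.

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """d' and criterion c from yes/no counts (0.5 correction for extreme rates)."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)   # more negative = more liberal ("yes"-prone)
    return d_prime, criterion

# Hypothetical SSDT counts before vs. after an interoceptive attention task
print(sdt_measures(hits=30, misses=10, false_alarms=5, correct_rejections=35))
print(sdt_measures(hits=34, misses=6, false_alarms=12, correct_rejections=28))
```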
Gamma band activity and the P3 reflect post-perceptual processes, not visual awareness
Pitts, Michael A.; Padwal, Jennifer; Fennelly, Daniel; Martínez, Antígona; Hillyard, Steven A.
2014-01-01
A primary goal in cognitive neuroscience is to identify neural correlates of conscious perception (NCC). By contrasting conditions in which subjects are aware versus unaware of identical visual stimuli, a number of candidate NCCs have emerged, among them induced gamma band activity in the EEG and the P3 event-related potential. In most previous studies, however, the critical stimuli were always directly relevant to the subjects’ task, such that aware versus unaware contrasts may well have included differences in post-perceptual processing in addition to differences in conscious perception per se. Here, in a series of EEG experiments, visual awareness and task relevance were manipulated independently. Induced gamma activity and the P3 were absent for task-irrelevant stimuli regardless of whether subjects were aware of such stimuli. For task-relevant stimuli, gamma and the P3 were robust and dissociable, indicating that each reflects distinct post-perceptual processes necessary for carrying out the task but not for consciously perceiving the stimuli. Overall, this pattern of results challenges a number of previous proposals linking gamma band activity and the P3 to conscious perception. PMID:25063731
Is Statistical Learning Constrained by Lower Level Perceptual Organization?
Emberson, Lauren L.; Liu, Ran; Zevin, Jason D.
2013-01-01
In order for statistical information to aid in complex developmental processes such as language acquisition, learning from higher-order statistics (e.g. across successive syllables in a speech stream to support segmentation) must be possible while perceptual abilities (e.g. speech categorization) are still developing. The current study examines how perceptual organization interacts with statistical learning. Adult participants were presented with multiple exemplars from novel, complex sound categories designed to reflect some of the spectral complexity and variability of speech. These categories were organized into sequential pairs and presented such that higher-order statistics, defined based on sound categories, could support stream segmentation. Perceptual similarity judgments and multi-dimensional scaling revealed that participants only perceived three perceptual clusters of sounds and thus did not distinguish the four experimenter-defined categories, creating a tension between lower level perceptual organization and higher-order statistical information. We examined whether the resulting pattern of learning is more consistent with statistical learning being “bottom-up,” constrained by the lower levels of organization, or “top-down,” such that higher-order statistical information of the stimulus stream takes priority over the perceptual organization, and perhaps influences perceptual organization. We consistently find evidence that learning is constrained by perceptual organization. Moreover, participants generalize their learning to novel sounds that occupy a similar perceptual space, suggesting that statistical learning occurs based on regions of or clusters in perceptual space. Overall, these results reveal a constraint on learning of sound sequences, such that statistical information is determined based on lower level organization. These findings have important implications for the role of statistical learning in language acquisition. PMID:23618755
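The similarity-judgment analysis can be sketched as follows. The dissimilarity matrix here is random and the three-cluster cut is an assumption chosen only to mirror the reported result; this is not the study's pipeline.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
d = rng.uniform(0.2, 1.0, size=(12, 12))      # hypothetical pairwise dissimilarities
dissim = (d + d.T) / 2
np.fill_diagonal(dissim, 0.0)

# Embed judged dissimilarities in a 2D perceptual space
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)

# Ask how many perceptual clusters the space supports (forced to 3 here for illustration)
labels = fcluster(linkage(coords, method="ward"), t=3, criterion="maxclust")
print(coords.round(2))
print(labels)
```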
Working memory delay period activity marks a domain-unspecific attention mechanism.
Katus, Tobias; Müller, Matthias M
2016-03-01
Working memory (WM) recruits neural circuits that also perform perception- and action-related functions. Among the functions that are shared between the domains of WM and perception is selective attention, which supports the maintenance of task-relevant information during the retention delay of WM tasks. The tactile contralateral delay activity (tCDA) component of the event-related potential (ERP) marks the attention-based rehearsal of tactile information in somatosensory brain regions. We tested whether the tCDA reflects the competition for shared attention resources between a WM task and a perceptual task under dual-task conditions. The two tasks were always performed on opposite hands. In different blocks, the WM task had higher or lower priority than the perceptual task. The tCDA's polarity consistently reflected the hand where the currently prioritized task was performed. This suggests that the process indexed by the tCDA is not specific to the domain of WM, but mediated by a domain-unspecific attention mechanism. The analysis of transient ERP components evoked by stimuli in the two tasks further supports the interpretation that the tCDA marks a goal-directed bias in the allocation of selective attention. Larger spatially selective modulations were obtained for stimulus material related to the high-, as compared to low-priority, task. While our results generally indicate functional overlap between the domains of WM and perception, we also found evidence suggesting that selection in internal (mnemonic) and external (perceptual) stimulus representations involves processes that are not active during shifts of preparatory attention. Copyright © 2016 Elsevier Inc. All rights reserved.
Neuroimaging Evidence for 2 Types of Plasticity in Association with Visual Perceptual Learning.
Shibata, Kazuhisa; Sasaki, Yuka; Kawato, Mitsuo; Watanabe, Takeo
2016-09-01
Visual perceptual learning (VPL) is long-term performance improvement as a result of perceptual experience. It is unclear whether VPL is associated with refinement in representations of the trained feature (feature-based plasticity), improvement in processing of the trained task (task-based plasticity), or both. Here, we provide empirical evidence that VPL of motion detection is associated with both types of plasticity which occur predominantly in different brain areas. Before and after training on a motion detection task, subjects' neural responses to the trained motion stimuli were measured using functional magnetic resonance imaging. In V3A, significant response changes after training were observed specifically to the trained motion stimulus but independently of whether subjects performed the trained task. This suggests that the response changes in V3A represent feature-based plasticity in VPL of motion detection. In V1 and the intraparietal sulcus, significant response changes were found only when subjects performed the trained task on the trained motion stimulus. This suggests that the response changes in these areas reflect task-based plasticity. These results collectively suggest that VPL of motion detection is associated with the 2 types of plasticity, which occur in different areas and therefore have separate mechanisms at least to some degree. © The Author 2016. Published by Oxford University Press.
Comparing the benefits of caffeine, naps and placebo on verbal, motor and perceptual memory.
Mednick, Sara C; Cai, Denise J; Kanady, Jennifer; Drummond, Sean P A
2008-11-03
Caffeine, the world's most common psychoactive substance, is used by approximately 90% of North Americans every day. Little is known, however, about its benefits for memory. Napping has been shown to increase alertness and promote learning on some memory tasks. We directly compared caffeine (200 mg) with napping (60-90 min) and placebo on three distinct memory processes: declarative verbal memory, procedural motor skills, and perceptual learning. In the verbal task, recall and recognition for unassociated words were tested after a 7 h retention period (with a between-session nap or drug intervention). A second, different word list was administered post-intervention, and memory was tested after a 20 min retention period. The non-declarative tasks (the finger tapping task (FTT) and the texture discrimination task (TDT)) were trained before the intervention and then retested afterwards. Naps enhanced recall of words after both the 7 h and the 20 min retention intervals relative to both caffeine and placebo. Caffeine significantly impaired motor learning compared to placebo and naps. Napping produced robust perceptual learning compared with placebo; however, naps and caffeine were not significantly different. These findings provide evidence of the limited benefits of caffeine for memory improvement compared with napping. We hypothesize that impairment from caffeine may be restricted to tasks that contain explicit information, whereas strictly implicit learning is less compromised.
Bayesian model of categorical effects in L1 and L2 speech perception
NASA Astrophysics Data System (ADS)
Kronrod, Yakov
In this dissertation I present a model that captures categorical effects in both first language (L1) and second language (L2) speech perception. In L1 perception, categorical effects range from extremely strong for consonants to nearly continuous perception for vowels. I treat speech perception as a statistical inference problem and, by quantifying categoricity, obtain a unified model of both strong and weak categorical effects. In this optimal inference mechanism, the listener uses their knowledge of categories and the acoustics of the signal to infer the intended productions of the speaker. The model splits speech variability into meaningful category variance and perceptual noise variance. The ratio of these two variances, which I call Tau, directly correlates with the degree of categorical effects for a given phoneme or continuum. By fitting the model to behavioral data from different phonemes, I show how a single parametric quantitative variation can lead to the different degrees of categorical effects seen in perception experiments with different phonemes. In L2 perception, L1 categories have been shown to exert an effect on how L2 sounds are identified and how well the listener is able to discriminate them. Various models have been developed to relate the state of L1 categories to both the initial and the eventual ability to process the L2. These models have largely lacked a formalized metric of perceptual distance, a means of making a priori predictions of behavior for a new contrast, and a way of describing non-discrete gradient effects. In the second part of the dissertation, I apply the same computational model used to unify L1 categorical effects to L2 perception. I show that the model can make the same type of predictions as other SLA models while also providing a quantitative framework that formalizes all measures of similarity and bias. Further, I show how, by using this model to consider L2 learners at different stages of development, we can track specific parameters of categories as they change over time, giving us a look into the actual process of L2 category development.
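A minimal numerical sketch of the kind of inference described here, assuming a single Gaussian category with Gaussian perceptual noise; the function, parameter values, and the direction of the Tau ratio (category variance over noise variance) are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

def expected_target(stimulus, cat_mean, cat_var, noise_var):
    """Posterior mean of the intended production given a noisy stimulus."""
    shrink = noise_var / (cat_var + noise_var)   # pull of the percept toward the category mean
    return stimulus + shrink * (cat_mean - stimulus)

# Assumed reading of Tau = category variance / noise variance:
# small Tau -> strong shrinkage toward the category mean (strongly categorical),
# large Tau -> percepts track the acoustics (nearly continuous).
for tau in (0.1, 1.0, 10.0):
    percept = expected_target(stimulus=2.0, cat_mean=0.0, cat_var=tau, noise_var=1.0)
    print(f"Tau = {tau:>4}: inferred target = {percept:.2f}")
```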
Schulte, Tilman; Müller-Oehring, Eva M; Chanraud, Sandra; Rosenbloom, Margaret J; Pfefferbaum, Adolf; Sullivan, Edith V
2011-11-01
Aging has readily observable effects on the ability to resolve conflict between competing stimulus attributes, effects that are likely related to selective structural and functional brain changes. To identify age-related differences in the neural circuits subserving conflict processing, we combined structural and functional MRI with a Stroop Match-to-Sample task involving perceptual cueing and repetition to modulate resources in healthy young and older adults. In our Stroop Match-to-Sample task, older adults handled conflict by activating a frontoparietal attention system more than young adults, and they engaged a visuomotor network more than young adults when processing repetitive conflict and when processing conflict following valid perceptual cueing. By contrast, young adults activated frontal regions more than older adults when processing conflict with perceptual cueing. These differential activation patterns were not correlated with regional gray matter volume, despite smaller volumes in older than in young adults. Given comparable speed and accuracy of responding in both groups, these data suggest that successful aging is associated with functional reorganization of neural systems to accommodate increasing task demands on perceptual and attentional operations. Copyright © 2009 Elsevier Inc. All rights reserved.
Clark, Kait; Fleck, Mathias S; Mitroff, Stephen R
2011-01-01
Recent research has shown that avid action video game players (VGPs) outperform non-video game players (NVGPs) on a variety of attentional and perceptual tasks. However, it remains unknown exactly why and how such differences arise; while some prior research has demonstrated that VGPs' improvements stem from enhanced basic perceptual processes, other work indicates that they can stem from enhanced attentional control. The current experiment used a change-detection task to explore whether top-down strategies can contribute to VGPs' improved abilities. Participants viewed alternating presentations of an image and a modified version of the image and were tasked with detecting and localizing the changed element. Consistent with prior claims of enhanced perceptual abilities, VGPs were able to detect the changes while requiring less exposure to the change than NVGPs. Further analyses revealed this improved change detection performance may result from altered strategy use; VGPs employed broader search patterns when scanning scenes for potential changes. These results complement prior demonstrations of VGPs' enhanced bottom-up perceptual benefits by providing new evidence of VGPs' potentially enhanced top-down strategic benefits. Copyright © 2010 Elsevier B.V. All rights reserved.
Repetition priming and change in functional ability in older persons without dementia
Fleischman, Debra A.; Bienias, Julia L.; Bennett, David A.
2008-01-01
Adverse consequences such as institutionalization and death are associated with compromised activities of daily living in aging, yet little is known about risk factors for the development and progression of functional disability. Using generalized linear models, we examined the association between the ability to benefit from repetition and the rate of change in functional ability in 160 nondemented elders participating in the Religious Orders Study. Three single-word repetition priming tasks were administered that varied in the degree to which visual-perceptual or conceptual processing is invoked. Decline in functional ability was less rapid, during follow-up of up to 10 years, in persons with better baseline priming performance on a task known to draw on both visual-perceptual and conceptual processing (word-stem completion). By contrast, change in functional ability was not associated with priming on tasks known to draw primarily on either visual-perceptual (threshold word-identification) or conceptual (category exemplar production) processing. The results are discussed in terms of a common biological substrate in the inferotemporal neocortex supporting efficient processing of meaningful visual-perceptual experience and proficient performance of activities of daily living. PMID:19210037
The Leuven Embedded Figures Test (L-EFT): measuring perception, intelligence or executive function?
Van der Hallen, Ruth; Wagemans, Johan; de-Wit, Lee; Chamberlain, Rebecca
2018-01-01
Performance on the Embedded Figures Test (EFT) has been interpreted as a reflection of local/global perceptual style, weak central coherence and/or field independence, as well as a measure of intelligence and executive function. The variable ways in which EFT findings have been interpreted demonstrate that the construct validity of this measure is unclear. In order to address this lack of clarity, we investigated to what extent performance on a new Embedded Figures Test (L-EFT) correlated with measures of intelligence, executive functions and estimates of local/global perceptual styles. In addition, we compared L-EFT performance to the original group EFT to directly contrast both tasks. Taken together, our results indicate that performance on the L-EFT does not correlate strongly with estimates of local/global perceptual style, intelligence or executive functions. Additionally, the results show that performance on the L-EFT is similarly associated with memory span and fluid intelligence as the group EFT. These results suggest that the L-EFT does not reflect a general perceptual or cognitive style/ability. These results further emphasize that empirical data on the construct validity of a task do not always align with the face validity of a task. PMID:29607257
Kühn, Simone; Schmiedek, Florian; Schott, Björn; Ratcliff, Roger; Heinze, Hans-Jochen; Düzel, Emrah; Lindenberger, Ulman; Lövden, Martin
2011-09-01
Perceptual decision-making performance depends on several cognitive and neural processes. Here, we fit Ratcliff's diffusion model to accuracy data and reaction-time distributions from one numerical and one verbal two-choice perceptual-decision task to deconstruct these performance measures into the rate of evidence accumulation (i.e., drift rate), response criterion setting (i.e., boundary separation), and peripheral aspects of performance (i.e., nondecision time). These theoretical processes are then related to individual differences in brain activation by means of multiple regression. The sample consisted of 24 younger and 15 older adults performing the task in fMRI before and after 100 daily 1-hr behavioral training sessions in a multitude of cognitive tasks. Results showed that individual differences in boundary separation were related to striatal activity, whereas differences in drift rate were related to activity in the inferior parietal lobe. These associations were not significantly modified by adult age or perceptual expertise. We conclude that the striatum is involved in regulating response thresholds, whereas the inferior parietal lobe might represent decision-making evidence related to letters and numbers.
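The decomposition into drift rate, boundary separation, and nondecision time can be made concrete with a forward simulation of the diffusion process. This sketch only generates choices and reaction times from assumed parameter values; actually fitting the model to RT distributions, as done in the study, requires dedicated estimation routines.

```python
import numpy as np

def simulate_ddm(drift, boundary, nondecision, n_trials=2000,
                 dt=0.001, noise_sd=0.1, seed=0):
    """Two-boundary diffusion: returns choices (1 = upper bound) and RTs in seconds."""
    rng = np.random.default_rng(seed)
    choices = np.empty(n_trials, dtype=int)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = boundary / 2.0, 0.0                 # unbiased starting point
        while 0.0 < x < boundary:
            x += drift * dt + noise_sd * np.sqrt(dt) * rng.normal()
            t += dt
        choices[i] = int(x >= boundary)
        rts[i] = nondecision + t                   # add nondecision time (encoding/motor)
    return choices, rts

choices, rts = simulate_ddm(drift=0.25, boundary=0.12, nondecision=0.30)
print(f"accuracy ~ {choices.mean():.2f}, median RT ~ {np.median(rts):.3f} s")
```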
Measuring Category Intuitiveness in Unconstrained Categorization Tasks
ERIC Educational Resources Information Center
Pothos, Emmanuel M.; Perlman, Amotz; Bailey, Todd M.; Kurtz, Ken; Edwards, Darren J.; Hines, Peter; McDonnell, John V.
2011-01-01
What makes a category seem natural or intuitive? In this paper, an unsupervised categorization task was employed to examine observer agreement concerning the categorization of nine different stimulus sets. The stimulus sets were designed to capture different intuitions about classification structure. The main empirical index of category…
Fu, Qiufang; Liu, Yong-Jin; Dienes, Zoltan; Wu, Jianhui; Chen, Wenfeng; Fu, Xiaolan
2017-01-01
It remains controversial whether visual awareness is correlated with early activation, indexed by the VAN (visual awareness negativity), as the recurrent processing hypothesis proposes, or with later activation, indexed by the P3 or LP (late positivity), as global workspace theories suggest. To address this issue, a backward masking task was adopted in which participants were first asked to categorize natural scenes presented as color photographs and line-drawings and then to rate the clarity of their visual experience on a Perceptual Awareness Scale (PAS). The interstimulus interval between the scene and the mask was manipulated. The behavioral results showed that categorization accuracy increased with PAS ratings for both color photographs and line-drawings, with no difference in accuracy between the two types of images at any rating, indicating that the experience rating reflected visibility. Importantly, the event-related potential (ERP) results revealed that for correct trials, the early posterior N1 and anterior P2 components changed with the PAS ratings for color photographs but did not vary with the PAS ratings for line-drawings, indicating that the N1 and P2 do not always correlate with subjective visual awareness. Moreover, for both types of images, the anterior N2 and posterior VAN changed with the PAS ratings in a linear way, while the LP changed with the PAS ratings in a non-linear way, suggesting that these components relate to different types of subjective awareness. The results reconcile the apparently contradictory predictions of different theories and help to resolve the current debate on the neural correlates of visual awareness. PMID:28261141
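The contrast drawn above between components that scale linearly with PAS ratings and those that do not can be illustrated by comparing first- and second-order polynomial fits of mean amplitude across the four rating levels. The amplitudes below are invented and the comparison is only a sketch; the study itself would rely on trend contrasts within a repeated-measures design rather than raw R-squared values.

```python
import numpy as np

# Invented mean amplitudes (microvolts) at PAS ratings 1-4.
pas_levels = np.array([1.0, 2.0, 3.0, 4.0])
van_amplitude = np.array([-0.5, -1.4, -2.2, -3.1])   # roughly linear decrease
lp_amplitude = np.array([0.2, 0.3, 0.6, 2.5])        # accelerating (non-linear) increase

def trend_r2(x, y, degree):
    """Proportion of variance captured by a polynomial trend of the given degree."""
    pred = np.polyval(np.polyfit(x, y, degree), x)
    return 1 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)

# A quadratic always fits at least as well as a line; the question is how much it adds.
for name, amp in [("VAN", van_amplitude), ("LP", lp_amplitude)]:
    print(f"{name}: linear R^2 = {trend_r2(pas_levels, amp, 1):.2f}, "
          f"quadratic R^2 = {trend_r2(pas_levels, amp, 2):.2f}")
```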
Perceptual decision related activity in the lateral geniculate nucleus
Jiang, Yaoguang; Yampolsky, Dmitry; Purushothaman, Gopathy
2015-01-01
Fundamental to neuroscience is the understanding of how the language of neurons relates to behavior. In the lateral geniculate nucleus (LGN), cells show distinct properties such as selectivity for particular wavelengths, increments or decrements in contrast, or preference for fine detail versus rapid motion. No studies, however, have measured how LGN cells respond when an animal is challenged to make a perceptual decision using information within the receptive fields of those LGN cells. In this study we measured neural activity in the macaque LGN during a two-alternative, forced-choice (2AFC) contrast detection task or during a passive fixation task and found that a small proportion (13.5%) of single LGN parvocellular (P) and magnocellular (M) neurons matched the psychophysical performance of the monkey. The majority of LGN neurons measured in both tasks were not as sensitive as the monkey. The covariation between neural response and behavior (quantified as choice probability) was significantly above chance during active detection, even when there was no external stimulus. Interneuronal correlations and task-related gain modulations were negligible under the same condition. A bottom-up pooling model that used sensory neural responses to compute perceptual choices in the absence of interneuronal correlations could fully explain these results at the level of the LGN, supporting the hypothesis that the perceptual decision pool consists of multiple sensory neurons and that response fluctuations in these neurons can influence perception. PMID:26019309
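Choice probability, as used above, is conventionally computed as the area under the ROC curve that separates a neuron's responses on trials grouped by the animal's choice; a value of 0.5 indicates no covariation between response and choice. The sketch below shows that computation on invented spike counts and is not the authors' analysis code.

```python
import numpy as np

def choice_probability(resp_choice_a, resp_choice_b):
    """ROC area separating responses on choice-A trials from choice-B trials.

    Uses the rank-sum (Mann-Whitney U) formulation: the fraction of all
    A-vs-B trial pairs in which the A response is larger, counting ties as 0.5.
    """
    a = np.asarray(resp_choice_a, dtype=float)
    b = np.asarray(resp_choice_b, dtype=float)
    diff = a[:, None] - b[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / (a.size * b.size)

# Invented spike counts, split by the monkey's choice (e.g., on zero-contrast trials):
rng = np.random.default_rng(0)
counts_pref = rng.poisson(12, size=80)   # trials ending in the neuron's "preferred" choice
counts_null = rng.poisson(11, size=90)   # trials ending in the other choice
print(f"choice probability = {choice_probability(counts_pref, counts_null):.3f}")
```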
Memory systems, processes, and tasks: taxonomic clarification via factor analysis.
Bruss, Peter J; Mitchell, David B
2009-01-01
The nature of various memory systems was examined using factor analysis. We reanalyzed data from 11 memory tasks previously reported in Mitchell and Bruss (2003). Four well-defined factors emerged, closely resembling episodic and semantic memory and conceptual and perceptual implicit memory, in line with both memory systems and transfer-appropriate processing accounts. To explore taxonomic issues, we ran separate analyses on the implicit tasks. Using a cross-format manipulation (pictures vs. words), we identified 3 prototypical tasks. Word fragment completion and picture fragment identification tasks were "factor pure," tapping perceptual processes uniquely. Category exemplar generation revealed its conceptual nature, yielding both cross-format priming and a picture superiority effect. In contrast, word stem completion and picture naming were more complex, revealing attributes of both processes.
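As a rough illustration of submitting a battery of task scores to exploratory factor analysis, the sketch below extracts a four-factor solution with scikit-learn. The data are simulated placeholders, and the library's unrotated maximum-likelihood solution stands in for whatever extraction and rotation method the authors actually used.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Placeholder scores: rows = participants, columns = the 11 memory tasks
# (recall, recognition, word-fragment completion, category generation, ...).
rng = np.random.default_rng(1)
scores = rng.normal(size=(120, 11))

# Standardize each task so loadings are comparable, then extract four factors,
# mirroring the four-factor solution described in the abstract.
z = StandardScaler().fit_transform(scores)
fa = FactorAnalysis(n_components=4, random_state=0).fit(z)

loadings = fa.components_.T   # tasks x factors
for task, row in enumerate(loadings, start=1):
    print(f"task {task:2d} loadings: {np.round(row, 2)}")
```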
Are videogame training gains specific or general?
Oei, Adam C; Patterson, Michael D
2014-01-01
Many recent studies using healthy adults document enhancements in perception and cognition from playing commercial action videogames (AVGs). Playing action games (e.g., Call of Duty, Medal of Honor) is associated with improved bottom-up, lower-level information-processing skills such as visual-perceptual and attentional processes. One proposal posits a general improvement in the ability to gather and interpret statistical information to predict future actions, which then leads to better performance across different perceptual/attentional tasks. Another proposal claims that the tasks are trained separately within the AVGs because the AVGs and the laboratory tasks contain similar demands. We review studies of action and non-AVGs to show support for the latter proposal. To explain transfer in AVGs, we argue that the perceptual and attention tasks share common demands with the trained videogames (e.g., multiple object tracking (MOT), rapid attentional switches, and peripheral vision). In non-AVGs, several studies also demonstrate specific, limited transfer. One instance of specific transfer is the enhancement of mental rotation after training on games with a spatial emphasis (e.g., Tetris). In contrast, the evidence for transfer is equivocal where the game and task do not share common demands (e.g., executive functioning). Thus, the "common demands" hypothesis of transfer characterizes transfer effects not only in AVGs but also in non-action games. Furthermore, such a theory provides specific predictions, which can help in the selection of games to train human cognition as well as in the design of videogames intended for human cognitive and perceptual enhancement. Finally, this hypothesis is consistent with the cognitive-training literature, in which most post-training gains are for tasks similar to the training rather than general, non-specific improvements. PMID:24782722
Lifelong Bilingualism Maintains Neural Efficiency for Cognitive Control in Aging
Gold, Brian T.; Kim, Chobok; Johnson, Nathan F.; Kryscio, Richard J.; Smith, Charles D.
2013-01-01
Recent behavioral data have shown that lifelong bilingualism can maintain youthful cognitive control abilities in aging. Here, we provide the first direct evidence of a neural basis for the bilingual cognitive control boost in aging. Two experiments were conducted, using a perceptual task switching paradigm, and including a total of 110 participants. In Experiment 1, older adult bilinguals showed better perceptual switching performance than their monolingual peers. In Experiment 2, younger and older adult monolinguals and bilinguals completed the same perceptual task switching experiment while fMRI was performed. Typical age-related performance reductions and fMRI activation increases were observed. However, like younger adults, bilingual older adults outperformed their monolingual peers while displaying decreased activation in left lateral frontal cortex and cingulate cortex. Critically, this attenuation of age-related over-recruitment associated with bilingualism was directly correlated with better task switching performance. In addition, the lower BOLD response in frontal regions accounted for 82% of the variance in the bilingual task switching reaction time advantage. These results suggest that lifelong bilingualism offsets age-related declines in the neural efficiency for cognitive control processes. PMID:23303919
Divided attention disrupts perceptual encoding during speech recognition.
Mattys, Sven L; Palmer, Shekeila D
2015-03-01
Performing a secondary task while listening to speech has a detrimental effect on speech processing, but the locus of the disruption within the speech system is poorly understood. Recent research has shown that cognitive load imposed by a concurrent visual task increases dependency on lexical knowledge during speech processing, but it does not affect lexical activation per se. This suggests that "lexical drift" under cognitive load occurs either as a post-lexical bias at the decisional level or as a secondary consequence of reduced perceptual sensitivity. This study aimed to adjudicate between these alternatives using a forced-choice task that required listeners to identify noise-degraded spoken words with or without the addition of a concurrent visual task. Adding cognitive load increased the likelihood that listeners would select a word acoustically similar to the target even though its frequency was lower than that of the target. Thus, there was no evidence that cognitive load led to a high-frequency response bias. Rather, cognitive load seems to disrupt sublexical encoding, possibly by impairing perceptual acuity at the auditory periphery.
Two-item same/different discrimination in rhesus monkeys (Macaca mulatta).
Basile, Benjamin M; Moylan, Emily J; Charles, David P; Murray, Elisabeth A
2015-11-01
Almost all nonhuman animals can recognize when one item is the same as another item. It is less clear whether nonhuman animals possess abstract concepts of "same" and "different" that can be divorced from perceptual similarity. Pigeons and monkeys show inconsistent performance, and often surprising difficulty, in laboratory tests of same/different learning that involve only two items. Previous results from tests using multi-item arrays suggest that nonhumans compute sameness along a continuous scale of perceptual variability, which would explain the difficulty of making two-item same/different judgments. Here, we provide evidence that rhesus monkeys can learn a two-item same/different discrimination similar to those on which monkeys and pigeons have previously failed. Monkeys' performance transferred to novel stimuli and was not affected by perceptual variations in stimulus size, rotation, view, or luminance. Success without the use of multi-item arrays, and the lack of effect of perceptual variability, suggests a computation of sameness that is more categorical, and perhaps more abstract, than previously thought.
Ho, Tiffany C.; Zhang, Shunan; Sacchet, Matthew D.; Weng, Helen; Connolly, Colm G.; Henje Blom, Eva; Han, Laura K. M.; Mobayed, Nisreen O.; Yang, Tony T.
2016-01-01
While the extant literature has focused on major depressive disorder (MDD) as being characterized by abnormalities in processing affective stimuli (e.g., facial expressions), little is known regarding which specific aspects of cognition influence the evaluation of affective stimuli and what the underlying neural correlates are. To investigate these issues, we assessed 26 adolescents diagnosed with MDD and 37 well-matched healthy controls (HCL), who completed an emotion identification task with dynamically morphing faces during functional magnetic resonance imaging (fMRI). We analyzed the behavioral data using a sequential sampling model of response time (RT) commonly used to elucidate aspects of cognition in binary perceptual decision-making tasks: the Linear Ballistic Accumulator (LBA) model. Using a hierarchical Bayesian estimation method, we obtained group-level and individual-level estimates of the LBA parameters for the facial emotion identification task. While the MDD and HCL groups did not differ in mean RT, accuracy, or group-level estimates of perceptual processing efficiency (i.e., the drift rate parameter of the LBA), the MDD group showed significantly reduced responses in the left fusiform gyrus compared to the HCL group during the facial emotion identification task. Furthermore, within the MDD group, fMRI signal in the left fusiform gyrus during affective face processing was significantly associated with greater individual-level estimates of perceptual processing efficiency. Our results therefore suggest that affective processing biases in adolescents with MDD are characterized by greater perceptual processing efficiency for affective visual information in sensory brain regions responsible for the early processing of visual information. The theoretical, methodological, and clinical implications of our results are discussed. PMID:26869950
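The Linear Ballistic Accumulator mentioned above assigns one deterministic accumulator to each response option, with a uniformly distributed start point, a drift rate that varies normally across trials, and a fixed threshold; whichever accumulator reaches threshold first determines the choice and the decision time. The forward simulation below illustrates those mechanics under assumed parameter values; it is a sketch, not the hierarchical Bayesian estimation procedure the authors used.

```python
import numpy as np

def simulate_lba(n_trials, mean_drifts, A=0.5, b=1.0, s=0.25, t0=0.2, seed=0):
    """Forward-simulate a Linear Ballistic Accumulator (Brown & Heathcote, 2008).

    mean_drifts : mean drift rate per accumulator (one per response option)
    A           : upper bound of the uniform start-point distribution
    b           : response threshold (b >= A)
    s           : between-trial standard deviation of drift rates
    t0          : nondecision time, in seconds
    """
    rng = np.random.default_rng(seed)
    mean_drifts = np.asarray(mean_drifts, dtype=float)

    start = rng.uniform(0.0, A, size=(n_trials, mean_drifts.size))   # start points
    v = rng.normal(mean_drifts, s, size=start.shape)                 # trial-wise drift rates
    with np.errstate(divide="ignore", invalid="ignore"):
        # Accumulators with non-positive drift never reach threshold on that trial.
        finish = np.where(v > 0, (b - start) / v, np.inf)

    choice = np.argmin(finish, axis=1)          # winning accumulator
    rt = np.min(finish, axis=1) + t0            # decision time plus nondecision time
    return choice, rt

# Assumed two-choice example: accumulator 0 has the higher mean drift ("correct" response).
choice, rt = simulate_lba(n_trials=5000, mean_drifts=[1.2, 0.8])
print(f"p(choice 0) = {np.mean(choice == 0):.3f}, mean RT = {np.mean(rt):.3f} s")
```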