Similarity relations in visual search predict rapid visual categorization
Mohan, Krithika; Arun, S. P.
2012-01-01
How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947
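The similarity account summarized in this abstract can be illustrated with a toy model (entirely hypothetical function names, parameters, and linear form; not the authors' fitted model): categorization time shrinks as an object's similarity to its own category grows relative to its similarity to items outside the category.

```python
# Toy sketch of a similarity-based account of categorization time.
# Illustrative only; the variable names and the linear form are
# assumptions, not the fitted model from the paper.

def predicted_rt(sim_within, sim_outside, base_rt=400.0, gain=200.0):
    """Predict categorization time (ms) from mean perceived similarity
    of an object to members within vs. outside its category.

    Higher within-category similarity and lower outside-category
    similarity -> faster categorization (shorter RT)."""
    # Discriminability of the object's category membership:
    d = sim_within - sim_outside
    return base_rt + gain * (1.0 - d)

# A typical category member (high within, low outside similarity):
typical = predicted_rt(sim_within=0.9, sim_outside=0.2)
# An atypical member (lower within-category similarity):
atypical = predicted_rt(sim_within=0.5, sim_outside=0.2)
print(typical, atypical)  # atypical objects take longer, as reported
```

The same quantity also captures the reported rejection cost: an object far from both categories yields low discriminability and hence a long predicted response time.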
Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception
Bejjanki, Vikranth Rao; Clayards, Meghan; Knill, David C.; Aslin, Richard N.
2011-01-01
Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues, during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one and participants' performance is consistent with an optimal model in which environmental, within category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue-combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks. PMID:21637344
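The normative cue-weighting rule this abstract builds on, weights proportional to inverse variance, with the within-category (environmental) variance added to each cue's sensory variance, can be sketched as follows. This is a minimal illustration of the standard rule plus the abstract's extension; the paper's exact formulation may differ.

```python
# Sketch of reliability-weighted cue integration for a categorical
# task: each cue's effective variance is its sensory variance plus the
# environmental (within-category) variance, and weights are
# proportional to the inverse of that effective variance.
# Illustrative parameter values, not the study's estimates.

def cue_weights(sigma_aud, sigma_vis, sigma_cat=0.0):
    """Return (auditory, visual) integration weights."""
    var_a = sigma_aud**2 + sigma_cat**2
    var_v = sigma_vis**2 + sigma_cat**2
    rel_a, rel_v = 1.0 / var_a, 1.0 / var_v   # reliabilities
    total = rel_a + rel_v
    return rel_a / total, rel_v / total

# Equal sensory noise -> equal weighting:
print(cue_weights(1.0, 1.0))   # (0.5, 0.5)
# A noisier visual cue is down-weighted:
print(cue_weights(1.0, 2.0))   # auditory weight > visual weight
```

Note how a large shared category variance pushes the weights back toward equality, which is why single-cue sensory reliability alone cannot predict the optimal weights in categorical tasks.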
Saneyoshi, Ayako; Michimata, Chikashi
2009-12-01
Participants performed two object-matching tasks for novel, non-nameable objects consisting of geons. For each original stimulus, two transformations were applied to create comparison stimuli. In the categorical transformation, a geon connected to geon A was moved to geon B. In the coordinate transformation, a geon connected to geon A was moved to a different position on geon A. The Categorical task consisted of the original and the categorically transformed objects. The Coordinate task consisted of the original and the coordinately transformed objects. The original object was presented to the central visual field, followed by a comparison object presented to the right or left visual half-fields (RVF and LVF). The results showed an RVF advantage for the Categorical task and an LVF advantage for the Coordinate task. The possibility that categorical and coordinate spatial processing subsystems would be basic computational elements for between- and within-category object recognition was discussed.
Scene and human face recognition in the central vision of patients with glaucoma
Aptel, Florent; Attye, Arnaud; Guyader, Nathalie; Boucart, Muriel; Chiquet, Christophe; Peyrin, Carole
2018-01-01
Primary open-angle glaucoma (POAG) firstly mainly affects peripheral vision. Current behavioral studies support the idea that visual defects of patients with POAG extend into parts of the central visual field classified as normal by static automated perimetry analysis. This is particularly true for visual tasks involving processes of a higher level than mere detection. The purpose of this study was to assess visual abilities of POAG patients in central vision. Patients were assigned to two groups following a visual field examination (Humphrey 24–2 SITA-Standard test). Patients with both peripheral and central defects and patients with peripheral but no central defect, as well as age-matched controls, participated in the experiment. All participants had to perform two visual tasks where low-contrast stimuli were presented in the central 6° of the visual field. A categorization task of scene images and human face images assessed high-level visual recognition abilities. In contrast, a detection task using the same stimuli assessed low-level visual function. The difference in performance between detection and categorization revealed the cost of high-level visual processing. Compared to controls, patients with a central visual defect showed a deficit in both detection and categorization of all low-contrast images. This is consistent with the abnormal retinal sensitivity as assessed by perimetry. However, the deficit was greater for categorization than detection. Patients without a central defect showed similar performances to the controls concerning the detection and categorization of faces. However, while the detection of scene images was well-maintained, these patients showed a deficit in their categorization. This suggests that the simple loss of peripheral vision could be detrimental to scene recognition, even when the information is displayed in central vision. 
This study revealed subtle defects in the central visual field of POAG patients that cannot be predicted by static automated perimetry assessment using Humphrey 24–2 SITA-Standard test. PMID:29481572
The visual attention span deficit in dyslexia is visual and not verbal.
Lobier, Muriel; Zoubrinetzky, Rachel; Valdois, Sylviane
2012-06-01
The visual attention (VA) span deficit hypothesis of dyslexia posits that letter string deficits are a consequence of impaired visual processing. Alternatively, some have interpreted this deficit as resulting from a visual-to-phonology code mapping impairment. This study aims to disambiguate between the two interpretations by investigating performance in a non-verbal character string visual categorization task with verbal and non-verbal stimuli. Results show that VA span ability predicts performance for the non-verbal visual processing task in normal reading children. Furthermore, VA span impaired dyslexic children are also impaired for the categorization task independently of stimuli type. This supports the hypothesis that the underlying impairment responsible for the VA span deficit is visual, not verbal. Copyright © 2011 Elsevier Srl. All rights reserved.
Banno, Hayaki; Saiki, Jun
2015-03-01
Recent studies have sought to determine which levels of categories are processed first in visual scene categorization and have shown that the natural and man-made superordinate-level categories are understood faster than are basic-level categories. The current study examined the robustness of the superordinate-level advantage in a visual scene categorization task. A go/no-go categorization task was evaluated with response time distribution analysis using an ex-Gaussian template. A visual scene was categorized as either superordinate or basic level, and two basic-level categories forming a superordinate category were judged as either similar or dissimilar to each other. First, outdoor/indoor and natural/man-made groupings were used as superordinate categories to investigate whether the advantage could be generalized beyond the natural/man-made boundary. Second, a set of images forming a superordinate category was manipulated. We predicted that decreasing image set similarity within the superordinate-level category would work against the speed advantage. We found that basic-level categorization was faster than outdoor/indoor categorization when the outdoor category comprised dissimilar basic-level categories. Our results indicate that the superordinate-level advantage in visual scene categorization is labile across different categories and category structures. © 2015 SAGE Publications.
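The ex-Gaussian response-time analysis mentioned above can be sketched with SciPy's exponentially modified normal distribution. The simulated data and parameter values below are illustrative, not the study's; SciPy's `exponnorm(K, loc, scale)` maps onto the ex-Gaussian convention as mu = loc, sigma = scale, tau = K * scale.

```python
# Sketch of an ex-Gaussian fit to response times: a Gaussian component
# (mu, sigma) convolved with an exponential tail (tau). Simulated RTs
# stand in for the go/no-go data analyzed in the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, tau = 450.0, 50.0, 120.0
rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# SciPy parameterizes the ex-Gaussian as exponnorm(K, loc, scale)
# with K = tau / sigma, loc = mu, scale = sigma.
K_hat, loc_hat, scale_hat = stats.exponnorm.fit(rts)
mu_hat, sigma_hat, tau_hat = loc_hat, scale_hat, K_hat * scale_hat
print(round(mu_hat), round(sigma_hat), round(tau_hat))
```

Comparing fitted mu (distribution shift) against tau (tail stretch) across conditions is what lets this kind of analysis localize where a categorization-speed advantage comes from.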
Specific-Token Effects in Screening Tasks: Possible Implications for Aviation Security
ERIC Educational Resources Information Center
Smith, J. David; Redford, Joshua S.; Washburn, David A.; Taglialatela, Lauren A.
2005-01-01
Screeners at airport security checkpoints perform an important categorization task in which they search for threat items in complex x-ray images. But little is known about how the processes of categorization stand up to visual complexity. The authors filled this research gap with screening tasks in which participants searched for members of target…
Schmidt, K; Forkmann, K; Sinke, C; Gratz, M; Bitz, A; Bingel, U
2016-07-01
Compared to peripheral pain, trigeminal pain elicits higher levels of fear, which is assumed to enhance the interruptive effects of pain on concomitant cognitive processes. In this fMRI study we examined the behavioral and neural effects of trigeminal (forehead) and peripheral (hand) pain on visual processing and memory encoding. Cerebral activity was measured in 23 healthy subjects performing a visual categorization task that was immediately followed by a surprise recognition task. During the categorization task subjects received concomitant noxious electrical stimulation on the forehead or hand. Our data show that fear ratings were significantly higher for trigeminal pain. Categorization and recognition performance did not differ between pictures that were presented with trigeminal and peripheral pain. However, object categorization in the presence of trigeminal pain was associated with stronger activity in task-relevant visual areas (lateral occipital complex, LOC), memory encoding areas (hippocampus and parahippocampus) and areas implicated in emotional processing (amygdala) compared to peripheral pain. Further, individual differences in neural activation between the trigeminal and the peripheral condition were positively related to differences in fear ratings between both conditions. Functional connectivity between amygdala and LOC was increased during trigeminal compared to peripheral painful stimulation. Fear-driven compensatory resource activation seems to be enhanced for trigeminal stimuli, presumably due to their exceptional biological relevance. Copyright © 2016 Elsevier Inc. All rights reserved.
Feature saliency and feedback information interactively impact visual category learning
Hammer, Rubi; Sloutsky, Vladimir; Grill-Spector, Kalanit
2015-01-01
Visual category learning (VCL) involves detecting which features are most relevant for categorization. VCL relies on attentional learning, which enables effectively redirecting attention to an object’s features most relevant for categorization, while ‘filtering out’ irrelevant features. When features relevant for categorization are not salient, VCL relies also on perceptual learning, which enables becoming more sensitive to subtle yet important differences between objects. Little is known about how attentional learning and perceptual learning interact when VCL relies on both processes at the same time. Here we tested this interaction. Participants performed VCL tasks in which they learned to categorize novel stimuli by detecting the feature dimension relevant for categorization. Tasks varied both in feature saliency (low-saliency tasks that required perceptual learning vs. high-saliency tasks), and in feedback information (tasks with mid-information, moderately ambiguous feedback that increased attentional load, vs. tasks with high-information non-ambiguous feedback). We found that mid-information and high-information feedback were similarly effective for VCL in high-saliency tasks. This suggests that an increased attentional load, associated with the processing of moderately ambiguous feedback, has little effect on VCL when features are salient. In low-saliency tasks, VCL relied on slower perceptual learning; but when the feedback was highly informative participants were able to ultimately attain the same performance as during the high-saliency VCL tasks. However, VCL was significantly compromised in the low-saliency mid-information feedback task. We suggest that such low-saliency mid-information learning scenarios are characterized by a ‘cognitive loop paradox’ where two interdependent learning processes have to take place simultaneously. PMID:25745404
ERIC Educational Resources Information Center
Roberson, Debi; Pak, Hyensou; Hanley, J. Richard
2008-01-01
In this study we demonstrate that Korean (but not English) speakers show Categorical perception (CP) on a visual search task for a boundary between two Korean colour categories that is not marked in English. These effects were observed regardless of whether target items were presented to the left or right visual field. Because this boundary is…
ERIC Educational Resources Information Center
Harel, Assaf; Bentin, Shlomo
2009-01-01
The type of visual information needed for categorizing faces and nonface objects was investigated by manipulating spatial frequency scales available in the image during a category verification task addressing basic and subordinate levels. Spatial filtering had opposite effects on faces and airplanes that were modulated by categorization level. The…
Sofer, Imri; Crouzet, Sébastien M.; Serre, Thomas
2015-01-01
Observers can rapidly perform a variety of visual tasks such as categorizing a scene as open, as outdoor, or as a beach. Although we know that different tasks are typically associated with systematic differences in behavioral responses, to date, little is known about the underlying mechanisms. Here, we implemented a single integrated paradigm that links perceptual processes with categorization processes. Using a large image database of natural scenes, we trained machine-learning classifiers to derive quantitative measures of task-specific perceptual discriminability based on the distance between individual images and different categorization boundaries. We showed that the resulting discriminability measure accurately predicts variations in behavioral responses across categorization tasks and stimulus sets. We further used the model to design an experiment, which challenged previous interpretations of the so-called “superordinate advantage.” Overall, our study suggests that observed differences in behavioral responses across rapid categorization tasks reflect natural variations in perceptual discriminability. PMID:26335683
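The distance-to-boundary discriminability measure described above can be sketched with a linear classifier on synthetic features. The features below are stand-ins for the paper's scene descriptors, and the classifier choice is an assumption; the idea is simply that an image's distance from the learned categorization boundary predicts how hard it is to categorize.

```python
# Sketch of classifier-derived perceptual discriminability: train a
# linear classifier on image features, then use each image's signed
# distance to the decision boundary as a per-image difficulty measure.
# Synthetic 10-d features stand in for real scene descriptors.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
X0 = rng.normal(-1.0, 1.0, size=(200, 10))   # category A features
X1 = rng.normal(+1.0, 1.0, size=(200, 10))   # category B features
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

clf = LinearSVC().fit(X, y)
# Signed distance to the categorization boundary, per image; images
# far from the boundary should be "easy" (fast, accurate responses),
# images near it "hard" -- the behavioral link tested in the paper.
margins = clf.decision_function(X)
print(round(float(np.abs(margins).mean()), 2))
```

Because the same images sit at different distances from different category boundaries (e.g., beach vs. outdoor), this measure naturally predicts task-dependent differences in behavior without invoking separate mechanisms per level.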
A Behavioral Study of Distraction by Vibrotactile Novelty
ERIC Educational Resources Information Center
Parmentier, Fabrice B. R.; Ljungberg, Jessica K.; Elsley, Jane V.; Lindkvist, Markus
2011-01-01
Past research has demonstrated that the occurrence of unexpected task-irrelevant changes in the auditory or visual sensory channels captured attention in an obligatory fashion, hindering behavioral performance in ongoing auditory or visual categorization tasks and generating orientation and re-orientation electrophysiological responses. We report…
de la Rosa, Stephan; Ekramnia, Mina; Bülthoff, Heinrich H.
2016-01-01
The ability to discriminate between different actions is essential for action recognition and social interactions. Surprisingly, previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target action and direction discrimination specific visual processes. In separate conditions participants visually adapted to forward and backward moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms. PMID:26941633
Zoubrinetzky, Rachel; Collet, Gregory; Serniclaes, Willy; Nguyen-Morel, Marie-Ange; Valdois, Sylviane
2016-01-01
We tested the hypothesis that the categorical perception deficit of speech sounds in developmental dyslexia is related to phoneme awareness skills, whereas a visual attention (VA) span deficit constitutes an independent deficit. Phoneme awareness tasks, VA span tasks and categorical perception tasks of phoneme identification and discrimination using a d/t voicing continuum were administered to 63 dyslexic children and 63 control children matched on chronological age. Results showed significant differences in categorical perception between the dyslexic and control children. Significant correlations were found between categorical perception skills, phoneme awareness and reading. Although VA span correlated with reading, no significant correlations were found between either categorical perception or phoneme awareness and VA span. Mediation analyses performed on the whole dyslexic sample suggested that the effect of categorical perception on reading might be mediated by phoneme awareness. This relationship was independent of the participants' VA span abilities. Two groups of dyslexic children with a single phoneme awareness or a single VA span deficit were then identified. The phonologically impaired group showed lower categorical perception skills than the control group but categorical perception was similar in the VA span impaired dyslexic and control children. The overall findings suggest that the link between categorical perception, phoneme awareness and reading is independent from VA span skills. These findings provide new insights on the heterogeneity of developmental dyslexia. They suggest that phonological processes and VA span independently affect reading acquisition. PMID:26950210
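The regression-based logic of the mediation analyses reported above (categorical perception -> phoneme awareness -> reading) can be sketched on synthetic data. This is the bare ordinary-least-squares decomposition of a total effect into direct and indirect parts; the paper's actual procedure (e.g., significance testing of the indirect effect) may differ.

```python
# Sketch of a simple mediation decomposition with OLS: the total
# effect of categorical perception (CP) on reading splits exactly into
# a direct effect and an indirect effect routed through phoneme
# awareness (PA). Synthetic data, illustrative effect sizes.
import numpy as np

rng = np.random.default_rng(3)
n = 200
cp = rng.normal(size=n)                        # categorical perception
pa = 0.7 * cp + rng.normal(scale=0.5, size=n)  # phoneme awareness
reading = 0.6 * pa + rng.normal(scale=0.5, size=n)

def slope(x, y):
    # OLS slope of y on x (with intercept)
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

c_total = slope(cp, reading)                   # total effect of CP
a = slope(cp, pa)                              # path a: CP -> PA
# Path b and direct effect c' from the regression reading ~ CP + PA:
X = np.column_stack([np.ones(n), cp, pa])
_, c_direct, b = np.linalg.lstsq(X, reading, rcond=None)[0]
indirect = a * b                               # mediated effect
print(round(c_total, 2), round(indirect, 2), round(c_direct, 2))
```

For OLS with intercepts the identity c_total = c_direct + a*b holds exactly, which is why a large indirect share and a near-zero direct effect support a mediation account like the one reported.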
Metacognitive deficits in categorization tasks in a population with impaired inner speech.
Langland-Hassan, Peter; Gauker, Christopher; Richardson, Michael J; Dietz, Aimee; Faries, Frank R
2017-11-01
This study examines the relation of language use to a person's ability to perform categorization tasks and to assess their own abilities in those categorization tasks. A silent rhyming task was used to confirm that a group of people with post-stroke aphasia (PWA) had corresponding covert language production (or "inner speech") impairments. The performance of the PWA was then compared to that of age- and education-matched healthy controls on three kinds of categorization tasks and on metacognitive self-assessments of their performance on those tasks. The PWA showed no deficits in their ability to categorize objects for any of the three trial types (visual, thematic, and categorial). However, on the categorial trials, their metacognitive assessments of whether they had categorized correctly were less reliable than those of the control group. The categorial trials were distinguished from the others by the fact that the categorization could not be based on some immediately perceptible feature or on the objects' being found together in a type of scenario or setting. This result offers preliminary evidence for a link between covert language use and a specific form of metacognition. Copyright © 2017 Elsevier B.V. All rights reserved.
Flaisch, Tobias; Imhof, Martin; Schmälzle, Ralf; Wentz, Klaus-Ulrich; Ibach, Bernd; Schupp, Harald T
2015-01-01
The present study utilized functional magnetic resonance imaging (fMRI) to examine the neural processing of concurrently presented emotional stimuli under varying explicit and implicit attention demands. Specifically, in separate trials, participants indicated the category of either pictures or words. The words were placed over the center of the pictures and the picture-word compound-stimuli were presented for 1500 ms in a rapid event-related design. The results reveal pronounced main effects of task and emotion: the picture categorization task prompted strong activations in visual, parietal, temporal, frontal, and subcortical regions; the word categorization task evoked increased activation only in left extrastriate cortex. Furthermore, beyond replicating key findings regarding emotional picture and word processing, the results point to a dissociation of semantic-affective and sensory-perceptual processes for words: while emotional words engaged semantic-affective networks of the left hemisphere regardless of task, the increased activity in left extrastriate cortex associated with explicitly attending to words was diminished when the word was overlaid over an erotic image. Finally, we observed a significant interaction between Picture Category and Task within dorsal visual-associative regions, inferior parietal, and dorsolateral, and medial prefrontal cortices: during the word categorization task, activation was increased in these regions when the words were overlaid over erotic as compared to romantic pictures. During the picture categorization task, activity in these areas was relatively decreased when categorizing erotic as compared to romantic pictures. Thus, the emotional intensity of the pictures strongly affected brain regions devoted to the control of task-related word or picture processing. These findings are discussed with respect to the interplay of obligatory stimulus processing with task-related attentional control mechanisms. PMID:26733895
Semantic Categorization Precedes Affective Evaluation of Visual Scenes
ERIC Educational Resources Information Center
Nummenmaa, Lauri; Hyona, Jukka; Calvo, Manuel G.
2010-01-01
We compared the primacy of affective versus semantic categorization by using forced-choice saccadic and manual response tasks. Participants viewed paired emotional and neutral scenes involving humans or animals flashed rapidly in extrafoveal vision. Participants were instructed to categorize the targets by saccading toward the location occupied by…
Visual Categorization of Natural Movies by Rats
Vinken, Kasper; Vermaercke, Ben
2014-01-01
Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. PMID:25100598
Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.
Nummenmaa, Lauri; Calvo, Manuel G
2015-04-01
Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. Robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.
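The random-effects pooling of correlation effect sizes used in this meta-analysis can be sketched via Fisher's z transform with DerSimonian-Laird estimation of the between-study variance. The r values and sample sizes below are made up for illustration, and the authors' exact estimator may differ.

```python
# Sketch of a random-effects meta-analysis over r effect sizes:
# Fisher z transform, DerSimonian-Laird tau^2, inverse-variance
# pooling, back-transform to r. Illustrative inputs only.
import math

def pool_random_effects(rs, ns):
    """Pool correlations (rs) from studies with sample sizes (ns)."""
    zs = [math.atanh(r) for r in rs]
    vs = [1.0 / (n - 3) for n in ns]          # variance of Fisher z
    ws = [1.0 / v for v in vs]                # fixed-effect weights
    z_fixed = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    q = sum(w * (z - z_fixed) ** 2 for w, z in zip(ws, zs))
    c = sum(ws) - sum(w * w for w in ws) / sum(ws)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)  # between-study variance
    ws_re = [1.0 / (v + tau2) for v in vs]    # random-effects weights
    z_re = sum(w * z for w, z in zip(ws_re, zs)) / sum(ws_re)
    return math.tanh(z_re)                    # back-transform to r

print(round(pool_random_effects([0.30, 0.45, 0.25], [100, 150, 80]), 3))
```

The random-effects weights shrink toward equality as tau^2 grows, which is the appropriate model when true effects vary across stimulus types, as the moderator analysis above indicates they do.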
How Fast is Famous Face Recognition?
Barragan-Jason, Gladys; Lachat, Fanny; Barbeau, Emmanuel J.
2012-01-01
The rapid recognition of familiar faces is crucial for social interactions. However, the actual speed with which recognition can be achieved remains largely unknown as most studies have been carried out without any speed constraints. Different paradigms have been used, leading to conflicting results, and although many authors suggest that face recognition is fast, the speed of face recognition has not been directly compared to “fast” visual tasks. In this study, we sought to overcome these limitations. Subjects performed three tasks, a familiarity categorization task (famous faces among unknown faces), a superordinate categorization task (human faces among animal ones), and a gender categorization task. All tasks were performed under speed constraints. The results show that, despite the use of speed constraints, subjects were slow when they had to categorize famous faces: minimum reaction time was 467 ms, which is 180 ms more than during superordinate categorization and 160 ms more than in the gender condition. Our results are compatible with a hierarchy of face processing from the superordinate level to the familiarity level. The processes taking place between detection and recognition need to be investigated in detail. PMID:23162503
A comparison of haptic material perception in blind and sighted individuals.
Baumgartner, Elisabeth; Wiebel, Christiane B; Gegenfurtner, Karl R
2015-10-01
We investigated material perception in blind participants to explore the influence of visual experience on material representations and the relationship between visual and haptic material perception. In a previous study with sighted participants, we had found participants' visual and haptic judgments of material properties to be very similar (Baumgartner, Wiebel, & Gegenfurtner, 2013). In a categorization task, however, visual exploration had led to higher categorization accuracy than haptic exploration. Here, we asked congenitally blind participants to explore different materials haptically and rate several material properties in order to assess the role of the visual sense for the emergence of haptic material perception. Principal components analyses combined with a procrustes superimposition showed that the material representations of blind and blindfolded sighted participants were highly similar. We also measured haptic categorization performance, which was equal for the two groups. We conclude that haptic material representations can emerge independently of visual experience, and that there are no advantages for either group of observers in haptic categorization. Copyright © 2015 Elsevier Ltd. All rights reserved.
Rajaei, Karim; Khaligh-Razavi, Seyed-Mahdi; Ghodrati, Masoud; Ebrahimpour, Reza; Shiri Ahmad Abadi, Mohammad Ebrahim
2012-01-01
The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply the Adaptive Resonance Theory (ART) for extracting informative intermediate level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task.
Visual categorization of natural movies by rats.
Vinken, Kasper; Vermaercke, Ben; Op de Beeck, Hans P
2014-08-06
Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. Copyright © 2014 the authors.
Active sensing in the categorization of visual patterns
Yang, Scott Cheng-Hsin; Lengyel, Máté; Wolpert, Daniel M
2016-01-01
Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations. DOI: http://dx.doi.org/10.7554/eLife.12215.001 PMID:26880546
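The Bayesian active sensor described above selects the fixation that is expected to be most informative about the category. A minimal sketch of that selection rule, assuming discrete categories and per-location observation likelihoods (the priors and likelihood tables below are hypothetical, not the authors' implementation):

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def expected_info_gain(prior, likelihoods):
    """Expected entropy reduction from observing one location.
    likelihoods[c][o] = P(observation o at this location | category c)."""
    h0 = entropy(prior)
    n_obs = len(likelihoods[0])
    gain = 0.0
    for o in range(n_obs):
        # marginal probability of seeing observation o here
        p_o = sum(prior[c] * likelihoods[c][o] for c in range(len(prior)))
        if p_o == 0:
            continue
        # Bayes update of the category posterior given observation o
        post = [prior[c] * likelihoods[c][o] / p_o for c in range(len(prior))]
        gain += p_o * (h0 - entropy(post))
    return gain

def best_location(prior, per_location_likelihoods):
    """Active sensing rule: fixate the location that maximizes
    expected information gain about the category."""
    gains = [expected_info_gain(prior, lk) for lk in per_location_likelihoods]
    return max(range(len(gains)), key=gains.__getitem__)
```

A location whose observations are equally likely under every category yields zero expected gain, so the rule always prefers category-diagnostic locations.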
Categorical clustering of the neural representation of color.
Brouwer, Gijs Joost; Heeger, David J
2013-09-25
Cortical activity was measured with functional magnetic resonance imaging (fMRI) while human subjects viewed 12 stimulus colors and performed either a color-naming or diverted attention task. A forward model was used to extract lower dimensional neural color spaces from the high-dimensional fMRI responses. The neural color spaces in two visual areas, human ventral V4 (V4v) and VO1, exhibited clustering (greater similarity between activity patterns evoked by stimulus colors within a perceptual category, compared to between-category colors) for the color-naming task, but not for the diverted attention task. Response amplitudes and signal-to-noise ratios were higher in most visual cortical areas for color naming compared to diverted attention. But only in V4v and VO1 did the cortical representation of color change to a categorical color space. A model is presented that induces such a categorical representation by changing the response gains of subpopulations of color-selective neurons.
Behavioral demand modulates object category representation in the inferior temporal cortex
Emadi, Nazli
2014-01-01
Visual object categorization is a critical task in our daily life. Many studies have explored category representation in the inferior temporal (IT) cortex at the level of single neurons and population. However, it is not clear how behavioral demands modulate this category representation. Here, we recorded from the IT single neurons in monkeys performing two different tasks with identical visual stimuli: passive fixation and body/object categorization. We found that category selectivity of the IT neurons was improved in the categorization compared with the passive task where reward was not contingent on image category. The category improvement was the result of larger rate enhancement for the preferred category and smaller response variability for both preferred and nonpreferred categories. These specific modulations in the responses of IT category neurons enhanced signal-to-noise ratio of the neural responses to discriminate better between the preferred and nonpreferred categories. Our results provide new insight into the adaptable category representation in the IT cortex, which depends on behavioral demands. PMID:25080572
Transfer learning for visual categorization: a survey.
Shao, Ling; Zhu, Fan; Li, Xuelong
2015-05-01
Regular machine learning and data mining techniques study the training data for future inferences under a major assumption that the future data are within the same feature space or have the same distribution as the training data. However, due to the limited availability of human-labeled training data, training data that stay in the same feature space or have the same distribution as the future data cannot be guaranteed to be sufficient to avoid the over-fitting problem. In real-world applications, apart from data in the target domain, related data in a different domain can also be included to expand the availability of our prior knowledge about the target future data. Transfer learning addresses such cross-domain learning problems by extracting useful information from data in a related domain and transferring it for use in target tasks. In recent years, with transfer learning being applied to visual categorization, some typical problems, e.g., view divergence in action recognition tasks and concept drifting in image classification tasks, can be efficiently solved. In this paper, we survey state-of-the-art transfer learning algorithms in visual categorization applications, such as object recognition, image classification, and human action recognition.
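The parameter-transfer idea surveyed here can be illustrated with a toy sketch: train a classifier on plentiful source-domain data, then warm-start a target-domain model from its weights so that only a few labeled target examples are needed. Everything below (the 1-D data, the logistic model, the learning rate) is an illustrative assumption, not any specific surveyed algorithm:

```python
import math

def train_logreg(data, w=None, lr=0.5, epochs=200):
    """Logistic regression by stochastic gradient descent.
    `w` (optional) warm-starts from parameters learned on a related
    source domain; the bias is always re-learned from scratch."""
    dim = len(data[0][0])
    w = list(w) if w else [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(y=1)
            g = p - y                         # gradient of the log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def accuracy(model, data):
    """Fraction of examples on which the sign of the score matches y."""
    w, b = model
    correct = 0
    for x, y in data:
        score = sum(wi * xi for wi, xi in zip(w, x)) + b
        correct += int((score > 0) == bool(y))
    return correct / len(data)
```

With the warm start, a handful of fine-tuning epochs on a couple of target examples can already separate the target classes; the same budget starting from zero weights is far less reliable.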
Dorsal hippocampus is necessary for visual categorization in rats.
Kim, Jangjin; Castro, Leyre; Wasserman, Edward A; Freeman, John H
2018-02-23
The hippocampus may play a role in categorization because of the need to differentiate stimulus categories (pattern separation) and to recognize category membership of stimuli from partial information (pattern completion). We hypothesized that the hippocampus would be more crucial for categorization of low-density (few relevant features) stimuli, due to the higher demand on pattern separation and pattern completion, than for categorization of high-density (many relevant features) stimuli. Using a touchscreen apparatus, rats were trained to categorize multiple abstract stimuli into two different categories. Each stimulus was a pentagonal configuration of five visual features; some of the visual features were relevant for defining the category whereas others were irrelevant. Two groups of rats were trained with either a high (dense, n = 8) or low (sparse, n = 8) number of category-relevant features. Upon reaching criterion discrimination (≥75% correct on 2 consecutive days), bilateral cannulas were implanted in the dorsal hippocampus. The rats were then given either vehicle or muscimol infusions into the hippocampus just prior to various testing sessions. They were tested with: the previously trained stimuli (trained), novel stimuli involving new irrelevant features (novel), stimuli involving relocated features (relocation), and a single relevant feature (singleton). In training, the dense group reached criterion faster than the sparse group, indicating that the sparse task was more difficult than the dense task. In testing, accuracy of both groups was equally high for trained and novel stimuli. However, both groups showed impaired accuracy in the relocation and singleton conditions, with a greater deficit in the sparse group. The testing data indicate that rats encode both the relevant features and the spatial locations of the features.
Hippocampal inactivation impaired visual categorization regardless of the density of the category-relevant features for the trained, novel, relocation, and singleton stimuli. Hippocampus-mediated pattern completion and pattern separation mechanisms may be necessary for visual categorization involving overlapping irrelevant features. © 2018 Wiley Periodicals, Inc.
Color categories affect pre-attentive color perception.
Clifford, Alexandra; Holmes, Amanda; Davies, Ian R L; Franklin, Anna
2010-10-01
Categorical perception (CP) of color is the faster and/or more accurate discrimination of colors from different categories than equivalently spaced colors from the same category. Here, we investigate whether color CP at early stages of chromatic processing is independent of top-down modulation from attention. A visual oddball task was employed where frequent and infrequent colored stimuli were either same- or different-category, with chromatic differences equated across conditions. Stimuli were presented peripheral to a central distractor task to elicit an event-related potential (ERP) known as the visual mismatch negativity (vMMN). The vMMN is an index of automatic and pre-attentive visual change detection arising from generating loci in visual cortices. The results revealed a greater vMMN for different-category than same-category change detection when stimuli appeared in the lower visual field, and an absence of attention-related ERP components. The findings provide the first clear evidence for an automatic and pre-attentive categorical code for color. Copyright © 2010 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Kyllingsbaek, Soren; Markussen, Bo; Bundesen, Claus
2012-01-01
The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is…
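The constant-rate assumption above lends itself to a simple simulation. A sketch, not the authors' model fit: tentative categorizations of stimulus i into category j arrive as independent Poisson processes with rate v(i, j) during the exposure window, and the category collecting the most tentative categorizations determines the response. The rate values and the winner-take-all decision rule here are illustrative assumptions:

```python
import random

def simulate_trial(rates, exposure, seed=None):
    """Count tentative categorizations for each category j, arriving as
    independent Poisson processes with rate v(i, j) during exposure.
    Arrivals are generated via exponential inter-arrival times."""
    rng = random.Random(seed)
    counts = []
    for v in rates:
        t, n = 0.0, 0
        while True:
            t += rng.expovariate(v)
            if t > exposure:
                break
            n += 1
        counts.append(n)
    return counts

def predicted_response(rates, exposure, trials=2000, seed=0):
    """Choice probabilities under a winner-take-all rule: the category
    with the most tentative categorizations wins (ties broken at random)."""
    rng = random.Random(seed)
    wins = [0] * len(rates)
    for _ in range(trials):
        counts = simulate_trial(rates, exposure, seed=rng.random())
        best = max(counts)
        winners = [j for j, c in enumerate(counts) if c == best]
        wins[rng.choice(winners)] += 1
    return [w / trials for w in wins]
```

Raising the exposure duration, like raising the rate ratio between the correct and confusable categories, pushes the predicted choice probabilities toward the higher-rate category.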
Neurophysiological Evidence for Categorical Perception of Color
ERIC Educational Resources Information Center
Holmes, Amanda; Franklin, Anna; Clifford, Alexandra; Davies, Ian
2009-01-01
The aim of this investigation was to examine the time course and the relative contributions of perceptual and post-perceptual processes to categorical perception (CP) of color. A visual oddball task was used with standard and deviant stimuli from same (within-category) or different (between-category) categories, with chromatic separations for…
Priming and the guidance by visual and categorical templates in visual search.
Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L
2014-01-01
Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in overall performance or selectivity. Altogether, the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if priming is controlled for, categorical- and visual-based templates enhance search guidance similarly.
Vanmarcke, Steven; Wagemans, Johan
2015-01-01
In everyday life, we are generally able to dynamically understand and adapt to socially (ir)relevant encounters, and to make appropriate decisions about them. All of this requires an impressive ability to directly filter and obtain the most informative aspects of a complex visual scene. Such rapid gist perception can be assessed in multiple ways. In the ultrafast categorization paradigm developed by Simon Thorpe et al. (1996), participants get a clear categorization task in advance and succeed at detecting the target object of interest (animal) almost perfectly (even with 20 ms exposures). Since this pioneering work, follow-up studies consistently reported population-level reaction time differences on different categorization tasks, indicating a superordinate advantage (animal versus dog) and effects of perceptual similarity (animals versus vehicles) and object category size (natural versus animal versus dog). In this study, we replicated and extended these separate findings by using a systematic collection of different categorization tasks (varying in presentation time, task demands, and stimuli) and focusing on individual differences in terms of, e.g., gender and intelligence. In addition to replicating the main findings from the literature, we find subtle yet consistent gender differences (women faster than men). PMID:26034569
Residual perception of biological motion in cortical blindness.
Ruffieux, Nicolas; Ramon, Meike; Lao, Junpeng; Colombo, Françoise; Stacchi, Lisa; Borruat, François-Xavier; Accolla, Ettore; Annoni, Jean-Marie; Caldara, Roberto
2016-12-01
From birth, the human visual system shows a remarkable sensitivity for perceiving biological motion. This visual ability relies on a distributed network of brain regions and can be preserved even after damage of high-level ventral visual areas. However, it remains unknown whether this critical biological skill can withstand the loss of vision following bilateral striate damage. To address this question, we tested the categorization of human and animal biological motion in BC, a rare case of cortical blindness after anoxia-induced bilateral striate damage. The severity of his impairment, encompassing various aspects of vision (i.e., color, shape, face, and object recognition) and causing blind-like behavior, contrasts with a residual ability to process motion. We presented BC with static or dynamic point-light displays (PLDs) of human or animal walkers. These stimuli were presented either individually, or in pairs in two-alternative forced choice (2AFC) tasks. When confronted with individual PLDs, the patient was unable to categorize the stimuli, irrespective of whether they were static or dynamic. In the 2AFC task, BC exhibited appropriate eye movements towards diagnostic information, but performed at chance level with static PLDs, in stark contrast to his ability to efficiently categorize dynamic biological agents. This striking ability to categorize biological motion when provided with top-down information is important for at least two reasons. Firstly, it emphasizes the importance of assessing patients' (visual) abilities across a range of task constraints, which can reveal potential residual abilities that may in turn represent a key feature for patient rehabilitation. Secondly, our findings reinforce the view that the neural network processing biological motion can efficiently operate despite severely impaired low-level vision, positing our natural predisposition for processing dynamicity in biological agents as a robust feature of human vision.
Copyright © 2016 Elsevier Ltd. All rights reserved.
Task and spatial frequency modulations of object processing: an EEG study.
Craddock, Matt; Martinovic, Jasna; Müller, Matthias M
2013-01-01
Visual object processing may follow a coarse-to-fine sequence imposed by fast processing of low spatial frequencies (LSF) and slow processing of high spatial frequencies (HSF). Objects can be categorized at varying levels of specificity: the superordinate (e.g. animal), the basic (e.g. dog), or the subordinate (e.g. Border Collie). We tested whether superordinate and more specific categorization depend on different spatial frequency ranges, and whether any such dependencies might be revealed by or influence signals recorded using EEG. We used event-related potentials (ERPs) and time-frequency (TF) analysis to examine the time course of object processing while participants performed either a grammatical gender-classification task (which generally forces basic-level categorization) or a living/non-living judgement (superordinate categorization) on everyday, real-life objects. Objects were filtered to contain only HSF or LSF. We found a greater positivity and greater negativity for HSF than for LSF pictures in the P1 and N1 respectively, but no effects of task on either component. A later, fronto-central negativity (N350) was more negative in the gender-classification task than the superordinate categorization task, which may indicate that this component relates to semantic or syntactic processing. We found no significant effects of task or spatial frequency on evoked or total gamma band responses. Our results demonstrate early differences in processing of HSF and LSF content that were not modulated by categorization task, with later responses reflecting such higher-level cognitive factors.
ERIC Educational Resources Information Center
Kisamore, April N.; Carr, James E.; LeBlanc, Linda A.
2011-01-01
It has been suggested that verbally sophisticated individuals engage in a series of precurrent behaviors (e.g., covert intraverbal behavior, grouping stimuli, visual imagining) to solve problems such as answering questions (Palmer, 1991; Skinner, 1953). We examined the effects of one problem solving strategy--visual imagining--on increasing…
Differences in perceptual learning transfer as a function of training task.
Green, C Shawn; Kattner, Florian; Siegel, Max H; Kersten, Daniel; Schrater, Paul R
2015-01-01
A growing body of research--including results from behavioral psychology, human structural and functional imaging, single-cell recordings in nonhuman primates, and computational modeling--suggests that perceptual learning effects are best understood as a change in the ability of higher-level integration or association areas to read out sensory information in the service of particular decisions. Work in this vein has argued that, depending on the training experience, the "rules" for this read-out can either be applicable to new contexts (thus engendering learning generalization) or can apply only to the exact training context (thus resulting in learning specificity). Here we contrast learning tasks designed to promote either stimulus-specific or stimulus-general rules. Specifically, we compare learning transfer across visual orientation following training on three different tasks: an orientation categorization task (which permits an orientation-specific learning solution), an orientation estimation task (which requires an orientation-general learning solution), and an orientation categorization task in which the relevant category boundary shifts on every trial (which lies somewhere between the two tasks above). While the simple orientation-categorization training task resulted in orientation-specific learning, the estimation and moving categorization tasks resulted in significant orientation learning generalization. The general framework tested here--that task specificity or generality can be predicted via an examination of the optimal learning solution--may be useful in building future training paradigms with certain desired outcomes.
Handwriting generates variable visual output to facilitate symbol learning.
Li, Julia X; James, Karin H
2016-03-01
Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing 2 hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then changes neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: 3 involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and 3 involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception both of variable and similar forms. Comparisons across the 6 conditions (N = 72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change though brain-body-environment interactions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Vanmarcke, Steven; Calders, Filip; Wagemans, Johan
2016-01-01
Although categorization can take place at different levels of abstraction, classic studies on semantic labeling identified the basic level, for example, dog, as the entry point for categorization. Ultrarapid categorization tasks have contradicted these findings, indicating that participants are faster at detecting superordinate-level information, for example, animal, in a complex visual image. We argue that both seemingly contradictory findings can be reconciled within the framework of parallel distributed processing and its successor Leabra (Local, Error-driven and Associative, Biologically Realistic Algorithm). The current study aimed at verifying this prediction in an ultrarapid categorization task with a dynamically changing presentation time (PT) for each briefly presented object, followed by a perceptual mask. Furthermore, we manipulated two defining task variables: level of categorization (basic vs. superordinate categorization) and object presentation mode (object-in-isolation vs. object-in-context). Unlike previous ultrarapid categorization research, which focused on reaction time, we used accuracy as our main dependent variable. Results indicated a consistent superordinate processing advantage, coinciding with an overall improvement in performance with longer PT and a significantly more accurate detection of objects in isolation, compared with objects in context, at lower stimulus PT. This contextual disadvantage disappeared when PT increased, indicating that figure-ground separation with recurrent processing is vital for meaningful contextual processing to occur.
Aphasic Patients Exhibit a Reversal of Hemispheric Asymmetries in Categorical Color Discrimination
ERIC Educational Resources Information Center
Paluy, Yulia; Gilbert, Aubrey L.; Baldo, Juliana V.; Dronkers, Nina F.; Ivry, Richard B.
2011-01-01
Patients with left hemisphere (LH) or right hemisphere (RH) brain injury due to stroke were tested on a speeded, color discrimination task in which two factors were manipulated: (1) the categorical relationship between the target and the distracters and (2) the visual field in which the target was presented. Similar to controls, the RH patients…
Zhong, Weifang; Li, You; Huang, Yulan; Li, He; Mo, Lei
2018-01-01
This study investigated whether and how a person's varied series of lexical categories, corresponding to different discriminatory characteristics of the same colors, affects his or her perception of those colors. In three experiments, Chinese participants were primed to categorize four graduated colors (dark green, light green, light blue, and dark blue) into green and blue; light color and dark color; or dark green, light green, light blue, and dark blue. The participants were then required to complete a visual search task. Reaction times in the visual search task indicated that different lateralized categorical perceptions (CPs) of color corresponded to the various priming situations. These results suggest that all of the lexical categories corresponding to different discriminatory characteristics of the same colors can influence people's perceptions of colors and that color perceptions can be influenced differently by distinct types of lexical categories depending on the context. Copyright © 2017 Cognitive Science Society, Inc.
Modelling eye movements in a categorical search task
Zelinsky, Gregory J.; Adeli, Hossein; Peng, Yifan; Samaras, Dimitris
2013-01-01
We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search. PMID:24018720
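The central mechanism this abstract describes, turning signed distances from an SVM classification boundary into a pixel-wise evidence map that can drive fixation selection, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' TAM implementation: the logistic squashing (Platt-style scaling) and the argmax fixation rule are assumptions for the sketch.

```python
import numpy as np

def decision_to_probability_map(decision_values, scale=1.0):
    """Squash signed distances from a classifier boundary into (0, 1)
    evidence values via a logistic function (Platt-style scaling)."""
    return 1.0 / (1.0 + np.exp(-scale * np.asarray(decision_values, dtype=float)))

def pick_next_fixation(prob_map):
    """Return the (row, col) of maximal target evidence -- a simple
    stand-in for a fixation-selection stage."""
    idx = np.argmax(prob_map)
    return np.unravel_index(idx, prob_map.shape)

# Toy 3x3 "image" of signed SVM distances (positive = target-like).
distances = np.array([[-2.0, 0.0, 1.0],
                      [0.5, 3.0, -1.0],
                      [-0.5, 0.2, 0.1]])
pmap = decision_to_probability_map(distances)
fix = pick_next_fixation(pmap)
```

In a full model the distances would come from a trained classifier evaluated over image patches; here they are hand-written to keep the sketch self-contained.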
Non-visual spatial tasks reveal increased interactions with stance postural control.
Woollacott, Marjorie; Vander Velde, Timothy
2008-05-07
The current investigation aimed to contrast the level and quality of dual-task interactions resulting from the combined performance of a challenging primary postural task and three specific, yet categorically dissociated, secondary central executive tasks. Experiments determined the extent to which modality-specific (visual vs. auditory) and code-specific (non-spatial vs. spatial) cognitive resources contributed to postural interference in young adults (n=9) in a dual-task setting. We hypothesized that the different forms of executive n-back task processing employed (visual-object, auditory-object and auditory-spatial) would display contrasting levels of interactions with tandem Romberg stance postural control, and that interactions within the spatial domain would be most vulnerable to dual-task interference. Across all cognitive tasks employed, including auditory-object (aOBJ), auditory-spatial (aSPA), and visual-object (vOBJ) tasks, increasing n-back task complexity produced correlated increases in verbal reaction time measures. Increasing cognitive task complexity also resulted in consistent decreases in judgment accuracy. Postural performance was significantly influenced by the type of cognitive loading delivered. At comparable levels of cognitive task difficulty (n-back demands and accuracy judgments) the performance of challenging auditory-spatial tasks produced significantly greater levels of postural sway than either the auditory-object or visual-object based tasks. These results suggest that it is the employment of limited non-visual spatially based coding resources that may underlie previously observed visual dual-task interference effects with stance postural control in healthy young adults.
The semantic category-based grouping in the Multiple Identity Tracking task.
Wei, Liuqing; Zhang, Xuemin; Li, Zhen; Liu, Jingyao
2018-01-01
In the Multiple Identity Tracking (MIT) task, categorical distinctions between targets and distractors have been found to facilitate tracking (Wei, Zhang, Lyu, & Li in Frontiers in Psychology, 7, 589, 2016). The purpose of this study was to further investigate the reasons for the facilitation effect, through six experiments. The results of Experiments 1-3 excluded the potential explanations of visual distinctiveness, attentional distribution strategy, and a working memory mechanism, respectively. When objects' visual information was preserved and categorical information was removed, the facilitation effect disappeared, suggesting that the visual distinctiveness between targets and distractors was not the main reason for the facilitation effect. Moreover, the facilitation effect was not the result of strategically shifting the attentional distribution, because the targets received more attention than the distractors in all conditions. Additionally, the facilitation effect did not come about because the identities of targets were encoded and stored in visual working memory to assist in the recovery from tracking errors; when working memory was disturbed by the object identities changing during tracking, the facilitation effect still existed. Experiments 4 and 5 showed that observers grouped targets together and segregated them from distractors on the basis of their categorical information. By doing this, observers could largely avoid distractor interference with tracking and improve tracking performance. Finally, Experiment 6 indicated that category-based grouping is not an automatic, but a goal-directed and effortful, strategy. In summary, the present findings show that a semantic category-based target-grouping mechanism exists in the MIT task, which is likely to be the major reason for the tracking facilitation effect.
Hellige, J B; Bloch, M I; Cowin, E L; Eng, T L; Eviatar, Z; Sergent, V
1994-09-01
Functional hemispheric asymmetries were examined for right- or left-handed men and women. Tasks involved (a) auditory processing of verbal material, (b) processing of emotions shown on faces, (c) processing of visual categorical and coordinate spatial relations, and (d) visual processing of verbal material. Similar performance asymmetries were found for the right-handed and left-handed groups, but the average asymmetries tended to be smaller for the left-handed group. For the most part, measures of performance asymmetry obtained from the different tasks did not correlate with each other, suggesting that individual subjects cannot be simply characterized as strongly or weakly lateralized. However, ear differences obtained in Task 1 did correlate significantly with certain visual field differences obtained in Task 4, suggesting that both tasks are sensitive to hemispheric asymmetry in similar phonetic or language-related processes.
ERP Evidence of Hemispheric Independence in Visual Word Recognition
ERIC Educational Resources Information Center
Nemrodov, Dan; Harpaz, Yuval; Javitt, Daniel C.; Lavidor, Michal
2011-01-01
This study examined the capability of the left hemisphere (LH) and the right hemisphere (RH) to perform a visual recognition task independently as formulated by the Direct Access Model (Fernandino, Iacoboni, & Zaidel, 2007). Healthy native Hebrew speakers were asked to categorize nouns and non-words (created from nouns by transposing two middle…
Iconic memory requires attention
Persuh, Marjan; Genzer, Boris; Melara, Robert D.
2012-01-01
Two experiments investigated whether attention plays a role in iconic memory, employing either a change detection paradigm (Experiment 1) or a partial-report paradigm (Experiment 2). In each experiment, attention was taxed during initial display presentation, focusing the manipulation on consolidation of information into iconic memory, prior to transfer into working memory. Observers were able to maintain high levels of performance (accuracy of change detection or categorization) even when concurrently performing an easy visual search task (low load). However, when the concurrent search was made difficult (high load), observers' performance dropped to almost chance levels, while search accuracy held at single-task levels. The effects of attentional load remained the same across paradigms. The results suggest that, without attention, participants consolidate in iconic memory only gross representations of the visual scene, information too impoverished for successful detection of perceptual change or categorization of features. PMID:22586389
Learning Category-Specific Dictionary and Shared Dictionary for Fine-Grained Image Categorization.
Gao, Shenghua; Tsang, Ivor Wai-Hung; Ma, Yi
2014-02-01
This paper targets fine-grained image categorization by learning a category-specific dictionary for each category and a shared dictionary for all the categories. Such category-specific dictionaries encode subtle visual differences among different categories, while the shared dictionary encodes common visual patterns among all the categories. To this end, we impose incoherence constraints among the different dictionaries in the objective of feature coding. In addition, to make the learnt dictionary stable, we also impose the constraint that each dictionary should be self-incoherent. Our proposed dictionary learning formulation not only applies to fine-grained classification, but also improves conventional basic-level object categorization and other tasks such as event recognition. Experimental results on five data sets show that our method can outperform the state-of-the-art fine-grained image categorization frameworks as well as sparse coding based dictionary learning frameworks. All these results demonstrate the effectiveness of our method.
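The incoherence constraints described above can be made concrete with a small sketch. The Frobenius-norm penalties below are one common way to formalize cross-dictionary incoherence and self-incoherence; the function names are hypothetical and the paper's exact objective may differ.

```python
import numpy as np

def cross_incoherence(D1, D2):
    """Penalty encouraging two dictionaries (columns = atoms) to encode
    different patterns: squared Frobenius norm of their cross-products.
    Zero when the atom sets span mutually orthogonal directions."""
    return float(np.linalg.norm(D1.T @ D2, 'fro') ** 2)

def self_incoherence(D):
    """Penalty encouraging atoms within one dictionary to be mutually
    near-orthogonal (a stability constraint): ||D^T D - I||_F^2."""
    k = D.shape[1]
    return float(np.linalg.norm(D.T @ D - np.eye(k), 'fro') ** 2)

# Disjoint orthonormal atom sets incur zero penalty; a dictionary
# compared with itself incurs the maximal cross-penalty.
D_a = np.eye(4)[:, :2]   # atoms e1, e2
D_b = np.eye(4)[:, 2:]   # atoms e3, e4 (disjoint subspace)
```

In a learning loop these penalties would be added, suitably weighted, to the sparse-coding reconstruction objective, pushing the category-specific dictionaries apart from each other and from the shared dictionary.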
A physiologically based nonhomogeneous Poisson counter model of visual identification.
Christensen, Jeppe H; Markussen, Bo; Bundesen, Claus; Kyllingsbæk, Søren
2018-04-30
A physiologically based nonhomogeneous Poisson counter model of visual identification is presented. The model was developed in the framework of a Theory of Visual Attention (Bundesen, 1990; Kyllingsbæk, Markussen, & Bundesen, 2012) and meant for modeling visual identification of objects that are mutually confusable and hard to see. The model assumes that the visual system's initial sensory response consists in tentative visual categorizations, which are accumulated by leaky integration of both transient and sustained components comparable with those found in spike density patterns of early sensory neurons. The sensory response (tentative categorizations) feeds independent Poisson counters, each of which accumulates tentative object categorizations of a particular type to guide overt identification performance. We tested the model's ability to predict the effect of stimulus duration on observed distributions of responses in a nonspeeded (pure accuracy) identification task with eight response alternatives. The time courses of correct and erroneous categorizations were well accounted for when the event-rates of competing Poisson counters were allowed to vary independently over time in a way that mimicked the dynamics of receptive field selectivity as found in neurophysiological studies. Furthermore, the initial sensory response yielded theoretical hazard rate functions that closely resembled empirically estimated ones. Finally, supplied with a Naka-Rushton type contrast gain control, the model provided an explanation for Bloch's law. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
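A toy simulation conveys the flavor of independent Poisson counters driven by time-varying (transient plus sustained) rates, as described above. The exponential transient shape and the "most counts wins" decision rule are simplifying assumptions for illustration, not the fitted model, which uses leaky integration and independently varying event rates.

```python
import math
import random

def rate(t, sustained, transient, tau=0.05):
    """Event rate: a sustained level plus an exponentially decaying
    transient component, mimicking early sensory responses."""
    return sustained + transient * math.exp(-t / tau)

def simulate_identification(rates, duration, dt=0.001, rng=None):
    """Accumulate counts for each response category in discrete time;
    return the index of the counter with the most counts."""
    rng = rng or random.Random(0)
    counts = [0] * len(rates)
    for step in range(int(duration / dt)):
        t = step * dt
        for i, (sustained, transient) in enumerate(rates):
            # Bernoulli approximation of a Poisson process on [t, t+dt)
            if rng.random() < rate(t, sustained, transient) * dt:
                counts[i] += 1
    return counts.index(max(counts)), counts

# Category 0 matches the stimulus (higher sustained and transient rates).
rates = [(80.0, 200.0), (20.0, 50.0), (20.0, 50.0)]
choice, counts = simulate_identification(rates, duration=0.2)
```

Longer stimulus durations let the matching counter pull further ahead of its competitors, which is the qualitative effect of exposure duration on identification accuracy that the model is tested against.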
ERIC Educational Resources Information Center
de la Rosa, Stephan; Choudhery, Rabia N.; Chatziastros, Astros
2011-01-01
Recent evidence suggests that the recognition of an object's presence and its explicit recognition are temporally closely related. Here we re-examined the time course (using a fine and a coarse temporal resolution) and the sensitivity of three possible component processes of visual object recognition. In particular, participants saw briefly…
Effects of age on cognitive control during semantic categorization.
Mudar, Raksha A; Chiang, Hsueh-Sheng; Maguire, Mandy J; Spence, Jeffrey S; Eroh, Justin; Kraut, Michael A; Hart, John
2015-01-01
We used event-related potentials (ERPs) to study age effects of perceptual (basic-level) vs. perceptual-semantic (superordinate-level) categorization on cognitive control using the go/nogo paradigm. Twenty-two younger (11 M; 21 ± 2.2 years) and 22 older adults (9 M; 63 ± 5.8 years) completed two visual go/nogo tasks. In the single-car task (SiC) (basic), go/nogo responses were made based on single exemplars of a car (go) and a dog (nogo). In the object animal task (ObA) (superordinate), responses were based on multiple exemplars of objects (go) and animals (nogo). Each task consisted of 200 trials: 160 (80%) 'go' trials that required a response through button pressing and 40 (20%) 'nogo' trials that required inhibition/withholding of a response. ERP data revealed significantly reduced nogo-N2 and nogo-P3 amplitudes in older compared to younger adults, whereas go-N2 and go-P3 amplitudes were comparable in both groups during both categorization tasks. Although the effects of categorization levels on behavioral data and P3 measures were similar in both groups with longer response times, lower accuracy scores, longer P3 latencies, and lower P3 amplitudes in ObA compared to SiC, N2 latency revealed age group differences moderated by the task. Older adults had longer N2 latency for ObA compared to SiC, in contrast, younger adults showed no N2 latency difference between SiC and ObA. Overall, these findings suggest that age differentially affects neural processing related to cognitive control during semantic categorization. Furthermore, in older adults, unlike in younger adults, levels of categorization modulate neural processing related to cognitive control even at the early stages (N2). Published by Elsevier B.V.
Out of sight, out of mind: Categorization learning and normal aging.
Schenk, Sabrina; Minda, John P; Lech, Robert K; Suchan, Boris
2016-10-01
The present combined EEG and eye tracking study examined the process of categorization learning at different age ranges and aimed to investigate to which degree categorization learning is mediated by visual attention and perceptual strategies. Seventeen young subjects and ten elderly subjects had to perform a visual categorization task with two abstract categories. Each category consisted of prototypical stimuli and an exception. The categorization of prototypical stimuli was learned very early during the experiment, while the learning of exceptions was delayed. The categorization of exceptions was accompanied by higher P150, P250 and P300 amplitudes. In contrast to younger subjects, elderly subjects had problems in the categorization of exceptions, but showed an intact categorization performance for prototypical stimuli. Moreover, elderly subjects showed higher fixation rates for important stimulus features and higher P150 amplitudes, which were positively correlated with the categorization performances. These results indicate that elderly subjects compensate for cognitive decline through enhanced perceptual and attentional processing of individual stimulus features. Additionally, a computational approach has been applied and showed a transition away from purely abstraction-based learning to an exemplar-based learning in the middle block for both groups. However, the calculated models provide a better fit for younger subjects than for elderly subjects. The current study demonstrates that human categorization learning is based on early abstraction-based processing followed by an exemplar-memorization stage. This strategy combination facilitates the learning of real world categories with a nuanced category structure. In addition, the present study suggests that categorization learning is affected by normal aging and modulated by perceptual processing and visual attention. Copyright © 2016 Elsevier Ltd. All rights reserved.
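The transition the computational analysis above describes, from abstraction-based (prototype) to exemplar-based categorization, can be illustrated with a short sketch in the generalized-context-model tradition. The similarity kernel, parameters, and toy category structure are illustrative assumptions, not the fitted models from the study.

```python
import math

def similarity(x, y, c=1.0):
    """Exponentially decaying similarity in city-block distance."""
    d = sum(abs(a - b) for a, b in zip(x, y))
    return math.exp(-c * d)

def prototype(category):
    """Abstraction: the dimension-wise mean of a category's members."""
    return tuple(sum(dim) / len(dim) for dim in zip(*category))

def prototype_choice(item, prototypes):
    """Assign the item to the category with the most similar prototype."""
    sims = [similarity(item, p) for p in prototypes]
    return sims.index(max(sims))

def exemplar_choice(item, categories):
    """Assign the item to the category with the largest summed
    similarity to all stored exemplars."""
    sums = [sum(similarity(item, e) for e in cat) for cat in categories]
    return sums.index(max(sums))

# Category A clusters near the origin but contains an exception at (5, 5);
# category B clusters near (5, 4), close to A's exception.
cat_a = [(0, 0), (0, 1), (1, 0), (5, 5)]
cat_b = [(5, 4), (4, 4), (5, 3)]
```

On this toy structure the prototype rule misclassifies A's exception as a B member, while the exemplar rule recovers it, which is why exception learning is taken as a signature of exemplar-based processing.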
Chen, Yi-Chuan; Spence, Charles
2013-01-01
The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.
The effect of concurrent semantic categorization on delayed serial recall.
Acheson, Daniel J; MacDonald, Maryellen C; Postle, Bradley R
2011-01-01
The influence of semantic processing on the serial ordering of items in short-term memory was explored using a novel dual-task paradigm. Participants engaged in 2 picture-judgment tasks while simultaneously performing delayed serial recall. List material varied in the presence of phonological overlap (Experiments 1 and 2) and in semantic content (concrete words in Experiment 1 and 3; nonwords in Experiments 2 and 3). Picture judgments varied in the extent to which they required accessing visual semantic information (i.e., semantic categorization and line orientation judgments). Results showed that, relative to line-orientation judgments, engaging in semantic categorization judgments increased the proportion of item-ordering errors for concrete lists but did not affect error proportions for nonword lists. Furthermore, although more ordering errors were observed for phonologically similar relative to dissimilar lists, no interactions were observed between the phonological overlap and picture-judgment task manipulations. These results demonstrate that lexical-semantic representations can affect the serial ordering of items in short-term memory. Furthermore, the dual-task paradigm provides a new method for examining when and how semantic representations affect memory performance.
Semantic, perceptual and number space: relations between category width and spatial processing.
Brugger, Peter; Loetscher, Tobias; Graves, Roger E; Knoch, Daria
2007-05-17
Coarse semantic encoding and broad categorization behavior are the hallmarks of the right cerebral hemisphere's contribution to language processing. We correlated 40 healthy subjects' breadth of categorization as assessed with Pettigrew's category width scale with lateral asymmetries in perceptual and representational space. Specifically, we hypothesized broader category width to be associated with larger leftward spatial biases. For the 20 men, but not the 20 women, this hypothesis was confirmed both in a lateralized tachistoscopic task with chimeric faces and a random digit generation task; the higher a male participant's score on category width, the more pronounced were his left-visual field bias in the judgement of chimeric faces and his small-number preference in digit generation ("small" is to the left of "large" in number space). Subjects' category width was unrelated to lateral displacements in a blindfolded tactile-motor rod centering task. These findings indicate that visual-spatial functions of the right hemisphere should not be considered independent of the same hemisphere's contribution to language. Linguistic and spatial cognition may be more tightly interwoven than is currently assumed.
Ma, Bosen; Wang, Xiaoyun; Li, Degao
2015-01-01
To separate the contribution of phonological from that of visual-orthographic information in the recognition of a Chinese word that is composed of one or two Chinese characters, we conducted two experiments in a priming task of semantic categorization (PTSC), in which length (one- or two-character words), relation, prime (related or unrelated prime-target pairs), and SOA (47, 87, or 187 ms) were manipulated. The prime was similar to the target in meaning or in visual configuration in Experiment A and in meaning or in pronunciation in Experiment B. The results indicate that the two-character words were similar to the one-character words but were less demanding of cognitive resources than the one-character words in the processing of phonological, visual-orthographic, and semantic information. The phonological primes had a facilitating effect at the SOA of 47 ms but an inhibitory effect at the SOA of 187 ms on the participants' reaction times; the visual-orthographic primes only had an inhibitory influence on the participants' reaction times at the SOA of 187 ms. The visual configuration of a Chinese word of one or two Chinese characters has its own contribution in helping retrieve the word's meanings; similarly, the phonological configuration of a one- or two-character word plays its own role in triggering activations of the word's semantic representations.
Mattys, Sven L; Scharenborg, Odette
2014-03-01
This study investigates the extent to which age-related language processing difficulties are due to a decline in sensory processes or to a deterioration of cognitive factors, specifically, attentional control. Two facets of attentional control were examined: inhibition of irrelevant information and divided attention. Younger and older adults were asked to categorize the initial phoneme of spoken syllables ("Was it m or n?"), trying to ignore the lexical status of the syllables. The phonemes were manipulated to range in eight steps from m to n. Participants also did a discrimination task on syllable pairs ("Were the initial sounds the same or different?"). Categorization and discrimination were performed under either divided attention (concurrent visual-search task) or focused attention (no visual task). The results showed that even when the younger and older adults were matched on their discrimination scores: (1) the older adults had more difficulty inhibiting lexical knowledge than did younger adults, (2) divided attention weakened lexical inhibition in both younger and older adults, and (3) divided attention impaired sound discrimination more in older than younger listeners. The results confirm the independent and combined contribution of sensory decline and deficit in attentional control to language processing difficulties associated with aging. The relative weight of these variables and their mechanisms of action are discussed in the context of theories of aging and language. (c) 2014 APA, all rights reserved.
Direct versus indirect processing changes the influence of color in natural scene categorization.
Otsuka, Sachio; Kawaguchi, Jun
2009-10-01
Using a negative priming (NP) paradigm, we examined whether participants would categorize color and grayscale images of natural scenes that were presented peripherally and ignored. We focused on (1) attentional resources allocated to natural scenes and (2) direct versus indirect processing of them. We set up low and high attention-load conditions, based on the set size of the searched stimuli in the prime display (one and five). Participants were required to detect and categorize the target objects in natural scenes in a central visual search task, ignoring peripheral natural images in both the prime and probe displays. The results showed that, irrespective of attention load, NP was observed for color scenes but not for grayscale scenes. We did not observe any effect of color information in central visual search, where participants responded directly to natural scenes. These results indicate that, in a situation in which participants indirectly process natural scenes, color information is critical to object categorization, but when the scenes are processed directly, color information does not contribute to categorization.
Hügelschäfer, Sabine; Jaudas, Alexander; Achtziger, Anja
2016-10-15
Gender categorization is highly automatic. Studies measuring ERPs during the presentation of male and female faces in a categorization task showed that this categorization is extremely quick (around 130ms, indicated by the N170). We tested whether this automatic process can be controlled by goal intentions and implementation intentions. First, we replicated the N170 modulation on gender-incongruent faces as reported in previous research. This effect was only observed in a task in which faces had to be categorized according to gender, but not in a task that required responding to a visual feature added to the face stimuli (the color of a dot) while gender was irrelevant. Second, it turned out that the N170 modulation on gender-incongruent faces was altered if a goal intention was set that aimed at controlling a gender bias. We interpret this finding as an indicator of nonconscious goal pursuit. The N170 modulation was completely absent when this goal intention was furnished with an implementation intention. In contrast, intentions did not alter brain activity at a later time window (P300), which is associated with more complex and rather conscious processes. In line with previous research, the P300 was modulated by gender incongruency even if individuals were strongly involved in another task, demonstrating the automaticity of gender detection. We interpret our findings as evidence that automatic gender categorization that occurs at a very early processing stage can be effectively controlled by intentions. Copyright © 2016 Elsevier B.V. All rights reserved.
Search guidance is proportional to the categorical specificity of a target cue.
Schmidt, Joseph; Zelinsky, Gregory J
2009-10-01
Visual search studies typically assume the availability of precise target information to guide search, often a picture of the exact target. However, search targets in the real world are often defined categorically and with varying degrees of visual specificity. In five target preview conditions we manipulated the availability of target visual information in a search task for common real-world objects. Previews were: a picture of the target, an abstract textual description of the target, a precise textual description, an abstract + colour textual description, or a precise + colour textual description. Guidance generally increased as information was added to the target preview. We conclude that the information used for search guidance need not be limited to a picture of the target. Although generally less precise, to the extent that visual information can be extracted from a target label and loaded into working memory, this information too can be used to guide search.
ERIC Educational Resources Information Center
Gillebert, Celine R.; Op de Beeck, Hans P.; Panis, Sven; Wagemans, Johan
2009-01-01
There is substantial evidence that object representations in adults are dynamically updated by learning. However, it is not clear to what extent these effects are induced by active processing of visual objects in a particular task context on top of the effects of mere exposure to the same objects. Here we show that the task does matter. We…
Using Prosopagnosia to Test and Modify Visual Recognition Theory.
O'Brien, Alexander M
2018-02-01
Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions or categorical object differences in visual discrimination tasks, whereas the control participants could additionally use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli in eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.
Jackson, Jade; Rich, Anina N; Williams, Mark A; Woolgar, Alexandra
2017-02-01
Human cognition is characterized by astounding flexibility, enabling us to select appropriate information according to the objectives of our current task. A circuit of frontal and parietal brain regions, often referred to as the frontoparietal attention network or multiple-demand (MD) regions, are believed to play a fundamental role in this flexibility. There is evidence that these regions dynamically adjust their responses to selectively process information that is currently relevant for behavior, as proposed by the "adaptive coding hypothesis" [Duncan, J. An adaptive coding model of neural function in prefrontal cortex. Nature Reviews Neuroscience, 2, 820-829, 2001]. Could this provide a neural mechanism for feature-selective attention, the process by which we preferentially process one feature of a stimulus over another? We used multivariate pattern analysis of fMRI data during a perceptually challenging categorization task to investigate whether the representation of visual object features in the MD regions flexibly adjusts according to task relevance. Participants were trained to categorize visually similar novel objects along two orthogonal stimulus dimensions (length/orientation) and performed short alternating blocks in which only one of these dimensions was relevant. We found that multivoxel patterns of activation in the MD regions encoded the task-relevant distinctions more strongly than the task-irrelevant distinctions: The MD regions discriminated between stimuli of different lengths when length was relevant and between the same objects according to orientation when orientation was relevant. The data suggest a flexible neural system that adjusts its representation of visual objects to preferentially encode stimulus features that are currently relevant for behavior, providing a neural mechanism for feature-selective attention.
Semantic processing and response inhibition.
Chiang, Hsueh-Sheng; Motes, Michael A; Mudar, Raksha A; Rao, Neena K; Mansinghani, Sethesh; Brier, Matthew R; Maguire, Mandy J; Kraut, Michael A; Hart, John
2013-11-13
The present study examined functional MRI (fMRI) BOLD signal changes in response to object categorization during response selection and inhibition. Young adults (N=16) completed a Go/NoGo task with varying object categorization requirements while fMRI data were recorded. Response inhibition elicited increased signal change in various brain regions, including medial frontal areas, compared with response selection. BOLD signal in an area within the right angular gyrus was increased when higher-order categorization was mandated. In addition, signal change during response inhibition varied with categorization requirements in the left inferior temporal gyrus (lIT): lIT mediated response inhibition when inhibiting the response required only lower-order categorization, but it mediated both response selection and inhibition when selecting and inhibiting the response required higher-order categorization. The findings characterized mechanisms mediating response inhibition associated with semantic object categorization in the 'what' visual object memory system.
Task-Dependent Masked Priming Effects in Visual Word Recognition
Kinoshita, Sachiko; Norris, Dennis
2012-01-01
A method used widely to study the first 250 ms of visual word recognition is masked priming: These studies have yielded a rich set of data concerning the processes involved in recognizing letters and words. In these studies, there is an implicit assumption that the early processes in word recognition tapped by masked priming are automatic, and masked priming effects should therefore be invariant across tasks. Contrary to this assumption, masked priming effects are modulated by the task goal: For example, only word targets show priming in the lexical decision task, but both words and non-words do in the same-different task; semantic priming effects are generally weak in the lexical decision task but are robust in the semantic categorization task. We explain how such task dependence arises within the Bayesian Reader account of masked priming (Norris and Kinoshita, 2008), and how the task dissociations can be used to understand the early processes in lexical access. PMID:22675316
Accuracy and speed of material categorization in real-world images.
Sharan, Lavanya; Rosenholtz, Ruth; Adelson, Edward H
2014-08-13
It is easy to visually distinguish a ceramic knife from one made of steel, a leather jacket from one made of denim, and a plush toy from one made of plastic. Most studies of material appearance have focused on the estimation of specific material properties such as albedo or surface gloss, and as a consequence, almost nothing is known about how we recognize material categories like leather or plastic. We have studied judgments of high-level material categories with a diverse set of real-world photographs, and we have shown (Sharan, 2009) that observers can categorize materials reliably and quickly. Performance on our tasks cannot be explained by simple differences in color, surface shape, or texture. Nor can the results be explained by observers merely performing shape-based object recognition. Rather, we argue that fast and accurate material categorization is a distinct, basic ability of the visual system. © 2014 ARVO.
Orthographic versus semantic matching in visual search for words within lists.
Léger, Laure; Rouet, Jean-François; Ros, Christine; Vibert, Nicolas
2012-03-01
An eye-tracking experiment was performed to assess the influence of orthographic and semantic distractor words on visual search for words within lists. The target word (e.g., "raven") was either shown to participants before the search (literal search) or defined by its semantic category (e.g., "bird", categorical search). In both cases, the type of words included in the list affected visual search times and eye movement patterns. In the literal condition, the presence of orthographic distractors sharing initial and final letters with the target word strongly increased search times. Indeed, the orthographic distractors attracted participants' gaze and were fixated for longer times than other words in the list. The presence of semantic distractors related to the target word also increased search times, which suggests that significant automatic semantic processing of nontarget words took place. In the categorical condition, semantic distractors were expected to have a greater impact on the search task. As expected, the presence in the list of semantic associates of the target word led to target selection errors. However, semantic distractors no longer significantly increased search times, whereas orthographic distractors still did. Hence, the visual characteristics of nontarget words can be strong predictors of the efficiency of visual search even when the exact target word is unknown. The respective impacts of orthographic and semantic distractors depended more on the characteristics of the lists than on the nature of the search task.
Perceptual Biases in Relation to Paranormal and Conspiracy Beliefs
van Elk, Michiel
2015-01-01
Previous studies have shown that one's prior beliefs have a strong effect on perceptual decision-making and attentional processing. The present study extends these findings by investigating how individual differences in paranormal and conspiracy beliefs are related to perceptual and attentional biases. Two field studies were conducted in which visitors of a paranormal fair completed a perceptual decision-making task (i.e., the face/house categorization task; Experiment 1) or a visual attention task (i.e., the global/local processing task; Experiment 2). In the first experiment, skeptics more often than believers incorrectly categorized ambiguous face stimuli as representing a house, indicating that disbelief rather than belief in the paranormal is driving the bias observed in the categorization of ambiguous stimuli. In the second experiment, skeptics showed a classical 'global-to-local' interference effect, whereas believers in conspiracy theories were characterized by a stronger 'local-to-global' interference effect. The present study shows that individual differences in paranormal and conspiracy beliefs are associated with perceptual and attentional biases, thereby extending the growing body of work in this field indicating effects of cultural learning on basic perceptual processes. PMID:26114604
Pigeons and humans use action and pose information to categorize complex human behaviors.
Qadri, Muhammad A J; Cook, Robert G
2017-02-01
The biological mechanisms used to categorize and recognize behaviors are poorly understood in both human and non-human animals. Using animated digital models, we have recently shown that pigeons can categorize different locomotive animal gaits and types of complex human behaviors. In the current experiments, pigeons (go/no-go task) and humans (choice task) both learned to conditionally categorize two categories of human behaviors that did not repeat and were comprised of the coordinated motions of multiple limbs. These "martial arts" and "Indian dance" action sequences were depicted by a digital human model. Depending upon whether the model was in motion or not, each species was required to engage in different and opposing responses to the two behavioral categories. Both species learned to conditionally and correctly act on this dynamic and static behavioral information, indicating that both species use a combination of static pose cues that are available from stimulus onset in addition to less rapidly available action information in order to successfully discriminate between the behaviors. Human participants additionally demonstrated a bias towards the dynamic information in the display when re-learning the task. Theories that rely on generalized, non-specific visual mechanisms involving channels for motion and static cues offer a parsimonious account of how humans and pigeons recognize and categorize behaviors within and across species. Copyright © 2016 Elsevier Ltd. All rights reserved.
McMenamin, Brenton W.; Marsolek, Chad J.; Morseth, Brianna K.; Speer, MacKenzie F.; Burton, Philip C.; Burgund, E. Darcy
2016-01-01
Object categorization and exemplar identification place conflicting demands on the visual system, yet humans easily perform these fundamentally contradictory tasks. Previous studies suggest the existence of dissociable visual processing subsystems to accomplish the two abilities – an abstract category (AC) subsystem that operates effectively in the left hemisphere, and a specific exemplar (SE) subsystem that operates effectively in the right hemisphere. This multiple subsystems theory explains a range of visual abilities, but previous studies have not explored what mechanisms exist for coordinating the function of multiple subsystems and/or resolving the conflicts that would arise between them. We collected functional MRI data while participants performed two variants of a cue-probe working memory task that required AC or SE processing. During the maintenance phase of the task, the bilateral intraparietal sulcus (IPS) exhibited hemispheric asymmetries in functional connectivity consistent with exerting proactive control over the two visual subsystems: greater connectivity to the left hemisphere during the AC task, and greater connectivity to the right hemisphere during the SE task. Moreover, probe-evoked activation revealed activity in a broad fronto-parietal network (containing IPS) associated with reactive control when the two visual subsystems were in conflict, and variations in this conflict signal across trials was related to the visual similarity of the cue/probe stimulus pairs. Although many studies have confirmed the existence of multiple visual processing subsystems, this study is the first to identify the mechanisms responsible for coordinating their operations. PMID:26883940
Feldmann-Wüstefeld, Tobias; Uengoer, Metin; Schubö, Anna
2015-11-01
Besides visual salience and observers' current intentions, prior learning experience may influence the deployment of visual attention. Associative learning models postulate that observers pay more attention to stimuli previously experienced as reliable predictors of specific outcomes. To investigate the impact of learning experience on the deployment of attention, we combined an associative learning task with a visual search task and measured event-related potentials of the EEG as neural markers of attention deployment. In the learning task, participants categorized stimuli varying in color/shape, with only one dimension being predictive of category membership. In the search task, participants searched for a shape target while disregarding irrelevant color distractors. Behavioral results showed that color distractors impaired performance to a greater degree when color rather than shape was predictive in the learning task. Neurophysiological results showed that the amplified distraction was due to differential attention deployment (N2pc). Experiment 2 showed that when color was predictive for learning, color distractors captured more attention in the search task (ND component) and more suppression of the color distractor was required (PD component). The present results thus demonstrate that priority in visual attention is biased toward predictive stimuli, which allows learning experience to shape selection. We also show that learning experience can overrule strong top-down control (blocked tasks, Experiment 3) and that learning experience has a longer-term effect on attention deployment (tasks on two successive days, Experiment 4). © 2015 Society for Psychophysiological Research.
Modulation of microsaccades by spatial frequency during object categorization.
Craddock, Matt; Oppermann, Frank; Müller, Matthias M; Martinovic, Jasna
2017-01-01
The organization of visual processing into a coarse-to-fine sequence based on the spatial frequency properties of the input forms an important facet of the object recognition process. During visual object categorization tasks, microsaccades occur frequently. One potential functional role of these eye movements is to resolve high spatial frequency information. To assess this hypothesis, we examined the rate, amplitude and speed of microsaccades in an object categorization task in which participants viewed object and non-object images and classified them as showing either natural objects, man-made objects or non-objects. Images were presented unfiltered (broadband; BB) or filtered to contain only low (LSF) or high spatial frequency (HSF) information. This allowed us to examine whether microsaccades were modulated independently by the presence of a high-level feature - the presence of an object - and by low-level stimulus characteristics - spatial frequency. We found a bimodal distribution of saccades based on their amplitude, with a split between smaller and larger microsaccades at 0.4° of visual angle. The rate of larger saccades (⩾0.4°) was higher for objects than non-objects, and higher for objects with high spatial frequency content (HSF and BB objects) than for LSF objects. No effects were observed for smaller microsaccades (<0.4°). This is consistent with a role for larger microsaccades in resolving HSF information for object identification, and with previous evidence that more microsaccades are directed towards informative image regions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Szabo, Miruna; Deco, Gustavo; Fusi, Stefano; Del Giudice, Paolo; Mattia, Maurizio; Stetter, Martin
2006-05-01
Recent experiments on behaving monkeys have shown that learning a visual categorization task makes neurons in infero-temporal cortex (ITC) more selective to the task-relevant features of the stimuli (Sigala and Logothetis, Nature, 415, 318-320, 2002). We hypothesize that such a selectivity modulation emerges from the interaction between ITC and other cortical areas, presumably the prefrontal cortex (PFC), where the previously learned stimulus categories are encoded. We propose a biologically inspired model of excitatory and inhibitory spiking neurons with plastic synapses, modified according to a reward-based Hebbian learning rule, to explain the experimental results and test the validity of our hypothesis. We assume that the ITC neurons, receiving feature-selective inputs, form stronger connections with the category-specific neurons with which they are consistently associated in rewarded trials. After learning, the top-down influence of PFC neurons enhances the selectivity of the ITC neurons encoding the behaviorally relevant features of the stimuli, as observed in the experiments. We conclude that the perceptual representation in visual areas like ITC can be strongly affected by interaction with other areas devoted to higher cognitive functions.
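The reward-based Hebbian rule described in this abstract can be illustrated with a deliberately simplified, rate-based toy (the actual model uses spiking neurons with plastic synapses); the unit counts, learning rate, and weight clipping below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward_hebbian_update(w, pre, post, reward, lr=0.05):
    # Strengthen synapses between co-active pre/post units on rewarded
    # trials and weaken them otherwise; weights are clipped to [0, 1].
    dw = lr * (1.0 if reward else -1.0) * np.outer(post, pre)
    return np.clip(w + dw, 0.0, 1.0)

# Toy setup: 4 feature-selective "ITC" units project to 2 "PFC" category units.
w = np.full((2, 4), 0.5)
for _ in range(200):
    cat = int(rng.integers(2))
    pre = np.zeros(4)
    pre[2 * cat : 2 * cat + 2] = 1.0   # features diagnostic of this category
    post = np.zeros(2)
    post[cat] = 1.0                     # the correct category unit fires
    w = reward_hebbian_update(w, pre, post, reward=True)
# Connections from diagnostic features to their category saturate at 1.0,
# while connections from non-diagnostic features remain at baseline (0.5).
```

In the full model, it is this strengthened coupling with category-coding PFC units that, via top-down feedback, enhances ITC selectivity for the behaviorally relevant features.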
2012-01-01
Background During normal semantic processing, the left hemisphere (LH) is suggested to restrict right hemisphere (RH) performance via interhemispheric suppression. However, a lesion in the LH or the use of concurrent tasks to overload the LH's attentional resource balance has been reported to result in RH disinhibition with subsequent improvements in RH performance. The current study examines variations in RH semantic processing in the context of unilateral LH lesions and the manipulation of the interhemispheric processing resource balance, in order to explore the relevance of RH disinhibition to hemispheric contributions to semantic processing following a unilateral LH lesion. Methods RH disinhibition was examined for nine participants with a single LH lesion and 13 matched controls using the dual-task paradigm. Hemispheric performance on a divided visual field lexical decision semantic priming task was compared over three verbal memory load conditions of zero, two, and six words. Related stimuli consisted of categorically related, associatively related, and categorically and associatively related prime-target pairs. Response time and accuracy data were recorded and analyzed using linear mixed model analysis, and planned contrasts were performed to compare priming effects in both visual fields for each of the memory load conditions. Results Control participants exhibited significant bilateral visual field priming for all related conditions (p < .05), and a LH advantage over all three memory load conditions. Participants with LH lesions exhibited an improvement in RH priming performance as memory load increased, with priming for the categorically related condition occurring only in the two- and six-word memory conditions. RH disinhibition was also reflected for the LH damage (LHD) group by the removal of the LH performance advantage following the introduction of the memory load conditions.
Conclusions The results from the control group are consistent with suggestions of an age related hemispheric asymmetry reduction and indicate that in healthy aging compensatory bilateral activation may reduce the impact of inhibition. In comparison, the results for the LHD group indicate that following a LH lesion RH semantic processing can be manipulated and enhanced by the introduction of a verbal memory task designed to engage LH resources and allow disinhibition of RH processing. PMID:22429687
Mo, Lei; Xu, Guiping; Kay, Paul; Tan, Li-Hai
2011-01-01
Previous studies have shown that the effect of language on categorical perception of color is stronger when stimuli are presented in the right visual field than in the left. To examine whether this lateralized effect occurs preattentively at an early stage of processing, we monitored the visual mismatch negativity, which is a component of the event-related potential of the brain to an unfamiliar stimulus among a temporally presented series of stimuli. In the oddball paradigm we used, the deviant stimuli were unrelated to the explicit task. A significant interaction between color-pair type (within-category vs. between-category) and visual field (left vs. right) was found. The amplitude of the visual mismatch negativity component evoked by the within-category deviant was significantly smaller than that evoked by the between-category deviant when displayed in the right visual field, but no such difference was observed for the left visual field. This result constitutes electroencephalographic evidence that the lateralized Whorf effect per se occurs out of awareness and at an early stage of processing. PMID:21844340
Bag-of-visual-ngrams for histopathology image classification
NASA Astrophysics Data System (ADS)
López-Monroy, A. Pastor; Montes-y-Gómez, Manuel; Escalante, Hugo Jair; Cruz-Roa, Angel; González, Fabio A.
2013-11-01
This paper describes an extension of the Bag-of-Visual-Words (BoVW) representation for image categorization (IC) of histopathology images. The BoVW representation is one of the most widely used approaches in high-level computer vision tasks. However, it has an important limitation: it disregards spatial information among visual words. This information may be useful for capturing discriminative visual patterns in specific computer vision tasks. To overcome this problem, we propose the use of visual n-grams. N-gram-based representations are very popular in the field of natural language processing (NLP), in particular within text mining and information retrieval. We propose building a codebook of n-grams and then representing images by histograms of visual n-grams. We evaluate our proposal on the challenging task of classifying histopathology images. The novelty of our proposal lies in the fact that we use n-grams as attributes for a classification model (together with visual words, i.e., 1-grams). This is common practice within NLP, although, to the best of our knowledge, this idea has not yet been explored within computer vision. We report experimental results on a database of histopathology images where our proposed method outperforms the traditional BoVW formulation.
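The representation described in this abstract can be sketched as follows; this minimal illustration assumes a precomputed codebook (e.g., from k-means over local descriptors) and uses horizontal adjacency on a patch grid as the spatial ordering of visual words, which is an assumption here rather than the authors' implementation.

```python
import numpy as np

def assign_words(descriptors, codebook):
    # Quantize each local descriptor to its nearest codeword (visual 1-grams).
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def ngram_histogram(word_grid, k, n=2):
    # Count horizontal n-grams of visual words over a spatial grid of patches,
    # encoding each n-gram as a single index into a histogram of size k**n.
    hist = np.zeros(k ** n)
    rows, cols = word_grid.shape
    for r in range(rows):
        for c in range(cols - n + 1):
            idx = 0
            for w in word_grid[r, c:c + n]:
                idx = idx * k + int(w)
            hist[idx] += 1
    total = hist.sum()
    return hist / total if total > 0 else hist
```

An image would then be represented by concatenating its ordinary BoVW (1-gram) histogram with one or more visual n-gram histograms before training a classifier.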
Wills, A J; Lea, Stephen E G; Leaver, Lisa A; Osthaus, Britta; Ryan, Catriona M E; Suret, Mark B; Bryant, Catherine M L; Chapman, Sue J A; Millar, Louise
2009-11-01
Pigeons (Columba livia), gray squirrels (Sciurus carolinensis), and undergraduates (Homo sapiens) learned discrimination tasks involving multiple mutually redundant dimensions. First, pigeons and undergraduates learned conditional discriminations between stimuli composed of three spatially separated dimensions, after first learning to discriminate the individual elements of the stimuli. When subsequently tested with stimuli in which one of the dimensions took an anomalous value, the majority of both species categorized test stimuli by their overall similarity to training stimuli. However, some individuals of both species categorized them according to a single dimension. In a second set of experiments, squirrels, pigeons, and undergraduates learned go/no-go discriminations using multiple simultaneous presentations of stimuli composed of three spatially integrated, highly salient dimensions. The tendency to categorize test stimuli including anomalous dimension values unidimensionally was higher than in the first set of experiments and did not differ significantly between species. The authors conclude that unidimensional categorization of multidimensional stimuli is not diagnostic for analytic cognitive processing, and that any differences between humans' and pigeons' behavior in such tasks are not due to special features of avian visual cognition.
Visual search and autism symptoms: What young children search for and co-occurring ADHD matter.
Doherty, Brianna R; Charman, Tony; Johnson, Mark H; Scerif, Gaia; Gliga, Teodora
2018-05-03
Superior visual search is one of the most common findings in the autism spectrum disorder (ASD) literature. Here, we ascertain how generalizable these findings are across task and participant characteristics, in light of recent replication failures. We tested 106 3-year-old children at familial risk for ASD, a sample that presents high ASD and ADHD symptoms, and 25 control participants, in three multi-target search conditions: easy exemplar search (look for cats amongst artefacts), difficult exemplar search (look for dogs amongst chairs/tables perceptually similar to dogs), and categorical search (look for animals amongst artefacts). Performance was related to dimensional measures of ASD and ADHD, in agreement with current research domain criteria (RDoC). We found that ASD symptom severity did not associate with enhanced performance in search, but did associate with poorer categorical search in particular, consistent with literature describing impairments in categorical knowledge in ASD. Furthermore, ASD and ADHD symptoms were both associated with more disorganized search paths across all conditions. Thus, ASD traits do not always convey an advantage in visual search; on the contrary, ASD traits may be associated with difficulties in search depending upon the nature of the stimuli (e.g., exemplar vs. categorical search) and the presence of co-occurring symptoms. © 2018 John Wiley & Sons Ltd.
Selection history alters attentional filter settings persistently and beyond top-down control.
Kadel, Hanna; Feldmann-Wüstefeld, Tobias; Schubö, Anna
2017-05-01
Visual selective attention is known to be guided by stimulus-based (bottom-up) and goal-oriented (top-down) control mechanisms. Recent work has pointed out that selection history (i.e., the bias to prioritize items that have been previously attended) can result in a learning experience that also has a substantial impact on subsequent attention guidance. The present study examined to what extent goal-oriented top-down control mechanisms interact with an observer's individual selection history in guiding attention. Selection history was manipulated in a categorization task in a between-subjects design, where participants learned that either color or shape was the response-relevant dimension. The impact of this experience was assessed in a compound visual search task with an additional color distractor. Top-down preparation for each search trial was enabled by a pretrial task cue (Experiment 1) or a fixed, predictable trial sequence (Experiment 2). Reaction times and ERPs served as indicators of attention deployment. Results showed that attention was captured by the color distractor when participants had learned that color predicted the correct response in the categorization learning task, suggesting that a bias for predictive stimulus features had developed. The possibility to prepare for the search task reduced the bias, but could not entirely overrule this selection history effect. In Experiment 3, both tasks were performed in separate sessions, and the bias still persisted. These results indicate that selection history considerably shapes selective attention and continues to do so persistently even when the task allowed for high top-down control. © 2017 Society for Psychophysiological Research.
The perception of visual emotion: comparing different measures of awareness.
Szczepanowski, Remigiusz; Traczyk, Jakub; Wierzchoń, Michał; Cleeremans, Axel
2013-03-01
Here, we explore the sensitivity of different awareness scales in revealing conscious reports on visual emotion perception. Participants were exposed to a backward masking task involving fearful faces and asked to rate their conscious awareness in perceiving emotion in facial expression using three different subjective measures: confidence ratings (CRs), with the conventional taxonomy of certainty, the perceptual awareness scale (PAS), through which participants categorize "raw" visual experience, and post-decision wagering (PDW), which involves economic categorization. Our results show that the CR measure was the most exhaustive and the most graded. In contrast, the PAS and PDW measures suggested instead that consciousness of emotional stimuli is dichotomous. Possible explanations of the inconsistency were discussed. Finally, our results also indicate that PDW biases awareness ratings by enhancing first-order accuracy of emotion perception. This effect was possibly a result of higher motivation induced by monetary incentives. Copyright © 2012 Elsevier Inc. All rights reserved.
Individual differences in spatial relation processing: effects of strategy, ability, and gender
van der Ham, Ineke J. M.; Borst, Gregoire
2011-01-01
Numerous studies have focused on the distinction between categorical and coordinate spatial relations. Categorical relations are propositional and abstract, and often related to a left hemisphere advantage. Coordinate relations specify the metric information of the relative locations of objects, and can be linked to right hemisphere processing. Yet, not all studies have reported such a clear double dissociation; in particular the categorical left hemisphere advantage is not always reported. In the current study we investigated whether verbal and spatial strategies, verbal and spatial cognitive abilities, and gender could account for the discrepancies observed in hemispheric lateralization of spatial relations. Seventy-five participants performed two visual half field, match-to-sample tasks (Van der Ham et al., 2007; 2009) to study the lateralization of categorical and coordinate relation processing. For each participant we determined the strategy they used in each of the two tasks. Consistent with previous findings, we found an overall categorical left hemisphere advantage and coordinate right hemisphere advantage. The lateralization pattern was affected selectively by the degree to which participants used a spatial strategy and by none of the other variables (i.e., verbal strategy, cognitive abilities, and gender). Critically, the categorical left hemisphere advantage was observed only for participants that relied strongly on a spatial strategy. This result is another piece of evidence that categorical spatial relation processing relies on spatial and not verbal processes. PMID:21353361
Werner, Sebastian; Noppeney, Uta
2010-02-17
Multisensory interactions have been demonstrated in a distributed neural system encompassing primary sensory and higher-order association areas. However, their distinct functional roles in multisensory integration remain unclear. This functional magnetic resonance imaging study dissociated the functional contributions of three cortical levels to multisensory integration in object categorization. Subjects actively categorized or passively perceived noisy auditory and visual signals emanating from everyday actions with objects. The experiment included two 2 x 2 factorial designs that manipulated either (1) the presence/absence or (2) the informativeness of the sensory inputs. These experimental manipulations revealed three patterns of audiovisual interactions. (1) In primary auditory cortices (PACs), a concurrent visual input increased the stimulus salience by amplifying the auditory response regardless of task-context. Effective connectivity analyses demonstrated that this automatic response amplification is mediated via both direct and indirect [via superior temporal sulcus (STS)] connectivity to visual cortices. (2) In STS and intraparietal sulcus (IPS), audiovisual interactions sustained the integration of higher-order object features and predicted subjects' audiovisual benefits in object categorization. (3) In the left ventrolateral prefrontal cortex (vlPFC), explicit semantic categorization resulted in suppressive audiovisual interactions as an index for multisensory facilitation of semantic retrieval and response selection. In conclusion, multisensory integration emerges at multiple processing stages within the cortical hierarchy. The distinct profiles of audiovisual interactions dissociate audiovisual salience effects in PACs, formation of object representations in STS/IPS and audiovisual facilitation of semantic categorization in vlPFC. 
Furthermore, in STS/IPS, the profiles of audiovisual interactions were behaviorally relevant and predicted subjects' multisensory benefits in performance accuracy.
Rapid natural scene categorization in the near absence of attention
Li, Fei Fei; VanRullen, Rufin; Koch, Christof; Perona, Pietro
2002-01-01
What can we see when we do not pay attention? It is well known that we can be “blind” even to major aspects of natural scenes when we attend elsewhere. The only tasks that do not need attention appear to be carried out in the early stages of the visual system. Contrary to this common belief, we report that subjects can rapidly detect animals or vehicles in briefly presented novel natural scenes while simultaneously performing another attentionally demanding task. By comparison, they are unable to discriminate large T's from L's, or bisected two-color disks from their mirror images under the same conditions. We conclude that some visual tasks associated with “high-level” cortical areas may proceed in the near absence of attention. PMID:12077298
Blue-green color categorization in Mandarin-English speakers.
Wuerger, Sophie; Xiao, Kaida; Mylonas, Dimitris; Huang, Qingmei; Karatzas, Dimosthenis; Hird, Emily; Paramei, Galina
2012-02-01
Observers are faster to detect a target among a set of distracters if the targets and distracters come from different color categories. This cross-boundary advantage seems to be limited to the right visual field, which is consistent with the dominance of the left hemisphere for language processing [Gilbert et al., Proc. Natl. Acad. Sci. USA 103, 489 (2006)]. Here we study whether a similar visual field advantage is found in the color identification task in speakers of Mandarin, a language that uses a logographic system. Forty late Mandarin-English bilinguals performed a blue-green color categorization task, in a blocked design, in their first language (L1: Mandarin) or second language (L2: English). Eleven color singletons ranging from blue to green were presented for 160 ms, randomly in the left visual field (LVF) or right visual field (RVF). Color boundary and reaction times (RTs) at the color boundary were estimated in L1 and L2, for both visual fields. We found that the color boundary did not differ between the languages; RTs at the color boundary, however, were on average more than 100 ms shorter in the English compared to the Mandarin sessions, but only when the stimuli were presented in the RVF. The finding may be explained by the script nature of the two languages: Mandarin logographic characters are analyzed visuospatially in the right hemisphere, which conceivably facilitates identification of color presented to the LVF. © 2012 Optical Society of America
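The color boundary and boundary RTs in the study above come from psychometric fitting. A minimal sketch of the boundary-estimation idea, assuming a logistic psychometric function, a hypothetical 0-10 blue-to-green stimulus scale, and a simple grid search (the authors' actual fitting procedure is not specified in the abstract and may differ):

```python
import math

def logistic(x, x0, k):
    # Probability of a "green" response: 50% crossover at x0, slope k.
    return 1.0 / (1.0 + math.exp(-k * (x - x0)))

def fit_boundary(stimuli, p_green):
    """Grid-search a logistic psychometric fit; the estimated category
    boundary is the 50% crossover point x0 (hypothetical 0-10 scale)."""
    best = (None, None, float("inf"))
    for x0 in [i * 0.1 for i in range(0, 101)]:
        for k in [j * 0.25 for j in range(1, 41)]:
            err = sum((logistic(x, x0, k) - p) ** 2
                      for x, p in zip(stimuli, p_green))
            if err < best[2]:
                best = (x0, k, err)
    return best[0], best[1]

# Synthetic data: 11 color singletons from blue (0) to green (10),
# generated from a true boundary at 5.2 with slope 1.5.
stimuli = list(range(11))
p_green = [logistic(x, 5.2, 1.5) for x in stimuli]
x0, k = fit_boundary(stimuli, p_green)
```

With noiseless synthetic proportions the grid search recovers the generating boundary; with real binomial response data one would fit by maximum likelihood instead of squared error.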
Weighting Mean and Variability during Confidence Judgments
de Gardelle, Vincent; Mamassian, Pascal
2015-01-01
Humans can not only perform some visual tasks with great precision, they can also judge how good they are in these tasks. However, it remains unclear how observers produce such metacognitive evaluations, and how these evaluations might be dissociated from the performance in the visual task. Here, we hypothesized that some stimulus variables could affect confidence judgments above and beyond their impact on performance. In a motion categorization task on moving dots, we manipulated the mean and the variance of the motion directions, to obtain a low-mean low-variance condition and a high-mean high-variance condition with matched performances. Critically, in terms of confidence, observers were not indifferent between these two conditions. Observers exhibited marked preferences, which were heterogeneous across individuals, but stable within each observer when assessed one week later. Thus, confidence and performance are dissociable and observers’ confidence judgments put different weights on the stimulus variables that limit performance. PMID:25793275
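How a low-mean low-variance and a high-mean high-variance condition can yield matched performance is easy to see in a signal-detection sketch. The noise model, parameter names, and numbers below are illustrative assumptions, not the authors' fitted values:

```python
import math

def dprime(mean_offset, internal_sd, external_sd, n_dots):
    """Toy discriminability for a mean-direction judgment: external
    direction variability is averaged over n_dots and combines with
    internal noise (a hypothetical observer model)."""
    eff_sd = math.sqrt(internal_sd ** 2 + external_sd ** 2 / n_dots)
    return mean_offset / eff_sd

# Doubling the mean offset while quadrupling the external variance
# (sd x4 in these units) leaves discriminability unchanged.
low = dprime(mean_offset=4.0, internal_sd=2.0, external_sd=10.0, n_dots=100)
high = dprime(mean_offset=8.0, internal_sd=2.0, external_sd=40.0, n_dots=100)
```

Matched d-prime in such a model is exactly the situation the study exploits: performance is equated, so any confidence difference between the conditions must reflect something beyond performance.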
Memory and learning with rapid audiovisual sequences
Keller, Arielle S.; Sekuler, Robert
2015-01-01
We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193
Hemispheric Predominance Assessment of Phonology and Semantics: A Divided Visual Field Experiment
ERIC Educational Resources Information Center
Cousin, Emilie; Peyrin, Carole; Baciu, Monica
2006-01-01
The aim of the present behavioural experiment was to evaluate the more lateralized of two phonological tasks (phoneme vs. rhyme detection) and the more lateralized of two semantic tasks ("living" vs. "edible" categorization), within the dominant hemisphere for language. The reason for addressing this question was a practical one: to evaluate the…

AutoBD: Automated Bi-Level Description for Scalable Fine-Grained Visual Categorization.
Yao, Hantao; Zhang, Shiliang; Yan, Chenggang; Zhang, Yongdong; Li, Jintao; Tian, Qi
Compared with traditional image classification, fine-grained visual categorization is a more challenging task because it aims to classify objects belonging to the same species, e.g., hundreds of kinds of birds or cars. In the past several years, researchers have made many achievements on this topic. However, most approaches depend heavily on artificial annotations, e.g., bounding boxes, part annotations, and so on. The requirement of artificial annotations largely hinders scalability and application. Motivated to remove such dependence, this paper proposes a robust and discriminative visual description named Automated Bi-level Description (AutoBD). "Bi-level" denotes two complementary visual descriptions, at the part level and the object level, respectively. AutoBD is "automated" because it only requires the image-level labels of training images and does not need any annotations for testing images. Compared with part annotations labeled by humans, image-level labels can be easily acquired, which makes AutoBD suitable for large-scale visual categorization. Specifically, the part-level description is extracted by identifying the local region that saliently represents the visual distinctiveness. The object-level description is extracted from object bounding boxes generated with a co-localization algorithm. Although it uses only image-level labels, AutoBD outperforms recent studies on two public benchmarks, i.e., classification accuracy reaches 81.6% on CUB-200-2011 and 88.9% on Car-196, respectively. On the large-scale Birdsnap data set, AutoBD achieves an accuracy of 68%, which is, to the best of our knowledge, currently the best performance.
Categorical Working Memory Representations are used in Delayed Estimation of Continuous Colors
Hardman, Kyle O; Vergauwe, Evie; Ricker, Timothy J
2016-01-01
In the last decade, major strides have been made in understanding visual working memory through mathematical modeling of color production responses. In the delayed color estimation task (Wilken & Ma, 2004), participants are given a set of colored squares to remember and a few seconds later asked to reproduce those colors by clicking on a color wheel. The degree of error in these responses is characterized with mathematical models that estimate working memory precision and the proportion of items remembered by participants. A standard mathematical model of color memory assumes that items maintained in memory are remembered through memory for precise details about the particular studied shade of color. We contend that this model is incomplete in its present form because no mechanism is provided for remembering the coarse category of a studied color. In the present work we remedy this omission and present a model of visual working memory that includes both continuous and categorical memory representations. In two experiments we show that our new model outperforms this standard modeling approach, which demonstrates that categorical representations should be accounted for by mathematical models of visual working memory. PMID:27797548
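The continuous-plus-categorical proposal above can be sketched as a mixture model over responses on the color wheel: a von Mises component centered on the studied shade, a second von Mises component centered on the color's category prototype, and uniform guessing. This is an illustrative likelihood with hypothetical parameter names, not the authors' exact model:

```python
import math

def i0(x):
    # Modified Bessel function of the first kind, order 0 (power series).
    return sum((x / 2.0) ** (2 * k) / math.factorial(k) ** 2
               for k in range(25))

def vonmises_pdf(theta, mu, kappa):
    # Circular analogue of a Gaussian on the color wheel (radians).
    return math.exp(kappa * math.cos(theta - mu)) / (2 * math.pi * i0(kappa))

def mixture_likelihood(response, target, category_center,
                       p_cont, p_cat, kappa_cont, kappa_cat):
    """Response density under a three-part mixture: continuous memory
    for the studied shade, categorical memory for the category
    prototype, and uniform guessing (weights sum to 1)."""
    p_guess = 1.0 - p_cont - p_cat
    return (p_cont * vonmises_pdf(response, target, kappa_cont)
            + p_cat * vonmises_pdf(response, category_center, kappa_cat)
            + p_guess / (2 * math.pi))
```

Fitting such a model to delayed-estimation data (e.g., by maximum likelihood over all responses) is what lets one ask whether the categorical component improves on the standard continuous-plus-guessing account.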
Hue distinctiveness overrides category in determining performance in multiple object tracking.
Sun, Mengdan; Zhang, Xuemin; Fan, Lingxia; Hu, Luming
2018-02-01
The visual distinctiveness between targets and distractors can significantly facilitate performance in multiple object tracking (MOT), in which color is a feature that has been commonly used. However, the processing of color can be more than "visual." Color is continuous in chromaticity, while it is commonly grouped into discrete categories (e.g., red, green). Evidence from color perception suggested that color categories may have a unique role in visual tasks independent of its chromatic appearance. Previous MOT studies have not examined the effect of chromatic and categorical distinctiveness on tracking separately. The current study aimed to reveal how chromatic (hue) and categorical distinctiveness of color between the targets and distractors affects tracking performance. With four experiments, we showed that tracking performance was largely facilitated by the increasing hue distance between the target set and the distractor set, suggesting that perceptual grouping was formed based on hue distinctiveness to aid tracking. However, we found no color categorical effect, because tracking performance was not significantly different when the targets and distractors were from the same or different categories. It was concluded that the chromatic distinctiveness of color overrides category in determining tracking performance, suggesting a dominant role of perceptual feature in MOT.
Categorical working memory representations are used in delayed estimation of continuous colors.
Hardman, Kyle O; Vergauwe, Evie; Ricker, Timothy J
2017-01-01
In the last decade, major strides have been made in understanding visual working memory through mathematical modeling of color production responses. In the delayed color estimation task (Wilken & Ma, 2004), participants are given a set of colored squares to remember, and a few seconds later asked to reproduce those colors by clicking on a color wheel. The degree of error in these responses is characterized with mathematical models that estimate working memory precision and the proportion of items remembered by participants. A standard mathematical model of color memory assumes that items maintained in memory are remembered through memory for precise details about the particular studied shade of color. We contend that this model is incomplete in its present form because no mechanism is provided for remembering the coarse category of a studied color. In the present work, we remedy this omission and present a model of visual working memory that includes both continuous and categorical memory representations. In 2 experiments, we show that our new model outperforms this standard modeling approach, which demonstrates that categorical representations should be accounted for by mathematical models of visual working memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Vesker, Michael; Bahn, Daniela; Kauschke, Christina; Tschense, Monika; Degé, Franziska; Schwarzer, Gudrun
2018-01-01
In order to assess how the perception of audible speech and facial expressions influence one another for the perception of emotions, and how this influence might change over the course of development, we conducted two cross-modal priming experiments with three age groups of children (6-, 9-, and 12-year-olds), as well as college-aged adults. In Experiment 1, 74 children and 24 adult participants were tasked with categorizing photographs of emotional faces as positive or negative as quickly as possible after being primed with emotion words presented via audio in valence-congruent and valence-incongruent trials. In Experiment 2, 67 children and 24 adult participants carried out a similar categorization task, but with faces acting as visual primes, and emotion words acting as auditory targets. The results of Experiment 1 showed that participants made more errors when categorizing positive faces primed by negative words versus positive words, and that 6-year-old children are particularly sensitive to positive word primes, giving faster correct responses regardless of target valence. Meanwhile, the results of Experiment 2 did not show any congruency effects for priming by facial expressions. Thus, audible emotion words seem to exert an influence on the emotional categorization of faces, while faces do not seem to influence the categorization of emotion words in a significant way.
Discrimination and categorization of emotional facial expressions and faces in Parkinson's disease.
Alonso-Recio, Laura; Martín, Pilar; Rubio, Sandra; Serrano, Juan M
2014-09-01
Our objective was to compare the ability to discriminate and categorize emotional facial expressions (EFEs) and facial identity characteristics (age and/or gender) in a group of 53 individuals with Parkinson's disease (PD) and another group of 53 healthy subjects. On the one hand, by means of discrimination and identification tasks, we compared two stages in the visual recognition process that could be selectively affected in individuals with PD. On the other hand, facial expression versus gender and age comparison permits us to contrast whether the emotional or non-emotional content influences the configural perception of faces. In Experiment I, we did not find differences between groups, either with facial expression or age, in discrimination tasks. Conversely, in Experiment II, we found differences between the groups, but only in the EFE identification task. Taken together, our results indicate that configural perception of faces does not seem to be globally impaired in PD. However, this ability is selectively altered when the categorization of emotional faces is required. A deeper assessment of the PD group indicated that decline in facial expression categorization is more evident in a subgroup of patients with higher global impairment (motor and cognitive). Taken together, these results suggest that the problems found in facial expression recognition may be associated with the progressive neuronal loss in frontostriatal and mesolimbic circuits, which characterizes PD. © 2013 The British Psychological Society.
Modelling individual difference in visual categorization
Shen, Jianhong; Palmeri, Thomas J.
2016-01-01
Recent years have seen growing interest in understanding, characterizing, and explaining individual differences in visual cognition. We focus here on individual differences in visual categorization. Categorization is the fundamental visual ability to group different objects together as the same kind of thing. Research on visual categorization and category learning has been significantly informed by computational modeling, so our review will focus both on how formal models of visual categorization have captured individual differences and on how individual differences have informed the development of formal models. We first examine the potential sources of individual differences in leading models of visual categorization, providing a brief review of a range of different models. We then describe several examples of how computational models have captured individual differences in visual categorization. This review also provides a bit of an historical perspective, starting with models that predicted no individual differences, to those that captured group differences, to those that predicted true individual differences, and to more recent hierarchical approaches that can simultaneously capture both group and individual differences in visual categorization. Via this selective review, we see how considerations of individual differences can lead to important theoretical insights into how people visually categorize objects in the world around them. We also consider new directions for work examining individual differences in visual categorization. PMID:28154496
ERIC Educational Resources Information Center
Gao, Zaifeng; Bentin, Shlomo
2011-01-01
Face perception studies investigated how spatial frequencies (SF) are extracted from retinal display while forming a perceptual representation, or their selective use during task-imposed categorization. Here we focused on the order of encoding low-spatial frequencies (LSF) and high-spatial frequencies (HSF) from perceptual representations into…
The Role of Age and Executive Function in Auditory Category Learning
Reetzke, Rachel; Maddox, W. Todd; Chandrasekaran, Bharath
2015-01-01
Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence, into early adulthood; and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing, 20 children (age range, 7–12), 21 adolescents (age range, 13–19), and 19 young adults (age range, 20–23), learned to categorize novel dynamic ripple sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied Gabor patches in the visual domain. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e. a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning. PMID:26491987
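The strategy difference that the computational modeling above identifies can be illustrated with a toy decision rule over the two ripple dimensions. The dimension names, criterion values, and category labels here are hypothetical placeholders, not the study's fitted decision bounds:

```python
def conjunctive_rule(spectral, temporal, c_spec=0.5, c_temp=0.5):
    """Task-optimal conjunctive rule (sketch): respond 'A' only when
    the stimulus exceeds a criterion on BOTH modulation dimensions."""
    return "A" if (spectral > c_spec and temporal > c_temp) else "B"

def unidimensional_rule(spectral, c_spec=0.5):
    # A simpler, suboptimal single-dimension strategy, of the kind
    # less mature learners often adopt.
    return "A" if spectral > c_spec else "B"

# A stimulus high on spectral but low on temporal modulation
# is exactly where the two strategies disagree.
stim = (0.8, 0.2)
```

Decision-bound models of this sort are fit trial by trial to each participant's responses, which is how one infers whether a learner used the conjunctive rule or a simpler one.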
Aphasic Patients Exhibit a Reversal of Hemispheric Asymmetries in Categorical Color Discrimination
Paluy, Yulia; Gilbert, Aubrey L.; Baldo, Juliana V.; Dronkers, Nina F.; Ivry, Richard B.
2010-01-01
Patients with left hemisphere (LH) or right hemisphere (RH) brain injury due to stroke were tested on a speeded, color discrimination task in which two factors were manipulated: 1) the categorical relationship between the target and the distracters and 2) the visual field in which the target was presented. Similar to controls, the RH patients were faster in detecting targets in the right visual field when the target and distracters had different color names compared to when their names were the same. This effect was absent in the LH patients, consistent with the hypothesis that injury to the left hemisphere handicaps the automatic activation of lexical codes. Moreover, the LH patients showed a reversed effect, such that the advantage of different target-distracter names was now evident for targets in the left visual field. This reversal may suggest a reorganization of the color lexicon in the right hemisphere following left hemisphere brain injury and/or the unmasking of a heightened right hemisphere sensitivity to color categories. PMID:21216454
Emadi, Nazli; Rajimehr, Reza; Esteky, Hossein
2014-01-01
Spontaneous firing is a ubiquitous property of neural activity in the brain. Recent literature suggests that this baseline activity plays a key role in perception. However, it is not known how the baseline activity contributes to neural coding and behavior. Here, by recording from the single neurons in the inferior temporal cortex of monkeys performing a visual categorization task, we thoroughly explored the relationship between baseline activity, the evoked response, and behavior. Specifically we found that a low-frequency (<8 Hz) oscillation in the spike train, prior and phase-locked to the stimulus onset, was correlated with increased gamma power and neuronal baseline activity. This enhancement of the baseline activity was then followed by an increase in the neural selectivity and the response reliability and eventually a higher behavioral performance. PMID:25404900
Chen, Yi-Chuan; Spence, Charles
2018-04-30
We examined the time-courses and categorical specificity of the crossmodal semantic congruency effects elicited by naturalistic sounds and spoken words on the processing of visual pictures (Experiment 1) and printed words (Experiment 2). Auditory cues were presented at 7 different stimulus onset asynchronies (SOAs) with respect to the visual targets, and participants made speeded categorization judgments (living vs. nonliving). Three common effects were observed across 2 experiments: Both naturalistic sounds and spoken words induced a slowly emerging congruency effect when leading by 250 ms or more in the congruent compared with the incongruent condition, and a rapidly emerging inhibitory effect when leading by 250 ms or less in the incongruent condition as opposed to the noise condition. Only spoken words that did not match the visual targets elicited an additional inhibitory effect when leading by 100 ms or when presented simultaneously. Compared with nonlinguistic stimuli, the crossmodal congruency effects associated with linguistic stimuli occurred over a wider range of SOAs and occurred at a more specific level of the category hierarchy (i.e., the basic level) than was required by the task. A comprehensive framework is proposed to provide a dynamic view regarding how meaning is extracted during the processing of visual or auditory linguistic and nonlinguistic stimuli, therefore contributing to our understanding of multisensory semantic processing in humans. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Distraction by emotional sounds: Disentangling arousal benefits and orienting costs.
Max, Caroline; Widmann, Andreas; Kotz, Sonja A; Schröger, Erich; Wetzel, Nicole
2015-08-01
Unexpectedly occurring task-irrelevant stimuli have been shown to impair performance. They capture attention away from the main task, leaving fewer resources for target processing. However, the actual distraction effect depends on various variables; for example, only target-informative distractors have been shown to cause costs of attentional orienting. Furthermore, recent studies have shown that high-arousing emotional distractors, as compared with low-arousing neutral distractors, can improve performance by increasing alertness. We aimed to separate costs of attentional orienting and benefits of arousal by presenting negative and neutral environmental sounds (novels) as oddballs in an auditory-visual distraction paradigm. Participants categorized pictures while task-irrelevant sounds preceded visual targets in two conditions: (a) informative sounds reliably signaled the onset and occurrence of visual targets, and (b) noninformative sounds occurred unrelated to visual targets. Results confirmed that only informative novels yield distraction. Importantly, irrespective of the sounds' informational value, participants responded faster in trials with high-arousing negative as compared with moderately arousing neutral novels. That is, costs related to attentional orienting are modulated by information, whereas benefits related to emotional arousal are independent of a sound's informational value. This favors a nonspecific, facilitating cross-modal influence of emotional arousal on visual task performance and suggests that behavioral distraction by noninformative novels is controlled after their motivational significance has been determined. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Prototype learning and dissociable categorization systems in Alzheimer's disease.
Heindel, William C; Festa, Elena K; Ott, Brian R; Landy, Kelly M; Salmon, David P
2013-08-01
Recent neuroimaging studies suggest that prototype learning may be mediated by at least two dissociable memory systems depending on the mode of acquisition, with A/Not-A prototype learning dependent upon a perceptual representation system located within posterior visual cortex and A/B prototype learning dependent upon a declarative memory system associated with medial temporal and frontal regions. The degree to which patients with Alzheimer's disease (AD) can acquire new categorical information may therefore critically depend upon the mode of acquisition. The present study examined A/Not-A and A/B prototype learning in AD patients using procedures that allowed direct comparison of learning across tasks. Despite impaired explicit recall of category features in all tasks, patients showed differential patterns of category acquisition across tasks. First, AD patients demonstrated impaired prototype induction along with intact exemplar classification under incidental A/Not-A conditions, suggesting that the loss of functional connectivity within visual cortical areas disrupted the integration processes supporting prototype induction within the perceptual representation system. Second, AD patients demonstrated intact prototype induction but impaired exemplar classification during A/B learning under observational conditions, suggesting that this form of prototype learning is dependent on a declarative memory system that is disrupted in AD. Third, the surprisingly intact classification of both prototypes and exemplars during A/B learning under trial-and-error feedback conditions suggests that AD patients shifted control from their deficient declarative memory system to a feedback-dependent procedural memory system when training conditions allowed. 
Taken together, these findings not only increase our understanding of category learning in AD, but also provide new insights into the ways in which different memory systems interact to support the acquisition of categorical knowledge. Copyright © 2013 Elsevier Ltd. All rights reserved.
Learning and transfer of category knowledge in an indirect categorization task.
Helie, Sebastien; Ashby, F Gregory
2012-05-01
Knowledge representations acquired during category learning experiments are 'tuned' to the task goal. A useful paradigm for studying category representations is indirect category learning. In the present article, we propose a new indirect categorization task called the "same"-"different" categorization task. The same-different categorization task is a regular same-different task, but the question asked of the participants concerns the stimulus's category membership rather than its identity. Experiment 1 explores the possibility of indirectly learning rule-based and information-integration category structures using the new paradigm. The results suggest that there is little learning about the category structures resulting from an indirect categorization task unless the categories can be separated by a one-dimensional rule. Experiment 2 explores whether a category representation learned indirectly can be used in a direct classification task (and vice versa). The results suggest that previous categorical knowledge acquired during a direct classification task can be expressed in the same-different categorization task only when the categories can be separated by a rule that is easily verbalized. Implications of these results for categorization research are discussed.
Detecting and reacting to change: the effect of exposure to narrow categorizations.
Chakravarti, Amitav; Fang, Christina; Shapira, Zur
2011-11-01
The ability to detect a change, to accurately assess the magnitude of the change, and to react to that change in a commensurate fashion are of critical importance in many decision domains. Thus, it is important to understand the factors that systematically affect people's reactions to change. In this article we document a novel effect: decision makers' reactions to a change (e.g., a visual change, a technology change) were systematically affected by the type of categorizations they encountered in an unrelated prior task (e.g., the response categories associated with a survey question). We found that prior exposure to narrow, as opposed to broad, categorizations improved decision makers' ability to detect change and led to stronger reactions to a given change. These differential reactions occurred because the prior categorizations, even though unrelated, altered the extent to which the subsequently presented change was perceived as either a relatively large change or a relatively small one.
Fleischhauer, Monika; Strobel, Alexander; Diers, Kersten; Enge, Sören
2014-02-01
The Implicit Association Test (IAT) is a widely used latency-based categorization task that indirectly measures the strength of automatic associations between target and attribute concepts. So far, little is known about the perceptual and cognitive processes underlying personality IATs. Thus, the present study examined event-related potential indices during the execution of an IAT measuring neuroticism (N = 70). The IAT effect was strongly modulated by the P1 component indicating early facilitation of relevant visual input and by a P3b-like late positive component reflecting the efficacy of stimulus categorization. Both components covaried, and larger amplitudes led to faster responses. The results suggest a relationship between early perceptual and semantic processes operating at a more automatic, implicit level and later decision-related categorization of self-relevant stimuli contributing to the IAT effect. Copyright © 2013 Society for Psychophysiological Research.
Jiang, Xiong; Chevillet, Mark A; Rauschecker, Josef P; Riesenhuber, Maximilian
2018-04-18
Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain. Copyright © 2018 Elsevier Inc. All rights reserved.
The Cost of Switching Language in a Semantic Categorization Task.
ERIC Educational Resources Information Center
von Studnitz, Roswitha E.; Green, David W.
2002-01-01
Presents a study in which German-English bilinguals decided whether a visually presented word, either German or English, referred to an animate or to an inanimate entity. Bilinguals were slower to respond on a language switch trial than on language non-switch trials but only if they had to make the same response as on the prior trial. (Author/VWL)
ERIC Educational Resources Information Center
Sloutsky, Vladimir M.; Fisher, Anna V.
2012-01-01
Noles and Gelman (2012) attempt to critically reevaluate the claim that linguistic labels affect children's judgments of visual similarity. They report results of an experiment that used a modified version of Sloutsky and Fisher's (2004) task and conclude that "labels do not generally affect children's perceptual similarity judgments; rather,…
An fMRI investigation into the effect of preceding stimuli during visual oddball tasks.
Fajkus, Jiří; Mikl, Michal; Shaw, Daniel Joel; Brázdil, Milan
2015-08-15
This study investigates the modulatory effect of stimulus sequence on neural responses to novel stimuli. A group of 34 healthy volunteers underwent event-related functional magnetic resonance imaging while performing a three-stimulus visual oddball task involving randomly presented frequent stimuli and two types of infrequent stimuli: targets and distractors. We developed a modified categorization of rare stimuli that incorporated the type of preceding rare stimulus, and analyzed the event-related functional data according to this sequence categorization; specifically, we explored hemodynamic response modulation associated with increasing rare-to-rare stimulus interval. For two consecutive targets, a modulation of brain function was evident throughout posterior midline and lateral temporal cortex, while responses to targets preceded by distractors were modulated in a widely distributed fronto-parietal system. For distractors that followed targets, brain function was modulated throughout a set of posterior brain structures. For two successive distractors, however, no significant modulation was observed, which is consistent with previous studies and our primary hypothesis. The addition of the aforementioned technique extends the possibilities of conventional oddball task analysis, enabling researchers to explore the effects of the whole range of rare-stimulus intervals. This methodology can be applied to study a wide range of associated cognitive mechanisms, such as decision making, expectancy, and attention. Copyright © 2015 Elsevier B.V. All rights reserved.
Social Experience Does Not Abolish Cultural Diversity in Eye Movements
Kelly, David J.; Jack, Rachael E.; Miellet, Sébastien; De Luca, Emanuele; Foreman, Kay; Caldara, Roberto
2011-01-01
Adults from Eastern (e.g., China) and Western (e.g., USA) cultural groups display pronounced differences in a range of visual processing tasks. For example, the eye movement strategies used for information extraction during a variety of face processing tasks (e.g., identification and categorization of facial expressions of emotion) differ across cultural groups. Many previous studies have asserted that culture itself is responsible for shaping the way we process visual information, yet this has never been directly investigated. In the current study, we assessed the relative contributions of genetic and cultural factors by testing face processing in a population of British Born Chinese adults using face recognition and expression classification tasks. Contrary to predictions made by the cultural differences framework, the majority of British Born Chinese adults deployed “Eastern” eye movement strategies, while approximately 25% of participants displayed “Western” strategies. Furthermore, the cultural eye movement strategies used by individuals were consistent across recognition and expression tasks. These findings suggest that “culture” alone cannot straightforwardly account for diversity in eye movement patterns. Instead, a more complex understanding of how the environment and individual experiences influence the mechanisms that govern visual processing is required. PMID:21886626
Soto, Fabian A.; Bassett, Danielle S.; Ashby, F. Gregory
2016-01-01
Recent work has shown that multimodal association areas, including frontal, temporal, and parietal cortex, are focal points of functional network reconfiguration during human learning and performance of cognitive tasks. On the other hand, neurocomputational theories of category learning suggest that the basal ganglia and related subcortical structures are focal points of functional network reconfiguration during early learning of some categorization tasks, but become less so with the development of automatic categorization performance. Using a combination of network science and multilevel regression, we explore how changes in the connectivity of small brain regions can predict behavioral changes during training in a visual categorization task. We find that initial category learning, as indexed by changes in accuracy, is predicted by increasingly efficient integrative processing in subcortical areas, with higher functional specialization and more efficient integration across modules, but a lower cost in terms of redundancy of information processing. The development of automaticity, as indexed by changes in the speed of correct responses, was predicted by lower clustering (particularly in subcortical areas), higher strength (highest in cortical areas), and higher betweenness centrality. By combining neurocomputational theories and network-scientific methods, these results synthesize the dissociative roles of multimodal association areas and subcortical structures in the development of automaticity during category learning. PMID:27453156
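The node-level graph measures this abstract names can be illustrated with a toy calculation. The sketch below, with invented region names and edge weights (not data from the study), computes weighted strength and the local clustering coefficient on a small functional-connectivity-style graph; betweenness centrality is omitted for brevity.

```python
# Toy illustration (stdlib only) of two node-level network measures
# named in the abstract: strength (weighted degree) and the clustering
# coefficient. Region names and edge weights are invented.
from itertools import combinations

edges = {
    ("V1", "IT"): 0.8, ("IT", "PFC"): 0.6, ("PFC", "BG"): 0.7,
    ("BG", "V1"): 0.3, ("IT", "BG"): 0.5,
}
adj = {}
for (a, b), w in edges.items():
    adj.setdefault(a, {})[b] = w
    adj.setdefault(b, {})[a] = w

# Strength: sum of the weights of a node's edges.
strength = {n: sum(nbrs.values()) for n, nbrs in adj.items()}

def clustering(n):
    """Unweighted local clustering coefficient: the fraction of a
    node's neighbour pairs that are directly connected to each other."""
    nbrs = list(adj[n])
    if len(nbrs) < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2.0 * links / (len(nbrs) * (len(nbrs) - 1))
```

In this toy graph, "IT" has the highest strength (1.9), while "V1" has a clustering coefficient of 1.0 because its two neighbours are themselves connected; the study's analyses apply measures of this kind to fMRI-derived connectivity, at a far larger scale.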
The influence of print exposure on the body-object interaction effect in visual word recognition.
Hansen, Dana; Siakaluk, Paul D; Pexman, Penny M
2012-01-01
We examined the influence of print exposure on the body-object interaction (BOI) effect in visual word recognition. High print exposure readers and low print exposure readers either made semantic categorizations ("Is the word easily imageable?"; Experiment 1) or phonological lexical decisions ("Does the item sound like a real English word?"; Experiment 2). The results from Experiment 1 showed that there was a larger BOI effect for the low print exposure readers than for the high print exposure readers in semantic categorization, though an effect was observed for both print exposure groups. However, the results from Experiment 2 showed that the BOI effect was observed only for the high print exposure readers in phonological lexical decision. The results of the present study suggest that print exposure does influence the BOI effect, and that this influence varies as a function of task demands.
Fu, Qiufang; Liu, Yong-Jin; Dienes, Zoltan; Wu, Jianhui; Chen, Wenfeng; Fu, Xiaolan
2017-01-01
It remains controversial whether visual awareness is correlated with early activation indicated by the VAN (visual awareness negativity), as the recurrent processing hypothesis proposes, or with later activation indicated by the P3 or LP (late positivity), as global workspace theories suggest. To address this issue, a backward masking task was adopted, in which participants were first asked to categorize natural scenes of color photographs and line-drawings and then to rate the clarity of their visual experience on a Perceptual Awareness Scale (PAS). The interstimulus interval between the scene and the mask was manipulated. The behavioral results showed that categorization accuracy increased with PAS ratings for both color photographs and line-drawings, with no difference in accuracy between the two types of images for each rating, indicating that the experience rating reflected visibility. Importantly, the event-related potential (ERP) results revealed that for correct trials, the early posterior N1 and anterior P2 components changed with the PAS ratings for color photographs, but did not vary with the PAS ratings for line-drawings, indicating that the N1 and P2 do not always correlate with subjective visual awareness. Moreover, for both types of images, the anterior N2 and posterior VAN changed with the PAS ratings in a linear way, while the LP changed with the PAS ratings in a non-linear way, suggesting that these components relate to different types of subjective awareness. The results reconcile the apparently contradictory predictions of different theories and help to resolve the current debate on neural correlates of visual awareness.
Enhanced HMAX model with feedforward feature learning for multiclass categorization.
Li, Yinlin; Wu, Wei; Zhang, Bo; Li, Fengfu
2015-01-01
In recent years, interdisciplinary research between neuroscience and computer vision has promoted development in both fields. Many biologically inspired visual models have been proposed; among them, the Hierarchical Max-pooling model (HMAX) is a feedforward model mimicking the structures and functions of the V1 to posterior inferotemporal (PIT) layers of the primate visual cortex, which can generate a series of position- and scale-invariant features. However, it could be improved with attention modulation and memory processing, two important properties of the primate visual cortex. Thus, in this paper, based on recent biological research on the primate visual cortex, we mimic the first 100-150 ms of visual cognition to enhance the HMAX model, focusing mainly on the unsupervised feedforward feature learning process. The main modifications are as follows: (1) To mimic the attention modulation mechanism of the V1 layer, a bottom-up saliency map is computed in the S1 layer of the HMAX model, which can support the initial feature extraction for memory processing; (2) To mimic the learning, clustering, and short-term-to-long-term memory conversion abilities of V2 and IT, an unsupervised iterative clustering method is used to learn clusters from multiscale middle-level patches, which are taken as long-term memory; (3) Inspired by the multiple-feature encoding mode of the primate visual cortex, information including color, orientation, and spatial position is encoded progressively in different layers of the HMAX model. By adding a softmax layer at the top of the model, multiclass categorization experiments can be conducted, and the results on Caltech101 show that the enhanced model, with a smaller memory size, achieves higher accuracy than the original HMAX model and also outperforms other unsupervised feature learning methods in the multiclass categorization task.
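As a rough illustration of the HMAX principle this abstract builds on (simple "S" units detect local features; complex "C" units max-pool over position to gain invariance), here is a minimal stdlib sketch; the filter, image, and pooling size are toy assumptions, not components of the enhanced model described above.

```python
# Minimal sketch of the S1/C1 stages of an HMAX-style model:
# an S1-like stage responds to local vertical edges, and a C1-like
# stage takes a max over a spatial neighbourhood, yielding tolerance
# to small shifts of the input. The image and sizes are toy choices.

image = [
    [0, 0, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 1, 0],
]

def s1_vertical(img):
    """S1-like response: vertical-edge energy at each position."""
    h, w = len(img), len(img[0])
    return [[abs(img[y][x + 1] - img[y][x]) for x in range(w - 1)]
            for y in range(h)]

def c1_max_pool(resp, size=2):
    """C1-like unit: max over a local neighbourhood -> shift tolerance."""
    h, w = len(resp), len(resp[0])
    return [[max(resp[y + dy][x + dx]
                 for dy in range(size) for dx in range(size))
             for x in range(0, w - size + 1, size)]
            for y in range(0, h - size + 1, size)]

s1 = s1_vertical(image)
c1 = c1_max_pool(s1)
```

Because C1 takes a max over a neighbourhood, shifting the vertical bar by one pixel leaves the pooled response unchanged, which is the position invariance the abstract refers to.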
Lundqvist, Daniel; Svärd, Joakim; Michelgård Palmquist, Åsa; Fischer, Håkan; Svenningsson, Per
2017-09-01
The literature on emotional processing in Parkinson's disease (PD) patients shows mixed results. This may be because of various methodological and/or patient-related differences, such as failing to adjust for cognitive functioning, depression, and/or mood. In the current study, we tested PD patients and healthy controls (HCs) using emotional stimuli across a variety of tasks, including visual search, short-term memory (STM), categorical perception, and emotional stimulus rating. The PD and HC groups were matched on cognitive ability, depression, and mood. We also explored possible relationships between task results and antiparkinsonian treatment effects, as measured by levodopa equivalent dosages (LED), in the PD group. The results show that PD patients use a larger emotional range than HCs when rating emotional faces on valence, arousal, and potency. The results also show that dopaminergic therapy was correlated with stimulus ratings such that PD patients with higher LED scores rated negative faces as less arousing, less negative, and less powerful. Finally, the results show that PD patients displayed a general slowing effect in the visual search tasks compared with HCs, indicating overall slowed responses. There were no group differences observed in the STM or categorical perception tasks. Our results indicate a relationship among emotional responses, PD, and dopaminergic therapy, in which PD per se is associated with stronger emotional responses, whereas LED levels are negatively correlated with the strength of emotional responses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
ERIC Educational Resources Information Center
Kovack-Lesh, Kristine A.; Horst, Jessica S.; Oakes, Lisa M.
2008-01-01
We examined the effect of 4-month-old infants' previous experience with dogs, cats, or both and their online looking behavior on their learning of the adult-defined category of "cat" in a visual familiarization task. Four-month-old infants' (N = 123) learning in the laboratory was jointly determined by whether or not they had experience…
Forms of Memory for Representation of Visual Objects
1991-04-15
Dyslexic Participants Show Intact Spontaneous Categorization Processes
ERIC Educational Resources Information Center
Nikolopoulos, Dimitris S.; Pothos, Emmanuel M.
2009-01-01
We examine the performance of dyslexic participants on an unsupervised categorization task against that of matched non-dyslexic control participants. Unsupervised categorization is a cognitive process critical for conceptual development. Existing research in dyslexia has emphasized perceptual tasks and supervised categorization tasks (for which…
Koivisto, Mika; Kahila, Ella
2017-04-01
Top-down processes are widely assumed to be essential to visual awareness, the subjective experience of seeing. However, previous studies have not tried to directly separate the roles of different types of top-down influences in visual awareness. We studied the effects of top-down preparation and object substitution masking (OSM) on visual awareness during categorization of objects presented against natural scene backgrounds. The results showed that preparation facilitated categorization but did not influence visual awareness. OSM reduced visual awareness and impaired categorization. The dissociations between the effects of preparation and OSM on visual awareness and on categorization imply that they act at different stages of cognitive processing. We propose that preparation acts at the top of the visual hierarchy, whereas OSM interferes with processes occurring at lower levels of the hierarchy. These lower-level processes play an essential role in visual awareness. Copyright © 2017 Elsevier Ltd. All rights reserved.
Categorization difficulty modulates the mediated route for response selection in task switching.
Schneider, Darryl W
2017-12-22
Conflict during response selection in task switching is indicated by the response congruency effect: worse performance for incongruent targets (requiring different responses across tasks) than for congruent targets (requiring the same response). The effect can be explained by dual-task processing in a mediated route for response selection, whereby targets are categorized with respect to both tasks. In the present study, the author tested predictions for the modulation of response congruency effects by categorization difficulty derived from a relative-speed-of-processing hypothesis. Categorization difficulty was manipulated for the relevant and irrelevant task dimensions in a novel spatial task-switching paradigm that involved judging the locations of target dots in a grid, without repetition of dot configurations. Response congruency effects were observed and they varied systematically with categorization difficulty (e.g., being larger when irrelevant categorization was easy than when it was hard). These results are consistent with the relative-speed-of-processing hypothesis and suggest that task-switching models that implement variations of the mediated route for response selection need to address the time course of categorization.
Harel, Assaf; Bentin, Shlomo
2013-01-01
A much-debated question in object recognition is whether expertise for faces and expertise for non-face objects utilize common perceptual information. We investigated this issue by assessing the diagnostic information required for different types of expertise. Specifically, we asked whether face categorization and expert car categorization at the subordinate level relies on the same spatial frequency (SF) scales. Fifteen car experts and fifteen novices performed a category verification task with spatially filtered images of faces, cars, and airplanes. Images were categorized based on their basic (e.g. "car") and subordinate level (e.g. "Japanese car") identity. The effect of expertise was not evident when objects were categorized at the basic level. However, when the car experts categorized faces and cars at the subordinate level, the two types of expertise required different kinds of SF information. Subordinate categorization of faces relied on low SFs more than on high SFs, whereas subordinate expert car categorization relied on high SFs more than on low SFs. These findings suggest that expertise in the recognition of objects and faces do not utilize the same type of information. Rather, different types of expertise require different types of diagnostic visual information.
Interhemispheric interaction in the split-brain.
Lambert, A J
1991-01-01
An experiment is reported in which a split-brain patient (LB) was simultaneously presented with two words, one to the left and one to the right of fixation. He was instructed to categorize the right-sided word (living vs non-living), and to ignore anything appearing to the left of fixation. LB's performance on this task closely resembled that of normal neurologically intact individuals. Manual response speed was slower when the unattended (left visual field) word belonged to the same category as the right visual field word. Implications of this finding for views of the split-brain syndrome are discussed.
Distributed encoding of spatial and object categories in primate hippocampal microcircuits
Opris, Ioan; Santos, Lucas M.; Gerhardt, Greg A.; Song, Dong; Berger, Theodore W.; Hampson, Robert E.; Deadwyler, Sam A.
2015-01-01
The primate hippocampus plays critical roles in the encoding, representation, categorization and retrieval of cognitive information. Such cognitive abilities may use the transformational input-output properties of hippocampal laminar microcircuitry to generate spatial representations and to categorize features of objects, images, and their numeric characteristics. Four nonhuman primates were trained in a delayed-match-to-sample (DMS) task while multi-neuron activity was simultaneously recorded from the CA1 and CA3 hippocampal cell fields. The results show differential encoding of spatial location and categorization of images presented as relevant stimuli in the task. Individual hippocampal cells encoded visual stimuli only on specific types of trials in which retention of either the Sample image or the spatial position of the Sample image, indicated at the beginning of the trial, was required. Consistent with such encoding, it was shown that patterned microstimulation applied during Sample image presentation facilitated selection of either Sample image spatial locations or types of images, during the Match phase of the task. These findings support the existence of specific codes for spatial and numeric object representations in primate hippocampus which can be applied on differentially signaled trials. Moreover, the transformational properties of hippocampal microcircuitry, together with the patterned microstimulation, support the practical importance of this approach for cognitive enhancement and rehabilitation, as needed for memory neuroprosthetics. PMID:26500473
Emerging category representation in the visual forebrain hierarchy of pigeons (Columba livia).
Azizi, Amir Hossein; Pusch, Roland; Koenen, Charlotte; Klatt, Sebastian; Bröcker, Franziska; Thiele, Samuel; Kellermann, Janosch; Güntürkün, Onur; Cheng, Sen
2018-06-06
Recognizing and categorizing visual stimuli are cognitive functions vital for survival, and an important feature of visual systems in primates as well as in birds. Visual stimuli are processed along the ventral visual pathway. At every stage in the hierarchy, neurons respond selectively to more complex features, transforming the population representation of the stimuli. It is therefore easier to read out category information in higher visual areas. While explicit category representations have been observed in the primate brain, less is known about equivalent processes in the avian brain. Even though their brain anatomies are radically different, it has been hypothesized that visual object representations are comparable across mammals and birds. In the present study, we investigated category representations in the pigeon visual forebrain using recordings from single cells responding to photographs of real-world objects. Using a linear classifier, we found that the population activity in the visual associative area mesopallium ventrolaterale (MVL) distinguishes between animate and inanimate objects, although this distinction is not required by the task. By contrast, a population of cells in the entopallium, a region that is lower in the hierarchy of visual areas and that is related to the primate extrastriate cortex, lacked this information. A model that pools responses of simple cells, which function as edge detectors, can account for the animate vs. inanimate categorization in the MVL, but performance in the model is based on different features than in MVL. Therefore, processing in MVL cells is very likely more abstract than simple computations on the output of edge detectors. Copyright © 2018. Published by Elsevier B.V.
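The linear read-out described in the abstract above can be illustrated with a toy sketch (this is not the study's analysis pipeline): simulated population firing rates carry a weak, distributed animate/inanimate signal that no single cell expresses strongly, and a perceptron-style linear classifier recovers the category from the population. All rates, cell counts, and signal strengths below are invented for illustration.

```python
import random

random.seed(0)

def simulate_population(category, n_cells=20, n_trials=40):
    """Simulate firing rates for a neural population.

    Each cell carries a small category-dependent rate shift,
    mimicking a distributed (not single-cell) category code.
    """
    # Per-cell preference: +1 favors animate, -1 favors inanimate.
    prefs = [1 if i % 2 == 0 else -1 for i in range(n_cells)]
    label = 1 if category == "animate" else -1
    trials = []
    for _ in range(n_trials):
        rates = [5.0 + 0.5 * p * label + random.gauss(0, 1.0) for p in prefs]
        trials.append((rates, label))
    return trials

def train_perceptron(data, epochs=20, lr=0.01):
    """Fit a linear decision boundary w.x + b with the perceptron rule."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            s = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * s <= 0:  # misclassified: nudge the boundary
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def accuracy(data, w, b):
    correct = sum(
        1 for x, y in data
        if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) > 0
    )
    return correct / len(data)

train = simulate_population("animate") + simulate_population("inanimate")
w, b = train_perceptron(train)
test = simulate_population("animate") + simulate_population("inanimate")
print(f"decoding accuracy: {accuracy(test, w, b):.2f}")
```

Because the category signal is spread across many weakly tuned cells, the linear read-out succeeds where inspecting any individual cell would not, which is the sense in which "population activity distinguishes" categories.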
Peyrin, C; Lallier, M; Démonet, J F; Pernet, C; Baciu, M; Le Bas, J F; Valdois, S
2012-03-01
A dissociation between phonological and visual attention (VA) span disorders has been reported in dyslexic children. This study investigates whether this cognitively-based dissociation has a neurobiological counterpart through the investigation of two cases of developmental dyslexia. LL showed a phonological disorder but preserved VA span whereas FG exhibited the reverse pattern. During a phonological rhyme judgement task, LL showed decreased activation of the left inferior frontal gyrus whereas this region was activated at the level of the controls in FG. Conversely, during a visual categorization task, FG demonstrated decreased activation of the parietal lobules whereas these regions were activated in LL as in the controls. These contrasted patterns of brain activation thus mirror the cognitive disorders' dissociation. These findings provide the first evidence for an association between distinct brain mechanisms and distinct cognitive deficits in developmental dyslexia, emphasizing the importance of taking into account the heterogeneity of the reading disorder. Copyright © 2012 Elsevier Inc. All rights reserved.
Odour discrimination and identification are improved in early blindness.
Cuevas, Isabel; Plaza, Paula; Rombaux, Philippe; De Volder, Anne G; Renier, Laurent
2009-12-01
Previous studies showed that early blind humans develop superior abilities in the use of their remaining senses, hypothetically due to a functional reorganization of the deprived visual brain areas. While auditory and tactile functions have long been investigated, little is known about the effects of early visual deprivation on olfactory processing. However, blind humans make an extensive use of olfactory information in their daily life. Here we investigated olfactory discrimination and identification abilities in early blind subjects and age-matched sighted controls. Three levels of cuing were used in the identification task, i.e., free-identification (no cue), categorization (semantic cues) and multiple choice (semantic and phonological cues). Early blind subjects significantly outperformed the controls in odour discrimination, free-identification and categorization. In addition, the largest group difference was observed in the free-identification condition as compared to the categorization and multiple choice conditions. This indicated that better access to the semantic information from odour perception accounted for part of the improved olfactory performance in odour identification in the blind. We concluded that early blind subjects have both improved perceptual abilities and better access to the information stored in semantic memory than sighted subjects.
Preservation of crossmodal selective attention in healthy aging
Hugenschmidt, Christina E.; Peiffer, Ann M.; McCoy, Thomas P.; Hayasaka, Satoru; Laurienti, Paul J.
2010-01-01
The goal of the present study was to determine if older adults benefited from attention to a specific sensory modality in a voluntary attention task and evidenced changes in voluntary or involuntary attention when compared to younger adults. Suppressing and enhancing effects of voluntary attention were assessed using two cued forced-choice tasks, one that asked participants to localize and one that asked them to categorize visual and auditory targets. Involuntary attention was assessed using the same tasks, but with no attentional cues. The effects of attention were evaluated using traditional comparisons of means and Cox proportional hazards models. All analyses showed that older adults benefited behaviorally from selective attention in both visual and auditory conditions, including robust suppressive effects of attention. Of note, the performance of the older adults was commensurate with that of younger adults in almost all analyses, suggesting that older adults can successfully engage crossmodal attention processes. Thus, age-related increases in distractibility across sensory modalities are likely due to mechanisms other than deficits in attentional processing. PMID:19404621
Catching Audiovisual Interactions With a First-Person Fisherman Video Game.
Sun, Yile; Hickey, Timothy J; Shinn-Cunningham, Barbara; Sekuler, Robert
2017-07-01
The human brain is excellent at integrating information from different sources across multiple sensory modalities. To examine one particularly important form of multisensory interaction, we manipulated the temporal correlation between visual and auditory stimuli in a first-person fisherman video game. Subjects saw rapidly swimming fish whose size oscillated, either at 6 or 8 Hz. Subjects categorized each fish according to its rate of size oscillation, while trying to ignore a concurrent broadband sound seemingly emitted by the fish. In three experiments, categorization was faster and more accurate when the rate at which a fish oscillated in size matched the rate at which the accompanying, task-irrelevant sound was amplitude modulated. Control conditions showed that the difference between responses to matched and mismatched audiovisual signals reflected a performance gain in the matched condition, rather than a cost from the mismatched condition. The performance advantage with matched audiovisual signals was remarkably robust over changes in task demands between experiments. Performance with matched or unmatched audiovisual signals improved over successive trials at about the same rate, emblematic of perceptual learning in which visual oscillation rate becomes more discriminable with experience. Finally, analysis at the level of individual subjects' performance pointed to differences in the rates at which subjects can extract information from audiovisual stimuli.
Mishler, Ada D; Neider, Mark B
2017-11-01
The redundant signals effect, a speed-up in response times with multiple targets compared to a single target in one display, is well-documented, with some evidence suggesting that it can occur even in conceptual processing when targets are presented bilaterally. The current study was designed to determine whether or not category-based redundant signals can speed up processing even without bilateral presentation. Toward that end, participants performed a go/no-go visual task in which they responded only to members of the target category (i.e., they responded only to numbers and did not respond to letters). Numbers and letters were presented along an imaginary vertical line in the center of the visual field. When the single signal trials contained a nontarget letter (Experiment 1), there was a significant redundant signals effect. The effect was not significant when the single-signal trials did not contain a nontarget letter (Experiments 2 and 3). The results indicate that, when targets are defined categorically and not presented bilaterally, the redundant signals effect may be an effect of reducing the presence of information that draws attention away from the target. This suggests that redundant signals may not speed up conceptual processing when interhemispheric presentation is not available. Copyright © 2017 Elsevier B.V. All rights reserved.
Groen, Iris I. A.; Ghebreab, Sennay; Lamme, Victor A. F.; Scholte, H. Steven
2012-01-01
The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with little differences were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other, well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task. PMID:23093921
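The higher-order contrast statistics the authors compare against, skewness and excess kurtosis of a contrast distribution, are straightforward to compute. A minimal stdlib sketch follows; the two sample sets are invented stand-ins for local-contrast values of two images (roughly Gaussian vs. sparse and heavy-tailed), not real image responses.

```python
import math
import random

def moments(values):
    """Sample skewness and excess kurtosis of a contrast distribution."""
    n = len(values)
    mean = sum(values) / n
    devs = [v - mean for v in values]
    var = sum(d * d for d in devs) / n
    sd = math.sqrt(var)
    skew = sum(d ** 3 for d in devs) / (n * sd ** 3)
    kurt = sum(d ** 4 for d in devs) / (n * var ** 2) - 3.0
    return skew, kurt

random.seed(1)
# Illustrative stand-ins for the local-contrast values of two images:
# a roughly Gaussian distribution vs. a sparse, heavy-tailed one
# (cubing Gaussian samples concentrates mass near zero with rare extremes).
smooth = [random.gauss(0, 1) for _ in range(5000)]
sparse = [random.gauss(0, 1) ** 3 for _ in range(5000)]

s1, k1 = moments(smooth)
s2, k2 = moments(sparse)
print(f"smooth: skew={s1:.2f} excess kurtosis={k1:.2f}")
print(f"sparse: skew={s2:.2f} excess kurtosis={k2:.2f}")
```

The heavy-tailed distribution yields much higher kurtosis, which is why such statistics can separate image classes even though, per the abstract, they fail to predict ERP patterns or behavioral confusions.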
The remains of the trial: goal-determined inter-trial suppression of selective attention.
Lleras, Alejandro; Levinthal, Brian R; Kawahara, Jun
2009-01-01
When an observer is searching through the environment for a target, what are the consequences of not finding a target in a given environment? We examine this issue in detail and propose that the visual system systematically tags environmental information during a search, in an effort to improve performance in future search events. Information that led to search successes is positively tagged, so as to favor future deployments of attention toward that type of information, whereas information that led to search failures is negatively tagged, so as to discourage future deployments of attention toward such failed information. To study this, we use an oddball-search task, where participants search for one item that differs from all others along one feature or belongs to a different visual category, from the other stimuli in the display. We find that when participants perform oddball-search tasks, the absence of a target delays identification of future targets containing the feature or category that was shared by all distractors in the target-absent trial. We interpret this effect as reflecting an implicit assessment of performance: target-absent trials can be viewed as processing "failures" insofar as they do not provide the visual system with the information needed to complete the task. Here, we study the goal-oriented nature of this bias in three ways. First, we show that the direction of the bias is determined by the experimental task. Second, we show that the effect is independent of the mode of presentation of stimuli: it happens with both serial and simultaneous stimuli presentation. Third, we show that, when using categorically defined oddballs as the search stimuli (find the face among houses or vice versa), the bias generalizes to unseen members of the "failed" category. Together, these findings support the idea that these inter-trial attentional biases arise from high-level, task-constrained, implicit assessments of performance, involving categorical associations between classes of stimuli and behavioral outcomes (success/failure), which are independent of attentional modality (temporal vs. spatial attention).
Cognitive processing of visual images in migraine populations in between headache attacks.
Mickleborough, Marla J S; Chapman, Christine M; Toma, Andreea S; Handy, Todd C
2014-09-25
People with migraine headache have altered interictal visual sensory-level processing in between headache attacks. Here we examined the extent to which these migraine abnormalities may extend into higher visual processing such as implicit evaluative analysis of visual images in between migraine events. Specifically, we asked two groups of participants, migraineurs (N=29) and non-migraine controls (N=29), to view a set of unfamiliar commercial logos in the context of a target identification task as the brain electrical responses to these objects were recorded via event-related potentials (ERPs). Following this task, participants individually identified those logos that they most liked or disliked. We applied a between-groups comparison of how ERP responses to logos varied as a function of hedonic evaluation. Our results suggest migraineurs have abnormal implicit evaluative processing of visual stimuli. Specifically, migraineurs lacked a bias for disliked logos found in control subjects, as measured via a late positive potential (LPP) ERP component. These results suggest post-sensory consequences of migraine in between headache events, specifically abnormal cognitive evaluative processing with a lack of normal categorical hedonic evaluation. Copyright © 2014 Elsevier B.V. All rights reserved.
The Timing of Visual Object Categorization
Mack, Michael L.; Palmeri, Thomas J.
2011-01-01
An object can be categorized at different levels of abstraction: as natural or man-made, animal or plant, bird or dog, or as a Northern Cardinal or Pyrrhuloxia. There has been growing interest in understanding how quickly categorizations at different levels are made and how the timing of those perceptual decisions changes with experience. We specifically contrast two perspectives on the timing of object categorization at different levels of abstraction. By one account, the relative timing implies a relative timing of stages of visual processing that are tied to particular levels of object categorization: Fast categorizations are fast because they precede other categorizations within the visual processing hierarchy. By another account, the relative timing reflects when perceptual features are available over time and the quality of perceptual evidence used to drive a perceptual decision process: Fast simply means fast, it does not mean first. Understanding the short-term and long-term temporal dynamics of object categorizations is key to developing computational models of visual object recognition. We briefly review a number of models of object categorization and outline how they explain the timing of visual object categorization at different levels of abstraction. PMID:21811480
Collet, Anne-Claire; Fize, Denis; VanRullen, Rufin
2015-01-01
Rapid visual categorization is a crucial ability for survival of many animal species, including monkeys and humans. In real conditions, objects (either animate or inanimate) are never isolated but embedded in a complex background made of multiple elements. It has been shown in humans and monkeys that the contextual background can either enhance or impair object categorization, depending on context/object congruency (for example, an animal in a natural vs. man-made environment). Moreover, a scene is not only a collection of objects; it also has global physical features (i.e., phase and amplitude of Fourier spatial frequencies) which help define its gist. In our experiment, we aimed to explore and compare the contribution of the amplitude spectrum of scenes in the context-object congruency effect in monkeys and humans. We designed a rapid visual categorization task, Animal versus Non-Animal, using as contexts both real scene photographs and noisy backgrounds built from the amplitude spectrum of real scenes but with randomized phase spectrum. We showed that even though the contextual congruency effect was comparable in both species when the context was a real scene, it differed when the foreground object was surrounded by a noisy background: in monkeys we found a similar congruency effect in both conditions, but in humans the congruency effect was absent (or even reversed) when the context was a noisy background. PMID:26207915
Modeling Image Patches with a Generic Dictionary of Mini-Epitomes
Papandreou, George; Chen, Liang-Chieh; Yuille, Alan L.
2015-01-01
The goal of this paper is to question the necessity of features like SIFT in categorical visual recognition tasks. As an alternative, we develop a generative model for the raw intensity of image patches and show that it can support image classification performance on par with optimized SIFT-based techniques in a bag-of-visual-words setting. Key ingredient of the proposed model is a compact dictionary of mini-epitomes, learned in an unsupervised fashion on a large collection of images. The use of epitomes allows us to explicitly account for photometric and position variability in image appearance. We show that this flexibility considerably increases the capacity of the dictionary to accurately approximate the appearance of image patches and support recognition tasks. For image classification, we develop histogram-based image encoding methods tailored to the epitomic representation, as well as an “epitomic footprint” encoding which is easy to visualize and highlights the generative nature of our model. We discuss in detail computational aspects and develop efficient algorithms to make the model scalable to large tasks. The proposed techniques are evaluated with experiments on the challenging PASCAL VOC 2007 image classification benchmark. PMID:26321859
Adeli, Hossein; Vitu, Françoise; Zelinsky, Gregory J
2017-02-08
Modern computational models of attention predict fixations using saliency maps and target maps, which prioritize locations for fixation based on feature contrast and target goals, respectively. But whereas many such models are biologically plausible, none have looked to the oculomotor system for design constraints or parameter specification. Conversely, although most models of saccade programming are tightly coupled to underlying neurophysiology, none have been tested using real-world stimuli and tasks. We combined the strengths of these two approaches in MASC, a model of attention in the superior colliculus (SC) that captures known neurophysiological constraints on saccade programming. We show that MASC predicted the fixation locations of humans freely viewing naturalistic scenes and performing exemplar and categorical search tasks, a breadth achieved by no other existing model. Moreover, it did this as well or better than its more specialized state-of-the-art competitors. MASC's predictive success stems from its inclusion of high-level but core principles of SC organization: an over-representation of foveal information, size-invariant population codes, cascaded population averaging over distorted visual and motor maps, and competition between motor point images for saccade programming, all of which cause further modulation of priority (attention) after projection of saliency and target maps to the SC. Only by incorporating these organizing brain principles into our models can we fully understand the transformation of complex visual information into the saccade programs underlying movements of overt attention. With MASC, a theoretical footing now exists to generate and test computationally explicit predictions of behavioral and neural responses in visually complex real-world contexts. 
SIGNIFICANCE STATEMENT The superior colliculus (SC) performs a visual-to-motor transformation vital to overt attention, but existing SC models cannot predict saccades to visually complex real-world stimuli. We introduce a brain-inspired SC model that outperforms state-of-the-art image-based competitors in predicting the sequences of fixations made by humans performing a range of everyday tasks (scene viewing and exemplar and categorical search), making clear the value of looking to the brain for model design. This work is significant in that it will drive new research by making computationally explicit predictions of SC neural population activity in response to naturalistic stimuli and tasks. It will also serve as a blueprint for the construction of other brain-inspired models, helping to usher in the next generation of truly intelligent autonomous systems. Copyright © 2017 the authors 0270-6474/17/371453-15$15.00/0.
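The population-averaging principle at the heart of such collicular models can be caricatured in a few lines. This is a hypothetical 1-D sketch, not MASC itself: each active site contributes a Gaussian "point image" on a motor map, and the saccade endpoint is the activity-weighted mean position; all map sizes, gains, and widths are invented.

```python
import math

def population_average(peaks, map_size=100, sigma=5.0):
    """Saccade endpoint from averaging over a population of point images.

    Each active site contributes a Gaussian point image on a 1-D
    motor map; the endpoint is the activity-weighted mean position,
    a simplified stand-in for population averaging on the SC motor map.
    """
    activity = [0.0] * map_size
    for center, gain in peaks:
        for x in range(map_size):
            activity[x] += gain * math.exp(-0.5 * ((x - center) / sigma) ** 2)
    total = sum(activity)
    return sum(x * a for x, a in zip(range(map_size), activity)) / total

# Two competing motor point images: a strong one at site 30 and a
# weaker one at site 60. The averaged endpoint lands between them,
# pulled toward the stronger peak, illustrating how competition
# between point images shapes the saccade program.
endpoint = population_average([(30, 1.0), (60, 0.5)])
print(f"averaged saccade endpoint: {endpoint:.1f}")
```

With gains 1.0 and 0.5, the endpoint sits at the gain-weighted mean of the two sites (about site 40), which is the classic "global effect" that population averaging predicts for nearby competing targets.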
The Characteristics and Limits of Rapid Visual Categorization
Fabre-Thorpe, Michèle
2011-01-01
Visual categorization appears both effortless and virtually instantaneous. The study by Thorpe et al. (1996) was the first to estimate the processing time necessary to perform fast visual categorization of animals in briefly flashed (20 ms) natural photographs. They observed a large differential EEG activity between target and distracter correct trials that developed from 150 ms after stimulus onset, a value that was later shown to be even shorter in monkeys! With such strong processing time constraints, it was difficult to escape the conclusion that rapid visual categorization was relying on massively parallel, essentially feed-forward processing of visual information. Since 1996, we have conducted a large number of studies to determine the characteristics and limits of fast visual categorization. The present chapter will review some of the main results obtained. I will argue that rapid object categorizations in natural scenes can be done without focused attention and are most likely based on coarse and unconscious visual representations activated with the first available (magnocellular) visual information. Fast visual processing proved efficient for the categorization of large superordinate object or scene categories, but shows its limits when more detailed basic representations are required. The representations for basic objects (dogs, cars) or scenes (mountain or sea landscapes) need additional processing time to be activated. This finding is at odds with the widely accepted idea that such basic representations are at the entry level of the system. Interestingly, focused attention is still not required to perform these time-consuming basic categorizations. Finally, we will show that object and context processing can interact very early in an ascending wave of visual information processing. We will discuss how such data could result from our experience with a highly structured and predictable surrounding world that shaped neuronal visual selectivity. PMID:22007180
Post-decision biases reveal a self-consistency principle in perceptual inference.
Luu, Long; Stocker, Alan A
2018-05-15
Making a categorical judgment can systematically bias our subsequent perception of the world. We show that these biases are well explained by a self-consistent Bayesian observer whose perceptual inference process is causally conditioned on the preceding choice. We quantitatively validated the model and its key assumptions with a targeted set of three psychophysical experiments, focusing on a task sequence where subjects first had to make a categorical orientation judgment before estimating the actual orientation of a visual stimulus. Subjects exhibited a high degree of consistency between categorical judgment and estimate, which is difficult to reconcile with alternative models in the face of late, memory related noise. The observed bias patterns resemble the well-known changes in subjective preferences associated with cognitive dissonance, which suggests that the brain's inference processes may be governed by a universal self-consistency constraint that avoids entertaining 'dissonant' interpretations of the evidence. © 2018, Luu et al.
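The self-consistency idea lends itself to a small numerical sketch (an illustration of conditioning on one's own choice, not the authors' fitted model): an observer makes a clockwise/counterclockwise category judgment about orientation, then estimates the orientation from a posterior restricted to the chosen category, which produces a repulsive post-decision bias away from the category boundary. The grid, noise level, and flat prior are all assumptions made for the example.

```python
import math

def posterior(m, sigma=8.0, thetas=range(-40, 41)):
    """Posterior over orientation given a noisy measurement m (flat prior)."""
    w = [math.exp(-0.5 * ((t - m) / sigma) ** 2) for t in thetas]
    z = sum(w)
    return {t: wi / z for t, wi in zip(thetas, w)}

def estimates(m):
    """Compare an unconditioned estimate with a choice-conditioned one."""
    post = posterior(m)
    # Categorical judgment: clockwise (theta > 0) vs. counterclockwise.
    p_cw = sum(p for t, p in post.items() if t > 0)
    choice = "cw" if p_cw > 0.5 else "ccw"
    plain = sum(t * p for t, p in post.items())
    # Self-consistent observer: condition the posterior on the choice,
    # zeroing out orientations that contradict the category judgment.
    keep = {t: p for t, p in post.items() if (t > 0) == (choice == "cw")}
    z = sum(keep.values())
    conditioned = sum(t * p / z for t, p in keep.items())
    return plain, conditioned

plain, conditioned = estimates(3.0)  # measurement just clockwise of boundary
print(f"unconditioned estimate: {plain:.1f} deg")
print(f"choice-conditioned estimate: {conditioned:.1f} deg")
```

For a measurement near the boundary, the conditioned estimate is pushed several degrees away from zero relative to the plain posterior mean: the estimate stays consistent with the earlier categorical judgment, at the cost of a systematic bias.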
Universal and adapted vocabularies for generic visual categorization.
Perronnin, Florent
2008-07-01
Generic Visual Categorization (GVC) is the pattern classification problem which consists in assigning labels to an image based on its semantic content. This is a challenging task as one has to deal with inherent object/scene variations as well as changes in viewpoint, lighting and occlusion. Several state-of-the-art GVC systems use a vocabulary of visual terms to characterize images with a histogram of visual word counts. We propose a novel practical approach to GVC based on a universal vocabulary, which describes the content of all the considered classes of images, and class vocabularies obtained through the adaptation of the universal vocabulary using class-specific data. The main novelty is that an image is characterized by a set of histograms - one per class - where each histogram describes whether the image content is best modeled by the universal vocabulary or the corresponding class vocabulary. This framework is applied to two types of local image features: low-level descriptors such as the popular SIFT and high-level histograms of word co-occurrences in a spatial neighborhood. It is shown experimentally on two challenging datasets (an in-house database of 19 categories and the PASCAL VOC 2006 dataset) that the proposed approach exhibits state-of-the-art performance at a modest computational cost.
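The vocabulary machinery described above can be sketched in miniature. The toy 2-D "descriptors" and the crude mean-shift adaptation below are illustrative stand-ins for real SIFT features and the paper's adaptation procedure; only the bag-of-visual-words histogram itself is the standard construction.

```python
import math

def nearest(word_list, desc):
    """Index of the visual word closest to a local descriptor."""
    dists = [math.dist(w, desc) for w in word_list]
    return dists.index(min(dists))

def bov_histogram(vocab, descriptors):
    """Normalized histogram of visual-word counts: bag-of-visual-words."""
    hist = [0] * len(vocab)
    for d in descriptors:
        hist[nearest(vocab, d)] += 1
    total = sum(hist)
    return [h / total for h in hist]

def adapt(vocab, class_descriptors, rate=0.5):
    """Shift each universal word toward the class data it attracts
    (a crude stand-in for adapting the universal vocabulary with
    class-specific data)."""
    adapted = []
    for i, w in enumerate(vocab):
        assigned = [d for d in class_descriptors if nearest(vocab, d) == i]
        if not assigned:
            adapted.append(list(w))
            continue
        mean = [sum(x) / len(assigned) for x in zip(*assigned)]
        adapted.append([wi + rate * (mi - wi) for wi, mi in zip(w, mean)])
    return adapted

# Toy 2-D descriptors: a universal vocabulary of two words, plus
# class-specific patches clustered slightly off the universal words.
universal = [[0.0, 0.0], [10.0, 10.0]]
car_patches = [[1.0, 1.2], [0.8, 1.0], [9.0, 8.8], [9.2, 9.0]]
car_vocab = adapt(universal, car_patches)

image = [[0.9, 1.1], [9.1, 8.9], [9.0, 9.1]]
print("universal histogram:", bov_histogram(universal, image))
print("car-adapted vocab:  ", car_vocab)
```

In the paper's scheme an image would get one histogram per class, each describing whether its patches are better modeled by the universal or the class-adapted vocabulary; the sketch shows only the two ingredients (histogram encoding and vocabulary adaptation) from which that comparison is built.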
Discrimination of Complex Human Behavior by Pigeons (Columba livia) and Humans
Qadri, Muhammad A. J.; Sayde, Justin M.; Cook, Robert G.
2014-01-01
The cognitive and neural mechanisms for recognizing and categorizing behavior are not well understood in non-human animals. In the current experiments, pigeons and humans learned to categorize two non-repeating, complex human behaviors (“martial arts” vs. “Indian dance”). Using multiple video exemplars of a digital human model, pigeons discriminated these behaviors in a go/no-go task and humans in a choice task. Experiment 1 found that pigeons already experienced with discriminating the locomotive actions of digital animals acquired the discrimination more rapidly when action information was available than when only pose information was available. Experiments 2 and 3 found this same dynamic superiority effect with naïve pigeons and human participants. Both species used the same combination of immediately available static pose information and more slowly perceived dynamic action cues to discriminate the behavioral categories. Theories based on generalized visual mechanisms, as opposed to embodied, species-specific action networks, offer a parsimonious account of how these different animals recognize behavior across and within species. PMID:25379777
The activity in the anterior insulae is modulated by perceptual decision-making difficulty.
Lamichhane, Bidhan; Adhikari, Bhim M; Dhamala, Mukesh
2016-07-07
Previous neuroimaging studies provide evidence for the involvement of the anterior insulae (INSs) in perceptual decision-making processes. However, how the insular cortex is involved in integration of degraded sensory information to create a conscious percept of environment and to drive our behaviors still remains a mystery. In this study, using functional magnetic resonance imaging (fMRI) and four different perceptual categorization tasks in visual and audio-visual domains, we measured blood oxygen level dependent (BOLD) signals and examined the roles of INSs in easy and difficult perceptual decision-making. We created a varying degree of degraded stimuli by manipulating the task-specific stimuli in these four experiments to examine the effects of task difficulty on insular cortex response. We hypothesized that significantly higher BOLD response would be associated with the ambiguity of the sensory information and decision-making difficulty. In all of our experimental tasks, we found the INS activity consistently increased with task difficulty and participants' behavioral performance changed with the ambiguity of the presented sensory information. These findings support the hypothesis that the anterior insulae are involved in sensory-guided, goal-directed behaviors and their activities can predict perceptual load and task difficulty. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
Rapid Presentation of Emotional Expressions Reveals New Emotional Impairments in Tourette’s Syndrome
Mermillod, Martial; Devaux, Damien; Derost, Philippe; Rieu, Isabelle; Chambres, Patrick; Auxiette, Catherine; Legrand, Guillaume; Galland, Fabienne; Dalens, Hélène; Coulangeon, Louise Marie; Broussolle, Emmanuel; Durif, Franck; Jalenques, Isabelle
2013-01-01
Objective: Based on a variety of empirical evidence obtained within the theoretical framework of embodiment theory, we considered it likely that motor disorders in Tourette’s syndrome (TS) would have emotional consequences for TS patients. However, previous research using emotional facial categorization tasks suggests that these consequences are limited to TS patients with obsessive-compulsive behaviors (OCB). Method: These studies used long stimulus presentations which allowed the participants to categorize the different emotional facial expressions (EFEs) on the basis of a perceptual analysis that might potentially hide a lack of emotional feeling for certain emotions. In order to reduce this perceptual bias, we used a rapid visual presentation procedure. Results: Using this new experimental method, we revealed different and surprising impairments on several EFEs in TS patients compared to matched healthy control participants. Moreover, a spatial frequency analysis of the visual signal processed by the patients suggests that these impairments may be located at a cortical level. Conclusion: The current study indicates that the rapid visual presentation paradigm makes it possible to identify various potential emotional disorders that were not revealed by the standard visual presentation procedures previously reported in the literature. Moreover, the spatial frequency analysis performed in our study suggests that the emotional deficit in TS might lie at the level of temporal cortical areas dedicated to the processing of high spatial frequency (HSF) visual information. PMID:23630481
Anterior Temporal Lobe Morphometry Predicts Categorization Ability.
Garcin, Béatrice; Urbanski, Marika; Thiebaut de Schotten, Michel; Levy, Richard; Volle, Emmanuelle
2018-01-01
Categorization is the mental operation by which the brain classifies objects and events. It is classically assessed using semantic and non-semantic matching or sorting tasks. These tasks show a high variability in performance across healthy controls and the cerebral bases supporting this variability remain unknown. In this study we performed a voxel-based morphometry study to explore the relationships between semantic and shape categorization tasks and brain morphometric differences in 50 controls. We found significant correlation between categorization performance and the volume of the gray matter in the right anterior middle and inferior temporal gyri. Semantic categorization tasks were associated with more rostral temporal regions than shape categorization tasks. A significant relationship was also shown between white matter volume in the right temporal lobe and performance in the semantic tasks. Tractography revealed that this white matter region involved several projection and association fibers, including the arcuate fasciculus, inferior fronto-occipital fasciculus, uncinate fasciculus, and inferior longitudinal fasciculus. These results suggest that categorization abilities are supported by the anterior portion of the right temporal lobe and its interaction with other areas.
Duncan, Justin; Gosselin, Frédéric; Cobarro, Charlène; Dugas, Gabrielle; Blais, Caroline; Fiset, Daniel
2017-12-01
Horizontal information was recently suggested to be crucial for face identification. In the present paper, we expand on this finding and investigate the role of orientations for all the basic facial expressions and neutrality. To this end, we developed orientation bubbles to quantify utilization of the orientation spectrum by the visual system in a facial expression categorization task. We first validated the procedure in Experiment 1 with a simple plaid-detection task. In Experiment 2, we used orientation bubbles to reveal the diagnostic-i.e., task relevant-orientations for the basic facial expressions and neutrality. Overall, we found that horizontal information was highly diagnostic for expressions-surprise excepted. We also found that utilization of horizontal information strongly predicted performance level in this task. Despite the recent surge of research on horizontals, the link with local features remains unexplored. We were thus also interested in investigating this link. In Experiment 3, location bubbles were used to reveal the diagnostic features for the basic facial expressions. Crucially, Experiments 2 and 3 were run in parallel on the same participants, in an interleaved fashion. This way, we were able to correlate individual orientation and local diagnostic profiles. Our results indicate that individual differences in horizontal tuning are best predicted by utilization of the eyes.
The relationship between visual search and categorization of own- and other-age faces.
Craig, Belinda M; Lipp, Ottmar V
2018-03-13
Young adult participants are faster to detect young adult faces in crowds of infant and child faces than vice versa. These findings have been interpreted as evidence for more efficient attentional capture by own-age than other-age faces, but could alternatively reflect faster rejection of other-age than own-age distractors, consistent with the previously reported other-age categorization advantage: faster categorization of other-age than own-age faces. Participants searched for own-age faces in other-age backgrounds or vice versa. Extending the finding to different other-age groups, young adult participants were faster to detect young adult faces in both early adolescent (Experiment 1) and older adult backgrounds (Experiment 2). To investigate whether the own-age detection advantage could be explained by faster categorization and rejection of other-age background faces, participants in Experiments 3 and 4 also completed an age categorization task. Relatively faster categorization of other-age faces was related to relatively faster search through other-age backgrounds on target absent trials but not target present trials. These results confirm that other-age faces are more quickly categorized and searched through and that categorization and search processes are related; however, this correlational approach could not confirm or reject the contribution of background face processing to the own-age detection advantage. © 2018 The British Psychological Society.
Shifting Attention Between Visual Dimensions as a Source of Switch Costs.
Elchlepp, Heike; Best, Maisy; Lavric, Aureliu; Monsell, Stephen
2017-04-01
Task-switching experiments have documented a puzzling phenomenon: Advance warning of the switch reduces but does not eliminate the switch cost. Theoretical accounts have posited that the residual switch cost arises when one selects the relevant stimulus-response mapping, leaving earlier perceptual processes unaffected. We put this assumption to the test by seeking electrophysiological markers of encoding a perceptual dimension. Participants categorized a colored letter as a vowel or consonant or its color as "warm" or "cold." Orthogonally to the color manipulation, some colors were eight times more frequent than others, and the letters were in upper- or lowercase. Color frequency modulated the electroencephalogram amplitude at around 150 ms when participants repeated the color-classification task. When participants switched from the letter task to the color task, this effect was significantly delayed. Thus, even when prepared for, a task switch delays or prolongs encoding of the relevant perceptual dimension.
Beyeler, Michael; Dutt, Nikil D; Krichmar, Jeffrey L
2013-12-01
Understanding how the human brain is able to efficiently perceive and understand a visual scene is still a field of ongoing research. Although many studies have focused on the design and optimization of neural networks to solve visual recognition tasks, most of them lack either neurobiologically plausible learning rules or decision-making processes. Here we present a large-scale model of a hierarchical spiking neural network (SNN) that integrates a low-level memory encoding mechanism with a higher-level decision process to perform a visual classification task in real-time. The model consists of Izhikevich neurons and conductance-based synapses for realistic approximation of neuronal dynamics, a spike-timing-dependent plasticity (STDP) synaptic learning rule with additional synaptic dynamics for memory encoding, and an accumulator model for memory retrieval and categorization. The full network, which comprised 71,026 neurons and approximately 133 million synapses, ran in real-time on a single off-the-shelf graphics processing unit (GPU). The network was constructed on a publicly available SNN simulator that supports general-purpose neuromorphic computer chips. The network achieved 92% correct classifications on MNIST in 100 rounds of random sub-sampling, which is comparable to other SNN approaches and provides a conservative and reliable performance metric. Additionally, the model correctly predicted reaction times from psychophysical experiments. Because of the scalability of the approach and its neurobiological fidelity, the current model can be extended to an efficient neuromorphic implementation that supports more generalized object recognition and decision-making architectures found in the brain. Copyright © 2013 Elsevier Ltd. All rights reserved.
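The Izhikevich neuron named in the abstract above can be illustrated with a minimal sketch. The dynamics (dv/dt = 0.04v² + 5v + 140 − u + I, du/dt = a(bv − u), reset on spike) and the "regular spiking" parameters follow Izhikevich's published model; the input current, step size, and simulation length here are illustrative choices, not values from the study, and the sketch omits the synapses, STDP, and accumulator that the full network adds.

```python
# Hedged sketch: one Izhikevich "regular spiking" neuron under constant
# input current, integrated with a simple Euler scheme. Parameter values
# (a, b, c, d) are Izhikevich's standard RS settings; I, dt, and steps
# are illustrative assumptions.

def simulate_izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0,
                        dt=0.25, steps=4000):
    """Return the number of spikes fired over the simulation."""
    v, u = -65.0, b * -65.0   # membrane potential (mV) and recovery variable
    spikes = 0
    for _ in range(steps):
        # Membrane dynamics: dv/dt = 0.04*v^2 + 5*v + 140 - u + I
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        # Recovery dynamics: du/dt = a * (b*v - u)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike: reset membrane, bump recovery
            v, u = c, u + d
            spikes += 1
    return spikes
```

With zero input the neuron settles at its resting fixed point and never spikes; with a sustained suprathreshold current it fires tonically, which is the behavior the full SNN model builds on.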
Okita, Manabu; Yukihiro, Takashi; Miyamoto, Kenzo; Morioka, Shu; Kaba, Hideto
2017-04-01
To explore the mechanism underlying the imitation of finger gestures, we devised a simple imitation task in which the patients were instructed to replicate finger configurations in two conditions: one in which they could see their hand (visual feedback: VF) and one in which they could not see their hand (non-visual feedback: NVF). Patients with left brain damage (LBD) or right brain damage (RBD), respectively, were categorized into two groups based on their scores on the imitation task in the NVF condition: the impaired imitation groups (I-LBD and I-RBD) who failed two or more of the five patterns and the control groups (C-LBD and C-RBD) who made one or no errors. We also measured the movement-production times for imitation. The I-RBD group performed significantly worse than the C-RBD group even in the VF condition. In contrast, the I-LBD group was selectively impaired in the NVF condition. The I-LBD group performed the imitations at a significantly slower rate than the C-LBD group in both the VF and NVF conditions. These results suggest that impaired imitation in patients with LBD is partly due to an abnormal integration of visual and somatosensory information based on the task specificity of the NVF condition. Copyright © 2017 Elsevier Inc. All rights reserved.
Challenging Cognitive Control by Mirrored Stimuli in Working Memory Matching
Wirth, Maria; Gaschler, Robert
2017-01-01
Cognitive conflict has often been investigated by placing automatic processing originating from learned associations in competition with instructed task demands. Here we explore whether mirror generalization as a congenital mechanism can be employed to create cognitive conflict. Past research suggests that the visual system automatically generates an invariant representation of visual objects and their mirrored counterparts (i.e., mirror generalization), and especially so for lateral reversals (e.g., a cup seen from the left side vs. right side). Prior work suggests that mirror generalization can be reduced or even overcome by learning (i.e., for those visual objects for which it is not appropriate, such as the letters d and b). We, therefore, minimized prior practice on resolving conflicts involving mirror generalization by using kanji stimuli as non-verbal and unfamiliar material. In a 1-back task, participants had to check a stream of kanji stimuli for identical repetitions and avoid mis-categorizing mirror reversed stimuli as exact repetitions. Consistent with previous work, lateral reversals led to profound slowing of reaction times and lower accuracy in Experiment 1. Yet, different from previous reports suggesting that lateral reversals lead to stronger conflict, similar slowing for vertical and horizontal mirror transformations was observed in Experiment 2. Taken together, the results suggest that transformations of visual stimuli can be employed to challenge cognitive control in the 1-back task. PMID:28503160
The effect of category learning on attentional modulation of visual cortex.
Folstein, Jonathan R; Fuller, Kelly; Howard, Dorothy; DePatie, Thomas
2017-09-01
Learning about visual object categories causes changes in the way we perceive those objects. One likely mechanism by which this occurs is the application of attention to potentially relevant objects. Here we test the hypothesis that category membership influences the allocation of attention, allowing attention to be applied not only to object features, but to entire categories. Participants briefly learned to categorize a set of novel cartoon animals after which EEG was recorded while participants distinguished between a target and non-target category. A second identical EEG session was conducted after two sessions of categorization practice. The category structure and task design allowed parametric manipulation of number of target features while holding feature frequency and category membership constant. We found no evidence that category membership influenced attentional selection: a postero-lateral negative component, labeled the selection negativity/N250, increased over time and was sensitive to number of target features, not target categories. In contrast, the right hemisphere N170 was not sensitive to target features. The P300 appeared sensitive to category in the first session, but showed a graded sensitivity to number of target features in the second session, possibly suggesting a transition from rule-based to similarity based categorization. Copyright © 2017. Published by Elsevier Ltd.
Bogale, Bezawork Afework; Aoyama, Masato; Sugita, Shoei
2011-01-01
We trained jungle crows to discriminate among photographs of human faces according to their sex in a simultaneous two-alternative task to study their categorical learning ability. Once the crows reached a discrimination criterion (greater than or equal to 80% correct choices in two consecutive sessions; binomial probability test, p<.05), they next received generalization and transfer tests (i.e., greyscale, contour, and 'full' occlusion) in Experiment 1, followed by a 'partial' occlusion test in Experiment 2 and a random stimuli pair test in Experiment 3. Jungle crows learned the discrimination task in a few trials and successfully generalized to novel stimulus sets. However, all crows failed the greyscale test, and half of them the contour test. Neither occlusion of internal features of the face nor random pairing of exemplars affected the discrimination performance of most, if not all, crows. We suggest that jungle crows categorize human face photographs based on perceptual similarities as other non-human animals do, and colour appears to be the most salient feature controlling discriminative behaviour. However, the variability in the use of facial contours among individuals suggests an exploitation of multiple features and individual differences in visual information processing among jungle crows. Copyright © 2010 Elsevier B.V. All rights reserved.
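The learning criterion above (≥80% correct against a two-alternative chance level of .5, one-sided binomial test, p<.05) is straightforward to make concrete. A minimal sketch follows; the 20-trials-per-session figure is a hypothetical example, not a number reported in the abstract.

```python
from math import comb

def binomial_p_at_least(k, n, p=0.5):
    """One-sided probability of observing k or more successes
    in n Bernoulli trials with success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical session: 16/20 correct (80%) in a two-choice task (chance = .5).
p_value = binomial_p_at_least(16, 20)
```

For this assumed session size, 80% correct already beats chance at p ≈ .006, so two such consecutive sessions comfortably satisfy the stated criterion.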
Perceptual advantage for category-relevant perceptual dimensions: the case of shape and motion.
Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel
2014-01-01
Category learning facilitates perception along relevant stimulus dimensions, even when tested in a discrimination task that does not require categorization. While this general phenomenon has been demonstrated previously, perceptual facilitation along dimensions has been documented by measuring different specific phenomena in different studies using different kinds of objects. Across several object domains, there is support for acquired distinctiveness, the stretching of a perceptual dimension relevant to learned categories. Studies using faces and studies using simple separable visual dimensions have also found evidence of acquired equivalence, the shrinking of a perceptual dimension irrelevant to learned categories, and categorical perception, the local stretching across the category boundary. These later two effects are rarely observed with complex non-face objects. Failures to find these effects with complex non-face objects may have been because the dimensions tested previously were perceptually integrated. Here we tested effects of category learning with non-face objects categorized along dimensions that have been found to be processed by different areas of the brain, shape and motion. While we replicated acquired distinctiveness, we found no evidence for acquired equivalence or categorical perception.
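Acquired distinctiveness and acquired equivalence, as described above, are commonly modeled as attention weights that stretch or shrink psychological distance along stimulus dimensions (in the spirit of exemplar models such as the generalized context model). The sketch below is illustrative only; the stimulus coordinates and the before/after weights are assumptions, not fitted values from the study.

```python
def weighted_distance(x, y, weights):
    """Attention-weighted Euclidean distance between two stimuli,
    with one weight per psychological dimension."""
    return sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, y)) ** 0.5

# Two stimuli differing only on a shape dimension (dim 0); dim 1 is motion.
s1, s2 = (0.2, 0.6), (0.5, 0.6)
before = weighted_distance(s1, s2, (1.0, 1.0))
# After category learning: the category-relevant (shape) dimension is
# "stretched" and the irrelevant one "shrunk" -- illustrative weights.
after = weighted_distance(s1, s2, (2.0, 0.5))
```

Under this toy parameterization the same physical difference yields a larger psychological distance after learning, which is the acquired-distinctiveness pattern the study replicated.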
Fu, Qiufang; Liu, Yong-Jin; Dienes, Zoltan; Wu, Jianhui; Chen, Wenfeng; Fu, Xiaolan
2016-07-01
A fundamental question in vision research is whether visual recognition is determined by edge-based information (e.g., edge, line, and conjunction) or surface-based information (e.g., color, brightness, and texture). To investigate this question, we manipulated the stimulus onset asynchrony (SOA) between the scene and the mask in a backward masking task of natural scene categorization. The behavioral results showed that correct classification was higher for line-drawings than for color photographs when the SOA was 13ms, but lower when the SOA was longer. The ERP results revealed that most latencies of early components were shorter for the line-drawings than for the color photographs, and the latencies gradually increased with the SOA for the color photographs but not for the line-drawings. The results provide new evidence that edge-based information is the primary determinant of natural scene categorization, receiving priority processing; by contrast, surface information takes longer to facilitate natural scene categorization. Copyright © 2016 Elsevier Inc. All rights reserved.
The Influence of Flankers on Race Categorization of Faces
Sun, Hsin-Mei; Balas, Benjamin
2012-01-01
Context affects multiple cognitive and perceptual processes. In the present study, we asked how the context of a set of faces affected the perception of a target face’s race in two distinct tasks. In Experiments 1 and 2, participants categorized target faces according to perceived racial category (Black or White). In Experiment 1, the target face was presented alone, or with Black or White flanker faces. The orientation of flanker faces was also manipulated to investigate how the face inversion effect interacts with the influence of flanker faces on the target face. The results showed that participants were more likely to categorize the target face as White when it was surrounded by inverted White faces (an assimilation effect). Experiment 2 further examined how different aspects of the visual context affect the perception of the target face by manipulating flanker faces’ shape and pigmentation as well as their orientation. The results showed that flanker faces’ shape and pigmentation affected the perception of the target face differently. While shape elicited a contrast effect, pigmentation appeared to be assimilative. These novel findings suggest that the perceived race of a face is modulated by the appearance of other faces and their distinct shape and pigmentation properties. However, the contrast and assimilation effects elicited by flanker faces’ shape and pigmentation may be specific to race categorization, since the same stimuli used in a delayed matching task (Experiment 3) revealed that flanker pigmentation induced a contrast effect on the perception of target pigmentation. PMID:22825930
Shape equivalence under perspective and projective transformations.
Wagemans, J; Lamote, C; Van Gool, L
1997-06-01
When a planar shape is viewed obliquely, it is deformed by a perspective deformation. If the visual system were to pick up geometrical invariants from such projections, these would necessarily be invariant under the wider class of projective transformations. To what extent can the visual system tell the difference between perspective and nonperspective but still projective deformations of shapes? To investigate this, observers were asked to indicate which of two test patterns most resembled a standard pattern. The test patterns were related to the standard pattern by a perspective or projective transformation, or they were completely unrelated. Performance was slightly better in a matching task with perspective and unrelated test patterns (92.6%) than in a projective-random matching task (88.8%). In a direct comparison, participants had a small preference (58.5%) for the perspectively related patterns over the projectively related ones. Preferences were based on the values of the transformation parameters (slant and shear). Hence, perspective and projective transformations yielded perceptual differences, but they were not treated in a categorically different manner by the human visual system.
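The perspective and projective deformations discussed above are both instances of a planar homography: a 3×3 transformation applied in homogeneous coordinates, with a final division that produces the perspective foreshortening. A minimal sketch follows; the particular matrix is an illustrative slanted view, not a stimulus from the study.

```python
def apply_homography(H, points):
    """Map 2-D points through a 3x3 projective transformation (homography)."""
    out = []
    for x, y in points:
        xh = H[0][0] * x + H[0][1] * y + H[0][2]
        yh = H[1][0] * x + H[1][1] * y + H[1][2]
        w  = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append((xh / w, yh / w))   # divide by w: the perspective step
    return out

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
# A slanted view: a nonzero entry in the bottom row makes the transform
# genuinely projective (non-affine), deforming the square into a trapezoid.
H = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.3, 0.0, 1.0]]
deformed = apply_homography(H, square)
```

Perspective views of a plane correspond to the subset of homographies realizable by a camera at finite distance; the study asked whether observers distinguish that subset from arbitrary projective deformations.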
The functional architecture of the ventral temporal cortex and its role in categorization
Grill-Spector, Kalanit; Weiner, Kevin S.
2014-01-01
Visual categorization is thought to occur in the human ventral temporal cortex (VTC), but how this categorization is achieved is still largely unknown. In this Review, we consider the computations and representations that are necessary for categorization and examine how the microanatomical and macroanatomical layout of the VTC might optimize them to achieve rapid and flexible visual categorization. We propose that efficient categorization is achieved by organizing representations in a nested spatial hierarchy in the VTC. This spatial hierarchy serves as a neural infrastructure for the representational hierarchy of visual information in the VTC and thereby enables flexible access to category information at several levels of abstraction. PMID:24962370
Soto, Fabian A.; Waldschmidt, Jennifer G.; Helie, Sebastien; Ashby, F. Gregory
2013-01-01
Previous evidence suggests that relatively separate neural networks underlie initial learning of rule-based and information-integration categorization tasks. With the development of automaticity, categorization behavior in both tasks becomes increasingly similar and exclusively related to activity in cortical regions. The present study uses multi-voxel pattern analysis to directly compare the development of automaticity in different categorization tasks. Each of three groups of participants received extensive training in a different categorization task: either an information-integration task, or one of two rule-based tasks. Four training sessions were performed inside an MRI scanner. Three different analyses were performed on the imaging data from a number of regions of interest (ROIs). The common patterns analysis had the goal of revealing ROIs with similar patterns of activation across tasks. The unique patterns analysis had the goal of revealing ROIs with dissimilar patterns of activation across tasks. The representational similarity analysis aimed at exploring (1) the similarity of category representations across ROIs and (2) how those patterns of similarities compared across tasks. The results showed that common patterns of activation were present in motor areas and basal ganglia early in training, but only in the former later on. Unique patterns were found in a variety of cortical and subcortical areas early in training, but they were dramatically reduced with training. Finally, patterns of representational similarity between brain regions became increasingly similar across tasks with the development of automaticity. PMID:23333700
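The representational similarity analysis used above compares activation-pattern dissimilarity matrices (RDMs) across regions or tasks, typically by correlating their off-diagonal entries. A minimal second-order-similarity sketch, assuming Pearson correlation over the upper triangle (the study's exact pipeline may differ):

```python
def upper_triangle(m):
    """Flatten the upper triangle (excluding the diagonal) of a square matrix."""
    n = len(m)
    return [m[i][j] for i in range(n) for j in range(i + 1, n)]

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def rdm_similarity(rdm_a, rdm_b):
    """Second-order similarity: correlate two representational
    dissimilarity matrices over their unique (upper-triangle) entries."""
    return pearson(upper_triangle(rdm_a), upper_triangle(rdm_b))
```

Tracking this quantity across training sessions is one way to operationalize the finding that between-region similarity patterns converged across tasks as automaticity developed.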
Gender and Prior Science Achievement Affect Categorization on a Procedural Learning Task
ERIC Educational Resources Information Center
Hong, Jon-Chao; Lu, Chow-Chin; Wang, Jen-Lian; Liao, Shin; Wu, Ming-Ray; Hwang, Ming-Yueh; Lin, Pei-Hsin
2013-01-01
Categorization is one of the main mental processes by which perception and conception develop. Nevertheless, categorization receives little attention with the development of critical thinking in Taiwan elementary schools. Thus, the present study investigates the effect that individual differences have on performing categorization tasks.…
2015-01-01
Background Though cluster analysis has become a routine analytic task for bioinformatics research, it is still arduous for researchers to assess the quality of a clustering result. To select the best clustering method and its parameters for a dataset, researchers have to run multiple clustering algorithms and compare them. However, such a comparison task with multiple clustering results is cognitively demanding and laborious. Results In this paper, we present XCluSim, a visual analytics tool that enables users to interactively compare multiple clustering results based on the Visual Information Seeking Mantra. We build a taxonomy for categorizing existing techniques of clustering results visualization in terms of the Gestalt principles of grouping. Using the taxonomy, we choose the most appropriate interactive visualizations for presenting individual clustering results from different types of clustering algorithms. The efficacy of XCluSim is shown through case studies with a bioinformatician. Conclusions Compared to other relevant tools, XCluSim enables users to compare multiple clustering results in a more scalable manner. Moreover, XCluSim supports diverse clustering algorithms and dedicated visualizations and interactions for different types of clustering results, allowing more effective exploration of details on demand. Through case studies with a bioinformatics researcher, we received positive feedback on the functionalities of XCluSim, including its ability to help identify stably clustered items across multiple clustering results. PMID:26328893
Chromatic Perceptual Learning but No Category Effects without Linguistic Input.
Grandison, Alexandra; Sowden, Paul T; Drivonikou, Vicky G; Notman, Leslie A; Alexander, Iona; Davies, Ian R L
2016-01-01
Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks but very little research has explored chromatic perceptual learning. Here, we use two low level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is category specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher level linguistic coding processes in order to manifest.
The effect of integration masking on visual processing in perceptual categorization.
Hélie, Sébastien
2017-08-01
Learning to recognize and categorize objects is an essential cognitive skill allowing animals to function in the world. However, animals rarely have access to a canonical view of an object in an uncluttered environment. Hence, it is essential to study categorization under noisy, degraded conditions. In this article, we explore how the brain processes categorization stimuli in low signal-to-noise conditions using multivariate pattern analysis. We used an integration masking paradigm with mask opacity of 50%, 60%, and 70% inside a magnetic resonance imaging scanner. The results show that mask opacity affects blood-oxygen-level dependent (BOLD) signal in visual processing areas (V1, V2, V3, and V4) but does not affect the BOLD signal in brain areas traditionally associated with categorization (prefrontal cortex, striatum, hippocampus). This suggests that when a stimulus is difficult to extract from its background (e.g., low signal-to-noise ratio), the visual system extracts the stimulus and that activity in areas typically associated with categorization are not affected by the difficulty level of the visual conditions. We conclude with implications of this result for research on visual attention, categorization, and the integration of these fields. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Fize, Denis; Fabre-Thorpe, Michele; Richard, Ghislaine; Doyon, Bernard; Thorpe, Simon J.
2005-01-01
Humans are fast and accurate at performing an animal categorization task with natural photographs briefly flashed centrally. Here, this central categorization task is compared to a three position task in which photographs could appear randomly either centrally, or at 3.6 [degrees] eccentricity (right or left) of the fixation point. A mild…
Wiese, Holger; Schweinberger, Stefan R; Neumann, Markus F
2008-11-01
We used repetition priming to investigate implicit and explicit processes of unfamiliar face categorization. During prime and test phases, participants categorized unfamiliar faces according to either age or gender. Faces presented at test were either new or primed in a task-congruent (same task during priming and test) or incongruent (different tasks) condition. During age categorization, reaction times revealed significant priming for both priming conditions, and event-related potentials yielded an increased N170 over the left hemisphere as a result of priming. During gender categorization, congruent faces elicited priming and a latency decrease in the right N170. Accordingly, information about age is extracted irrespective of processing demands, and priming facilitates the extraction of feature information reflected in the left N170 effect. By contrast, priming of gender categorization may depend on whether the task at initial presentation requires configural processing.
Categorization of compensatory motions in transradial myoelectric prosthesis users.
Hussaini, Ali; Zinck, Arthur; Kyberd, Peter
2017-06-01
Background: Prosthesis users perform various compensatory motions to accommodate for the loss of the hand and wrist as well as the reduced functionality of a prosthetic hand. Objective: To investigate different compensation strategies that are performed by prosthesis users. Study design: Comparative analysis. Methods: A total of 20 able-bodied subjects and 4 prosthesis users performed a set of bimanual activities. Movements of the trunk and head were recorded using a motion capture system and a digital video recorder. Clinical motion angles were calculated to assess the compensatory motions made by the prosthesis users. The video recording also assisted in visually identifying the compensations. Results: Compensatory motions by the prosthesis users were evident in the tasks performed (slicing and stirring activities) as compared to the benchmark of able-bodied subjects. Compensations took the form of a measured increase in range of motion, an observed adoption of a new posture during task execution, and prepositioning of items in the workspace prior to initiating a given task. Conclusion: Compensatory motions were performed by prosthesis users during the selected tasks. These can be categorized into three different types of compensations. Clinical relevance: Proper identification and classification of compensatory motions performed by prosthesis users into three distinct forms allows clinicians and researchers to accurately identify and quantify movement. It will assist in evaluating new prosthetic interventions by providing distinct terminology that is easily understood and can be shared between research institutions.
Miles, James D; Proctor, Robert W
2009-10-01
In the current study, we show that the non-intentional processing of visually presented words and symbols can be attenuated by sounds. Importantly, this attenuation is dependent on the similarity in categorical domain between the sounds and words or symbols. Participants performed a task in which left or right responses were made contingent on the color of a centrally presented target that was either a location word (LEFT or RIGHT) or a left or right arrow. Responses were faster when they were on the side congruent with the word or arrow. This bias was reduced for location words by a neutral spoken word and for arrows by a tone series, but not vice versa. We suggest that words and symbols are processed with minimal attentional requirements until they are categorized into specific knowledge domains, but then become sensitive to other information within the same domain regardless of the similarity between modalities.
Bonin, Tanor; Smilek, Daniel
2016-04-01
We evaluated whether task-irrelevant inharmonic music produces greater interference with cognitive performance than task-irrelevant harmonic music. Participants completed either an auditory (Experiment 1) or a visual (Experiment 2) version of the cognitively demanding 2-back task in which they were required to categorize each digit in a sequence of digits as either being a target (a digit also presented two positions earlier in the sequence) or a distractor (all other items). They were concurrently exposed to either task-irrelevant harmonic music (judged to be consonant), task-irrelevant inharmonic music (judged to be dissonant), or no music at all as a distraction. The main finding across both experiments was that performance on the 2-back task was worse when participants were exposed to inharmonic music than when they were exposed to harmonic music. Interestingly, performance on the 2-back task was generally the same regardless of whether harmonic music or no music was played. We suggest that inharmonic, dissonant music interferes with cognitive performance by requiring greater cognitive processing than harmonic, consonant music, and speculate about why this might be.
Neuroanatomical correlates of encoding in episodic memory: levels of processing effect.
Kapur, S; Craik, F I; Tulving, E; Wilson, A A; Houle, S; Brown, G M
1994-01-01
Cognitive studies of memory processes demonstrate that memory for stimuli is a function of how they are encoded; stimuli processed semantically are better remembered than those processed in a perceptual or shallow fashion. This study investigates the neural correlates of this cognitive phenomenon. Twelve subjects performed two different cognitive tasks on a series of visually presented nouns. In one task, subjects detected the presence or absence of the letter a; in the other, subjects categorized each noun as living or nonliving. Positron emission tomography (PET) scans using 15O-labeled water were obtained during both tasks. Subjects showed substantially better recognition memory for nouns seen in the living/nonliving task, compared to nouns seen in the a-checking task. Comparison of the PET images between the two cognitive tasks revealed a significant activation in the left inferior prefrontal cortex (Brodmann's areas 45, 46, 47, and 10) in the semantic task as compared to the perceptual task. We propose that memory processes are subserved by a wide neurocognitive network and that encoding processes involve preferential activation of the structures in the left inferior prefrontal cortex. PMID:8134340
Zelinsky, Gregory J; Peng, Yifan; Berg, Alexander C; Samaras, Dimitris
2013-10-08
Search is commonly described as a repeating cycle of guidance to target-like objects, followed by the recognition of these objects as targets or distractors. Are these indeed separate processes using different visual features? We addressed this question by comparing observer behavior to that of support vector machine (SVM) models trained on guidance and recognition tasks. Observers searched for a categorically defined teddy bear target in four-object arrays. Target-absent trials consisted of random category distractors rated in their visual similarity to teddy bears. Guidance, quantified as first-fixated objects during search, was strongest for targets, followed by target-similar, medium-similarity, and target-dissimilar distractors. False positive errors to first-fixated distractors also decreased with increasing dissimilarity to the target category. To model guidance, nine teddy bear detectors, using features ranging in biological plausibility, were trained on unblurred bears then tested on blurred versions of the same objects appearing in each search display. Guidance estimates were based on target probabilities obtained from these detectors. To model recognition, nine bear/nonbear classifiers, trained and tested on unblurred objects, were used to classify the object that would be fixated first (based on the detector estimates) as a teddy bear or a distractor. Patterns of categorical guidance and recognition accuracy were modeled almost perfectly by an HMAX model in combination with a color histogram feature. We conclude that guidance and recognition in the context of search are not separate processes mediated by different features, and that what the literature knows as guidance is really recognition performed on blurred objects viewed in the visual periphery.
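The pipeline tested above (guidance computed over blurred, peripherally viewed objects, then recognition of the first-fixated object at full resolution) can be illustrated with a minimal numerical sketch. Note that this uses a nearest-centroid classifier on synthetic 1-D feature vectors purely as a stand-in for the paper's SVM/HMAX detectors; every name, parameter, and the blur operation below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64  # length of each synthetic feature vector

def blur(x, k=5):
    # crude stand-in for peripheral viewing: local averaging removes fine detail
    return np.convolve(x, np.ones(k) / k, mode="same")

# synthetic "objects": noisy samples around a target (bear) or distractor prototype
bear_proto, other_proto = rng.normal(size=D), rng.normal(size=D)

def sample(proto, n):
    return proto + 0.3 * rng.normal(size=(n, D))

train_bears, train_others = sample(bear_proto, 50), sample(other_proto, 50)

# guidance model: centroids learned from *blurred* exemplars
g_bear, g_other = blur(train_bears.mean(0)), blur(train_others.mean(0))
# recognition model: centroids learned from unblurred exemplars
r_bear, r_other = train_bears.mean(0), train_others.mean(0)

def search_display(objects):
    # stage 1 (guidance): rank blurred objects by target likeness, fixate the best
    scores = [np.linalg.norm(blur(o) - g_other) - np.linalg.norm(blur(o) - g_bear)
              for o in objects]
    first = int(np.argmax(scores))
    # stage 2 (recognition): verify the fixated object at full resolution
    o = objects[first]
    return first, bool(np.linalg.norm(o - r_bear) < np.linalg.norm(o - r_other))

# a four-object display: three distractors and one target (index 3)
display = np.vstack([sample(other_proto, 3), sample(bear_proto, 1)])
first_fixated, judged_target = search_display(display)
```

In the paper's terms, both stages here literally run the same kind of classifier and differ only in whether the input is blurred, which is the sense in which "guidance is recognition performed on blurred objects."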
Feedforward object-vision models only tolerate small image variations compared to human
Ghodrati, Masoud; Farzmahdi, Amirhossein; Rajaei, Karim; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi
2014-01-01
Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanisms have been under intense investigation. Computational modeling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well in image categorization under more complex image variations. Studies have shown that building sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with that of human observers shows that the models perform similarly to humans in categorization tasks only under low-level image variations. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modeling. We show that this approach is not of significant help in solving the computational crux of object recognition (i.e., invariant object recognition) when the identity-preserving image variations become more complex. PMID:25100986
Cerebellar tDCS Does Not Enhance Performance in an Implicit Categorization Learning Task.
Verhage, Marie C; Avila, Eric O; Frens, Maarten A; Donchin, Opher; van der Geest, Jos N
2017-01-01
Background: Transcranial Direct Current Stimulation (tDCS) is a form of non-invasive electrical stimulation that changes neuronal excitability in a polarity and site-specific manner. In cognitive tasks related to prefrontal and cerebellar learning, cortical tDCS arguably facilitates learning, but the few studies investigating cerebellar tDCS, however, are inconsistent. Objective: We investigate the effect of cerebellar tDCS on performance of an implicit categorization learning task. Methods: Forty participants performed a computerized version of an implicit categorization learning task where squares had to be sorted into two categories, according to an unknown but fixed rule that integrated both the size and luminance of the square. Participants did one round of categorization to familiarize themselves with the task and to provide a baseline of performance. After that, 20 participants received anodal tDCS (20 min, 1.5 mA) over the right cerebellum, and 19 participants received sham stimulation and simultaneously started a second session of the categorization task using a new rule. Results: As expected, subjects performed better in the second session than in the first, baseline session, showing increased accuracy scores and reduced reaction times. Over trials, participants learned the categorization rule, improving their accuracy and reaction times. However, we observed no effect of anodal tDCS stimulation on overall performance or on learning, compared to sham stimulation. Conclusion: These results suggest that cerebellar tDCS does not modulate performance and learning on an implicit categorization task.
Rennels, Jennifer L.; Kayl, Andrea J.; Langlois, Judith H.; Davis, Rachel E.; Orlewicz, Mateusz
2015-01-01
Infants typically have a preponderance of experience with females, resulting in visual preferences for female faces, particularly high attractive females, and in better categorization of female relative to male faces. We examined whether these abilities generalized to infants’ visual preferences for and categorization of perceptually similar male faces (i.e., low masculine males). Twelve-month-olds visually preferred high attractive relative to low attractive male faces within low masculine pairs only (Exp. 1), but did not visually prefer low masculine relative to high masculine male faces (Exp. 2). Lack of visual preferences was not due to infants’ inability to discriminate between the male faces (Exps. 3 & 4). Twelve-month-olds categorized low masculine, but not high masculine, male faces (Exp. 5). Infants could individuate male faces within each of the categories (Exp. 6). Twelve-month-olds’ attention toward and categorization of male faces may reflect a generalization of their female facial expertise. PMID:26547249
Attribute-based classification for zero-shot visual object categorization.
Lampert, Christoph H; Nickisch, Hannes; Harmeling, Stefan
2014-03-01
We study the problem of object recognition for categories for which we have no training examples, a task also called zero-data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently; the world contains tens of thousands of different object classes, and image collections have been formed and suitably annotated for only a few of them. To tackle the problem, we introduce attribute-based classification: objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be prelearned independently, for example, from existing image data sets unrelated to the current task. Afterward, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper, we also introduce a new data set, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more data sets show that attribute-based classification is indeed able to categorize images without access to any training images of the target classes.
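The attribute-based scheme described above can be sketched in a few lines: pre-learned attribute classifiers score an image, and unseen classes are ranked by how well the predicted attributes match each class's attribute signature. The classifiers, signatures, and feature vectors below are toy assumptions for illustration, not the paper's actual Animals-with-Attributes models:

```python
import numpy as np

# attribute signatures for classes with *no* training images
# (in the paper's setting these come from human annotation, e.g. "striped")
class_attributes = {
    "zebra":   np.array([1, 1, 0, 0]),
    "dolphin": np.array([0, 0, 1, 1]),
}

# stand-in pre-learned attribute classifiers: each maps image features to
# P(attribute present); here, a sigmoid over one feature dimension each
attribute_classifiers = [lambda x, i=i: 1.0 / (1.0 + np.exp(-x[i]))
                         for i in range(4)]

def zero_shot_classify(image_features):
    # predict attribute probabilities with the pre-learned classifiers
    probs = np.array([clf(image_features) for clf in attribute_classifiers])
    # score each unseen class by the likelihood of its attribute signature
    scores = {cls: float(np.prod(np.where(sig == 1, probs, 1 - probs)))
              for cls, sig in class_attributes.items()}
    return max(scores, key=scores.get)

# features whose first two dimensions (the "zebra" attributes) are active
label = zero_shot_classify(np.array([3.0, 2.5, -3.0, -2.5]))
```

The key design point is that no classifier here was ever trained on a zebra or dolphin image: only the attribute layer is learned, and new classes are added by writing down a signature.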
An ERP analysis of recognition and categorization decisions in a prototype-distortion task.
Tunney, Richard J; Fernie, Gordon; Astle, Duncan E
2010-04-12
Theories of categorization make different predictions about the underlying processes used to represent categories. Episodic theories suggest that categories are represented by storing previously encountered exemplars in memory. Prototype theories suggest that categories are represented in the form of a prototype, independently of memory. A number of studies showing dissociations between categorization and recognition are often cited as evidence for the prototype account. These dissociations, however, have compared recognition judgements made to one set of items with categorization judgements made to a different set of items, making a clear interpretation difficult. Instead of using different stimuli for different tests, this experiment compares the processes by which participants make category-membership decisions in a prototype-distortion task with recognition decisions about the same set of stimuli, by examining the event-related potentials (ERPs) associated with them. Sixty-three participants were asked to make categorization or recognition decisions about stimuli that either formed an artificial category or were category non-members. We examined the ERP components associated with both kinds of decision for pre-exposed and control participants. In contrast to studies using different items, we observed no behavioural differences between the two kinds of decision; participants were equally able to distinguish category members from non-members, regardless of whether they were performing a recognition or categorization judgement. Interestingly, this did not interact with prior exposure. However, the ERP data demonstrated that the early visual evoked response that discriminated category members from non-members was modulated by which judgement participants performed and whether they had been pre-exposed to category members.
We conclude from this that any differences between categorization and recognition reflect differences in the information that participants focus on in the stimuli to make the judgements at test, rather than any differences in encoding or process.
Aust, Ulrike; Braunöder, Elisabeth
2015-02-01
The present experiment investigated pigeons' and humans' processing styles (local or global) in an exemplar-based visual categorization task, in which category membership of every stimulus had to be learned individually, and in a rule-based task, in which category membership was defined by a perceptual rule. Group Intact was trained with the original pictures (providing both intact local and global information), Group Scrambled was trained with scrambled versions of the same pictures (impairing global information), and Group Blurred was trained with blurred versions (impairing local information). Subsequently, all subjects were tested for transfer to the 2 untrained presentation modes. Humans outperformed pigeons regarding learning speed and accuracy as well as transfer performance and showed good learning irrespective of group assignment, whereas the pigeons of Group Blurred needed longer to learn the training tasks than the pigeons of Groups Intact and Scrambled. Also, whereas humans generalized equally well to any novel presentation mode, pigeons' transfer from and to blurred stimuli was impaired. Both species showed faster learning and, for the most part, better transfer in the rule-based than in the exemplar-based task, but there was no evidence that the processing mode used depended on the type of task (exemplar- or rule-based). Whereas pigeons relied on local information throughout, humans did not show a preference for either processing level. Additional tests with grayscale versions of the training stimuli, with versions that were both blurred and scrambled, and with novel instances of the rule-based task confirmed and further extended these findings. PsycINFO Database Record (c) 2015 APA, all rights reserved.
ERP correlates of letter identity and letter position are modulated by lexical frequency
Vergara-Martínez, Marta; Perea, Manuel; Gómez, Pablo; Swaab, Tamara Y.
2013-01-01
The encoding of letter position is a key aspect in all recently proposed models of visual-word recognition. We analyzed the impact of lexical frequency on letter position assignment by examining the temporal dynamics of lexical activation induced by pseudowords extracted from words of different frequencies. For each word (e.g., BRIDGE), we created two pseudowords: A transposed-letter (TL: BRIGDE) and a replaced-letter pseudoword (RL: BRITGE). ERPs were recorded while participants read words and pseudowords in two tasks: Semantic categorization (Experiment 1) and lexical decision (Experiment 2). For high-frequency stimuli, similar ERPs were obtained for words and TL-pseudowords, but the N400 component to words was reduced relative to RL-pseudowords, indicating less lexical/semantic activation. In contrast, TL- and RL-pseudowords created from low-frequency stimuli elicited similar ERPs. Behavioral responses in the lexical decision task paralleled this asymmetry. The present findings impose constraints on computational and neural models of visual-word recognition. PMID:23454070
Evidence for perceptual deficits in associative visual (prosop)agnosia: a single-case study.
Delvenne, Jean François; Seron, Xavier; Coyette, Françoise; Rossion, Bruno
2004-01-01
Associative visual agnosia is classically defined as normal visual perception stripped of its meaning [Archiv für Psychiatrie und Nervenkrankheiten 21 (1890) 22/English translation: Cognitive Neuropsychol. 5 (1988) 155]: these patients cannot access their stored visual memories to categorize the objects they nonetheless perceive correctly. However, according to an influential theory of visual agnosia [Farah, Visual Agnosia: Disorders of Object Recognition and What They Tell Us about Normal Vision, MIT Press, Cambridge, MA, 1990], associative visual agnosics necessarily present perceptual deficits that are the cause of their impairment at object recognition. Here we report a detailed investigation of a patient (NS) with bilateral occipito-temporal lesions who is strongly impaired at object and face recognition. NS presents normal drawing copy and normal performance at object and face matching tasks as used in classical neuropsychological tests. However, when tested with several computer tasks using carefully controlled visual stimuli and taking both his accuracy rate and response times into account, NS was found to have abnormal performance at high-level visual processing of objects and faces. Albeit presenting a different pattern of deficits than previously described in integrative agnosic patients such as HJA and LH, his deficits were characterized by an inability to integrate individual parts into a whole percept, as suggested by his failure at processing structurally impossible three-dimensional (3D) objects, an absence of face inversion effects, and an advantage at detecting and matching single parts. Taken together, these observations question the idea of separate visual representations for object/face perception and object/face knowledge derived from investigations of visual associative (prosop)agnosia, and they raise some methodological issues in the analysis of single-case studies of (prosop)agnosic patients.
Sevinc, Gunes; Spreng, R Nathan
2014-01-01
Human morality has been investigated using a variety of tasks ranging from judgments of hypothetical dilemmas to viewing morally salient stimuli. These experiments have provided insight into neural correlates of moral judgments and emotions, yet these approaches reveal important differences in moral cognition. Moral reasoning tasks require active deliberation while moral emotion tasks involve the perception of stimuli with moral implications. We examined convergent and divergent brain activity associated with these experimental paradigms taking a quantitative meta-analytic approach. A systematic search of the literature yielded 40 studies. Studies involving explicit decisions in a moral situation were categorized as active (n = 22); studies evoking moral emotions were categorized as passive (n = 18). We conducted a coordinate-based meta-analysis using the Activation Likelihood Estimation to determine reliable patterns of brain activity. Results revealed a convergent pattern of reliable brain activity for both task categories in regions of the default network, consistent with the social and contextual information processes supported by this brain network. Active tasks revealed more reliable activity in the temporoparietal junction, angular gyrus and temporal pole. Active tasks demand deliberative reasoning and may disproportionately involve the retrieval of social knowledge from memory, mental state attribution, and construction of the context through associative processes. In contrast, passive tasks reliably engaged regions associated with visual and emotional information processing, including lingual gyrus and the amygdala. A laterality effect was observed in dorsomedial prefrontal cortex, with active tasks engaging the left, and passive tasks engaging the right. While overlapping activity patterns suggest a shared neural network for both tasks, differential activity suggests that processing of moral input is affected by task demands. 
The results provide novel insight into distinct features of moral cognition, including the generation of moral context through associative processes and the perceptual detection of moral salience.
Vélez-Coto, María; Rodríguez-Fórtiz, María José; Rodriguez-Almendros, María Luisa; Cabrera-Cuevas, Marcelino; Rodríguez-Domínguez, Carlos; Ruiz-López, Tomás; Burgos-Pulido, Ángeles; Garrido-Jiménez, Inmaculada; Martos-Pérez, Juan
2017-05-01
People with low-functioning ASD and other disabilities often find it difficult to understand the symbols traditionally used in educational materials during the learning process. Technology-based interventions are becoming increasingly common, helping children with cognitive disabilities to perform academic tasks and improve their abilities and knowledge. Such children often find it difficult to perform certain tasks contained in educational materials since they lack necessary skills such as abstract reasoning. In order to help these children, the authors designed and created SIGUEME to train attention and the perceptual and visual cognitive skills required to work with and understand graphic materials and objects. A pre-test/post-test design was implemented to test SIGUEME. Seventy-four children with low-functioning ASD (age=13.47, SD=8.74) were trained with SIGUEME over twenty-five sessions and compared with twenty-eight children (age=12.61, SD=2.85) who had not received any intervention. There was a statistically significant improvement in the experimental group in Attention (W=-5.497, p<0.001). There was also a significant change in Association and Categorization (W=2.721, p=0.007) and Interaction (W=-3.287, p=0.001). SIGUEME is an effective tool for improving attention, categorization and interaction in low-functioning children with ASD. It is also a useful and powerful instrument for teachers, parents and educators by increasing the child's motivation and autonomy. Copyright © 2017 Elsevier Ltd. All rights reserved.
Decoding task-based attentional modulation during face categorization.
Chiu, Yu-Chin; Esterman, Michael; Han, Yuefeng; Rosen, Heather; Yantis, Steven
2011-05-01
Attention is a neurocognitive mechanism that selects task-relevant sensory or mnemonic information to achieve current behavioral goals. Attentional modulation of cortical activity has been observed when attention is directed to specific locations, features, or objects. However, little is known about how high-level categorization task set modulates perceptual representations. In the current study, observers categorized faces by gender (male vs. female) or race (Asian vs. White). Each face was perceptually ambiguous in both dimensions, such that categorization of one dimension demanded selective attention to task-relevant information within the face. We used multivoxel pattern classification to show that task-specific modulations evoke reliably distinct spatial patterns of activity within three face-selective cortical regions (right fusiform face area and bilateral occipital face areas). This result suggests that patterns of activity in these regions reflect not only stimulus-specific (i.e., faces vs. houses) responses but also task-specific (i.e., race vs. gender) attentional modulation. Furthermore, exploratory whole-brain multivoxel pattern classification (using a searchlight procedure) revealed a network of dorsal fronto-parietal regions (left middle frontal gyrus and left inferior and superior parietal lobule) that also exhibit distinct patterns for the two task sets, suggesting that these regions may represent abstract goals during high-level categorization tasks.
Space-by-time manifold representation of dynamic facial expressions for emotion categorization
Delis, Ioannis; Chen, Chaona; Jack, Rachael E.; Garrod, Oliver G. B.; Panzeri, Stefano; Schyns, Philippe G.
2016-01-01
Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism—termed space-by-time manifold decomposition—that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected “other.” Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions. PMID:27305521
Chromatic Perceptual Learning but No Category Effects without Linguistic Input
Grandison, Alexandra; Sowden, Paul T.; Drivonikou, Vicky G.; Notman, Leslie A.; Alexander, Iona; Davies, Ian R. L.
2016-01-01
Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks but very little research has explored chromatic perceptual learning. Here, we use two low level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is stimulus specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher level linguistic coding processes in order to manifest. PMID:27252669
Measuring Category Intuitiveness in Unconstrained Categorization Tasks
ERIC Educational Resources Information Center
Pothos, Emmanuel M.; Perlman, Amotz; Bailey, Todd M.; Kurtz, Ken; Edwards, Darren J.; Hines, Peter; McDonnell, John V.
2011-01-01
What makes a category seem natural or intuitive? In this paper, an unsupervised categorization task was employed to examine observer agreement concerning the categorization of nine different stimulus sets. The stimulus sets were designed to capture different intuitions about classification structure. The main empirical index of category…
NICE: A Computational Solution to Close the Gap from Colour Perception to Colour Categorization
Parraga, C. Alejandro; Akbarinia, Arash
2016-01-01
The segmentation of visible electromagnetic radiation into chromatic categories by the human visual system has been extensively studied from a perceptual point of view, resulting in several colour appearance models. However, there is currently a void when it comes to relating these results to the physiological mechanisms that are known to shape the pre-cortical and cortical visual pathway. This work intends to begin to fill this void by proposing a new physiologically plausible model of colour categorization based on Neural Isoresponsive Colour Ellipsoids (NICE) in the cone-contrast space defined by the main directions of the visual signals entering the visual cortex. The model was adjusted to fit psychophysical measures that concentrate on the categorical boundaries and are consistent with the ellipsoidal isoresponse surfaces of visual cortical neurons. By revealing the shape of such categorical colour regions, our measures allow for a more precise and parsimonious description, connecting well-known early visual processing mechanisms to the less understood phenomenon of colour categorization. To test the feasibility of our method we applied it to exemplary images and a popular ground-truth chart obtaining labelling results that are better than those of current state-of-the-art algorithms. PMID:26954691
Dima, Diana C; Perry, Gavin; Singh, Krish D
2018-06-11
In navigating our environment, we rapidly process and extract meaning from visual cues. However, the relationship between visual features and categorical representations in natural scene perception is still not well understood. Here, we used natural scene stimuli from different categories and filtered at different spatial frequencies to address this question in a passive viewing paradigm. Using representational similarity analysis (RSA) and cross-decoding of magnetoencephalography (MEG) data, we show that categorical representations emerge in human visual cortex at ∼180 ms and are linked to spatial frequency processing. Furthermore, dorsal and ventral stream areas reveal temporally and spatially overlapping representations of low and high-level layer activations extracted from a feedforward neural network. Our results suggest that neural patterns from extrastriate visual cortex switch from low-level to categorical representations within 200 ms, highlighting the rapid cascade of processing stages essential in human visual perception. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
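The representational similarity analysis used in this study can be illustrated without any neuroimaging toolbox. Below is a minimal, library-free sketch (hypothetical toy numbers, not the authors' MEG pipeline): build a representational dissimilarity matrix (RDM) per data source, then correlate the two RDMs' off-diagonal entries.

```python
import math

def pearson(a, b):
    # Pearson correlation between two equal-length vectors.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def rdm(patterns):
    # Representational dissimilarity matrix: 1 - correlation
    # between the response patterns of every pair of conditions.
    n = len(patterns)
    return [[1 - pearson(patterns[i], patterns[j]) for j in range(n)]
            for i in range(n)]

def upper_triangle(m):
    # Flatten the off-diagonal (upper) entries of a square matrix.
    return [m[i][j] for i in range(len(m)) for j in range(i + 1, len(m))]

# Hypothetical sensor/unit patterns for four scene conditions:
# conditions 1-2 and 3-4 form two categories in both data sources.
meg = [[1.0, 0.2, 0.1], [0.9, 0.3, 0.2], [0.1, 1.0, 0.8], [0.2, 0.9, 0.9]]
model = [[0.9, 0.1, 0.2], [1.0, 0.2, 0.1], [0.2, 0.8, 1.0], [0.1, 1.0, 0.9]]

# RSA: correlate the two dissimilarity structures, not the raw patterns.
similarity = pearson(upper_triangle(rdm(meg)), upper_triangle(rdm(model)))
```

Because RSA compares dissimilarity structure rather than raw activity, it can relate measurements in incommensurable spaces (MEG sensors vs. network-layer activations), which is exactly how the study links neural data to feedforward-network layers.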
Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming
2018-02-28
The brain represents visual objects with topographic cortical patterns. To address how distributed visual representations enable object categorization, we established predictive encoding models based on a deep residual network, and trained them to predict cortical responses to natural movies. Using this predictive model, we mapped human cortical representations of 64,000 visual objects from 80 categories with high throughput and accuracy. Such representations covered both the ventral and dorsal pathways, reflected multiple levels of object features, and preserved semantic relationships between categories. In the entire visual cortex, object representations were organized into three clusters of categories: biological objects, non-biological objects, and background scenes. In a finer scale specific to each cluster, object representations revealed sub-clusters for further categorization. Such hierarchical clustering of category representations was mostly contributed by cortical representations of object features from middle to high levels. In summary, this study demonstrates a useful computational strategy to characterize the cortical organization and representations of visual features for rapid categorization.
Brants, Marijke; Bulthé, Jessica; Daniels, Nicky; Wagemans, Johan; Op de Beeck, Hans P
2016-02-15
Visual object perception is an important function in primates which can be fine-tuned by experience, even in adults. Which factors determine the regions and the neurons that are modified by learning is still unclear. Recently, it was proposed that the exact cortical focus and distribution of learning effects might depend upon the pre-learning mapping of relevant functional properties and how this mapping determines the informativeness of neural units for the stimuli and the task to be learned. From this hypothesis we would expect that visual experience would strengthen the pre-learning distributed functional map of the relevant distinctive object properties. Here we present a first test of this prediction in twelve human subjects who were trained in object categorization and differentiation, preceded and followed by a functional magnetic resonance imaging session. Specifically, training increased the distributed multi-voxel pattern information for trained object distinctions in object-selective cortex, resulting in a generalization from pre-training multi-voxel activity patterns to after-training activity patterns. Simulations show that the increased selectivity combined with the inter-session generalization is consistent with a training-induced strengthening of a pre-existing selectivity map. No training-related neural changes were detected in other regions. In sum, training to categorize or individuate objects strengthened pre-existing representations in human object-selective cortex, providing a first indication that the neuroanatomical distribution of learning effects depends upon the pre-learning mapping of visual object properties. Copyright © 2015 Elsevier Inc. All rights reserved.
Rossion, Bruno; Dricot, Laurence; Goebel, Rainer; Busigny, Thomas
2011-01-01
How a visual stimulus is initially categorized as a face in a network of human brain areas remains largely unclear. Hierarchical neuro-computational models of face perception assume that the visual stimulus is first decomposed in local parts in lower order visual areas. These parts would then be combined into a global representation in higher order face-sensitive areas of the occipito-temporal cortex. Here we tested this view in fMRI with visual stimuli that are categorized as faces based on their global configuration rather than their local parts (two-tone Mooney figures and Arcimboldo's facelike paintings). Compared to the same inverted visual stimuli that are not categorized as faces, these stimuli activated the right middle fusiform gyrus (“Fusiform face area”) and superior temporal sulcus (pSTS), with no significant activation in the posteriorly located inferior occipital gyrus (i.e., no “occipital face area”). This observation is strengthened by behavioral and neural evidence for normal face categorization of these stimuli in a brain-damaged prosopagnosic patient whose intact right middle fusiform gyrus and superior temporal sulcus are devoid of any potential face-sensitive inputs from the lesioned right inferior occipital cortex. Together, these observations indicate that face-preferential activation may emerge in higher order visual areas of the right hemisphere without any face-preferential inputs from lower order visual areas, supporting a non-hierarchical view of face perception in the visual cortex. PMID:21267432
Montanes, P; Goldblum, M C; Boller, F
1996-08-01
The present study was conducted to assess the hypothesis that visual similarity between exemplars within a semantic category may differentially affect the recognition of living and nonliving things, according to task demands, in patients with semantic memory disorders. Thirty-nine Alzheimer's patients and 39 normal elderly subjects were presented with a task in which they had to classify pictures and words, depicting either living or nonliving things, at two levels of classification: subordinate (e.g., mammals versus birds or tools versus vehicles) and attribute (e.g., wild versus domestic animals or fast versus slow vehicles). Contrary to previous results (Montañes, Goldblum, & Boller, 1995) in a naming task, but as expected, living things were better classified than nonliving ones by both controls and patients. As expected, classifications at the subordinate level also gave rise to better performance than classifications at the attribute level. Although (and somewhat unexpectedly) no advantage of picture over word classification emerged, some effects consistent with the hypothesis that visual similarity affects picture classification emerged, in particular within a subgroup of patients with predominant verbal deficits and the most severe semantic memory disorders. This subgroup obtained a better score on classification of pictures than of words depicting living items (that share many visual features) when classification is at the subordinate level (for which visual similarity is a reliable clue to classification), but met with major difficulties when classifying those pictures at the attribute level (for which shared visual features are not reliable clues to classification).
These results emphasize the fact that some "normal" effects specific to items in living and nonliving categories have to be considered among the factors causing selective category-specific deficits in patients, as well as their relevance in achieving tasks which require either differentiation between competing exemplars in the same semantic category (naming) or detection of resemblance between those exemplars (categorization).
Investigating Visual Alerting in Maritime Command and Control
2008-12-01
were: “qwe” for neutral or “asd” for hostile. The final step in the categorization task was to confirm their decision by clicking the mouse on a box...the Report display was active. As in Experiment 1, participants typed “qwe” or “asd” into the textbox to report their classification. An example of...button, and enter your answer in the text box on the right screen. To enter your answer type "qwe"=neutral or "asd"=hostile. Use the mouse to
Asymmetric top-down modulation of ascending visual pathways in pigeons.
Freund, Nadja; Valencia-Alfonso, Carlos E; Kirsch, Janina; Brodmann, Katja; Manns, Martina; Güntürkün, Onur
2016-03-01
Cerebral asymmetries are a ubiquitous phenomenon evident in many species, including humans, and they display some similarities in their organization across vertebrates. In many species the left hemisphere is associated with the ability to categorize objects based on abstract or experience-based behaviors. Using the asymmetrically organized visual system of pigeons as an animal model, we show that descending forebrain pathways asymmetrically modulate visually evoked responses of single thalamic units. Activity patterns of neurons within the nucleus rotundus, the largest thalamic visual relay structure in birds, were differently modulated by left and right hemispheric descending systems. Thus, visual information ascending towards the left hemisphere was modulated by forebrain top-down systems at thalamic level, while right thalamic units were strikingly less modulated. This asymmetry of top-down control could promote experience-based processes within the left hemisphere, while biasing the right side towards stimulus-bound response patterns. In a subsequent behavioral task we tested the possible functional impact of this asymmetry. Under monocular conditions, pigeons learned to discriminate color pairs, so that each hemisphere was trained on one specific discrimination. Afterwards the animals were presented with stimuli that put the hemispheres in conflict. Response patterns on the conflicting stimuli revealed a clear dominance of the left hemisphere. Transient inactivation of left hemispheric top-down control reduced this dominance while inactivation of right hemispheric top-down control had no effect on response patterns. Functional asymmetries of descending systems that modify visual ascending pathways seem to play an important role in the superiority of the left hemisphere in experience-based visual tasks. Copyright © 2015. Published by Elsevier Ltd.
The Bayesian reader: explaining word recognition as an optimal Bayesian decision process.
Norris, Dennis
2006-04-01
This article presents a theory of visual word recognition that assumes that, in the tasks of word identification, lexical decision, and semantic categorization, human readers behave as optimal Bayesian decision makers. This leads to the development of a computational model of word recognition, the Bayesian reader. The Bayesian reader successfully simulates some of the most significant data on human reading. The model accounts for the nature of the function relating word frequency to reaction time and identification threshold, the effects of neighborhood density and its interaction with frequency, and the variation in the pattern of neighborhood density effects seen in different experimental tasks. Both the general behavior of the model and the way the model predicts different patterns of results in different tasks follow entirely from the assumption that human readers approximate optimal Bayesian decision makers. ((c) 2006 APA, all rights reserved).
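The optimal-decision idea behind the Bayesian reader can be sketched in a few lines (toy two-word lexicon, hypothetical numbers, not the published model): the posterior over words combines perceptual likelihood with a word-frequency prior, so with equally ambiguous evidence the frequent word dominates, mirroring the word-frequency effect the model accounts for.

```python
def posterior(likelihoods, priors):
    # Bayes' rule: P(word | input) ∝ P(input | word) * P(word), normalized.
    joint = {w: likelihoods[w] * priors[w] for w in likelihoods}
    z = sum(joint.values())
    return {w: v / z for w, v in joint.items()}

# Hypothetical lexicon: identical perceptual evidence, different frequency.
priors = {"the": 0.9, "thy": 0.1}          # word-frequency prior
likelihoods = {"the": 0.5, "thy": 0.5}     # fully ambiguous visual input

post = posterior(likelihoods, priors)
# The frequent word "the" carries most of the posterior mass,
# so it reaches a decision threshold with less perceptual evidence.
```

In the full model the likelihoods are accumulated over perceptual samples, so lower-frequency words simply require more samples (longer reaction times) to reach the same posterior criterion.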
Child–Adult Differences in Using Dual-Task Paradigms to Measure Listening Effort
Charles, Lauren M.; Ricketts, Todd A.
2017-01-01
Purpose The purpose of the project was to investigate the effects of modifying the secondary task in a dual-task paradigm to measure objective listening effort. To be specific, the complexity and depth of processing were increased relative to a simple secondary task. Method Three dual-task paradigms were developed for school-age children. The primary task was word recognition. The secondary task was a physical response to a visual probe (simple task), a physical response to a complex probe (increased complexity), or word categorization (increased depth of processing). Sixteen adults (22–32 years, M = 25.4) and 22 children (9–17 years, M = 13.2) were tested using the 3 paradigms in quiet and noise. Results For both groups, manipulations of the secondary task did not affect word recognition performance. For adults, increasing depth of processing increased the calculated effect of noise; however, for children, results with the deep secondary task were the least stable. Conclusions Manipulations of the secondary task differentially affected adults and children. Consistent with previous findings, increased depth of processing enhanced paradigm sensitivity for adults. However, younger participants were more likely to demonstrate the expected effects of noise on listening effort using a secondary task that did not require deep processing. PMID:28346816
Impact of feature saliency on visual category learning.
Hammer, Rubi
2015-01-01
People have to sort numerous objects into a large number of meaningful categories while operating in varying contexts. This requires identifying the visual features that best predict the 'essence' of objects (e.g., edibility), rather than categorizing objects based on the most salient features in a given context. To gain this capacity, visual category learning (VCL) relies on multiple cognitive processes. These may include unsupervised statistical learning, which requires observing multiple objects for learning the statistics of their features. Other learning processes enable incorporating different sources of supervisory information, alongside the visual features of the categorized objects, from which the categorical relations between few objects can be deduced. These deductions enable inferring that objects from the same category may differ from one another in some high-saliency feature dimensions, whereas lower-saliency feature dimensions can best differentiate objects from distinct categories. Here I illustrate how feature saliency affects VCL, by also discussing kinds of supervisory information enabling reflective categorization. Arguably, principles debated here are often being ignored in categorization studies. PMID:25954220
ERIC Educational Resources Information Center
Diesendruck, Gil; Peretz, Shimon
2013-01-01
Visual appearance is one of the main cues children rely on when categorizing novel objects. In 3 studies, testing 128 3-year-olds and 192 5-year-olds, we investigated how various kinds of information may differentially lead children to overlook visual appearance in their categorization decisions across domains. Participants saw novel animals or…
Li, Yuanyuan; Li, Fei; He, Ning; Guo, Lanting; Huang, Xiaoqi; Lui, Su; Gong, Qiyong
2014-08-04
Impaired working memory is thought to be a core feature of attention deficit hyperactivity disorder (ADHD). Previous imaging studies investigating working memory in ADHD have used tasks involving different cognitive resources and ignoring the categorical judgments about objects that are essential parts of performance in visual working memory tasks, thus complicating the interpretation of their findings. In the present study, we explored differential neural activation in children and adolescents with ADHD and in healthy controls using functional magnetic resonance imaging (fMRI) with the categorical n-back task (CN-BT), which maximized demands for executive reasoning while holding memory demands constant. A total of 33 drug-naive, right-handed male ADHD patients without comorbidity (mean age 9.9±2.4 years) and 27 right-handed, healthy male controls (mean age 10.9±2.7 years) were recruited in the present study. Event-related fMRI was used to study differences in brain activity during the CN-BT between the two groups. The two groups did not differ in their accuracy in the CN-BT, although the ADHD patients showed significantly shorter reaction times to correct responses than did the controls. During the CN-BT, both ADHD patients and controls showed significant positive and negative activations by the correct responses, mainly in the sensory-motor pathways and the striato-cerebellum circuit. Additionally, the ADHD patients showed significantly higher activation in the bilateral globus pallidus and the right hippocampus compared with the controls. There was also a positive correlation between hyperactivation of the left globus pallidus and the reaction time to correct responses in ADHD. In contrast to controls, ADHD patients showed neural hyperactivation in the striatum and mediotemporal areas during a working memory task involving categorization.
Hyperfunction in these areas might be the pathophysiological foundation of ADHD, related to the deficits of working memory and the impulsive symptoms. Copyright © 2014 Elsevier Inc. All rights reserved.
Schuch, Stefanie; Werheid, Katja; Koch, Iring
2012-01-01
The present study investigated whether the processing characteristics of categorizing emotional facial expressions are different from those of categorizing facial age and sex information. Given that emotions change rapidly, it was hypothesized that processing facial expressions involves a more flexible task set that causes less between-task interference than the task sets involved in processing age or sex of a face. Participants switched between three tasks: categorizing a face as looking happy or angry (emotion task), young or old (age task), and male or female (sex task). Interference between tasks was measured by global interference and response interference. Both measures revealed patterns of asymmetric interference. Global between-task interference was reduced when a task was mixed with the emotion task. Response interference, as measured by congruency effects, was larger for the emotion task than for the nonemotional tasks. The results support the idea that processing emotional facial expression constitutes a more flexible task set that causes less interference (i.e., task-set "inertia") than processing the age or sex of a face.
Vanmarcke, Steven; Wagemans, Johan
2017-04-01
Adolescents with and without autism spectrum disorder (ASD) performed two priming experiments in which they implicitly processed a prime stimulus, containing high and/or low spatial frequency information, and then explicitly categorized a target face either as male/female (gender task) or as positive/negative (valence task). Adolescents with ASD made more categorization errors than typically developing adolescents. They also showed an age-dependent improvement in categorization speed and had more difficulties with categorizing facial expressions than gender. However, in neither of the categorization tasks did we find group differences in the processing of coarse versus fine prime information. This contradicted our expectations, and indicated that the perceptual differences between adolescents with and without ASD critically depended on the processing time available for the primes.
Categorical encoding of color in the brain.
Bird, Chris M; Berens, Samuel C; Horner, Aidan J; Franklin, Anna
2014-03-25
The areas of the brain that encode color categorically have not yet been reliably identified. Here, we used functional MRI adaptation to identify neuronal populations that represent color categories irrespective of metric differences in color. Two colors were successively presented within a block of trials. The two colors were either from the same or different categories (e.g., "blue 1 and blue 2" or "blue 1 and green 1"), and the size of the hue difference was varied. Participants performed a target detection task unrelated to the difference in color. In the middle frontal gyrus of both hemispheres and to a lesser extent, the cerebellum, blood-oxygen level-dependent response was greater for colors from different categories relative to colors from the same category. Importantly, activation in these regions was not modulated by the size of the hue difference, suggesting that neurons in these regions represent color categorically, regardless of metric color difference. Representational similarity analyses, which investigated the similarity of the pattern of activity across local groups of voxels, identified other regions of the brain (including the visual cortex), which responded to metric but not categorical color differences. Therefore, categorical and metric hue differences appear to be coded in qualitatively different ways and in different brain regions. These findings have implications for the long-standing debate on the origin and nature of color categories, and also further our understanding of how color is processed by the brain.
Using spoken words to guide open-ended category formation.
Chauhan, Aneesh; Seabra Lopes, Luís
2011-11-01
Naming is a powerful cognitive tool that facilitates categorization by forming an association between words and their referents. There is evidence in child development literature that strong links exist between early word-learning and conceptual development. A growing view is also emerging that language is a cultural product created and acquired through social interactions. Inspired by these studies, this paper presents a novel learning architecture for category formation and vocabulary acquisition in robots through active interaction with humans. This architecture is open-ended and is capable of acquiring new categories and category names incrementally. The process can be compared to language grounding in children at single-word stage. The robot is embodied with visual and auditory sensors for world perception. A human instructor uses speech to teach the robot the names of the objects present in a visually shared environment. The robot uses its perceptual input to ground these spoken words and dynamically form/organize category descriptions in order to achieve better categorization. To evaluate the learning system at word-learning and category formation tasks, two experiments were conducted using a simple language game involving naming and corrective feedback actions from the human user. The obtained results are presented and discussed in detail.
Effects of target typicality on categorical search.
Maxfield, Justin T; Stalder, Westri D; Zelinsky, Gregory J
2014-10-01
The role of target typicality in a categorical visual search task was investigated by cueing observers with a target name, followed by a five-item target present/absent search array in which the target images were rated in a pretest to be high, medium, or low in typicality with respect to the basic-level target cue. Contrary to previous work, we found that search guidance was better for high-typicality targets compared to low-typicality targets, as measured by both the proportion of immediate target fixations and the time to fixate the target. Consistent with previous work, we also found an effect of typicality on target verification times, the time between target fixation and the search judgment; as target typicality decreased, verification times increased. To model these typicality effects, we trained Support Vector Machine (SVM) classifiers on the target categories, and tested these on the corresponding specific targets used in the search task. This analysis revealed significant differences in classifier confidence between the high-, medium-, and low-typicality groups, paralleling the behavioral results. Collectively, these findings suggest that target typicality broadly affects both search guidance and verification, and that differences in typicality can be predicted by distance from an SVM classification boundary. © 2014 ARVO.
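The paper's key modeling move, reading classifier confidence as distance from the decision boundary, does not require the authors' SVM pipeline to illustrate. A minimal linear-classifier sketch (hypothetical weights and feature vectors, not the trained models from the study):

```python
import math

def signed_distance(w, b, x):
    # Signed distance from point x to the hyperplane w·x + b = 0.
    # Its magnitude serves as a graded confidence score: exemplars far
    # from the boundary are "typical", near-boundary ones "atypical".
    dot = sum(wi * xi for wi, xi in zip(w, x))
    norm = math.sqrt(sum(wi * wi for wi in w))
    return (dot + b) / norm

# Hypothetical linear boundary in a 2-D "dog vs. not-dog" feature space.
w, b = [2.0, -1.0], 0.5
typical = [3.0, 1.0]    # deep inside the category region
atypical = [0.1, 0.8]   # close to the boundary

conf_typical = abs(signed_distance(w, b, typical))
conf_atypical = abs(signed_distance(w, b, atypical))
```

Under this reading, the behavioral result follows directly: low-typicality targets sit near the boundary, so both guidance and verification operate on a weaker category signal.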
Cortes-Ciriano, Isidro
2016-01-01
Assessing compound toxicity at early stages of the drug discovery process is a crucial task to dismiss drug candidates likely to fail in clinical trials. Screening drug candidates against structural alerts, i.e. chemical fragments associated with a toxicological response before or after being metabolized (bioactivation), has proved to be a valuable approach for this task. During the last decades, diverse algorithms have been proposed for the automatic derivation of structural alerts from categorical toxicity data sets. Here, the Python library bioalerts is presented, which comprises functionalities for the automatic derivation of structural alerts from categorical (dichotomous), e.g. toxic/non-toxic, and continuous bioactivity data sets, e.g. [Formula: see text] or [Formula: see text] values. The library bioalerts relies on the RDKit implementation of the circular Morgan fingerprint algorithm to compute chemical substructures, which are derived by considering radial atom neighbourhoods of increasing bond radius. In addition to the derivation of structural alerts, bioalerts provides functionalities for the calculation of unhashed (keyed) Morgan fingerprints, which can be used in predictive bioactivity modelling with the advantage of allowing for a chemically meaningful deconvolution of the chemical space. Finally, bioalerts provides functionalities for the easy visualization of the derived structural alerts.
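The core idea of deriving structural alerts from categorical toxicity data can be illustrated with a toy sketch. The substructure names, data, and enrichment threshold below are illustrative only; bioalerts itself operates on Morgan fingerprint keys computed with RDKit and uses proper statistical testing rather than a raw frequency comparison:

```python
from collections import Counter

# Toy data: each compound is a set of substructure identifiers
# (in practice, Morgan fingerprint keys) plus a toxic/non-toxic label.
compounds = [
    ({"nitro", "phenyl"}, True),
    ({"nitro", "amide"}, True),
    ({"nitro", "ester"}, True),
    ({"phenyl", "ester"}, False),
    ({"amide"}, False),
    ({"ester", "phenyl"}, False),
]

toxic = Counter()
total = Counter()
n_toxic = sum(1 for _, is_toxic in compounds if is_toxic)
for subs, is_toxic in compounds:
    for s in subs:
        total[s] += 1
        toxic[s] += is_toxic

# Flag substructures over-represented among toxic compounds as
# candidate structural alerts (requiring at least 2 occurrences).
baseline = n_toxic / len(compounds)
alerts = [s for s in total if toxic[s] / total[s] > baseline and total[s] >= 2]
print(sorted(alerts))
```

Here only the "nitro" fragment occurs exclusively in toxic compounds, so it is flagged as an alert.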
The Nature of Phoneme Representation in Spoken Word Recognition
ERIC Educational Resources Information Center
Gaskell, M. Gareth; Quinlan, Philip T.; Tamminen, Jakke; Cleland, Alexandra A.
2008-01-01
Four experiments used the psychological refractory period logic to examine whether integration of multiple sources of phonemic information has a decisional locus. All experiments made use of a dual-task paradigm in which participants made forced-choice color categorization (Task 1) and phoneme categorization (Task 2) decisions at varying stimulus…
Drawing the line between constituent structure and coherence relations in visual narratives.
Cohn, Neil; Bender, Patrick
2017-02-01
Theories of visual narrative understanding have often focused on the changes in meaning across a sequence, like shifts in characters, spatial location, and causation, as cues for breaks in the structure of a discourse. In contrast, the theory of visual narrative grammar posits that hierarchic "grammatical" structures operate at the discourse level using categorical roles for images, which may or may not co-occur with shifts in coherence. We therefore examined the relationship between narrative structure and coherence shifts in the segmentation of visual narrative sequences using a "segmentation task" where participants drew lines between images in order to divide them into subepisodes. We used regressions to analyze the influence of the expected constituent structure boundary, narrative categories, and semantic coherence relationships on the segmentation of visual narrative sequences. Narrative categories were a stronger predictor of segmentation than linear coherence relationships between panels, though both influenced participants' divisions. Altogether, these results support the theory that meaningful sequential images use a narrative grammar that extends above and beyond linear semantic shifts between discourse units. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
The role of object categories in hybrid visual and memory search
Cunningham, Corbin A.; Wolfe, Jeremy M.
2014-01-01
In hybrid search, observers (Os) search for any of several possible targets in a visual display containing distracting items and, perhaps, a target. Wolfe (2012) found that response times (RT) in such tasks increased linearly with increases in the number of items in the display. However, RT increased linearly with the log of the number of items in the memory set. In earlier work, all items in the memory set were unique instances (e.g. this apple in this pose). Typical real world tasks involve more broadly defined sets of stimuli (e.g. any “apple” or, perhaps, “fruit”). The present experiments show how sets or categories of targets are handled in joint visual and memory search. In Experiment 1, searching for a digit among letters was not like searching for targets from a 10-item memory set, though searching for targets from an N-item memory set of arbitrary alphanumeric characters was like searching for targets from an N-item memory set of arbitrary objects. In Experiment 2, Os searched for any instance of N sets or categories held in memory. This hybrid search was harder than search for specific objects. However, memory search remained logarithmic. Experiment 3 illustrates the interaction of visual guidance and memory search when a subset of visual stimuli is drawn from a target category. Furthermore, we outline a conceptual model, supported by our results, defining the core components that would be necessary to support such categorical hybrid searches. PMID:24661054
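The RT pattern reported by Wolfe (2012), linear in visual set size but logarithmic in memory set size, can be written as a small descriptive model. The coefficients below are arbitrary placeholders, not fitted values from the paper:

```python
import math

def hybrid_search_rt(n_display, n_memory, base=400.0, unit_slope=25.0):
    """Descriptive reaction time (ms) for hybrid search.

    RT is linear in the number of display items, and the search
    slope per display item grows with the log of the memory set size.
    """
    slope = unit_slope * math.log2(n_memory + 1)
    return base + slope * n_display

# Doubling the display adds a fixed per-item cost; expanding the
# memory set from 1 to 16 targets adds only a logarithmic one.
print(hybrid_search_rt(8, 1), hybrid_search_rt(16, 1), hybrid_search_rt(8, 16))
```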
Brady, Timothy F; Oliva, Aude
2008-07-01
Recent work has shown that observers can parse streams of syllables, tones, or visual shapes and learn statistical regularities in them without conscious intent (e.g., learn that A is always followed by B). Here, we demonstrate that these statistical-learning mechanisms can operate at an abstract, conceptual level. In Experiments 1 and 2, observers incidentally learned which semantic categories of natural scenes covaried (e.g., kitchen scenes were always followed by forest scenes). In Experiments 3 and 4, category learning with images of scenes transferred to words that represented the categories. In each experiment, the category of the scenes was irrelevant to the task. Together, these results suggest that statistical-learning mechanisms can operate at a categorical level, enabling generalization of learned regularities using existing conceptual knowledge. Such mechanisms may guide learning in domains as disparate as the acquisition of causal knowledge and the development of cognitive maps from environmental exploration.
Forder, Lewis; He, Xun; Franklin, Anna
2017-01-01
Debate exists about the time course of the effect of colour categories on visual processing. We investigated the effect of colour categories for two groups who differed in whether they categorised a blue-green boundary colour as the same- or different-category to a reliably-named blue colour and a reliably-named green colour. Colour differences were equated in just-noticeable differences to be equally discriminable. We analysed event-related potentials for these colours elicited on a passive visual oddball task and investigated the time course of categorical effects on colour processing. Support for category effects was found 100 ms after stimulus onset, and over frontal sites around 250 ms, suggesting that colour naming affects both early sensory and later stages of chromatic processing. PMID:28542426
Atoms of recognition in human and computer vision.
Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel
2016-03-08
Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.
Fast transfer of crossmodal time interval training.
Chen, Lihan; Zhou, Xiaolin
2014-06-01
Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.
ERIC Educational Resources Information Center
Dick, Anthony Steven
2012-01-01
Two experiments examined processes underlying cognitive inflexibility in set-shifting tasks typically used to assess the development of executive function in children. Adult participants performed a Flexible Item Selection Task (FIST) that requires shifting from categorizing by one dimension (e.g., color) to categorizing by a second orthogonal…
Jozwik, Kamila M.; Kriegeskorte, Nikolaus; Storrs, Katherine R.; Mur, Marieke
2017-01-01
Recent advances in Deep convolutional Neural Networks (DNNs) have enabled unprecedentedly accurate computational models of brain representations, and present an exciting opportunity to model diverse cognitive functions. State-of-the-art DNNs achieve human-level performance on object categorisation, but it is unclear how well they capture human behavior on complex cognitive tasks. Recent reports suggest that DNNs can explain significant variance in one such task, judging object similarity. Here, we extend these findings by replicating them for a rich set of object images, comparing performance across layers within two DNNs of different depths, and examining how the DNNs’ performance compares to that of non-computational “conceptual” models. Human observers performed similarity judgments for a set of 92 images of real-world objects. Representations of the same images were obtained in each of the layers of two DNNs of different depths (8-layer AlexNet and 16-layer VGG-16). To create conceptual models, other human observers generated visual-feature labels (e.g., “eye”) and category labels (e.g., “animal”) for the same image set. Feature labels were divided into parts, colors, textures and contours, while category labels were divided into subordinate, basic, and superordinate categories. We fitted models derived from the features, categories, and from each layer of each DNN to the similarity judgments, using representational similarity analysis to evaluate model performance. In both DNNs, similarity within the last layer explains most of the explainable variance in human similarity judgments. The last layer outperforms almost all feature-based models. Late and mid-level layers outperform some but not all feature-based models. Importantly, categorical models predict similarity judgments significantly better than any DNN layer. Our results provide further evidence for commonalities between DNNs and brain representations. 
Models derived from visual features other than object parts perform relatively poorly, perhaps because DNNs more comprehensively capture the colors, textures and contours which matter to human object perception. However, categorical models outperform DNNs, suggesting that further work may be needed to bring high-level semantic representations in DNNs closer to those extracted by humans. Modern DNNs explain similarity judgments remarkably well considering they were not trained on this task, and are promising models for many aspects of human cognition. PMID:29062291
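The representational similarity analysis used to evaluate these models can be sketched as a rank correlation between the upper triangles of two representational dissimilarity matrices (RDMs). The random arrays below are stand-ins for the behavioral judgments and layer activations, not the study's data:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_images = 92  # size of the image set used in the study

def rdm_from_features(features):
    """Pairwise dissimilarity (1 - correlation) between image representations."""
    return 1 - np.corrcoef(features)

# Stand-ins for human similarity judgments and a DNN layer's activations.
human_rdm = rdm_from_features(rng.normal(size=(n_images, 50)))
model_rdm = rdm_from_features(rng.normal(size=(n_images, 4096)))

# Compare only the upper triangles: the RDM is symmetric with a zero diagonal.
iu = np.triu_indices(n_images, k=1)
rho, _ = spearmanr(human_rdm[iu], model_rdm[iu])
print(f"model-human RDM correlation: {rho:.3f}")
```

Repeating this comparison for each DNN layer and each conceptual model yields the layer-wise performance profile the abstract describes.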
Ihrke, Matthias; Brennen, Tim
2011-01-01
In this paper three experiments and corresponding model simulations are reported that investigate the priming of famous name recognition in order to explore the structure of the part of the semantic system dealing with people. Consistent with empirical findings, novel computational simulations using Burton et al.’s interactive activation and competition model point to a conceptual distinction between how priming is initiated in single- and double-familiarity tasks, indicating that priming should be weaker or non-existent for the single-familiarity task. Experiment 1 demonstrates that, within a double-familiarity framework using famous names, categorical and associative priming are reliable effects. Pushing the model to the limit, it predicts that pairs of celebrities who are neither associatively nor categorically related but who share single biographical features, both died in a car crash for example, should prime each other. Experiment 2 investigated this in a double-familiarity task but the effect was not observed. We therefore simulated and realized a pairwise learning task that was conceptually similar to the double-familiarity decision task but allowed the underlying connections to be strengthened. Priming based on a single biographical feature could be found both in the simulations and the experiment. The effect was not due to visual or name similarity, which were controlled for, and participants did not report using the biographical links between the people to learn the pairs. The results are interpreted to lend further support to structural models of the memory for persons. Furthermore, the results are consistent with the idea that episodic features known about people are stored in semantic memory and are automatically activated when encountering that person. PMID:21687446
Differential effect of visual masking in perceptual categorization.
Hélie, Sébastien; Cousineau, Denis
2015-06-01
This article explores the visual information used to categorize stimuli drawn from a common stimulus space into verbal and nonverbal categories using 2 experiments. Experiment 1 explores the effect of target duration on verbal and nonverbal categorization using backward masking to interrupt visual processing. With categories equated for difficulty for long and short target durations, intermediate target duration shows an advantage for verbal categorization over nonverbal categorization. Experiment 2 tests whether the results of Experiment 1 can be explained by shorter target duration resulting in a smaller signal-to-noise ratio of the categorization stimulus. To test for this possibility, Experiment 2 used integration masking with the same stimuli, categories, and masks as Experiment 1 with a varying level of mask opacity. As predicted, low mask opacity yielded similar results to long target duration while high mask opacity yielded similar results to short target duration. Importantly, intermediate mask opacity produced an advantage for verbal categorization over nonverbal categorization, similar to intermediate target duration. These results suggest that verbal and nonverbal categorization are affected differently by manipulations affecting the signal-to-noise ratio of the stimulus, consistent with multiple-system theories of categorizations. The results further suggest that verbal categorization may be more digital (and more robust to low signal-to-noise ratio) while the information used in nonverbal categorization may be more analog (and less robust to lower signal-to-noise ratio). This article concludes with a discussion of how these new results affect the use of masking in perceptual categorization and multiple-system theories of perceptual category learning. (c) 2015 APA, all rights reserved).
Monnery-Patris, Sandrine; Marty, Lucile; Bayer, Frédéric; Nicklaus, Sophie; Chambaron, Stéphanie
2016-01-01
Attitudes are important precursors of behaviours. This study aims to compare the food attitudes (i.e., hedonic- and nutrition-based) of children using both an implicit pairing task and an explicit forced-choice categorization task suitable for the cognitive abilities of 5- to 11-year-olds. A dominance of hedonically driven attitudes was expected for all ages in the pairing task, designed to elicit affective and spontaneous answers, whereas a progressive emergence of nutrition-based attitudes was expected in the categorization task, designed to involve deliberate analyses of the costs/benefits of foods. An additional exploratory goal was to evaluate differences in the attitudes of normal and overweight children in both tasks. Children from 3 school levels (n = 194; mean age = 8.03 years) were individually tested on computers in their schools. They performed a pairing task in which the tendencies to associate foods with nutritional vs. culinary contexts were assessed. Next, they were asked to categorize each food into one of the following four categories: "yummy", "yucky" (i.e., hedonic categories), "makes you strong", or "makes you fat" (i.e., nutritional categories). The hedonic/culinary pairs were very frequently selected (81% on average), and this frequency significantly increased through school levels. In contrast, in the categorization task, a significant increase in nutrition-driven categorizations with school level was observed. Additional analyses revealed no differences in the food attitudes between the normal and overweight children in the pairing task, and a tendency towards lower hedonic categorizations among the overweight children. Culinary associations can reflect cultural learning in the French context where food pleasure is dominant. In contrast, the progressive emergence of cognitively driven attitudes with age may reflect the cognitive development of children who are more reasonable and influenced by social norms. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hargreaves, Ian S; Pexman, Penny M
2014-05-01
According to several current frameworks, semantic processing involves an early influence of language-based information followed by later influences of object-based information (e.g., situated simulations; Santos, Chaigneau, Simmons, & Barsalou, 2011). In the present study we examined whether these predictions extend to the influence of semantic variables in visual word recognition. We investigated the time course of semantic richness effects in visual word recognition using a signal-to-respond (STR) paradigm fitted to a lexical decision (LDT) and a semantic categorization (SCT) task. We used linear mixed effects to examine the relative contributions of language-based (number of senses, ARC) and object-based (imageability, number of features, body-object interaction ratings) descriptions of semantic richness at four STR durations (75, 100, 200, and 400ms). Results showed an early influence of number of senses and ARC in the SCT. In both LDT and SCT, object-based effects were the last to influence participants' decision latencies. We interpret our results within a framework in which semantic processes are available to influence word recognition as a function of their availability over time, and of their relevance to task-specific demands. Copyright © 2014 Elsevier B.V. All rights reserved.
Perraudin, Sandrine; Mounoud, Pierre
2009-11-01
We conducted three experiments to study the role of instrumental (e.g. knife-bread) and categorical (e.g. cake-bread) relations in the development of conceptual organization with a priming paradigm, by varying the nature of the task (naming--Experiment 1--or categorical decision--Experiments 2 and 3). The participants were 5-, 7- and 9-year-old children and adults. The results showed that on both types of task, adults and 9-year-old children presented instrumental and categorical priming effects, whereas 5-year-old children presented mainly instrumental priming effects, with categorical effects remaining marginal. Moreover, the magnitude of the instrumental priming effects decreased with age. Finally, the priming effects observed for 7-year-old children depended on the task, especially for the categorical effects. The theoretical implications of these results for our understanding of conceptual reorganization from 5 to 9 years of age are discussed.
Brown, Angela M; Lindsey, Delwin T; Guckes, Kevin M
2011-01-01
The relation between colors and their names is a classic case-study for investigating the Sapir-Whorf hypothesis that categorical perception is imposed on perception by language. Here, we investigate the Sapir-Whorf prediction that visual search for a green target presented among blue distractors (or vice versa) should be faster than search for a green target presented among distractors of a different color of green (or for a blue target among different blue distractors). Gilbert, Regier, Kay & Ivry (2006) reported that this Sapir-Whorf effect is restricted to the right visual field (RVF), because the major brain language centers are in the left cerebral hemisphere. We found no categorical effect at the Green|Blue color boundary, and no categorical effect restricted to the RVF. Scaling of perceived color differences by Maximum Likelihood Difference Scaling (MLDS) also showed no categorical effect, including no effect specific to the RVF. Two models fit the data: a color difference model based on MLDS and a standard opponent-colors model of color discrimination based on the spectral sensitivities of the cones. Neither of these models, nor any of our data, suggested categorical perception of colors at the Green|Blue boundary, in either visual field. PMID:21980188
Arrington, Catherine M; Weaver, Starla M
2015-01-01
Under conditions of volitional control in multitask environments, subjects may engage in a variety of strategies to guide task selection. The current research examines whether subjects may sometimes use a top-down control strategy of selecting a task-irrelevant stimulus dimension, such as location, to guide task selection. We term this approach a stimulus set selection strategy. Using a voluntary task switching procedure, subjects voluntarily switched between categorizing letter and number stimuli that appeared in two, four, or eight possible target locations. Effects of stimulus availability, manipulated by varying the stimulus onset asynchrony between the two target stimuli, and location repetition were analysed to assess the use of a stimulus set selection strategy. Considered across position condition, Experiment 1 showed effects of both stimulus availability and location repetition on task choice suggesting that only in the 2-position condition, where selection based on location always results in a target at the selected location, subjects may have been using a stimulus set selection strategy on some trials. Experiment 2 replicated and extended these findings in a visually more cluttered environment. These results indicate that, contrary to current models of task selection in voluntary task switching, the top-down control of task selection may occur in the absence of the formation of an intention to perform a particular task.
Categorical encoding of color in the brain
Bird, Chris M.; Berens, Samuel C.; Horner, Aidan J.; Franklin, Anna
2014-01-01
The areas of the brain that encode color categorically have not yet been reliably identified. Here, we used functional MRI adaptation to identify neuronal populations that represent color categories irrespective of metric differences in color. Two colors were successively presented within a block of trials. The two colors were either from the same or different categories (e.g., “blue 1 and blue 2” or “blue 1 and green 1”), and the size of the hue difference was varied. Participants performed a target detection task unrelated to the difference in color. In the middle frontal gyrus of both hemispheres and to a lesser extent, the cerebellum, blood-oxygen level-dependent response was greater for colors from different categories relative to colors from the same category. Importantly, activation in these regions was not modulated by the size of the hue difference, suggesting that neurons in these regions represent color categorically, regardless of metric color difference. Representational similarity analyses, which investigated the similarity of the pattern of activity across local groups of voxels, identified other regions of the brain (including the visual cortex), which responded to metric but not categorical color differences. Therefore, categorical and metric hue differences appear to be coded in qualitatively different ways and in different brain regions. These findings have implications for the long-standing debate on the origin and nature of color categories, and also further our understanding of how color is processed by the brain. PMID:24591602
Faces in-between: evaluations reflect the interplay of facial features and task-dependent fluency.
Winkielman, Piotr; Olszanowski, Michal; Gola, Mateusz
2015-04-01
Facial features influence social evaluations. For example, faces are rated as more attractive and trustworthy when they have more smiling features and also more female features. However, the influence of facial features on evaluations should be qualified by the affective consequences of fluency (cognitive ease) with which such features are processed. Further, fluency (along with its affective consequences) should depend on whether the current task highlights conflict between specific features. Four experiments are presented. In 3 experiments, participants saw faces varying in expressions ranging from pure anger, through mixed expression, to pure happiness. Perceivers first categorized faces either on a control dimension, or an emotional dimension (angry/happy). Thus, the emotional categorization task made "pure" expressions fluent and "mixed" expressions disfluent. Next, participants made social evaluations. Results show that after emotional categorization, but not control categorization, targets with mixed expressions are relatively devalued. Further, this effect is mediated by categorization disfluency. Additional data from facial electromyography reveal that on a basic physiological level, affective devaluation of mixed expressions is driven by their objective ambiguity. The fourth experiment shows that the relative devaluation of mixed faces that vary in gender ambiguity requires a gender categorization task. Overall, these studies highlight that the impact of facial features on evaluation is qualified by their fluency, and that the fluency of features is a function of the current task. The discussion highlights the implications of these findings for research on emotional reactions to ambiguity. (c) 2015 APA, all rights reserved).
Aging, culture, and memory for categorically processed information.
Yang, Lixia; Chen, Wenfeng; Ng, Andy H; Fu, Xiaolan
2013-11-01
Literature on cross-cultural differences in cognition suggests that categorization, as an information processing and organization strategy, was more often used by Westerners than by East Asians, particularly for older adults. This study examines East-West cultural differences in memory for categorically processed items and sources in young and older Canadians and native Chinese with a conceptual source memory task (Experiment 1) and a reality monitoring task (Experiment 2). In Experiment 1, participants encoded photographic faces of their own ethnicity that were artificially categorized into GOOD or EVIL characters and then completed a source memory task in which they identified faces as old-GOOD, old-EVIL, or new. In Experiment 2, participants viewed a series of words, each followed either by a corresponding image (i.e., SEEN) or by a blank square within which they imagined an image for the word (i.e., IMAGINED). At test, they decided whether the test words were old-SEEN, old-IMAGINED, or new. In general, Canadians outperformed Chinese in memory for categorically processed information, an effect more pronounced for older than for young adults. Extensive exercise of culturally preferred categorization strategy differentially benefits Canadians and reduces their age group differences in memory for categorically processed information.
Edwards, Darren J; Wood, Rodger
2016-01-01
This study explored over-selectivity (executive dysfunction) using a standard unsupervised categorization task. Over-selectivity has been demonstrated using supervised categorization procedures (where training is given); however, little has been done in the way of unsupervised categorization (without training). A standard unsupervised categorization task was used to assess levels of over-selectivity in a traumatic brain injury (TBI) population. Individuals with TBI were selected from the Tertiary Traumatic Brain Injury Clinic at Swansea University and were asked to categorize two-dimensional items (pictures on cards), into groups that they felt were most intuitive, and without any learning (feedback from experimenter). This was compared against categories made by a control group for the same task. The findings of this study demonstrate that individuals with TBI had deficits for both easy and difficult categorization sets, as indicated by a larger amount of one-dimensional sorting compared to control participants. Deficits were significantly greater for the easy condition. The implications of these findings are discussed in the context of over-selectivity, and the processes that underlie this deficit. Also, the implications for using this procedure as a screening measure for over-selectivity in TBI are discussed.
Children (but not adults) judge similarity in own- and other-race faces by the color of their skin
Balas, Benjamin; Peissig, Jessie; Moulson, Margaret
2014-01-01
Face shape and pigmentation are both diagnostic cues for face identification and categorization. In particular, both shape and pigmentation contribute to observers’ categorization of faces by race. Though many theoretical accounts of the behavioral other-race effect either explicitly or implicitly depend on differential use of visual information as a function of category expertise, there is little evidence that observers do in fact differentially rely upon distinct visual cues for own- and other-race faces. Presently, we examined how Asian and Caucasian children (4–6 years old) and adults use 3D shape and 2D pigmentation to make similarity judgments of White, Black, and Asian faces. Children in this age range are capable of making category judgments about race but are also sufficiently plastic with regard to the behavioral other-race effect that their representations of facial appearance across different categories appear to be still emerging. Using a simple match-to-sample similarity task, we found that children tend to use pigmentation to judge facial similarity more than adults, and also that own- vs. other-group category membership appears to influence how quickly children learn to rely on shape information. We therefore suggest that children continue to adjust how different visual information is weighted during early and middle childhood and that experience with faces affects the speed at which adult-like weightings are established. PMID:25462031
Rapid visual perception of interracial crowds: Racial category learning from emotional segregation.
Lamer, Sarah Ariel; Sweeny, Timothy D; Dyer, Michael Louis; Weisbuch, Max
2018-05-01
Drawing from research on social identity and ensemble coding, we theorize that crowd perception provides a powerful mechanism for social category learning. Crowds include allegiances that may be distinguished by visual cues to shared behavior and mental states, providing perceivers with direct information about social groups and thus a basis for learning social categories. Here, emotion expressions signaled group membership: to the extent that a crowd exhibited emotional segregation (i.e., was segregated into emotional subgroups), a visible characteristic (race) that incidentally distinguished emotional subgroups was expected to support categorical distinctions. Participants were randomly assigned to view interracial crowds in which emotion differences between (black vs. white) subgroups were either small (control condition) or large (emotional segregation condition). On each trial, participants saw crowds of 12 faces (6 black, 6 white) for roughly 300 ms and were asked to estimate the average emotion of the entire crowd. After all trials, participants completed a racial categorization task and self-report measure of race essentialism. As predicted, participants exposed to emotional segregation (vs. control) exhibited stronger racial category boundaries and stronger race essentialism. Furthermore, such effects accrued via ensemble coding, a visual mechanism that summarizes perceptual information: emotional segregation strengthened participants' racial category boundaries to the extent that segregation limited participants' abilities to integrate emotion across racial subgroups. Together with evidence that people observe emotional segregation in natural environments, these findings suggest that crowd perception mechanisms support racial category boundaries and race essentialism. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
PATTERNS OF CLINICALLY SIGNIFICANT COGNITIVE IMPAIRMENT IN HOARDING DISORDER.
Mackin, R Scott; Vigil, Ofilio; Insel, Philip; Kivowitz, Alana; Kupferman, Eve; Hough, Christina M; Fekri, Shiva; Crothers, Ross; Bickford, David; Delucchi, Kevin L; Mathews, Carol A
2016-03-01
The cognitive characteristics of individuals with hoarding disorder (HD) are not well understood. Existing studies are relatively few and somewhat inconsistent but suggest that individuals with HD may have specific dysfunction in the cognitive domains of categorization, speed of information processing, and decision making. However, there have been no studies evaluating the degree to which cognitive dysfunction in these domains reflects clinically significant cognitive impairment (CI). Participants included 78 individuals who met DSM-5 criteria for HD and 70 age- and education-matched controls. Cognitive performance on measures of memory, attention, information processing speed, abstract reasoning, visuospatial processing, decision making, and categorization ability was evaluated for each participant. Rates of clinical impairment for each measure were compared, as were age- and education-corrected raw scores for each cognitive test. HD participants showed a greater incidence of CI on measures of visual memory, visual detection, and visual categorization relative to controls. Raw-score comparisons between groups showed similar results, with HD participants showing lower raw-score performance on each of these measures. In addition, in raw-score comparisons HD participants also demonstrated relative strengths compared to control participants on measures of verbal and visual abstract reasoning. These results suggest that HD is associated with a pattern of clinically significant CI in some visually mediated neurocognitive processes including visual memory, visual detection, and visual categorization. Additionally, these results suggest HD individuals may also exhibit relative strengths, perhaps compensatory, in abstract reasoning in both verbal and visual domains. © 2015 Wiley Periodicals, Inc.
When Art Moves the Eyes: A Behavioral and Eye-Tracking Study
Massaro, Davide; Savazzi, Federica; Di Dio, Cinzia; Freedberg, David; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella
2012-01-01
The aim of this study was to investigate, using an eye-tracking technique, the influence of bottom-up and top-down processes on visual behavior while subjects, naïve to art criticism, were presented with representational paintings. Forty-two subjects viewed color and black and white paintings (Color) categorized as dynamic or static (Dynamism) (bottom-up processes). Half of the images represented natural environments and half human subjects (Content); all stimuli were displayed under aesthetic and movement judgment conditions (Task) (top-down processes). Results on gazing behavior showed that content-related top-down processes prevailed over low-level visually-driven bottom-up processes when a human subject is represented in the painting. On the contrary, bottom-up processes, mediated by low-level visual features, particularly affected gazing behavior when looking at nature-content images. We discuss our results proposing a reconsideration of the definition of content-related top-down processes in accordance with the concept of embodied simulation in art perception. PMID:22624007
Time course of discrimination between emotional facial expressions: the role of visual saliency.
Calvo, Manuel G; Nummenmaa, Lauri
2011-08-01
Saccadic and manual responses were used to investigate the speed of discrimination between happy and non-happy facial expressions in two-alternative-forced-choice tasks. The minimum latencies of correct saccadic responses indicated that the earliest time point at which discrimination occurred ranged between 200 and 280 ms, depending on type of expression. Corresponding minimum latencies for manual responses ranged between 440 and 500 ms. For both response modalities, visual saliency of the mouth region was a critical factor in facilitating discrimination: The more salient the mouth was in happy face targets in comparison with non-happy distracters, the faster discrimination was. Global image characteristics (e.g., luminance) and semantic factors (i.e., categorical similarity and affective valence of expression) made minor or no contribution to discrimination efficiency. This suggests that visual saliency of distinctive facial features, rather than the significance of expression, is used to make both early and later expression discrimination decisions. Copyright © 2011 Elsevier Ltd. All rights reserved.
Individual Differences in Handedness Effects on Categorical versus Coordinate Spatial Processing
2017-05-17
distractor questionnaires (the Edinburgh Handedness Inventory [Oldfield, 1971], the Kentucky Inventory of Mindfulness Skills [Baer, Smith, & Allen, 2004]...and the Mindful Attention Awareness Scale [Brown & Ryan, 2003]), and then engaged in one of two tasks. During the Categorical Task participants
Discrimination of holograms and real objects by pigeons (Columba livia) and humans (Homo sapiens).
Stephan, Claudia; Steurer, Michael M; Aust, Ulrike
2014-08-01
The type of stimulus material employed in visual tasks is crucial to all comparative cognition research that involves object recognition. There is considerable controversy about the use of 2-dimensional stimuli and the impact that the lack of the 3rd dimension (i.e., depth) may have on animals' performance in tests for their visual and cognitive abilities. We report evidence of discrimination learning using a completely novel type of stimuli, namely, holograms. Like real objects, holograms provide full 3-dimensional shape information but they also offer many possibilities for systematically modifying the appearance of a stimulus. Hence, they provide a promising means for investigating visual perception and cognition of different species in a comparative way. We trained pigeons and humans to discriminate either between 2 real objects or between holograms of the same 2 objects, and we subsequently tested both species for the transfer of discrimination to the other presentation mode. The lack of any decrements in accuracy suggests that real objects and holograms were perceived as equivalent in both species and shows the general appropriateness of holograms as stimuli in visual tasks. A follow-up experiment involving the presentation of novel views of the training objects and holograms revealed some interspecies differences in rotational invariance, thereby confirming and extending the results of previous studies. Taken together, these results suggest that holograms may not only provide a promising tool for investigating yet unexplored issues, but their use may also lead to novel insights into some crucial aspects of comparative visual perception and categorization.
Hantsch, Ansgar; Jescheniak, Jörg D; Mädebach, Andreas
2012-07-01
The picture-word interference paradigm is a prominent tool for studying lexical retrieval during speech production. When participants name the pictures, interference from semantically related distractor words has regularly been shown. By contrast, when participants categorize the pictures, facilitation from semantically related distractors has typically been found. In the extant studies, however, differences in the task instructions (naming vs. categorizing) were confounded with the response level: While responses in naming were typically located at the basic level (e.g., "dog"), responses were located at the superordinate level in categorization (e.g., "animal"). The present study avoided this confound by having participants respond at the basic level in both naming and categorization, using the same pictures, distractors, and verbal responses. Our findings confirm the polarity reversal of the semantic effects: semantic interference in naming and semantic facilitation in categorization. These findings show that the polarity reversal of the semantic effect is indeed due to the different tasks and is not an artifact of the different response levels used in previous studies. Implications for current models of language production are discussed.
Ghosts in the machine: memory interference from the previous trial.
Papadimitriou, Charalampos; Ferdoash, Afreen; Snyder, Lawrence H
2015-01-15
Previous memoranda can interfere with the memorization or storage of new information, a concept known as proactive interference. Studies of proactive interference typically use categorical memoranda and match-to-sample tasks with categorical measures such as the proportion of correct to incorrect responses. In this study we instead train five macaques in a spatial memory task with continuous memoranda and responses, allowing us to more finely probe working memory circuits. We first ask whether the memoranda from the previous trial result in proactive interference in an oculomotor delayed response task. We then characterize the spatial and temporal profile of this interference and ask whether this profile can be predicted by an attractor network model of working memory. We find that memory in the current trial shows a bias toward the location of the memorandum of the previous trial. The magnitude of this bias increases with the duration of the memory period within which it is measured. Our simulations using standard attractor network models of working memory show that these models easily replicate the spatial profile of the bias. However, unlike the behavioral findings, these attractor models show an increase in bias with the duration of the previous rather than the current memory period. To model a bias that increases with current trial duration we posit two separate memory stores, a rapidly decaying visual store that resists proactive interference effects and a sustained memory store that is susceptible to proactive interference. Copyright © 2015 the American Physiological Society.
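The two-store account proposed in this abstract lends itself to a toy simulation. The sketch below is illustrative only (hypothetical parameter values, not the authors' fitted attractor model): a fast-decaying, interference-free visual trace is mixed with a sustained store that is pulled toward the previous trial's memorandum, so the recall bias toward the previous target grows with the current trial's delay, matching the qualitative behavioral pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

def circ_diff(a, b):
    """Signed angular difference a - b, wrapped to (-pi, pi]."""
    return (a - b + np.pi) % (2 * np.pi) - np.pi

def recall(target, prev_target, delay, tau=2.0, gamma=0.15, noise_sd=0.05):
    """Toy two-store recall: a fast-decaying veridical visual trace mixed with
    a sustained store pulled toward the previous trial's memorandum.
    All parameter values are illustrative, not fitted to the monkey data."""
    w_visual = np.exp(-delay / tau)                 # interference-free store
    sustained = target + gamma * circ_diff(prev_target, target)
    estimate = w_visual * target + (1 - w_visual) * sustained
    return estimate + rng.normal(0.0, noise_sd)

target, prev_target = 0.0, 0.6                      # locations in radians
for delay in (0.5, 1.5, 3.0):
    errors = [circ_diff(recall(target, prev_target, delay), target)
              for _ in range(2000)]
    print(f"delay {delay:.1f} s: mean bias toward previous target "
          f"= {np.mean(errors):+.3f} rad")
```

As the current delay grows, the decaying visual store contributes less and the interference-prone sustained store dominates, so the mean bias toward the previous memorandum increases with current-trial duration, as the behavioral data (but not a single standard attractor store) would predict.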
Monkeys show recognition without priming in a classification task
Basile, Benjamin M.; Hampton, Robert R.
2012-01-01
Humans show visual perceptual priming by identifying degraded images faster and more accurately if they have seen the original images, while simultaneously failing to recognize the same images. Such priming is commonly thought, with little evidence, to be widely distributed phylogenetically. Following Brodbeck (1997), we trained rhesus monkeys (Macaca mulatta) to categorize photographs according to content (e.g., birds, fish, flowers, people). In probe trials, we tested whether monkeys were faster or more accurate at categorizing degraded versions of previously seen images (primed) than degraded versions of novel images (unprimed). Monkeys categorized reliably, but showed no benefit from having previously seen the images. This finding was robust across manipulations of image quality (color, grayscale, line drawings), type of image degradation (occlusion, blurring), levels of processing, and number of repetitions of the prime. By contrast, in probe matching-to-sample trials, monkeys recognized the primes, demonstrating that they remembered the primes and could discriminate them from other images in the same category under the conditions used to test for priming. Two experiments that replicated Brodbeck’s (1997) procedures also produced no evidence of priming. This inability to find priming in monkeys under perceptual conditions sufficient for recognition presents a puzzle. PMID:22975587
Compensatory processing during rule-based category learning in older adults.
Bharani, Krishna L; Paller, Ken A; Reber, Paul J; Weintraub, Sandra; Yanar, Jorge; Morrison, Robert G
2016-01-01
Healthy older adults typically perform worse than younger adults at rule-based category learning, but better than patients with Alzheimer's or Parkinson's disease. To further investigate aging's effect on rule-based category learning, we monitored event-related potentials (ERPs) while younger and neuropsychologically typical older adults performed a visual category-learning task with a rule-based category structure and trial-by-trial feedback. Using these procedures, we previously identified ERPs sensitive to categorization strategy and accuracy in young participants. In addition, previous studies have demonstrated the importance of neural processing in the prefrontal cortex and the medial temporal lobe for this task. In this study, older adults showed lower accuracy and longer response times than younger adults, but there were two distinct subgroups of older adults. One subgroup showed near-chance performance throughout the procedure, never categorizing accurately. The other subgroup reached asymptotic accuracy that was equivalent to that in younger adults, although they categorized more slowly. These two subgroups were further distinguished via ERPs. Consistent with the compensation theory of cognitive aging, older adults who successfully learned showed larger frontal ERPs when compared with younger adults. Recruitment of prefrontal resources may have improved performance while slowing response times. Additionally, correlations of feedback-locked P300 amplitudes with category-learning accuracy differentiated successful younger and older adults. Overall, the results suggest that the ability to adapt one's behavior in response to feedback during learning varies across older individuals, and that the failure of some to adapt their behavior may reflect inadequate engagement of prefrontal cortex.
Prototypes, Exemplars, and the Natural History of Categorization
Smith, J. David
2013-01-01
The article explores—from a utility/adaptation perspective—the role of prototype and exemplar processes in categorization. The author surveys important category tasks within the categorization literature from the perspective of the optimality of applying prototype and exemplar processes. Formal simulations reveal that organisms will often (not always!) receive more useful signals about category belongingness if they average their exemplar experience into a prototype and use this as the comparative standard for categorization. This survey then provides the theoretical context for considering the evolution of cognitive systems for categorization. In the article’s final sections, the author reviews recent research on the performance of nonhuman primates and humans in the tasks analyzed in the article. Diverse species share operating principles, default commitments, and processing weaknesses in categorization. From these commonalities, it may be possible to infer some properties of the categorization ecology these species generally experienced during cognitive evolution. PMID:24005828
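The prototype/exemplar contrast the article analyzes can be illustrated with a toy comparison (hypothetical two-dimensional stimuli and an assumed exponential similarity function; this is not the author's formal simulation). A prototype process compares a probe to the average of stored exemplars; an exemplar (GCM-style) process sums similarity to every stored exemplar.

```python
import numpy as np

rng = np.random.default_rng(1)

def sim(x, y, c=2.0):
    """Exponential-decay similarity in feature space (Shepard/GCM-style)."""
    return np.exp(-c * np.linalg.norm(x - y))

# Two hypothetical categories: noisy exemplars scattered around distinct means.
proto_a, proto_b = np.array([0.0, 0.0]), np.array([1.0, 1.0])
ex_a = proto_a + rng.normal(0, 0.3, size=(8, 2))
ex_b = proto_b + rng.normal(0, 0.3, size=(8, 2))

def prototype_evidence(x):
    """Compare to averaged exemplar experience (the prototype)."""
    return sim(x, ex_a.mean(axis=0)), sim(x, ex_b.mean(axis=0))

def exemplar_evidence(x):
    """Sum similarity to every stored exemplar."""
    return sum(sim(x, e) for e in ex_a), sum(sim(x, e) for e in ex_b)

probe = np.array([0.1, -0.1])  # a new item near category A
for name, f in [("prototype", prototype_evidence),
                ("exemplar", exemplar_evidence)]:
    a, b = f(probe)
    print(f"{name} model: evidence A={a:.3f}, B={b:.3f} -> "
          f"category {'A' if a > b else 'B'}")
```

On clean cases like this one the two processes agree; the article's point is that across many task structures the averaged (prototype) standard often, though not always, yields the more useful category-belongingness signal.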
Posterior Parietal Cortex Drives Inferotemporal Activations During Three-Dimensional Object Vision.
Van Dromme, Ilse C; Premereur, Elsie; Verhoef, Bram-Ernst; Vanduffel, Wim; Janssen, Peter
2016-04-01
The primate visual system consists of a ventral stream, specialized for object recognition, and a dorsal visual stream, which is crucial for spatial vision and actions. However, little is known about the interactions and information flow between these two streams. We investigated these interactions within the network processing three-dimensional (3D) object information, comprising both the dorsal and ventral stream. Reversible inactivation of the macaque caudal intraparietal area (CIP) during functional magnetic resonance imaging (fMRI) reduced fMRI activations in posterior parietal cortex in the dorsal stream and, surprisingly, also in the inferotemporal cortex (ITC) in the ventral visual stream. Moreover, CIP inactivation caused a perceptual deficit in a depth-structure categorization task. CIP-microstimulation during fMRI further suggests that CIP projects via posterior parietal areas to the ITC in the ventral stream. To our knowledge, these results provide the first causal evidence for the flow of visual 3D information from the dorsal stream to the ventral stream, and identify CIP as a key area for depth-structure processing. Thus, combining reversible inactivation and electrical microstimulation during fMRI provides a detailed view of the functional interactions between the two visual processing streams.
The Nature of Infant Color Categorization: Evidence from Eye Movements on a Target Detection Task
ERIC Educational Resources Information Center
Franklin, A.; Pilling, M.; Davies, I.
2005-01-01
Infants respond categorically to color. However, the nature of infants' categorical responding to color is unclear. The current study investigated two issues. First, is infants' categorical responding more absolute than adults' categorical responding? That is, can infants discriminate two stimuli from the same color category? Second, is color…
Genetic influences on free and cued recall in long-term memory tasks.
Volk, Heather E; McDermott, Kathleen B; Roediger, Henry L; Todd, Richard D
2006-10-01
Long-term memory (LTM) problems are associated with many psychiatric and neurological illnesses and are commonly measured using free and cued recall tasks. Although LTM has been linked with biologic mechanisms, the etiology of distinct LTM tasks is unknown. We studied LTM in 95 healthy female twin pairs identified through birth records in the state of Missouri. Performance on tasks of free recall of unrelated words, free and cued recall of categorized words, and the vocabulary section of the Wechsler Adult Intelligence Scale (WAIS-R) were examined using structural equation modeling. Additive genetic and unique environmental factors influenced LTM and intelligence. Free recall of unrelated and categorized words, and cued recall of categorized words, were moderately heritable (55%, 38%, and 37%). WAIS-R vocabulary score was highly heritable (77%). Controlling for verbal intelligence in multivariate analyses of recall, two components of genetic influence on LTM were found; one for all three recall scores and one for free and cued categorized word recall. Recall of unrelated and categorized words is influenced by different genetic and environmental factors indicating heterogeneity in LTM. Verbal intelligence is etiologically different from LTM indicating that these two abilities utilize different brain functions.
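The heritability estimates above come from structural equation modeling of twin data. A rough back-of-envelope version of the same decomposition is Falconer's formula, h² = 2(r_MZ − r_DZ), which splits variance into additive genetic (h²), shared environment (c²), and unique environment (e²) components. The twin correlations below are hypothetical, chosen only to land near the ~55% heritability the abstract reports for free recall of unrelated words:

```python
def falconer(r_mz, r_dz):
    """Falconer's rough variance decomposition from twin correlations:
    h2 = additive genetic, c2 = shared environment, e2 = unique environment."""
    h2 = 2 * (r_mz - r_dz)   # MZ twins share ~2x the additive variance of DZ
    c2 = 2 * r_dz - r_mz     # whatever MZ similarity genetics can't explain
    e2 = 1 - r_mz            # residual: unique environment (plus error)
    return h2, c2, e2

# Hypothetical correlations: monozygotic pairs twice as similar as dizygotic.
h2, c2, e2 = falconer(r_mz=0.55, r_dz=0.275)
print(f"h2={h2:.2f}, c2={c2:.2f}, e2={e2:.2f}")
```

A c² near zero is consistent with the AE models (additive genetic plus unique environment only) that the abstract's structural equation analyses favored.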
Parmentier, Fabrice B R; Pacheco-Unguetti, Antonia P; Valero, Sara
2018-01-01
Rare changes in a stream of otherwise repeated task-irrelevant sounds break through selective attention and disrupt performance in an unrelated visual task by triggering shifts of attention to and from the deviant sound (deviance distraction). Evidence indicates that the involuntary orientation of attention to unexpected sounds is followed by their semantic processing. However, past demonstrations relied on tasks in which the meaning of the deviant sounds overlapped with features of the primary task. Here we examine whether such processing is observed when no such overlap is present but sounds carry some relevance to the participants' biological need to eat when hungry. We report the results of an experiment in which hungry and satiated participants partook in a cross-modal oddball task in which they categorized visual digits (odd/even) while ignoring task-irrelevant sounds. On most trials the irrelevant sound was a sinewave tone (standard sound). On the remaining trials, deviant sounds consisted of spoken words related to food (food deviants) or control words (control deviants). Questionnaire data confirmed state (but not trait) differences between the two groups with respect to food craving, as well as a greater desire to eat the food corresponding to the food-related words in the hungry relative to the satiated participants. The results of the oddball task revealed that food deviants produced greater distraction (longer response times) than control deviants in hungry participants while the reverse effect was observed in satiated participants. This effect was observed in the first block of trials but disappeared thereafter, reflecting semantic saturation. Our results suggest that (1) the semantic content of deviant sounds is involuntarily processed even when sharing no feature with the primary task; and that (2) distraction by deviant sounds can be modulated by the participants' biological needs.
Lee, Jeongmi; Geng, Joy J
2017-02-01
The efficiency of finding an object in a crowded environment depends largely on the similarity of nontargets to the search target. Models of attention theorize that the similarity is determined by representations stored within an "attentional template" held in working memory. However, the degree to which the contents of the attentional template are individually unique and where those idiosyncratic representations are encoded in the brain are unknown. We investigated this problem using representational similarity analysis of human fMRI data to measure the common and idiosyncratic representations of famous face morphs during an identity categorization task; data from the categorization task were then used to predict performance on a separate identity search task. We hypothesized that the idiosyncratic categorical representations of the continuous face morphs would predict their distractibility when searching for each target identity. The results showed that patterns of activation in the lateral prefrontal cortex (LPFC) as well as in face-selective areas in the ventral temporal cortex were highly correlated with the patterns of behavioral categorization of face morphs and search performance that were common across subjects. However, the individually unique components of the categorization behavior were reliably decoded only in right LPFC. Moreover, the neural pattern in right LPFC successfully predicted idiosyncratic variability in search performance, such that reaction times were longer when distractors had a higher probability of being categorized as the target identity. These results suggest that the prefrontal cortex encodes individually unique components of categorical representations that are also present in attentional templates for target search. Everyone's perception of the world is uniquely shaped by personal experiences and preferences. 
Using functional MRI, we show that individual differences in the categorization of face morphs between two identities could be decoded from the prefrontal cortex and the ventral temporal cortex. Moreover, the individually unique representations in prefrontal cortex predicted idiosyncratic variability in attentional performance when looking for each identity in the "crowd" of another morphed face in a separate search task. Our results reveal that the representation of task-related information in prefrontal cortex is individually unique and preserved across categorization and search performance. This demonstrates the possibility of predicting individual behaviors across tasks with patterns of brain activity. Copyright © 2017 the authors.
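The representational similarity analysis described above can be sketched in a few lines: build a representational dissimilarity matrix (RDM) for each data source and correlate their condition-pair entries. The patterns below are invented toy data, not the study's stimuli or measurements.

```python
# Toy sketch of representational similarity analysis (RSA): one RDM per
# data source (e.g., neural patterns and behavioral responses), then a
# correlation between the RDMs' upper triangles. All numbers are made up.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def rdm(patterns):
    """Dissimilarity (1 - r) between every pair of condition patterns."""
    n = len(patterns)
    return [[1 - pearson(patterns[i], patterns[j]) for j in range(n)]
            for i in range(n)]

def upper_triangle(m):
    return [m[i][j] for i in range(len(m)) for j in range(i + 1, len(m))]

# Hypothetical patterns for four face-morph conditions: two data sources
# that share the same similarity structure (conditions 1-2 vs. 3-4)
neural = [[0.9, 0.1, 0.2], [0.8, 0.2, 0.3], [0.1, 0.9, 0.7], [0.2, 0.8, 0.6]]
behavior = [[1.0, 0.0], [0.9, 0.1], [0.1, 0.9], [0.0, 1.0]]

# High values indicate that the two sources carry a shared representational geometry
similarity = pearson(upper_triangle(rdm(neural)), upper_triangle(rdm(behavior)))
```

In the study itself the comparison is between multi-voxel fMRI patterns and behavioral categorization; this sketch only shows the shape of the computation.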
Color Categories: Evidence for the Cultural Relativity Hypothesis
ERIC Educational Resources Information Center
Roberson, D.; Davidoff, J.; Davies, I.R.L.; Shapiro, L.R.
2005-01-01
The question of whether language affects our categorization of perceptual continua is of particular interest for the domain of color where constraints on categorization have been proposed both within the visual system and in the visual environment. Recent research (Roberson, Davies, & Davidoff, 2000; Roberson et al., in press) found…
The picture superiority effect in categorization: visual or semantic?
Job, R; Rumiati, R; Lotto, L
1992-09-01
Two experiments are reported whose aim was to replicate and generalize the results presented by Snodgrass and McCullough (1986) on the effect of visual similarity in the categorization process. For pictures, Snodgrass and McCullough's results were replicated: subjects took longer to discriminate elements from 2 categories when they were visually similar than when they were visually dissimilar. However, unlike in Snodgrass and McCullough's study, an analogous increase was also observed for word stimuli. The pattern of results obtained here can be explained most parsimoniously with reference to the effect of semantic similarity, or semantic and visual relatedness, rather than to visual similarity alone.
Following the time course of face gender and expression processing: a task-dependent ERP study.
Valdés-Conroy, Berenice; Aguado, Luis; Fernández-Cahill, María; Romero-Ferreiro, Verónica; Diéguez-Risco, Teresa
2014-05-01
The effects of task demands and the interaction between gender and expression in face perception were studied using event-related potentials (ERPs). Participants performed three different tasks with male and female faces that were emotionally inexpressive or that showed happy or angry expressions. In two of the tasks (gender and expression categorization) facial properties were task-relevant while in a third task (symbol discrimination) facial information was irrelevant. Effects of expression were observed on the visual P100 component under all task conditions, suggesting the operation of an automatic process that is not influenced by task demands. The earliest interaction between expression and gender was observed later in the face-sensitive N170 component. This component showed differential modulations by specific combinations of gender and expression (e.g., angry male vs. angry female faces). Main effects of expression and task were observed in a later occipito-temporal component peaking around 230 ms post-stimulus onset (EPN or early posterior negativity). Less positive amplitudes in the presence of angry faces and during performance of the gender and expression tasks were observed. Finally, task demands also modulated a positive component peaking around 400 ms (LPC, or late positive complex) that showed enhanced amplitude for the gender task. The pattern of results obtained here adds new evidence about the sequence of operations involved in face processing and the interaction of facial properties (gender and expression) in response to different task demands. Copyright © 2014 Elsevier B.V. All rights reserved.
Similar Task Features Shape Judgment and Categorization Processes
ERIC Educational Resources Information Center
Hoffmann, Janina A.; von Helversen, Bettina; Rieskamp, Jörg
2016-01-01
The distinction between similarity-based and rule-based strategies has instigated a large body of research in categorization and judgment. Within both domains, the task characteristics guiding strategy shifts are increasingly well documented. Across domains, past research has observed shifts from rule-based strategies in judgment to…
Smith, Travis R; Beran, Michael J
2018-05-31
The present experiments extended to monkeys a previously used abstract categorization procedure (Castro & Wasserman, 2016) where pigeons had categorized arrays of clipart icons based upon two task rules: the number of clipart objects in the array or the variability of objects in the array. Experiment 1 replicated Castro and Wasserman by using capuchin monkeys and rhesus monkeys and reported that monkeys' performances were similar to pigeons' in terms of acquisition, pattern of errors, and the absence of switch costs. Furthermore, monkeys' insensitivity to the added irrelevant information suggested that an associative (rather than rule-based) categorization mechanism was dominant. Experiment 2 was conducted to include categorization cue reversals to determine (a) whether the monkeys would quickly adapt to the reversals and inhibit interference from a prereversal task rule (consistent with a rule-based mechanism) and (b) whether the latency to make a response prior to a correct or incorrect outcome was informative about the presence of a cognitive mechanism. The cue reassignment produced profound and long-lasting performance deficits, and a long reacquisition phase suggested the involvement of associative learning processes; however, monkeys also displayed longer latencies to choose prior to correct responses on challenging trials, suggesting the involvement of nonassociative processes. Together these performances suggest a mix of associative and cognitive-control processes governing monkey categorization judgments. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Meuwese, Julia D I; van Loon, Anouk M; Lamme, Victor A F; Fahrenfort, Johannes J
2014-05-01
Perceptual decisions seem to be made automatically and almost instantly. Constructing a unitary subjective conscious experience takes more time. For example, when trying to avoid a collision with a car on a foggy road you brake or steer away in a reflex, before realizing you were in a near accident. This subjective aspect of object recognition has been given little attention. We used metacognition (assessed with confidence ratings) to measure subjective experience during object detection and object categorization for degraded and masked objects, while objective performance was matched. Metacognition was equal for degraded and masked objects, but categorization led to higher metacognition than did detection. This effect turned out to be driven by a difference in metacognition for correct rejection trials, which seemed to be caused by an asymmetry of the distractor stimulus: It does not contain object-related information in the detection task, whereas it does contain such information in the categorization task. Strikingly, this asymmetry selectively impacted metacognitive ability when objective performance was matched. This finding reveals a fundamental difference in how humans reflect versus act on information: When matching the amount of information required to perform two tasks at some objective level of accuracy (acting), metacognitive ability (reflecting) is still better in tasks that rely on positive evidence (categorization) than in tasks that rely more strongly on an absence of evidence (detection).
Interference among the Processing of Facial Emotion, Face Race, and Face Gender
Li, Yongna; Tse, Chi-Shing
2016-01-01
People can process multiple dimensions of facial properties simultaneously. Facial processing models are based on the processing of facial properties. The current study examined the processing of facial emotion, face race, and face gender using categorization tasks. The same set of Chinese, White and Black faces, each posing a neutral, happy or angry expression, was used in three experiments. Facial emotion interacted with face race in all the tasks. The interaction of face race and face gender was found in the race and gender categorization tasks, whereas the interaction of facial emotion and face gender was significant in the emotion and gender categorization tasks. These results provided evidence for a symmetric interaction between variant facial properties (emotion) and invariant facial properties (race and gender). PMID:27840621
Fine-grained visual marine vessel classification for coastal surveillance and defense applications
NASA Astrophysics Data System (ADS)
Solmaz, Berkan; Gundogdu, Erhan; Karaman, Kaan; Yücesoy, Veysel; Koç, Aykut
2017-10-01
The need for automated visual content analysis has increased substantially due to the large number of images captured by surveillance cameras. With a focus on the development of practical methods for extracting effective visual data representations, deep neural network based representations have received great attention due to their success in visual categorization of generic images. Fine-grained image categorization, a closely related yet more challenging problem owing to the high visual similarity within subgroups, has seen diverse applications such as classifying images of vehicles, birds, food, and plants. Here, we propose the use of deep neural network based representations for categorizing and identifying marine vessels for defense and security applications. First, we gather a large number of marine vessel images from online sources and group them into four coarse categories: naval, civil, commercial, and service vessels. Next, we subgroup naval vessels into fine categories such as corvettes, frigates, and submarines. For distinguishing images, we extract state-of-the-art deep visual representations and train support vector machines. Furthermore, we fine-tune the deep representations for marine vessel images. Our experiments address two scenarios: classification and verification of naval marine vessels. The classification experiment covers coarse categorization as well as learning models of the fine categories. The verification experiment involves identifying specific naval vessels by determining, with the help of the learned deep representations, whether a pair of images belongs to the same vessel. Given the promising performance obtained, we believe these capabilities would be essential components of future coastal and on-board surveillance systems.
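The pipeline this abstract describes, deep features fed to a support vector machine, can be illustrated with a minimal linear SVM trained by subgradient descent on the hinge loss (in the spirit of Pegasos). The 2-D feature vectors and labels below are invented stand-ins for deep-network representations of vessel images, not the authors' data or method details.

```python
# Minimal linear SVM via hinge-loss subgradient descent. The short feature
# vectors stand in for deep representations; all numbers are illustrative.

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Labels y must be -1 or +1. Returns (weights, bias)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            score = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * score < 1:   # margin violated: hinge-loss subgradient step
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:                # only the L2 regularizer pulls on w
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

# Toy "frigate" (+1) vs. "submarine" (-1) feature vectors
X = [[2.0, 1.0], [1.5, 1.2], [0.2, 3.0], [0.1, 2.5]]
y = [1, 1, -1, -1]
w, b = train_linear_svm(X, y)
preds = [1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else -1 for xi in X]
```

In practice one would use an off-the-shelf SVM library on high-dimensional deep features; this sketch only shows the underlying optimization.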
Rule-Based Category Learning in Children: The Role of Age and Executive Functioning
Rabi, Rahel; Minda, John Paul
2014-01-01
Rule-based category learning was examined in 4–11 year-olds and adults. Participants were asked to learn a set of novel perceptual categories in a classification learning task. Categorization performance improved with age, with younger children showing the strongest rule-based deficit relative to older children and adults. Model-based analyses provided insight regarding the type of strategy being used to solve the categorization task, demonstrating that the use of the task appropriate strategy increased with age. When children and adults who identified the correct categorization rule were compared, the performance deficit was no longer evident. Executive functions were also measured. While both working memory and inhibitory control were related to rule-based categorization and improved with age, working memory specifically was found to marginally mediate the age-related improvements in categorization. When analyses focused only on the sample of children, results showed that working memory ability and inhibitory control were associated with categorization performance and strategy use. The current findings track changes in categorization performance across childhood, demonstrating at which points performance begins to mature and resemble that of adults. Additionally, findings highlight the potential role that working memory and inhibitory control may play in rule-based category learning. PMID:24489658
ERIC Educational Resources Information Center
Schmitz, Melanie; Wentura, Dirk
2012-01-01
The evaluative priming effect (i.e., faster target responses following evaluatively congruent compared with evaluatively incongruent primes) in nonevaluative priming tasks (such as naming or semantic categorization tasks) is considered important for the question of how evaluative connotations are represented in memory. However, the empirical…
Near-optimal integration of facial form and motion.
Dobs, Katharina; Ma, Wei Ji; Reddy, Leila
2017-09-08
Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been fairly well shown that humans use an optimal strategy when integrating low-level cues proportional to their relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
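The normative integration rule this abstract alludes to, weighting each cue in proportion to its reliability (inverse variance), can be written down directly. The cue estimates and variances below are invented for illustration; they are not the study's stimuli.

```python
# Reliability-weighted cue combination: each cue's weight is its inverse
# variance normalized over all cues; the combined variance is the inverse
# of the summed reliabilities. All numbers here are hypothetical.

def integrate(estimates, variances):
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined = sum(wt * est for wt, est in zip(weights, estimates))
    combined_variance = 1.0 / total
    return combined, combined_variance, weights

# Hypothetical form cue (sharper, variance 1.0) vs. motion cue (noisier, 4.0)
est, var, w = integrate([0.2, 0.8], [1.0, 4.0])
```

The combined estimate sits closer to the more reliable form cue, and its variance is lower than either single cue's variance, which is the signature of optimal integration.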
Memory, reasoning, and categorization: parallels and common mechanisms
Hayes, Brett K.; Heit, Evan; Rotello, Caren M.
2014-01-01
Traditionally, memory, reasoning, and categorization have been treated as separate components of human cognition. We challenge this distinction, arguing that there is broad scope for crossover between the methods and theories developed for each task. The links between memory and reasoning are illustrated in a review of two lines of research. The first takes theoretical ideas (two-process accounts) and methodological tools (signal detection analysis, receiver operating characteristic curves) from memory research and applies them to important issues in reasoning research: relations between induction and deduction, and the belief bias effect. The second line of research introduces a task in which subjects make either memory or reasoning judgments for the same set of stimuli. Other than broader generalization for reasoning than memory, the results were similar for the two tasks, across a variety of experimental stimuli and manipulations. It was possible to simultaneously explain performance on both tasks within a single cognitive architecture, based on exemplar-based comparisons of similarity. The final sections explore evidence for empirical and processing links between inductive reasoning and categorization and between categorization and recognition. An important implication is that progress in all three of these fields will be expedited by further investigation of the many commonalities between these tasks. PMID:24987380
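As a concrete instance of the signal detection tools this abstract mentions, sensitivity (d') and response bias (c) can be computed from hit and false-alarm rates via the inverse normal CDF. The rates below are invented for illustration.

```python
# Equal-variance signal detection measures from hit and false-alarm rates.
# The example rates are hypothetical, not data from the reviewed studies.
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)       # sensitivity

def criterion(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    return -0.5 * (z(hit_rate) + z(fa_rate))  # response bias

d = dprime(0.85, 0.20)   # larger d' = better discrimination
c = criterion(0.85, 0.20)  # c near 0 = unbiased responding
```

Sweeping the criterion while holding d' fixed traces out the receiver operating characteristic curves the review refers to.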
Categorically Defined Targets Trigger Spatiotemporal Visual Attention
ERIC Educational Resources Information Center
Wyble, Brad; Bowman, Howard; Potter, Mary C.
2009-01-01
Transient attention to a visually salient cue enhances processing of a subsequent target in the same spatial location between 50 and 150 ms after cue onset (K. Nakayama & M. Mackeben, 1989). Do stimuli from a categorically defined target set, such as letters or digits, also generate transient attention? Participants reported digit targets among…
Relationship between listeners' nonnative speech recognition and categorization abilities
Atagi, Eriko; Bent, Tessa
2015-01-01
Enhancement of the perceptual encoding of talker characteristics (indexical information) in speech can facilitate listeners' recognition of linguistic content. The present study explored this indexical-linguistic relationship in nonnative speech processing by examining listeners' performance on two tasks: nonnative accent categorization and nonnative speech-in-noise recognition. Results indicated substantial variability across listeners in their performance on both the accent categorization and nonnative speech recognition tasks. Moreover, listeners' accent categorization performance correlated with their nonnative speech-in-noise recognition performance. These results suggest that having more robust indexical representations for nonnative accents may allow listeners to more accurately recognize the linguistic content of nonnative speech. PMID:25618098
Winn, Matthew B.; Won, Jong Ho; Moon, Il Joon
2016-01-01
Objectives This study was conducted to measure auditory perception by cochlear implant users in the spectral and temporal domains, using tests of either categorization (using speech-based cues) or discrimination (using conventional psychoacoustic tests). We hypothesized that traditional nonlinguistic tests assessing spectral and temporal auditory resolution would correspond to speech-based measures assessing specific aspects of phonetic categorization assumed to depend on spectral and temporal auditory resolution. We further hypothesized that speech-based categorization performance would ultimately be a superior predictor of speech recognition performance, because of the fundamental nature of speech recognition as categorization. Design Nineteen CI listeners and 10 listeners with normal hearing (NH) participated in a suite of tasks that included spectral ripple discrimination (SRD), temporal modulation detection (TMD), and syllable categorization, which was split into a spectral-cue-based task (targeting the /ba/-/da/ contrast) and a timing-cue-based task (targeting the /b/-/p/ and /d/-/t/ contrasts). Speech sounds were manipulated in order to contain specific spectral or temporal modulations (formant transitions or voice onset time, respectively) that could be categorized. Categorization responses were quantified using logistic regression in order to assess perceptual sensitivity to acoustic phonetic cues. Word recognition testing was also conducted for CI listeners. Results CI users were generally less successful at utilizing both spectral and temporal cues for categorization compared to listeners with normal hearing. For the CI listener group, SRD was significantly correlated with the categorization of formant transitions; both were correlated with better word recognition. TMD using 100 Hz and 10 Hz modulated noise was not correlated with the CI subjects’ categorization of VOT, nor with word recognition. 
Word recognition was correlated more closely with categorization of the controlled speech cues than with performance on the psychophysical discrimination tasks. Conclusions When evaluating people with cochlear implants, controlled speech-based stimuli are feasible to use in tests of auditory cue categorization, to complement traditional measures of auditory discrimination. Stimuli based on specific speech cues correspond to counterpart non-linguistic measures of discrimination, but potentially show better correspondence with speech perception more generally. The ubiquity of the spectral (formant transition) and temporal (VOT) stimulus dimensions across languages highlights the potential to use this testing approach even in cases where English is not the native language. PMID:27438871
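The logistic regression used above to quantify categorization responses amounts to fitting a psychometric function whose slope indexes perceptual sensitivity to the cue. The VOT steps and response proportions below are invented, not the study's data.

```python
# Fit p(/pa/ response) = 1 / (1 + exp(-(a + b*x))) by gradient ascent on
# the cross-entropy log-likelihood. Steeper slope b = sharper categorization.
import math

def fit_logistic(x, y, lr=0.05, epochs=2000):
    a, b = 0.0, 0.0
    for _ in range(epochs):
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(a + b * xi)))
            a += lr * (yi - p)        # intercept update
            b += lr * (yi - p) * xi   # slope update
    return a, b

# Hypothetical VOT continuum (centered steps) and proportion of /pa/ responses
steps = [-3, -2, -1, 0, 1, 2, 3]
prop_pa = [0.05, 0.10, 0.20, 0.50, 0.80, 0.90, 0.95]
a, b = fit_logistic(steps, prop_pa)
```

A listener who uses the temporal cue well yields a large fitted slope b; a flat response function (small b) indicates poor access to the cue, which is the pattern the study reports for many cochlear implant users.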
Testing the Construct Validity of a Virtual Reality Hip Arthroscopy Simulator.
Khanduja, Vikas; Lawrence, John E; Audenaert, Emmanuel
2017-03-01
To test the construct validity of the hip diagnostics module of a virtual reality hip arthroscopy simulator. Nineteen orthopaedic surgeons performed a simulated arthroscopic examination of a healthy hip joint using a 70° arthroscope in the supine position. Surgeons were categorized as either expert (those who had performed 250 hip arthroscopies or more) or novice (those who had performed fewer than this). Twenty-one specific targets were visualized within the central and peripheral compartments; 9 via the anterior portal, 9 via the anterolateral portal, and 3 via the posterolateral portal. This was immediately followed by a task testing basic probe examination of the joint in which a series of 8 targets were probed via the anterolateral portal. During the tasks, the surgeon's performance was evaluated by the simulator using a set of predefined metrics including task duration, number of soft tissue and bone collisions, and distance travelled by instruments. No repeat attempts at the tasks were permitted. Construct validity was then evaluated by comparing novice and expert group performance metrics over the 2 tasks using the Mann-Whitney test, with a P value of less than .05 considered significant. On the visualization task, the expert group outperformed the novice group on time taken (P = .0003), number of collisions with soft tissue (P = .001), number of collisions with bone (P = .002), and distance travelled by the arthroscope (P = .02). On the probe examination, the 2 groups differed only in the time taken to complete the task (P = .025) with no significant difference in other metrics. Increased experience in hip arthroscopy was reflected by significantly better performance on the virtual reality simulator across 2 tasks, supporting its construct validity. This study validates a virtual reality hip arthroscopy simulator and supports its potential for developing basic arthroscopic skills. Level III. Copyright © 2016 Arthroscopy Association of North America. 
All rights reserved.
The Effects of Phonetic Similarity and List Length on Children's Sound Categorization Performance.
ERIC Educational Resources Information Center
Snowling, Margaret J.; And Others
1994-01-01
Examined the phonological analysis and verbal working memory components of the sound categorization task and their relationships to reading skill differences. Children were tested on sound categorization by having them identify odd words in sequences. Sound categorization performance was sensitive to individual differences in speech perception…
When the rhythm disappears and the mind keeps dancing: sustained effects of attentional entrainment.
Trapp, Sabrina; Havlicek, Ondrej; Schirmer, Annett; Keller, Peter E
2018-01-17
Research has demonstrated that the human cognitive system allocates attention most efficiently to a stimulus that occurs in synchrony with an established rhythmic background. However, our environment is dynamic and constantly changing. What happens when rhythms to which our cognitive system adapted disappear? We addressed this question using a visual categorization task comprising emotional and neutral faces. The task was split into three blocks of which the first and the last were completed in silence. The second block was accompanied by an acoustic background rhythm that, for one group of participants, was synchronous with face presentations, and for another group was asynchronous. Irrespective of group, performance improved with background stimulation. Importantly, improved performance extended into the third silent block for the synchronous, but not for the asynchronous group. These data suggest that attentional entrainment resulting from rhythmic environmental regularities disintegrates only gradually after the regularities disappear.
The effect of orthographic and emotional neighbourhood in a colour categorization task.
Camblats, Anna-Malika; Mathey, Stéphanie
2016-02-01
This study investigated whether and how the strength of reading interference in a colour categorization task can be influenced by lexical competition and the emotional characteristics of words not directly presented. Previous findings showed inhibitory effects of high-frequency orthographic and emotional neighbourhood in the lexical decision task. Here, we examined the effect of orthographic neighbour frequency according to the emotional valence of the higher-frequency neighbour in an emotional orthographic Stroop paradigm. Stimuli were coloured neutral words that had either (1) no orthographic neighbour (e.g. PISTIL [pistil]), (2) one neutral higher-frequency neighbour (e.g. tirade [tirade]/TIRAGE [draw]) or (3) one negative higher-frequency neighbour (e.g. idiome [idiom]/IDIOTE [idiotic]). The results showed that colour categorization times were longer for words with no orthographic neighbour than for words with one neutral neighbour of higher frequency and even longer when the higher-frequency neighbour was neutral rather than negative. Thus, it appears not only that the orthographic neighbourhood of the coloured stimulus words intervenes in a colour categorization task, but also that the emotional content of the neighbour contributes to response times. These findings are discussed in terms of lexical competition between the stimulus word and non-presented orthographic neighbours, which in turn would modify the strength of reading interference on colour categorization times.
Developmental Changes in Infants' Categorization of Anger and Disgust Facial Expressions
ERIC Educational Resources Information Center
Ruba, Ashley L.; Johnson, Kristin M.; Harris, Lasana T.; Wilbourn, Makeba Parramore
2017-01-01
For decades, scholars have examined how children first recognize emotional facial expressions. This research has found that infants younger than 10 months can discriminate negative, within-valence facial expressions in looking-time tasks, yet children older than 24 months struggle to categorize these expressions in labeling and free-sort tasks.…
ERIC Educational Resources Information Center
Gijsel, Martine A. R.; Ormel, Ellen A.; Hermans, Daan; Verhoeven, L.; Bosman, Anna M. T.
2011-01-01
In the present study, the development of semantic categorization and its relationship with reading was investigated across Dutch primary grade students. Three Exemplar-level tasks (Experiment 1) and two Superordinate-level tasks (Experiment 2) with different types of distracters (phonological, semantic and perceptual) were administered to assess…
Top-down and bottom-up modulation of language related areas – An fMRI Study
Noesselt, Tömme; Shah, Nadim Jon; Jäncke, Lutz
2003-01-01
Background One major problem for cognitive neuroscience is to describe the interaction between stimulus and task driven neural modulation. We used fMRI to investigate this interaction in the human brain. Ten male subjects performed a passive listening and a semantic categorization task in a factorial design. In both tasks, words were presented auditorily at three different rates. Results We found: (i) as word presentation rate increased, hemodynamic responses increased bilaterally in the superior temporal gyrus, including Heschl's gyrus (HG), the planum temporale (PT), and the planum polare (PP); (ii) compared to passive listening, semantic categorization produced increased bilateral activations in the ventral inferior frontal gyrus (IFG) and middle frontal gyrus (MFG); (iii) hemodynamic responses in the left dorsal IFG increased linearly with increasing word presentation rate only during the semantic categorization task; (iv) in the semantic task, hemodynamic responses decreased bilaterally in the insula with increasing word presentation rates; and (v) in parts of HG, hemodynamic responses increased more strongly with increasing word presentation rates during passive listening than during semantic categorization. Conclusion The observed "rate effect" in primary and secondary auditory cortex is in accord with previous findings and suggests that these areas are driven by low-level stimulus attributes. The bilateral effect of semantic categorization is also in accord with previous studies and emphasizes the role of these areas in semantic operations. The interaction between semantic categorization and word presentation rate in the left IFG indicates that this area has linguistic functions not present in the right IFG. Finally, we speculate that the interaction between semantic categorization and word presentation rate in HG and the insula might reflect an inhibition of the transfer of unnecessary information from the temporal to frontal regions of the brain. PMID:12828789
Effects of Talker Variability on Perceptual Learning of Dialects
Clopper, Cynthia G.; Pisoni, David B.
2012-01-01
Two groups of listeners learned to categorize a set of unfamiliar talkers by dialect region using sentences selected from the TIMIT speech corpus. One group learned to categorize a single talker from each of six American English dialect regions. A second group learned to categorize three talkers from each dialect region. Following training, both groups were asked to categorize new talkers using the same categorization task. While the single-talker group was more accurate during initial training and test phases when familiar talkers produced the sentences, the three-talker group performed better on the generalization task with unfamiliar talkers. This cross-over effect in dialect categorization suggests that while talker variation during initial perceptual learning leads to more difficult learning of specific exemplars, exposure to intertalker variability facilitates robust perceptual learning and promotes better categorization performance of unfamiliar talkers. The results suggest that listeners encode and use acoustic-phonetic variability in speech to reliably perceive the dialect of unfamiliar talkers. PMID:15697151
Invariant recognition drives neural representations of action sequences
Poggio, Tomaso
2017-01-01
Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864
Learning new color names produces rapid increase in gray matter in the intact adult human cortex
Kwok, Veronica; Niu, Zhendong; Kay, Paul; Zhou, Ke; Mo, Lei; Jin, Zhen; So, Kwok-Fai; Tan, Li Hai
2011-01-01
The human brain has been shown to exhibit changes in the volume and density of gray matter as a result of training over periods of several weeks or longer. We show that these changes can be induced much faster by using a training method that is claimed to simulate the rapid learning of word meanings by children. Using whole-brain magnetic resonance imaging (MRI) we show that learning newly defined and named subcategories of the universal categories green and blue in a period of 2 h increases the volume of gray matter in V2/3 of the left visual cortex, a region known to mediate color vision. This pattern of findings demonstrates that the anatomical structure of the adult human brain can change very quickly, specifically during the acquisition of new, named categories. Also, prior behavioral and neuroimaging research has shown that differences between languages in the boundaries of named color categories influence the categorical perception of color, as assessed by judgments of relative similarity, by response time in alternative forced-choice tasks, and by visual search. Moreover, further behavioral studies (visual search) and brain imaging studies have suggested strongly that the categorical effect of language on color processing is left-lateralized, i.e., mediated by activity in the left cerebral hemisphere in adults (hence “lateralized Whorfian” effects). The present results appear to provide a structural basis in the brain for the behavioral and neurophysiologically observed indices of these Whorfian effects on color processing. PMID:21464316
Games people play: How video games improve probabilistic learning.
Schenk, Sabrina; Lech, Robert K; Suchan, Boris
2017-09-29
Recent research suggests that video game playing is associated with many cognitive benefits. However, little is known about the neural mechanisms mediating such effects, especially with regard to probabilistic categorization learning, which is a widely unexplored area in gaming research. Therefore, the present study aimed to investigate the neural correlates of probabilistic classification learning in video gamers in comparison to non-gamers. Subjects were scanned in a 3T magnetic resonance imaging (MRI) scanner while performing a modified version of the weather prediction task. Behavioral data yielded evidence for better categorization performance of video gamers, particularly under conditions characterized by stronger uncertainty. Furthermore, a post-experimental questionnaire showed that video gamers had acquired higher declarative knowledge about the card combinations and the related weather outcomes. Functional imaging data revealed for video gamers stronger activation clusters in the hippocampus, the precuneus, the cingulate gyrus and the middle temporal gyrus as well as in occipital visual areas and in areas related to attentional processes. All these areas are connected with each other and represent critical nodes for semantic memory, visual imagery and cognitive control. Apart from this, and in line with previous studies, both groups showed activation in brain areas that are related to attention and executive functions as well as in the basal ganglia and in memory-associated regions of the medial temporal lobe. These results suggest that playing video games might enhance the usage of declarative knowledge as well as hippocampal involvement, and enhance overall learning performance during probabilistic learning. Copyright © 2017 Elsevier B.V. All rights reserved.
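The weather prediction task described above pairs card combinations with probabilistic outcomes, so that no single trial is fully diagnostic. A minimal sketch of this kind of probabilistic categorization learning is given below; the cue strengths, the naive evidence-averaging rule, and the simple tallying learner are all illustrative assumptions, not the study's actual task parameters or the participants' strategy.

```python
import random

random.seed(1)

# Illustrative cue->outcome strengths (NOT the study's parameters):
# each of four cards probabilistically signals "sun" when present.
CUE_STRENGTH = [0.8, 0.6, 0.4, 0.2]

def make_trial():
    """Draw 1-3 cards; combine their strengths naively by averaging."""
    cards = random.sample(range(4), random.randint(1, 3))
    p_sun = sum(CUE_STRENGTH[c] for c in cards) / len(cards)
    outcome = "sun" if random.random() < p_sun else "rain"
    return cards, outcome

class TallyLearner:
    """Tracks card/outcome co-occurrences and votes by averaging them."""
    def __init__(self):
        self.sun = [0] * 4
        self.total = [0] * 4

    def predict(self, cards):
        est = sum(
            self.sun[c] / self.total[c] if self.total[c] else 0.5
            for c in cards
        ) / len(cards)
        return "sun" if est >= 0.5 else "rain"

    def learn(self, cards, outcome):
        for c in cards:
            self.total[c] += 1
            if outcome == "sun":
                self.sun[c] += 1

learner = TallyLearner()
correct = 0
N_TRIALS = 500
for _ in range(N_TRIALS):
    cards, outcome = make_trial()
    if learner.predict(cards) == outcome:
        correct += 1
    learner.learn(cards, outcome)

# Accuracy is capped well below 1.0 by the task's built-in uncertainty,
# which is what makes between-group differences under uncertainty informative.
print(f"accuracy over {N_TRIALS} trials: {correct / N_TRIALS:.2f}")
```

Even an ideal learner cannot exceed the accuracy permitted by the cue-outcome noise, which is why performance "under conditions characterized by stronger uncertainty" is the interesting comparison between gamers and non-gamers.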
Nosofsky, Robert M.; Denton, Stephen E.; Zaki, Safa R.; Murphy-Knudsen, Anne F.; Unverzagt, Frederick W.
2013-01-01
Studies of incidental category learning support the hypothesis of an implicit prototype-extraction system which is distinct from explicit memory (Smith, 2008). In those studies, patients with explicit-memory impairments due to damage to the medial-temporal lobe performed normally in implicit categorization tasks (Bozoki, Grossman, & Smith, 2006; Knowlton & Squire, 1993). However, alternative interpretations are that: i) even people with impairments to a single memory system have sufficient resources to succeed on the particular categorization tasks that have been tested (Nosofsky & Zaki, 1998; Zaki & Nosofsky, 2001); and ii) working memory can be used at time of test to learn the categories (Palmeri & Flanery, 1999). In the present experiments, patients with amnestic mild cognitive impairment or early Alzheimer’s disease were tested in prototype-extraction tasks to examine these possibilities. In a categorization task involving discrete-feature stimuli, the majority of subjects relied on memories for exceedingly few features, even when the task structure strongly encouraged reliance on broad-based prototypes. In a dot-pattern categorization task, even the memory-impaired patients were able to use working memory at time of test to extract the category structure (at least for the stimulus set used in past work). We argue that the results weaken the past case made in favor of a separate system of implicit-prototype extraction. PMID:22746953
Stekelenburg, Jeroen J; Keetels, Mirjam; Vroomen, Jean
2018-05-01
Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect. Here, we examined whether written text, like visual speech, can induce an illusory change in the perception of speech sounds on both the behavioural and neural levels. In a sound categorization task, we found that both text and visual speech changed the identity of speech sounds from an /aba/-/ada/ continuum, but the size of this audiovisual effect was considerably smaller for text than visual speech. To examine at which level in the information processing hierarchy these multisensory interactions occur, we recorded electroencephalography in an audiovisual mismatch negativity (MMN, a component of the event-related potential reflecting preattentive auditory change detection) paradigm in which deviant text or visual speech was used to induce an illusory change in a sequence of ambiguous sounds halfway between /aba/ and /ada/. We found that only deviant visual speech induced an MMN, but not deviant text, which induced a late P3-like positive potential. These results demonstrate that text has much weaker effects on sound processing than visual speech does, possibly because text has different biological roots than visual speech. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Farjardo, Inmaculada; Arfe, Barbara; Benedetti, Patrizia; Altoe, Gianmarco
2008-01-01
Sixty deaf and hearing students were asked to search for goods in a Hypertext Supermarket with either graphical or textual links of high typicality, frequency, and familiarity. Additionally, they performed a picture and word categorization task and two working memory span tasks (spatial and verbal). Results showed that deaf students were faster in…
Lexical Categorization Modalities in Pre-School Children: Influence of Perceptual and Verbal Tasks
ERIC Educational Resources Information Center
Tallandini, Maria Anna; Roia, Anna
2005-01-01
This study investigates how categorical organization functions in pre-school children, focusing on the dichotomy between living and nonliving things. The variables of familiarity, frequency of word use and perceptual complexity were controlled. Sixty children aged between 4 years and 5 years 10 months were investigated. Three tasks were used: a…
Color categories and color appearance
Webster, Michael A.; Kay, Paul
2011-01-01
We examined categorical effects in color appearance in two tasks, which in part differed in the extent to which color naming was explicitly required for the response. In one, we measured the effects of color differences on perceptual grouping for hues that spanned the blue–green boundary, to test whether chromatic differences across the boundary were perceptually exaggerated. This task did not require overt judgments of the perceived colors, and the tendency to group showed only a weak and inconsistent categorical bias. In a second case, we analyzed results from two prior studies of hue scaling of chromatic stimuli (De Valois, De Valois, Switkes, & Mahon, 1997; Malkoc, Kay, & Webster, 2005), to test whether color appearance changed more rapidly around the blue–green boundary. In this task, observers directly judged the perceived color of the stimuli, and these judgments tended to show much stronger categorical effects. The differences between these tasks could arise either because different signals mediate color grouping and color appearance, or because linguistic categories might differentially intrude on the response to color and/or on the perception of color. Our results suggest that the interaction between language and color processing may be highly dependent on the specific task and cognitive demands and strategies of the observer, and also highlight pronounced individual differences in the tendency to exhibit categorical responses. PMID:22176751
Martin, Chris B; Mirsattari, Seyed M; Pruessner, Jens C; Pietrantonio, Sandra; Burneo, Jorge G; Hayman-Abello, Brent; Köhler, Stefan
2012-11-01
In déjà vu, a phenomenological impression of familiarity for the current visual environment is experienced with a sense that it should in fact not feel familiar. The fleeting nature of this phenomenon in daily life, and the difficulty in developing experimental paradigms to elicit it, has hindered progress in understanding déjà vu. Some neurological patients with temporal-lobe epilepsy (TLE) consistently experience déjà vu at the onset of their seizures. An investigation of such patients offers a unique opportunity to shed light on its possible underlying mechanisms. In the present study, we sought to determine whether unilateral TLE patients with déjà vu (TLE+) show a unique pattern of interictal memory deficits that selectively affect familiarity assessment. In Experiment 1, we employed a Remember-Know paradigm for categorized visual scenes and found evidence for impairments that were limited to familiarity-based responses. In Experiment 2, we administered an exclusion task for highly similar categorized visual scenes that placed both recognition processes in opposition. TLE+ patients again displayed recognition impairments, and these impairments spared their ability to engage recollective processes so as to counteract familiarity. The selective deficits we observed in TLE+ patients contrasted with the broader pattern of recognition-memory impairments that was present in a control group of unilateral patients without déjà vu (TLE-). MRI volumetry revealed that ipsilateral medial temporal structures were less broadly affected in TLE+ than in TLE- patients, with a trend for more focal volume reductions in the rhinal cortices of the TLE+ group. The current findings establish a first empirical link between déjà vu in TLE and processes of familiarity assessment, as defined and measured in current cognitive models. They also reveal a pattern of selectivity in recognition impairments that is rarely observed and, thus, of significant theoretical interest to the memory literature at large. Copyright © 2012 Elsevier Ltd. All rights reserved.
When spiders appear suddenly: spider-phobic patients are distracted by task-irrelevant spiders.
Gerdes, Antje B M; Alpers, Georg W; Pauli, Paul
2008-02-01
Fear is thought to facilitate the detection of threatening stimuli. Few studies have examined the effects of task-irrelevant phobic cues in search tasks that do not involve semantic categorization. In a combined reaction time and eye-tracking experiment we investigated whether peripheral visual cues capture initial attention and distract from the execution of goal-directed eye movements. Twenty-one spider-phobic patients and 21 control participants were instructed to search for a color singleton while ignoring task-irrelevant abrupt-onset distractors which contained either a small picture of a spider (phobic), a flower (non-phobic, but similar to spiders in shape), a mushroom (non-phobic, and not similar to spiders in shape), or no picture. As expected, patients' reaction times were longer on trials with spider distractors. However, eye movements revealed that this was not due to attentional capture by spider distractors; patients more often fixated on all distractors with pictures, but their reaction times were delayed by longer fixation durations on spider distractors. These data do not support automatic capture of attention by phobic cues but suggest that phobic patients fail to disengage attention from spiders.
Positive emotion impedes emotional but not cognitive conflict processing.
Zinchenko, Artyom; Obermeier, Christian; Kanske, Philipp; Schröger, Erich; Kotz, Sonja A
2017-06-01
Cognitive control enables successful goal-directed behavior by resolving a conflict between opposing action tendencies, while emotional control arises as a consequence of emotional conflict processing such as in irony. While negative emotion facilitates both cognitive and emotional conflict processing, it is unclear how emotional conflict processing is affected by positive emotion (e.g., humor). In 2 EEG experiments, we investigated the role of positive audiovisual target stimuli in cognitive and emotional conflict processing. Participants categorized either spoken vowels (cognitive task) or their emotional valence (emotional task) and ignored the visual stimulus dimension. Behaviorally, a positive target showed no influence on cognitive conflict processing, but impeded emotional conflict processing. In the emotional task, response time conflict costs were higher for positive than for neutral targets. In the EEG, we observed an interaction of emotion by congruence in the P200 and N200 ERP components in emotional but not in cognitive conflict processing. In the emotional conflict task, the P200 and N200 conflict effect was larger for emotional than neutral targets. Thus, our results show that emotion affects conflict processing differently as a function of conflict type and emotional valence. This suggests that there are conflict- and valence-specific mechanisms modulating executive control.
Eye tracking measures of uncertainty during perceptual decision making.
Brunyé, Tad T; Gardony, Aaron L
2017-10-01
Perceptual decision making involves gathering and interpreting sensory information to effectively categorize the world and inform behavior. For instance, a radiologist distinguishing the presence versus absence of a tumor, or a luggage screener categorizing objects as threatening or non-threatening. In many cases, sensory information is not sufficient to reliably disambiguate the nature of a stimulus, and resulting decisions are done under conditions of uncertainty. The present study asked whether several oculomotor metrics might prove sensitive to transient states of uncertainty during perceptual decision making. Participants viewed images with varying visual clarity and were asked to categorize them as faces or houses, and rate the certainty of their decisions, while we used eye tracking to monitor fixations, saccades, blinks, and pupil diameter. Results demonstrated that decision certainty influenced several oculomotor variables, including fixation frequency and duration, the frequency, peak velocity, and amplitude of saccades, and phasic pupil diameter. Whereas most measures tended to change linearly along with decision certainty, pupil diameter revealed more nuanced and dynamic information about the time course of perceptual decision making. Together, results demonstrate robust alterations in eye movement behavior as a function of decision certainty and attention demands, and suggest that monitoring oculomotor variables during applied task performance may prove valuable for identifying and remediating transient states of uncertainty. Published by Elsevier B.V.
Monkeys show recognition without priming in a classification task.
Basile, Benjamin M; Hampton, Robert R
2013-02-01
Humans show visual perceptual priming by identifying degraded images faster and more accurately if they have seen the original images, while simultaneously failing to recognize the same images. Such priming is commonly thought, with little evidence, to be widely distributed phylogenetically. Following Brodbeck (1997), we trained rhesus monkeys (Macaca mulatta) to categorize photographs according to content (e.g., birds, fish, flowers, people). In probe trials, we tested whether monkeys were faster or more accurate at categorizing degraded versions of previously seen images (primed) than degraded versions of novel images (unprimed). Monkeys categorized reliably, but showed no benefit from having previously seen the images. This finding was robust across manipulations of image quality (color, grayscale, line drawings), type of image degradation (occlusion, blurring), levels of processing, and number of repetitions of the prime. By contrast, in probe matching-to-sample trials, monkeys recognized the primes, demonstrating that they remembered the primes and could discriminate them from other images in the same category under the conditions used to test for priming. Two experiments that replicated Brodbeck's (1997) procedures also produced no evidence of priming. This inability to find priming in monkeys under perceptual conditions sufficient for recognition presents a puzzle. Copyright © 2012 Elsevier B.V. All rights reserved.
Goodhew, Stephanie C; Greenwood, John A; Edwards, Mark
2016-05-01
The visual system is constantly bombarded with dynamic input. In this context, the creation of enduring object representations presents a particular challenge. We used object-substitution masking (OSM) as a tool to probe these processes. In particular, we examined the effect of target-like stimulus repetitions on OSM. In visual crowding, the presentation of a physically identical stimulus to the target reduces crowding and improves target perception, whereas in spatial repetition blindness, the presentation of a stimulus that belongs to the same category (type) as the target impairs perception. Across two experiments, we found an interaction between spatial repetition blindness and OSM, such that repeating a same-type stimulus as the target increased masking magnitude relative to presentation of a different-type stimulus. These results are discussed in the context of the formation of object files. Moreover, the fact that the inducer only had to belong to the same "type" as the target in order to exacerbate masking, without necessarily being physically identical to the target, has important implications for our understanding of OSM per se. That is, our results show the target is processed to a categorical level in OSM despite effective masking and, strikingly, demonstrate that this category-level content directly influences whether or not the target is perceived, not just performance on another task (as in priming).
Graded effects of regularity in language revealed by N400 indices of morphological priming.
Kielar, Aneta; Joanisse, Marc F
2010-07-01
Differential electrophysiological effects for regular and irregular linguistic forms have been used to support the theory that grammatical rules are encoded using a dedicated cognitive mechanism. The alternative hypothesis is that language systematicities are encoded probabilistically in a way that does not categorically distinguish rule-like and irregular forms. In the present study, this matter was investigated more closely by focusing specifically on whether the regular-irregular distinction in English past tenses is categorical or graded. We compared the ERP priming effects of regulars (baked-bake), vowel-change irregulars (sang-sing), and "suffixed" irregulars that display a partial regularity (suffixed irregular verbs, e.g., slept-sleep), as well as forms that are related strictly along formal or semantic dimensions. Participants performed a visual lexical decision task with either visual (Experiment 1) or auditory primes (Experiment 2). Stronger N400 priming effects were observed for regular than for vowel-change irregular verbs, whereas suffixed irregulars tended to group with regular verbs. Subsequent analyses decomposed early versus late-going N400 priming, and suggested that differences among forms can be attributed to the orthographic similarity of prime and target. Effects of morphological relatedness were observed in the later-going time period; however, we failed to observe true regular-irregular dissociations in either experiment. The results indicate that morphological effects emerge from the interaction of orthographic, phonological, and semantic overlap between words.
Representation in dynamical agents.
Ward, Ronnie; Ward, Robert
2009-04-01
This paper extends experiments by Beer [Beer, R. D. (1996). Toward the evolution of dynamical neural networks for minimally cognitive behavior. In P. Maes, M. Mataric, J. Meyer, J. Pollack, & S. Wilson (Eds.), From animals to animats 4: Proceedings of the fourth international conference on simulation of adaptive behavior (pp. 421-429). MIT Press; Beer, R. D. (2003). The dynamics of active categorical perception in an evolved model agent (with commentary and response). Adaptive Behavior, 11 (4), 209-243] with an evolved, dynamical agent to further explore the question of representation in cognitive systems. Beer's environmentally-situated visual agent was controlled by a continuous-time recurrent neural network, and evolved to perform a categorical perception task, discriminating circles from diamonds. Despite the agent's high levels of discrimination performance, Beer found no evidence of internal representation in the best-evolved agent's nervous system. Here we examine the generality of this result. We evolved an agent for shape discrimination, and performed extensive behavioral analyses to test for representation. In this case we find that agents developed to discriminate equal-width shapes exhibit what Clark [Clark, A. (1997). The dynamical challenge. Cognitive Science, 21 (4), 461-481] calls "weak-substantive representation". The agent had internal configurations that (1) were understandably related to the object in the environment, and (2) were functionally used in a task relevant way when the target was not visible to the agent.
Nikbakht, Nader; Tafreshiha, Azadeh; Zoccolan, Davide; Diamond, Mathew E
2018-02-07
To better understand how object recognition can be triggered independently of the sensory channel through which information is acquired, we devised a task in which rats judged the orientation of a raised, black and white grating. They learned to recognize two categories of orientation: 0° ± 45° ("horizontal") and 90° ± 45° ("vertical"). Each trial required a visual (V), a tactile (T), or a visual-tactile (VT) discrimination; VT performance was better than that predicted by optimal linear combination of V and T signals, indicating synergy between sensory channels. We examined posterior parietal cortex (PPC) and uncovered key neuronal correlates of the behavioral findings: PPC carried both graded information about object orientation and categorical information about the rat's upcoming choice; single neurons exhibited identical responses under the three modality conditions. Finally, a linear classifier of neuronal population firing replicated the behavioral findings. Taken together, these findings suggest that PPC is involved in the supramodal processing of shape. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
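The benchmark against which the rats' bimodal (VT) performance is compared is the standard optimal linear combination rule, in which each cue is weighted by its reliability so that the combined estimate has noise 1/σ_VT² = 1/σ_V² + 1/σ_T². A minimal sketch of that prediction follows; the numeric noise values are hypothetical, not the study's data.

```python
import math

def optimal_combined_sigma(sigma_v: float, sigma_t: float) -> float:
    """Noise of the optimal linear (reliability-weighted) combination:
    1/sigma_vt^2 = 1/sigma_v^2 + 1/sigma_t^2."""
    return math.sqrt((sigma_v**2 * sigma_t**2) / (sigma_v**2 + sigma_t**2))

# Hypothetical unisensory orientation-discrimination noise (degrees).
sigma_v, sigma_t = 12.0, 20.0
sigma_vt = optimal_combined_sigma(sigma_v, sigma_t)

# The optimal combination is always at least as precise as the better
# single cue; observed VT performance exceeding even this bound is what
# the abstract calls synergy between sensory channels.
print(f"predicted bimodal sigma: {sigma_vt:.1f} deg")  # → 10.3 deg
```

Because the predicted σ_VT is a hard lower bound on the noise of any linear combination of the two cues, measured VT thresholds below it imply supralinear (synergistic) integration rather than mere optimal weighting.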
Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel
2012-01-01
Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479
Interference effects of categorization on decision making.
Wang, Zheng; Busemeyer, Jerome R
2016-05-01
Many decision making tasks in life involve a categorization process, but the effects of categorization on subsequent decision making have rarely been studied. This issue was explored in three experiments (N=721), in which participants were shown a face stimulus on each trial and performed variations of categorization-decision tasks. On C-D trials, they categorized the stimulus and then made an action decision; on X-D trials, they were told the category and then made an action decision; on D-alone trials, they only made an action decision. An interference effect emerged in some of the conditions, such that the probability of an action on the D-alone trials (i.e., when there was no explicit categorization before the decision) differed from the total probability of the same action on the C-D or X-D trials (i.e., when there was explicit categorization before the decision). Interference effects are important because they indicate a violation of the classical law of total probability, which is assumed by many cognitive models. Across all three experiments, a complex pattern of interference effects systematically occurred for different types of stimuli and for different types of categorization-decision tasks. These interference effects present a challenge for traditional cognitive models, such as Markov and signal detection models, but a quantum cognition model, called the belief-action entanglement (BAE) model, predicted that these results could occur. The BAE model employs the quantum principles of superposition and entanglement to explain the psychological mechanisms underlying the puzzling interference effects. The model can be applied to many important and practical categorization-decision situations in life. Copyright © 2016 Elsevier B.V. All rights reserved.
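The law-of-total-probability prediction that these interference effects violate can be written out directly. A small sketch with hypothetical probabilities (the category and action labels are illustrative, not taken from the experiments):

```python
# With explicit categorization into two categories C1 and C2, the
# classical law of total probability predicts the D-alone action rate:
#   P(action) = P(C1) * P(action | C1) + P(C2) * P(action | C2)
# All numbers below are hypothetical.
p_c1 = 0.6               # P(stimulus categorized as C1)
p_c2 = 1 - p_c1
p_act_given_c1 = 0.3     # P(action | categorized C1)
p_act_given_c2 = 0.8     # P(action | categorized C2)

# Total-probability prediction for the D-alone condition
p_total = p_c1 * p_act_given_c1 + p_c2 * p_act_given_c2  # = 0.50

# An interference effect is a reliable deviation of the observed
# D-alone probability from this prediction.
p_d_alone_observed = 0.42  # hypothetical
interference = p_d_alone_observed - p_total
print(round(p_total, 2), round(interference, 2))
```

A nonzero `interference` is what classical Markov-style models cannot produce, and what the BAE model is built to accommodate.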
Xu, Yang; D'Lauro, Christopher; Pyles, John A.; Kass, Robert E.; Tarr, Michael J.
2013-01-01
Humans are remarkably proficient at categorizing visually similar objects. To better understand the cortical basis of this categorization process, we used magnetoencephalography (MEG) to record neural activity while participants learned, with feedback, to discriminate two highly similar, novel visual categories. We hypothesized that although prefrontal regions would mediate early category learning, this role would diminish with increasing category familiarity and that regions within the ventral visual pathway would come to play a more prominent role in encoding category-relevant information as learning progressed. Early in learning, we observed some degree of categorical discriminability and predictability in both prefrontal cortex and the ventral visual pathway. Predictability improved significantly above chance in the ventral visual pathway over the course of learning, with the left inferior temporal and fusiform gyri showing the greatest improvement in predictability between 150 and 250 ms (M200) during category learning. In contrast, there was no comparable increase in discriminability in prefrontal cortex, with the only significant post-learning effect being a decrease in predictability in the inferior frontal gyrus between 250 and 350 ms (M300). Thus, the ventral visual pathway appears to encode learned visual categories over the long term. At the same time, these results add to our understanding of the cortical origins of previously reported signature temporal components associated with perceptual learning. PMID:24146656
Subliminal semantic priming in speech.
Daltrozzo, Jérôme; Signoret, Carine; Tillmann, Barbara; Perrin, Fabien
2011-01-01
Numerous studies have reported subliminal repetition and semantic priming in the visual modality. We transferred this paradigm to the auditory modality. Prime awareness was manipulated by a reduction of sound intensity level. Uncategorized prime words (according to a post-test) were followed by semantically related, unrelated, or repeated target words (presented without intensity reduction) and participants performed a lexical decision task (LDT). Participants with slower reaction times in the LDT showed semantic priming (faster reaction times for semantically related compared to unrelated targets) and negative repetition priming (slower reaction times for repeated compared to semantically related targets). This is the first report of semantic priming in the auditory modality without conscious categorization of the prime. PMID:21655277
Brand, John; Johnson, Aaron P
2014-01-01
In four experiments, we investigated how attention to local and global levels of hierarchical Navon figures affected the selection of diagnostic spatial scale information used in scene categorization. We explored this issue by asking observers to classify hybrid images (i.e., images that contain low spatial frequency (LSF) content of one image and high spatial frequency (HSF) content from a second image) immediately following global and local Navon tasks. Hybrid images can be classified according to either their LSF or HSF content, making them ideal for investigating diagnostic spatial scale preference. Although observers were sensitive to both spatial scales (Experiment 1), they overwhelmingly preferred to classify hybrids based on LSF content (Experiment 2). In Experiment 3, we demonstrated that LSF-based hybrid categorization was faster following global Navon tasks, suggesting that LSF processing associated with global Navon tasks primed the selection of LSFs in hybrid images. Experiment 4 examined this hypothesis by replicating Experiment 3 while suppressing the LSF information in the Navon letters through contrast balancing of the stimuli. As in Experiment 3, observers preferred to classify hybrids based on LSF content; in contrast, however, LSF-based hybrid categorization was slower following global than local Navon tasks. PMID:25520675
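Hybrid images of the kind described above are typically built by summing the low-pass-filtered content of one image with the complementary high-pass content of another. A minimal frequency-domain sketch (the ideal hard cutoff and the synthetic arrays are assumptions for illustration; published stimuli use calibrated filters on real photographs):

```python
import numpy as np

def radial_freq(shape):
    """Radial spatial-frequency magnitude (cycles/pixel) of each 2-D FFT bin."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.sqrt(fy ** 2 + fx ** 2)

def hybrid(img_lsf, img_hsf, cutoff=0.1):
    """Keep frequencies at or below `cutoff` from img_lsf and the
    complementary high frequencies from img_hsf."""
    r = radial_freq(img_lsf.shape)
    low_mask = (r <= cutoff).astype(float)
    fa = np.fft.fft2(img_lsf) * low_mask          # LSF content of image A
    fb = np.fft.fft2(img_hsf) * (1.0 - low_mask)  # HSF content of image B
    return np.real(np.fft.ifft2(fa + fb))

rng = np.random.default_rng(0)
a = rng.random((64, 64))  # stands in for the LSF source image
b = rng.random((64, 64))  # stands in for the HSF source image
h = hybrid(a, b)
```

Because the two masks sum to one, feeding the same image into both channels returns it unchanged, which is a handy sanity check on the filter construction.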
Operativity and the Superordinate Categorization of Artifacts.
ERIC Educational Resources Information Center
Ricco, Robert B.; Beilin, Harry
1992-01-01
The relative consistency in categorization by age-equivalent groups of preoperational and concrete-operational first graders was assessed across two categorization tasks employing color drawings of exemplars of superordinate artifact categories. Results are discussed in terms of the utilization and acquisition of knowledge about superordinate…
Abad, María J F; Noguera, Carmen; Ortells, Juan J
2003-07-01
The present research examines the influence of prime-target relationship (associative and categorical versus categorical only) on priming effects from attended and ignored parafoveal words. Participants performed a lexical-decision task on a single central target, which was preceded by two parafoveal prime words, one of which (the attended prime) was spatially precued. The results showed reliable positive and negative priming effects from attended and ignored words, respectively. However, this priming pattern was observed only for the "associative and categorical", but not for the "categorical only" relationship condition. These results suggest that the lack of semantic priming effects from words in some prior studies may be attributed to the kind of material used (i.e. weakly-associated word pairs).
Visual context modulates potentiation of grasp types during semantic object categorization.
Kalénine, Solène; Shapiro, Allison D; Flumini, Andrea; Borghi, Anna M; Buxbaum, Laurel J
2014-06-01
Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use-compatible, as compared with move-compatible, contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions. PMID:24186270
Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer.
Ashtiani, Matin N; Kheradpisheh, Saeed R; Masquelier, Timothée; Ganjtabesh, Mohammad
2017-01-01
The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., superordinate, basic, and subordinate levels. One important question is to identify the "entry" level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over the other two. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass-filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects at each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that different spatial frequency information had different effects on object categorization at each level. In the absence of high frequency information, subordinate- and basic-level categorization are performed less accurately, while superordinate-level categorization is performed well. This means that low frequency information is sufficient for the superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid a ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect.
The compatibility between our psychophysical and computational results suggests that the temporal advantage of the superordinate (resp. basic) level over the basic (resp. subordinate) level is mainly due to computational constraints (the visual system processes higher spatial frequencies more slowly, and categorization at finer levels depends more on these higher spatial frequencies). PMID:28790954
Affective and contextual values modulate spatial frequency use in object recognition
Caplette, Laurent; West, Gregory; Gomot, Marie; Gosselin, Frédéric; Wicker, Bruno
2014-01-01
Visual object recognition is of fundamental importance in our everyday interaction with the environment. Recent models of visual perception emphasize the role of top-down predictions facilitating object recognition via initial guesses that limit the number of object representations that need to be considered. Several results suggest that this rapid and efficient object processing relies on the early extraction and processing of low spatial frequencies (LSF). The present study aimed to investigate the SF content of visual object representations and its modulation by contextual and affective values of the perceived object during a picture-name verification task. Stimuli consisted of pictures of objects equalized in SF content and categorized as having low or high affective and contextual values. To access the SF content of stored visual representations of objects, SFs of each image were then randomly sampled on a trial-by-trial basis. Results reveal that intermediate SFs between 14 and 24 cycles per object (2.3–4 cycles per degree) are correlated with fast and accurate identification for all categories of objects. Moreover, there was a significant interaction between affective and contextual values over the SFs correlating with fast recognition. These results suggest that affective and contextual values of a visual object modulate the SF content of its internal representation, thus highlighting the flexibility of the visual recognition system. PMID:24904514
Visual Word Recognition in Deaf Readers: Lexicality Is Modulated by Communication Mode
Barca, Laura; Pezzulo, Giovanni; Castrataro, Marianna; Rinaldi, Pasquale; Caselli, Maria Cristina
2013-01-01
Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with less or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers who responded faster to real words than consonant strings, showing over-reliance on whole word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects. PMID:23554976
Seeing is not stereotyping: the functional independence of categorization and stereotype activation
Tomelleri, Silvia
2017-01-01
Social categorization has been viewed as necessarily resulting in stereotyping, yet extant research suggests the two processes are differentially sensitive to task manipulations. Here, we simultaneously test the degree to which race perception and stereotyping are conditionally automatic. Participants performed a sequential priming task while either explicitly attending to the race of face primes or directing attention away from their semantic nature. We find a dissociation between the perceptual encoding of race and subsequent activation of associated stereotypes, with race perception occurring in both task conditions, but implicit stereotyping occurring only when attention is directed to the race of the face primes. These results support a clear conceptual distinction between categorization and stereotyping and show that the encoding of racial category need not result in stereotype activation. PMID:28338829
Retter, Talia L; Jiang, Fang; Webster, Michael A; Rossion, Bruno
2018-04-01
Fast periodic visual stimulation combined with electroencephalography (FPVS-EEG) has unique sensitivity and objectivity in measuring rapid visual categorization processes. It constrains image processing time by presenting stimuli rapidly through brief stimulus presentation durations and short inter-stimulus intervals. However, the selective impact of these temporal parameters on visual categorization is largely unknown. Here, we presented natural images of objects at a rate of 10 or 20 per second (10 or 20 Hz), with faces appearing once per second (1 Hz), leading to two distinct frequency-tagged EEG responses. Twelve observers were tested with three squarewave image presentation conditions: 1) with an ISI, a traditional 50% duty cycle at 10 Hz (50-ms stimulus duration separated by a 50-ms ISI); 2) removing the ISI and matching the rate, a 100% duty cycle at 10 Hz (100-ms duration with 0-ms ISI); 3) removing the ISI and matching the stimulus presentation duration, a 100% duty cycle at 20 Hz (50-ms duration with 0-ms ISI). The face categorization response was significantly decreased in the 20 Hz 100% condition. The conditions at 10 Hz showed similar face-categorization responses, peaking maximally over the right occipito-temporal (ROT) cortex. However, the onset of the 10 Hz 100% response was delayed by about 20 ms over the ROT region relative to the 10 Hz 50% condition, likely due to immediate forward-masking by preceding images. Taken together, these results help to interpret how the FPVS-EEG paradigm sets temporal constraints on visual image categorization. Copyright © 2018 Elsevier Ltd. All rights reserved.
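The frequency-tagging logic of FPVS-EEG can be illustrated with a synthetic signal: a response common to all images appears at the 10 Hz base rate, while a face-selective response appears at the 1 Hz oddball rate, and each is read out at its exact frequency bin of the spectrum. The amplitudes and sampling rate below are illustrative, not EEG data:

```python
import numpy as np

# Simulate 10 s of a frequency-tagged response: a general visual response
# at the 10 Hz base stimulation rate plus a face-selective response at the
# 1 Hz oddball rate (amplitudes 1.0 and 0.5 are arbitrary).
fs = 512                          # sampling rate (Hz), illustrative
t = np.arange(0, 10, 1 / fs)      # 5120 samples
signal = (1.0 * np.sin(2 * np.pi * 10 * t)    # base-rate response
          + 0.5 * np.sin(2 * np.pi * 1 * t))  # face-categorization response

# Amplitude spectrum; a 10 s window gives an exact 0.1 Hz bin resolution,
# so both tagged frequencies fall on exact bins (no spectral leakage).
spectrum = 2 * np.abs(np.fft.rfft(signal)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

amp_base = spectrum[np.argmin(np.abs(freqs - 10))]   # ~1.0
amp_oddball = spectrum[np.argmin(np.abs(freqs - 1))] # ~0.5
```

The same bin-wise readout is why the two responses can be separated objectively even though they overlap completely in time.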
Giglhuber, Katrin; Maurer, Stefanie; Zimmer, Claus; Meyer, Bernhard; Krieg, Sandro M
2017-02-01
In clinical practice, repetitive navigated transcranial magnetic stimulation (rTMS) is of particular interest for non-invasive mapping of cortical language areas. Yet rTMS studies also aim to detect further cortical functions. Damage to the underlying network of visuospatial attention can result in visual neglect, a severe neurological deficit and a factor in significantly reduced functional outcome. This investigation aims to evaluate the use of rTMS for evoking visual neglect in healthy volunteers and its potential for specifically locating cortical areas that subserve visuospatial attention. Ten healthy, right-handed subjects underwent rTMS visual neglect mapping. Repetitive trains of 5 Hz and 10 pulses were applied to 52 pre-defined cortical spots on each hemisphere; each cortical spot was stimulated 10 times. Visuospatial attention was tested time-locked to rTMS pulses by a landmark task. Task pictures were displayed tachistoscopically for 50 ms. The subjects' performance was analyzed by video, and errors were referenced to cortical spots. We observed visual neglect-like deficits during stimulation of both hemispheres. Errors were categorized into leftward, rightward, and no-response errors. Rightward errors occurred significantly more often during stimulation of the right hemisphere than during stimulation of the left hemisphere (mean rightward error rate (ER) 1.6 ± 1.3 % vs. 1.0 ± 1.0 %, p = 0.0141). Within the left hemisphere, we observed predominantly leftward rather than rightward errors (mean leftward ER 2.0 ± 1.3 % vs. rightward ER 1.0 ± 1.0 %; p = 0.0005). Visual neglect can thus be elicited non-invasively by rTMS, and cortical areas eloquent for visuospatial attention can be detected. However, the correlation of this approach with clinical findings remains to be demonstrated in upcoming studies.
The dynamics of categorization: Unraveling rapid categorization.
Mack, Michael L; Palmeri, Thomas J
2015-06-01
We explore a puzzle of visual object categorization: Under normal viewing conditions, you spot something as a dog fastest, but at a glance, you spot it faster as an animal. During speeded category verification, a classic basic-level advantage is commonly observed (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976), with categorization as a dog faster than as an animal (superordinate) or Golden Retriever (subordinate). A different story emerges during ultra-rapid categorization with limited exposure duration (<30 ms), with superordinate categorization faster than basic or subordinate categorization (Thorpe, Fize, & Marlot, 1996). These two widely cited findings paint contrary theoretical pictures about the time course of categorization, yet no previous study has investigated them together. We systematically examined two experimental factors that could explain the qualitative difference in categorization across the two paradigms: exposure duration and category trial context. Mapping out the time course of object categorization by manipulating exposure duration and the timing of a post-stimulus mask revealed that brief exposure durations favor superordinate-level categorization, but with more time a basic-level advantage emerges. However, these advantages were modulated by category trial context. With randomized target categories, the superordinate advantage was eliminated; and with only four repetitions of superordinate categorization within an otherwise randomized context, the basic-level advantage was eliminated. Contrary to theoretical accounts that dictate a fixed priority for certain levels of abstraction in visual processing and access to semantic knowledge, the dynamics of object categorization are flexible, depending jointly on the level of abstraction, time for perceptual encoding, and category context. (c) 2015 APA, all rights reserved).
Volz, Kirsten G; Rübsamen, Rudolf; von Cramon, D Yves
2008-09-01
According to the Oxford English Dictionary, intuition is "the ability to understand or know something immediately, without conscious reasoning." In other words, people continuously, without conscious attention, recognize patterns in the stream of sensations that impinge upon them. The result is a vague perception of coherence, which subsequently biases thought and behavior accordingly. Within the visual domain, research using paradigms with difficult recognition has suggested that the orbitofrontal cortex (OFC) serves as a fast detector and predictor of potential content that utilizes coarse facets of the input. To investigate whether the OFC is crucial in biasing task-specific processing, and hence subserves intuitive judgments in various modalities, we used a difficult-recognition paradigm in the auditory domain. Participants were presented with short sequences of distorted, nonverbal, environmental sounds and had to perform a sound categorization task. Imaging results revealed rostral medial OFC activation for such auditory intuitive coherence judgments. By means of a conjunction analysis between the present results and those from a previous study on visual intuitive coherence judgments, the rostral medial OFC was shown to be activated via both modalities. We conclude that rostral OFC activation during intuitive coherence judgments subserves the detection of potential content on the basis of only coarse facets of the input.
The Role of Coping and Race in Healthy Children’s Experimental Pain Responses
Evans, Subhadra; Lu, Qian; Tsao, Jennie C. I.; Zeltzer, Lonnie K.
2009-01-01
This study examined the relationship between race, laboratory-based coping strategies and anticipatory anxiety and pain intensity for cold, thermal (heat) and pressure experimental pain tasks. Participants were 123 healthy children and adolescents, including 33 African Americans (51% female; mean age = 13.9 years) and 90 Caucasians (50% female; mean age = 12.6 years). Coping in response to the cold task was assessed with the Lab Coping Style interview; based on their interview responses, participants were categorized as ‘attenders’ (i.e., those who focused on the task) vs. ‘distractors’ (i.e., those who distracted themselves during the task). Analysis of covariance (ANCOVA) revealed significant interactions between race (African-American vs. Caucasian) and lab-based coping style after controlling for sex, age and socioeconomic status. African-American children classified as attenders reported less anticipatory anxiety for the cold task and lower pain intensity for the cold, heat and pressure tasks compared to those categorized as distractors. For these pain outcomes, Caucasian children classified as distractors reported less anticipatory anxiety and lower pain intensity relative to those categorized as attenders. The findings point to the moderating effect of coping in the relationship between race and experimental pain sensitivity. PMID:20352035
Categorical Perception of Emotional Facial Expressions in Preschoolers
ERIC Educational Resources Information Center
Cheal, Jenna L.; Rutherford, M. D.
2011-01-01
Adults perceive emotional facial expressions categorically. In this study, we explored categorical perception in 3.5-year-olds by creating a morphed continuum of emotional faces and tested preschoolers' discrimination and identification of them. In the discrimination task, participants indicated whether two examples from the continuum "felt the…
Different Measures of Structural Similarity Tap Different Aspects of Visual Object Processing
Gerlach, Christian
2017-01-01
The structural similarity of objects has been an important variable in explaining why some objects are easier to categorize at a superordinate level than to individuate, and also why some patients with brain injury have more difficulties in recognizing natural (structurally similar) objects than artifacts (structurally distinct objects). In spite of its merits as an explanatory variable, structural similarity is not a unitary construct, and it has been operationalized in different ways. Furthermore, even though measures of structural similarity have been successful in explaining task and category-effects, this has been based more on implication than on direct empirical demonstrations. Here, the direct influence of two different measures of structural similarity, contour overlap and within-item structural diversity, on object individuation (object decision) and superordinate categorization performance is examined. Both measures can account for performance differences across objects, but in different conditions. It is argued that this reflects differences between the measures in whether they tap: (i) global or local shape characteristics, and (ii) between- or within-category structural similarity. PMID:28861027
Lin, Nan; Guo, Qihao; Han, Zaizhu; Bi, Yanchao
2011-11-01
Neuropsychological and neuroimaging studies have indicated that motor knowledge is one potential dimension along which concepts are organized. Here we present further direct evidence for the effects of motor knowledge in accounting for categorical patterns across object domains (living vs. nonliving) and grammatical domains (nouns vs. verbs), as well as the integrity of other modality-specific knowledge (e.g., visual). We present a Chinese case, XRK, who suffered from semantic dementia with left temporal lobe atrophy. In naming and comprehension tasks, he performed better on nonliving items than on living items, and better on verbs than on nouns. Critically, multiple regression analyses revealed that these two categorical effects could both be accounted for by the charade rating, a continuous measurement of the significance of motor knowledge for a concept or a semantic feature. Furthermore, charade rating also predicted his performance on the generation frequency of semantic features of various modalities. These findings consolidate the significance of motor knowledge in conceptual organization and further highlight the interactions between different types of semantic knowledge. Copyright © 2010 Elsevier Inc. All rights reserved.
Changes in default mode network as automaticity develops in a categorization task.
Shamloo, Farzin; Helie, Sebastien
2016-10-15
The default mode network (DMN) is a set of brain regions in which blood oxygen level dependent signal is suppressed during attentional focus on the external environment. Because automatic task processing requires less attention, development of automaticity in a rule-based categorization task may result in less deactivation and altered functional connectivity of the DMN when compared to the initial learning stage. We tested this hypothesis by re-analyzing functional magnetic resonance imaging data of participants trained in rule-based categorization for over 10,000 trials (Helie et al., 2010) [12,13]. The results show that some DMN regions are deactivated in initial training but not after automaticity has developed. There is also a significant decrease in DMN deactivation after extensive practice. Seed-based functional connectivity analyses with the precuneus, medial prefrontal cortex (two important DMN regions) and Brodmann area 6 (an important region in automatic categorization) were also performed. The results show increased functional connectivity with both DMN and non-DMN regions after the development of automaticity, and a decrease in functional connectivity between the medial prefrontal cortex and ventromedial orbitofrontal cortex. Together, these results further support the hypothesis of a strategy shift in automatic categorization and bridge the cognitive and neuroscientific conceptions of automaticity in showing that the reduced need for cognitive resources in automatic processing is accompanied by a disinhibition of the DMN and stronger functional connectivity between DMN and task-related brain regions. Copyright © 2016 Elsevier B.V. All rights reserved.
Human-computer dialogue: Interaction tasks and techniques. Survey and categorization
NASA Technical Reports Server (NTRS)
Foley, J. D.
1983-01-01
Interaction techniques are described. Six basic interaction tasks, requirements for each task, requirements related to interaction techniques, and a technique's hardware prerequisites affecting device selection are discussed.
Peykarjou, Stefanie; Hoehl, Stefanie; Pauen, Sabina; Rossion, Bruno
2017-10-02
This study investigates categorization of human and ape faces in 9-month-olds using a Fast Periodic Visual Stimulation (FPVS) paradigm while measuring EEG. Categorization responses are elicited only if infants discriminate between different categories and generalize across exemplars within each category. In study 1, human or ape faces were presented as standard and deviant stimuli in upright and inverted trials. Upright ape faces presented among humans elicited strong categorization responses, whereas responses for upright human faces and for inverted ape faces were smaller. Deviant inverted human faces did not elicit categorization. Data were best explained by a model with main effects of species and orientation. However, variance of low-level image characteristics was higher for the ape than the human category. Variance was matched to replicate this finding in an independent sample (study 2). Both human and ape faces elicited categorization in upright and inverted conditions, but upright ape faces elicited the strongest responses. Again, data were best explained by a model of two main effects. These experiments demonstrate that 9-month-olds rapidly categorize faces, and unfamiliar faces presented among human faces elicit increased categorization responses. This likely reflects habituation for the familiar standard category, and stronger release for the unfamiliar category deviants.
NASA Astrophysics Data System (ADS)
Tohir, M.; Abidin, Z.; Dafik; Hobri
2018-04-01
Arithmetic is one of the topics in mathematics dealing with logic and with the detailed process of generalizing formulas. Creativity and flexibility are needed in generalizing the formula of an arithmetic series. This research aimed at analyzing students' creative thinking skills in generalizing arithmetic series. The triangulation method and research-based learning were used in this research. The subjects were students of the Master Program of Mathematics Education in the Faculty of Teacher Training and Education at Jember University. The data were collected by giving assignments to the students: an open problem-solving task and a documentation study in which the students arranged a generalization pattern based on a function formula dependent on i and a function dependent on i and j. Then, the students finished the next problem-solving task to construct arithmetic generalization patterns based on a function formula dependent on i and i + n and the sum formula of functions dependent on i and j of the arithmetic series compiled. The data analysis technique used in this study was the Miles and Huberman analysis model. Based on the analysis of task 1, the students' creative thinking skills were classified as follows: 22.22% of the students were categorized as "not creative"; 38.89% as "less creative"; 22.22% as "sufficiently creative"; and 16.67% as "creative". By contrast, the analysis of task 2 found that 22.22% of the students were categorized as "sufficiently creative", 44.44% as "creative", and 33.33% as "very creative". These results can serve as a basis for teaching references and for actualizing a better teaching model in order to increase students' creative thinking skills.
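The kind of generalization the tasks above target can be illustrated with a minimal sketch (hypothetical, not taken from the study's materials): deriving the closed-form sum of an arithmetic series, S_n = n/2 · (2a + (n−1)d), and checking it against the term-by-term definition.

```python
# Minimal sketch (not from the study): an arithmetic series and two ways to
# compute its sum, illustrating the generalization students were asked to make.

def arithmetic_term(a, d, i):
    """i-th term (1-indexed) of an arithmetic sequence with first term a
    and common difference d: a, a+d, a+2d, ..."""
    return a + (i - 1) * d

def series_sum_iterative(a, d, n):
    """Sum the first n terms directly, one term at a time."""
    return sum(arithmetic_term(a, d, i) for i in range(1, n + 1))

def series_sum_closed_form(a, d, n):
    """Generalized formula: S_n = n/2 * (2a + (n-1)d).
    The numerator n*(2a + (n-1)d) is always even for integer a, d,
    so integer division is exact."""
    return n * (2 * a + (n - 1) * d) // 2

# The closed form agrees with the term-by-term sum, e.g. 3+7+11+...+39:
assert series_sum_iterative(3, 4, 10) == series_sum_closed_form(3, 4, 10) == 210
```

The iterative version encodes the definition; the closed form is the generalized pattern the students were expected to construct and justify.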
Taxonomic and ad hoc categorization within the two cerebral hemispheres.
Shen, Yeshayahu; Aharoni, Bat-El; Mashal, Nira
2015-01-01
A typicality effect refers to categorization that is performed more quickly or more accurately for typical than for atypical members of a given category. Previous studies reported a typicality effect for category members presented in the left visual field/right hemisphere (RH), suggesting that the RH applies a similarity-based categorization strategy. However, findings regarding the typicality effect within the left hemisphere (LH) are less conclusive. The current study tested the pattern of typicality effects within each hemisphere for both taxonomic and ad hoc categories, using words presented to the left or right visual fields. Experiment 1 tested typical and atypical members of taxonomic categories as well as non-members, and Experiment 2 tested typical and atypical members of ad hoc categories as well as non-members. The results revealed a typicality effect in both hemispheres and in both types of categories. Furthermore, the RH categorized atypical stimuli more accurately than did the LH. Our findings suggest that both hemispheres rely on a similarity-based categorization strategy, but the coarse semantic coding of the RH seems to facilitate the categorization of atypical members.
Seeing is not stereotyping: the functional independence of categorization and stereotype activation.
Ito, Tiffany A; Tomelleri, Silvia
2017-05-01
Social categorization has been viewed as necessarily resulting in stereotyping, yet extant research suggests the two processes are differentially sensitive to task manipulations. Here, we simultaneously test the degree to which race perception and stereotyping are conditionally automatic. Participants performed a sequential priming task while either explicitly attending to the race of face primes or directing attention away from their semantic nature. We find a dissociation between the perceptual encoding of race and subsequent activation of associated stereotypes, with race perception occurring in both task conditions, but implicit stereotyping occurring only when attention is directed to the race of the face primes. These results support a clear conceptual distinction between categorization and stereotyping and show that the encoding of racial category need not result in stereotype activation. © The Author (2017). Published by Oxford University Press.
Kauffmann, Louise; Chauvin, Alan; Pichat, Cédric; Peyrin, Carole
2015-10-01
According to current models of visual perception, scenes are processed in terms of spatial frequencies following a predominantly coarse-to-fine processing sequence. Low spatial frequencies (LSF) reach high-order areas rapidly in order to activate plausible interpretations of the visual input. This triggers top-down facilitation that guides subsequent processing of high spatial frequencies (HSF) in lower-level areas such as the inferotemporal and occipital cortices. However, dynamic interactions underlying top-down influences on the occipital cortex have never been systematically investigated. The present fMRI study aimed to further explore the neural bases and effective connectivity underlying coarse-to-fine processing of scenes, particularly the role of the occipital cortex. We used sequences of six filtered scenes as stimuli depicting coarse-to-fine or fine-to-coarse processing of scenes. Participants performed a categorization task on these stimuli (indoor vs. outdoor). Firstly, we showed that coarse-to-fine (compared to fine-to-coarse) sequences elicited stronger activation in the inferior frontal gyrus (in the orbitofrontal cortex), the inferotemporal cortex (in the fusiform and parahippocampal gyri), and the occipital cortex (in the cuneus). Dynamic causal modeling (DCM) was then used to infer effective connectivity between these regions. DCM results revealed that coarse-to-fine processing resulted in increased connectivity from the occipital cortex to the inferior frontal gyrus and from the inferior frontal gyrus to the inferotemporal cortex. Critically, we also observed an increase in connectivity strength from the inferior frontal gyrus to the occipital cortex, suggesting that top-down influences from frontal areas may guide processing of incoming signals. The present results support current models of visual perception and refine them by emphasizing the role of the occipital cortex as a cortical site for feedback projections in the neural network underlying coarse-to-fine processing of scenes. Copyright © 2015 Elsevier Inc. All rights reserved.
The contributions of visual and central attention to visual working memory.
Souza, Alessandra S; Oberauer, Klaus
2017-10-01
We investigated the role of two kinds of attention, visual and central, for the maintenance of visual representations in working memory (WM). In Experiment 1 we directed attention to individual items in WM by presenting cues during the retention interval of a continuous delayed-estimation task, and instructing participants to think of the cued items. Attending to items improved recall commensurate with the frequency with which items were attended (0, 1, or 2 times). Experiments 1 and 3 further tested which kind of attention, visual or central, was involved in WM maintenance. We assessed the dual-task costs of two types of distractor tasks, one tapping sustained visual attention and one tapping central attention. Only the central attention task yielded substantial dual-task costs, implying that central attention substantially contributes to maintenance of visual information in WM. Experiment 2 confirmed that the visual-attention distractor task was demanding enough to disrupt performance in a task relying on visual attention. We combined the visual-attention and the central-attention distractor tasks with a multiple object tracking (MOT) task. Distracting visual attention, but not central attention, impaired MOT performance. Jointly, the three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: Whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.
ERIC Educational Resources Information Center
Keel, Linda P.
1998-01-01
Argues that Waldo and Bauman's Goals and Process (GAP) matrix does not include task/work groups. Claims that it is not in the best interest of group work to undo or rework the Association for Specialists in Group Work's four core groups as a model. States that the field of group work needs a commonly shared framework/categorization from which to…
Postma, Albert; Zuidhoek, Sander; Noordzij, Matthijs L; Kappers, Astrid M L
2007-01-01
The roles of visual and haptic experience in different aspects of haptic processing of objects in peripersonal space are examined. In three trials, early-blind, late-blind, and blindfolded-sighted individuals had to match ten shapes haptically to the cut-outs in a board as fast as possible. Both blind groups were much faster than the sighted in all three trials. All three groups improved considerably from trial to trial. In particular, the sighted group showed a strong improvement from the first to the second trial. While superiority of the blind remained for speeded matching after rotation of the stimulus frame, coordinate positional-memory scores in a non-speeded free-recall trial showed no significant differences between the groups. Moreover, when assessed with a verbal response, categorical spatial-memory appeared strongest in the late-blind group. The role of haptic and visual experience thus appears to depend on the task aspect tested.
Almeida, Jorge; Amaral, Lénia; Garcea, Frank E; Aguiar de Sousa, Diana; Xu, Shan; Mahon, Bradford Z; Martins, Isabel Pavão
2018-05-24
A major principle of organization of the visual system is between a dorsal stream that processes visuomotor information and a ventral stream that supports object recognition. Most research has focused on dissociating processing across these two streams. Here we focus on how the two streams interact. We tested neurologically-intact and impaired participants in an object categorization task over two classes of objects that depend on processing within both streams-hands and tools. We measured how unconscious processing of images from one of these categories (e.g., tools) affects the recognition of images from the other category (i.e., hands). Our findings with neurologically-intact participants demonstrated that processing an image of a hand hampers the subsequent processing of an image of a tool, and vice versa. These results were not present in apraxic patients (N = 3). These findings suggest local and global inhibitory processes working in tandem to co-register information across the two streams.
Mapping the Speech Code: Cortical Responses Linking the Perception and Production of Vowels
Schuerman, William L.; Meyer, Antje S.; McQueen, James M.
2017-01-01
The acoustic realization of speech is constrained by the physical mechanisms by which it is produced. Yet for speech perception, the degree to which listeners utilize experience derived from speech production has long been debated. In the present study, we examined how sensorimotor adaptation during production may affect perception, and how this relationship may be reflected in early vs. late electrophysiological responses. Participants first performed a baseline speech production task, followed by a vowel categorization task during which EEG responses were recorded. In a subsequent speech production task, half the participants received shifted auditory feedback, leading most to alter their articulations. This was followed by a second, post-training vowel categorization task. We compared changes in vowel production to both behavioral and electrophysiological changes in vowel perception. No differences in phonetic categorization were observed between groups receiving altered or unaltered feedback. However, exploratory analyses revealed correlations between vocal motor behavior and phonetic categorization. EEG analyses revealed correlations between vocal motor behavior and cortical responses in both early and late time windows. These results suggest that participants' recent production behavior influenced subsequent vowel perception. We suggest that the change in perception can be best characterized as a mapping of acoustics onto articulation. PMID:28439232
Antoine, Sophie; Ranzini, Mariagrazia; Gebuis, Titia; van Dijck, Jean-Philippe; Gevers, Wim
2017-10-01
A largely substantiated view in the domain of working memory is that the maintenance of serial order is achieved by generating associations of each item with an independent representation of its position, so-called position markers. Recent studies reported that the ordinal position of an item in verbal working memory interacts with spatial processing. This suggests that position markers might be spatial in nature. However, these interactions were so far observed in tasks implying a clear binary categorization of space (i.e., with left and right responses or targets). Such binary categorizations leave room for alternative interpretations, such as congruency between non-spatial categorical codes for ordinal position (e.g., begin and end) and spatial categorical codes for response (e.g., left and right). Here we discard this interpretation by providing evidence that this interaction can also be observed in a task that draws upon a continuous processing of space, the line bisection task. Specifically, bisections are modulated by ordinal position in verbal working memory, with lines bisected more towards the right after retrieving items from the end compared to the beginning of the memorized sequence. This supports the idea that position markers are intrinsically spatial in nature.
Künstler, E C S; Finke, K; Günther, A; Klingner, C; Witte, O; Bublak, P
2018-01-01
Dual tasking, or the simultaneous execution of two continuous tasks, is frequently associated with a performance decline that can be explained within a capacity sharing framework. In this study, we assessed the effects of a concurrent motor task on the efficiency of visual information uptake based on the 'theory of visual attention' (TVA). TVA provides parameter estimates reflecting distinct components of visual processing capacity: perceptual threshold, visual processing speed, and visual short-term memory (VSTM) storage capacity. Moreover, goodness-of-fit values and bootstrapping estimates were derived to test whether the TVA model is validly applicable also under dual-task conditions, and whether the robustness of parameter estimates is comparable in single- and dual-task conditions. Twenty-four subjects of middle to higher age performed a continuous tapping task, and a visual processing task (whole report of briefly presented letter arrays) under both single- and dual-task conditions. Results suggest a decline of both visual processing speed and VSTM storage capacity under dual-task conditions, while the perceptual threshold remained unaffected by a concurrent motor task. In addition, goodness-of-fit values and bootstrapping estimates support the notion that participants processed the visual task in a qualitatively comparable, although quantitatively less efficient, way under dual-task conditions. The results support a capacity sharing account of motor-cognitive dual tasking and suggest that even performing a relatively simple motor task relies on central attentional capacity that is necessary for efficient visual information uptake.
Does sensitivity in binary choice tasks depend on response modality?
Szumska, Izabela; van der Lubbe, Rob H J; Grzeczkowski, Lukasz; Herzog, Michael H
2016-07-01
In most models of vision, a stimulus is processed in a series of dedicated visual areas, leading to categorization of the stimulus and a possible decision, which may subsequently be mapped onto a motor response. In these models, stimulus processing is thought to be independent of the response modality. However, in theories of event coding, common coding, and sensorimotor contingency, stimuli may be very specifically mapped onto certain motor responses. Here, we compared performance in a shape localization task using three different response modalities: manual, saccadic, and verbal. Meta-contrast masking was employed at various inter-stimulus intervals (ISIs) to manipulate target visibility. Although we found major differences in reaction times across the three response modalities, accuracy remained at the same level for each response modality (and all ISIs). Our results support the view that stimulus-response (S-R) associations exist only for specific instances, such as reflexes or skills, but not for arbitrary S-R pairings. Copyright © 2016 Elsevier Inc. All rights reserved.
Categorization and Sensorimotor Interaction with Objects
ERIC Educational Resources Information Center
Iachini, Tina; Borghi, Anna M.; Senese, Vincenzo Paolo
2008-01-01
Three experiments were aimed at verifying whether the modality of interaction with objects and the goals defined by the task influences the weight of the properties used for categorization. In Experiment 1 we used everyday objects (cups and glasses). In order to exclude that the results depended on pre-stored categorical knowledge and to assess…
ERIC Educational Resources Information Center
Studer, Tobias; Hubner, Ronald
2008-01-01
In this study hemispheric asymmetries for categorizing objects at the basic versus subordinate level of abstraction were investigated. As predictions derived from different theoretical approaches are contradictory and experimental evidence is inconclusive in this regard, we conducted two categorization experiments, where we contrasted two…
Information-integration category learning and the human uncertainty response.
Paul, Erick J; Boomer, Joseph; Smith, J David; Ashby, F Gregory
2011-04-01
The human response to uncertainty has been well studied in tasks requiring attention and declarative memory systems. However, uncertainty monitoring and control have not been studied in multi-dimensional, information-integration categorization tasks that rely on non-declarative procedural memory. Three experiments are described that investigated the human uncertainty response in such tasks. Experiment 1 showed that following standard categorization training, uncertainty responding was similar in information-integration tasks and rule-based tasks requiring declarative memory. In Experiment 2, however, uncertainty responding in untrained information-integration tasks impaired the ability of many participants to master those tasks. Finally, Experiment 3 showed that the deficit observed in Experiment 2 was not because of the uncertainty response option per se, but rather because the uncertainty response provided participants a mechanism via which to eliminate stimuli that were inconsistent with a simple declarative response strategy. These results are considered in the light of recent models of category learning and metacognition.
Category learning increases discriminability of relevant object dimensions in visual cortex.
Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel
2013-04-01
Learning to categorize objects can transform how they are perceived, causing relevant perceptual dimensions predictive of object category to become enhanced. For example, an expert mycologist might become attuned to species-specific patterns of spacing between mushroom gills but learn to ignore cap textures attributable to varying environmental conditions. These selective changes in perception can persist beyond the act of categorizing objects and influence our ability to discriminate between them. Using functional magnetic resonance imaging adaptation, we demonstrate that such category-specific perceptual enhancements are associated with changes in the neural discriminability of object representations in visual cortex. Regions within the anterior fusiform gyrus became more sensitive to small variations in shape that were relevant during prior category learning. In addition, extrastriate occipital areas showed heightened sensitivity to small variations in shape that spanned the category boundary. Visual representations in cortex, just like our perception, are sensitive to an object's history of categorization.
Pomplun, M; Reingold, E M; Shen, J
2001-09-01
In three experiments, participants' visual span was measured in a comparative visual search task in which they had to detect a local match or mismatch between two displays presented side by side. Experiment 1 manipulated the difficulty of the comparative visual search task by contrasting a mismatch detection task with a substantially more difficult match detection task. In Experiment 2, participants were tested in a single-task condition involving only the visual task and a dual-task condition in which they concurrently performed an auditory task. Finally, in Experiment 3, participants performed two dual-task conditions, which differed in the difficulty of the concurrent auditory task. Both the comparative search task difficulty (Experiment 1) and the divided attention manipulation (Experiments 2 and 3) produced strong effects on visual span size.
Fengler, Ineke; Delfau, Pia-Céline; Röder, Brigitte
2018-04-01
It is yet unclear whether congenitally deaf cochlear implant (CD CI) users' visual and multisensory emotion perception is influenced by their history of sign language acquisition. We hypothesized that early-signing CD CI users, relative to late-signing CD CI users and hearing, non-signing controls, show better facial expression recognition and rely more on the facial cues of audio-visual emotional stimuli. Two groups of young adult CD CI users, early signers (ES CI users; n = 11) and late signers (LS CI users; n = 10), and a group of hearing, non-signing, age-matched controls (n = 12) performed an emotion recognition task with auditory, visual, and cross-modal emotionally congruent and incongruent speech stimuli. On different trials, participants categorized either the facial or the vocal expressions. The ES CI users more accurately recognized affective prosody than the LS CI users in the presence of congruent facial information. Furthermore, the ES CI users, but not the LS CI users, gained more than the controls from congruent visual stimuli when recognizing affective prosody. Both CI groups performed overall worse than the controls in recognizing affective prosody. These results suggest that early sign language experience affects multisensory emotion perception in CD CI users.
A conflict-based model of color categorical perception: evidence from a priming study.
Hu, Zhonghua; Hanley, J Richard; Zhang, Ruiling; Liu, Qiang; Roberson, Debi
2014-10-01
Categorical perception (CP) of color manifests as faster or more accurate discrimination of two shades of color that straddle a category boundary (e.g., one blue and one green) than of two shades from within the same category (e.g., two different shades of green), even when the differences between the pairs of colors are equated according to some objective metric. The results of two experiments provide new evidence for a conflict-based account of this effect, in which CP is caused by competition between visual and verbal/categorical codes on within-category trials. According to this view, conflict arises because the verbal code indicates that the two colors are the same, whereas the visual code indicates that they are different. In Experiment 1, two shades from the same color category were discriminated significantly faster when the previous trial also comprised a pair of within-category colors than when the previous trial comprised a pair from two different color categories. Under the former circumstances, the CP effect disappeared. According to the conflict-based model, response conflict between visual and categorical codes during discrimination of within-category pairs produced an adjustment of cognitive control that reduced the weight given to the categorical code relative to the visual code on the subsequent trial. Consequently, responses on within-category trials were facilitated, and CP effects were reduced. The effectiveness of this conflict-based account was evaluated in comparison with an alternative view that CP reflects temporary warping of perceptual space at the boundaries between color categories.
Distance comparisons in virtual reality: effects of path, context, and age
van der Ham, Ineke J. M.; Baalbergen, Heleen; van der Heijden, Peter G. M.; Postma, Albert; Braspenning, Merel; van der Kuil, Milan N. A.
2015-01-01
In this large scale, individual differences study (N = 521), the effects of cardinal axes of an environment and the path taken between locations on distance comparisons were assessed. The main goal was to identify if and to what extent previous findings in simple 2D tasks can be generalized to a more dynamic, three-dimensional virtual reality environment. Moreover, effects of age and gender were assessed. After memorizing the locations of six objects in a circular environment, participants were asked to judge the distance between objects they encountered. Results indicate that categorization (based on the cardinal axes) was present, as distances within one quadrant were judged as being closer together, even when no visual indication of the cardinal axes was given. Moreover, strong effects of the path taken between object locations were found; objects that were near on the path taken were perceived as being closer together than objects that were further apart on this path, regardless of the metric distance between the objects. Males outperformed females in distance comparison, but did not differ in the extent of the categorization and path effects. Age also affected performance; the categorization and path effects were highly similar across the age range tested, but the general ability to estimate distances shows a clear pattern of increase during development and decrease with aging. PMID:26321968
Categorization is modulated by transcranial direct current stimulation over left prefrontal cortex.
Lupyan, Gary; Mirman, Daniel; Hamilton, Roy; Thompson-Schill, Sharon L
2012-07-01
Humans have an unparalleled ability to represent objects as members of multiple categories. A given object, such as a pillow, may be represented, depending on current task demands, as an instance of something that is soft, as something that contains feathers, as something that is found in bedrooms, or something that is larger than a toaster. This type of processing requires the individual to dynamically highlight task-relevant properties and abstract over or suppress object properties that, although salient, are not relevant to the task at hand. Neuroimaging and neuropsychological evidence suggests that this ability may depend on cognitive control processes associated with the left inferior prefrontal gyrus. Here, we show that stimulating the left inferior frontal cortex using transcranial direct current stimulation alters performance of healthy subjects on a simple categorization task. Our task required subjects to select pictures matching a description, e.g., "click on all the round things." Cathodal stimulation led to poorer performance on classification trials requiring attention to specific dimensions such as color or shape as opposed to trials that required selecting items belonging to a more thematic category such as objects that hold water. A polarity reversal (anodal stimulation) lowered the threshold for selecting items that were more weakly associated with the target category. These results illustrate the role of frontally-mediated control processes in categorization and suggest potential interactions between categorization, cognitive control, and language. Copyright © 2012 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Yamazaki, Y.; Aust, U.; Huber, L.; Hausmann, M.; Gunturkun, O.
2007-01-01
This study was aimed at revealing which cognitive processes are lateralized in visual categorizations of "humans" by pigeons. To this end, pigeons were trained to categorize pictures of humans and then tested binocularly or monocularly (left or right eye) on the learned categorization and for transfer to novel exemplars (Experiment 1). Subsequent…
Schwibbe, Anja; Kothe, Christian; Hampe, Wolfgang; Konradt, Udo
2016-10-01
Sixty years of research have not added up to a concordant evaluation of the influence of spatial and manual abilities on dental skill acquisition. We used Ackerman's theory of ability determinants of skill acquisition to explain the influence of spatial visualization and manual dexterity on the task performance of dental students in two consecutive preclinical technique courses. We measured spatial and manual abilities of applicants to Hamburg Dental School by means of a multiple choice test on Technical Aptitude and a wire-bending test, respectively. Preclinical dental technique tasks were categorized as consistent-simple and inconsistent-complex based on their contents. For analysis, we used robust regression to circumvent typical limitations in dental studies like small sample size and non-normal residual distributions. We found that manual, but not spatial ability exhibited a moderate influence on the performance in consistent-simple tasks during dental skill acquisition in preclinical dentistry. Both abilities revealed a moderate relation with the performance in inconsistent-complex tasks. These findings support the hypotheses which we had postulated on the basis of Ackerman's work. Therefore, spatial as well as manual ability are required for the acquisition of dental skills in preclinical technique courses. These results support the view that both abilities should be addressed in dental admission procedures in addition to cognitive measures.
An information theory account of late frontoparietal ERP positivities in cognitive control.
Barceló, Francisco; Cooper, Patrick S
2018-03-01
ERP research on task switching has revealed distinct transient and sustained positive waveforms (latency circa 300-900 ms) while shifting task rules or stimulus-response (S-R) mappings. However, it remains unclear whether such switch-related positivities show similar scalp topography and index context-updating mechanisms akin to those posed for domain-general (i.e., classic P300) positivities in many task domains. To examine this question, ERPs were recorded from 31 young adults (18-30 years) while they were intermittently cued to switch or repeat their perceptual categorization of Gabor gratings varying in color and thickness (switch task), or else they performed two visually identical control tasks (go/no-go and oddball). Our task cueing paradigm examined two temporarily distinct stages of proactive rule updating and reactive rule execution. A simple information theory model helped us gauge cognitive demands under distinct temporal and task contexts in terms of low-level S-R pathways and higher-order rule updating operations. Task demands modulated domain-general positivities (indexed by the classic oddball P3) and switch positivities, indexed by both a cue-locked late positive complex and a sustained positivity following task transitions. Topographic scalp analyses confirmed subtle yet significant split-second changes in the configuration of neural sources for both domain-general P3s and switch positivities as a function of both the temporal and task context. These findings partly meet predictions from information estimates, and are compatible with a family of P3-like potentials indexing functionally distinct neural operations within a common frontoparietal "multiple demand" system during the preparation and execution of simple task rules. © 2016 Society for Psychophysiological Research.
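The abstract does not specify the information theory model; as a hedged illustration of the underlying idea, rare switch cues carry more self-information (surprisal) than frequent repeat cues, and so should place greater demands on rule updating. The cue probabilities below are hypothetical, chosen only for the example.

```python
import math

def surprisal(p):
    """Self-information, in bits, of an event with probability p."""
    return -math.log2(p)

# Hypothetical cue probabilities in a task-cueing paradigm: repeat cues
# are frequent, switch cues rare, so switches carry more information.
p_repeat, p_switch = 0.7, 0.3
print(surprisal(p_repeat))  # ~0.51 bits
print(surprisal(p_switch))  # ~1.74 bits
```

On this view, higher-surprisal events should coincide with larger updating-related positivities, which is one way to derive quantitative predictions from the paradigm's trial statistics.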
Implicit Self-Importance in an Interpersonal Pronoun Categorization Task.
Fetterman, Adam K; Robinson, Michael D; Gilbertson, Elizabeth P
2014-06-01
Object relations theories emphasize the manner in which the salience/importance of implicit representations of self and other guide interpersonal functioning. Two studies and a pilot test (total N = 304) sought to model such representations. In dyadic contexts, the self is a "you" and the other is a "me", as verified in a pilot test. Study 1 then used a simple categorization task and found evidence for implicit self-importance: The pronoun "you" was categorized more quickly and accurately when presented in a larger font size, whereas the pronoun "me" was categorized more quickly and accurately when presented in a smaller font size. Study 2 showed that this pattern possesses value in understanding individual differences in interpersonal functioning. As predicted, arrogant people scored higher in implicit self-importance in the paradigm. Findings are discussed from the perspective of dyadic interpersonal dynamics.
Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading
O’Sullivan, Aisling E.; Crosse, Michael J.; Di Liberto, Giovanni M.; Lalor, Edmund C.
2017-01-01
Speech is a multisensory percept, comprising an auditory and visual component. While the content and processing pathways of audio speech have been well characterized, the visual component is less well understood. In this work, we expand current methodologies using system identification to introduce a framework that facilitates the study of visual speech in its natural, continuous form. Specifically, we use models based on the unheard acoustic envelope (E), the motion signal (M) and categorical visual speech features (V) to predict EEG activity during silent lipreading. Our results show that each of these models performs similarly at predicting EEG in visual regions and that respective combinations of the individual models (EV, MV, EM and EMV) provide an improved prediction of the neural activity over their constituent models. In comparing these different combinations, we find that the model incorporating all three types of features (EMV) outperforms the individual models, as well as both the EV and MV models, while it performs similarly to the EM model. Importantly, EM does not outperform EV and MV, which, considering the higher dimensionality of the V model, suggests that more data is needed to clarify this finding. Nevertheless, the performance of EMV, and comparisons of the subject performances for the three individual models, provides further evidence to suggest that visual regions are involved in both low-level processing of stimulus dynamics and categorical speech perception. This framework may prove useful for investigating modality-specific processing of visual speech under naturalistic conditions. PMID:28123363
Interval-level measurement with visual analogue scales in Internet-based research: VAS Generator.
Reips, Ulf-Dietrich; Funke, Frederik
2008-08-01
The present article describes VAS Generator (www.vasgenerator.net), a free Web service for creating a wide range of visual analogue scales that can be used as measurement devices in Web surveys and Web experimentation, as well as for local computerized assessment. A step-by-step example for creating and implementing a visual analogue scale with visual feedback is given. VAS Generator and the scales it generates work independently of platforms and use the underlying languages HTML and JavaScript. Results from a validation study with 355 participants are reported and show that the scales generated with VAS Generator approximate an interval-scale level. In light of previous research on visual analogue versus categorical (e.g., radio button) scales in Internet-based research, we conclude that categorical scales only reach ordinal-scale level, and thus visual analogue scales are to be preferred whenever possible.
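The core of any computerized VAS is mapping a click position along the line to an interval-level score. The sketch below is an illustrative reimplementation of that mapping, not VAS Generator's actual HTML/JavaScript code; the parameter names and the 0-100 range are assumptions.

```python
def vas_score(click_x, line_start_x, line_length_px, scale_max=100):
    """Convert a horizontal click position (in pixels) on a visual
    analogue scale to a score in [0, scale_max]."""
    rel = (click_x - line_start_x) / line_length_px
    rel = min(max(rel, 0.0), 1.0)  # clamp clicks just past the line ends
    return rel * scale_max

# A click at the midpoint of a 500-px line starting at x = 200:
print(vas_score(450, 200, 500))  # 50.0
```

Because every pixel along the line maps linearly to the score, responses are continuous rather than forced into ordinal radio-button bins, which is the property the validation study tested.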
Parietal substrates for dimensional effects in visual search: evidence from lesion-symptom mapping
Humphreys, Glyn W.; Chechlacz, Magdalena
2013-01-01
In visual search, the detection of pop-out targets is facilitated when the target-defining dimension remains the same compared with when it changes across trials. We tested the brain regions necessary for these dimensional carry-over effects using a voxel-based morphometry study with brain-lesioned patients. Participants had to search for targets defined by either their colour (red or blue) or orientation (right- or left-tilted), and the target dimension either stayed the same or changed on consecutive trials. Twenty-five patients were categorized according to whether they showed an effect of dimensional change on search or not. The two groups did not differ with regard to their performance on several working memory tasks, and the dimensional carry-over effects were not correlated with working memory performance. With spatial, sustained attention and working memory deficits as well as lesion volume controlled, damage within the right inferior parietal lobule (the angular and supramarginal gyri) extending into the intraparietal sulcus was associated with an absence of dimensional carry-over (P < 0.001, cluster-level corrected for multiple comparisons). The data suggest that these regions of parietal cortex are necessary to implement attention shifting in the context of visual dimensional change. PMID:23404335
Overcoming default categorical bias in spatial memory.
Sampaio, Cristina; Wang, Ranxiao Frances
2010-12-01
In the present study, we investigated whether a strong default categorical bias can be overcome in spatial memory by using alternative membership information. In three experiments, we tested location memory in a circular space while providing participants with an alternative categorization. We found that visual presentation of the boundaries of the alternative categories (Experiment 1) did not induce the use of the alternative categories in estimation. In contrast, visual cuing of the alternative category membership of a target (Experiment 2) and unique target feature information associated with each alternative category (Experiment 3) successfully led to the use of the alternative categories in estimation. Taken together, the results indicate that default categorical bias in spatial memory can be overcome when appropriate cues are provided. We discuss how these findings expand the category adjustment model (Huttenlocher, Hedges, & Duncan, 1991) in spatial memory by proposing a retrieval-based category adjustment (RCA) model.
Chang, Chien-Kai; Ho, Mary Wen-Reng; Chien, Sarina Hui-Lin
2018-01-01
People go beyond the inferences afforded by a person’s observable features to make guesses about personality traits or even social memberships such as political affiliations. The present study extended Hu et al. (2016) to further investigate, in three experiments, the influence of provincial appearance on differentiating KMT (Kuomintang) and DPP (Democratic Progressive Party) candidates from headshot photos. In Experiment 1 (Membership categorization task), participants categorized the photos from the pilot study (where the difference between the perceived age of KMT and DPP candidates was reduced), which were divided into four blocks by perceived age. We found that participants were able to distinguish KMT from DPP candidates significantly better than chance, even when the perceived age difference between the two parties was minimized. In Experiment 2 (Trait rating task), we asked young and middle-aged adults to rate candidates’ photos on six traits. We found that “provincial appearance” is the core trait differentiating the two parties for both young and older participants, while “facial maturity” is another trait for older participants only. In Experiment 3 (Double categorization task), we asked participants to categorize the photos from Experiment 1 on their membership (KMT or DPP) and on provincial appearance (mainlander or native Taiwanese) in two separate sessions. Results showed that young adults were likely to use “provincial appearance” as the main characteristic cue to categorize candidates’ political membership. In sum, our study showed that Taiwanese adults could categorize the two parties by their headshot photos when the age cue was eliminated. Moreover, provincial appearance was the most critical trait for differentiating between KMT and DPP candidates, which may reflect a piece of significant history during the development of the two parties. PMID:29618993
The affective regulation of cognitive priming.
Storbeck, Justin; Clore, Gerald L
2008-04-01
Semantic and affective priming are classic effects observed in cognitive and social psychology, respectively. The authors discovered that affect regulates such priming effects. In Experiment 1, positive and negative moods were induced before one of three priming tasks: evaluation, categorization, or lexical decision. As predicted, positive affect led to both affective priming (evaluation task) and semantic priming (category and lexical decision tasks). However, negative affect inhibited such effects. In Experiment 2, participants in their natural affective state completed the same priming tasks as in Experiment 1. As expected, affective priming (evaluation task) and category priming (categorization and lexical decision tasks) were observed in such resting affective states. Hence, the authors conclude that negative affect inhibits semantic and affective priming. These results support recent theoretical models, which suggest that positive affect promotes associations among strong and weak concepts, and that negative affect impairs such associations (Clore & Storbeck, 2006; Kuhl, 2000). Copyright © 2008 APA.
Xiao, Youping; Kavanau, Christopher; Bertin, Lauren; Kaplan, Ehud
2011-01-01
Many studies have provided evidence for the existence of universal constraints on color categorization or naming in various languages, but the biological basis of these constraints is unknown. A recent study of the pattern of color categorization across numerous languages has suggested that these patterns tend to avoid straddling a region in color space at or near the border between the English composite categories of "warm" and "cool". This fault line in color space represents a fundamental constraint on color naming. Here we report that the two-way categorization along the fault line is correlated with the sign of the L- versus M-cone contrast of a stimulus color. Moreover, we found that the sign of the L-M cone contrast also accounted for the two-way clustering of the spatially distributed neural responses in small regions of the macaque primary visual cortex, visualized with optical imaging. These small regions correspond to the hue maps, where our previous study found a spatially organized representation of stimulus hue. Altogether, these results establish a direct link between a universal constraint on color naming and the cone-specific information that is represented in the primate early visual system.
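The reported link is between the warm/cool categorization and the sign of the L- versus M-cone contrast of a stimulus relative to the adapting background. The sketch below is illustrative only: the particular contrast definition (difference of Weber contrasts) and the cone values are assumptions, not the study's exact computation.

```python
def warm_or_cool(l, m, l_bg, m_bg):
    """Classify a color as 'warm' or 'cool' from the sign of its
    L- versus M-cone contrast relative to the background.
    Inputs are (hypothetical) L- and M-cone excitations."""
    lm_contrast = (l - l_bg) / l_bg - (m - m_bg) / m_bg
    return "warm" if lm_contrast > 0 else "cool"

print(warm_or_cool(1.10, 1.00, 1.0, 1.0))  # L driven harder than M -> warm
print(warm_or_cool(0.95, 1.05, 1.0, 1.0))  # M driven harder than L -> cool
```

The point of the illustration is that a single cone-opponent sign, available early in the visual pathway, suffices to implement the two-way split along the color-naming fault line.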
Rossion, Bruno; Torfs, Katrien; Jacques, Corentin; Liu-Shuang, Joan
2015-01-16
We designed a fast periodic visual stimulation approach to identify an objective signature of face categorization incorporating both visual discrimination (from nonface objects) and generalization (across widely variable face exemplars). Scalp electroencephalographic (EEG) data were recorded in 12 human observers viewing natural images of objects at a rapid frequency of 5.88 images/s for 60 s. Natural images of faces were interleaved every five stimuli, i.e., at 1.18 Hz (5.88/5). Face categorization was indexed by a high signal-to-noise ratio response, specifically at an oddball face stimulation frequency of 1.18 Hz and its harmonics. This face-selective periodic EEG response was highly significant for every participant, even for a single 60-s sequence, and was generally localized over the right occipitotemporal cortex. The periodicity constraint and the large selection of stimuli ensured that this selective response to natural face images was free of low-level visual confounds, as confirmed by the absence of any oddball response for phase-scrambled stimuli. Without any subtraction procedure, time-domain analysis revealed a sequence of differential face-selective EEG components between 120 and 400 ms after oddball face image onset, progressing from medial occipital (P1-faces) to occipitotemporal (N1-faces) and anterior temporal (P2-faces) regions. Overall, this fast periodic visual stimulation approach provides a direct signature of natural face categorization and opens an avenue for efficiently measuring categorization responses of complex visual stimuli in the human brain. © 2015 ARVO.
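The logic of the frequency-tagging measure can be sketched numerically: with images at roughly 6 Hz and a face every fifth image (roughly 1.2 Hz), a face-selective response appears as a high-SNR spectral peak at the oddball frequency. This is a hedged toy simulation, not the authors' analysis pipeline; the sampling rate, noise level, and rounded frequencies (chosen to fall on exact 1/60 Hz bins) are assumptions.

```python
import math
import random

random.seed(1)
fs, dur = 128.0, 60.0              # assumed sampling rate and record length
n = int(fs * dur)
base_f, oddball_f = 6.0, 1.2       # rounded image rate and oddball rate

# Synthetic "EEG": a response at the image rate, a smaller face-selective
# response at the oddball rate, plus white noise.
sig = [math.sin(2 * math.pi * base_f * i / fs)
       + 0.4 * math.sin(2 * math.pi * oddball_f * i / fs)
       + 0.2 * random.gauss(0, 1)
       for i in range(n)]

def amplitude(f):
    """Single-frequency DFT amplitude of sig at f (Hz)."""
    re = sum(x * math.cos(2 * math.pi * f * i / fs) for i, x in enumerate(sig))
    im = sum(x * math.sin(2 * math.pi * f * i / fs) for i, x in enumerate(sig))
    return 2 * math.hypot(re, im) / n

def snr(f, n_neighbors=8):
    """Amplitude at f divided by the mean amplitude of nearby bins."""
    df = 1.0 / dur  # frequency resolution of a 60-s recording
    neighbors = [amplitude(f + k * df) for k in range(2, 2 + n_neighbors)]
    neighbors += [amplitude(f - k * df) for k in range(2, 2 + n_neighbors)]
    return amplitude(f) / (sum(neighbors) / len(neighbors))

print(snr(oddball_f) > snr(1.5))  # oddball bin stands far above the noise
```

Because the response of interest is confined to a known frequency bin while noise spreads across all bins, even a modest face-selective response yields a large SNR, which is what makes a single 60-s sequence sufficient per participant.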
ERIC Educational Resources Information Center
Roberts, Kenneth
1997-01-01
Infants (N=24) with history of otitis media and tube placement were tested for categorical responding within a visual familiarization-discrimination model. Findings suggest that even mild hearing loss may adversely affect categorical responding under specific input conditions, which may persist after normal hearing is restored, possibly because…
Tracy, J I; Pinsk, M; Helverson, J; Urban, G; Dietz, T; Smith, D J
2001-08-01
The link between automatic and effortful processing and nonanalytic and analytic category learning was evaluated in a sample of 29 college undergraduates using declarative memory, semantic category search, and pseudoword categorization tasks. Automatic and effortful processing measures were hypothesized to be associated with nonanalytic and analytic categorization, respectively. Results suggested that, contrary to prediction, strong criterion-attribute (analytic) responding on the pseudoword categorization task was associated with strong automatic, implicit memory encoding of frequency-of-occurrence information. Data are discussed in terms of the possibility that criterion-attribute category knowledge, once established, may be expressed with few attentional resources. The data indicate that attention resource requirements, even for the same stimuli and task, vary depending on the category rule system utilized. Also, the automaticity emerging from familiarity with analytic category exemplars is very different from the automaticity arising from extensive practice on a semantic category search task. The data do not support any simple mapping of analytic and nonanalytic forms of category learning onto the automatic and effortful processing dichotomy and challenge simple models of brain asymmetries for such procedures. Copyright 2001 Academic Press.
Balikou, Panagiota; Gourtzelidis, Pavlos; Mantas, Asimakis; Moutoussis, Konstantinos; Evdokimidis, Ioannis; Smyrnis, Nikolaos
2015-11-01
The representation of visual orientation is more accurate for cardinal orientations compared to oblique, and this anisotropy has been hypothesized to reflect a low-level visual process (visual, "class 1" oblique effect). The reproduction of directional and orientation information also leads to a mean error away from cardinal orientations or directions. This anisotropy has been hypothesized to reflect a high-level cognitive process of space categorization (cognitive, "class 2," oblique effect). This space categorization process would be more prominent when the visual representation of orientation degrades such as in the case of working memory with increasing cognitive load, leading to increasing magnitude of the "class 2" oblique effect, while the "class 1" oblique effect would remain unchanged. Two experiments were performed in which an array of orientation stimuli (1-4 items) was presented and then subjects had to realign a probe stimulus within the previously presented array. In the first experiment, the delay between stimulus presentation and probe varied, while in the second experiment, the stimulus presentation time varied. The variable error was larger for oblique compared to cardinal orientations in both experiments, reproducing the visual "class 1" oblique effect. The mean error also reproduced the tendency away from cardinal and toward the oblique orientations in both experiments (cognitive "class 2" oblique effect). The accuracy of the reproduced orientation degraded (increasing variable error) and the cognitive "class 2" oblique effect increased with increasing memory load (number of items) in both experiments and presentation time in the second experiment. These results confirmed the theoretical predictions for the two anisotropies in visual orientation reproduction and provided support for models proposing the categorization of orientation in visual working memory.
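The dissociation rests on two standard error measures: the mean (signed) error indexes the systematic bias away from cardinal orientations (the cognitive, "class 2" effect), while the variable error (the spread of responses) indexes precision (the visual, "class 1" effect). A minimal sketch, with made-up reproduction data for a target 10 degrees from vertical:

```python
import statistics

def error_measures(reproduced_deg, target_deg):
    """Return (mean error, variable error) for a set of orientation
    reproductions of a single target, in degrees."""
    errors = [r - target_deg for r in reproduced_deg]
    return statistics.mean(errors), statistics.stdev(errors)

# Hypothetical responses, biased away from the nearby cardinal axis:
responses = [12.0, 13.5, 11.0, 14.0, 12.5]
mean_err, var_err = error_measures(responses, 10.0)
print(mean_err)  # 2.6 -> systematic shift toward the oblique
print(var_err)   # spread of responses, independent of the bias
```

Separating the two measures is what lets load manipulations selectively inflate the mean-error bias while leaving the precision anisotropy untouched, as the experiments report.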
Constantinidou, Fofi; Thomas, Robin D; Scharp, Victoria L; Laske, Kate M; Hammerly, Mark D; Guitonde, Suchita
2005-01-01
Previous research suggests that traumatic brain injury (TBI) interferes with the ability to extract and use attributes to describe objects. This study explored the effects of a systematic Categorization Program (CP) in participants with TBI and noninjured controls. Ten persons with moderate to severe TBI who received comprehensive postacute rehabilitation services and 13 matched noninjured controls participated in the study. All participants received CP training for 3 to 5 hours per week for 10 to 12 weeks that consisted of 8 levels and targeted concept formation, object categorization, and decision-making abilities. Outcome measures included the Mayo-Portland Adaptability Inventory-3 (MPAI-3) and the Community Integration Questionnaire (CIQ). Two Categorization Tests (administered pretraining and posttraining) and 3 Probe Tasks (administered at specified intervals during training) assessed skills relating to categorization. Both groups showed significant improvement in categorization performance after the CP training on the 2 Categorization Tests related to the CP. They also were able to generalize and apply categorization and sorting skills in new situations (as measured by the Probe Tasks). Participants with TBI had improved functional outcome performance measured by the MPAI-3 and the CIQ. The systematic and hierarchical structure of the CP is beneficial to participants with TBI during postacute rehabilitation. This study contributes to the growing body of evidence supporting cognitive rehabilitation after moderate to severe TBI.
Tsai, Chen-Gia; Chen, Chien-Chung; Wen, Ya-Chien; Chou, Tai-Li
2015-01-01
In human cultures, the perceptual categorization of musical pitches relies on pitch-naming systems. A sung pitch name concurrently holds the information of fundamental frequency and pitch name. These two aspects may be either congruent or incongruent with regard to pitch categorization. The present study aimed to compare the neuromagnetic responses to musical and verbal stimuli in congruency judgments, for example, judging whether the pitch C4 is sung with the congruent pitch name do in a C-major context (the pitch-semantic task), or whether the meaning of a word matches the speaker’s identity (the voice-semantic task). Both the behavioral data and neuromagnetic data showed that congruency detection of the speaker’s identity and word meaning was slower than that of the pitch and pitch name. Congruency effects of musical stimuli revealed that pitch categorization and semantic processing of pitch information were associated with P2m and N400m, respectively. For verbal stimuli, P2m and N400m did not show any congruency effect. In both the pitch-semantic task and the voice-semantic task, we found that incongruent stimuli evoked stronger slow waves with a latency of 500–600 ms than congruent stimuli. These findings shed new light on the neural mechanisms underlying pitch-naming processes. PMID:26347638
Task Switching Effects in Anticipation Timing
ERIC Educational Resources Information Center
Fairbrother, Jeffrey T.; Brueckner, Sebastian
2008-01-01
To understand how task switching affects human performance, there is a need to investigate how it influences the performance of tasks other than those involving bivalent stimulus categorization. The purpose of this study, therefore, was to investigate the effects of task switching on anticipation timing performance, which typically requires…
Direct manipulation of virtual objects
NASA Astrophysics Data System (ADS)
Nguyen, Long K.
Interacting with a Virtual Environment (VE) generally requires the user to correctly perceive the relative position and orientation of virtual objects. For applications requiring interaction in personal space, the user may also need to accurately judge the position of the virtual object relative to that of a real object, for example, a virtual button and the user's real hand. This is difficult since VEs generally only provide a subset of the cues experienced in the real world. Complicating matters further, VEs presented by currently available visual displays may be inaccurate or distorted due to technological limitations. Fundamental physiological and psychological aspects of vision as they pertain to the task of object manipulation were thoroughly reviewed. Other sensory modalities -- proprioception, haptics, and audition -- and their cross-interactions with each other and with vision are briefly discussed. Visual display technologies, the primary component of any VE, were canvassed and compared. Current applications and research were gathered and categorized by different VE types and object interaction techniques. While object interaction research abounds in the literature, pockets of research gaps remain. Direct, dexterous, manual interaction with virtual objects in Mixed Reality (MR), where the real, seen hand accurately and effectively interacts with virtual objects, has not yet been fully quantified. An experimental test bed was designed to provide the highest accuracy attainable for salient visual cues in personal space. Optical alignment and user calibration were carefully performed. The test bed accommodated the full continuum of VE types and sensory modalities for comprehensive comparison studies. Experimental designs included two sets, each measuring depth perception and object interaction. The first set addressed the extreme end points of the Reality-Virtuality (R-V) continuum -- Immersive Virtual Environment (IVE) and Reality Environment (RE). 
This validated, linked, and extended several previous research findings, using one common test bed and participant pool. The results provided a proven method and solid reference points for further research. The second set of experiments leveraged the first to explore the full R-V spectrum and included additional, relevant sensory modalities. It consisted of two full-factorial experiments providing for rich data and key insights into the effect of each type of environment and each modality on accuracy and timeliness of virtual object interaction. The empirical results clearly showed that mean depth perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed. Likewise, mean error for the simple task of pushing a button was less than four millimeters whether the button was real or virtual. Mean task completion time was less than one second. Key to the high accuracy and quick task performance time observed was the correct presentation of the visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance results already near optimal level with accurate visual cues presented, adding proprioception, audio, and haptic cues did not significantly improve performance. Recommendations for future research include enhancement of the visual display and further experiments with more complex tasks and additional control variables.
Lupyan, Gary
2009-08-01
In addition to its communicative functions, language has been argued to have a variety of extracommunicative functions, as assessed by its causal involvement in putatively nonlinguistic tasks. In the present work, I argue that language may be critically involved in the ability of human adults to categorize objects on a specific dimension (e.g., color) while abstracting over other dimensions (e.g., size). This ability is frequently impaired in aphasic patients. The present work demonstrates that normal participants placed under conditions of verbal interference show a pattern of deficits strikingly similar to that of aphasic patients: impaired taxonomic categorization along perceptual dimensions, and preserved thematic categorization. A control experiment using a visuospatial-interference task failed to find this selective pattern of deficits. The present work has implications for understanding the online role of language in normal cognition and supports the claim that language is causally involved in nonverbal cognition.
Implicit Self-Importance in an Interpersonal Pronoun Categorization Task
Fetterman, Adam K.; Robinson, Michael D.; Gilbertson, Elizabeth P.
2014-01-01
Object relations theories emphasize the manner in which the salience/importance of implicit representations of self and other guide interpersonal functioning. Two studies and a pilot test (total N = 304) sought to model such representations. In dyadic contexts, the self is a “you” and the other is a “me”, as verified in a pilot test. Study 1 then used a simple categorization task and found evidence for implicit self-importance: The pronoun “you” was categorized more quickly and accurately when presented in a larger font size, whereas the pronoun “me” was categorized more quickly and accurately when presented in a smaller font size. Study 2 showed that this pattern possesses value in understanding individual differences in interpersonal functioning. As predicted, arrogant people scored higher in implicit self-importance in the paradigm. Findings are discussed from the perspective of dyadic interpersonal dynamics. PMID:25419089
Pavlides, Michael; Birks, Jacqueline; Fryer, Eve; Delaney, David; Sarania, Nikita; Banerjee, Rajarshi; Neubauer, Stefan; Barnes, Eleanor; Fleming, Kenneth A; Wang, Lai Mun
2017-04-01
The aim of the study was to investigate the interobserver agreement for categorical and quantitative scores of liver fibrosis. Sixty-five consecutive biopsy specimens from patients with mixed liver disease etiologies were assessed by three pathologists using the Ishak and nonalcoholic steatohepatitis Clinical Research Network (NASH CRN) scoring systems, and the fibrosis area (collagen proportionate area [CPA]) was estimated by visual inspection (visual-CPA). A subset of 20 biopsy specimens was analyzed using digital imaging analysis (DIA) for the measurement of CPA (DIA-CPA). The bivariate weighted κ between any two pathologists ranged from 0.57 to 0.67 for Ishak staging and from 0.47 to 0.57 for the NASH CRN staging. Bland-Altman analysis showed poor agreement between all possible pathologist pairings for visual-CPA but good agreement between all pathologist pairings for DIA-CPA. There was good agreement between the two pathologists who assessed biopsy specimens by visual-CPA and DIA-CPA. The intraclass correlation coefficient, which is equivalent to the κ statistic for continuous variables, was 0.78 for visual-CPA and 0.97 for DIA-CPA. These results suggest that DIA-CPA is the most robust method for assessing liver fibrosis followed by visual-CPA. Categorical scores perform less well than both the quantitative CPA scores assessed here.
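As a hedged illustration of the agreement statistics used above, a quadratic-weighted Cohen's kappa for two raters' ordinal stage scores can be computed in a few lines of Python; the function name and the toy scores are illustrative, not from the study:

```python
from collections import Counter

def quadratic_weighted_kappa(r1, r2, k):
    """Quadratic-weighted Cohen's kappa for two raters' ordinal scores in 0..k-1.

    Weights penalize disagreements by squared distance between stages, so
    near-misses count as partial agreement.
    """
    n = len(r1)
    # Weight matrix: 1 on the diagonal, falling off quadratically with |i - j|.
    w = [[1 - ((i - j) ** 2) / ((k - 1) ** 2) for j in range(k)] for i in range(k)]
    obs = Counter(zip(r1, r2))          # joint counts of (rater1, rater2) stages
    m1, m2 = Counter(r1), Counter(r2)   # marginal counts per rater
    # Observed weighted agreement.
    po = sum(w[i][j] * obs[(i, j)] / n for i in range(k) for j in range(k))
    # Expected weighted agreement under independent marginals.
    pe = sum(w[i][j] * (m1[i] / n) * (m2[j] / n) for i in range(k) for j in range(k))
    return (po - pe) / (1 - pe)
```

With quadratic weights this statistic closely approximates the intraclass correlation coefficient, which is why the abstract describes the ICC as the kappa equivalent for continuous variables.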
Kéri, Szabolcs; Kiss, Imre; Kelemen, Oguz; Benedek, György; Janka, Zoltán
2005-10-01
Schizophrenia is associated with impaired visual information processing. The aim of this study was to investigate the relationship between anomalous perceptual experiences, positive and negative symptoms, perceptual organization, rapid categorization of natural images and magnocellular (M) and parvocellular (P) visual pathway functioning. Thirty-five unmedicated patients with schizophrenia and 20 matched healthy control volunteers participated. Anomalous perceptual experiences were assessed with the Bonn Scale for the Assessment of Basic Symptoms (BSABS). General intellectual functions were evaluated with the revised version of the Wechsler Adult Intelligence Scale. The 1-9 version of the Continuous Performance Test (CPT) was used to investigate sustained attention. The following psychophysical tests were used: detection of Gabor patches with collinear and orthogonal flankers (perceptual organization), categorization of briefly presented natural scenes (rapid visual processing), low-contrast and frequency-doubling vernier threshold (M pathway functioning), isoluminant colour vernier threshold and high spatial frequency discrimination (P pathway functioning). The patients with schizophrenia were impaired on tests of perceptual organization, rapid visual processing and M pathway functioning. There was a significant correlation between BSABS scores, negative symptoms, perceptual organization, rapid visual processing and M pathway functioning. Positive symptoms, IQ, CPT and P pathway measures did not correlate with these parameters. The best predictor of the BSABS score was the perceptual organization deficit. These results raise the possibility that multiple facets of visual information processing deficits can be explained by M pathway dysfunction in schizophrenia, resulting in impaired attentional modulation of perceptual organization and of natural image categorization.
ERIC Educational Resources Information Center
Gavin, Amanda; Roche, Bryan; Ruiz, Maria R.; Hogan, Maria; O'Reilly, Anthony
2012-01-01
The current study assessed the sexual categorization of children among a random sample of adults from the general population. Twenty-seven males and 27 females (N = 54) were exposed to a categorization task that assessed their ability to discriminate adult- from child-related words and sexual from non-sexual words. Then, in a modified Implicit…
Effects of spatial frequency content on classification of face gender and expression.
Aguado, Luis; Serrano-Pedraza, Ignacio; Rodríguez, Sonia; Román, Francisco J
2010-11-01
The role of different spatial frequency bands in face gender and expression categorization was studied in three experiments. Accuracy and reaction time were measured for unfiltered, low-pass (cut-off frequency of 1 cycle/deg) and high-pass (cut-off frequency of 3 cycles/deg) filtered faces. Filtered and unfiltered faces were equated in root-mean-squared contrast. For low-pass filtered faces, reaction times were longer than for unfiltered and high-pass filtered faces in both categorization tasks. In the expression task, these results were obtained with expressive faces presented in isolation (Experiment 1) and also with neutral-expressive dynamic sequences where each expressive face was preceded by a briefly presented neutral version of the same face (Experiment 2). For high-pass filtered faces, different effects were observed on gender and expression categorization. While both speed and accuracy of gender categorization were reduced compared to unfiltered faces, the efficiency of expression classification remained similar. Finally, we found no differences between expressive and nonexpressive faces in the effects of spatial frequency filtering on gender categorization (Experiment 3). These results show a common role of information from the high spatial frequency band in the categorization of face gender and expression.
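The filtered and unfiltered faces above were equated in root-mean-squared (RMS) contrast. A minimal sketch of that normalization step, assuming a flat list of grayscale intensities (the function name and values are hypothetical, not from the study):

```python
import math

def equate_rms_contrast(pixels, target_rms):
    """Rescale image contrast so RMS contrast (std of intensities divided by
    the mean intensity) matches target_rms, leaving the mean unchanged."""
    n = len(pixels)
    mean = sum(pixels) / n
    std = math.sqrt(sum((p - mean) ** 2 for p in pixels) / n)
    scale = target_rms * mean / std  # factor applied to deviations from the mean
    return [mean + (p - mean) * scale for p in pixels]
```

Applying the same target to a filtered and an unfiltered image removes overall contrast as a confound, so any performance difference can be attributed to the spatial frequency content itself.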
Daemi, Mehdi; Harris, Laurence R; Crawford, J Douglas
2016-01-01
Animals try to make sense of sensory information from multiple modalities by categorizing it into perceptions of individual or multiple external objects or internal concepts. For example, the brain constructs sensory, spatial representations of the locations of visual and auditory stimuli in the visual and auditory cortices based on retinal and cochlear stimulations. Currently, it is not known how the brain compares the temporal and spatial features of these sensory representations to decide whether they originate from the same or separate sources in space. Here, we propose a computational model of how the brain might solve such a task. We reduce the visual and auditory information to time-varying, finite-dimensional signals. We introduce controlled, leaky integrators as working memory that retains the sensory information for the limited time-course of task implementation. We propose our model within an evidence-based, decision-making framework, where the alternative plan units are saliency maps of space. A spatiotemporal similarity measure, computed directly from the unimodal signals, is suggested as the criterion to infer common or separate causes. We provide simulations that (1) validate our model against behavioral results from experiments in which participants were asked to report common or separate causes for cross-modal stimuli presented with arbitrary spatial and temporal disparities, (2) predict behavior in novel experiments where stimuli have different combinations of spatial, temporal, and reliability features, and (3) illustrate the dynamics of the proposed internal system. These results confirm our spatiotemporal similarity measure as a viable criterion for causal inference, and our decision-making framework as a viable mechanism for target selection, which may be used by the brain in cross-modal situations. 
Further, we suggest that a similar approach can be extended to other cognitive problems where working memory is a limiting factor, such as target selection among higher numbers of stimuli and selections among other modality combinations.
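A rough, illustrative reduction of the model components described above (leaky integrators as working memory plus a similarity criterion for causal inference) might look as follows; the Euler step, time constants, threshold, and the correlation-based similarity are assumptions made for the sketch, not the authors' exact formulation:

```python
import math

def leaky_integrate(signal, dt=0.01, tau=0.2):
    """Working-memory trace: dy/dt = (x - y) / tau, Euler-stepped over the signal."""
    y, trace = 0.0, []
    for x in signal:
        y += dt * (x - y) / tau
        trace.append(y)
    return trace

def similarity(a, b):
    """Normalized correlation between two equal-length traces (1 = same shape)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def infer_common_cause(visual, auditory, threshold=0.8):
    """Decide 'common cause' when the integrated traces are similar enough."""
    return similarity(leaky_integrate(visual), leaky_integrate(auditory)) > threshold
```

On toy inputs, temporally offset pulses yield a lower similarity than aligned ones, so a threshold on this measure can stand in for the common-versus-separate-cause decision.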
Lewellen, Mary Jo; Goldinger, Stephen D.; Pisoni, David B.; Greene, Beth G.
2012-01-01
College students were separated into 2 groups (high and low) on the basis of 3 measures: subjective familiarity ratings of words, self-reported language experiences, and a test of vocabulary knowledge. Three experiments were conducted to determine if the groups also differed in visual word naming, lexical decision, and semantic categorization. High Ss were consistently faster than low Ss in naming visually presented words. They were also faster and more accurate in making difficult lexical decisions and in rejecting homophone foils in semantic categorization. Taken together, the results demonstrate that Ss who differ in lexical familiarity also differ in processing efficiency. The relationship between processing efficiency and working memory accounts of individual differences in language processing is also discussed. PMID:8371087
A Functional Cartography of Cognitive Systems
Mattar, Marcelo G.; Cole, Michael W.; Thompson-Schill, Sharon L.; Bassett, Danielle S.
2015-01-01
One of the most remarkable features of the human brain is its ability to adapt rapidly and efficiently to external task demands. Novel and non-routine tasks, for example, are implemented faster than structural connections can be formed. The neural underpinnings of these dynamics are far from understood. Here we develop and apply novel methods in network science to quantify how patterns of functional connectivity between brain regions reconfigure as human subjects perform 64 different tasks. By applying dynamic community detection algorithms, we identify groups of brain regions that form putative functional communities, and we uncover changes in these groups across the 64-task battery. We summarize these reconfiguration patterns by quantifying the probability that two brain regions engage in the same network community (or putative functional module) across tasks. These tools enable us to demonstrate that classically defined cognitive systems—including visual, sensorimotor, auditory, default mode, fronto-parietal, cingulo-opercular and salience systems—engage dynamically in cohesive network communities across tasks. We define the network role that a cognitive system plays in these dynamics along the following two dimensions: (i) stability vs. flexibility and (ii) connected vs. isolated. The role of each system is therefore summarized by how stably that system is recruited over the 64 tasks, and how consistently that system interacts with other systems. Using this cartography, classically defined cognitive systems can be categorized as ephemeral integrators, stable loners, and anything in between. Our results provide a new conceptual framework for understanding the dynamic integration and recruitment of cognitive systems in enabling behavioral adaptability across both task and rest conditions. 
This work has important implications for understanding cognitive network reconfiguration during different task sets and its relationship to cognitive effort, individual variation in cognitive performance, and fatigue. PMID:26629847
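The summary statistic described above, the probability that two brain regions engage in the same network community across tasks, is often called a module-allegiance matrix. A minimal sketch, assuming one community-label partition per task (the labels in the usage example are made up, not the study's data):

```python
def allegiance(partitions):
    """Module-allegiance matrix: entry (i, j) is the fraction of partitions
    in which regions i and j carry the same community label.

    partitions: list of per-task partitions, each a list of community
    labels indexed by region.
    """
    n = len(partitions[0])
    t = len(partitions)
    P = [[0.0] * n for _ in range(n)]
    for part in partitions:
        for i in range(n):
            for j in range(n):
                if part[i] == part[j]:
                    P[i][j] += 1.0 / t
    return P
```

High off-diagonal values then mark region pairs that are stably co-recruited (stable systems), while values near chance mark flexible pairings (ephemeral integrators, in the abstract's terms).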
Opposite brain laterality in analogous auditory and visual tests.
Oltedal, Leif; Hugdahl, Kenneth
2017-11-01
Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results when compared to VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities for the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task, but a left half-field advantage for the visual task. The second experiment confirmed the findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding by Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.
Summary document of tasks performed on the Shuttle-C/NLS contract
NASA Technical Reports Server (NTRS)
1993-01-01
During FY92, USBI performed many programmatic related tasks. These programmatic tasks have been categorized as follows: (1) acquisition; (2) project engineering/program planning; and (3) cost. The reports associated with these tasks follow in paragraphs 1.1.1 through 1.3.3. Preceding each task report is a brief description of its contents.
Anderson, Afrouz A; Parsa, Kian; Geiger, Sydney; Zaragoza, Rachel; Kermanian, Riley; Miguel, Helga; Dashtestani, Hadis; Chowdhry, Fatima A; Smith, Elizabeth; Aram, Siamak; Gandjbakhche, Amir H
2018-01-01
Existing literature outlines the quality and location of activation in the prefrontal cortex (PFC) during working memory (WM) tasks. However, the effects of individual differences on the underlying neural process of WM tasks are still unclear. In this functional near infrared spectroscopy study, we administered a visual and an auditory n-back task to examine activation in the PFC while considering the influences of task performance and preferred learning strategy (VARK score). While controlling for age, results indicated that high performance (HP) subjects (accuracy > 90%) showed task-dependent lower activation in the PFC region compared to normal performance (NP) subjects. Specifically, HP groups showed lower activation in the left dorsolateral PFC (DLPFC) during the auditory task, whereas during the visual task they showed lower activation in the right DLPFC. After accounting for learning style, we found a correlation between visual and aural VARK scores and the level of activation in the PFC. Subjects with higher visual VARK scores displayed lower activation during the auditory task in the left DLPFC, while those with higher visual scores exhibited higher activation during the visual task in bilateral DLPFC. During the auditory task, HP subjects had higher visual VARK scores compared to NP subjects, indicating an effect of learning style on task performance and activation. The results of this study show that learning style and task performance can influence PFC activation, with applications toward neurological implications of learning style and populations with deficits in auditory or visual processing.
The time course of explicit and implicit categorization.
Smith, J David; Zakrzewski, Alexandria C; Herberger, Eric R; Boomer, Joseph; Roeder, Jessica L; Ashby, F Gregory; Church, Barbara A
2015-10-01
Contemporary theory in cognitive neuroscience distinguishes, among the processes and utilities that serve categorization, explicit and implicit systems of category learning that learn, respectively, category rules by active hypothesis testing or adaptive behaviors by association and reinforcement. Little is known about the time course of categorization within these systems. Accordingly, the present experiments contrasted tasks that fostered explicit categorization (because they had a one-dimensional, rule-based solution) or implicit categorization (because they had a two-dimensional, information-integration solution). In Experiment 1, participants learned categories under unspeeded or speeded conditions. In Experiment 2, they applied previously trained category knowledge under unspeeded or speeded conditions. Speeded conditions selectively impaired implicit category learning and implicit mature categorization. These results illuminate the processing dynamics of explicit/implicit categorization.
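A toy sketch of the two category structures contrasted above: a one-dimensional, rule-based bound versus a two-dimensional, information-integration bound. The decision criteria and stimulus coordinates are illustrative assumptions, not the experiments' actual stimulus parameters:

```python
def rule_based(stim, criterion=0.5):
    """Explicit, verbalizable rule: category depends on one dimension only
    (e.g., 'respond A if the first dimension exceeds the criterion')."""
    return int(stim[0] > criterion)

def information_integration(stim, weights=(1.0, 1.0), criterion=1.0):
    """Implicit information-integration structure: both dimensions combine
    pre-decisionally, so the optimal bound cannot be stated as a
    single-dimension rule."""
    return int(weights[0] * stim[0] + weights[1] * stim[1] > criterion)
```

The same stimulus can be classified differently by the two bounds, which is what lets the tasks dissociate hypothesis-testing (explicit) from associative (implicit) learning.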
ERIC Educational Resources Information Center
Berger, Carole; Donnadieu, Sophie
2006-01-01
This research explores the way in which young children (5 years of age) and adults use perceptual and conceptual cues for categorizing objects processed by vision or by audition. Three experiments were carried out using forced-choice categorization tasks that allowed responses based on taxonomic relations (e.g., vehicles) or on schema category…
Webly-Supervised Fine-Grained Visual Categorization via Deep Domain Adaptation.
Xu, Zhe; Huang, Shaoli; Zhang, Ya; Tao, Dacheng
2018-05-01
Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.
A task-dependent causal role for low-level visual processes in spoken word comprehension.
Ostarek, Markus; Huettig, Falk
2017-08-01
It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical-decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete versus abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation.
Reimer, Christina B; Strobach, Tilo; Schubert, Torsten
2017-12-01
Visual attention and response selection are limited in capacity. Here, we investigated whether visual attention requires the same bottleneck mechanism as response selection in a dual-task of the psychological refractory period (PRP) paradigm. The dual-task consisted of an auditory two-choice discrimination Task 1 and a conjunction search Task 2, which were presented at variable temporal intervals (stimulus onset asynchrony, SOA). In conjunction search, visual attention is required to select items and to bind their features, resulting in a serial search process across the items in the search display (i.e., set size). We measured the reaction time of the visual search task (RT2) and the N2pc, an event-related potential (ERP) component that reflects lateralized visual attention processes. If response selection in Task 1 influenced visual attention in Task 2, N2pc latency and amplitude would be delayed and attenuated at short SOA compared to long SOA. The results, however, showed that latency and amplitude were independent of SOA, indicating that visual attention was deployed concurrently with response selection. Moreover, the RT2 analysis revealed an underadditive interaction of SOA and set size. We concluded that visual attention does not require the same bottleneck mechanism as response selection in dual-tasks.
Gerasimenko, N Iu; Slavutskaia, A V; Kalinin, S A; Kulikov, M A; Mikhaĭlova, E S
2013-01-01
In 38 healthy subjects, accuracy and response time were examined during recognition of two categories of images, animals and nonliving objects, under forward masking. We obtained new evidence that masking effects depend on the categorical similarity of the target and masking stimuli. Recognition accuracy was lowest and response times were slowest, with high dispersion of response times, when the target and masking stimuli belonged to the same category. These effects were clearer in the animal recognition task than in the recognition of nonliving objects. We propose that the effects arise from interference between the cortical representations of the target and masking stimuli, and we discuss the results in the context of cortical interference and negative priming.
How does creating a concept map affect item-specific encoding?
Grimaldi, Phillip J; Poston, Laurel; Karpicke, Jeffrey D
2015-07-01
Concept mapping has become a popular learning tool. However, the processes underlying the task are poorly understood. In the present study, we examined the effect of creating a concept map on the processing of item-specific information. In 2 experiments, subjects learned categorized or ad hoc word lists by making pleasantness ratings, sorting words into categories, or creating a concept map. Memory was tested using a free recall test and a recognition memory test, which is considered to be especially sensitive to item-specific processing. Typically, tasks that promote item-specific processing enhance free recall of categorized lists, relative to category sorting. Concept mapping resulted in lower recall performance than both the pleasantness rating and category sorting condition for categorized words. Moreover, concept mapping resulted in lower recognition memory performance than the other 2 tasks. These results converge on the conclusion that creating a concept map disrupts the processing of item-specific information.
Demehri, S; Muhit, A; Zbijewski, W; Stayman, J W; Yorkston, J; Packard, N; Senn, R; Yang, D; Foos, D; Thawait, G K; Fayad, L M; Chhabra, A; Carrino, J A; Siewerdsen, J H
2015-06-01
To assess visualization tasks using cone-beam CT (CBCT) compared to multi-detector CT (MDCT) for musculoskeletal extremity imaging. Ten cadaveric hands and ten knees were examined using a dedicated CBCT prototype and a clinical multi-detector CT using nominal protocols (80 kVp, 108 mAs for CBCT; 120 kVp, 300 mAs for MDCT). Soft tissue and bone visualization tasks were assessed by four radiologists using five-point satisfaction (for CBCT and MDCT individually) and five-point preference (side-by-side CBCT versus MDCT image quality comparison) rating tests. Ratings were analyzed using Kruskal-Wallis and Wilcoxon signed-rank tests, and observer agreement was assessed using the κ statistic. Knee CBCT images were rated "excellent" or "good" (median scores 5 and 4) for "bone" and "soft tissue" visualization tasks. Hand CBCT images were rated "excellent" or "adequate" (median scores 5 and 3) for "bone" and "soft tissue" visualization tasks. Preference tests rated CBCT equivalent or superior to MDCT for bone visualization and favoured the MDCT for soft tissue visualization tasks. Intraobserver agreement for CBCT satisfaction tests was fair to almost perfect (κ ~ 0.26-0.92), and interobserver agreement was fair to moderate (κ ~ 0.27-0.54). CBCT provided excellent image quality for bone visualization and adequate image quality for soft tissue visualization tasks. • CBCT provided adequate image quality for diagnostic tasks in extremity imaging. • CBCT images were "excellent" for "bone" and "good/adequate" for "soft tissue" visualization tasks. • CBCT image quality was equivalent/superior to MDCT for bone visualization tasks.
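The paired ratings above were compared with Wilcoxon signed-rank tests. Computing the test statistic itself (before consulting its null distribution for a p-value) can be sketched as follows; the rating vectors in the test are invented for illustration:

```python
def wilcoxon_w(x, y):
    """Wilcoxon signed-rank statistic for paired samples: drop zero
    differences, rank the absolute differences (average ranks for ties),
    then return the smaller of the positive- and negative-rank sums."""
    d = [a - b for a, b in zip(x, y) if a != b]  # nonzero paired differences
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(d):
        # Find the extent of the current tie group of equal |d|.
        j = i
        while j + 1 < len(d) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_pos = sum(r for r, v in zip(ranks, d) if v > 0)
    w_neg = sum(r for r, v in zip(ranks, d) if v < 0)
    return min(w_pos, w_neg)
```

A small statistic means the differences fall overwhelmingly on one side, which is the situation a paired preference comparison between CBCT and MDCT ratings is designed to detect.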
Beyond the real world: attention debates in auditory mismatch negativity.
Chung, Kyungmi; Park, Jin Young
2018-04-11
The aim of this study was to address the potential for the auditory mismatch negativity (aMMN) to be used in applied event-related potential (ERP) studies by determining whether the aMMN is an attention-dependent ERP component that could be differently modulated across visual tasks or virtual reality (VR) stimuli with different visual properties and visual complexity levels. A total of 80 participants, aged 19-36 years, were assigned to either a reading-task (21 men and 19 women) or a VR-task (22 men and 18 women) group. The two visual-task groups of healthy young adults were matched in age, sex, and handedness. All participants were instructed to focus only on the given visual tasks and to ignore auditory change detection. While participants in the reading-task group read text slides, those in the VR-task group viewed three 360° VR videos in a random order and rated how visually complex the given virtual environment was immediately after each VR video ended. Although perceived visual complexity differed in part with the brightness of the virtual environments, neither visual property (distance or brightness) significantly modulated aMMN amplitudes. A further analysis compared the aMMN amplitudes elicited by a typical MMN task and an applied VR task. No significant difference in aMMN amplitudes was found across the two groups, who completed visual tasks with different visual-task demands. In conclusion, the aMMN is a reliable ERP marker of preattentive cognitive processing for auditory deviance detection.
Multiple systems of category learning.
Smith, Edward E; Grossman, Murray
2008-01-01
We review neuropsychological and neuroimaging evidence for the existence of three qualitatively different categorization systems. These categorization systems are themselves based on three distinct memory systems: working memory (WM), explicit long-term memory (explicit LTM), and implicit long-term memory (implicit LTM). We first contrast categorization based on WM with that based on explicit LTM, where the former typically involves applying rules to a test item and the latter involves determining the similarity between stored exemplars or prototypes and a test item. Neuroimaging studies show differences between brain activity in normal participants as a function of whether they are instructed to categorize novel test items by rule or by similarity to known category members. Rule instructions typically lead to more activation in frontal or parietal areas, associated with WM and selective attention, whereas similarity instructions may activate parietal areas associated with the integration of perceptual features. Studies with neurological patients in the same paradigms provide converging evidence, e.g., patients with Alzheimer's disease, who have damage in prefrontal regions, are more impaired with rule than similarity instructions. Our second contrast is between categorization based on explicit LTM and that based on implicit LTM. Neuropsychological studies with patients with medial-temporal lobe damage show that patients are impaired on tasks requiring explicit LTM, but perform relatively normally on an implicit categorization task. Neuroimaging studies provide converging evidence: whereas explicit categorization is mediated by activation in numerous frontal and parietal areas, implicit categorization is mediated by a deactivation in posterior cortex.
ERIC Educational Resources Information Center
Alvarez, George A.; Horowitz, Todd S.; Arsenio, Helga C.; DiMase, Jennifer S.; Wolfe, Jeremy M.
2005-01-01
Multielement visual tracking and visual search are 2 tasks that are held to require visual-spatial attention. The authors used the attentional operating characteristic (AOC) method to determine whether both tasks draw continuously on the same attentional resource (i.e., whether the 2 tasks are mutually exclusive). The authors found that observers…
NASA Astrophysics Data System (ADS)
Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo
2009-04-01
In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspect of the functional processing of visual attention. Although the right posterior parietal cortex (PPC) is known to play a role in certain visual search tasks, little is known about the temporal aspect of this area. Three visual search tasks of differing difficulty were carried out: the "easy feature task," the "hard feature task," and the "conjunction task." To investigate the temporal aspect of PPC involvement in visual search, we applied various stimulus onset asynchronies (SOAs) and measured visual search reaction times. Magnetic stimulation was applied to the right or left PPC with a figure-eight coil. The results show that reaction times for the hard feature task are longer than those for the easy feature task. At SOA = 150 ms, target-present reaction times increased significantly when TMS pulses were applied, compared with the no-TMS condition. We conclude that the right PPC is involved in visual search at about 150 ms after visual stimulus presentation: magnetic stimulation of the right PPC disturbed visual search processing, whereas stimulation of the left PPC had no effect.
Faust, Kevin; Xie, Quin; Han, Dominick; Goyle, Kartikay; Volynskaya, Zoya; Djuric, Ugljesa; Diamandis, Phedias
2018-05-16
There is growing interest in utilizing artificial intelligence, and particularly deep learning, for computer vision in histopathology. While accumulating studies highlight expert-level performance of convolutional neural networks (CNNs) on focused classification tasks, most studies rely on probability distribution scores with empirically defined cutoff values based on post-hoc analysis. More generalizable tools that allow humans to visualize histology-based deep learning inferences and decision making are scarce. Here, we leverage t-distributed Stochastic Neighbor Embedding (t-SNE) to reduce dimensionality and depict how CNNs organize histomorphologic information. Unique to our workflow, we develop a quantitative and transparent approach to visualizing classification decisions prior to softmax compression. By discretizing the relationships between classes on the t-SNE plot, we show we can super-impose randomly sampled regions of test images and use their distribution to render statistically-driven classifications. Therefore, in addition to providing intuitive outputs for human review, this visual approach can carry out automated and objective multi-class classifications similar to more traditional and less-transparent categorical probability distribution scores. Importantly, this novel classification approach is driven by a priori statistically defined cutoffs. It therefore serves as a generalizable classification and anomaly detection tool less reliant on post-hoc tuning. Routine incorporation of this convenient approach for quantitative visualization and error reduction in histopathology aims to accelerate early adoption of CNNs into generalized real-world applications where unanticipated and previously untrained classes are often encountered.
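The visualize-then-classify workflow described above can be sketched roughly as follows. This is an illustrative stand-in only: synthetic Gaussian features replace CNN activations, PCA replaces t-SNE (which has no out-of-sample projection) to keep the sketch dependency-free, and classification is by nearest neighbors in the 2-D embedding rather than the paper's discretized statistical cutoffs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for CNN penultimate-layer features of image patches:
# two histologic "classes" with separated feature distributions (hypothetical data).
n, d = 200, 50
feats = np.vstack([rng.normal(0.0, 1.0, (n, d)),
                   rng.normal(3.0, 1.0, (n, d))])
labels = np.array([0] * n + [1] * n)

# Dimensionality reduction to 2-D. The paper uses t-SNE; PCA via SVD is used
# here only so the sketch needs nothing beyond NumPy.
mean = feats.mean(axis=0)
_, _, vt = np.linalg.svd(feats - mean, full_matrices=False)
embedding = (feats - mean) @ vt[:2].T        # (400, 2) plot coordinates

def classify_image(region_feats, k=5):
    """Classify a test image by majority vote over its randomly sampled
    regions: each region is mapped into the 2-D embedding and assigned
    the label of its k nearest training points."""
    votes = []
    for f in region_feats:
        p = (f - mean) @ vt[:2].T
        dist = np.linalg.norm(embedding - p, axis=1)
        nn = labels[np.argsort(dist)[:k]]
        votes.append(np.bincount(nn).argmax())
    return int(np.bincount(votes).argmax())

# Ten sampled regions drawn from the class-1 distribution.
test_regions = rng.normal(3.0, 1.0, (10, d))
print(classify_image(test_regions))
```

With these well-separated synthetic classes the region votes are unanimous and the image-level decision comes out as class 1; the paper's approach additionally derives statistical cutoffs from the distribution of votes, which is omitted here.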
Lobier, Muriel; Peyrin, Carole; Le Bas, Jean-François; Valdois, Sylviane
2012-07-01
The visual front-end of reading is most often associated with orthographic processing. The left ventral occipito-temporal cortex seems to be preferentially tuned for letter string and word processing. In contrast, little is known of the mechanisms responsible for pre-orthographic processing: the processing of character strings regardless of character type. While the superior parietal lobule has been shown to be involved in multiple letter processing, further data is necessary to extend these results to non-letter characters. The purpose of this study is to identify the neural correlates of pre-orthographic character string processing independently of character type. Fourteen skilled adult readers carried out multiple and single element visual categorization tasks with alphanumeric (AN) and non-alphanumeric (nAN) characters under fMRI. The role of parietal cortex in multiple element processing was further probed with a priori defined anatomical regions of interest (ROIs). Participants activated posterior parietal cortex more strongly for multiple than single element processing. ROI analyses showed that bilateral SPL/BA7 was more strongly activated for multiple than single element processing, regardless of character type. In contrast, no multiple element specific activity was found in inferior parietal lobules. These results suggest that parietal mechanisms are involved in pre-orthographic character string processing. We argue that, in general, attentional mechanisms are involved in visual word recognition as an early step of visual word analysis. Copyright © 2012 Elsevier Ltd. All rights reserved.
Harel, Assaf; Ullman, Shimon; Harari, Danny; Bentin, Shlomo
2011-07-28
Visual expertise is usually defined as the superior ability to distinguish between exemplars of a homogeneous category. Here, we ask how real-world expertise manifests at basic-level categorization and assess the contribution of stimulus-driven and top-down knowledge-based factors to this manifestation. Car experts and novices categorized computer-selected image fragments of cars, airplanes, and faces. Within each category, the fragments varied in their mutual information (MI), an objective quantifiable measure of feature diagnosticity. Categorization of face and airplane fragments was similar within and between groups, showing better performance with increasing MI levels. Novices categorized car fragments more slowly than face and airplane fragments, while experts categorized car fragments as fast as face and airplane fragments. The experts' advantage with car fragments was similar across MI levels, with similar functions relating RT with MI level for both groups. Accuracy was equal between groups for cars as well as faces and airplanes, but experts' response criteria were biased toward cars. These findings suggest that expertise does not entail only specific perceptual strategies. Rather, at the basic level, expertise manifests as a general processing advantage arguably involving application of top-down mechanisms, such as knowledge and attention, which helps experts to distinguish between object categories. © ARVO
To call a cloud 'cirrus': sound symbolism in names for categories or items.
Ković, Vanja; Sučević, Jelena; Styles, Suzy J
2017-01-01
The aim of the present paper is to experimentally test whether sound symbolism has selective effects on labels with different ranges-of-reference within a simple noun-hierarchy. In two experiments, adult participants learned the make-up of two categories of unfamiliar objects ('alien life forms'), and were passively exposed to either category-labels or item-labels, in a learning-by-guessing categorization task. Following category training, participants were tested on their visual discrimination of object pairs. For different groups of participants, the labels were either congruent or incongruent with the objects. In Experiment 1, when trained on items with individual labels, participants were worse (made more errors) at detecting visual object mismatches when trained labels were incongruent. In Experiment 2, when participants were trained on items in labelled categories, participants were faster at detecting a match if the trained labels were congruent, and faster at detecting a mismatch if the trained labels were incongruent. This pattern of results suggests that sound symbolism in category labels facilitates later similarity judgments when congruent, and discrimination when incongruent, whereas for item labels incongruence generates error in judgments of visual object differences. These findings reveal that sound symbolic congruence has a different outcome at different levels of labelling within a noun hierarchy. These effects emerged in the absence of the label itself, indicating subtle but pervasive effects on visual object processing.
Wu, Lin; Wang, Yang; Pan, Shirui
2017-12-01
It is now well established that sparse representation models work effectively for many visual recognition tasks, and they have pushed forward the success of dictionary learning therein. Recent studies of dictionary learning focus on learning discriminative atoms instead of purely reconstructive ones. However, the existence of intraclass diversities (i.e., data objects within the same category that exhibit large visual dissimilarities) and interclass similarities (i.e., data objects from distinct classes that share much visual similarity) makes it challenging to learn effective recognition models. A large number of labeled data objects are required to learn models that can effectively characterize these subtle differences; yet labeled data objects are often difficult to obtain, making it hard to learn a monolithic dictionary that is sufficiently discriminative. To address these limitations, in this paper we propose a weakly-supervised dictionary learning method that automatically learns a discriminative dictionary by fully exploiting visual attribute correlations rather than label priors. In particular, the intrinsic attribute correlations serve as a critical cue to guide the process of object categorization, and a set of subdictionaries is then jointly learned with respect to each category. The resulting dictionary is highly discriminative and leads to intraclass-diversity-aware sparse representations. Extensive experiments on image classification and object recognition show the effectiveness of our approach.
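The use of per-category subdictionaries can be illustrated with a minimal sketch. This is not the authors' weakly-supervised, attribute-guided method: the subdictionaries below are random orthonormal bases rather than learned atoms, and classification is by least-squares reconstruction residual, in the general style of sparse-representation classifiers.

```python
import numpy as np

rng = np.random.default_rng(1)
d, atoms = 20, 8   # feature dimension, atoms per subdictionary (hypothetical sizes)

# Hypothetical per-category subdictionaries (columns = atoms). In the paper
# these are learned jointly under attribute-correlation guidance; here they
# are random orthonormal bases purely for illustration.
def random_subdict():
    q, _ = np.linalg.qr(rng.normal(size=(d, atoms)))
    return q

subdicts = {"cat_A": random_subdict(), "cat_B": random_subdict()}

def classify(x):
    """Encode x against each category's subdictionary by least squares and
    pick the category with the smallest reconstruction residual."""
    residuals = {}
    for name, D in subdicts.items():
        code, *_ = np.linalg.lstsq(D, x, rcond=None)
        residuals[name] = np.linalg.norm(x - D @ code)
    return min(residuals, key=residuals.get)

# A sample lying in cat_A's span is reconstructed there with ~zero residual,
# so the residual rule assigns it to cat_A.
x = subdicts["cat_A"] @ rng.normal(size=atoms)
print(classify(x))
```

A discriminative learned dictionary tightens this rule: atoms are trained so that same-category objects, however visually diverse, reconstruct well from their own subdictionary and poorly from others.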
Watson, Rebecca; Latinus, Marianne; Noguchi, Takao; Garrod, Oliver; Crabbe, Frances; Belin, Pascal
2014-05-14
The integration of emotional information from the face and voice of other persons is known to be mediated by a number of "multisensory" cerebral regions, such as the right posterior superior temporal sulcus (pSTS). However, whether multimodal integration in these regions is attributable to interleaved populations of unisensory neurons responding to face or voice or rather by multimodal neurons receiving input from the two modalities is not fully clear. Here, we examine this question using functional magnetic resonance adaptation and dynamic audiovisual stimuli in which emotional information was manipulated parametrically and independently in the face and voice via morphing between angry and happy expressions. Healthy human adult subjects were scanned while performing a happy/angry emotion categorization task on a series of such stimuli included in a fast event-related, continuous carryover design. Subjects integrated both face and voice information when categorizing emotion-although there was a greater weighting of face information-and showed behavioral adaptation effects both within and across modality. Adaptation also occurred at the neural level: in addition to modality-specific adaptation in visual and auditory cortices, we observed for the first time a crossmodal adaptation effect. Specifically, fMRI signal in the right pSTS was reduced in response to a stimulus in which facial emotion was similar to the vocal emotion of the preceding stimulus. These results suggest that the integration of emotional information from face and voice in the pSTS involves a detectable proportion of bimodal neurons that combine inputs from visual and auditory cortices. Copyright © 2014 the authors 0270-6474/14/346813-09$15.00/0.
Olichney, John M; Riggins, Brock R; Hillert, Dieter G; Nowacki, Ralph; Tecoma, Evelyn; Kutas, Marta; Iragui, Vicente J
2002-07-01
We studied 14 patients with well-characterized refractory temporal lobe epilepsy (TLE), 7 with right temporal lobe epilepsy (RTE) and 7 with left temporal lobe epilepsy (LTE), on a word repetition ERP experiment. Much prior literature supports the view that patients with left TLE are more likely to develop verbal memory deficits, often attributable to left hippocampal sclerosis. Our main objectives were to test if abnormalities of the N400 or Late Positive Component (LPC, P600) were associated with a left temporal seizure focus, or left temporal lobe dysfunction. A minimum of 19 channels of EEG/EOG data were collected while subjects performed a semantic categorization task. Auditory category statements were followed by visual target words, which were 50% "congruous" (category exemplars) and 50% "incongruous" (non-category exemplars) with the preceding semantic context. These auditory-visual pairings were repeated pseudo-randomly at time intervals ranging from approximately 10 to 140 seconds later. The ERP data were submitted to repeated-measures ANOVAs, which showed the RTE group had generally normal effects of word repetition on the LPC and the N400. Also, the N400 component was larger to incongruous than congruous new words, as is normally the case. In contrast, the LTE group did not have statistically significant effects of either word repetition or congruity on their ERPs (N400 or LPC), suggesting that this ERP semantic categorization paradigm is sensitive to left temporal lobe dysfunction. Further studies are ongoing to determine if these ERP abnormalities predict hippocampal sclerosis on histopathology, or outcome after anterior temporal lobectomy.
Dual PECCS: a cognitive system for conceptual representation and categorization
NASA Astrophysics Data System (ADS)
Lieto, Antonio; Radicioni, Daniele P.; Rho, Valentina
2017-03-01
In this article we present an advanced version of Dual-PECCS, a cognitively-inspired knowledge representation and reasoning system aimed at extending the capabilities of artificial systems in conceptual categorization tasks. It combines different sorts of common-sense categorization (prototypical and exemplar-based categorization) with standard monotonic categorization procedures. These different types of inferential procedures are reconciled according to the tenets of the dual-process theory of reasoning. From a representational perspective, the system relies on the hypothesis of conceptual structures represented as heterogeneous proxytypes. Dual-PECCS has been experimentally assessed in a task of conceptual categorization where a target concept illustrated by a simple common-sense linguistic description had to be identified by resorting to a mix of categorization strategies, and its output has been compared to human responses. The obtained results suggest that our approach can be beneficial to improve the representational and reasoning conceptual capabilities of standard cognitive artificial systems, and, in addition, that it may be plausibly applied to different general computational models of cognition. The current version of the system, in fact, extends our previous work, in that Dual-PECCS is now integrated and tested in two cognitive architectures, ACT-R and CLARION, implementing different assumptions on the underlying invariant structures governing human cognition. Such integration allowed us to extend our previous evaluation.
Furl, N; van Rijsbergen, N J; Treves, A; Dolan, R J
2007-08-01
Previous studies have shown reductions of the functional magnetic resonance imaging (fMRI) signal in response to repetition of specific visual stimuli. We examined how adaptation affects the neural responses associated with categorization behavior, using face adaptation aftereffects. Adaptation to a given facial category biases categorization towards non-adapted facial categories in response to presentation of ambiguous morphs. We explored a hypothesis, posed by recent psychophysical studies, that these adaptation-induced categorizations are mediated by activity in relatively advanced stages within the occipitotemporal visual processing stream. Replicating these studies, we find that adaptation to a facial expression heightens perception of non-adapted expressions. Using comparable behavioral methods, we also show that adaptation to a specific identity heightens perception of a second identity in morph faces. We show both expression and identity effects to be associated with heightened anterior medial temporal lobe activity, specifically when perceiving the non-adapted category. These regions, incorporating bilateral anterior ventral rhinal cortices, perirhinal cortex and left anterior hippocampus are regions previously implicated in high-level visual perception. These categorization effects were not evident in fusiform or occipital gyri, although activity in these regions was reduced to repeated faces. The findings suggest that adaptation-induced perception is mediated by activity in regions downstream to those showing reductions due to stimulus repetition.
Edge co-occurrences can account for rapid categorization of natural versus animal images
NASA Astrophysics Data System (ADS)
Perrinet, Laurent U.; Bednar, James A.
2015-06-01
Making a judgment about the semantic category of a visual scene, such as whether it contains an animal, is typically assumed to involve high-level associative brain areas. Previous explanations require progressively analyzing the scene hierarchically at increasing levels of abstraction, from edge extraction to mid-level object recognition and then object categorization. Here we show that the statistics of edge co-occurrences alone are sufficient to perform a rough yet robust (translation, scale, and rotation invariant) scene categorization. We first extracted the edges from images using a scale-space analysis coupled with a sparse coding algorithm. We then computed the “association field” for different categories (natural, man-made, or containing an animal) by computing the statistics of edge co-occurrences. These differed strongly, with animal images having more curved configurations. We show that this geometry alone is sufficient for categorization, and that the pattern of errors made by humans is consistent with this procedure. Because these statistics could be measured as early as the primary visual cortex, the results challenge widely held assumptions about the flow of computations in the visual system. The results also suggest new algorithms for image classification and signal processing that exploit correlations between low-level structure and the underlying semantic category.
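The core computation in the abstract above, gathering co-occurrence statistics over extracted edges, can be sketched crudely as follows. This sketch uses gradient-based edge detection instead of the paper's sparse scale-space coding, and histograms relative orientation only, ignoring the distance and relative-position dimensions of the full association field.

```python
import numpy as np

def edge_cooccurrence_stats(image, mag_thresh=0.5, n_bins=8):
    """Histogram of relative orientations between pairs of strong edges:
    a crude one-dimensional slice of 'association field' statistics.
    Curved contours put more mass away from the 0 (collinear) bin."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx) % np.pi          # edge orientation in [0, pi)
    ys, xs = np.nonzero(mag > mag_thresh * mag.max())
    ori = theta[ys, xs]
    # Relative orientation for all pairs of edge elements (subsampled for speed).
    idx = np.random.default_rng(0).choice(len(ori),
                                          size=min(len(ori), 200), replace=False)
    diffs = np.abs(ori[idx, None] - ori[None, idx])
    diffs = np.minimum(diffs, np.pi - diffs)     # wrap to [0, pi/2]
    hist, _ = np.histogram(diffs, bins=n_bins, range=(0, np.pi / 2), density=True)
    return hist

# Straight vertical bars: all edges share one orientation, so the
# co-occurrence mass concentrates in the collinear (first) bin.
straight = np.zeros((32, 32))
straight[:, ::8] = 1.0
h = edge_cooccurrence_stats(straight)
```

Categorization in the paper then compares such statistics between image classes (e.g., animal images showing more curved, i.e. non-collinear, configurations); that comparison step is not reproduced here.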
Task-specific reorganization of the auditory cortex in deaf humans
Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin
2017-01-01
The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior–lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain. PMID:28069964
Prototype Abstraction by Monkeys ("Macaca Mulatta")
ERIC Educational Resources Information Center
Smith, J. David; Redford, Joshua S.; Haas, Sarah M.
2008-01-01
The authors analyze the shape categorization of rhesus monkeys ("Macaca mulatta") and the role of prototype- and exemplar-based comparison processes in monkeys' category learning. Prototype and exemplar theories make contrasting predictions regarding performance on the Posner-Homa dot-distortion categorization task. Prototype theory--which…
Interference with olfactory memory by visual and verbal tasks.
Annett, J M; Cook, N M; Leslie, J C
1995-06-01
It has been claimed that olfactory memory is distinct from memory in other modalities. This study investigated the effectiveness of visual and verbal tasks in interfering with olfactory memory and included methodological changes from other recent studies. Subjects were allocated to one of four experimental conditions involving interference tasks [no interference task; visual task; verbal task; visual-plus-verbal task] and were presented with 15 target odours. Either recognition of the odours or free recall of the odour names was tested on one occasion, either within 15 minutes of presentation or one week later. Recognition and recall performance both showed interference effects of the visual and verbal tasks, but there was no effect of time of testing. While the results may be accommodated within a dual coding framework, further work is indicated to resolve theoretical issues relating to task complexity.
Visual body recognition in a prosopagnosic patient.
Moro, V; Pernigo, S; Avesani, R; Bulgarelli, C; Urgesi, C; Candidi, M; Aglioti, S M
2012-01-01
Conspicuous deficits in face recognition characterize prosopagnosia. Information on whether agnosic deficits may extend to non-facial body parts is lacking. Here we report the neuropsychological description of FM, a patient affected by a complete deficit in face recognition in the presence of mild clinical signs of visual object agnosia. His deficit involves both overt and covert recognition of faces (i.e. recognition of familiar faces, but also categorization of faces for gender or age) as well as the visual mental imagery of faces. By means of a series of matching-to-sample tasks we investigated: (i) a possible association between prosopagnosia and disorders in visual body perception; (ii) the effect of the emotional content of stimuli on the visual discrimination of faces, bodies and objects; (iii) the existence of a dissociation between identity recognition and the emotional discrimination of faces and bodies. Our results document, for the first time, the co-occurrence of body agnosia, i.e. the visual inability to discriminate body forms and body actions, and prosopagnosia. Moreover, the results show better performance in the discrimination of emotional face and body expressions with respect to body identity and neutral actions. Since FM's lesions involve bilateral fusiform areas, it is unlikely that the amygdala-temporal projections explain the relative sparing of emotion discrimination performance. Indeed, the emotional content of the stimuli did not improve the discrimination of their identity. The results hint at the existence of two segregated brain networks involved in identity and emotional discrimination that are at least partially shared by face and body processing. Copyright © 2011 Elsevier Ltd. All rights reserved.
Components of Attention in Grapheme-Color Synesthesia: A Modeling Approach.
Ásgeirsson, Árni Gunnar; Nordfang, Maria; Sørensen, Thomas Alrik
2015-01-01
Grapheme-color synesthesia is a condition where the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many have studied how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10-150 ms) to 9 grapheme-color synesthetes and to 9 control observers. Their task was to report as many letters (targets) as possible, while ignoring digits (distractors). Graphemes were colored either congruently or incongruently with respect to the synesthetes' reported grapheme-color associations. A mathematical model, based on Bundesen's (1990) Theory of Visual Attention (TVA), was fitted to each observer's data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones, and that they were able to retain more congruent letters in visual short-term memory, while the control group's model parameters were not significantly affected by congruence. The increase in processing speed, when synesthetes process congruent letters, suggests that synesthesia affects the processing of letters at a perceptual level. To account for the benefit in processing speed, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are considered as evidence for making certain perceptual categorizations in the visual system. We also propose that enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes' expertise regarding their specific grapheme-color associations.
The role of early visual cortex in visual short-term memory and visual attention.
Offen, Shani; Schluppeck, Denis; Heeger, David J
2009-06-01
We measured cortical activity with functional magnetic resonance imaging to probe the involvement of early visual cortex in visual short-term memory and visual attention. In four experimental tasks, human subjects viewed two visual stimuli separated by a variable delay period. The tasks placed differential demands on short-term memory and attention, but the stimuli were visually identical until after the delay period. Early visual cortex exhibited sustained responses throughout the delay when subjects performed attention-demanding tasks, but delay-period activity was not distinguishable from zero when subjects performed a task that required short-term memory. This dissociation reveals different computational mechanisms underlying the two processes.
Secondary visual workload capability with primary visual and kinesthetic-tactual displays
NASA Technical Reports Server (NTRS)
Gilson, R. D.; Burke, M. W.; Jagacinski, R. J.
1978-01-01
Subjects performed a cross-adaptive tracking task with a visual secondary display and either a visual or a quickened kinesthetic-tactual (K-T) primary display. The quickened K-T display resulted in superior secondary task performance. Comparisons of secondary workload capability with integrated and separated visual displays indicated that the superiority of the quickened K-T display was not simply due to the elimination of visual scanning. When subjects did not have to perform a secondary task, there was no significant difference between visual and quickened K-T displays in performing a critical tracking task.
Exploring responses to art in adolescence: a behavioral and eye-tracking study.
Savazzi, Federica; Massaro, Davide; Di Dio, Cinzia; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella
2014-01-01
Adolescence is a peculiar age mainly characterized by physical and psychological changes that may affect the perception of one's own and others' body. This perceptual peculiarity may influence the way in which bottom-up and top-down processes interact and, consequently, the perception and evaluation of art. This study is aimed at investigating, by means of the eye-tracking technique, the visual explorative behavior of adolescents while looking at paintings. Sixteen color paintings, categorized as dynamic and static, were presented to twenty adolescents; half of the images represented natural environments and half human individuals; all stimuli were displayed under aesthetic and movement judgment tasks. Participants' ratings revealed that, generally, nature images are explicitly evaluated as more appealing than human images. Eye movement data, on the other hand, showed that the human body exerts a strong power in orienting and attracting visual attention and that, in adolescence, it plays a fundamental role during aesthetic experience. In particular, adolescents seem to approach human-content images by giving priority to elements calling forth movement and action, supporting the embodiment theory of aesthetic perception.
Wang, Hua-Chen; Savage, Greg; Gaskell, M Gareth; Paulin, Tamara; Robidoux, Serje; Castles, Anne
2017-08-01
Lexical competition processes are widely viewed as the hallmark of visual word recognition, but little is known about the factors that promote their emergence. This study examined for the first time whether sleep may play a role in inducing these effects. A group of 27 participants learned novel written words, such as banara, at 8 am and were tested on their learning at 8 pm the same day (AM group), while 29 participants learned the words at 8 pm and were tested at 8 am the following day (PM group). Both groups were retested after 24 hours. Using a semantic categorization task, we showed that lexical competition effects, as indexed by slowed responses to existing neighbor words such as banana, emerged 12 h later in the PM group who had slept after learning but not in the AM group. After 24 h the competition effects were evident in both groups. These findings have important implications for theories of orthographic learning and broader neurobiological models of memory consolidation.
Visual cue-specific craving is diminished in stressed smokers.
Cochran, Justinn R; Consedine, Nathan S; Lee, John M J; Pandit, Chinmay; Sollers, John J; Kydd, Robert R
2017-09-01
Craving among smokers is increased by stress and exposure to smoking-related visual cues. However, few experimental studies have tested both elicitors concurrently and considered how exposures may interact to influence craving. The current study examined craving in response to stress and visual cue exposure, separately and in succession, in order to better understand the relationship between craving elicitation and the elicitor. Thirty-nine smokers (21 males) who had abstained from smoking for 30 minutes were randomized to complete a stress task and a visual cue task in counterbalanced orders (creating the experimental groups); for the cue task, counterbalanced blocks of neutral, motivational control, and smoking images were presented. Self-reported craving was assessed after each block of visual stimuli and after the stress task, and after a recovery period following each task. As expected, the stress task and smoking images generated greater craving than neutral or motivational control images (p < .001). Interactions indicated that craving in those who completed the stress task first differed from those who completed the visual cue task first (p < .05), such that stress task craving was greater than craving for all image types (all p's < .05) only if the visual cue task was completed first. Conversely, craving was stable across image types when the stress task was completed first. Findings indicate that when smokers are stressed, visual cues have little additive effect on craving, and different types of visual cues elicit comparable craving. These findings may imply that once stressed, smokers will crave cigarettes comparably regardless of whether they are exposed to smoking image cues.
Sexual orientation and spatial position effects on selective forms of object location memory.
Rahman, Qazi; Newland, Cherie; Smyth, Beatrice Mary
2011-04-01
Prior research has demonstrated robust sex and sexual orientation-related differences in object location memory in humans. Here we show that this sexual variation may depend on the spatial position of target objects and the task-specific nature of the spatial array. We tested the recovery of object locations in three object arrays (object exchanges, object shifts, and novel objects) relative to veridical center (left compared to right side of the arrays) in a sample of 35 heterosexual men, 35 heterosexual women, and 35 homosexual men. Relative to heterosexual men, heterosexual women showed better location recovery in the right side of the array during object exchanges and homosexual men performed better in the right side during novel objects. However, the difference between heterosexual and homosexual men disappeared after controlling for IQ. Heterosexual women and homosexual men did not differ significantly from each other in location change detection with respect to task or side of array. These data suggest that visual space biases in processing categorical spatial positions may enhance aspects of object location memory in heterosexual women.
The Time Course of Explicit and Implicit Categorization
Zakrzewski, Alexandria C.; Herberger, Eric; Boomer, Joseph; Roeder, Jessica; Ashby, F. Gregory; Church, Barbara A.
2015-01-01
Contemporary theory in cognitive neuroscience distinguishes, among the processes and utilities that serve categorization, explicit and implicit systems of category learning that learn, respectively, category rules by active hypothesis testing or adaptive behaviors by association and reinforcement. Little is known about the time course of categorization within these systems. Accordingly, the present experiments contrasted tasks that fostered explicit categorization (because they had a one-dimensional, rule-based solution) or implicit categorization (because they had a two-dimensional, information-integration solution). In Experiment 1, participants learned categories under unspeeded or speeded conditions. In Experiment 2, they applied previously trained category knowledge under unspeeded or speeded conditions. Speeded conditions selectively impaired implicit category learning and implicit mature categorization. These results illuminate the processing dynamics of explicit/implicit categorization. PMID:26025556
Task relevance induces momentary changes in the functional visual field during reading.
Kaakinen, Johanna K; Hyönä, Jukka
2014-02-01
In the research reported here, we examined whether task demands can induce momentary tunnel vision during reading. More specifically, we examined whether the size of the functional visual field depends on task relevance. Forty participants read an expository text with a specific task in mind while their eye movements were recorded. A display-change paradigm with random-letter strings as preview masks was used to study the size of the functional visual field within sentences that contained task-relevant and task-irrelevant information. The results showed that orthographic parafoveal-on-foveal effects and preview benefits were observed for words within task-irrelevant but not task-relevant sentences. The results indicate that the size of the functional visual field is flexible and depends on the momentary processing demands of a reading task. The higher cognitive processing requirements experienced when reading task-relevant text rather than task-irrelevant text induce momentary tunnel vision, which narrows the functional visual field.
Visual perspective taking impairment in children with autistic spectrum disorder.
Hamilton, Antonia F de C; Brindley, Rachel; Frith, Uta
2009-10-01
Evidence from typical development and neuroimaging studies suggests that level 2 visual perspective taking - the knowledge that different people may see the same thing differently at the same time - is a mentalising task. Thus, we would expect children with autism, who fail typical mentalising tasks like false belief, to perform poorly on level 2 visual perspective taking as well. However, prior data on this issue are inconclusive. We re-examined this question, testing a group of 23 young autistic children, aged around 8 years with a verbal mental age of around 4 years, and three groups of typical children (n=60) ranging in age from 4 to 8 years, on a level 2 visual perspective task and a closely matched mental rotation task. The results demonstrate that autistic children have difficulty with visual perspective taking compared to a task requiring mental rotation, relative to typical children. Furthermore, performance on the level 2 visual perspective taking task correlated with theory of mind performance. These findings resolve discrepancies in previous studies of visual perspective taking in autism, and demonstrate that level 2 visual perspective taking is a mentalising task.
Multi-modal information processing for visual workload relief
NASA Technical Reports Server (NTRS)
Burke, M. W.; Gilson, R. D.; Jagacinski, R. J.
1980-01-01
The simultaneous performance of two single-dimensional compensatory tracking tasks, one with the left hand and one with the right hand, is discussed. The tracking performed with the left hand was considered the primary task and was performed with a visual display or a quickened kinesthetic-tactual (KT) display. The right-handed tracking was considered the secondary task and was carried out only with a visual display. Although the two primary task displays had afforded equivalent performance in a critical tracking task performed alone, in the dual-task situation the quickened KT primary display resulted in superior secondary visual task performance. Comparisons of various combinations of primary and secondary visual displays in integrated or separated formats indicate that the superiority of the quickened KT display is not simply due to the elimination of visual scanning. Additional testing indicated that quickening per se also is not the immediate cause of the observed KT superiority.
ERIC Educational Resources Information Center
Hargreaves, Ian S.; White, Michelle; Pexman, Penny M.; Pittman, Dan; Goodyear, Brad G.
2012-01-01
Task effects in semantic processing were investigated by contrasting the neural activation associated with two semantic categorization tasks (SCT) using event-related fMRI. The two SCTs involved different decision categories: "is it an animal?" vs. "is it a concrete thing?" Participants completed both tasks and, across participants, the same core…
1994-01-01
DB Task Group co-chairs would like to thank Mrs. Janice Hirst of IDA for her help in coordinating and supporting the I/DBTG meetings and Ms. Linda ...Data (co-chairs Len Seligman (Mitre) and Pete Valentine (Army/JDBE)): will start with several recent categorization attempts including those offered in...formed to address categorization of complex data co-chaired by Len Seligman (Mitre) and Pete Valentine (Army/JDBE)); (2) a subgroup to deal with
Hoyau, E; Cousin, E; Jaillard, A; Baciu, M
2016-12-01
We evaluated the effect of normal aging on the inter-hemispheric processing of semantic information by using the divided visual field (DVF) method, with words and pictures. Two main theoretical models have been considered, (a) the HAROLD model which posits that aging is associated with supplementary recruitment of the right hemisphere (RH) and decreased hemispheric specialization, and (b) the RH decline theory, which assumes that the RH becomes less efficient with aging, associated with increased LH specialization. Two groups of subjects were examined, a Young Group (YG) and an Old Group (OG), while participants performed a semantic categorization task (living vs. non-living) in words and pictures. The DVF was realized in two steps: (a) unilateral DVF presentation with stimuli presented separately in each visual field, left or right, allowing for their initial processing by only one hemisphere, right or left, respectively; (b) bilateral DVF presentation (BVF) with stimuli presented simultaneously in both visual fields, followed by their processing by both hemispheres. These two types of presentation permitted the evaluation of two main characteristics of the inter-hemispheric processing of information, the hemispheric specialization (HS) and the inter-hemispheric cooperation (IHC). Moreover, the BVF allowed us to determine the driver hemisphere for processing information presented in BVF. Results obtained in OG indicated that: (a) semantic categorization was performed as accurately as in YG, even if more slowly, (b) a non-semantic RH decline was observed, and (c) the LH controls semantic processing during the BVF, suggesting an increased role of the LH in aging. However, despite the stronger involvement of the LH in OG, the RH is not completely devoid of semantic abilities. As discussed in the paper, neither the HAROLD model nor the RH decline theory fully explains this pattern of results.
We rather suggest that the effect of aging on the hemispheric specialization and inter-hemispheric cooperation during semantic processing is explained not by only one model, but by an interaction between several complementary mechanisms and models.
An enquiry into the process of categorization of pictures and words.
Viswanathan, Madhubalan; Childers, Terry L
2003-02-01
This paper reports a series of experiments conducted to study the categorization of pictures and words. Whereas some studies reported in the past have found a picture advantage in categorization, other studies have yielded no differences between pictures and words. This paper used an experimental paradigm designed to overcome some methodological problems to examine picture-word categorization. The results of one experiment were consistent with an advantage for pictures in categorization. To identify the source of the picture advantage in categorization, two more experiments were conducted. Findings suggest that semantic relatedness may play an important role in the categorization of both pictures and words. We explain these findings by suggesting that pictures simultaneously access both their concept and visually salient features whereas words may initially access their concept and may subsequently activate features. Therefore, pictures have an advantage in categorization by offering multiple routes to semantic processing.
The modality effect of ego depletion: Auditory task modality reduces ego depletion.
Li, Qiong; Wang, Zhenhong
2016-08-01
An initial act of self-control that impairs subsequent acts of self-control is called ego depletion. The ego depletion phenomenon has been observed consistently. The modality effect refers to the effect of the presentation modality on the processing of stimuli. The modality effect has also been robustly found in a large body of research. However, no study to date has examined the modality effects of ego depletion. This issue was addressed in the current study. In Experiment 1, after all participants completed a handgrip task, participants in one group completed a visual attention regulation task and those in the other group completed an auditory attention regulation task; then all participants again completed a handgrip task. The ego depletion phenomenon was observed in both the visual and the auditory attention regulation tasks. Moreover, participants who completed the visual task performed worse on the handgrip task than participants who completed the auditory task, which indicated that there was greater ego depletion in the visual task condition. In Experiment 2, participants completed an initial task that either did or did not deplete self-control resources, and then they completed a second visual or auditory attention control task. The results indicated that depleted participants performed better on the auditory attention control task than on the visual attention control task. These findings suggest that altering task modality may reduce ego depletion.
The Categorisation of Non-Categorical Colours: A Novel Paradigm in Colour Perception
Cropper, Simon J.; Kvansakul, Jessica G. S.; Little, Daniel R.
2013-01-01
In this paper, we investigate a new paradigm for studying the development of the colour ‘signal’ by having observers discriminate and categorize the same set of controlled and calibrated cardinal coloured stimuli. Notably, in both tasks, each observer was free to decide whether two pairs of colors were the same or belonged to the same category. The use of the same stimulus set for both tasks provides, we argue, an incremental behavioural measure of colour processing from detection through discrimination to categorisation. The measured data spaces are different for the two tasks, and furthermore the categorisation data is unique to each observer. In addition, we develop a model which assumes that the principal difference between the tasks is the degree of similarity between the stimuli which has different constraints for the categorisation task compared to the discrimination task. This approach not only makes sense of the current (and associated) data but links the processes of discrimination and categorisation in a novel way and, by implication, expands upon the previous research linking categorisation to other tasks not limited to colour perception. PMID:23536899
Wang, Changming; Xiong, Shi; Hu, Xiaoping; Yao, Li; Zhang, Jiacai
2012-10-01
Categorization of images containing visual objects can be successfully recognized using single-trial electroencephalograph (EEG) measured when subjects view images. Previous studies have shown that task-related information contained in event-related potential (ERP) components could discriminate two or three categories of object images. In this study, we investigated whether four categories of objects (human faces, buildings, cats and cars) could be mutually discriminated using single-trial EEG data. Here, the EEG waveforms acquired while subjects were viewing four categories of object images were segmented into several ERP components (P1, N1, P2a and P2b), and then Fisher linear discriminant analysis (Fisher-LDA) was used to classify EEG features extracted from ERP components. Firstly, we compared the classification results using features from single ERP components, and identified that the N1 component achieved the highest classification accuracies. Secondly, we discriminated four categories of objects using combining features from multiple ERP components, and showed that combination of ERP components improved four-category classification accuracies by utilizing the complementarity of discriminative information in ERP components. These findings confirmed that four categories of object images could be discriminated with single-trial EEG and could direct us to select effective EEG features for classifying visual objects.
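The classification pipeline described in the abstract above (per-component ERP features fed to a Fisher linear discriminant) can be sketched in miniature as follows. This is an illustrative sketch only, not the authors' code: the feature values are invented, the problem is reduced to two classes rather than four, and the within-class scatter is assumed diagonal for simplicity.

```python
# Minimal two-class Fisher linear discriminant, illustrating the kind of
# classifier applied to single-trial ERP features (e.g. mean N1 amplitude
# per electrode). All feature values below are invented for illustration.

def mean_vec(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def fisher_direction(class_a, class_b):
    """Direction w maximizing between-class over within-class scatter.
    Assuming diagonal within-class scatter (a simplification),
    w_i = (mean_a_i - mean_b_i) / pooled_var_i."""
    ma, mb = mean_vec(class_a), mean_vec(class_b)
    w = []
    for i in range(len(ma)):
        var = sum((r[i] - ma[i]) ** 2 for r in class_a)
        var += sum((r[i] - mb[i]) ** 2 for r in class_b)
        var /= (len(class_a) + len(class_b) - 2)
        w.append((ma[i] - mb[i]) / (var if var > 0 else 1e-9))
    return w, ma, mb

def classify(x, w, ma, mb):
    """Project onto w and threshold at the midpoint of the class means
    (assumes the projection of ma exceeds that of mb)."""
    proj = lambda v: sum(wi * vi for wi, vi in zip(w, v))
    threshold = (proj(ma) + proj(mb)) / 2.0
    return 'A' if proj(x) >= threshold else 'B'

# Hypothetical N1-window features for two stimulus categories:
faces = [[1.0, 2.1], [1.2, 1.9], [0.9, 2.3]]  # class A
cars  = [[0.1, 0.4], [0.3, 0.2], [0.0, 0.5]]  # class B
w, ma, mb = fisher_direction(faces, cars)
print(classify([1.1, 2.0], w, ma, mb))  # prints A (a face-like trial)
```

Combining features from several ERP components, as the study does, would simply concatenate the per-component feature vectors before fitting the discriminant; a full multi-class version would fit one discriminant per pair of categories or use a shared scatter matrix.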
Nguyen, Ngan; Mulla, Ali; Nelson, Andrew J; Wilson, Timothy D
2014-01-01
The present study explored the problem-solving strategies of high- and low-spatial visualization ability learners on a novel spatial anatomy task to determine whether differences in strategies contribute to differences in task performance. The results of this study provide further insights into the processing commonalities and differences among learners beyond the classification of spatial visualization ability alone, and help elucidate what, if anything, high- and low-spatial visualization ability learners do differently while solving spatial anatomy task problems. Forty-two students completed a standardized measure of spatial visualization ability, a novel spatial anatomy task, and a questionnaire involving personal self-analysis of the processes and strategies used while performing the spatial anatomy task. Strategy reports revealed that there were different ways students approached answering the spatial anatomy task problems. However, chi-square test analyses established that differences in problem-solving strategies did not contribute to differences in task performance. Therefore, underlying spatial visualization ability is the main source of variation in spatial anatomy task performance, irrespective of strategy. In addition to scoring higher and spending less time on the anatomy task, participants with high spatial visualization ability were also more accurate when solving the task problems.
Instance-based categorization: automatic versus intentional forms of retrieval.
Neal, A; Hesketh, B; Andrews, S
1995-03-01
Two experiments are reported which attempt to disentangle the relative contribution of intentional and automatic forms of retrieval to instance-based categorization. A financial decision-making task was used in which subjects had to decide whether a bank would approve loans for a series of applicants. Experiment 1 found that categorization was sensitive to instance-specific knowledge, even when subjects had practiced using a simple rule. L. L. Jacoby's (1991) process-dissociation procedure was adapted for use in Experiment 2 to infer the relative contribution of intentional and automatic retrieval processes to categorization decisions. The results provided (1) strong evidence that intentional retrieval processes influence categorization, and (2) some preliminary evidence suggesting that automatic retrieval processes may also contribute to categorization decisions.
The Use of Computer-Generated Fading Materials to Teach Visual-Visual Non-Identity Matching Tasks
ERIC Educational Resources Information Center
Murphy, Colleen; Figueroa, Maria; Martin, Garry L.; Yu, C. T.; Figueroa, Josue
2008-01-01
Many everyday matching tasks taught to persons with developmental disabilities are visual-visual non-identity matching (VVNM) tasks, such as matching the printed word DOG to a picture of a dog, or matching a sock to a shoe. Research has shown that, for participants who have failed a VVNM prototype task, it is very difficult to teach them various…
Distinct Effects of Trial-Driven and Task Set-Related Control in Primary Visual Cortex
Vaden, Ryan J.; Visscher, Kristina M.
2015-01-01
Task sets are task-specific configurations of cognitive processes that facilitate task-appropriate reactions to stimuli. While it is established that the trial-by-trial deployment of visual attention to expected stimuli influences neural responses in primary visual cortex (V1) in a retinotopically specific manner, it is not clear whether the mechanisms that help maintain a task set over many trials also operate with similar retinotopic specificity. Here, we address this question by using BOLD fMRI to characterize how portions of V1 that are specialized for different eccentricities respond during distinct components of an attention-demanding discrimination task: cue-driven preparation for a trial, trial-driven processing, task-initiation at the beginning of a block of trials, and task-maintenance throughout a block of trials. Tasks required either unimodal attention to an auditory or a visual stimulus or selective intermodal attention to the visual or auditory component of simultaneously presented visual and auditory stimuli. We found that while the retinotopic patterns of trial-driven and cue-driven activity depended on the attended stimulus, the retinotopic patterns of task-initiation and task-maintenance activity did not. Further, only the retinotopic patterns of trial-driven activity were found to depend on the presence of intermodal distraction. Participants who performed well on the intermodal selective attention tasks showed strong task-specific modulations of both trial-driven and task-maintenance activity. Importantly, task-related modulations of trial-driven and task-maintenance activity were in opposite directions. Together, these results confirm that there are (at least) two different processes for top-down control of V1: One, working trial-by-trial, differently modulates activity across different eccentricity sectors—portions of V1 corresponding to different visual eccentricities. 
The second process works across longer epochs of task performance, and does not differ among eccentricity sectors. These results are discussed in the context of previous literature examining top-down control of visual cortical areas. PMID:26163806
Classification of visual and linguistic tasks using eye-movement features.
Coco, Moreno I; Keller, Frank
2014-03-07
The role of the task has received special attention in visual-cognition research because it can provide causal explanations of goal-directed eye-movement responses. The dependency between visual attention and task suggests that eye movements can be used to classify the task being performed. A recent study by Greene, Liu, and Wolfe (2012), however, fails to achieve accurate classification of visual tasks based on eye-movement features. In the present study, we hypothesize that tasks can be successfully classified when they differ with respect to the involvement of other cognitive domains, such as language processing. We extract the eye-movement features used by Greene et al. as well as additional features from the data of three different tasks: visual search, object naming, and scene description. First, we demonstrated that eye-movement responses make it possible to characterize the goals of these tasks. Then, we trained three different types of classifiers and predicted the task participants performed with an accuracy well above chance (a maximum of 88% for visual search). An analysis of the relative importance of features for classification accuracy reveals that just one feature, i.e., initiation time, is sufficient for above-chance performance (a maximum of 79% accuracy in object naming). Crucially, this feature is independent of task duration, which differs systematically across the three tasks we investigated. Overall, the best task classification performance was obtained with a set of seven features that included both spatial information (e.g., entropy of attention allocation) and temporal components (e.g., total fixation on objects) of the eye-movement record. This result confirms the task-dependent allocation of visual attention and extends previous work by showing that task classification is possible when tasks differ in the cognitive processes involved (purely visual tasks such as search vs. communicative tasks such as scene description).
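The finding above, that a single feature such as initiation time supports above-chance task classification, can be illustrated with a toy threshold classifier. The initiation times below are invented for illustration and the rule is far simpler than the classifiers used in the study:

```python
# Toy single-feature classifier: predict the task (visual search vs. scene
# description) from initiation time alone. All numbers are hypothetical.

def train_threshold(times_a, times_b):
    """Midpoint between the two class means; assumes class A is faster."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(times_a) + mean(times_b)) / 2.0

def predict(time, threshold):
    return 'search' if time < threshold else 'description'

# Hypothetical per-trial initiation times (ms):
search_times = [180, 210, 195, 205]
description_times = [320, 290, 340, 310]
thr = train_threshold(search_times, description_times)

correct = sum(predict(t, thr) == 'search' for t in search_times)
correct += sum(predict(t, thr) == 'description' for t in description_times)
accuracy = correct / (len(search_times) + len(description_times))
print(accuracy)  # 1.0 on this cleanly separated toy sample
```

Real data would of course overlap, and the study's best results came from combining seven spatial and temporal features in trained classifiers rather than a single threshold.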
Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.
Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo
2013-02-16
We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common to all three tasks within each modality, or to an interaction between the processing of task-relevant and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of irrelevant speech that would presumably distract from the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.
Use of evidence in a categorization task: analytic and holistic processing modes.
Greco, Alberto; Moretti, Stefania
2017-11-01
Category learning performance can be influenced by many contextual factors, but the effects of these factors are not the same for all learners. The present study suggests that these differences can be due to the different ways evidence is used, according to two main basic modalities of processing information, analytically or holistically. In order to test the impact of the information provided, an inductive rule-based task was designed, in which feature salience and comparison informativeness between examples of two categories were manipulated during the learning phases, by introducing and progressively reducing some perceptual biases. To gather data on processing modalities, we devised the Active Feature Composition task, a production task that does not require classifying new items but reproducing them by combining features. At the end, an explicit rating task was performed, which entailed assessing the accuracy of a set of possible categorization rules. A combined analysis of the data collected with these two different tests enabled profiling participants in regard to the kind of processing modality, the structure of representations and the quality of categorical judgments. Results showed that despite the fact that the information provided was the same for all participants, those who adopted analytic processing better exploited evidence and performed more accurately, whereas with holistic processing categorization was perfectly possible but inaccurate. Finally, the cognitive implications of the proposed procedure, with regard to involved processes and representations, are discussed.
Same-Different Categorization in Rats
ERIC Educational Resources Information Center
Wasserman, Edward A.; Castro, Leyre; Freeman, John H.
2012-01-01
Same-different categorization is a fundamental feat of human cognition. Although birds and nonhuman primates readily learn same-different discriminations and successfully transfer them to novel stimuli, no such demonstration exists for rats. Using a spatial discrimination learning task, we show that rats can both learn to discriminate arrays of…
Mehrotra, Abhishek; Sklar, David P; Tayal, Vivek S; Kocher, Keith E; Handel, Daniel A; Myles Riner, R
2010-12-01
This article is drawn from a report created for the American College of Emergency Physicians (ACEP) Emergency Department (ED) Categorization Task Force and also reflects the proceedings of a breakout session, "Beyond ED Categorization-Matching Networks to Patient Needs," at the 2010 Academic Emergency Medicine consensus conference, "Beyond Regionalization: Integrated Networks of Emergency Care." The authors describe a brief history of the significant national and state efforts at categorization and suggest reasons why many of these efforts failed to persevere or gain wider implementation. The history of efforts to categorize hospital (and ED) emergency services demonstrates recognition of the potential benefits of categorization, but reflects repeated failures to implement full categorization systems or limited excursions into categorization through licensing of EDs or designation of receiving and referral facilities. An understanding of the history of hospital and ED categorization could better inform current efforts to develop categorization schemes and processes. 2010 by the Society for Academic Emergency Medicine.
Use of nontraditional flight displays for the reduction of central visual overload in the cockpit
NASA Technical Reports Server (NTRS)
Weinstein, Lisa F.; Wickens, Christopher D.
1992-01-01
The use of nontraditional flight displays to reduce visual overload in the cockpit was investigated in a dual-task paradigm. Three flight displays (central, peripheral, and ecological) were used between subjects for the primary tasks, and the type of secondary task (object identification or motion judgment) and its location in the visual field (central or peripheral) were manipulated within groups. The two visual-spatial tasks were time-shared to study the possibility of a compatibility mapping between task type and task location. The ecological display was found to allow for the most efficient time-sharing.
Are Categorical Spatial Relations Encoded by Shifting Visual Attention between Objects?
ERIC Educational Resources Information Center
Yuan, Lei; Uttal, David; Franconeri, Steven
2016-01-01
Perceiving not just values, but relations between values, is critical to human cognition. We tested the predictions of a proposed mechanism for processing categorical spatial relations between two objects--the "shift account" of relation processing--which states that relations such as "above" or "below" are extracted…
Prete, Giulia; Fabri, Mara; Foschi, Nicoletta; Tommasi, Luca
2016-12-17
We investigated hemispheric asymmetries in categorization of face gender by means of a divided visual field paradigm, in which female and male faces were presented unilaterally for 150ms each. A group of 60 healthy participants (30 males) and a male split-brain patient (D.D.C.) were asked to categorize the gender of the stimuli. Healthy participants categorized male faces presented in the right visual field (RVF) better and faster than when presented in the left visual field (LVF), and female faces presented in the LVF than in the RVF, independently of the participants' sex. Surprisingly, the recognition rates of D.D.C. were at chance levels - and significantly lower than those of the healthy participants - for both female and male faces presented in the RVF, as well as for female faces presented in the LVF. His performance was higher than expected by chance - and did not differ from controls - only for male faces presented in the LVF. The residual right-hemispheric ability of the split-brain patient in categorizing male faces reveals an own-gender bias lateralized in the right hemisphere, in line with the rightward own-identity and own-age bias previously shown in split-brain patients. The gender-contingent hemispheric dominance found in healthy participants confirms the previously shown right-hemispheric superiority in recognizing female faces, and also reveals a left-hemispheric superiority in recognizing male faces, adding important evidence of hemispheric imbalance in the field of face and gender perception. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
Foraging in Semantic Fields: How We Search Through Memory.
Hills, Thomas T; Todd, Peter M; Jones, Michael N
2015-07-01
When searching for concepts in memory--as in the verbal fluency task of naming all the animals one can think of--people appear to explore internal mental representations in much the same way that animals forage in physical space: searching locally within patches of information before transitioning globally between patches. However, the definition of the patches being searched in mental space is not well specified. Do we search by activating explicit predefined categories (e.g., pets) and recall items from within that category (categorical search), or do we activate and recall a connected sequence of individual items without using categorical information, with each item recalled leading to the retrieval of an associated item in a stream (associative search), or both? Using semantic representations in a search of associative memory framework and data from the animal fluency task, we tested competing hypotheses based on associative and categorical search models. Associative, but not categorical, patch transitions took longer to make than position-matched productions, suggesting that categorical transitions were not true transitions. There was also clear evidence of associative search even within categorical patch boundaries. Furthermore, most individuals' behavior was best explained by an associative search model without the addition of categorical information. Thus, our results support a search process that does not use categorical information, but for which patch boundaries shift with each recall and local search is well described by a random walk in semantic space, with switches to new regions of the semantic space when the current region is depleted. Copyright © 2015 Cognitive Science Society, Inc.
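The patch-leaving dynamic described above — local recall within a region of semantic space, with a global jump when the region is depleted — can be illustrated with a toy simulation. This is not the authors' associative search model (which operates on learned semantic representations); items here are invented positions on a one-dimensional "semantic line".

```python
# Toy sketch of patch-based memory foraging: recall the nearest
# unrecalled item within the current "patch"; when none remains
# within reach, jump to a random unrecalled item (a patch switch).
import random

def forage(items, patch_width=1.0, n_recalls=6, seed=1):
    """Greedy nearest-neighbor recall with jumps on patch depletion."""
    random.seed(seed)
    remaining = sorted(items)
    pos = remaining.pop(0)          # start at one end of the space
    recalled = [pos]
    for _ in range(n_recalls - 1):
        near = [x for x in remaining if abs(x - pos) <= patch_width]
        if near:                    # local (within-patch) transition
            pos = min(near, key=lambda x: abs(x - pos))
        else:                       # patch depleted: global jump
            pos = random.choice(remaining)
        remaining.remove(pos)
        recalled.append(pos)
    return recalled

# Two clusters ("patches") of items on a semantic line: the walk
# exhausts the first cluster before switching to the second.
print(forage([0.0, 0.3, 0.6, 5.0, 5.2, 5.5]))
```

In the study's terms, the long inter-retrieval times diagnostic of true patch transitions correspond to the `random.choice` branch, where no nearby item remains.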
MacLean, Mary H; Giesbrecht, Barry
2015-07-01
Task-relevant and physically salient features influence visual selective attention. In the present study, we investigated the influence of task-irrelevant and physically nonsalient reward-associated features on visual selective attention. Two hypotheses were tested: One predicts that the effects of target-defining task-relevant and task-irrelevant features interact to modulate visual selection; the other predicts that visual selection is determined by the independent combination of relevant and irrelevant feature effects. These alternatives were tested using a visual search task that contained multiple targets, placing a high demand on the need for selectivity, and that was data-limited and required unspeeded responses, emphasizing early perceptual selection processes. One week prior to the visual search task, participants completed a training task in which they learned to associate particular colors with a specific reward value. In the search task, the reward-associated colors were presented surrounding targets and distractors, but were neither physically salient nor task-relevant. In two experiments, the irrelevant reward-associated features influenced performance, but only when they were presented in a task-relevant location. The costs induced by the irrelevant reward-associated features were greater when they oriented attention to a target than to a distractor. In a third experiment, we examined the effects of selection history in the absence of reward history and found that the interaction between task relevance and selection history differed, relative to when the features had previously been associated with reward. The results indicate that under conditions that demand highly efficient perceptual selection, physically nonsalient task-irrelevant and task-relevant factors interact to influence visual selective attention.
Dementia alters standing postural adaptation during a visual search task in older adult men.
Jor'dan, Azizah J; McCarten, J Riley; Rottunda, Susan; Stoffregen, Thomas A; Manor, Brad; Wade, Michael G
2015-04-23
This study investigated the effects of dementia on standing postural adaptation during performance of a visual search task. We recruited 16 older adults with dementia and 15 without dementia. Postural sway was assessed by recording medial-lateral (ML) and anterior-posterior (AP) center-of-pressure when standing with and without a visual search task; i.e., counting target letter frequency within a block of displayed randomized letters. ML sway variability was significantly higher in those with dementia during visual search as compared to those without dementia and compared to both groups during the control condition. AP sway variability was significantly greater in those with dementia as compared to those without dementia, irrespective of task condition. In the ML direction, the absolute and percent change in sway variability between the control condition and visual search (i.e., postural adaptation) was greater in those with dementia as compared to those without. In contrast, postural adaptation to visual search was similar between groups in the AP direction. As compared to those without dementia, those with dementia identified fewer letters on the visual task. In the non-dementia group only, greater increases in postural adaptation in both the ML and AP directions correlated with lower performance on the visual task. The observed relationship between postural adaptation during the visual search task and visual search task performance--in the non-dementia group only--suggests a critical link between perception and action. Dementia reduces the capacity to perform a visual-based task while standing and thus, appears to disrupt this perception-action synergy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Common angle plots as perception-true visualizations of categorical associations.
Hofmann, Heike; Vendettuoli, Marie
2013-12-01
Visualizations are great tools of communication: they summarize findings and quickly convey main messages to our audience. As designers of charts we have to make sure that information is shown with a minimum of distortion. We also have to consider illusions and other perceptual limitations of our audience. In this paper we discuss the effect and strength of the line width illusion, a Müller-Lyer type illusion, on designs related to displaying associations between categorical variables. Parallel sets and hammock plots are both affected by line width illusions. We introduce the common-angle plot as an alternative method for displaying categorical data in a manner that minimizes the effect from perceptual illusions. Results from user studies both highlight the need for addressing line-width illusions in displays and provide evidence that common angle charts successfully resolve this issue.
Seemüller, Anna; Fiehler, Katja; Rösler, Frank
2011-01-01
The present study investigated whether visual and kinesthetic stimuli are stored as multisensory or modality-specific representations in unimodal and crossmodal working memory tasks. To this end, angle-shaped movement trajectories were presented to 16 subjects in delayed matching-to-sample tasks either visually or kinesthetically during encoding and recognition. During the retention interval, a secondary visual or kinesthetic interference task was inserted either immediately or with a delay after encoding. The modality of the interference task interacted significantly with the encoding modality. After visual encoding, memory was more impaired by a visual than by a kinesthetic secondary task, while after kinesthetic encoding the pattern was reversed. The time when the secondary task had to be performed interacted with the encoding modality as well. For visual encoding, memory was more impaired, when the secondary task had to be performed at the beginning of the retention interval. In contrast, memory after kinesthetic encoding was more affected, when the secondary task was introduced later in the retention interval. The findings suggest that working memory traces are maintained in a modality-specific format characterized by distinct consolidation processes that take longer after kinesthetic than after visual encoding. Copyright © 2010 Elsevier B.V. All rights reserved.
Defever, Emmy; Reynvoet, Bert; Gebuis, Titia
2013-10-01
Researchers investigating numerosity processing manipulate the visual stimulus properties (e.g., surface). This is done to control for the confound between numerosity and its visual properties and should allow the examination of pure number processes. Nevertheless, several studies have shown that, despite different visual controls, visual cues continued to exert their influence on numerosity judgments. This study, therefore, investigated whether the impact of the visual stimulus manipulations on numerosity judgments is dependent on the task at hand (comparison task vs. same-different task) and whether this impact changes throughout development. In addition, we examined whether the influence of visual stimulus manipulations on numerosity judgments plays a role in the relation between performance on numerosity tasks and mathematics achievement. Our findings confirmed that the visual stimulus manipulations affect numerosity judgments; more important, we found that these influences changed with increasing age and differed between the comparison and the same-different tasks. Consequently, direct comparisons between numerosity studies using different tasks and age groups are difficult. No meaningful relationship between the performance on the comparison and same-different tasks and mathematics achievement was found in typically developing children, nor did we find consistent differences between children with and without mathematical learning disability (MLD). Copyright © 2013 Elsevier Inc. All rights reserved.
Visual search in a forced-choice paradigm
NASA Technical Reports Server (NTRS)
Holmgren, J. E.
1974-01-01
The processing of visual information was investigated in the context of two visual search tasks. The first was a forced-choice task in which one of two alternative letters appeared in a visual display of from one to five letters. The second task included trials on which neither of the two alternatives was present in the display. Search rates were estimated from the slopes of best linear fits to response latencies plotted as a function of the number of items in the visual display. These rates were found to be much slower than those estimated in yes-no search tasks. This result was interpreted as indicating that the processes underlying visual search in yes-no and forced-choice tasks are not the same.
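The slope-based estimate used above — fitting a line to response latency as a function of display size, with the slope giving the per-item search rate — can be sketched as follows. The latency values are invented for illustration, not the study's data.

```python
# Estimate a visual search rate as the slope of the best linear fit
# to response latency (ms) vs. number of items in the display.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

set_sizes = [1, 2, 3, 4, 5]            # items in the visual display
latencies = [450, 480, 510, 540, 570]  # hypothetical mean RTs in ms

rate, base = linear_fit(set_sizes, latencies)
print(rate)   # search rate: 30 ms per additional item
```

A shallower slope (faster rate) in yes-no search than in forced-choice search is exactly the comparison the abstract describes.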
Rioux, Camille; Lafraire, Jérémie; Picard, Delphine
2018-01-01
The present research focuses on the effectiveness of visual exposure to vegetables in reducing food neophobia and pickiness among young children. We tested the hypotheses that (1) simple visual exposure to vegetables leads to an increase in the consumption of this food category, (2) diverse visual exposure to vegetables (i.e., vegetables varying in color are shown to children) leads to a greater increase in the consumption of this food category than classical exposure paradigms (i.e., the same mode of presentation of a given food across exposure sessions) and (3) visual exposure to vegetables leads to an increase in the consumption of this food category through a mediating effect of an increase in ease of categorization. We recruited 70 children aged 3-6 years who performed a 4-week study consisting of three phases: a 2-week visual exposure phase where place mats with pictures of vegetables were set on tables in school cafeterias, and pre and post intervention phases where willingness to try vegetables as well as cognitive performances were assessed for each child. Results indicated that visual exposure led to an increased consumption of exposed and non-exposed vegetables after the intervention period. Nevertheless, the exposure intervention where vegetables varying in color were shown to children was no more effective than simple exposure. Finally, results showed that greater ease of categorization was associated with a larger effect of the exposure manipulation. The findings suggest that vegetable pictures might help parents to deal with some of the difficulties associated with the introduction of novel vegetables and furthermore that focusing on conceptual development could be an efficient way to tackle food neophobia and pickiness. Copyright © 2017 Elsevier Ltd. All rights reserved.
Categorisation of Colour Terms Using New Validation Tools: A Case Study and Implications
Arbab, Shabnam; Brindle, Jonathan A.; Matusiak, Barbara S.; Klöckner, Christian A.
2018-01-01
This article elaborates on the results of a field experiment conducted among speakers of the Chakali language, spoken in northern Ghana. In the original study, the Color-aid Corporation Chart was used to perform the focal task in which consultants were asked to point at a single colour tile on the chart. However, data from the focal task could not be analysed since the Color-aid tiles had not yet been converted into numerical values set forth by the Commission internationale de l’éclairage (CIE). In this study, the full set of 314 Color-aid tiles were measured for chromaticity and converted into the CIE values at the Daylight Laboratory of the Norwegian University of Science and Technology. This article presents the conversion methodology and makes the results of the measurements available in the Online Appendix. We argue that some visual-perception terms cannot be reliably ascribed to colour categories established by the Color-aid Corporation. This suggests that the ideophonic expressions in the dataset do not denote ‘colours’, as categorised in the Color-aid system, as it was impossible to average the consultants’ data into a CIE chromaticity diagram, illustrate the phenomena on the Natural Colour System (NCS) Circle and Triangle diagrams, and conduct a statistical analysis. One of the implications of this study is that a line between a visual-perception term and a colour term could be systematically established using a method with predefined categorical thresholds. PMID:29755718
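The final step of the conversion described above — reducing measured CIE tristimulus values (X, Y, Z) to chromaticity coordinates (x, y) for plotting on the CIE diagram — is a standard normalization. The tile measurement itself is not shown, and the sample values below are illustrative (the XYZ of a pure sRGB red primary), not a Color-aid measurement.

```python
# CIE 1931 chromaticity from tristimulus values: each coordinate is the
# corresponding tristimulus value normalized by their sum.

def xyz_to_xy(X, Y, Z):
    """Project (X, Y, Z) onto the CIE 1931 chromaticity plane."""
    s = X + Y + Z
    return X / s, Y / s

# Hypothetical measurement: tristimulus values of the sRGB red primary.
x, y = xyz_to_xy(41.24, 21.26, 1.93)
print(round(x, 3), round(y, 3))
```

Averaging consultants' responses, as the article discusses, would then amount to averaging such (x, y) points before plotting.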
Fujimaki, Norio; Hayakawa, Tomoe; Ihara, Aya; Matani, Ayumu; Wei, Qiang; Terazono, Yasushi; Murata, Tsutomu
2010-10-01
A masked priming paradigm has been used to measure unconscious and automatic context effects on the processing of words. However, its spatiotemporal neural basis has not yet been clarified. To test the hypothesis that masked repetition priming causes enhancement of neural activation, we conducted a magnetoencephalography experiment in which a prime was visually presented for a short duration (50 ms), preceded by a mask pattern, and followed by a target word that was represented by a Japanese katakana syllabogram. The prime, which was identical to the target, was represented by another hiragana syllabogram in the "Repeated" condition, whereas it was a string of unreadable pseudocharacters in the "Unrepeated" condition. Subjects executed a categorical decision task on the target. Activation was significantly larger for the Repeated condition than for the Unrepeated condition at a time window of 150-250 ms in the right occipital area, 200-250 ms in the bilateral ventral occipitotemporal areas, and 200-250 ms and 200-300 ms in the left and right anterior temporal areas, respectively. These areas have been reported to be related to processing of visual-form/orthography and lexico-semantics, and the enhanced activation supports the hypothesis. However, the absence of the priming effect in the areas related to phonological processing implies that automatic phonological priming effect depends on task requirements. 2010 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
Relative Contribution of Perception/Cognition and Language on Spatial Categorization
ERIC Educational Resources Information Center
Choi, Soonja; Hattrup, Kate
2012-01-01
This study investigated the relative contribution of perception/cognition and language-specific semantics in nonverbal categorization of spatial relations. English and Korean speakers completed a video-based similarity judgment task involving containment, support, tight fit, and loose fit. Both perception/cognition and language served as resources…
Assessing Expertise in Introductory Physics Using Categorization Task
ERIC Educational Resources Information Center
Mason, Andrew; Singh, Chandralekha
2011-01-01
The ability to categorize problems based upon underlying principles, rather than surface features or contexts, is considered one of several proxy predictors of expertise in problem solving. With inspiration from the classic study by Chi, Feltovich, and Glaser, we assess the distribution of expertise among introductory physics students by asking…
Influence of Familiar Features on Diagnosis: Instantiated Features in an Applied Setting
ERIC Educational Resources Information Center
Dore, Kelly L.; Brooks, Lee R.; Weaver, Bruce; Norman, Geoffrey R.
2012-01-01
Medical diagnosis can be viewed as a categorization task. There are two mechanisms whereby humans make categorical judgments: "analytical reasoning," based on explicit consideration of features and "nonanalytical reasoning," an unconscious holistic process of matching against prior exemplars. However, there is evidence that prior experience can…
Controlling the spotlight of attention: visual span size and flexibility in schizophrenia.
Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M
2011-10-01
The current study investigated the size and flexible control of visual span among patients with schizophrenia during visual search performance. Visual span is the region of the visual field from which one extracts information during a single eye fixation, and a larger visual span size is linked to more efficient search performance. Therefore, a reduced visual span may explain patients' impaired performance on search tasks. The gaze-contingent moving window paradigm was used to estimate the visual span size of patients and healthy participants while they performed two different search tasks. In addition, changes in visual span size were measured as a function of two manipulations of task difficulty: target-distractor similarity and stimulus familiarity. Patients with schizophrenia searched more slowly across both tasks and conditions. Patients also demonstrated smaller visual span sizes on the easier search condition in each task. Moreover, healthy controls' visual span size increased as target discriminability or distractor familiarity increased. This modulation of visual span size, however, was reduced or not observed among patients. The implications of the present findings, with regard to previously reported visual search deficits, and other functional and structural abnormalities associated with schizophrenia, are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
Seger, Carol A.; Dennison, Christina S.; Lopez-Paniagua, Dan; Peterson, Erik J.; Roark, Aubrey A.
2011-01-01
We identified factors leading to hippocampal and basal ganglia recruitment during categorization learning. Subjects alternated between blocks of a standard trial and error category learning task and a subjective judgment task. In the subjective judgments task subjects categorized the stimulus and then instead of receiving feedback they indicated the basis of their response using 4 options: Remember: Conscious episodic memory of previous trials. Know-Automatic: Automatic, rapid response accompanied by conscious awareness of category membership. Know-Intuition: A “gut feeling” without fully conscious knowledge of category membership. Guess: Guessing. In addition, new stimuli were introduced throughout the experiment to examine effects of novelty. Categorization overall recruited both the basal ganglia and posterior hippocampus. However, basal ganglia activity was found during Know judgments (both Automatic and Intuition), whereas posterior hippocampus activity was found during Remember judgments. Granger causality mapping indicated interactions between the basal ganglia and hippocampus, with the putamen exerting directed influence on the posterior hippocampus, which in turn exerted directed influence on the posterior caudate nucleus. We also found a region of anterior hippocampus that showed decreased activity relative to baseline during categorization overall, and showed a strong novelty effect. Our results indicate that subjective measures may be effective in dissociating basal ganglia from hippocampal dependent learning, and that the basal ganglia are involved in both conscious and unconscious learning. They also indicate a dissociation within the hippocampus, in which the anterior regions are sensitive to novelty, and the posterior regions are involved in memory based categorization learning. PMID:21255655
Observers' cognitive states modulate how visual inputs relate to gaze control.
Kardan, Omid; Henderson, John M; Yourganov, Grigori; Berman, Marc G
2016-09-01
Previous research has shown that eye-movements change depending on both the visual features of our environment, and the viewer's top-down knowledge. One important question that is unclear is the degree to which the visual goals of the viewer modulate how visual features of scenes guide eye-movements. Here, we propose a systematic framework to investigate this question. In our study, participants performed 3 different visual tasks on 135 scenes: search, memorization, and aesthetic judgment, while their eye-movements were tracked. Canonical correlation analyses showed that eye-movements were reliably more related to low-level visual features at fixations during the visual search task compared to the aesthetic judgment and scene memorization tasks. Different visual features also had different relevance to eye-movements between tasks. This modulation of the relationship between visual features and eye-movements by task was also demonstrated with classification analyses, where classifiers were trained to predict the viewing task based on eye movements and visual features at fixations. Feature loadings showed that the visual features at fixations could signal task differences independent of temporal and spatial properties of eye-movements. When classifying across participants, edge density and saliency at fixations were as important as eye-movements in the successful prediction of task, with entropy and hue also being significant, but with smaller effect sizes. When classifying within participants, brightness and saturation were also significant contributors. Canonical correlation and classification results, together with a test of moderation versus mediation, suggest that the cognitive state of the observer moderates the relationship between stimulus-driven visual features and eye-movements. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
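The kind of stimulus-driven coupling the study quantifies per task can be illustrated in miniature with a single feature-behavior correlation. This is a toy Pearson correlation between one visual feature at fixation (edge density) and one eye-movement measure (fixation duration), not the authors' canonical correlation analysis, and the data are invented.

```python
# Toy sketch: correlate a low-level visual feature at fixation with an
# eye-movement measure; the study's CCA generalizes this to many
# features and measures at once. All values are hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

edge_density      = [0.2, 0.4, 0.5, 0.7, 0.9]   # feature at fixation
fixation_duration = [215, 225, 240, 265, 275]   # ms

r = pearson(edge_density, fixation_duration)
print(round(r, 3))
```

Under the study's moderation account, this correlation would be reliably stronger during search than during aesthetic judgment or memorization.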
Task set induces dynamic reallocation of resources in visual short-term memory.
Sheremata, Summer L; Shomstein, Sarah
2017-08-01
Successful interaction with the environment requires the ability to flexibly allocate resources to different locations in the visual field. Recent evidence suggests that visual short-term memory (VSTM) resources are distributed asymmetrically across the visual field based upon task demands. Here, we propose that context, rather than the stimulus itself, determines asymmetrical distribution of VSTM resources. To test whether context modulates the reallocation of resources to the right visual field, task set, defined by memory-load, was manipulated to influence visual short-term memory performance. Performance was measured for single-feature objects embedded within predominantly single- or two-feature memory blocks. Therefore, context was varied to determine whether task set directly predicts changes in visual field biases. In accord with the dynamic reallocation of resources hypothesis, task set, rather than aspects of the physical stimulus, drove improvements in performance in the right visual field. Our results show, for the first time, that preparation for upcoming memory demands directly determines how resources are allocated across the visual field.
Testing the distinctiveness of visual imagery and motor imagery in a reach paradigm.
Gabbard, Carl; Ammar, Diala; Cordova, Alberto
2009-01-01
We examined the distinctiveness of motor imagery (MI) and visual imagery (VI) in the context of perceived reachability. The aim was to explore the notion that the two imagery modes have distinctive processing properties tied to the two-visual-system hypothesis. The experiment included an interference tactic whereby participants completed two tasks at the same time: a visual- or motor-interference task combined with an MI- or VI-reaching task. We expected increased error would occur when the imaged task and the interference task were matched (e.g., MI with the motor task), suggesting an association based on the assumption that the two tasks were in competition for space on the same processing pathway. Alternatively, if there were no differences, dissociation could be inferred. Significant increases in the number of errors were found when the modalities for the imaged (both MI and VI) task and the interference task were matched. Therefore, it appears that MI and VI in the context of perceived reachability recruit different processing mechanisms.
Sensory and non-sensory factors and the concept of externality in obese subjects.
Gardner, R M; Brake, S J; Reyes, B; Maestas, D
1983-08-01
9 obese and 9 normal subjects performed a psychophysical task in which food- or non-food-related stimuli were briefly flashed tachistoscopically at a speed and intensity near the visual threshold. A signal was presented on one-half the trials and noise only on the other one-half of the trials. Using signal detection theory methodology, separate measures of sensory sensitivity (d') and response bias (beta) were calculated. No differences were noted between obese and normal subjects on measures of sensory sensitivity, but significant differences were found in response bias. Obese subjects had consistently lower response criteria than normal ones. Analysis for subjects categorized by whether they were restrained or unrestrained eaters gave findings identical to those for obese and normal. The importance of using a methodology that separates sensory and non-sensory factors in research on obesity is discussed.
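The d' and beta measures above are standard equal-variance signal detection quantities and can be computed directly from hit and false-alarm rates. A minimal sketch with illustrative rates (not the study's data):

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Return d' (sensitivity) and beta (response bias) from hit and
    false-alarm rates, assuming the equal-variance Gaussian SDT model."""
    nd = NormalDist()
    z_h = nd.inv_cdf(hit_rate)   # z-transform of hit rate
    z_f = nd.inv_cdf(fa_rate)    # z-transform of false-alarm rate
    d_prime = z_h - z_f
    # beta is the likelihood ratio at the criterion:
    # signal-distribution density over noise-distribution density
    beta = nd.pdf(z_h) / nd.pdf(z_f)
    return d_prime, beta

d, b = sdt_measures(0.80, 0.20)
```

With these symmetric rates d' is about 1.68 and beta is 1 (an unbiased criterion); the consistently lower criteria reported for obese subjects would appear as beta below 1, independent of d'.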
Information processing and dynamics in minimally cognitive agents.
Beer, Randall D; Williams, Paul L
2015-01-01
There has been considerable debate in the literature about the relative merits of information processing versus dynamical approaches to understanding cognitive processes. In this article, we explore the relationship between these two styles of explanation using a model agent evolved to solve a relational categorization task. Specifically, we separately analyze the operation of this agent using the mathematical tools of information theory and dynamical systems theory. Information-theoretic analysis reveals how task-relevant information flows through the system to be combined into a categorization decision. Dynamical analysis reveals the key geometrical and temporal interrelationships underlying the categorization decision. Finally, we propose a framework for directly relating these two different styles of explanation and discuss the possible implications of our analysis for some of the ongoing debates in cognitive science. Copyright © 2014 Cognitive Science Society, Inc.
Multitasking During Degraded Speech Recognition in School-Age Children
Grieco-Calub, Tina M.; Ward, Kristina M.; Brehm, Laurel
2017-01-01
Multitasking requires individuals to allocate their cognitive resources across different tasks. The purpose of the current study was to assess school-age children’s multitasking abilities during degraded speech recognition. Children (8 to 12 years old) completed a dual-task paradigm including a sentence recognition (primary) task containing speech that was either unprocessed or noise-band vocoded with 8, 6, or 4 spectral channels and a visual monitoring (secondary) task. Children’s accuracy and reaction time on the visual monitoring task was quantified during the dual-task paradigm in each condition of the primary task and compared with single-task performance. Children experienced dual-task costs in the 6- and 4-channel conditions of the primary speech recognition task with decreased accuracy on the visual monitoring task relative to baseline performance. In all conditions, children’s dual-task performance on the visual monitoring task was strongly predicted by their single-task (baseline) performance on the task. Results suggest that children’s proficiency with the secondary task contributes to the magnitude of dual-task costs while multitasking during degraded speech recognition. PMID:28105890
Neural Trade-Offs between Recognizing and Categorizing Own- and Other-Race Faces
Liu, Jiangang; Wang, Zhe; Feng, Lu; Li, Jun; Tian, Jie; Lee, Kang
2015-01-01
Behavioral research has suggested a trade-off relationship between individual recognition and race categorization of own- and other-race faces, which is an important behavioral marker of face processing expertise. However, little is known about the neural mechanisms underlying this trade-off. Using functional magnetic resonance imaging (fMRI) methodology, we concurrently asked participants to recognize and categorize own- and other-race faces to examine the neural correlates of this trade-off relationship. We found that for other-race faces, the fusiform face area (FFA) and occipital face area (OFA) responded more to recognition than categorization, whereas for own-race faces, the responses were equal for the 2 tasks. The right superior temporal sulcus (STS) responses were the opposite to those of the FFA and OFA. Further, recognition enhanced the functional connectivity from the right FFA to the right STS, whereas categorization enhanced the functional connectivity from the right OFA to the right STS. The modulatory effects of these 2 couplings were negatively correlated. Our findings suggested that within the core face processing network, although recognizing and categorizing own- and other-race faces activated the same neural substrates, there existed neural trade-offs whereby their activations and functional connectivities were modulated by face race type and task demand due to one's differential processing expertise with own- and other-race faces. PMID:24591523
2013-01-01
Background Autism Spectrum Conditions (ASC) are a set of pervasive neurodevelopmental conditions characterized by a wide range of lifelong signs and symptoms. Recent explanatory models of autism propose abnormal neural connectivity and are supported by studies showing decreased interhemispheric coherence in individuals with ASC. The first aim of this study was to test the hypothesis of reduced interhemispheric coherence in ASC, and secondly to investigate specific effects of task performance on interhemispheric coherence in ASC. Methods We analyzed electroencephalography (EEG) data from 15 participants with ASC and 15 typical controls, using Wavelet Transform Coherence (WTC) to calculate interhemispheric coherence during face and chair matching tasks, for EEG frequencies from 5 to 40 Hz and during the first 400 ms post-stimulus onset. Results Results demonstrate a reduction of interhemispheric coherence in the ASC group, relative to the control group, in both tasks and for all electrode pairs studied. For both tasks, group differences were generally observed after around 150 ms and at frequencies lower than 13 Hz. Regarding within-group task comparisons, while the control group presented differences in interhemispheric coherence between faces and chairs tasks at various electrode pairs (FT7-FT8, TP7-TP8, P7-P8), such differences were only seen for one electrode pair in the ASC group (T7-T8). No significant differences in EEG power spectra were observed between groups. Conclusions Interhemispheric coherence is reduced in people with ASC, in a time and frequency specific manner, during visual perception and categorization of both social and inanimate stimuli and this reduction in coherence is widely dispersed across the brain. Results of within-group task comparisons may reflect an impairment in task differentiation in people with ASC relative to typically developing individuals. 
Overall, the results of this research support the value of WTC in examining the time-frequency microstructure of task-related interhemispheric EEG coherence in people with ASC. PMID:23311570
Effects of Stimulus Duration and Choice Delay on Visual Categorization in Pigeons
ERIC Educational Resources Information Center
Lazareva, Olga F.; Wasserman, Edward A.
2009-01-01
We [Lazareva, O. F., Freiburger, K. L., & Wasserman, E. A. (2004). "Pigeons concurrently categorize photographs at both basic and superordinate levels." "Psychonomic Bulletin and Review," 11, 1111-1117] previously trained four pigeons to classify color photographs into their basic-level categories (cars, chairs, flowers, or people) or into their…
Spatial and temporal features of superordinate semantic processing studied with fMRI and EEG.
Costanzo, Michelle E; McArdle, Joseph J; Swett, Bruce; Nechaev, Vladimir; Kemeny, Stefan; Xu, Jiang; Braun, Allen R
2013-01-01
The relationships between the anatomical representation of semantic knowledge in the human brain and the timing of neurophysiological mechanisms involved in manipulating such information remain unclear. This is the case for superordinate semantic categorization: the extraction of general features shared by broad classes of exemplars (e.g., living vs. non-living semantic categories). We proposed that, because of the abstract nature of this information, input from diverse input modalities (visual or auditory, lexical or non-lexical) should converge and be processed in the same regions of the brain, at similar time scales during superordinate categorization, specifically in a network of heteromodal regions, and late in the course of the categorization process. In order to test this hypothesis, we utilized electroencephalography and event related potentials (EEG/ERP) with functional magnetic resonance imaging (fMRI) to characterize subjects' responses as they made superordinate categorical decisions (living vs. non-living) about objects presented as visual pictures or auditory words. Our results reveal that, consistent with our hypothesis, during the course of superordinate categorization, information provided by these diverse inputs appears to converge in both time and space: fMRI showed that heteromodal areas of the parietal and temporal cortices are active during categorization of both classes of stimuli. The ERP results suggest that superordinate categorization is reflected as a late positive component (LPC) with a parietal distribution and long latencies for both stimulus types. Within the areas and times in which modality independent responses were identified, some differences between living and non-living categories were observed, with a more widespread spatial extent and longer latency responses for categorization of non-living items.
Mental Effort in Binary Categorization Aided by Binary Cues
ERIC Educational Resources Information Center
Botzer, Assaf; Meyer, Joachim; Parmet, Yisrael
2013-01-01
Binary cueing systems assist in many tasks, often alerting people about potential hazards (such as alarms and alerts). We investigate whether cues, besides possibly improving decision accuracy, also affect the effort users invest in tasks and whether the required effort in tasks affects the responses to cues. We developed a novel experimental tool…
The Effects of Distraction on Cognitive Task Performance during Toddlerhood
ERIC Educational Resources Information Center
Wyss, Nancy M.; Kannass, Kathleen N.; Haden, Catherine A.
2013-01-01
We investigated the effects of distraction on attention and task performance during toddlerhood. Thirty toddlers (24- to 26-month-olds) completed different tasks (2 of each: categorization, problem solving, memory, free play) in one of two conditions: No Distraction or Distraction. The results revealed that the distractor had varying effects on…
Logistic Regression Modeling for Predicting Task-Related ICT Use in Teaching
ERIC Educational Resources Information Center
Askar, Petek; Usluel, Yasemin Kocak; Mumcu, Filiz Kuskaya
2006-01-01
The main goal of this study is to estimate the extent to which perceived innovation characteristics are associated with the probability of task related ICT use among secondary school teachers. The tasks were categorized as teaching preparation, teaching delivery, and management. Four hundred and sixteen teachers from secondary schools in Turkey,…
Task-Driven Evaluation of Aggregation in Time Series Visualization
Albers, Danielle; Correll, Michael; Gleicher, Michael
2014-01-01
Many visualization tasks require the viewer to make judgments about aggregate properties of data. Recent work has shown that viewers can perform such tasks effectively, for example to efficiently compare the maximums or means over ranges of data. However, this work also shows that such effectiveness depends on the designs of the displays. In this paper, we explore this relationship between aggregation task and visualization design to provide guidance on matching tasks with designs. We combine prior results from perceptual science and graphical perception to suggest a set of design variables that influence performance on various aggregate comparison tasks. We describe how choices in these variables can lead to designs that are matched to particular tasks. We use these variables to assess a set of eight different designs, predicting how they will support a set of six aggregate time series comparison tasks. A crowd-sourced evaluation confirms these predictions. These results not only provide evidence for how the specific visualizations support various tasks, but also suggest using the identified design variables as a tool for designing visualizations well suited for various types of tasks. PMID:25343147
A neurocomputational theory of how explicit learning bootstraps early procedural learning.
Paul, Erick J; Ashby, F Gregory
2013-01-01
It is widely accepted that human learning and memory is mediated by multiple memory systems that are each best suited to different requirements and demands. Within the domain of categorization, at least two systems are thought to facilitate learning: an explicit (declarative) system depending largely on the prefrontal cortex, and a procedural (non-declarative) system depending on the basal ganglia. Substantial evidence suggests that each system is optimally suited to learn particular categorization tasks. However, it remains unknown precisely how these systems interact to produce optimal learning and behavior. In order to investigate this issue, the present research evaluated the progression of learning through simulation of categorization tasks using COVIS, a well-known model of human category learning that includes both explicit and procedural learning systems. Specifically, the model's parameter space was thoroughly explored in procedurally learned categorization tasks across a variety of conditions and architectures to identify plausible interaction architectures. The simulation results support the hypothesis that one-way interaction between the systems occurs such that the explicit system "bootstraps" learning early on in the procedural system. Thus, the procedural system initially learns a suboptimal strategy employed by the explicit system and later refines its strategy. This bootstrapping could be from cortical-striatal projections that originate in premotor or motor regions of cortex, or possibly by the explicit system's control of motor responses through basal ganglia-mediated loops.
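The bootstrapping hypothesis can be made concrete with a toy two-system learner: a hypothetical one-dimensional explicit rule first supplies the training signal for a perceptron-like stand-in for the procedural system, which then refines its strategy on environmental feedback. This is an invented illustration of the interaction scheme, not the COVIS model itself:

```python
import random

random.seed(1)

# Toy task: the optimal category boundary uses both stimulus dimensions,
# but the explicit system's verbalizable rule uses only one of them.
def true_label(x, y):
    return 1 if x + y > 1.0 else 0

def explicit_rule(x, y):
    # Suboptimal one-dimensional rule (invented for illustration)
    return 1 if x > 0.5 else 0

w = [0.0, 0.0, 0.0]  # procedural stand-in: perceptron weights (bias, x, y)

def procedural(x, y):
    return 1 if w[0] + w[1] * x + w[2] * y > 0 else 0

def update(x, y, target, lr=0.1):
    err = target - procedural(x, y)
    w[0] += lr * err
    w[1] += lr * err * x
    w[2] += lr * err * y

# Phase 1 (bootstrap): procedural system imitates the explicit rule's responses
for _ in range(200):
    x, y = random.random(), random.random()
    update(x, y, explicit_rule(x, y))

# Phase 2 (refinement): procedural system learns from environmental feedback
for _ in range(2000):
    x, y = random.random(), random.random()
    update(x, y, true_label(x, y))

test_pts = [(random.random(), random.random()) for _ in range(1000)]
acc = sum(procedural(x, y) == true_label(x, y) for x, y in test_pts) / 1000
```

The learner starts out reproducing the explicit system's suboptimal strategy and ends up near the optimal two-dimensional boundary, mirroring the one-way interaction the simulations support.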
Surgical simulation tasks challenge visual working memory and visual-spatial ability differently.
Schlickum, Marcus; Hedman, Leif; Enochsson, Lars; Henningsohn, Lars; Kjellin, Ann; Felländer-Tsai, Li
2011-04-01
New strategies for selection and training of physicians are emerging. Previous studies have demonstrated correlations of visual-spatial ability and visual working memory with surgical simulator performance. The aim of this study was to perform a detailed analysis on how these abilities are associated with metrics in simulator performance with different task content. The hypothesis is that the importance of visual-spatial ability and visual working memory varies with different task contents. Twenty-five medical students participated in the study that involved testing visual-spatial ability using the MRT-A test and visual working memory using the RoboMemo computer program. Subjects were also trained and tested for performance in three different surgical simulators. The scores from the psychometric tests and the performance metrics were then correlated using multivariate analysis. MRT-A score correlated significantly with the performance metrics Efficiency of screening (p = 0.006) and Total time (p = 0.01) in the GI Mentor II task and Total score (p = 0.02) in the MIST-VR simulator task. In the Uro Mentor task, both the MRT-A score and the visual working memory 3-D cube test score as presented in the RoboMemo program (p = 0.02) correlated with Total score (p = 0.004). In this study we have shown that some differences exist regarding the impact of visual abilities and task content on simulator performance. When designing future cognitive training programs and testing regimes, the design may need to be adjusted to the specific surgical task to be trained.
Preschoolers’ Novel Noun Extensions: Shape in Spite of Knowing Better
Saalbach, Henrik; Schalk, Lennart
2011-01-01
We examined the puzzling research findings that when extending novel nouns, preschoolers rely on shape similarity (rather than categorical relations) while in other task contexts (e.g., property induction) they rely on categorical relations. Taking into account research on children’s word learning, categorization, and inductive inference, we assume that preschoolers have both a shape-based and a category-based word extension strategy available and can switch between these two depending on which information is easily available. To this end, we tested preschoolers on two versions of a novel-noun label extension task. First, we paralleled the standard extension task commonly used by previous research. In this case, as expected, preschoolers predominantly selected same-shape items. Second, we supported preschoolers’ retrieval of item-related information from memory by asking them simple questions about each item prior to the label extension task. Here, they switched to a category-based strategy, thus predominantly selecting same-category items. Finally, we revealed that this shape-to-category shift is specific to the word learning context as we did not find it in a non-lexical classification task. These findings support our assumption that preschoolers’ decisions about word extension change in accordance with the availability of information (from task context or by memory retrieval). We conclude by suggesting that preschoolers’ noun extensions can be conceptualized within the framework of heuristic decision-making. This provides an ecologically plausible processing account with respect to which information is selected and how this information is integrated to act as a guideline for decision-making when novel words have to be generalized. PMID:22073036
Beurskens, Rainer; Bock, Otmar
2013-12-01
Previous literature suggests that age-related deficits of dual-task walking are particularly pronounced with second tasks that require continuous visual processing. Here we evaluate whether the difficulty of the walking task matters as well. To this end, participants were asked to walk along a straight pathway of 20m length in four different walking conditions: (a) wide path and preferred pace; (b) narrow path and preferred pace, (c) wide path and fast pace, (d) obstacled wide path and preferred pace. Each condition was performed concurrently with a task requiring visual processing or fine motor control, and all tasks were also performed alone which allowed us to calculate the dual-task costs (DTC). Results showed that the age-related increase of DTC is substantially larger with the visually demanding than with the motor-demanding task, more so when walking on a narrow or obstacled path. We attribute these observations to the fact that visual scanning of the environment becomes more crucial when walking in difficult terrains: the higher visual demand of those conditions accentuates the age-related deficits in coordinating them with a visual non-walking task. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.
2016-01-01
Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur were narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
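The Bayesian inference model referred to above can be illustrated with a simplified causal-inference sketch, in which the posterior probability of a common audio-visual source falls with disparity and scales with a prior that subjects may adjust per task. The likelihood forms and all parameter values here are illustrative assumptions, not the paper's fitted model:

```python
import math

def gauss_pdf(x, sigma):
    """Density of a zero-mean Gaussian at x."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def p_common(disparity, sigma_a=8.0, sigma_v=2.0, sigma_p=15.0, prior=0.5):
    """Posterior probability that auditory and visual targets share one source,
    given the measured spatial disparity (degrees). sigma_a/sigma_v are sensory
    noise, sigma_p the spread of possible target locations (all illustrative)."""
    # One source: disparity reflects sensory noise only.
    like_same = gauss_pdf(disparity, math.sqrt(sigma_a**2 + sigma_v**2))
    # Two sources: disparity also reflects the spread of the true locations.
    like_diff = gauss_pdf(disparity, math.sqrt(sigma_a**2 + sigma_v**2 + 2 * sigma_p**2))
    return prior * like_same / (prior * like_same + (1 - prior) * like_diff)
```

Visual capture is likely when `p_common` is high: it decreases as disparity grows, and lowering the prior (as the paper suggests subjects did in the localization task) narrows the range of disparities over which capture occurs.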
ERIC Educational Resources Information Center
Janko, Tomáš; Knecht, Petr
2013-01-01
The article focuses on the issue of visuals in geography textbooks and their assessment. The issue is explained from the perspective of educational and cognitive psychological theory and the theory of textbooks. The general purpose of the presented research is to develop a research instrument for categorising the types of visuals in geography…
Rossion, Bruno; Jacques, Corentin; Jonas, Jacques
2018-02-26
The neural basis of face categorization has been widely investigated with functional magnetic resonance imaging (fMRI), identifying a set of face-selective local regions in the ventral occipitotemporal cortex (VOTC). However, indirect recording of neural activity with fMRI is associated with large fluctuations of signal across regions, often underestimating face-selective responses in the anterior VOTC. While direct recording of neural activity with subdural grids of electrodes (electrocorticography, ECoG) or depth electrodes (stereotactic electroencephalography, SEEG) offers a unique opportunity to fill this gap in knowledge, these studies rather reveal widely distributed face-selective responses. Moreover, intracranial recordings are complicated by interindividual variability in neuroanatomy, ambiguity in definition, and quantification of responses of interest, as well as limited access to sulci with ECoG. Here, we propose to combine SEEG in large samples of individuals with fast periodic visual stimulation to objectively define, quantify, and characterize face categorization across the whole VOTC. This approach reconciles the wide distribution of neural face categorization responses with their (right) hemispheric and regional specialization, and reveals several face-selective regions in anterior VOTC sulci. We outline the challenges of this research program to understand the neural basis of face categorization and high-level visual recognition in general. © 2018 New York Academy of Sciences.
Botzer, Assaf; Meyer, Joachim; Parmet, Yisrael
2016-09-01
Binary cues help operators perform binary categorization tasks, such as monitoring for system failures. They may also allow them to attend to other tasks they concurrently perform. If the time saved by using cues is allocated to other concurrent tasks, users' overall effort may remain unchanged. In 2 experiments, participants performed a simulated quality control task, together with a tracking task. In half the experimental blocks cues were available, and participants could use them in their decisions about the quality of products (intact or faulty). In Experiment 1, the difficulty of tracking was constant, while in Experiment 2, tracking difficulty differed in the 2 halves of the experiment. In both experiments, participants reported on the NASA Task Load Index that cues improved their performance and reduced their frustration. Consequently, their overall score on mental workload (MWL) was lower with cues. They also reported, however, that cues did not reduce their effort. We conclude that cues and other forms of automation may support task performance and reduce overall MWL, but this will not necessarily mean that users will work less hard. Thus, effort and overall MWL should be evaluated separately, if one wants to obtain a full picture of the effects of automation. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Falasca, N W; D'Ascenzo, S; Di Domenico, A; Onofrj, M; Tommasi, L; Laeng, B; Franciotti, R
2015-04-01
Magnetoencephalography was recorded during a matching-to-sample plus cueing paradigm, in which participants judged the occurrence of changes in either categorical (CAT) or coordinate (COO) spatial relations. Previously, parietal and frontal lobes were identified as key areas in processing spatial relations and it was shown that each hemisphere was differently involved and modulated by the scope of the attention window (e.g. a large and small cue). In this study, Granger analysis highlighted the patterns of causality among involved brain areas: the direction of information transfer ran from the frontal to the visual cortex in the right hemisphere, whereas it ran in the opposite direction on the left side. Thus, the right frontal area seems to exert top-down influence, supporting the idea that, in this task, top-down signals are selectively related to the right side. Additionally, for CAT change preceded by a small cue, the right frontal gyrus was not involved in the information transfer, indicating a selective specialization of the left hemisphere for this condition. The present findings strengthen the conclusion of the presence of a remarkable hemispheric specialization for spatial relation processing and illustrate the complex interactions between the lateralized parts of the neural network. Moreover, they illustrate how focusing attention over large or small regions of the visual field engages these lateralized networks differently, particularly in the frontal regions of each hemisphere, consistent with the theory that spatial relation judgements require a fronto-parietal network in the left hemisphere for categorical relations and in the right hemisphere for coordinate spatial processing. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Visual scanning behavior and pilot workload
NASA Technical Reports Server (NTRS)
Harris, R. L., Sr.; Tole, J. R.; Stephens, A. T.; Ephrath, A. R.
1982-01-01
This paper describes an experimental paradigm and a set of results which demonstrate a relationship between the level of performance on a skilled man-machine control task, the skill of the operator, the level of mental difficulty induced by an additional task imposed on the basic control task, and visual scanning performance. During a constant, simulated piloting task, visual scanning of instruments was found to vary with the difficulty of a verbal mental loading task. The average dwell time of each fixation on the pilot's primary instrument increased with the estimated skill level of the pilots, with novices being affected by the loading task much more than experts. The results suggest that visual scanning of instruments in a controlled task may be an indicator of both workload and skill.
Emerging Object Representations in the Visual System Predict Reaction Times for Categorization
Ritchie, J. Brendan; Tovar, David A.; Carlson, Thomas A.
2015-01-01
Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis, or “brain decoding”, methods to magnetoencephalography (MEG) data has allowed researchers to characterize, in high temporal resolution, the emerging representation of object categories that underlie our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain’s transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behavior? Recently it has been proposed that the distance of an exemplar representation from a categorical boundary in an activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time varying neural signal, the optimal time to “read out” category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary through activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behaviour, and support the hypothesis that the structure of the representation for objects in the visual system is partially constitutive of the decision process in recognition. PMID:26107634
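The distance hypothesis can be sketched with simulated data: compute each trial's distance from a linear decision boundary in activation space and correlate it with reaction times generated to fall with that distance. The decoder weights and the RT model below are invented for illustration, not derived from the MEG data:

```python
import random

random.seed(0)

# Hypothetical linear decoder in a 3-dimensional "activation space"
w, b = [0.7, -0.4, 0.5], 0.1
norm = sum(v * v for v in w) ** 0.5

def boundary_distance(pattern):
    """Unsigned distance of an activation pattern from the decision boundary."""
    return abs(sum(wi * xi for wi, xi in zip(w, pattern)) + b) / norm

# Simulated single-trial activation patterns and reaction times:
# RTs fall with boundary distance, plus trial-to-trial noise (all invented).
patterns = [[random.gauss(0, 1) for _ in range(3)] for _ in range(300)]
dists = [boundary_distance(p) for p in patterns]
rts = [650 - 80 * d + random.gauss(0, 30) for d in dists]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(dists, rts)  # strongly negative by construction
```

A clearly negative `r` is what the distance hypothesis predicts for real decoded distances at the time of peak decodability: exemplars far from the boundary are categorized fastest.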
Adaptive categorization of ART networks in robot behavior learning using game-theoretic formulation.
Fung, Wai-keung; Liu, Yun-hui
2003-12-01
Adaptive Resonance Theory (ART) networks are employed in robot behavior learning. Online robot behavior learning faces two difficulties: (1) memory requirements grow exponentially with time, and (2) it is difficult for operators to specify learning-task accuracy and control learning attention before learning begins. To remedy these difficulties, this paper introduces an adaptive categorization mechanism in ART networks for categorizing perceptual and action patterns. A game-theoretic formulation of adaptive categorization is proposed in which the vigilance parameter is adapted to control the size of the categories formed. The proposed vigilance parameter update rule improves categorization performance by stabilizing the number of categories and removes the need to select an initial vigilance parameter before pattern categorization, as required in traditional ART networks. Behavior learning experiments with a physical robot demonstrate the effectiveness of the proposed adaptive categorization mechanism in ART networks.
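The role of the vigilance parameter in ART categorization can be illustrated with a minimal ART1-style sketch (the paper's game-theoretic update rule is not reproduced here, and this simplified version matches patterns in storage order rather than by a choice function): higher vigilance yields more, tighter categories.

```python
# Minimal ART1-style categorization sketch (illustrative assumptions only).
# The vigilance parameter rho sets the match threshold: an input resonates
# with a stored prototype only if the prototype accounts for at least a
# fraction rho of the input's active features.
import numpy as np

def art1_categorize(patterns, rho):
    """Assign binary patterns to categories using the ART1 match criterion."""
    prototypes = []
    labels = []
    for p in patterns:
        for j, w in enumerate(prototypes):
            overlap = np.logical_and(p, w)
            if overlap.sum() / p.sum() >= rho:   # resonance test
                prototypes[j] = overlap          # fast learning: w <- p AND w
                labels.append(j)
                break
        else:
            prototypes.append(p.copy())          # mismatch: create new category
            labels.append(len(prototypes) - 1)
    return labels, prototypes

patterns = np.array([[1, 1, 0, 0],
                     [1, 1, 1, 0],
                     [0, 0, 1, 1],
                     [0, 1, 1, 1]], dtype=bool)

# Low vigilance merges the patterns into two broad categories...
labels_lo, protos_lo = art1_categorize(patterns, rho=0.5)
# ...while high vigilance keeps all four apart.
labels_hi, protos_hi = art1_categorize(patterns, rho=0.9)
print(len(protos_lo), len(protos_hi))  # prints "2 4"
```

Because the number of categories depends so strongly on rho, adapting it online, as the paper proposes, avoids having to guess a good initial value.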
Components of Attention in Grapheme-Color Synesthesia: A Modeling Approach
Ásgeirsson, Árni Gunnar; Nordfang, Maria; Sørensen, Thomas Alrik
2015-01-01
Grapheme-color synesthesia is a condition where the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many have studied how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10–150 ms) to 9 grapheme-color synesthetes and to 9 control observers. Their task was to report as many letters (targets) as possible, while ignoring digits (distractors). Graphemes were either congruently or incongruently colored with the synesthetes’ reported grapheme-color association. A mathematical model, based on Bundesen’s (1990) Theory of Visual Attention (TVA), was fitted to each observer’s data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones, and that they were able to retain more congruent letters in visual short-term memory, while the control group’s model parameters were not significantly affected by congruence. The increase in processing speed, when synesthetes process congruent letters, suggests that synesthesia affects the processing of letters at a perceptual level. To account for the benefit in processing speed, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are considered as evidence for making certain perceptual categorizations in the visual system. We also propose that enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes’ expertise regarding their specific grapheme-color associations. PMID:26252019
Age-related macular degeneration changes the processing of visual scenes in the brain.
Ramanoël, Stephen; Chokron, Sylvie; Hera, Ruxandra; Kauffmann, Louise; Chiquet, Christophe; Krainik, Alexandre; Peyrin, Carole
2018-01-01
In age-related macular degeneration (AMD), the processing of fine details in a visual scene, based on a high spatial frequency processing, is impaired, while the processing of global shapes, based on a low spatial frequency processing, is relatively well preserved. The present fMRI study aimed to investigate the residual abilities and functional brain changes of spatial frequency processing in visual scenes in AMD patients. AMD patients and normally sighted elderly participants performed a categorization task using large black and white photographs of scenes (indoors vs. outdoors) filtered in low and high spatial frequencies, and nonfiltered. The study also explored the effect of luminance contrast on the processing of high spatial frequencies. The contrast across scenes was either unmodified or equalized using a root-mean-square contrast normalization in order to increase contrast in high-pass filtered scenes. Performance was lower for high-pass filtered scenes than for low-pass and nonfiltered scenes, for both AMD patients and controls. The deficit for processing high spatial frequencies was more pronounced in AMD patients than in controls and was associated with lower activity for patients than controls not only in the occipital areas dedicated to central and peripheral visual fields but also in a distant cerebral region specialized for scene perception, the parahippocampal place area. Increasing the contrast improved the processing of high spatial frequency content and spurred activation of the occipital cortex for AMD patients. These findings may lead to new perspectives for rehabilitation procedures for AMD patients.
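The root-mean-square contrast normalization mentioned in this abstract is a standard image-processing step, and a minimal sketch is easy to give (illustrative only; the target mean and contrast values below are assumptions, not the study's parameters). RMS contrast is the standard deviation of pixel luminance, so equalizing it rescales every image to a common contrast around a common mean luminance.

```python
# Sketch of root-mean-square (RMS) contrast equalization for grayscale
# images with values in [0, 1]. Target mean/contrast are assumed values.
import numpy as np

def equalize_rms_contrast(img, target_mean=0.5, target_rms=0.2):
    """Rescale an image so its pixel standard deviation equals target_rms."""
    img = np.asarray(img, dtype=float)
    rms = img.std()
    if rms == 0:
        return np.full_like(img, target_mean)    # flat image: nothing to scale
    out = (img - img.mean()) / rms * target_rms + target_mean
    return np.clip(out, 0.0, 1.0)                # keep values displayable

# A very low-contrast test image gets boosted to the target contrast.
low_contrast = 0.5 + 0.01 * np.random.default_rng(2).standard_normal((64, 64))
eq = equalize_rms_contrast(low_contrast)
print(round(eq.std(), 2))   # ≈ 0.2
```

This is why the manipulation selectively boosts high-pass filtered scenes: filtering removes most luminance energy, so equalization raises their otherwise very low contrast to match the other conditions.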
Ho, Michael R; Pezdek, Kathy
2016-06-01
The cross-race effect (CRE) describes the finding that same-race faces are recognized more accurately than cross-race faces. According to social-cognitive theories of the CRE, processes of categorization and individuation at encoding account for differential recognition of same- and cross-race faces. Recent face memory research has suggested that similar but distinct categorization and individuation processes also occur postencoding, at recognition. Using a divided-attention paradigm, in Experiments 1A and 1B we tested and confirmed the hypothesis that distinct postencoding categorization and individuation processes occur during the recognition of same- and cross-race faces. Specifically, postencoding configural divided-attention tasks impaired recognition accuracy more for same-race than for cross-race faces; on the other hand, for White (but not Black) participants, postencoding featural divided-attention tasks impaired recognition accuracy more for cross-race than for same-race faces. A social categorization paradigm used in Experiments 2A and 2B tested the hypothesis that the postencoding in-group or out-group social orientation to faces affects categorization and individuation processes during the recognition of same-race and cross-race faces. Postencoding out-group orientation to faces resulted in categorization for White but not for Black participants. This was evidenced by White participants' impaired recognition accuracy for same-race but not for cross-race out-group faces. Postencoding in-group orientation to faces had no effect on recognition accuracy for either same-race or cross-race faces. The results of Experiments 2A and 2B suggest that this social orientation facilitates White but not Black participants' individuation and categorization processes at recognition. Models of recognition memory for same-race and cross-race faces need to account for processing differences that occur at both encoding and recognition.
Early differential processing of material images: Evidence from ERP classification.
Wiebel, Christiane B; Valsecchi, Matteo; Gegenfurtner, Karl R
2014-06-24
Investigating the temporal dynamics of natural image processing using event-related potentials (ERPs) has a long tradition in object recognition research. In a classical Go-NoGo task two characteristic effects have been emphasized: an early task-independent category effect and a later task-dependent target effect. Here, we set out to use this well-established Go-NoGo paradigm to study the time course of material categorization. Material perception has gained increasing interest over the years, because its importance under natural viewing conditions was long overlooked. In addition to analyzing standard ERPs, we conducted a single-trial ERP pattern analysis. To validate this procedure, we also measured ERPs in two object categories (people and animals). Our linear classification procedure was able to largely capture the overall pattern of results from the canonical analysis of the ERPs and even extend it. We replicate the known target effect (differential Go-NoGo potential at frontal sites) for the material images. Furthermore, we observe task-independent differential activity between the two material categories as early as 140 ms after stimulus onset. Using our linear classification approach, we show that material categories can be differentiated consistently based on the ERP pattern in single trials around 100 ms after stimulus onset, independent of the target-related status. This strengthens the idea of early differential visual processing of material categories independent of the task, probably due to differences in low-level image properties, and suggests pattern classification of ERP topographies as a strong instrument for investigating electrophysiological brain activity. © 2014 ARVO.
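Single-trial linear classification of ERP topographies, as used in this study, can be sketched with simulated data (illustrative only; the channel count, trial counts, and Fisher-discriminant classifier below are assumptions, not the authors' pipeline). Each trial is a vector of channel voltages at one time point, and a linear classifier is trained on half the trials and tested on the rest.

```python
# Illustrative sketch: single-trial linear decoding of two simulated
# "material categories" from multichannel ERP topographies.
# Montage size, trial counts, and noise levels are assumed values.
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_trials = 32, 100

# Each category has its own mean scalp topography plus trial-to-trial noise.
mu_a = rng.normal(scale=0.5, size=n_channels)
mu_b = rng.normal(scale=0.5, size=n_channels)
X = np.vstack([mu_a + rng.normal(scale=1.0, size=(n_trials, n_channels)),
               mu_b + rng.normal(scale=1.0, size=(n_trials, n_channels))])
y = np.repeat([0, 1], n_trials)

# Split trials into train/test halves.
idx = rng.permutation(2 * n_trials)
train, test = idx[:n_trials], idx[n_trials:]

# Fisher linear discriminant: w = S^-1 (mu1 - mu0), threshold at the midpoint.
X0, X1 = X[train][y[train] == 0], X[train][y[train] == 1]
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
S = (np.cov(X0.T) + np.cov(X1.T)) / 2 + 1e-3 * np.eye(n_channels)  # regularized
w = np.linalg.solve(S, m1 - m0)
thresh = (m0 + m1) @ w / 2

pred = (X[test] @ w > thresh).astype(int)
acc = (pred == y[test]).mean()
print(f"single-trial decoding accuracy: {acc:.2f}")   # well above chance (0.5)
```

Running such a classifier at each time point, as in the study, traces out when category information first becomes decodable from single trials.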
Little, Daniel R.; Lewandowsky, Stephan
2009-01-01
Despite the fact that categories are often composed of correlated features, the evidence that people detect and use these correlations during intentional category learning has been overwhelmingly negative to date. Nonetheless, on other categorization tasks, such as feature prediction, people show evidence of correlational sensitivity. A…