Sample records for visual category learning

  1. Incidental Auditory Category Learning

    PubMed Central

    Gabay, Yafit; Dick, Frederic K.; Zevin, Jason D.; Holt, Lori L.

    2015-01-01

    Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in one of four possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from one of four distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. PMID:26010588

  2. Fine-grained temporal coding of visually-similar categories in the ventral visual pathway and prefrontal cortex

    PubMed Central

    Xu, Yang; D'Lauro, Christopher; Pyles, John A.; Kass, Robert E.; Tarr, Michael J.

    2013-01-01

Humans are remarkably proficient at categorizing visually similar objects. To better understand the cortical basis of this categorization process, we used magnetoencephalography (MEG) to record neural activity while participants learned, with feedback, to discriminate two highly similar, novel visual categories. We hypothesized that although prefrontal regions would mediate early category learning, this role would diminish with increasing category familiarity and that regions within the ventral visual pathway would come to play a more prominent role in encoding category-relevant information as learning progressed. Early in learning we observed some degree of categorical discriminability and predictability in both prefrontal cortex and the ventral visual pathway. Predictability improved significantly above chance in the ventral visual pathway over the course of learning, with the left inferior temporal and fusiform gyri showing the greatest improvement in predictability between 150 and 250 ms (M200) during category learning. In contrast, there was no comparable increase in discriminability in prefrontal cortex; the only significant post-learning effect was a decrease in predictability in the inferior frontal gyrus between 250 and 350 ms (M300). Thus, the ventral visual pathway appears to encode learned visual categories over the long term. At the same time, these results add to our understanding of the cortical origins of previously reported signature temporal components associated with perceptual learning. PMID:24146656

  3. Adaptive learning in a compartmental model of visual cortex—how feedback enables stable category learning and refinement

    PubMed Central

    Layher, Georg; Schrodt, Fabian; Butz, Martin V.; Neumann, Heiko

    2014-01-01

The categorization of real-world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjoint sets, either semantically or visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards form two separate mammalian categories, both of which are subcategories of the category Felidae. In recent decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in computational neuroscience. However, the question of what kinds of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorical and subcategorical visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Sufficiently large differences trigger the recruitment of new representational resources and the establishment of additional (sub)category representations. We demonstrate the temporal evolution of such learning and show how the proposed combination of an associative memory with modulatory feedback integration successfully establishes category and subcategory representations. PMID:25538637
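The learning dynamics described in this record can be schematized as follows. This is an illustrative sketch in the spirit of adaptive-resonance-style models, not the paper's actual equations; the weight symbols, learning rate η, winner activity y_c, and vigilance threshold θ are assumptions introduced here for exposition:

```latex
% Bottom-up and top-down weight adaptation for the winning category node c
\Delta \mathbf{w}^{\mathrm{BU}}_{c} = \eta \, y_c \bigl( \mathbf{x} - \mathbf{w}^{\mathrm{BU}}_{c} \bigr),
\qquad
\Delta \mathbf{w}^{\mathrm{TD}}_{c} = \eta \, y_c \bigl( \mathbf{x} - \mathbf{w}^{\mathrm{TD}}_{c} \bigr)

% Mismatch between the expected input (the node's projective field)
% and the current input pattern
m_c = \bigl\lVert \mathbf{x} - \mathbf{w}^{\mathrm{TD}}_{c} \bigr\rVert

% A sufficiently large mismatch recruits a new (sub)category node
m_c > \theta \;\Longrightarrow\; \text{recruit a new node initialized at } \mathbf{x}
```

In this reading, small mismatches refine the existing category (the top-down expectation moves toward the input), while large mismatches spawn a subcategory, which is one way the abstract's "category refinement" could arise.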

  4. Associative learning in baboons (Papio papio) and humans (Homo sapiens): species differences in learned attention to visual features.

    PubMed

    Fagot, J; Kruschke, J K; Dépy, D; Vauclair, J

    1998-10-01

We examined attention shifting in baboons and humans during the learning of visual categories. Within a conditional matching-to-sample task, participants of the two species sequentially learned two two-feature categories that shared a common feature. Results showed that humans encoded both features of the initially learned category but predominantly only the distinctive feature of the subsequently learned category. Although baboons initially encoded both features of the first category, they ultimately retained only the distinctive features of each category. Empirical data from the two species were analyzed with Kruschke's (1996) ADIT connectionist model. ADIT fits the baboon data when the attentional shift rate is zero, and the human data when the attentional shift rate is nonzero. These empirical and modeling results suggest species differences in learned attention to visual features.

  5. Individual Differences in Learning Talker Categories: The Role of Working Memory

    PubMed Central

    Levi, Susannah V.

    2016-01-01

    The current study explores the question of how an auditory category is learned by having school-age listeners learn to categorize speech not in terms of linguistic categories, but instead in terms of talker categories (i.e., who is talking). Findings from visual-category learning indicate that working memory skills affect learning, but the literature is equivocal: sometimes better working memory is advantageous, and sometimes not. The current study examined the role of different components of working memory to test which component skills benefit, and which hinder, learning talker categories. Results revealed that the short-term storage component positively predicted learning, but that the Central Executive and Episodic Buffer negatively predicted learning. As with visual categories, better working memory is not always an advantage. PMID:25721393

  6. Supervised and Unsupervised Learning of Multidimensional Acoustic Categories

    ERIC Educational Resources Information Center

    Goudbeek, Martijn; Swingley, Daniel; Smits, Roel

    2009-01-01

    Learning to recognize the contrasts of a language-specific phonemic repertoire can be viewed as forming categories in a multidimensional psychophysical space. Research on the learning of distributionally defined visual categories has shown that categories defined over 1 dimension are easy to learn and that learning multidimensional categories is…

  7. Learning, retention, and generalization of haptic categories

    NASA Astrophysics Data System (ADS)

    Do, Phuong T.

This dissertation explored how haptic concepts are learned, retained, and generalized to the same or a different modality. Participants learned to classify objects into three categories either visually or haptically via different training procedures, followed by an immediate or delayed transfer test. Experiment I involved visual versus haptic learning and transfer. Intermodal matching between vision and haptics was investigated in Experiment II. Experiments III and IV examined intersensory conflict in within- and between-category bimodal situations to determine the degree of perceptual dominance between sight and touch. Experiment V explored the intramodal relationship between similarity and categorization in a psychological space, as revealed by MDS analysis of similarity judgments. Major findings were: (1) visual examination resulted in relatively higher performance accuracy than haptic learning; (2) systematic training produced better category learning of haptic concepts across all modality conditions; (3) the category prototypes were rated as newer than any transfer stimulus following learning, both immediately and after a week's delay; and (4) although they converged at the apex of two transformational trajectories, the category prototypes became more central to their respective categories and increasingly structured as a function of learning. Implications for theories of multimodal similarity and categorization behavior are discussed in terms of discrimination learning, sensory integration, and dominance relations.

  8. Perceptual category learning and visual processing: An exercise in computational cognitive neuroscience.

    PubMed

    Cantwell, George; Riesenhuber, Maximilian; Roeder, Jessica L; Ashby, F Gregory

    2017-05-01

The field of computational cognitive neuroscience (CCN) builds and tests neurobiologically detailed computational models that account for both behavioral and neuroscience data. This article leverages a key advantage of CCN, namely that it should be possible to interface different CCN models in a plug-and-play fashion, to produce a new and biologically detailed model of perceptual category learning. The new model was created from two existing CCN models: the HMAX model of visual object processing and the COVIS model of category learning. Using bitmap images as inputs and adjusting only a couple of learning-rate parameters, the new HMAX/COVIS model provides impressively good fits to human category-learning data from two qualitatively different experiments that used different types of category structures and different types of visual stimuli. Overall, the model provides a comprehensive neural and behavioral account of basal ganglia-mediated learning.

  9. Impact of feature saliency on visual category learning.

    PubMed

    Hammer, Rubi

    2015-01-01

People have to sort numerous objects into a large number of meaningful categories while operating in varying contexts. This requires identifying the visual features that best predict the 'essence' of objects (e.g., edibility), rather than categorizing objects based on the most salient features in a given context. To gain this capacity, visual category learning (VCL) relies on multiple cognitive processes. These may include unsupervised statistical learning, which requires observing multiple objects in order to learn the statistics of their features. Other learning processes enable incorporating different sources of supervisory information, alongside the visual features of the categorized objects, from which the categorical relations between a few objects can be deduced. These deductions enable inferring that objects from the same category may differ from one another in some high-saliency feature dimensions, whereas lower-saliency feature dimensions can best differentiate objects from distinct categories. Here I illustrate how feature saliency affects VCL and discuss the kinds of supervisory information that enable reflective categorization. Arguably, the principles debated here are often ignored in categorization studies.

  10. Impact of feature saliency on visual category learning

    PubMed Central

    Hammer, Rubi

    2015-01-01

People have to sort numerous objects into a large number of meaningful categories while operating in varying contexts. This requires identifying the visual features that best predict the ‘essence’ of objects (e.g., edibility), rather than categorizing objects based on the most salient features in a given context. To gain this capacity, visual category learning (VCL) relies on multiple cognitive processes. These may include unsupervised statistical learning, which requires observing multiple objects in order to learn the statistics of their features. Other learning processes enable incorporating different sources of supervisory information, alongside the visual features of the categorized objects, from which the categorical relations between a few objects can be deduced. These deductions enable inferring that objects from the same category may differ from one another in some high-saliency feature dimensions, whereas lower-saliency feature dimensions can best differentiate objects from distinct categories. Here I illustrate how feature saliency affects VCL and discuss the kinds of supervisory information that enable reflective categorization. Arguably, the principles debated here are often ignored in categorization studies. PMID:25954220

  11. Linguistic labels, dynamic visual features, and attention in infant category learning.

    PubMed

    Deng, Wei Sophia; Sloutsky, Vladimir M

    2015-06-01

How do words affect categorization? According to some accounts, even early in development words are category markers and are different from other features. According to other accounts, early in development words are part of the input and are akin to other features. The current study addressed this issue by examining the role of words and dynamic visual features in category learning in 8- to 12-month-old infants. Infants were familiarized with exemplars from one category in a label-defined or motion-defined condition and then tested with prototypes from the studied category and from a novel contrast category. Eye-tracking results indicated that infants exhibited better category learning in the motion-defined condition than in the label-defined condition, and their attention was more distributed among different features when there was a dynamic visual feature compared with the label-defined condition. These results provide little evidence for the idea that linguistic labels are category markers that facilitate category learning.

  12. Linguistic Labels, Dynamic Visual Features, and Attention in Infant Category Learning

    PubMed Central

    Deng, Wei (Sophia); Sloutsky, Vladimir M.

    2015-01-01

How do words affect categorization? According to some accounts, even early in development, words are category markers and are different from other features. According to other accounts, early in development, words are part of the input and are akin to other features. The current study addressed this issue by examining the role of words and dynamic visual features in category learning in 8- to 12-month-old infants. Infants were familiarized with exemplars from one category in a label-defined or motion-defined condition and then tested with prototypes from the studied category and from a novel contrast category. Eye-tracking results indicated that infants exhibited better category learning in the motion-defined than in the label-defined condition, and their attention was more distributed among different features when there was a dynamic visual feature compared to the label-defined condition. These results provide little evidence for the idea that linguistic labels are category markers that facilitate category learning. PMID:25819100

  13. The cost of selective attention in category learning: Developmental differences between adults and infants

    PubMed Central

    Best, Catherine A.; Yim, Hyungwook; Sloutsky, Vladimir M.

    2013-01-01

    Selective attention plays an important role in category learning. However, immaturities of top-down attentional control during infancy coupled with successful category learning suggest that early category learning is achieved without attending selectively. Research presented here examines this possibility by focusing on category learning in infants (6–8 months old) and adults. Participants were trained on a novel visual category. Halfway through the experiment, unbeknownst to participants, the to-be-learned category switched to another category, where previously relevant features became irrelevant and previously irrelevant features became relevant. If participants attend selectively to the relevant features of the first category, they should incur a cost of selective attention immediately after the unknown category switch. Results revealed that adults demonstrated a cost, as evidenced by a decrease in accuracy and response time on test trials as well as a decrease in visual attention to newly relevant features. In contrast, infants did not demonstrate a similar cost of selective attention as adults despite evidence of learning both to-be-learned categories. Findings are discussed as supporting multiple systems of category learning and as suggesting that learning mechanisms engaged by adults may be different from those engaged by infants. PMID:23773914

  14. Visual variability affects early verb learning.

    PubMed

    Twomey, Katherine E; Lush, Lauren; Pearce, Ruth; Horst, Jessica S

    2014-09-01

Research demonstrates that within-category visual variability facilitates noun learning; however, the effect of visual variability on verb learning is unknown. We habituated 24-month-old children to a novel verb paired with an animated star-shaped actor. Across multiple trials, children saw either a single action from an action category (identical actions condition, for example, travelling while repeatedly changing into a circle shape) or multiple actions from that action category (variable actions condition, for example, travelling while changing into a circle shape, then a square shape, then a triangle shape). Four test trials followed habituation. One paired the habituated verb with a new action from the habituated category (e.g., 'dacking' + pentagon shape) and one with a completely novel action (e.g., 'dacking' + leg movement). The others paired a new verb with a new same-category action (e.g., 'keefing' + pentagon shape), or a completely novel category action (e.g., 'keefing' + leg movement). Although all children discriminated novel verb/action pairs, children in the identical actions condition discriminated trials that included the completely novel verb, while children in the variable actions condition discriminated the out-of-category action. These data suggest that, as in noun learning, visual variability affects verb learning and children's ability to form action categories.

  15. The Effects of Concurrent Verbal and Visual Tasks on Category Learning

    ERIC Educational Resources Information Center

    Miles, Sarah J.; Minda, John Paul

    2011-01-01

    Current theories of category learning posit separate verbal and nonverbal learning systems. Past research suggests that the verbal system relies on verbal working memory and executive functioning and learns rule-defined categories; the nonverbal system does not rely on verbal working memory and learns non-rule-defined categories (E. M. Waldron…

  16. The cost of selective attention in category learning: developmental differences between adults and infants.

    PubMed

    Best, Catherine A; Yim, Hyungwook; Sloutsky, Vladimir M

    2013-10-01

Selective attention plays an important role in category learning. However, immaturities of top-down attentional control during infancy coupled with successful category learning suggest that early category learning is achieved without attending selectively. Research presented here examines this possibility by focusing on category learning in infants (6-8 months old) and adults. Participants were trained on a novel visual category. Halfway through the experiment, unbeknownst to participants, the to-be-learned category switched to another category, where previously relevant features became irrelevant and previously irrelevant features became relevant. If participants attend selectively to the relevant features of the first category, they should incur a cost of selective attention immediately after the unknown category switch. Results revealed that adults demonstrated a cost, as evidenced by a decrease in accuracy and response time on test trials as well as a decrease in visual attention to newly relevant features. In contrast, infants did not demonstrate a similar cost of selective attention as adults despite evidence of learning both to-be-learned categories. Findings are discussed as supporting multiple systems of category learning and as suggesting that learning mechanisms engaged by adults may be different from those engaged by infants.

  17. Category learning increases discriminability of relevant object dimensions in visual cortex.

    PubMed

    Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel

    2013-04-01

    Learning to categorize objects can transform how they are perceived, causing relevant perceptual dimensions predictive of object category to become enhanced. For example, an expert mycologist might become attuned to species-specific patterns of spacing between mushroom gills but learn to ignore cap textures attributable to varying environmental conditions. These selective changes in perception can persist beyond the act of categorizing objects and influence our ability to discriminate between them. Using functional magnetic resonance imaging adaptation, we demonstrate that such category-specific perceptual enhancements are associated with changes in the neural discriminability of object representations in visual cortex. Regions within the anterior fusiform gyrus became more sensitive to small variations in shape that were relevant during prior category learning. In addition, extrastriate occipital areas showed heightened sensitivity to small variations in shape that spanned the category boundary. Visual representations in cortex, just like our perception, are sensitive to an object's history of categorization.

  18. Dual-learning systems during speech category learning

    PubMed Central

    Chandrasekaran, Bharath; Yi, Han-Gyol; Maddox, W. Todd

    2013-01-01

Dual-systems models of visual category learning posit the existence of an explicit, hypothesis-testing ‘reflective’ system, as well as an implicit, procedural-based ‘reflexive’ system. The reflective and reflexive learning systems are competitive and neurally dissociable. Relatively little is known about the role of these domain-general learning systems in speech category learning. Given the multidimensional, redundant, and variable nature of acoustic cues in speech categories, our working hypothesis is that speech categories are learned reflexively. To this end, we examined the relative contribution of these learning systems to speech learning in adults. Native English speakers learned to categorize Mandarin tone categories over 480 trials. The training protocol involved trial-by-trial feedback and multiple talkers. Experiments 1 and 2 examined the effect of manipulating the timing (immediate vs. delayed) and information content (full vs. minimal) of feedback. Dual-systems models of visual category learning predict that delayed feedback and rich, informational feedback enhance reflective learning, while immediate and minimally informative feedback enhance reflexive learning. Across the two experiments, our results show that feedback manipulations targeting reflexive learning enhanced category learning success. In Experiment 3, we examined the role of trial-to-trial talker information (mixed vs. blocked presentation) on speech category learning success. We hypothesized that the mixed condition would enhance reflexive learning by not allowing an association between talker-related acoustic cues and speech categories. Our results show that the mixed talker condition led to relatively greater accuracies. Our experiments demonstrate that speech categories are optimally learned by training methods that target the reflexive learning system. PMID:24002965

  19. Mere exposure alters category learning of novel objects.

    PubMed

    Folstein, Jonathan R; Gauthier, Isabel; Palmeri, Thomas J

    2010-01-01

    We investigated how mere exposure to complex objects with correlated or uncorrelated object features affects later category learning of new objects not seen during exposure. Correlations among pre-exposed object dimensions influenced later category learning. Unlike other published studies, the collection of pre-exposed objects provided no information regarding the categories to be learned, ruling out unsupervised or incidental category learning during pre-exposure. Instead, results are interpreted with respect to statistical learning mechanisms, providing one of the first demonstrations of how statistical learning can influence visual object learning.

  20. Mere Exposure Alters Category Learning of Novel Objects

    PubMed Central

    Folstein, Jonathan R.; Gauthier, Isabel; Palmeri, Thomas J.

    2010-01-01

    We investigated how mere exposure to complex objects with correlated or uncorrelated object features affects later category learning of new objects not seen during exposure. Correlations among pre-exposed object dimensions influenced later category learning. Unlike other published studies, the collection of pre-exposed objects provided no information regarding the categories to be learned, ruling out unsupervised or incidental category learning during pre-exposure. Instead, results are interpreted with respect to statistical learning mechanisms, providing one of the first demonstrations of how statistical learning can influence visual object learning. PMID:21833209

  1. Event-Related fMRI of Category Learning: Differences in Classification and Feedback Networks

    ERIC Educational Resources Information Center

    Little, Deborah M.; Shin, Silvia S.; Sisco, Shannon M.; Thulborn, Keith R.

    2006-01-01

    Eighteen healthy young adults underwent event-related (ER) functional magnetic resonance imaging (fMRI) of the brain while performing a visual category learning task. The specific category learning task required subjects to extract the rules that guide classification of quasi-random patterns of dots into categories. Following each classification…

  2. Learning Category-Specific Dictionary and Shared Dictionary for Fine-Grained Image Categorization.

    PubMed

    Gao, Shenghua; Tsang, Ivor Wai-Hung; Ma, Yi

    2014-02-01

    This paper targets fine-grained image categorization by learning a category-specific dictionary for each category and a shared dictionary for all the categories. Such category-specific dictionaries encode subtle visual differences among different categories, while the shared dictionary encodes common visual patterns among all the categories. To this end, we impose incoherence constraints among the different dictionaries in the objective of feature coding. In addition, to make the learnt dictionary stable, we also impose the constraint that each dictionary should be self-incoherent. Our proposed dictionary learning formulation not only applies to fine-grained classification, but also improves conventional basic-level object categorization and other tasks such as event recognition. Experimental results on five data sets show that our method can outperform the state-of-the-art fine-grained image categorization frameworks as well as sparse coding based dictionary learning frameworks. All these results demonstrate the effectiveness of our method.
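The objective described in this record can be sketched schematically. This is not the paper's exact formulation; the symbols D_k (category-specific dictionaries), D_0 (shared dictionary), sparse codes z_i, and trade-off weights λ, γ are notation assumed here for illustration:

```latex
\min_{\{D_k\},\, D_0,\, \{\mathbf{z}_i\}} \;
\sum_{k} \sum_{i \in \mathcal{C}_k}
\Bigl(
\bigl\lVert \mathbf{x}_i - [\,D_k,\, D_0\,]\,\mathbf{z}_i \bigr\rVert_2^2
+ \lambda \lVert \mathbf{z}_i \rVert_1
\Bigr)
+ \gamma \sum_{j \neq k} \bigl\lVert D_j^{\top} D_k \bigr\rVert_F^2
```

The reconstruction and sparsity terms are standard sparse coding; the final term penalizes coherence between different categories' dictionaries, encouraging each D_k to capture the subtle, category-distinctive patterns. A self-incoherence term of the form \(\lVert D_k^{\top} D_k - I \rVert_F^2\) could likewise express the stability constraint the abstract mentions.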

  3. Chromatic Perceptual Learning but No Category Effects without Linguistic Input.

    PubMed

    Grandison, Alexandra; Sowden, Paul T; Drivonikou, Vicky G; Notman, Leslie A; Alexander, Iona; Davies, Ian R L

    2016-01-01

    Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks but very little research has explored chromatic perceptual learning. Here, we use two low level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is category specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher level linguistic coding processes in order to manifest.

  4. How category learning affects object representations: Not all morphspaces stretch alike

    PubMed Central

    Folstein, Jonathan R.; Gauthier, Isabel; Palmeri, Thomas J.

    2012-01-01

    How does learning to categorize objects affect how we visually perceive them? Behavioral, neurophysiological, and neuroimaging studies have tested the degree to which category learning influences object representations, with conflicting results. Some studies find that objects become more visually discriminable along dimensions relevant to previously learned categories, while others find no such effect. One critical factor we explore here lies in the structure of the morphspaces used in different studies. Studies finding no increase in discriminability often use “blended” morphspaces, with morphparents lying at corners of the space. By contrast, studies finding increases in discriminability use “factorial” morphspaces, defined by separate morphlines forming axes of the space. Using the same four morphparents, we created both factorial and blended morphspaces matched in pairwise discriminability. Category learning caused a selective increase in discriminability along the relevant dimension of the factorial space, but not in the blended space, and led to the creation of functional dimensions in the factorial space, but not in the blended space. These findings demonstrate that not all morphspaces stretch alike: Only some morphspaces support enhanced discriminability to relevant object dimensions following category learning. Our results have important implications for interpreting neuroimaging studies reporting little or no effect of category learning on object representations in the visual system: Those studies may have been limited by their use of blended morphspaces. PMID:22746950

  5. Chromatic Perceptual Learning but No Category Effects without Linguistic Input

    PubMed Central

    Grandison, Alexandra; Sowden, Paul T.; Drivonikou, Vicky G.; Notman, Leslie A.; Alexander, Iona; Davies, Ian R. L.

    2016-01-01

    Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks but very little research has explored chromatic perceptual learning. Here, we use two low level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is category specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher level linguistic coding processes in order to manifest. PMID:27252669

  6. Incidental category learning and cognitive load in a multisensory environment across childhood.

    PubMed

    Broadbent, H J; Osborne, T; Rea, M; Peng, A; Mareschal, D; Kirkham, N Z

    2018-06-01

    Multisensory information has been shown to facilitate learning (Bahrick & Lickliter, 2000; Broadbent, White, Mareschal, & Kirkham, 2017; Jordan & Baker, 2011; Shams & Seitz, 2008). However, although research has examined the modulating effect of unisensory and multisensory distractors on multisensory processing, the extent to which a concurrent unisensory or multisensory cognitive load task would interfere with or support multisensory learning remains unclear. This study examined the role of concurrent task modality on incidental category learning in 6- to 10-year-olds. Participants were engaged in a multisensory learning task while also performing either a unisensory (visual or auditory only) or multisensory (audiovisual) concurrent task (CT). We found that engaging in an auditory CT led to poorer performance on incidental category learning compared with an audiovisual or visual CT, across groups. In 6-year-olds, category test performance was at chance in the auditory-only CT condition, suggesting auditory concurrent tasks may interfere with learning in younger children, but the addition of visual information may serve to focus attention. These findings provide novel insight into the use of multisensory concurrent information on incidental learning. Implications for the deployment of multisensory learning tasks within education across development and developmental changes in modality dominance and ability to switch flexibly across modalities are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  7. Changes in Visual Object Recognition Precede the Shape Bias in Early Noun Learning

    PubMed Central

    Yee, Meagan; Jones, Susan S.; Smith, Linda B.

    2012-01-01

    Two of the most formidable skills that characterize human beings are language and our prowess in visual object recognition. They may also be developmentally intertwined. Two experiments, a large sample cross-sectional study and a smaller sample 6-month longitudinal study of 18- to 24-month-olds, tested a hypothesized developmental link between changes in visual object representation and noun learning. Previous findings in visual object recognition indicate that children’s ability to recognize common basic level categories from sparse structural shape representations of object shape emerges between the ages of 18 and 24 months, is related to noun vocabulary size, and is lacking in children with language delay. Other research shows in artificial noun learning tasks that during this same developmental period, young children systematically generalize object names by shape, that this shape bias predicts future noun learning, and is lacking in children with language delay. The two experiments examine the developmental relation between visual object recognition and the shape bias for the first time. The results show that developmental changes in visual object recognition systematically precede the emergence of the shape bias. The results suggest a developmental pathway in which early changes in visual object recognition that are themselves linked to category learning enable the discovery of higher-order regularities in category structure and thus the shape bias in novel noun learning tasks. The proposed developmental pathway has implications for understanding the role of specific experience in the development of both visual object recognition and the shape bias in early noun learning. PMID:23227015

  8. Implementation of ICARE learning model using visualization animation on biotechnology course

    NASA Astrophysics Data System (ADS)

    Hidayat, Habibi

    2017-12-01

    ICARE is a learning model that engages students as active participants in the learning process through animated visualization media. ICARE comprises five key elements of the learning experience for children and adults: introduction, connection, application, reflection, and extension. The ICARE system ensures that participants have the opportunity to apply what they have learned, so that the material delivered by the lecturer can be understood and retained by students over the long term. This learning model, applied with animated visualization, was expected to improve learning outcomes and interest in learning in the Biotechnology course, and its application increased both students' motivation to participate and their learning outcomes. In the Biotechnology course, applying the ICARE learning model with animated visualization improved student results: the average score rose from 70.98 (75%) on the midterm test to 71.57 (68.63%) on the final test. Students' interest in learning also increased, as seen in student activity in each cycle: the first cycle yielded an average activity score of 33.5 (fair category), while the second and third cycles each yielded an average of 36.5 (good category).

  9. The Role of Age and Executive Function in Auditory Category Learning

    PubMed Central

    Reetzke, Rachel; Maddox, W. Todd; Chandrasekaran, Bharath

    2015-01-01

    Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence, into early adulthood; and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing, 20 children (age range, 7–12), 21 adolescents (age range, 13–19), and 19 young adults (age range, 20–23), learned to categorize novel dynamic ripple sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied Gabor patches in the visual domain. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e. a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning. PMID:26491987
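The task-optimal conjunctive rule-based strategy identified by the modeling analyses can be sketched as a toy classifier (an illustrative assumption, not the authors' fitted model: the stimulus coordinates, criteria, and category labels here are invented). A conjunctive rule responds with one category only when a criterion is exceeded on both dimensions of the spectrotemporal space:

```python
def conjunctive_rule(stimulus, c1, c2):
    """Classify a 2-D stimulus (e.g., spectral and temporal modulation)
    by a conjunctive rule: 'A' only if BOTH criteria are exceeded."""
    x, y = stimulus
    return 'A' if (x > c1 and y > c2) else 'B'

def accuracy(stimuli, labels, c1, c2):
    """Proportion of stimuli the rule classifies correctly."""
    hits = sum(conjunctive_rule(s, c1, c2) == lab
               for s, lab in zip(stimuli, labels))
    return hits / len(stimuli)
```

Model-based analyses of this kind typically compare the fit of such a conjunctive rule against simpler unidimensional rules to infer which strategy a participant used.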

  10. Timing the impact of literacy on visual processing

    PubMed Central

    Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W.; Cohen, Laurent; Dehaene, Stanislas

    2014-01-01

    Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼100–150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing. PMID:25422460

  11. Timing the impact of literacy on visual processing.

    PubMed

    Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W; Cohen, Laurent; Dehaene, Stanislas

    2014-12-09

    Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼ 100-150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing.

  12. Problem solving of student with visual impairment related to mathematical literacy problem

    NASA Astrophysics Data System (ADS)

    Pratama, A. R.; Saputro, D. R. S.; Riyadi

    2018-04-01

    Students with visual impairment in the total-blindness category depend on the senses of touch and hearing to obtain information; in fact, these two senses together can receive less than 20% of the available information. Students who are totally blind therefore face difficulties in the learning process, including in learning mathematics. This study aims to describe the problem-solving process of a student with total blindness on mathematical literacy problems, analyzed using Polya's phases. The research used a test of mathematical literacy problems similar to those in PISA, together with in-depth interviews. The subject of the study was a student with visual impairment in the total-blindness category. The results show that the student's problem-solving on mathematical literacy tasks, viewed through Polya's phases, was quite good. In the understanding-the-problem phase, the student read the problem about twice by tracing the text with the fingers and was assisted by hearing the information three times. In the devising-a-plan phase, the student drew on previously acquired knowledge and experience. In the carrying-out-the-plan phase, the student implemented the plan as devised. In the looking-back phase, the student needed to check the answers three times but was not able to find an alternative solution method.

  13. Using spoken words to guide open-ended category formation.

    PubMed

    Chauhan, Aneesh; Seabra Lopes, Luís

    2011-11-01

    Naming is a powerful cognitive tool that facilitates categorization by forming an association between words and their referents. There is evidence in child development literature that strong links exist between early word-learning and conceptual development. A growing view is also emerging that language is a cultural product created and acquired through social interactions. Inspired by these studies, this paper presents a novel learning architecture for category formation and vocabulary acquisition in robots through active interaction with humans. This architecture is open-ended and is capable of acquiring new categories and category names incrementally. The process can be compared to language grounding in children at single-word stage. The robot is embodied with visual and auditory sensors for world perception. A human instructor uses speech to teach the robot the names of the objects present in a visually shared environment. The robot uses its perceptual input to ground these spoken words and dynamically form/organize category descriptions in order to achieve better categorization. To evaluate the learning system at word-learning and category formation tasks, two experiments were conducted using a simple language game involving naming and corrective feedback actions from the human user. The obtained results are presented and discussed in detail.

  14. The effect of category learning on attentional modulation of visual cortex.

    PubMed

    Folstein, Jonathan R; Fuller, Kelly; Howard, Dorothy; DePatie, Thomas

    2017-09-01

    Learning about visual object categories causes changes in the way we perceive those objects. One likely mechanism by which this occurs is the application of attention to potentially relevant objects. Here we test the hypothesis that category membership influences the allocation of attention, allowing attention to be applied not only to object features, but to entire categories. Participants briefly learned to categorize a set of novel cartoon animals after which EEG was recorded while participants distinguished between a target and non-target category. A second identical EEG session was conducted after two sessions of categorization practice. The category structure and task design allowed parametric manipulation of number of target features while holding feature frequency and category membership constant. We found no evidence that category membership influenced attentional selection: a postero-lateral negative component, labeled the selection negativity/N250, increased over time and was sensitive to number of target features, not target categories. In contrast, the right hemisphere N170 was not sensitive to target features. The P300 appeared sensitive to category in the first session, but showed a graded sensitivity to number of target features in the second session, possibly suggesting a transition from rule-based to similarity based categorization. Copyright © 2017. Published by Elsevier Ltd.

  15. Out of sight, out of mind: Categorization learning and normal aging.

    PubMed

    Schenk, Sabrina; Minda, John P; Lech, Robert K; Suchan, Boris

    2016-10-01

    The present combined EEG and eye tracking study examined the process of categorization learning at different age ranges and aimed to investigate to which degree categorization learning is mediated by visual attention and perceptual strategies. Seventeen young subjects and ten elderly subjects had to perform a visual categorization task with two abstract categories. Each category consisted of prototypical stimuli and an exception. The categorization of prototypical stimuli was learned very early during the experiment, while the learning of exceptions was delayed. The categorization of exceptions was accompanied by higher P150, P250 and P300 amplitudes. In contrast to younger subjects, elderly subjects had problems in the categorization of exceptions, but showed an intact categorization performance for prototypical stimuli. Moreover, elderly subjects showed higher fixation rates for important stimulus features and higher P150 amplitudes, which were positively correlated with the categorization performances. These results indicate that elderly subjects compensate for cognitive decline through enhanced perceptual and attentional processing of individual stimulus features. Additionally, a computational approach has been applied and showed a transition away from purely abstraction-based learning to an exemplar-based learning in the middle block for both groups. However, the calculated models provide a better fit for younger subjects than for elderly subjects. The current study demonstrates that human categorization learning is based on early abstraction-based processing followed by an exemplar-memorization stage. This strategy combination facilitates the learning of real world categories with a nuanced category structure. In addition, the present study suggests that categorization learning is affected by normal aging and modulated by perceptual processing and visual attention. Copyright © 2016 Elsevier Ltd. All rights reserved.
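The abstraction-based versus exemplar-based distinction drawn by the computational approach can be illustrated with a minimal sketch (an assumed exponential similarity function and toy stimuli, not the authors' fitted models): a prototype model compares a probe to each category's average, while an exemplar model sums similarity to every stored item, which is what lets it accommodate exceptions near the category boundary.

```python
import math

def similarity(a, b, c=2.0):
    """Exponential similarity decaying with Euclidean distance (assumed form)."""
    return math.exp(-c * math.dist(a, b))

def exemplar_score(probe, exemplars):
    """Exemplar (memorization) model: summed similarity to every stored item."""
    return sum(similarity(probe, e) for e in exemplars)

def prototype_score(probe, exemplars):
    """Abstraction model: similarity to the category's central tendency."""
    proto = [sum(xs) / len(xs) for xs in zip(*exemplars)]
    return similarity(probe, proto)
```

A probe is assigned to whichever category yields the higher score; fitting both models to a learner's responses block by block is one way to detect the reported transition from abstraction-based to exemplar-based responding.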

  16. Assessing the Neural Basis of Uncertainty in Perceptual Category Learning through Varying Levels of Distortion

    ERIC Educational Resources Information Center

    Daniel, Reka; Wagner, Gerd; Koch, Kathrin; Reichenbach, Jurgen R.; Sauer, Heinrich; Schlosser, Ralf G. M.

    2011-01-01

    The formation of new perceptual categories involves learning to extract that information from a wide range of often noisy sensory inputs, which is critical for selecting between a limited number of responses. To identify brain regions involved in visual classification learning under noisy conditions, we developed a task on the basis of the…

  17. Differential Impact of Posterior Lesions in the Left and Right Hemisphere on Visual Category Learning and Generalization to Contrast Reversal

    ERIC Educational Resources Information Center

    Langguth, Berthold; Juttner, Martin; Landis, Theodor; Regard, Marianne; Rentschler, Ingo

    2009-01-01

    Hemispheric differences in the learning and generalization of pattern categories were explored in two experiments involving sixteen patients with unilateral posterior, cerebral lesions in the left (LH) or right (RH) hemisphere. In each experiment participants were first trained to criterion in a supervised learning paradigm to categorize a set of…

  18. Discovery learning model with geogebra assisted for improvement mathematical visual thinking ability

    NASA Astrophysics Data System (ADS)

    Juandi, D.; Priatna, N.

    2018-05-01

    The main goal of this study is to improve the mathematical visual thinking ability of high school students through implementation of the Discovery Learning model with GeoGebra assistance. The study used a quasi-experimental method with a non-random pretest-posttest control design. The sample consisted of 62 grade XI senior high school students in one school in Bandung district. Data were collected through documentation, observation, written tests, interviews, daily journals, and student worksheets. The results of this study are: 1) the improvement in mathematical visual thinking ability of students who received the Discovery Learning model with GeoGebra assistance is significantly higher than that of students who received conventional learning; 2) there is a difference in the improvement of students' mathematical visual thinking ability between treatment groups based on prior mathematical knowledge (high, medium, and low); 3) the improvement in the high prior-knowledge group is significantly higher than in the medium and low groups; 4) the quality of improvement for students with high and low prior knowledge is in the moderate category, while the quality of improvement in the high category was achieved by students with medium prior knowledge.

  19. Large-scale weakly supervised object localization via latent category learning.

    PubMed

    Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve

    2015-04-01

    Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image condition, objects usually have large ambiguity with backgrounds, and effective algorithms for large-scale weakly supervised localization in cluttered backgrounds are lacking. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts, or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose an online version of LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Classes (VOC) 2007 and the large-scale ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2013 detection data sets shows that the method improves annotation precision by 10% over previous methods. More importantly, we achieve a detection precision that outperforms previous results by a large margin and is competitive with the supervised deformable part model (DPM) 5.0 baseline on both data sets.
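The first LCL step, learning latent categories with latent semantic analysis, amounts to a truncated SVD of a feature-by-image count matrix; the following is a minimal sketch under that reading (the matrix, sizes, and function name are invented for illustration and are not the paper's actual pipeline):

```python
import numpy as np

def latent_categories(counts, k):
    """Project images onto the top-k latent dimensions of an LSA factorization.

    `counts` is a (n_features, n_images) co-occurrence matrix; each latent
    dimension can absorb co-occurring structure, e.g., 'sky' features that
    reliably accompany one object class."""
    U, S, Vt = np.linalg.svd(counts, full_matrices=False)
    # Latent coordinates of each image: singular values times right vectors.
    return (S[:k, None] * Vt[:k]).T  # shape: (n_images, k)
```

Images sharing latent structure end up close together in the reduced space, which is the property a subsequent category-selection step could exploit to separate object-bearing categories from background ones.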

  20. Interactivity of Visual Mathematical Representations: Factors Affecting Learning and Cognitive Processes

    ERIC Educational Resources Information Center

    Sedig, Kamran; Liang, Hai-Ning

    2006-01-01

    Computer-based mathematical cognitive tools (MCTs) are a category of external aids intended to support and enhance learning and cognitive processes of learners. MCTs often contain interactive visual mathematical representations (VMRs), where VMRs are graphical representations that encode properties and relationships of mathematical concepts. In…

  1. Statistical learning using real-world scenes: extracting categorical regularities without conscious intent.

    PubMed

    Brady, Timothy F; Oliva, Aude

    2008-07-01

    Recent work has shown that observers can parse streams of syllables, tones, or visual shapes and learn statistical regularities in them without conscious intent (e.g., learn that A is always followed by B). Here, we demonstrate that these statistical-learning mechanisms can operate at an abstract, conceptual level. In Experiments 1 and 2, observers incidentally learned which semantic categories of natural scenes covaried (e.g., kitchen scenes were always followed by forest scenes). In Experiments 3 and 4, category learning with images of scenes transferred to words that represented the categories. In each experiment, the category of the scenes was irrelevant to the task. Together, these results suggest that statistical-learning mechanisms can operate at a categorical level, enabling generalization of learned regularities using existing conceptual knowledge. Such mechanisms may guide learning in domains as disparate as the acquisition of causal knowledge and the development of cognitive maps from environmental exploration.
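The regularity the observers extracted (e.g., kitchen always followed by forest) is a first-order transition probability, which can be computed from a category stream as follows (an illustrative sketch; the function name and example stream are assumptions, not the authors' materials):

```python
from collections import Counter, defaultdict

def transition_probs(stream):
    """Map each category to the conditional probabilities of its successors,
    estimated from adjacent pairs in the stream."""
    pairs = Counter(zip(stream, stream[1:]))   # count A->B transitions
    totals = Counter(stream[:-1])              # occurrences with a successor
    probs = defaultdict(dict)
    for (a, b), n in pairs.items():
        probs[a][b] = n / totals[a]
    return probs
```

A deterministic pairing like "kitchen is always followed by forest" shows up as a conditional probability of 1.0, which is exactly the kind of contingency the statistical-learning mechanism is proposed to pick up without conscious intent.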

  2. To call a cloud 'cirrus': sound symbolism in names for categories or items.

    PubMed

    Ković, Vanja; Sučević, Jelena; Styles, Suzy J

    2017-01-01

    The aim of the present paper is to experimentally test whether sound symbolism has selective effects on labels with different ranges-of-reference within a simple noun-hierarchy. In two experiments, adult participants learned the make up of two categories of unfamiliar objects ('alien life forms'), and were passively exposed to either category-labels or item-labels, in a learning-by-guessing categorization task. Following category training, participants were tested on their visual discrimination of object pairs. For different groups of participants, the labels were either congruent or incongruent with the objects. In Experiment 1, when trained on items with individual labels, participants were worse (made more errors) at detecting visual object mismatches when trained labels were incongruent. In Experiment 2, when participants were trained on items in labelled categories, participants were faster at detecting a match if the trained labels were congruent, and faster at detecting a mismatch if the trained labels were incongruent. This pattern of results suggests that sound symbolism in category labels facilitates later similarity judgments when congruent, and discrimination when incongruent, whereas for item labels incongruence generates error in judgements of visual object differences. These findings reveal that sound symbolic congruence has a different outcome at different levels of labelling within a noun hierarchy. These effects emerged in the absence of the label itself, indicating subtle but pervasive effects on visual object processing.

  3. Handwriting generates variable visual output to facilitate symbol learning.

    PubMed

    Li, Julia X; James, Karin H

    2016-03-01

    Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing 2 hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then changes neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: 3 involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and 3 involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception both of variable and similar forms. Comparisons across the 6 conditions (N = 72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change though brain-body-environment interactions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. Handwriting generates variable visual input to facilitate symbol learning

    PubMed Central

    Li, Julia X.; James, Karin H.

    2015-01-01

    Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing two hypotheses: That handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then changes neural systems. We addressed these issues by measuring performance of 5 year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: three involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and three involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception both of variable and similar forms. Comparisons across the six conditions (N=72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change though brain-body-environment interactions. PMID:26726913

  5. Unsupervised and self-mapping category formation and semantic object recognition for mobile robot vision used in an actual environment

    NASA Astrophysics Data System (ADS)

    Madokoro, H.; Tsukada, M.; Sato, K.

    2013-07-01

    This paper presents an unsupervised learning-based object category formation and recognition method for mobile robot vision. Our method has the following features: detection of feature points and description of features using a scale-invariant feature transform (SIFT), selection of target feature points using one-class support vector machines (OC-SVMs), generation of visual words using self-organizing maps (SOMs), formation of labels using adaptive resonance theory 2 (ART-2), and creation and classification of categories on a category map of counter-propagation networks (CPNs) for visualizing spatial relations between categories. Classification results for dynamic time-series images, obtained using two robots of different sizes and with different movements, demonstrate that our method can visualize spatial relations between categories while maintaining time-series characteristics. Moreover, the results show that our method is effective for forming categories of objects whose appearance changes.
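    The visual-word stage of such a pipeline, quantizing local feature descriptors with a self-organizing map, can be sketched in plain Python. This is a generic toy illustration on 2-D points with invented parameters, not the authors' implementation (which operates on SIFT descriptors pre-selected by OC-SVMs):

    ```python
    import math
    import random

    def train_som(data, n_units=4, epochs=50, lr0=0.5, seed=0):
        """Train a tiny 1-D self-organizing map: each input pulls its
        best-matching unit (and, more weakly, that unit's neighbors)."""
        rng = random.Random(seed)
        dim = len(data[0])
        weights = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
        for epoch in range(epochs):
            lr = lr0 * (1 - epoch / epochs)                 # decaying learning rate
            radius = max(1.0, (n_units / 2) * (1 - epoch / epochs))
            for x in data:
                bmu = min(range(n_units),                   # best-matching unit
                          key=lambda i: sum((w - v) ** 2
                                            for w, v in zip(weights[i], x)))
                for i in range(n_units):
                    h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                    weights[i] = [w + lr * h * (v - w)
                                  for w, v in zip(weights[i], x)]
        return weights

    def visual_word(x, weights):
        """A descriptor's 'visual word' is the index of its best-matching unit."""
        return min(range(len(weights)),
                   key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

    # Two well-separated toy descriptor clusters.
    descriptors = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.9], [0.85, 0.95]]
    som = train_som(descriptors)
    words = [visual_word(d, som) for d in descriptors]
    ```

    Descriptors from different clusters map to different word indices; in the paper's pipeline, histograms of such words would then feed ART-2 labeling and a CPN category map.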

  6. Smart-system of distance learning of visually impaired people based on approaches of artificial intelligence

    NASA Astrophysics Data System (ADS)

    Samigulina, Galina A.; Shayakhmetova, Assem S.

    2016-11-01

    The research objective is the creation of an innovative intelligent technology and an information Smart-system for distance learning for visually impaired people. Organizing an accessible environment in which visually impaired people can receive a quality education, and supporting their social adaptation in society, are important and topical issues in modern education. The proposed Smart-system of distance learning for visually impaired people can significantly improve the efficiency and quality of education for this category of people. The scientific novelty of the proposed Smart-system lies in its use of intelligent and statistical methods for processing multidimensional data, taking into account the psycho-physiological characteristics of how visually impaired people perceive and comprehend learning information.

  7. Exploiting Attribute Correlations: A Novel Trace Lasso-Based Weakly Supervised Dictionary Learning Method.

    PubMed

    Wu, Lin; Wang, Yang; Pan, Shirui

    2017-12-01

    It is now well established that sparse representation models work effectively for many visual recognition tasks and have pushed forward the success of dictionary learning therein. Recent studies of dictionary learning focus on learning discriminative atoms instead of purely reconstructive ones. However, the existence of intraclass diversity (i.e., data objects within the same category that exhibit large visual dissimilarities) and interclass similarity (i.e., data objects from distinct classes that share substantial visual similarity) makes it challenging to learn effective recognition models. To this end, a large number of labeled data objects are required to learn models that can effectively characterize these subtle differences. However, labeled data objects are always limited in availability, making it difficult to learn a monolithic dictionary that is discriminative enough. To address these limitations, in this paper we propose a weakly supervised dictionary learning method that automatically learns a discriminative dictionary by fully exploiting visual attribute correlations rather than label priors. In particular, the intrinsic attribute correlations are deployed as a critical cue to guide the process of object categorization, and a set of subdictionaries is jointly learned with respect to each category. The resulting dictionary is highly discriminative and leads to intraclass-diversity-aware sparse representations. Extensive experiments on image classification and object recognition demonstrate the effectiveness of our approach.
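    The alternation that most dictionary learning methods share, sparse coding against a fixed dictionary followed by an update of the atoms that were used, can be sketched as follows. This is a generic matching-pursuit illustration on toy signals, not the trace Lasso-based, attribute-correlation-guided method proposed in the paper:

    ```python
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def normalize(u):
        n = dot(u, u) ** 0.5
        return [v / n for v in u]

    def sparse_code(x, D, k=1):
        """Matching pursuit: greedily select k atoms that best explain x."""
        residual, code = list(x), [0.0] * len(D)
        for _ in range(k):
            j = max(range(len(D)), key=lambda i: abs(dot(residual, D[i])))
            c = dot(residual, D[j])
            code[j] += c
            residual = [r - c * d for r, d in zip(residual, D[j])]
        return code

    def learn_dictionary(X, iters=30, lr=0.1):
        """Alternate sparse coding with a gradient update of the used atoms."""
        # Initialize atoms from (normalized) data samples -- a common heuristic.
        D = [normalize(X[0]), normalize(X[2])]
        for _ in range(iters):
            for x in X:
                code = sparse_code(x, D)
                recon = [sum(code[i] * D[i][d] for i in range(len(D)))
                         for d in range(len(x))]
                err = [xi - ri for xi, ri in zip(x, recon)]
                for i in range(len(D)):
                    if code[i] != 0.0:
                        D[i] = normalize([di + lr * code[i] * ei
                                          for di, ei in zip(D[i], err)])
        return D

    # Toy signals drawn from two underlying directions.
    X = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 0.0, 1.0], [0.0, 0.1, 0.9]]
    D = learn_dictionary(X)
    ```

    After training, each atom aligns with one of the two signal directions, so the (here, 1-sparse) codes separate the two toy categories.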

  8. Rapid visual perception of interracial crowds: Racial category learning from emotional segregation.

    PubMed

    Lamer, Sarah Ariel; Sweeny, Timothy D; Dyer, Michael Louis; Weisbuch, Max

    2018-05-01

    Drawing from research on social identity and ensemble coding, we theorize that crowd perception provides a powerful mechanism for social category learning. Crowds include allegiances that may be distinguished by visual cues to shared behavior and mental states, providing perceivers with direct information about social groups and thus a basis for learning social categories. Here, emotion expressions signaled group membership: to the extent that a crowd exhibited emotional segregation (i.e., was segregated into emotional subgroups), a visible characteristic (race) that incidentally distinguished emotional subgroups was expected to support categorical distinctions. Participants were randomly assigned to view interracial crowds in which emotion differences between (black vs. white) subgroups were either small (control condition) or large (emotional segregation condition). On each trial, participants saw crowds of 12 faces (6 black, 6 white) for roughly 300 ms and were asked to estimate the average emotion of the entire crowd. After all trials, participants completed a racial categorization task and self-report measure of race essentialism. As predicted, participants exposed to emotional segregation (vs. control) exhibited stronger racial category boundaries and stronger race essentialism. Furthermore, such effects accrued via ensemble coding, a visual mechanism that summarizes perceptual information: emotional segregation strengthened participants' racial category boundaries to the extent that segregation limited participants' abilities to integrate emotion across racial subgroups. Together with evidence that people observe emotional segregation in natural environments, these findings suggest that crowd perception mechanisms support racial category boundaries and race essentialism. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
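    The ensemble-coding account rests on simple summary statistics. The numbers below are hypothetical emotion intensities (positive = happier), chosen only to show how a fully segregated crowd can present a near-neutral overall average even though the between-subgroup gap is large:

    ```python
    from statistics import mean, pstdev

    # Hypothetical per-face emotion intensities for a 12-face crowd,
    # split into two 6-face subgroups (positive = happier expression).
    subgroup_a = [0.8, 0.7, 0.9, 0.6, 0.8, 0.7]        # uniformly positive subgroup
    subgroup_b = [-0.8, -0.7, -0.9, -0.6, -0.8, -0.7]  # uniformly negative subgroup
    crowd = subgroup_a + subgroup_b

    crowd_mean = mean(crowd)                            # integrated ensemble estimate
    segregation = mean(subgroup_a) - mean(subgroup_b)   # between-subgroup emotion gap
    spread = pstdev(crowd)                              # wide spread across the crowd
    ```

    A perceiver who integrates across subgroups should report a crowd mean near zero; to the extent that segregation blocks integration, reports should instead track one subgroup's mean.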

  9. Right away: A late, right-lateralized category effect complements an early, left-lateralized category effect in visual search.

    PubMed

    Constable, Merryn D; Becker, Stefanie I

    2017-10-01

    According to the Sapir-Whorf hypothesis, learned semantic categories can influence early perceptual processes. A central finding in support of this view is the lateralized category effect-namely, the finding that categorically different colors (e.g., blue and green hues) can be discriminated faster than colors within the same color category (e.g., different hues of green), especially when they are presented in the right visual field. Because the right visual field projects to the left hemisphere, this finding has been popularly couched in terms of the left-lateralization of language. However, other studies have reported bilateral category effects, which has led some researchers to question the linguistic origins of the effect. Here we examined the time course of lateralized and bilateral category effects in the classical visual search paradigm by means of eyetracking and RT distribution analyses. Our results show a bilateral category effect in the manual responses, which is a combination of an early, left-lateralized category effect and a later, right-lateralized category effect. The newly discovered late, right-lateralized category effect occurred only when observers had difficulty locating the target, indicating a specialization of the right hemisphere to find categorically different targets after an initial error. The finding that early and late stages of visual search show different lateralized category effects can explain a wide range of previously discrepant findings.

  10. Visual Imagery, Lifecourse Structure and Lifelong Learning

    ERIC Educational Resources Information Center

    Schuller, Tom

    2004-01-01

    Imagery could add an extra dimension to analyses of lifelong learning, which need to draw on diverse sources and techniques. This article has two principal components. First I suggest that the use of images might be divided into three categories: as illustration; as evidence; and as heuristic. I go on to explore the latter two categories, first by…

  11. Learning to Match Auditory and Visual Speech Cues: Social Influences on Acquisition of Phonological Categories

    ERIC Educational Resources Information Center

    Altvater-Mackensen, Nicole; Grossmann, Tobias

    2015-01-01

    Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential…

  12. Attentional Bias in Human Category Learning: The Case of Deep Learning.

    PubMed

    Hanson, Catherine; Caglar, Leyla Roskan; Hanson, Stephen José

    2018-01-01

    Category learning performance is influenced by both the nature of the category's structure and the way category features are processed during learning. Shepard (1964, 1987) showed that stimuli can have structures with features that are statistically uncorrelated (separable) or statistically correlated (integral) within categories. Humans find it much easier to learn categories having separable features, especially when attention to only a subset of relevant features is required, and harder to learn categories having integral features, which require consideration of all of the available features and integration of all the relevant category features satisfying the category rule (Garner, 1974). In contrast to humans, a single hidden layer backpropagation (BP) neural network has been shown to learn both separable and integral categories equally easily, independent of the category rule (Kruschke, 1993). This "failure" to replicate human category performance appeared to be strong evidence that connectionist networks were incapable of modeling human attentional bias. We tested the presumed limitations of attentional bias in networks in two ways: (1) by having networks learn categories with exemplars that have high feature complexity in contrast to the low dimensional stimuli previously used, and (2) by investigating whether a Deep Learning (DL) network, which has demonstrated human-like performance in many different kinds of tasks (language translation, autonomous driving, etc.), would display human-like attentional bias during category learning. We were able to show a number of interesting results. First, we replicated the failure of BP to differentially process integral and separable category structures when low dimensional stimuli are used (Garner, 1974; Kruschke, 1993). Second, we show that using the same low dimensional stimuli, Deep Learning (DL), unlike BP but similar to humans, learns separable category structures more quickly than integral category structures. Third, we show that even BP can exhibit human-like learning differences between integral and separable category structures when high dimensional stimuli (face exemplars) are used. We conclude, after visualizing the hidden unit representations, that DL appears to extend initial learning due to feature development thereby reducing destructive feature competition by incrementally refining feature detectors throughout later layers until a tipping point (in terms of error) is reached resulting in rapid asymptotic learning.
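    A minimal version of the single-hidden-layer backpropagation network discussed above can be written in plain Python. The architecture, learning rate, and toy category structures here are illustrative assumptions, not the cited studies' stimuli or code; the XOR rule stands in only loosely for a structure that requires attending to all features:

    ```python
    import math
    import random

    def train_net(samples, hidden=8, epochs=4000, lr=0.5, seed=0):
        """Single-hidden-layer backpropagation network (sigmoid units,
        squared error, online updates) for binary classification."""
        rng = random.Random(seed)
        dim = len(samples[0][0])
        W1 = [[rng.uniform(-1, 1) for _ in range(dim + 1)] for _ in range(hidden)]
        W2 = [rng.uniform(-1, 1) for _ in range(hidden + 1)]
        sig = lambda z: 1.0 / (1.0 + math.exp(-z))
        for _ in range(epochs):
            for x, t in samples:
                xb = list(x) + [1.0]                    # input plus bias
                h = [sig(sum(w * v for w, v in zip(row, xb))) for row in W1]
                hb = h + [1.0]                          # hidden plus bias
                y = sig(sum(w * v for w, v in zip(W2, hb)))
                dy = (y - t) * y * (1 - y)              # output delta
                for j in range(hidden + 1):
                    grad_h = dy * W2[j]                 # backprop before the update
                    W2[j] -= lr * dy * hb[j]
                    if j < hidden:
                        dh = grad_h * h[j] * (1 - h[j])
                        for i in range(dim + 1):
                            W1[j][i] -= lr * dh * xb[i]
        def predict(x):
            xb = list(x) + [1.0]
            hb = [sig(sum(w * v for w, v in zip(row, xb))) for row in W1] + [1.0]
            return sig(sum(w * v for w, v in zip(W2, hb)))
        return predict

    # "Separable" structure: category depends on a single feature.
    separable = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 1)]
    # XOR rule: a stand-in for a structure that requires all features.
    integral = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    predict_sep = train_net(separable)
    predict_int = train_net(integral)
    ```

    Consistent with Kruschke's observation, such a network masters both rule types with enough training; the behavioral contrast of interest is in learning speed and attention, not final accuracy.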

  13. First-Pass Processing of Value Cues in the Ventral Visual Pathway.

    PubMed

    Sasikumar, Dennis; Emeric, Erik; Stuphorn, Veit; Connor, Charles E

    2018-02-19

    Real-world value often depends on subtle, continuously variable visual cues specific to particular object categories, like the tailoring of a suit, the condition of an automobile, or the construction of a house. Here, we used microelectrode recording in behaving monkeys to test two possible mechanisms for category-specific value-cue processing: (1) previous findings suggest that prefrontal cortex (PFC) identifies object categories, and based on category identity, PFC could use top-down attentional modulation to enhance visual processing of category-specific value cues, providing signals to PFC for calculating value, and (2) a faster mechanism would be first-pass visual processing of category-specific value cues, immediately providing the necessary visual information to PFC. This, however, would require learned mechanisms for processing the appropriate cues in a given object category. To test these hypotheses, we trained monkeys to discriminate value in four letter-like stimulus categories. Each category had a different, continuously variable shape cue that signified value (liquid reward amount) as well as other cues that were irrelevant. Monkeys chose between stimuli of different reward values. Consistent with the first-pass hypothesis, we found early signals for category-specific value cues in area TE (the final stage in monkey ventral visual pathway) beginning 81 ms after stimulus onset-essentially at the start of TE responses. Task-related activity emerged in lateral PFC approximately 40 ms later and consisted mainly of category-invariant value tuning. Our results show that, for familiar, behaviorally relevant object categories, high-level ventral pathway cortex can implement rapid, first-pass processing of category-specific value cues. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. [Which learning methods are expected for ultrasound training? Blended learning on trial].

    PubMed

    Röhrig, S; Hempel, D; Stenger, T; Armbruster, W; Seibel, A; Walcher, F; Breitkreutz, R

    2014-10-01

    Current teaching methods in graduate and postgraduate training often include frontal presentations. Especially in ultrasound education, not only knowledge but also sensorimotor and visual skills need to be taught. This requires new learning methods. This study examined which types of teaching methods are preferred by participants in ultrasound training courses before, during and after the course by analyzing a blended learning concept. It also investigated how much time trainees are willing to spend on such activities. A survey was conducted at the end of a certified ultrasound training course. Participants were asked to complete a questionnaire based on a visual analogue scale (VAS) in which three categories were defined: category (1) vote for acceptance with a two-thirds majority (VAS 67-100%), category (2) simple acceptance (50-67%) and category (3) rejection (< 50%). A total of 176 trainees participated in this survey. Participants preferred an e-learning program with interactive elements, short presentations (less than 20 min), incorporating interaction with the audience, hands-on sessions in small groups, an alternation between presentations and hands-on sessions, live demonstrations and quizzes. For post-course learning, interactive and media-assisted approaches were preferred, such as e-learning, films of the presentations and the possibility to stay in contact with instructors in order to discuss the results. Participants also voted for maintaining a logbook for documentation of results. The results of this study indicate the need for interactive learning concepts and blended learning activities. Directors of ultrasound courses may consider these aspects and are encouraged to develop sustainable learning pathways.
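    The three-band scoring of the visual analogue scale can be stated as a small function. The boundary handling (a score of exactly 67% counted as category 1) is an assumption for illustration, since the categories as defined overlap at the band edges:

    ```python
    def vas_category(score):
        """Map a visual analogue scale score (0-100, in percent) onto the
        survey's three acceptance bands. Boundary assignment is assumed."""
        if not 0 <= score <= 100:
            raise ValueError("VAS score must lie between 0 and 100")
        if score >= 67:
            return 1  # acceptance with a two-thirds majority (67-100%)
        if score >= 50:
            return 2  # simple acceptance (50-67%)
        return 3      # rejection (< 50%)
    ```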

  15. Resonant Cholinergic Dynamics in Cognitive and Motor Decision-Making: Attention, Category Learning, and Choice in Neocortex, Superior Colliculus, and Optic Tectum.

    PubMed

    Grossberg, Stephen; Palma, Jesse; Versace, Massimiliano

    2015-01-01

    Freely behaving organisms need to rapidly calibrate their perceptual, cognitive, and motor decisions based on continuously changing environmental conditions. These plastic changes include sharpening or broadening of cognitive and motor attention and learning to match the behavioral demands that are imposed by changing environmental statistics. This article proposes that a shared circuit design for such flexible decision-making is used in specific cognitive and motor circuits, and that both types of circuits use acetylcholine to modulate choice selectivity. Such task-sensitive control is proposed to control thalamocortical choice of the critical features that are cognitively attended and that are incorporated through learning into prototypes of visual recognition categories. A cholinergically-modulated process of vigilance control determines if a recognition category and its attended features are abstract (low vigilance) or concrete (high vigilance). Homologous neural mechanisms of cholinergic modulation are proposed to focus attention and learn a multimodal map within the deeper layers of superior colliculus. This map enables visual, auditory, and planned movement commands to compete for attention, leading to selection of a winning position that controls where the next saccadic eye movement will go. Such map learning may be viewed as a kind of attentive motor category learning. The article hereby explicates a link between attention, learning, and cholinergic modulation during decision making within both cognitive and motor systems. Homologs between the mammalian superior colliculus and the avian optic tectum lead to predictions about how multimodal map learning may occur in the mammalian and avian brain and how such learning may be modulated by acetylcholine.

  16. Resonant Cholinergic Dynamics in Cognitive and Motor Decision-Making: Attention, Category Learning, and Choice in Neocortex, Superior Colliculus, and Optic Tectum

    PubMed Central

    Grossberg, Stephen; Palma, Jesse; Versace, Massimiliano

    2016-01-01

    Freely behaving organisms need to rapidly calibrate their perceptual, cognitive, and motor decisions based on continuously changing environmental conditions. These plastic changes include sharpening or broadening of cognitive and motor attention and learning to match the behavioral demands that are imposed by changing environmental statistics. This article proposes that a shared circuit design for such flexible decision-making is used in specific cognitive and motor circuits, and that both types of circuits use acetylcholine to modulate choice selectivity. Such task-sensitive control is proposed to control thalamocortical choice of the critical features that are cognitively attended and that are incorporated through learning into prototypes of visual recognition categories. A cholinergically-modulated process of vigilance control determines if a recognition category and its attended features are abstract (low vigilance) or concrete (high vigilance). Homologous neural mechanisms of cholinergic modulation are proposed to focus attention and learn a multimodal map within the deeper layers of superior colliculus. This map enables visual, auditory, and planned movement commands to compete for attention, leading to selection of a winning position that controls where the next saccadic eye movement will go. Such map learning may be viewed as a kind of attentive motor category learning. The article hereby explicates a link between attention, learning, and cholinergic modulation during decision making within both cognitive and motor systems. Homologs between the mammalian superior colliculus and the avian optic tectum lead to predictions about how multimodal map learning may occur in the mammalian and avian brain and how such learning may be modulated by acetylcholine. PMID:26834535

  17. Tracking Multiple Statistics: Simultaneous Learning of Object Names and Categories in English and Mandarin Speakers.

    PubMed

    Chen, Chi-Hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen

    2017-08-01

    Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories based on the commonalities across training stimuli. Experiment 2 replicated the first experiment and further examined whether speakers of Mandarin, a language in which final syllables of object names are more predictive of category membership than English, were able to learn words and form object categories when trained with the same type of structures. The results indicate that both groups of learners successfully extracted multiple levels of co-occurrence and used them to learn words and object categories simultaneously. However, marked individual differences in performance were also found, suggesting possible interference and competition in processing the two concurrent streams of regularities. Copyright © 2016 Cognitive Science Society, Inc.
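    The co-occurrence bookkeeping that underlies cross-situational word learning is easy to sketch. The word forms and objects below are invented toy items, not the experiments' stimuli:

    ```python
    from collections import Counter, defaultdict

    def cross_situational_learn(trials):
        """Accumulate word-object co-occurrence counts across individually
        ambiguous trials, then map each word to its most frequent partner."""
        counts = defaultdict(Counter)
        for words, objects in trials:
            for w in words:
                for o in objects:
                    counts[w][o] += 1
        return {w: c.most_common(1)[0][0] for w, c in counts.items()}

    # No single trial disambiguates, but correct pairings co-occur most often.
    trials = [
        (["bosa", "gasser"], ["ball", "drum"]),
        (["bosa", "manu"],   ["ball", "cup"]),
        (["gasser", "manu"], ["drum", "cup"]),
    ]
    lexicon = cross_situational_learn(trials)
    ```

    After three trials each correct pair has co-occurred twice and every foil only once, so the learner resolves all three mappings; tracking category-level regularities would add a second counter over object features.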

  18. The effect of encoding conditions on learning in the prototype distortion task.

    PubMed

    Lee, Jessica C; Livesey, Evan J

    2017-06-01

    The prototype distortion task demonstrates that it is possible to learn about a category of physically similar stimuli through mere observation. However, there have been few attempts to test whether different encoding conditions affect learning in this task. This study compared prototypicality gradients produced under incidental learning conditions in which participants performed a visual search task, with those produced under intentional learning conditions in which participants were required to memorize the stimuli. Experiment 1 showed that similar prototypicality gradients could be obtained for category endorsement and familiarity ratings, but also found (weaker) prototypicality gradients in the absence of exposure. In Experiments 2 and 3, memorization was found to strengthen prototypicality gradients in familiarity ratings in comparison to visual search, but there were no group differences in participants' ability to discriminate between novel and presented exemplars. Although the Search groups in Experiments 2 and 3 produced prototypicality gradients, they were no different in magnitude to those produced in the absence of stimulus exposure in Experiment 1, suggesting that incidental learning during visual search was not conducive to producing prototypicality gradients. This study suggests that learning in the prototype distortion task is not implicit in the sense of resulting automatically from exposure, is affected by the nature of encoding, and should be considered in light of potential learning-at-test effects.

  19. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    PubMed Central

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
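    The distributional learning step can be sketched as expectation-maximization for a two-component, one-dimensional Gaussian mixture. The voice-onset-time-like values below are simulated toy data with assumed category means, not the paper's training corpus:

    ```python
    import math
    import random

    def em_gmm(data, iters=50):
        """EM for a 1-D mixture of two Gaussians: the E-step computes soft
        category responsibilities, the M-step re-estimates the parameters."""
        mu = [min(data), max(data)]          # spread the initial means apart
        var = [1.0, 1.0]
        pi = [0.5, 0.5]
        def pdf(x, m, v):
            return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
        for _ in range(iters):
            resp = []
            for x in data:                   # E-step
                p = [pi[k] * pdf(x, mu[k], var[k]) for k in (0, 1)]
                s = sum(p)
                resp.append([pk / s for pk in p])
            for k in (0, 1):                 # M-step
                nk = sum(r[k] for r in resp)
                pi[k] = nk / len(data)
                mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
                var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                                 for r, x in zip(resp, data)) / nk, 1e-3)
        return pi, mu, var

    # Simulated voice-onset times (ms) sampled from two phonetic categories.
    rng = random.Random(42)
    data = ([rng.gauss(10, 3) for _ in range(100)] +
            [rng.gauss(50, 5) for _ in range(100)])
    pi, mu, var = em_gmm(data)
    ```

    With well-separated categories the mixture recovers means near 10 and 50 ms and roughly equal mixing weights; modeling audiovisual cue integration would add a visual dimension per category and learned cue weights.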

  20. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    PubMed

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.

  1. Object-graphs for context-aware visual category discovery.

    PubMed

    Lee, Yong Jae; Grauman, Kristen

    2012-02-01

    How can knowing about some categories help us to discover new ones in unlabeled images? Unsupervised visual category discovery is useful to mine for recurring objects without human supervision, but existing methods assume no prior information and thus tend to perform poorly for cluttered scenes with multiple objects. We propose to leverage knowledge about previously learned categories to enable more accurate discovery, and address challenges in estimating their familiarity in unsegmented, unlabeled images. We introduce two variants of a novel object-graph descriptor to encode the 2D and 3D spatial layout of object-level co-occurrence patterns relative to an unfamiliar region and show that by using them to model the interaction between an image’s known and unknown objects, we can better detect new visual categories. Rather than mine for all categories from scratch, our method identifies new objects while drawing on useful cues from familiar ones. We evaluate our approach on several benchmark data sets and demonstrate clear improvements in discovery over conventional purely appearance-based baselines.

  2. Combining Computational Modeling and Neuroimaging to Examine Multiple Category Learning Systems in the Brain

    PubMed Central

    Nomura, Emi M.; Reber, Paul J.

    2012-01-01

    Considerable evidence has argued in favor of multiple neural systems supporting human category learning, one based on conscious rule inference and one based on implicit information integration. However, there have been few attempts to study potential system interactions during category learning. The PINNACLE (Parallel Interactive Neural Networks Active in Category Learning) model incorporates multiple categorization systems that compete to provide categorization judgments about visual stimuli. Incorporating competing systems requires inclusion of cognitive mechanisms associated with resolving this competition and creates a potential credit assignment problem in handling feedback. The hypothesized mechanisms make predictions about internal mental states that are not always reflected in choice behavior, but may be reflected in neural activity. Two prior functional magnetic resonance imaging (fMRI) studies of category learning were re-analyzed using PINNACLE to identify neural correlates of internal cognitive states on each trial. These analyses identified additional brain regions supporting the two types of category learning, regions particularly active when the systems are hypothesized to be in maximal competition, and found evidence of covert learning activity in the “off system” (the category learning system not currently driving behavior). These results suggest that PINNACLE provides a plausible framework for how competing multiple category learning systems are organized in the brain and shows how computational modeling approaches and fMRI can be used synergistically to gain access to cognitive processes that support complex decision-making machinery. PMID:24962771

  3. Finding faults: analogical comparison supports spatial concept learning in geoscience.

    PubMed

    Jee, Benjamin D; Uttal, David H; Gentner, Dedre; Manduca, Cathy; Shipley, Thomas F; Sageman, Bradley

    2013-05-01

    A central issue in education is how to support the spatial thinking involved in learning science, technology, engineering, and mathematics (STEM). We investigated whether and how the cognitive process of analogical comparison supports learning of a basic spatial concept in geoscience, fault. Because of the high variability in the appearance of faults, it may be difficult for students to learn the category-relevant spatial structure. There is abundant evidence that comparing analogous examples can help students gain insight into important category-defining features (Gentner in Cogn Sci 34(5):752-775, 2010). Further, comparing high-similarity pairs can be especially effective at revealing key differences (Sagi et al. 2012). Across three experiments, we tested whether comparison of visually similar contrasting examples would help students learn the fault concept. Our main findings were that participants performed better at identifying faults when they (1) compared contrasting (fault/no fault) cases versus viewing each case separately (Experiment 1), (2) compared similar as opposed to dissimilar contrasting cases early in learning (Experiment 2), and (3) viewed a contrasting pair of schematic block diagrams as opposed to a single block diagram of a fault as part of an instructional text (Experiment 3). These results suggest that comparison of visually similar contrasting cases helped distinguish category-relevant from category-irrelevant features for participants. When such comparisons occurred early in learning, participants were more likely to form an accurate conceptual representation. Thus, analogical comparison of images may provide one powerful way to enhance spatial learning in geoscience and other STEM disciplines.

  4. Category Learning Research in the Interactive Online Environment Second Life

    NASA Technical Reports Server (NTRS)

    Andrews, Jan; Livingston, Ken; Sturm, Joshua; Bliss, Daniel; Hawthorne, Daniel

    2011-01-01

    The interactive online environment Second Life allows users to create novel three-dimensional stimuli that can be manipulated in a meaningful yet controlled environment. These features suggest Second Life's utility as a powerful tool for investigating how people learn concepts for unfamiliar objects. The first of two studies was designed to establish that cognitive processes elicited in this virtual world are comparable to those tapped in conventional settings by attempting to replicate the established finding that category learning systematically influences perceived similarity. From the perspective of an avatar, participants navigated a course of unfamiliar three-dimensional stimuli and were trained to classify them into two labeled categories based on two visual features. Participants then gave similarity ratings for pairs of stimuli and their responses were compared to those of control participants who did not learn the categories. Results indicated significant compression, whereby objects classified together were judged to be more similar by learning than control participants, thus supporting the validity of using Second Life as a laboratory for studying human cognition. A second study used Second Life to test the novel hypothesis that effects of learning on perceived similarity do not depend on the presence of verbal labels for categories. We presented the same stimuli but participants classified them by selecting between two complex visual patterns designed to be extremely difficult to label. While learning was more challenging in this condition, those who did learn without labels showed a compression effect identical to that found in the first study using verbal labels. Together these studies establish that at least some forms of human learning in Second Life parallel learning in the actual world and thus open the door to future studies that will make greater use of the enriched variety of objects and interactions possible in simulated environments compared to traditional experimental situations.

  5. Inferential Learning of Serial Order of Perceptual Categories by Rhesus Monkeys (Macaca mulatta)

    PubMed Central

    2017-01-01

    Category learning in animals is typically trained explicitly, in most instances by varying the exemplars of a single category in a matching-to-sample task. Here, we show that male rhesus macaques can learn categories by a transitive inference paradigm in which novel exemplars of five categories were presented throughout training. Instead of requiring decisions about a constant set of repetitively presented stimuli, we studied the macaque's ability to determine the relative order of multiple exemplars of particular stimuli that were rarely repeated. Ordinal decisions generalized both to novel stimuli and, as a consequence, to novel pairings. Thus, we showed that rhesus monkeys could learn to categorize on the basis of implied ordinal position, without prior matching-to-sample training, and that they could then make inferences about category order. Our results challenge the plausibility of association models of category learning and broaden the scope of the transitive inference paradigm. SIGNIFICANCE STATEMENT The cognitive abilities of nonhuman animals are of enduring interest to scientists and the general public because they blur the dividing line between human and nonhuman intelligence. Categorization and sequence learning are each highly abstract cognitive abilities in their own right. This study is the first to provide evidence that visual categories can be ordered serially by macaque monkeys using a behavioral paradigm that provides no explicit feedback about category or serial order. These results strongly challenge accounts of learning based on stimulus–response associations. PMID:28546309

  6. SOVEREIGN: An autonomous neural system for incrementally learning planned action sequences to navigate towards a rewarded goal.

    PubMed

    Gnadt, William; Grossberg, Stephen

    2008-06-01

    How do reactive and planned behaviors interact in real time? How are sequences of such behaviors released at appropriate times during autonomous navigation to realize valued goals? Controllers for both animals and mobile robots, or animats, need reactive mechanisms for exploration, and learned plans to reach goal objects once an environment becomes familiar. The SOVEREIGN (Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation) animat model embodies these capabilities, and is tested in a 3D virtual reality environment. SOVEREIGN includes several interacting subsystems which model complementary properties of cortical What and Where processing streams and which clarify similarities between mechanisms for navigation and arm movement control. As the animat explores an environment, visual inputs are processed by networks that are sensitive to visual form and motion in the What and Where streams, respectively. Position-invariant and size-invariant recognition categories are learned by real-time incremental learning in the What stream. Estimates of target position relative to the animat are computed in the Where stream, and can activate approach movements toward the target. Motion cues from animat locomotion can elicit head-orienting movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement are derived from interacting proprioceptive and visual cues. Movement sequences are stored within a motor working memory. Sequences of visual categories are stored in a sensory working memory. These working memories trigger learning of sensory and motor sequence categories, or plans, which together control planned movements. Predictively effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. 
Selected planning chunks effect a gradual transition from variable reactive exploratory movements to efficient goal-oriented planned movement sequences. Volitional signals gate interactions between model subsystems and the release of overt behaviors. The model can control different motor sequences under different motivational states and learns more efficient sequences to rewarded goals as exploration proceeds.

  7. Learning foreign sounds in an alien world: videogame training improves non-native speech categorization.

    PubMed

    Lim, Sung-joo; Holt, Lori L

    2011-01-01

    Although speech categories are defined by multiple acoustic dimensions, some are perceptually weighted more than others and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distribution characteristics of experienced sounds influence perceptual cue weights: Increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this among native Japanese learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information, and players' responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5 h across 5 days exhibited improvements in /r/-/l/ perception on par with 2-4 weeks of explicit categorization training in previous research and exhibited a shift toward more native-like perceptual cue weights. Copyright © 2011 Cognitive Science Society, Inc.

  8. Learning foreign sounds in an alien world: Videogame training improves non-native speech categorization

    PubMed Central

    Lim, Sung-joo; Holt, Lori L.

    2011-01-01

    Although speech categories are defined by multiple acoustic dimensions, some are perceptually-weighted more than others and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distribution characteristics of experienced sounds influence perceptual cue weights: increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this among native Japanese learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information and players’ responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5 hours across 5 days exhibited improvements in /r/-/l/ perception on par with 2–4 weeks of explicit categorization training in previous research and exhibited a shift toward more native-like perceptual cue weights. PMID:21827533

  9. Perceptual advantage for category-relevant perceptual dimensions: the case of shape and motion.

    PubMed

    Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel

    2014-01-01

    Category learning facilitates perception along relevant stimulus dimensions, even when tested in a discrimination task that does not require categorization. While this general phenomenon has been demonstrated previously, perceptual facilitation along dimensions has been documented by measuring different specific phenomena in different studies using different kinds of objects. Across several object domains, there is support for acquired distinctiveness, the stretching of a perceptual dimension relevant to learned categories. Studies using faces and studies using simple separable visual dimensions have also found evidence of acquired equivalence, the shrinking of a perceptual dimension irrelevant to learned categories, and categorical perception, the local stretching across the category boundary. These later two effects are rarely observed with complex non-face objects. Failures to find these effects with complex non-face objects may have been because the dimensions tested previously were perceptually integrated. Here we tested effects of category learning with non-face objects categorized along dimensions that have been found to be processed by different areas of the brain, shape and motion. While we replicated acquired distinctiveness, we found no evidence for acquired equivalence or categorical perception.

  10. Reading with sounds: sensory substitution selectively activates the visual word form area in the blind.

    PubMed

    Striem-Amit, Ella; Cohen, Laurent; Dehaene, Stanislas; Amedi, Amir

    2012-11-08

    Using a visual-to-auditory sensory-substitution algorithm, congenitally fully blind adults were taught to read and recognize complex images using "soundscapes"--sounds topographically representing images. fMRI was used to examine key questions regarding the visual word form area (VWFA): its selectivity for letters over other visual categories without visual experience, its feature tolerance for reading in a novel sensory modality, and its plasticity for scripts learned in adulthood. The blind activated the VWFA specifically and selectively during the processing of letter soundscapes relative to both textures and visually complex object categories and relative to mental imagery and semantic-content controls. Further, VWFA recruitment for reading soundscapes emerged after 2 hr of training in a blind adult on a novel script. Therefore, the VWFA shows category selectivity regardless of input sensory modality, visual experience, and long-term familiarity or expertise with the script. The VWFA may perform a flexible task-specific rather than sensory-specific computation, possibly linking letter shapes to phonology. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. Seeing Fluid Physics via Visual Expertise Training

    NASA Astrophysics Data System (ADS)

    Hertzberg, Jean; Goodman, Katherine; Curran, Tim

    2016-11-01

    In a course on Flow Visualization, students often expressed that their perception of fluid flows had increased, implying the acquisition of a type of visual expertise akin to that of radiologists or dog show judges. In the first steps towards measuring this expertise, we emulated an experimental design from psychology. The study had two groups of participants: "novices" with no formal fluids education, and "experts" who had passed at least one fluid mechanics course. All participants were trained to place static images of fluid flows into two categories (laminar and turbulent). Half the participants were trained on flow images with a specific format (von Kármán vortex streets), and the other half on a broader group. Novices' results were in line with past perceptual expertise studies, showing that it is easier to transfer learning from a broad category to a new specific format than vice versa. In contrast, experts showed no significant difference between training conditions, suggesting the experts did not undergo the same learning process as the novices. We theorize that expert subjects were able to access their conceptual knowledge about fluids to perform this new, visual task. This finding supports new ways of understanding conceptual learning.

  12. Learning strategy preferences, verbal-visual cognitive styles, and multimedia preferences for continuing engineering education instructional design

    NASA Astrophysics Data System (ADS)

    Baukal, Charles Edward, Jr.

    A literature search revealed very little information on how to teach working engineers, which became the motivation for this research. Effective training is important for many reasons such as preventing accidents, maximizing fuel efficiency, minimizing pollution emissions, and reducing equipment downtime. The conceptual framework for this study included the development of a new instructional design framework called the Multimedia Cone of Abstraction (MCoA), developed by combining Dale's Cone of Experience and Mayer's Cognitive Theory of Multimedia Learning. An anonymous survey of 118 engineers from a single Midwestern manufacturer was conducted to determine their demographics, learning strategy preferences, verbal-visual cognitive styles, and multimedia preferences. The learning strategy preference profile and verbal-visual cognitive styles of the sample were statistically significantly different from those of the general population: the working engineers included more Problem Solvers and were much more visually oriented. To study multimedia preferences, five of the seven levels in the MCoA were used. Eight types of multimedia were compared in four categories (types in parentheses): text (text and narration), static graphics (drawing and photograph), non-interactive dynamic graphics (animation and video), and interactive dynamic graphics (simulated virtual reality and real virtual reality). The first phase of the study examined multimedia preferences within a category. Participants compared multimedia types in pairs on dual screens using relative preference, rating, and ranking. Surprisingly, the more abstract multimedia (text, drawing, animation, and simulated virtual reality) were preferred in every category to the more concrete multimedia (narration, photograph, video, and real virtual reality), despite the fact that most participants had relatively little prior subject knowledge.
However, the more abstract graphics were only slightly preferred to the more concrete graphics. In the second phase, the more preferred multimedia types in each category from the first phase were compared against each other using relative preference, rating, and ranking and overall rating and ranking. Drawing was the most preferred multimedia type overall, although only slightly more than animation and simulated virtual reality. Text was a distant fourth. These results suggest that instructional content for continuing engineering education should include problem solving and should be highly visual.

  13. A Bayesian generative model for learning semantic hierarchies

    PubMed Central

    Mittelman, Roni; Sun, Min; Kuipers, Benjamin; Savarese, Silvio

    2014-01-01

    Building fine-grained visual recognition systems that are capable of recognizing tens of thousands of categories has received much attention in recent years. The well-known semantic hierarchical structure of categories and concepts has been shown to provide a key prior which allows for optimal predictions. The hierarchical organization of various domains and concepts has been subject to extensive research, and led to the development of the WordNet domains hierarchy (Fellbaum, 1998), which was also used to organize the images in the ImageNet (Deng et al., 2009) dataset, in which the category count approaches the human capacity. Still, for the human visual system, the form of the hierarchy must be discovered with minimal use of supervision or innate knowledge. In this work, we propose a new Bayesian generative model for learning such domain hierarchies, based on semantic input. Our model is motivated by the super-subordinate organization of domain labels and concepts that characterizes WordNet, and accounts for several important challenges: maintaining context information when progressing deeper into the hierarchy, learning a coherent semantic concept for each node, and modeling uncertainty in the perception process. PMID:24904452

  14. Normal Aging and the Dissociable Prototype Learning Systems

    PubMed Central

    Glass, Brian D.; Chotibut, Tanya; Pacheco, Jennifer; Schnyer, David M.; Maddox, W. Todd

    2011-01-01

    Dissociable prototype learning systems have been demonstrated behaviorally and with neuroimaging in younger adults as well as with patient populations. In A/not-A (AN) prototype learning, participants are shown members of category A during training, and during test are asked to decide whether novel items are in category A or are not in category A. Research suggests that AN learning is mediated by a perceptual learning system. In A/B (AB) prototype learning, participants are shown members of category A and B during training, and during test are asked to decide whether novel items are in category A or category B. In contrast to AN, research suggests that AB learning is mediated by a declarative memory system. The current study examined the effects of normal aging on AN and AB prototype learning. We observed an age-related deficit in AB learning, but an age-related advantage in AN learning. Computational modeling supports one possible interpretation based on narrower selective attentional focus in older adults in the AB task and broader selective attention in the AN task. Neuropsychological testing in older participants suggested that executive functioning and attentional control were associated with better performance in both tasks. However, nonverbal memory was associated with better AN performance, while visual attention was associated with worse AB performance. The results support an interactive memory systems approach and suggest that age-related declines in one memory system can lead to deficits in some tasks, but to enhanced performance in others. PMID:21875215

  15. Tracking Multiple Statistics: Simultaneous Learning of Object Names and Categories in English and Mandarin Speakers

    ERIC Educational Resources Information Center

    Chen, Chi-hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen

    2017-01-01

    Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories…

  16. Data Mining and Machine Learning Models for Predicting Drug Likeness and Their Disease or Organ Category.

    PubMed

    Yosipof, Abraham; Guedes, Rita C; García-Sosa, Alfonso T

    2018-01-01

    Data mining approaches can uncover underlying patterns in chemical and pharmacological property space decisive for drug discovery and development. Two of the most common approaches are visualization and machine learning methods. Visualization methods use dimensionality reduction techniques in order to reduce multi-dimensional data into 2D or 3D representations with a minimal loss of information. Machine learning attempts to find correlations between specific activities or classifications for a set of compounds and their features by means of recurring mathematical models. Both approaches take advantage of the deep relationships that can exist between compound features, providing classification of compounds based on such features or, in the case of visualization methods, uncovering underlying patterns in the feature space. Drug-likeness has been studied from several viewpoints, but here we provide the first implementation in chemoinformatics of the t-Distributed Stochastic Neighbor Embedding (t-SNE) method for the visualization and representation of chemical space, and we use different machine learning methods, separately and together, to form a new ensemble learning method called AL Boost. The models obtained from AL Boost synergistically combine decision tree, random forests (RF), support vector machine (SVM), artificial neural network (ANN), k nearest neighbors (kNN), and logistic regression models. In this work, we show that together they form a predictive model that not only improves predictive power but also decreases bias. This resulted in a corrected classification rate of over 0.81, as well as higher sensitivity and specificity rates for the models. In addition, good separation and models were also achieved for disease categories such as antineoplastic compounds and nervous system diseases, among others. Such models can be used to guide decisions based on the feature landscape of compounds and their likeness to drugs or other characteristics, such as a molecule's specific or multiple disease category(ies) or organ(s) of action.
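    The combination idea behind an ensemble of heterogeneous classifiers can be sketched with scikit-learn's soft-voting combinator. This is a hypothetical illustration on synthetic data, not the authors' AL Boost implementation: the model mix merely mirrors the families named in the abstract (the ANN member is omitted for brevity), and the combination rule here is a plain probability average.

```python
# Hypothetical sketch of a soft-voting ensemble of heterogeneous classifiers;
# not the AL Boost algorithm itself, and not real chemical descriptor data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for a compound feature matrix with binary drug/non-drug labels.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("knn", KNeighborsClassifier()),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",  # average predicted class probabilities across members
)
ensemble.fit(X_tr, y_tr)
print(round(ensemble.score(X_te, y_te), 2))
```

Averaging probabilities across diverse model families is one standard way an ensemble can reduce the bias of any single member, which is the general effect the abstract reports for AL Boost.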

  17. Art, Science & Visual Literacy: Selected Readings from the Annual Conference of the International Visual Literacy Association (24th, Pittsburgh, Pennsylvania, September 30-October 4, 1992).

    ERIC Educational Resources Information Center

    Braden, Roberts A., Ed.; And Others

    Following an introductory paper on Pittsburgh and the arts, 57 conference papers are presented under the following four major categories: (1) "Imagery, Science and the Arts," including discovery in art and science, technology and art, visual design of newspapers, multimedia science education, science learning and interactive videodisc technology,…

  18. Distinct responses of Purkinje neurons and roles of simple spikes during associative motor learning in larval zebrafish

    PubMed Central

    Harmon, Thomas C; Magaram, Uri; McLean, David L; Raman, Indira M

    2017-01-01

    To study cerebellar activity during learning, we made whole-cell recordings from larval zebrafish Purkinje cells while monitoring fictive swimming during associative conditioning. Fish learned to swim in response to visual stimulation preceding tactile stimulation of the tail. Learning was abolished by cerebellar ablation. All Purkinje cells showed task-related activity. Based on how many complex spikes emerged during learned swimming, they were classified as multiple, single, or zero complex spike (MCS, SCS, ZCS) cells. With learning, MCS and ZCS cells developed increased climbing fiber (MCS) or parallel fiber (ZCS) input during visual stimulation; SCS cells fired complex spikes associated with learned swimming episodes. The categories correlated with location. Optogenetically suppressing simple spikes only during visual stimulation demonstrated that simple spikes are required for acquisition and early stages of expression of learned responses, but not their maintenance, consistent with a transient, instructive role for simple spikes during cerebellar learning in larval zebrafish. DOI: http://dx.doi.org/10.7554/eLife.22537.001 PMID:28541889

  19. Perceptual category learning of photographic and painterly stimuli in rhesus macaques (Macaca mulatta) and humans

    PubMed Central

    Jensen, Greg; Terrace, Herbert

    2017-01-01

    Humans are highly adept at categorizing visual stimuli, but studies of human categorization are typically validated by verbal reports. This makes it difficult to perform comparative studies of categorization using non-human animals. Interpretation of comparative studies is further complicated by the possibility that animal performance may merely reflect reinforcement learning, whereby discrete features act as discriminative cues for categorization. To assess and compare how humans and monkeys classified visual stimuli, we trained 7 rhesus macaques and 41 human volunteers to respond, in a specific order, to four simultaneously presented stimuli, each belonging to a different perceptual category. These exemplars were drawn at random from large banks of images, such that the stimuli presented changed on every trial. Subjects nevertheless identified and ordered these changing stimuli correctly. Three monkeys learned to order naturalistic photographs; four others, close-up sections of paintings with distinctive styles. Humans learned to order both types of stimuli. All subjects classified stimuli at levels substantially greater than that predicted by chance or by feature-driven learning alone, even when stimuli changed on every trial. However, humans more closely resembled monkeys when classifying the more abstract painting stimuli than the photographic stimuli. This points to a common classification strategy in both species, one that humans can rely on in the absence of linguistic labels for categories. PMID:28961270

  20. Syntactic Categorization in French-Learning Infants

    ERIC Educational Resources Information Center

    Shi, Rushen; Melancon, Andreane

    2010-01-01

    Recent work showed that infants recognize and store function words starting from the age of 6-8 months. Using a visual fixation procedure, the present study tested whether French-learning 14-month-olds have the knowledge of syntactic categories of determiners and pronouns, respectively, and whether they can use these function words for…

  1. The Interplay between Perceptual Organization and Categorization in the Representation of Complex Visual Patterns by Young Infants

    ERIC Educational Resources Information Center

    Quinn, Paul C.; Schyns, Philippe G.; Goldstone, Robert L.

    2006-01-01

    The relation between perceptual organization and categorization processes in 3- and 4-month-olds was explored. The question was whether an invariant part abstracted during category learning could interfere with Gestalt organizational processes. A 2003 study by Quinn and Schyns had reported that an initial category familiarization experience in…

  2. Speaker Identity Supports Phonetic Category Learning

    ERIC Educational Resources Information Center

    Mani, Nivedita; Schneider, Signe

    2013-01-01

    Visual cues from the speaker's face, such as the discriminable mouth movements used to produce speech sounds, improve discrimination of these sounds by adults. The speaker's face, however, provides more information than just the mouth movements used to produce speech--it also provides a visual indexical cue of the identity of the speaker. The…

  3. Prototype learning and dissociable categorization systems in Alzheimer's disease.

    PubMed

    Heindel, William C; Festa, Elena K; Ott, Brian R; Landy, Kelly M; Salmon, David P

    2013-08-01

    Recent neuroimaging studies suggest that prototype learning may be mediated by at least two dissociable memory systems depending on the mode of acquisition, with A/Not-A prototype learning dependent upon a perceptual representation system located within posterior visual cortex and A/B prototype learning dependent upon a declarative memory system associated with medial temporal and frontal regions. The degree to which patients with Alzheimer's disease (AD) can acquire new categorical information may therefore critically depend upon the mode of acquisition. The present study examined A/Not-A and A/B prototype learning in AD patients using procedures that allowed direct comparison of learning across tasks. Despite impaired explicit recall of category features in all tasks, patients showed differential patterns of category acquisition across tasks. First, AD patients demonstrated impaired prototype induction along with intact exemplar classification under incidental A/Not-A conditions, suggesting that the loss of functional connectivity within visual cortical areas disrupted the integration processes supporting prototype induction within the perceptual representation system. Second, AD patients demonstrated intact prototype induction but impaired exemplar classification during A/B learning under observational conditions, suggesting that this form of prototype learning is dependent on a declarative memory system that is disrupted in AD. Third, the surprisingly intact classification of both prototypes and exemplars during A/B learning under trial-and-error feedback conditions suggests that AD patients shifted control from their deficient declarative memory system to a feedback-dependent procedural memory system when training conditions allowed. 
Taken together, these findings serve to not only increase our understanding of category learning in AD, but to also provide new insights into the ways in which different memory systems interact to support the acquisition of categorical knowledge. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Prototype Learning and Dissociable Categorization Systems in Alzheimer’s Disease

    PubMed Central

    Heindel, William C.; Festa, Elena K.; Ott, Brian R.; Landy, Kelly M.; Salmon, David P.

    2015-01-01

    Recent neuroimaging studies suggest that prototype learning may be mediated by at least two dissociable memory systems depending on the mode of acquisition, with A/Not-A prototype learning dependent upon a perceptual representation system located within posterior visual cortex and A/B prototype learning dependent upon a declarative memory system associated with medial temporal and frontal regions. The degree to which patients with Alzheimer’s disease (AD) can acquire new categorical information may therefore critically depend upon the mode of acquisition. The present study examined A/Not-A and A/B prototype learning in AD patients using procedures that allowed direct comparison of learning across tasks. Despite impaired explicit recall of category features in all tasks, patients showed differential patterns of category acquisition across tasks. First, AD patients demonstrated impaired prototype induction along with intact exemplar classification under incidental A/Not-A conditions, suggesting that the loss of functional connectivity within visual cortical areas disrupted the integration processes supporting prototype induction within the perceptual representation system. Second, AD patients demonstrated intact prototype induction but impaired exemplar classification during A/B learning under observational conditions, suggesting that this form of prototype learning is dependent on a declarative memory system that is disrupted in AD. Third, the surprisingly intact classification of both prototypes and exemplars during A/B learning under trial-and-error feedback conditions suggests that AD patients shifted control from their deficient declarative memory system to a feedback-dependent procedural memory system when training conditions allowed. 
Taken together, these findings serve to not only increase our understanding of category learning in AD, but to also provide new insights into the ways in which different memory systems interact to support the acquisition of categorical knowledge. PMID:23751172

  5. Detecting Visually Observable Disease Symptoms from Faces.

    PubMed

    Wang, Kuan; Luo, Jiebo

    2016-12-01

    Recent years have witnessed an increasing interest in the application of machine learning to clinical informatics and healthcare systems. A significant amount of research has been done on healthcare systems based on supervised learning. In this study, we present a generalized solution to detect visually observable symptoms on faces using semi-supervised anomaly detection combined with machine vision algorithms. We rely on disease-related statistical facts to detect abnormalities and classify them into multiple categories, narrowing down the possible medical reasons for what is detected. Our method contrasts with most existing approaches, which are limited by the availability of labeled training data required for supervised learning, and therefore offers the major advantage of flagging any unusual and visually observable symptoms.
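    The core semi-supervised idea, fit a detector only on "normal" examples and flag anything that deviates, can be sketched as follows. This is a minimal illustration with synthetic features and scikit-learn's isolation forest; the paper's actual feature extraction from face images and its detection algorithm are not shown here.

```python
# Minimal sketch of semi-supervised anomaly detection: train only on
# "healthy" feature vectors, then flag deviating inputs. Synthetic data;
# the facial feature extraction step from the study is omitted.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(300, 8))    # stand-in "normal" features
detector = IsolationForest(random_state=1).fit(healthy)

new_faces = np.vstack([
    rng.normal(0.0, 1.0, size=(5, 8)),   # features resembling training data
    rng.normal(6.0, 1.0, size=(5, 8)),   # features far outside the norm
])
print(detector.predict(new_faces))  # 1 = normal, -1 = flagged as anomalous
```

Because the detector never needs labeled examples of symptoms, it can flag abnormalities it was never explicitly trained to recognize, which is the labeled-data advantage the abstract emphasizes.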

  6. Typicality effects in artificial categories: is there a hemisphere difference?

    PubMed

    Richards, L G; Chiarello, C

    1990-07-01

    In category classification tasks, typicality effects are usually found: accuracy and reaction time depend upon distance from a prototype. In this study, subjects learned either verbal or nonverbal dot pattern categories, followed by a lateralized classification task. Comparable typicality effects were found in both reaction time and accuracy across visual fields for both verbal and nonverbal categories. Both hemispheres appeared to use a similarity-to-prototype matching strategy in classification. This indicates that merely having a verbal label does not differentiate classification in the two hemispheres.
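    The similarity-to-prototype matching strategy described above can be sketched directly: a pattern is assigned to whichever category prototype it is closest to, and typicality is simply that distance. This is an illustrative toy with random 9-dot patterns, not the study's actual stimuli or procedure.

```python
# Toy sketch of similarity-to-prototype classification for dot patterns
# (hypothetical stimuli; not the study's materials).
import numpy as np

rng = np.random.default_rng(0)
# One 9-dot prototype pattern per category, in 2D coordinates.
prototypes = {"A": rng.normal(0.0, 1.0, size=(9, 2)),
              "B": rng.normal(3.0, 1.0, size=(9, 2))}

def classify(pattern):
    """Assign a pattern to the category with the nearest prototype."""
    dists = {c: np.linalg.norm(pattern - p) for c, p in prototypes.items()}
    return min(dists, key=dists.get)

# A low-distortion exemplar of A sits close to its prototype, so it is
# classified accurately; larger distortions would raise error rates and RTs.
near_a = prototypes["A"] + rng.normal(0.0, 0.2, size=(9, 2))
print(classify(near_a))
```

On this account, typicality effects fall out naturally: exemplars nearer the prototype yield larger distance margins between categories, and hence faster, more accurate classification in either hemisphere.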

  7. Occipital cortical thickness in very low birth weight born adolescents predicts altered neural specialization of visual semantic category related neural networks.

    PubMed

    Klaver, Peter; Latal, Beatrice; Martin, Ernst

    2015-01-01

Very low birth weight (VLBW) preterm infants are at high risk of developing visual perceptual and learning deficits as well as widespread functional and structural brain abnormalities during infancy and childhood. Whether and how prematurity alters neural specialization within visual neural networks is still unknown. We used functional and structural brain imaging to examine the visual semantic system of VLBW born (<1250 g, gestational age 25-32 weeks) adolescents (13-15 years, n = 11, 3 males) and matched term born control participants (13-15 years, n = 11, 3 males). Neurocognitive assessment revealed no group differences except for lower scores on an adaptive visuomotor integration test. All adolescents were scanned while viewing pictures of animals and tools and scrambled versions of these pictures. Both groups demonstrated animal and tool category related neural networks. Term born adolescents showed tool category related neural activity, i.e. tool pictures elicited more activity than animal pictures, in temporal and parietal brain areas. Animal category related activity was found in the occipital, temporal and frontal cortex. VLBW born adolescents showed reduced tool category related activity in the dorsal visual stream compared with controls, specifically the left anterior intraparietal sulcus, and enhanced animal category related activity in the left middle occipital gyrus and right lingual gyrus. Lower birth weight of VLBW adolescents correlated with larger thickness of the pericalcarine gyrus in the occipital cortex and smaller surface area of the superior temporal gyrus in the lateral temporal cortex. Moreover, larger thickness of the pericalcarine gyrus and smaller surface area of the superior temporal gyrus correlated with reduced tool category related activity in the parietal cortex. Together, our data suggest that very low birth weight predicts alterations of higher order visual semantic networks, particularly in the dorsal stream.
The differences in neural specialization may be associated with aberrant cortical development of areas in the visual system that develop early in childhood. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Hierarchical Learning of Tree Classifiers for Large-Scale Plant Species Identification.

    PubMed

    Fan, Jianping; Zhou, Ning; Peng, Jinye; Gao, Ling

    2015-11-01

In this paper, a hierarchical multi-task structural learning algorithm is developed to support large-scale plant species identification, where a visual tree is constructed for organizing large numbers of plant species in a coarse-to-fine fashion and for determining the inter-related learning tasks automatically. A given parent node on the visual tree contains either a set of sibling coarse-grained categories of plant species or a set of sibling fine-grained plant species, and a multi-task structural learning algorithm is developed to train their inter-related classifiers jointly, enhancing their discrimination power. The inter-level relationship constraint, namely that a plant image must first be assigned correctly to a parent node (high-level non-leaf node) before it can be assigned to the most relevant child node (low-level non-leaf node or leaf node) on the visual tree, is formally defined and leveraged to learn more discriminative tree classifiers over the visual tree. Our experimental results demonstrate the effectiveness of our hierarchical multi-task structural learning algorithm in training more discriminative tree classifiers for large-scale plant species identification.
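The coarse-to-fine routing over a visual tree can be sketched as follows (hypothetical taxonomy and per-node scores, not the paper's learned classifiers): an image competes only among the children of the node it has already been assigned to, which is the inter-level constraint at prediction time:

```python
# Sketch of top-down classification over a visual tree (toy taxonomy).

TREE = {
    "root":      ["broadleaf", "conifer"],
    "broadleaf": ["oak", "maple"],
    "conifer":   ["pine", "spruce"],
}

def classify(scores, node="root"):
    """Descend the tree, at each level picking the child with the best score."""
    children = TREE.get(node)
    if not children:  # leaf node: a concrete species
        return node
    best = max(children, key=lambda c: scores[c])
    return classify(scores, best)

# Hypothetical per-node classifier scores for one image.
scores = {"broadleaf": 0.9, "conifer": 0.1,
          "oak": 0.3, "maple": 0.7, "pine": 0.8, "spruce": 0.2}
print(classify(scores))  # → maple
```

Note that a misassignment at the parent level (here, "broadleaf" vs. "conifer") cannot be recovered lower down, which is why the paper trains the inter-related classifiers jointly.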

  9. Learning Multisensory Representations

    DTIC Science & Technology

    2016-05-23

public release. Erdogan, G., Yildirim, I., & Jacobs, R. A. (2014). Transfer of object shape knowledge across visual and haptic modalities. Proceedings... 2014). The adaptive nature of visual working memory. Current Directions in Psychological Science, 23, 164-170. Erdogan, G., Yildirim, I... sequence category knowledge: A probabilistic language of thought approach. Psychonomic Bulletin and Review, 22, 673-686. Erdogan, G., Chen, Q., Garcea, F

  10. Learning to attend: modeling the shaping of selectivity in infero-temporal cortex in a categorization task.

    PubMed

    Szabo, Miruna; Deco, Gustavo; Fusi, Stefano; Del Giudice, Paolo; Mattia, Maurizio; Stetter, Martin

    2006-05-01

Recent experiments on behaving monkeys have shown that learning a visual categorization task makes the neurons in infero-temporal cortex (ITC) more selective to the task-relevant features of the stimuli (Sigala and Logothetis, Nature 415:318-320, 2002). We hypothesize that such selectivity modulation emerges from the interaction between ITC and another cortical area, presumably the prefrontal cortex (PFC), where the previously learned stimulus categories are encoded. We propose a biologically inspired model of excitatory and inhibitory spiking neurons with plastic synapses, modified according to a reward-based Hebbian learning rule, to explain the experimental results and test the validity of our hypothesis. We assume that the ITC neurons, receiving feature-selective inputs, form stronger connections with the category-specific neurons to which they are consistently associated in rewarded trials. After learning, the top-down influence of PFC neurons enhances the selectivity of the ITC neurons encoding the behaviorally relevant features of the stimuli, as observed in the experiments. We conclude that the perceptual representation in visual areas like ITC can be strongly affected by their interaction with other areas devoted to higher cognitive functions.

  11. Compensatory processing during rule-based category learning in older adults.

    PubMed

    Bharani, Krishna L; Paller, Ken A; Reber, Paul J; Weintraub, Sandra; Yanar, Jorge; Morrison, Robert G

    2016-01-01

    Healthy older adults typically perform worse than younger adults at rule-based category learning, but better than patients with Alzheimer's or Parkinson's disease. To further investigate aging's effect on rule-based category learning, we monitored event-related potentials (ERPs) while younger and neuropsychologically typical older adults performed a visual category-learning task with a rule-based category structure and trial-by-trial feedback. Using these procedures, we previously identified ERPs sensitive to categorization strategy and accuracy in young participants. In addition, previous studies have demonstrated the importance of neural processing in the prefrontal cortex and the medial temporal lobe for this task. In this study, older adults showed lower accuracy and longer response times than younger adults, but there were two distinct subgroups of older adults. One subgroup showed near-chance performance throughout the procedure, never categorizing accurately. The other subgroup reached asymptotic accuracy that was equivalent to that in younger adults, although they categorized more slowly. These two subgroups were further distinguished via ERPs. Consistent with the compensation theory of cognitive aging, older adults who successfully learned showed larger frontal ERPs when compared with younger adults. Recruitment of prefrontal resources may have improved performance while slowing response times. Additionally, correlations of feedback-locked P300 amplitudes with category-learning accuracy differentiated successful younger and older adults. Overall, the results suggest that the ability to adapt one's behavior in response to feedback during learning varies across older individuals, and that the failure of some to adapt their behavior may reflect inadequate engagement of prefrontal cortex.

  13. Semantic and visual memory codes in learning disabled readers.

    PubMed

    Swanson, H L

    1984-02-01

    Two experiments investigated whether learning disabled readers' impaired recall is due to multiple coding deficiencies. In Experiment 1, learning disabled and skilled readers viewed nonsense pictures without names or with either relevant or irrelevant names with respect to the distinctive characteristics of the picture. Both types of names improved recall of nondisabled readers, while learning disabled readers exhibited better recall for unnamed pictures. No significant difference in recall was found between name training (relevant, irrelevant) conditions within reading groups. In Experiment 2, both reading groups participated in recall training for complex visual forms labeled with unrelated words, hierarchically related words, or without labels. A subsequent reproduction transfer task showed a facilitation in performance in skilled readers due to labeling, with learning disabled readers exhibiting better reproduction for unnamed pictures. Measures of output organization (clustering) indicated that recall is related to the development of superordinate categories. The results suggest that learning disabled children's reading difficulties are due to an inability to activate a semantic representation that interconnects visual and verbal codes.

  14. The cognitive capabilities of farm animals: categorisation learning in dwarf goats (Capra hircus).

    PubMed

    Meyer, Susann; Nürnberg, Gerd; Puppe, Birger; Langbein, Jan

    2012-07-01

    The ability to establish categories enables organisms to classify stimuli, objects and events by assessing perceptual, associative or rational similarities and provides the basis for higher cognitive processing. The cognitive capabilities of farm animals are receiving increasing attention in applied ethology, a development driven primarily by scientifically based efforts to improve animal welfare. The present study investigated the learning of perceptual categories in Nigerian dwarf goats (Capra hircus) by using an automated learning device installed in the animals' pen. Thirteen group-housed goats were trained in a closed-economy approach to discriminate artificial two-dimensional symbols presented in a four-choice design. The symbols belonged to two categories: category I, black symbols with an open centre (rewarded) and category II, the same symbols but filled black (unrewarded). One symbol from category I and three different symbols from category II were used to define a discrimination problem. After the training of eight problems, the animals were presented with a transfer series containing the training problems interspersed with completely new problems made from new symbols belonging to the same categories. The results clearly demonstrate that dwarf goats are able to form categories based on similarities in the visual appearance of artificial symbols and to generalise across new symbols. However, the goats had difficulties in discriminating specific symbols. It is probable that perceptual problems caused these difficulties. Nevertheless, the present study suggests that goats housed under farming conditions have well-developed cognitive abilities, including learning of open-ended categories. This result could prove beneficial by facilitating animals' adaptation to housing environments that favour their cognitive capabilities.

  15. Training Humans to Categorize Monkey Calls: Auditory Feature- and Category-Selective Neural Tuning Changes.

    PubMed

    Jiang, Xiong; Chevillet, Mark A; Rauschecker, Josef P; Riesenhuber, Maximilian

    2018-04-18

    Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Teaching children with autism spectrum disorder to tact olfactory stimuli.

    PubMed

    Dass, Tina K; Kisamore, April N; Vladescu, Jason C; Reeve, Kenneth F; Reeve, Sharon A; Taylor-Santa, Catherine

    2018-05-28

    Research on tact acquisition by children with autism spectrum disorder (ASD) has often focused on teaching participants to tact visual stimuli. It is important to evaluate procedures for teaching tacts of nonvisual stimuli (e.g., olfactory, tactile). The purpose of the current study was to extend the literature on secondary target instruction and tact training by evaluating the effects of a discrete-trial instruction procedure involving (a) echoic prompts, a constant prompt delay, and error correction for primary targets; (b) inclusion of secondary target stimuli in the consequent portion of learning trials; and (c) multiple exemplar training on the acquisition of item tacts of olfactory stimuli, emergence of category tacts of olfactory stimuli, generalization of category tacts, and emergence of category matching, with three children diagnosed with ASD. Results showed that all participants learned the item and category tacts following teaching, participants demonstrated generalization across category tacts, and category matching emerged for all participants. © 2018 Society for the Experimental Analysis of Behavior.

  17. You see what you have learned. Evidence for an interrelation of associative learning and visual selective attention.

    PubMed

    Feldmann-Wüstefeld, Tobias; Uengoer, Metin; Schubö, Anna

    2015-11-01

Besides visual salience and observers' current intention, prior learning experience may influence the deployment of visual attention. Associative learning models postulate that observers pay more attention to stimuli previously experienced as reliable predictors of specific outcomes. To investigate the impact of learning experience on the deployment of attention, we combined an associative learning task with a visual search task and measured event-related potentials of the EEG as neural markers of attention deployment. In the learning task, participants categorized stimuli varying in color/shape, with only one dimension being predictive of category membership. In the search task, participants searched for a shape target while disregarding irrelevant color distractors. Behavioral results showed that color distractors impaired performance to a greater degree when color rather than shape was predictive in the learning task. Neurophysiological results showed that the amplified distraction was due to differential attention deployment (N2pc). Experiment 2 showed that when color was predictive for learning, color distractors captured more attention in the search task (ND component) and more suppression of the color distractors was required (PD component). The present results thus demonstrate that priority in visual attention is biased toward predictive stimuli, which allows learning experience to shape selection. We also show that learning experience can overrule strong top-down control (blocked tasks, Experiment 3) and that learning experience has a longer-term effect on attention deployment (tasks on two successive days, Experiment 4). © 2015 Society for Psychophysiological Research.

  18. Mild Perceptual Categorization Deficits Follow Bilateral Removal of Anterior Inferior Temporal Cortex in Rhesus Monkeys.

    PubMed

    Matsumoto, Narihisa; Eldridge, Mark A G; Saunders, Richard C; Reoli, Rachel; Richmond, Barry J

    2016-01-06

    In primates, visual recognition of complex objects depends on the inferior temporal lobe. By extension, categorizing visual stimuli based on similarity ought to depend on the integrity of the same area. We tested three monkeys before and after bilateral anterior inferior temporal cortex (area TE) removal. Although mildly impaired after the removals, they retained the ability to assign stimuli to previously learned categories, e.g., cats versus dogs, and human versus monkey faces, even with trial-unique exemplars. After the TE removals, they learned in one session to classify members from a new pair of categories, cars versus trucks, as quickly as they had learned the cats versus dogs before the removals. As with the dogs and cats, they generalized across trial-unique exemplars of cars and trucks. However, as seen in earlier studies, these monkeys with TE removals had difficulty learning to discriminate between two simple black and white stimuli. These results raise the possibility that TE is needed for memory of simple conjunctions of basic features, but that it plays only a small role in generalizing overall configural similarity across a large set of stimuli, such as would be needed for perceptual categorical assignment. The process of seeing and recognizing objects is attributed to a set of sequentially connected brain regions stretching forward from the primary visual cortex through the temporal lobe to the anterior inferior temporal cortex, a region designated area TE. Area TE is considered the final stage for recognizing complex visual objects, e.g., faces. It has been assumed, but not tested directly, that this area would be critical for visual generalization, i.e., the ability to place objects such as cats and dogs into their correct categories. 
Here, we demonstrate that monkeys rapidly and seemingly effortlessly categorize large sets of complex images (cats vs dogs, cars vs trucks), surprisingly, even after removal of area TE, leaving a puzzle about how this generalization is done. Copyright © 2016 the authors 0270-6474/16/360043-11$15.00/0.

  19. Understanding pictorial information in biology: students' cognitive activities and visual reading strategies

    NASA Astrophysics Data System (ADS)

    Brandstetter, Miriam; Sandmann, Angela; Florian, Christine

    2017-06-01

In the classroom, scientific content is increasingly communicated through visual forms of representation. Students' learning outcomes rely on their ability to read and understand pictorial information. Understanding pictorial information in biology requires cognitive effort and can be challenging to students. Yet evidence-based knowledge about students' visual reading strategies during the process of understanding pictorial information is still lacking. Therefore, 42 students aged 14-15 were asked to think aloud while trying to understand visual representations of the blood circulatory system and the patellar reflex. A category system was developed differentiating 16 categories of cognitive activities. A Principal Component Analysis revealed two underlying patterns of activities that can be interpreted as visual reading strategies: 1. Inferences predominated by use of a problem-solving schema; 2. Inferences predominated by recall of prior content knowledge. Each pattern consists of a specific set of cognitive activities that reflect selection, organisation and integration of pictorial information as well as different levels of expertise. The results give detailed insights into the cognitive activities of students who were required to understand the pictorial information of complex organ systems. They provide an evidence-based foundation for deriving instructional aids that can promote students' pictorial-information-based learning at different levels of expertise.

  20. STDP in lateral connections creates category-based perceptual cycles for invariance learning with multiple stimuli.

    PubMed

    Evans, Benjamin D; Stringer, Simon M

    2015-04-01

    Learning to recognise objects and faces is an important and challenging problem tackled by the primate ventral visual system. One major difficulty lies in recognising an object despite profound differences in the retinal images it projects, due to changes in view, scale, position and other identity-preserving transformations. Several models of the ventral visual system have been successful in coping with these issues, but have typically been privileged by exposure to only one object at a time. In natural scenes, however, the challenges of object recognition are typically further compounded by the presence of several objects which should be perceived as distinct entities. In the present work, we explore one possible mechanism by which the visual system may overcome these two difficulties simultaneously, through segmenting unseen (artificial) stimuli using information about their category encoded in plastic lateral connections. We demonstrate that these experience-guided lateral interactions robustly organise input representations into perceptual cycles, allowing feed-forward connections trained with spike-timing-dependent plasticity to form independent, translation-invariant output representations. We present these simulations as a functional explanation for the role of plasticity in the lateral connectivity of visual cortex.
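The spike-timing-dependent plasticity rule assumed in models of this kind (parameter values here are illustrative, not taken from the paper) can be written in a few lines: a synapse is potentiated when the presynaptic spike precedes the postsynaptic spike and depressed when it follows it, with exponentially decaying magnitude:

```python
# Sketch of the canonical pairwise STDP window (illustrative parameters).

import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:    # pre before post: potentiation (LTP)
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:  # post before pre: depression (LTD)
        return -a_minus * math.exp(dt / tau)
    return 0.0

print(stdp_dw(5.0) > 0, stdp_dw(-5.0) < 0)  # → True True
```

Applied to lateral connections, such a rule strengthens links between co-active neurons representing the same category, which is the mechanism the authors exploit to organise perceptual cycles.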

  1. A Hierarchical and Contextual Model for Learning and Recognizing Highly Variant Visual Categories

    DTIC Science & Technology

    2010-01-01

neighboring pattern primitives, to create our model. We also present a minimax entropy framework for automatically learning which contextual constraints are... [Remaining text is table-of-contents residue: Grammars; Markov Random Fields; Creating a Contextual...; Compositional Boosting; Top-down hallucinations of missing objects.]

  2. Dissociable changes in functional network topology underlie early category learning and development of automaticity

    PubMed Central

    Soto, Fabian A.; Bassett, Danielle S.; Ashby, F. Gregory

    2016-01-01

Recent work has shown that multimodal association areas, including frontal, temporal and parietal cortex, are focal points of functional network reconfiguration during human learning and performance of cognitive tasks. On the other hand, neurocomputational theories of category learning suggest that the basal ganglia and related subcortical structures are focal points of functional network reconfiguration during early learning of some categorization tasks, but become less so with the development of automatic categorization performance. Using a combination of network science and multilevel regression, we explore how changes in the connectivity of small brain regions can predict behavioral changes during training in a visual categorization task. We find that initial category learning, as indexed by changes in accuracy, is predicted by increasingly efficient integrative processing in subcortical areas, with higher functional specialization, more efficient integration across modules, but a lower cost in terms of redundancy of information processing. The development of automaticity, as indexed by changes in the speed of correct responses, was predicted by lower clustering (particularly in subcortical areas), higher strength (highest in cortical areas) and higher betweenness centrality. By combining neurocomputational theories and methods from network science, these results synthesize the dissociative roles of multimodal association areas and subcortical structures in the development of automaticity during category learning. PMID:27453156
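Two of the node-level network measures used in such analyses, strength and clustering, can be computed with their standard definitions on a toy unweighted graph (the study itself used weighted functional connectivity networks):

```python
# Sketch of node strength and clustering coefficient on a toy graph.

def strength(adj, node):
    """Sum of edge weights incident to `node` (degree for a 0/1 matrix)."""
    return sum(adj[node])

def clustering(adj, node):
    """Fraction of a node's neighbor pairs that are themselves connected."""
    nbrs = [j for j, w in enumerate(adj[node]) if w and j != node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in nbrs for j in nbrs if i < j and adj[i][j])
    return 2.0 * links / (k * (k - 1))

# Toy 4-node graph: triangle 0-1-2 plus a pendant node 3 attached to 0.
adj = [[0, 1, 1, 1],
       [1, 0, 1, 0],
       [1, 1, 0, 0],
       [1, 0, 0, 0]]
print(strength(adj, 0), clustering(adj, 0), clustering(adj, 1))
# → 3 0.3333333333333333 1.0
```

In the study, decreases in clustering alongside increases in strength and betweenness index a shift toward more integrative, less locally redundant processing.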

  3. An interplay of fusiform gyrus and hippocampus enables prototype- and exemplar-based category learning.

    PubMed

    Lech, Robert K; Güntürkün, Onur; Suchan, Boris

    2016-09-15

    The aim of the present study was to examine the contributions of different brain structures to prototype- and exemplar-based category learning using functional magnetic resonance imaging (fMRI). Twenty-eight subjects performed a categorization task in which they had to assign prototypes and exceptions to two different families. This test procedure usually produces different learning curves for prototype and exception stimuli. Our behavioral data replicated these previous findings by showing an initially superior performance for prototypes and typical stimuli and a switch from a prototype-based to an exemplar-based categorization for exceptions in the later learning phases. Since performance varied, we divided participants into learners and non-learners. Analysis of the functional imaging data revealed that the interaction of group (learners vs. non-learners) and block (Block 5 vs. Block 1) yielded an activation of the left fusiform gyrus for the processing of prototypes, and an activation of the right hippocampus for exceptions after learning the categories. Thus, successful prototype- and exemplar-based category learning is associated with activations of complementary neural substrates that constitute object-based processes of the ventral visual stream and their interaction with unique-cue representations, possibly based on sparse coding within the hippocampus. Copyright © 2016 Elsevier B.V. All rights reserved.
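The two model classes contrasted here can be sketched with hypothetical stimuli: a prototype model compares a probe to each category's average member, while an exemplar model compares it to every stored training item, letting it capture exceptions that sit far from their own prototype:

```python
# Sketch contrasting prototype- and exemplar-based categorization (toy data).

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def prototype_classify(x, training):
    """Assign to the category whose mean (prototype) is nearest."""
    protos = {}
    for label, items in training.items():
        n = len(items)
        protos[label] = [sum(v[d] for v in items) / n
                         for d in range(len(items[0]))]
    return min(protos, key=lambda lab: dist(x, protos[lab]))

def exemplar_classify(x, training):
    """Assign to the category containing the single nearest stored item."""
    return min(training,
               key=lambda lab: min(dist(x, item) for item in training[lab]))

# Family B contains an exception that lies near family A's region.
training = {"A": [[0, 0], [1, 0], [0, 1]],
            "B": [[5, 5], [6, 5], [0.5, 0.5]]}  # last item: the exception
probe = [0.6, 0.6]
print(prototype_classify(probe, training), exemplar_classify(probe, training))
# → A B
```

The divergence on the probe illustrates why exceptions are eventually handled by exemplar-like memory (here linked to the hippocampus) while typical items are handled by prototype-like abstraction (here linked to the fusiform gyrus).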

  4. Mechanisms of object recognition: what we have learned from pigeons

    PubMed Central

    Soto, Fabian A.; Wasserman, Edward A.

    2014-01-01

    Behavioral studies of object recognition in pigeons have been conducted for 50 years, yielding a large body of data. Recent work has been directed toward synthesizing this evidence and understanding the visual, associative, and cognitive mechanisms that are involved. The outcome is that pigeons are likely to be the non-primate species for which the computational mechanisms of object recognition are best understood. Here, we review this research and suggest that a core set of mechanisms for object recognition might be present in all vertebrates, including pigeons and people, making pigeons an excellent candidate model to study the neural mechanisms of object recognition. Behavioral and computational evidence suggests that error-driven learning participates in object category learning by pigeons and people, and recent neuroscientific research suggests that the basal ganglia, which are homologous in these species, may implement error-driven learning of stimulus-response associations. Furthermore, learning of abstract category representations can be observed in pigeons and other vertebrates. Finally, there is evidence that feedforward visual processing, a central mechanism in models of object recognition in the primate ventral stream, plays a role in object recognition by pigeons. We also highlight differences between pigeons and people in object recognition abilities, and propose candidate adaptive specializations which may explain them, such as holistic face processing and rule-based category learning in primates. From a modern comparative perspective, such specializations are to be expected regardless of the model species under study. The fact that we have a good idea of which aspects of object recognition differ in people and pigeons should be seen as an advantage over other animal models. From this perspective, we suggest that there is much to learn about human object recognition from studying the “simple” brains of pigeons. PMID:25352784

  5. Learning new color names produces rapid increase in gray matter in the intact adult human cortex

    PubMed Central

    Kwok, Veronica; Niu, Zhendong; Kay, Paul; Zhou, Ke; Mo, Lei; Jin, Zhen; So, Kwok-Fai; Tan, Li Hai

    2011-01-01

    The human brain has been shown to exhibit changes in the volume and density of gray matter as a result of training over periods of several weeks or longer. We show that these changes can be induced much faster by using a training method that is claimed to simulate the rapid learning of word meanings by children. Using whole-brain magnetic resonance imaging (MRI) we show that learning newly defined and named subcategories of the universal categories green and blue in a period of 2 h increases the volume of gray matter in V2/3 of the left visual cortex, a region known to mediate color vision. This pattern of findings demonstrates that the anatomical structure of the adult human brain can change very quickly, specifically during the acquisition of new, named categories. Also, prior behavioral and neuroimaging research has shown that differences between languages in the boundaries of named color categories influence the categorical perception of color, as assessed by judgments of relative similarity, by response time in alternative forced-choice tasks, and by visual search. Moreover, further behavioral studies (visual search) and brain imaging studies have suggested strongly that the categorical effect of language on color processing is left-lateralized, i.e., mediated by activity in the left cerebral hemisphere in adults (hence “lateralized Whorfian” effects). The present results appear to provide a structural basis in the brain for the behavioral and neurophysiologically observed indices of these Whorfian effects on color processing. PMID:21464316

  6. Data Mining and Machine Learning Models for Predicting Drug Likeness and their Disease or Organ Category

    NASA Astrophysics Data System (ADS)

    Yosipof, Abraham; Guedes, Rita C.; García-Sosa, Alfonso T.

    2018-05-01

Data mining approaches can uncover underlying patterns in chemical and pharmacological property space decisive for drug discovery and development. Two of the most common approaches are visualization and machine learning methods. Visualization methods use dimensionality reduction techniques in order to reduce multi-dimensional data into 2D or 3D representations with a minimal loss of information. Machine learning attempts to find correlations between specific activities or classifications for a set of compounds and their features by means of recurring mathematical models. Both approaches take advantage of the deep relationships that can exist between the features of compounds, and provide useful classification of compounds based on such features. Drug-likeness has been studied from several viewpoints, but here we provide the first implementation in chemoinformatics of the t-Distributed Stochastic Neighbor Embedding (t-SNE) method for the visualization and representation of chemical space, and the use of different machine learning methods separately and together to form a new ensemble learning method called AL Boost. The models obtained from AL Boost synergistically combine decision tree, random forest (RF), support vector machine (SVM), artificial neural network (ANN), k nearest neighbors (kNN), and logistic regression models. In this work, we show that together they form a predictive model that not only improves the predictive force but also decreases bias. This resulted in a corrected classification rate of over 0.81, as well as higher sensitivity and specificity rates for the models. In addition, good separation and good models were also achieved for disease categories such as antineoplastic compounds and nervous system diseases, among others.
Such models can be used to guide decision on the feature landscape of compounds and their likeness to either drugs or other characteristics, such as specific or multiple disease-category(ies) or organ(s) of action of a molecule.
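    The abstract describes AL Boost only at a high level, so as an illustration (not the authors' implementation) the combination step of such an ensemble can be sketched as a plain majority-vote combiner over the base learners' per-compound predictions; all model outputs and labels below are invented:

```python
from collections import Counter

def majority_vote(predictions_per_model):
    """Combine label predictions from several base models
    (e.g. decision tree, RF, SVM, ANN, kNN, logistic regression)
    by majority vote. `predictions_per_model` is a list of
    equal-length label sequences, one per model."""
    n_samples = len(predictions_per_model[0])
    combined = []
    for i in range(n_samples):
        votes = Counter(model[i] for model in predictions_per_model)
        combined.append(votes.most_common(1)[0][0])
    return combined

# Hypothetical predictions from three base models for four compounds
# (1 = drug-like, 0 = not drug-like):
preds = [
    [1, 0, 1, 1],  # decision tree
    [1, 0, 0, 1],  # random forest
    [1, 1, 1, 1],  # SVM
]
print(majority_vote(preds))  # -> [1, 0, 1, 1]
```

A soft-voting variant (averaging predicted class probabilities instead of counting labels) is what usually produces the variance and bias reductions the abstract mentions, but the hard vote above shows the basic idea.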

  7. Adapting Art Instruction for Students with Disabilities.

    ERIC Educational Resources Information Center

    Platt, Jennifer M.; Janeczko, Donna

    1991-01-01

    This article presents adaptations for teaching art to students with disabilities. Various techniques, methods, and materials are described by category of disability, including students with mental disabilities, visual impairments, hearing impairments, learning disabilities, emotional disabilities, and physical disabilities. (JDD)

  8. Multi-level discriminative dictionary learning with application to large scale image classification.

    PubMed

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis, and it is a powerful tool in many visual applications. Recent work has shown that incorporating properties of the task (such as discrimination, for classification tasks) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture information at different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. Experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.
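    The multi-level dictionaries and tree loss are beyond a short sketch, but the sparse-coding step each node performs can be illustrated with a plain matching-pursuit encoder. This is an illustrative stand-in, not the paper's solver, and `D` is assumed to have unit-norm columns (atoms):

```python
import numpy as np

def matching_pursuit(x, D, n_nonzero):
    """Greedy sparse coding: represent signal x as a sparse
    combination of dictionary atoms (columns of D)."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        corr = D.T @ residual            # correlation with each atom
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]             # atoms assumed unit-norm
        residual -= corr[k] * D[:, k]
    return coeffs

# Toy example: with an orthonormal dictionary the code is recovered exactly.
D = np.eye(4)
x = np.array([0.0, 2.0, 0.0, -1.0])
print(matching_pursuit(x, D, 2))  # -> [ 0.  2.  0. -1.]
```

In a supervised variant, the sparse codes produced this way are fed to the node's classifier, and the dictionary is adjusted so the codes discriminate between the node's child categories.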

  9. Feature saliency and feedback information interactively impact visual category learning

    PubMed Central

    Hammer, Rubi; Sloutsky, Vladimir; Grill-Spector, Kalanit

    2015-01-01

    Visual category learning (VCL) involves detecting which features are most relevant for categorization. VCL relies on attentional learning, which enables effectively redirecting attention to the object features most relevant for categorization while ‘filtering out’ irrelevant features. When the features relevant for categorization are not salient, VCL also relies on perceptual learning, which enables becoming more sensitive to subtle yet important differences between objects. Little is known about how attentional learning and perceptual learning interact when VCL relies on both processes at the same time. Here we tested this interaction. Participants performed VCL tasks in which they learned to categorize novel stimuli by detecting the feature dimension relevant for categorization. Tasks varied both in feature saliency (low-saliency tasks that required perceptual learning vs. high-saliency tasks) and in feedback information (tasks with mid-information, moderately ambiguous feedback that increased attentional load, vs. tasks with high-information, non-ambiguous feedback). We found that mid-information and high-information feedback were similarly effective for VCL in high-saliency tasks. This suggests that an increased attentional load, associated with the processing of moderately ambiguous feedback, has little effect on VCL when features are salient. In low-saliency tasks, VCL relied on slower perceptual learning; but when the feedback was highly informative, participants were ultimately able to attain the same performance as in the high-saliency VCL tasks. However, VCL was significantly compromised in the low-saliency, mid-information feedback task. We suggest that such low-saliency, mid-information learning scenarios are characterized by a ‘cognitive loop paradox’ in which two interdependent learning processes have to take place simultaneously. PMID:25745404

  10. Search for Patterns of Functional Specificity in the Brain: A Nonparametric Hierarchical Bayesian Model for Group fMRI Data

    PubMed Central

    Sridharan, Ramesh; Vul, Edward; Hsieh, Po-Jang; Kanwisher, Nancy; Golland, Polina

    2012-01-01

    Functional MRI studies have uncovered a number of brain areas that demonstrate highly specific functional patterns. In the case of visual object recognition, small, focal regions have been characterized with selectivity for visual categories such as human faces. In this paper, we develop an algorithm that automatically learns patterns of functional specificity from fMRI data in a group of subjects. The method does not require spatial alignment of functional images from different subjects. The algorithm is based on a generative model that comprises two main layers. At the lower level, we express the functional brain response to each stimulus as a binary activation variable. At the next level, we define a prior over sets of activation variables in all subjects. We use a Hierarchical Dirichlet Process as the prior in order to learn the patterns of functional specificity shared across the group, which we call functional systems, and estimate the number of these systems. Inference based on our model enables automatic discovery and characterization of dominant and consistent functional systems. We apply the method to data from a visual fMRI study comprising 69 distinct stimulus images. The discovered system activation profiles correspond to selectivity for a number of image categories such as faces, bodies, and scenes. Among systems found by our method, we identify new areas that are deactivated by face stimuli. In empirical comparisons with previously proposed exploratory methods, our results appear superior in capturing the structure in the space of visual categories of stimuli. PMID:21884803
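    The two model layers described above can be caricatured in a few lines: thresholding responses into binary activation variables, and a Chinese-restaurant-process draw of the kind that underlies a (Hierarchical) Dirichlet Process prior. This is a schematic sketch with an invented threshold and toy values, not the authors' inference procedure:

```python
import random

def binarize(responses, threshold=0.5):
    """Lower layer: one binary activation variable per stimulus."""
    return [1 if r > threshold else 0 for r in responses]

def crp_partition(n_items, alpha, rng):
    """Draw a partition of items from a Chinese restaurant process;
    alpha controls how readily new 'systems' (clusters) are created,
    so the number of clusters need not be fixed in advance."""
    table_sizes = []
    assignments = []
    for _ in range(n_items):
        weights = table_sizes + [alpha]          # rich-get-richer + new table
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(table_sizes):
            table_sizes.append(1)                # open a new cluster
        else:
            table_sizes[k] += 1
        assignments.append(k)
    return assignments

rng = random.Random(0)
print(binarize([0.9, 0.1, 0.7]))                 # -> [1, 0, 1]
print(crp_partition(10, alpha=1.0, rng=rng))     # cluster label per item
```

The hierarchical version shares the cluster ("functional system") inventory across subjects while letting each subject have its own mixing weights, which is what lets the method work without spatial alignment.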

  11. The Cat Is out of the Bag: The Joint Influence of Previous Experience and Looking Behavior on Infant Categorization

    ERIC Educational Resources Information Center

    Kovack-Lesh, Kristine A.; Horst, Jessica S.; Oakes, Lisa M.

    2008-01-01

    We examined the effect of 4-month-old infants' previous experience with dogs, cats, or both and their online looking behavior on their learning of the adult-defined category of "cat" in a visual familiarization task. Four-month-old infants' (N = 123) learning in the laboratory was jointly determined by whether or not they had experience…

  12. A connectionist model of category learning by individuals with high-functioning autism spectrum disorder.

    PubMed

    Dovgopoly, Alexander; Mercado, Eduardo

    2013-06-01

    Individuals with autism spectrum disorder (ASD) show atypical patterns of learning and generalization. We explored the possible impacts of autism-related neural abnormalities on perceptual category learning using a neural network model of visual cortical processing. When applied to experiments in which children or adults were trained to classify complex two-dimensional images, the model can account for atypical patterns of perceptual generalization. This is only possible, however, when individual differences in learning are taken into account. In particular, analyses performed with a self-organizing map suggested that individuals with high-functioning ASD show two distinct generalization patterns: one that is comparable to typical patterns, and a second in which there is almost no generalization. The model leads to novel predictions about how individuals will generalize when trained with simplified input sets and can explain why some researchers have failed to detect learning or generalization deficits in prior studies of category learning by individuals with autism. On the basis of these simulations, we propose that deficits in basic neural plasticity mechanisms may be sufficient to account for the atypical patterns of perceptual category learning and generalization associated with autism, but they do not account for why only a subset of individuals with autism would show such deficits. If variations in performance across subgroups reflect heterogeneous neural abnormalities, then future behavioral and neuroimaging studies of individuals with ASD will need to account for such disparities.
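    The self-organizing map mentioned in the analyses can be sketched minimally in NumPy; the map size, rates, and one-dimensional data below are invented for illustration and are far simpler than the model in the paper:

```python
import numpy as np

def train_som(data, n_units=4, epochs=50, lr=0.5, radius=1, seed=0):
    """Minimal 1-D self-organizing map: each unit's weight moves
    toward samples for which it (or a grid neighbor) is the
    best-matching unit, with a decaying learning rate."""
    rng = np.random.default_rng(seed)
    w = rng.random(n_units)                      # 1-D weights on a 1-D grid
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)
        for x in data:
            bmu = int(np.argmin(np.abs(w - x)))  # best-matching unit
            for j in range(n_units):
                if abs(j - bmu) <= radius:       # grid neighborhood
                    w[j] += rate * (x - w[j])
    return w

# Two clusters of 1-D 'response patterns' around 0.1 and 0.9:
data = np.array([0.08, 0.12, 0.1, 0.88, 0.92, 0.9])
w = train_som(data)
print(sorted(w))  # units end up split between the two clusters
```

Grouping participants by where their learning trajectories land on such a map is one way to expose the two distinct generalization patterns the abstract describes.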

  13. Designing Instructional Materials: Some Guidelines.

    ERIC Educational Resources Information Center

    Burbank, Lucille; Pett, Dennis

    Guidelines for the design of instructional materials are outlined in this paper. The principles of design are presented in five major categories: (1) general design (structural appeal and personal appeal); (2) instructional design (attention, memory, concept learning, and attitude change); (3) visual design (media considerations, pictures, graphs…

  14. Picturing words? Sensorimotor cortex activation for printed words in child and adult readers

    PubMed Central

    Dekker, Tessa M.; Mareschal, Denis; Johnson, Mark H.; Sereno, Martin I.

    2014-01-01

    Learning to read involves associating abstract visual shapes with familiar meanings. Embodiment theories suggest that word meaning is at least partially represented in distributed sensorimotor networks in the brain (Barsalou, 2008; Pulvermueller, 2013). We explored how reading comprehension develops by tracking when and how printed words start activating these “semantic” sensorimotor representations as children learn to read. Adults and children aged 7–10 years showed clear category-specific cortical specialization for tool versus animal pictures during a one-back categorisation task. Thus, sensorimotor representations for these categories were in place at all ages. However, co-activation of these same brain regions by the visual objects’ written names was only present in adults, even though all children could read and comprehend all presented words, showed adult-like task performance, and older children were proficient readers. It thus takes years of training and expert reading skill before spontaneous processing of printed words’ sensorimotor meanings develops in childhood. PMID:25463817

  15. Unsupervised visual discrimination learning of complex stimuli: Accuracy, bias and generalization.

    PubMed

    Montefusco-Siegmund, Rodrigo; Toro, Mauricio; Maldonado, Pedro E; Aylwin, María de la L

    2018-07-01

    Through same-different judgments, we can discriminate an immense variety of stimuli; consequently, such judgments are critical in our everyday interaction with the environment. Their quality depends on familiarity with the stimuli. Discrimination can be improved through learning, but to date we lack direct evidence of how learning shapes same-different judgments with complex stimuli. We studied unsupervised visual discrimination learning in 42 participants as they performed same-different judgments with two types of unfamiliar complex stimuli in the absence of labeling or individuation. Across nine daily training sessions with equiprobable same and different stimulus pairs, participants increased both sensitivity and criterion by reducing errors on both same and different pairs. With practice, there was superior performance for different pairs and a bias toward 'different' responses. To evaluate the process underlying this bias, we manipulated the proportion of same and different pairs, which resulted in an additional proportion-induced bias, suggesting that the bias observed with equal proportions was a stimulus-processing bias. Overall, these results suggest that unsupervised discrimination learning occurs through changes in stimulus processing that increase the sensory evidence and/or the precision of working memory. Finally, the acquired discrimination ability transferred fully to novel exemplars of the practiced stimulus category, in agreement with the acquisition of category-specific perceptual expertise. Copyright © 2018 Elsevier Ltd. All rights reserved.
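    The sensitivity and criterion measures reported above come from signal detection theory and can be computed from hit and false-alarm rates with the standard formulas d′ = z(H) − z(F) and c = −(z(H) + z(F))/2; the rates below are invented, not the study's data:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Sensitivity (d') and criterion (c) for a same-different task,
    from hit and false-alarm rates via the inverse normal CDF."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Example: 80% hits, 20% false alarms -> an unbiased observer (c = 0)
d, c = sdt_measures(0.80, 0.20)
print(round(d, 3), round(c, 3))  # -> 1.683 0.0
```

A negative c would correspond to the bias toward 'different' responses described in the abstract (assuming 'different' is coded as the signal).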

  16. Edmodo social learning network for elementary school mathematics learning

    NASA Astrophysics Data System (ADS)

    Ariani, Y.; Helsa, Y.; Ahmad, S.; Prahmana, RCI

    2017-12-01

    Instructional media can take the form of printed media, visual media, audio media, or multimedia. The development of instructional media can also take advantage of technological progress by utilizing the Edmodo social network. This research aims to develop a digital classroom learning model using the Edmodo social learning network for elementary school mathematics learning that is practical, valid, and effective in order to improve the quality of learning activities. The results showed that the prototype of a mathematics learning device for elementary school students using Edmodo was in the good category, and 72% of students passed the assessment after learning with Edmodo. Edmodo has become a promising way to engage students in a collaborative learning process.

  17. Not all triads are created equal: further support for the importance of visual and semantic proximity in object identification.

    PubMed

    Schweizer, Tom A; Dixon, Mike J; Desmarais, Geneviève; Smith, Stephen D

    2002-01-01

    Identification deficits were investigated in ELM, a temporal lobe stroke patient with category-specific deficits. We replicated previous work done on FS, a patient with category-specific deficits resulting from herpes viral encephalitis. ELM was tested using novel, computer-generated shapes that were paired with artifact labels. We paired semantically close or disparate labels with shapes, and ELM attempted to learn these pairings. Overall, ELM's shape-label confusions were most detrimentally affected when we used labels that referred to objects that were visually and semantically close. However, as with FS, ELM made as many errors when shapes were paired with the labels "donut," "tire," and "washer" as he did when they were paired with visually and semantically close artifact labels. Two explanations are put forth to account for the anomalous performance by both patients on the donut-tire-washer triad.

  18. Where’s Waldo? How perceptual, cognitive, and emotional brain processes cooperate during learning to categorize and find desired objects in a cluttered scene

    PubMed Central

    Chang, Hung-Cheng; Grossberg, Stephen; Cao, Yongqiang

    2014-01-01

    The Where’s Waldo problem concerns how individuals can rapidly learn to search a scene to detect, attend, recognize, and look at a valued target object in it. This article develops the ARTSCAN Search neural model to clarify how brain mechanisms across the What and Where cortical streams are coordinated to solve the Where’s Waldo problem. The What stream learns positionally-invariant object representations, whereas the Where stream controls positionally-selective spatial and action representations. The model overcomes deficiencies of these computationally complementary properties through What and Where stream interactions. Where stream processes of spatial attention and predictive eye movement control modulate What stream processes whereby multiple view- and positionally-specific object categories are learned and associatively linked to view- and positionally-invariant object categories through bottom-up and attentive top-down interactions. Gain fields control the coordinate transformations that enable spatial attention and predictive eye movements to carry out this role. What stream cognitive-emotional learning processes enable the focusing of motivated attention upon the invariant object categories of desired objects. What stream cognitive names or motivational drives can prime a view- and positionally-invariant object category of a desired target object. A volitional signal can convert these primes into top-down activations that can, in turn, prime What stream view- and positionally-specific categories. When it also receives bottom-up activation from a target, such a positionally-specific category can cause an attentional shift in the Where stream to the positional representation of the target, and an eye movement can then be elicited to foveate it. These processes describe interactions among brain regions that include visual cortex, parietal cortex, inferotemporal cortex, prefrontal cortex (PFC), amygdala, basal ganglia (BG), and superior colliculus (SC). 
PMID:24987339

  19. Four Types of Disabilities: Their Impact on Online Learning

    ERIC Educational Resources Information Center

    Crow, Kevin L.

    2008-01-01

    This article introduced some of the issues and challenges faced by online learners who have disabilities by providing an overview of four major disability categories: visual impairments, hearing impairments, motor impairments, and cognitive impairments. It also discussed how assistive technologies and universal design are being incorporated in…

  20. DIAGNOSIS AND TREATMENT OF READING DIFFICULTIES IN PUERTO RICAN AND NEGRO COMMUNITIES.

    ERIC Educational Resources Information Center

    COHEN, S. ALAN

    Reading disabilities are divided into three categories: those caused by perceptual factors, those caused by psychosocial factors, and those caused by psychoeducational factors. Poor development of visual perception constitutes a disproportionate percentage of learning disability among Negroes and Puerto Ricans in central cities. Early childhood…

  1. Learning through Seeing and Doing: Visual Supports for Children with Autism

    ERIC Educational Resources Information Center

    Rao, Shaila M.; Gagie, Brenda

    2006-01-01

    Autism is a life-long, complex developmental disorder that causes impairment in the way individuals process information. Autism belongs to heterogeneous categories of developmental disabilities where neurological disorders lead to deficits in a child's ability to communicate, understand language, play, develop social skills, and relate to others.…

  2. Statistical Learning Is Constrained to Less Abstract Patterns in Complex Sensory Input (but not the Least)

    PubMed Central

    Emberson, Lauren L.; Rubinstein, Dani

    2016-01-01

    The influence of statistical information on behavior (either through learning or adaptation) is quickly becoming foundational to many domains of cognitive psychology and cognitive neuroscience, from language comprehension to visual development. We investigate a central problem impacting these diverse fields: when encountering input with rich statistical information, are there any constraints on learning? This paper examines learning outcomes when adult learners are given statistical information across multiple levels of abstraction simultaneously: from abstract, semantic categories of everyday objects to individual viewpoints on these objects. After revealing statistical learning of abstract, semantic categories with scrambled individual exemplars (Exp. 1), participants viewed pictures where the categories as well as the individual objects predicted picture order (e.g., bird1—dog1, bird2—dog2). Our findings suggest that participants preferentially encode the relationships between the individual objects, even in the presence of statistical regularities linking semantic categories (Exps. 2 and 3). In a final experiment we investigate whether learners are biased towards learning object-level regularities or simply construct the most detailed model given the data (and therefore best able to predict the specifics of the upcoming stimulus) by investigating whether participants preferentially learn from the statistical regularities linking individual snapshots of objects or the relationship between the objects themselves (e.g., bird_picture1— dog_picture1, bird_picture2—dog_picture2). We find that participants fail to learn the relationships between individual snapshots, suggesting a bias towards object-level statistical regularities as opposed to merely constructing the most complete model of the input. This work moves beyond the previous existence proofs that statistical learning is possible at both very high and very low levels of abstraction (categories vs. 
individual objects) and suggests that, at least with the current categories and type of learner, there are biases to pick up on statistical regularities between individual objects even when robust statistical information is present at other levels of abstraction. These findings speak directly to emerging theories about how systems supporting statistical learning and prediction operate in our structure-rich environments. Moreover, the theoretical implications of the current work across multiple domains of study are already clear: statistical learning cannot be assumed to be unconstrained even if it has previously been established at a given level of abstraction when that information is presented in isolation. PMID:27139779
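    The object-level regularities at issue are usually quantified as transitional probabilities, P(next | current), estimated from the stimulus stream. The sketch below uses invented token names at the two levels of abstraction (individual exemplars vs. their categories) to show how the same stream carries statistics at both levels:

```python
from collections import Counter, defaultdict

def transitional_probs(stream):
    """Estimate P(next | current) from a sequence of tokens."""
    pair_counts = defaultdict(Counter)
    for cur, nxt in zip(stream, stream[1:]):
        pair_counts[cur][nxt] += 1
    return {cur: {nxt: n / sum(cnts.values()) for nxt, n in cnts.items()}
            for cur, cnts in pair_counts.items()}

# Exemplar-level stream in which bird1 always precedes dog1, etc.
stream = ["bird1", "dog1", "bird2", "dog2", "bird1", "dog1", "bird2", "dog2"]
tp = transitional_probs(stream)
print(tp["bird1"])  # -> {'dog1': 1.0}

# Collapsing exemplars to categories exposes the category-level regularity:
category = {t: t.rstrip("12") for t in stream}
tp_cat = transitional_probs([category[t] for t in stream])
print(tp_cat["bird"])  # -> {'dog': 1.0}
```

The experiments above effectively ask which of these two tables learners encode when both carry perfect (probability 1.0) structure.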

  3. How does the brain rapidly learn and reorganize view-invariant and position-invariant object representations in the inferotemporal cortex?

    PubMed

    Cao, Yongqiang; Grossberg, Stephen; Markowitz, Jeffrey

    2011-12-01

    All primates depend for their survival on being able to rapidly learn about and recognize objects. Objects may be visually detected at multiple positions, sizes, and viewpoints. How does the brain rapidly learn and recognize objects while scanning a scene with eye movements, without causing a combinatorial explosion in the number of cells that are needed? How does the brain avoid the problem of erroneously classifying parts of different objects together at the same or different positions in a visual scene? In monkeys and humans, a key area for such invariant object category learning and recognition is the inferotemporal cortex (IT). A neural model is proposed to explain how spatial and object attention coordinate the ability of IT to learn invariant category representations of objects that are seen at multiple positions, sizes, and viewpoints. The model clarifies how interactions within a hierarchy of processing stages in the visual brain accomplish this. These stages include the retina, lateral geniculate nucleus, and cortical areas V1, V2, V4, and IT in the brain's What cortical stream, as they interact with spatial attention processes within the parietal cortex of the Where cortical stream. The model builds upon the ARTSCAN model, which proposed how view-invariant object representations are generated. The positional ARTSCAN (pARTSCAN) model proposes how the following additional processes in the What cortical processing stream also enable position-invariant object representations to be learned: IT cells with persistent activity, and a combination of normalizing object category competition and a view-to-object learning law which together ensure that unambiguous views have a larger effect on object recognition than ambiguous views. The model explains how such invariant learning can be fooled when monkeys, or other primates, are presented with an object that is swapped with another object during eye movements to foveate the original object. 
The swapping procedure is predicted to prevent the reset of spatial attention, which would otherwise keep the representations of multiple objects from being combined by learning. Li and DiCarlo (2008) have presented neurophysiological data from monkeys showing how unsupervised natural experience in a target swapping experiment can rapidly alter object representations in IT. The model quantitatively simulates the swapping data by showing how the swapping procedure fools the spatial attention mechanism. More generally, the model provides a unifying framework, and testable predictions in both monkeys and humans, for understanding object learning data using neurophysiological methods in monkeys, and spatial attention, episodic learning, and memory retrieval data using functional imaging methods in humans. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. On the road to invariant recognition: explaining tradeoff and morph properties of cells in inferotemporal cortex using multiple-scale task-sensitive attentive learning.

    PubMed

    Grossberg, Stephen; Markowitz, Jeffrey; Cao, Yongqiang

    2011-12-01

    Visual object recognition is an essential accomplishment of advanced brains. Object recognition needs to be tolerant, or invariant, with respect to changes in object position, size, and view. In monkeys and humans, a key area for recognition is the anterior inferotemporal cortex (ITa). Recent neurophysiological data show that ITa cells with high object selectivity often have low position tolerance. We propose a neural model whose cells learn to simulate this tradeoff, as well as ITa responses to image morphs, while explaining how invariant recognition properties may arise in stages due to processes across multiple cortical areas. These processes include the cortical magnification factor, multiple receptive field sizes, and top-down attentive matching and learning properties that may be tuned by task requirements to attend to either concrete or abstract visual features with different levels of vigilance. The model predicts that data from the tradeoff and image morph tasks emerge from different levels of vigilance in the animals performing them. This result illustrates how different vigilance requirements of a task may change the course of category learning, notably the critical features that are attended and incorporated into learned category prototypes. The model outlines a path for developing an animal model of how defective vigilance control can lead to symptoms of various mental disorders, such as autism and amnesia. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Real-world visual statistics and infants' first-learned object names

    PubMed Central

    Clerkin, Elizabeth M.; Hart, Elizabeth; Rehg, James M.; Yu, Chen

    2017-01-01

    We offer a new solution to the unsolved problem of how infants break into word learning based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8 1/2 to 10 1/2 month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered with many different objects in view. However, the frequency distribution of object categories was extremely right skewed such that a very small set of objects was pervasively present—a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning. This article is part of the themed issue ‘New frontiers for statistical learning in the cognitive sciences’. PMID:27872373
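    The claim of an extremely right-skewed frequency distribution can be made concrete by asking what fraction of all object appearances the few most frequent categories account for; the labels and counts below are invented, not the paper's data:

```python
from collections import Counter

def top_k_coverage(appearances, k):
    """Fraction of all object appearances contributed by the
    k most frequent categories."""
    counts = Counter(appearances)
    top = sum(n for _, n in counts.most_common(k))
    return top / len(appearances)

# Hypothetical per-frame object labels from head-camera mealtime video:
labels = (["bowl"] * 40 + ["spoon"] * 30 + ["cup"] * 15 +
          ["toy"] * 5 + ["book"] * 5 + ["plant"] * 5)
print(top_k_coverage(labels, 2))  # -> 0.7
```

In a right-skewed distribution like this, a small k already covers most appearances, which is the property argued to reduce referential ambiguity for the infant learner.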

  6. Categorization for Faces and Tools—Two Classes of Objects Shaped by Different Experience—Differs in Processing Timing, Brain Areas Involved, and Repetition Effects

    PubMed Central

    Kozunov, Vladimir; Nikolaeva, Anastasia; Stroganova, Tatiana A.

    2018-01-01

    The brain mechanisms that integrate the separate features of sensory input into a meaningful percept depend upon the prior experience of interaction with the object and differ between categories of objects. Recent studies using representational similarity analysis (RSA) have characterized either the spatial patterns of brain activity for different categories of objects or described how category structure in neuronal representations emerges in time, but never simultaneously. Here we applied a novel, region-based, multivariate pattern classification approach in combination with RSA to magnetoencephalography data to extract activity associated with qualitatively distinct processing stages of visual perception. We asked participants to name what they see whilst viewing bitonal visual stimuli of two categories predominantly shaped by either value-dependent or sensorimotor experience, namely faces and tools, and meaningless images. We aimed to disambiguate the spatiotemporal patterns of brain activity between the meaningful categories and determine which differences in their processing were attributable to either perceptual categorization per se, or later-stage mentalizing-related processes. We have extracted three stages of cortical activity corresponding to low-level processing, category-specific feature binding, and supra-categorical processing. All face-specific spatiotemporal patterns were associated with bilateral activation of ventral occipito-temporal areas during the feature binding stage at 140–170 ms. The tool-specific activity was found both within the categorization stage and in a later period not thought to be associated with binding processes. The tool-specific binding-related activity was detected within a 210–220 ms window and was located to the intraparietal sulcus of the left hemisphere. 
Brain activity common for both meaningful categories started at 250 ms and included widely distributed assemblies within parietal, temporal, and prefrontal regions. Furthermore, we hypothesized and tested whether activity within face and tool-specific binding-related patterns would demonstrate oppositely acting effects following procedural perceptual learning. We found that activity in the ventral, face-specific network increased following the stimuli repetition. In contrast, tool processing in the dorsal network adapted by reducing its activity over the repetition period. Altogether, we have demonstrated that activity associated with visual processing of faces and tools during the categorization stage differ in processing timing, brain areas involved, and in their dynamics underlying stimuli learning. PMID:29379426
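    Representational similarity analysis, as used above, compares regions (or time windows) through their representational dissimilarity matrices (RDMs); a minimal NumPy version, with invented response patterns rather than the study's MEG data, could look like:

```python
import numpy as np

def rdm(patterns):
    """RDM: 1 - Pearson correlation between each pair of condition
    patterns (rows = conditions, columns = sensors/voxels)."""
    return 1.0 - np.corrcoef(patterns)

def compare_rdms(rdm_a, rdm_b):
    """Spearman correlation of the two RDMs' upper triangles
    (simple rank-then-Pearson; assumes no tied values)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    a, b = rdm_a[iu], rdm_b[iu]
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return float(np.corrcoef(rank(a), rank(b))[0, 1])

rng = np.random.default_rng(1)
patterns = rng.standard_normal((4, 10))            # 4 conditions x 10 sensors
noisy = patterns + 0.01 * rng.standard_normal((4, 10))
print(round(compare_rdms(rdm(patterns), rdm(noisy)), 2))  # near 1.0 here
```

Tracking such RDM comparisons across time windows is what lets the method separate the categorization stage from later supra-categorical processing.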

  8. Impairments in part-whole representations of objects in two cases of integrative visual agnosia.

    PubMed

    Behrmann, Marlene; Williams, Pepper

    2007-10-01

    How complex multipart visual objects are represented perceptually remains a subject of ongoing investigation. One source of evidence that has been used to shed light on this issue comes from the study of individuals who fail to integrate disparate parts of visual objects. This study reports a series of experiments that examine the ability of two such patients with this form of agnosia (integrative agnosia; IA), S.M. and C.R., to discriminate and categorize exemplars of a rich set of novel objects, "Fribbles", whose visual similarity (number of shared parts) and category membership (shared overall shape) can be manipulated. Both patients performed increasingly poorly as the number of parts required for differentiating one Fribble from another increased. Both patients were also impaired at determining when two Fribbles belonged in the same category, a process that relies on abstracting spatial relations between parts. C.R., the less impaired of the two, but not S.M., eventually learned to categorize the Fribbles but required substantially more training than normal perceivers. S.M.'s failure is not attributable to a problem in learning to use a label for identification nor is it obviously attributable to a visual memory deficit. Rather, the findings indicate that, although the patients may be able to represent a small number of parts independently, in order to represent multipart images, the parts need to be integrated or chunked into a coherent whole. It is this integrative process that is impaired in IA and appears to play a critical role in the normal object recognition of complex images.

  9. Fine-grained visual marine vessel classification for coastal surveillance and defense applications

    NASA Astrophysics Data System (ADS)

    Solmaz, Berkan; Gundogdu, Erhan; Karaman, Kaan; Yücesoy, Veysel; Koç, Aykut

    2017-10-01

    The need for automated visual content analysis has substantially increased due to the large volume of imagery captured by surveillance cameras. With a focus on the development of practical methods for extracting effective visual data representations, deep neural network based representations have received great attention due to their success in the visual categorization of generic images. For fine-grained image categorization, a closely related yet more challenging problem owing to the high visual similarity within subgroups, diverse applications have been developed, such as classifying images of vehicles, birds, food and plants. Here, we propose the use of deep neural network based representations for categorizing and identifying marine vessels for defense and security applications. First, we gather a large number of marine vessel images via online sources, grouping them into four coarse categories: naval, civil, commercial and service vessels. Next, we subgroup naval vessels into fine categories such as corvettes, frigates and submarines. For distinguishing images, we extract state-of-the-art deep visual representations and train support-vector-machines. Furthermore, we fine-tune the deep representations for marine vessel images. Experiments address two scenarios, classification and verification of naval marine vessels. The classification experiment targets coarse categorization, as well as learning models of the fine categories. The verification experiment involves identifying specific naval vessels by deciding whether a pair of images belongs to the identical vessel with the help of the learned deep representations. Given the promising performance obtained, we believe the presented capabilities would be essential components of future coastal and on-board surveillance systems.
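    The verification scenario reduces to a pair-similarity decision over the learned representations. A minimal sketch, assuming cosine similarity over CNN feature vectors and an illustrative decision threshold (the abstract does not publish these specifics):

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity of two deep-feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_vessel(feat_a, feat_b, threshold=0.8):
    """Verification decision: judge a pair of images to show the identical
    vessel when their feature similarity clears a threshold (the 0.8 value
    is illustrative and would be tuned on validation data)."""
    return cosine_similarity(feat_a, feat_b) >= threshold

# Toy 4-D "deep features"; real ones would come from the fine-tuned CNN.
probe    = np.array([0.9, 0.1, 0.0, 0.2])
match    = np.array([0.85, 0.15, 0.05, 0.25])
nonmatch = np.array([0.0, 0.1, 0.95, 0.1])
print(same_vessel(probe, match), same_vessel(probe, nonmatch))  # True False
```

    In a deployed system the feature extractor, not the similarity metric, carries most of the discriminative power, which is why the abstract emphasizes fine-tuning the deep representations on marine vessel images.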

  10. Learning to match auditory and visual speech cues: social influences on acquisition of phonological categories.

    PubMed

    Altvater-Mackensen, Nicole; Grossmann, Tobias

    2015-01-01

    Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential looking paradigm, 44 German 6-month-olds' ability to detect mismatches between concurrently presented auditory and visual native vowels was tested. Outcomes were related to mothers' speech style and interactive behavior assessed during free play with their infant, and to infant-specific factors assessed through a questionnaire. Results show that mothers' and infants' social behavior modulated infants' preference for matching audiovisual speech. Moreover, infants' audiovisual speech perception correlated with later vocabulary size, suggesting a lasting effect on language development.

  11. [Cognitive enrichment in farm animals--the impact of social rank and social environment on learning behaviour of dwarf goats].

    PubMed

    Baymann, Ulrike; Langbein, Jan; Siebert, Katrin; Nürnberg, Gerd; Manteuffel, Gerhard; Mohr, Elmar

    2007-01-01

    The influence of social rank and social environment on visual discrimination learning of small groups of Nigerian dwarf goats (Capra hircus, n = 79) was studied using a computer-controlled learning device integrated in the animals' home pen. The experiment was divided into three sections (LE1, LE1u, LE2; 14 days each). In LE1 the goats learned a discrimination task in a socially stable environment. In LE1u animals were mixed and relocated to another pen and given the same task as in LE1. In LE2 the animals were mixed and relocated again and given a new discrimination task. We used drinking water as a primary reinforcer. The rank category of the goats was analysed as alpha, omega or middle ranking for each section of the experiment. The rank category had an influence on daily learning success (percentage of successful trials per day) only in LE1u. Daily learning success decreased after mixing and relocation of the animals in LE1u and LE2 compared to LE1. This resulted in an undersupply of drinking water on the first day of both these tasks. We discuss social stress induced by agonistic interactions after mixing as a reason for this decline. The absolute learning performance (trials to reach the learning criterion) of the omega animals was lower in LE2 compared to the other rank categories. Furthermore, their absolute learning performance was lower in LE2 compared to LE1. For future application of similar automated learning devices in animal husbandry, we recommend against combining management routines such as mixing and relocation with changes in the learning task, because of the negative effects on learning performance, particularly for the omega animals.

  12. Computer-aided Classification of Mammographic Masses Using Visually Sensitive Image Features

    PubMed Central

    Wang, Yunzhi; Aghaei, Faranak; Zarafshani, Ali; Qiu, Yuchen; Qian, Wei; Zheng, Bin

    2017-01-01

    Purpose To develop a new computer-aided diagnosis (CAD) scheme that computes visually sensitive image features routinely used by radiologists to develop a machine learning classifier and distinguish between the malignant and benign breast masses detected from digital mammograms. Methods An image dataset including 301 breast masses was retrospectively selected. From each segmented mass region, we computed image features that mimic five categories of visually sensitive features routinely used by radiologists in reading mammograms. We then selected five optimal features in the five feature categories and applied logistic regression models for classification. A new CAD interface was also designed to show lesion segmentation, computed feature values and classification score. Results Areas under ROC curves (AUC) were 0.786±0.026 and 0.758±0.027 when classifying the mass regions depicted on the two view images, respectively. By fusing the classification scores computed from the two regions, AUC increased to 0.806±0.025. Conclusion This study demonstrated a new approach to developing a CAD scheme based on 5 visually sensitive image features. Combined with a “visual aid” interface, CAD results may be much more easily explainable to observers and may increase their confidence in considering CAD-generated classification results, compared with other conventional CAD approaches that involve many complicated and visually insensitive texture features. PMID:27911353
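    The two-view score fusion described in the Results can be sketched as follows; the coefficients and feature values below are hypothetical placeholders, not the fitted model from the study:

```python
import numpy as np

def logistic_score(features, weights, bias):
    """Classification score for one mass region: logistic regression
    over the five visually sensitive feature values."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

def fused_score(feat_view1, feat_view2, weights, bias):
    """Fuse the two view-specific scores by averaging, the step that
    raised AUC from 0.786/0.758 to 0.806 in the study."""
    return 0.5 * (logistic_score(feat_view1, weights, bias)
                  + logistic_score(feat_view2, weights, bias))

# Hypothetical coefficients and feature values (five features per view).
w = np.array([0.8, -0.5, 1.2, 0.3, -0.7])
b = -0.2
view1 = np.array([0.6, 0.2, 0.9, 0.4, 0.1])
view2 = np.array([0.5, 0.3, 0.8, 0.5, 0.2])
print(fused_score(view1, view2, w, b))
```

    Averaging is the simplest fusion rule; it tends to help when the two views carry partly independent errors, which is consistent with the reported AUC gain.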

  13. Cross-Situational Learning with Bayesian Generative Models for Multimodal Category and Word Learning in Robots

    PubMed Central

    Taniguchi, Akira; Taniguchi, Tadahiro; Cangelosi, Angelo

    2017-01-01

    In this paper, we propose a Bayesian generative model that can form multiple categories based on each sensory-channel and can associate words with any of the four sensory-channels (action, position, object, and color). This paper focuses on cross-situational learning using the co-occurrence between words and information of sensory-channels in complex situations rather than conventional situations of cross-situational learning. We conducted a learning scenario using a simulator and a real humanoid iCub robot. In the scenario, a human tutor provided a sentence that describes an object of visual attention and an accompanying action to the robot. The scenario was set as follows: the number of words per sensory-channel was three or four, and the number of trials for learning was 20 and 40 for the simulator and 25 and 40 for the real robot. The experimental results showed that the proposed method was able to estimate the multiple categorizations and to learn the relationships between multiple sensory-channels and words accurately. In addition, we conducted an action generation task and an action description task based on word meanings learned in the cross-situational learning scenario. The experimental results showed that the robot could successfully use the word meanings learned by using the proposed method. PMID:29311888
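    The co-occurrence signal that cross-situational learning exploits can be illustrated with a bare counting sketch. This is only the statistical backbone: the paper's method is a full Bayesian generative model, and the words and channel labels below are invented for illustration:

```python
# Toy cross-situational trials: each pairs a heard word with the
# categories active on three sensory channels (labels are hypothetical).
trials = [
    ("grasp", {"action": "grasp", "object": "ball", "color": "red"}),
    ("ball",  {"action": "push",  "object": "ball", "color": "red"}),
    ("red",   {"action": "grasp", "object": "cup",  "color": "red"}),
    ("ball",  {"action": "grasp", "object": "ball", "color": "blue"}),
]

# Count word / (channel, category) co-occurrences across ambiguous situations.
counts = {}
for word, channels in trials:
    for channel, category in channels.items():
        key = (word, channel, category)
        counts[key] = counts.get(key, 0) + 1

def best_referent(word):
    """Channel/category pair that co-occurs most often with the word."""
    cands = {k[1:]: v for k, v in counts.items() if k[0] == word}
    return max(cands, key=cands.get)

print(best_referent("ball"))  # ('object', 'ball'), present on every "ball" trial
```

    A generative model improves on raw counts by sharing statistical strength across channels and handling noisy or incomplete observations, but the disambiguating signal is the same.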

  14. Recognition-induced forgetting is not due to category-based set size.

    PubMed

    Maxcey, Ashleigh M

    2016-01-01

    What are the consequences of accessing a visual long-term memory representation? Previous work has shown that accessing a long-term memory representation via retrieval improves memory for the targeted item and hurts memory for related items, a phenomenon called retrieval-induced forgetting. Recently we found a similar forgetting phenomenon with recognition of visual objects. Recognition-induced forgetting occurs when practice recognizing an object during a two-alternative forced-choice task, from a group of objects learned at the same time, leads to worse memory for objects from that group that were not practiced. An alternative explanation of this effect is that category-based set size is inducing forgetting, not recognition practice as claimed by some researchers. This alternative explanation is possible because during recognition practice subjects make old-new judgments in a two-alternative forced-choice task, and are thus exposed to more objects from practiced categories, potentially inducing forgetting due to set size. Herein I pitted the category-based set-size hypothesis against the recognition-induced forgetting hypothesis. To this end, I parametrically manipulated the amount of practice objects received in the recognition-induced forgetting paradigm. If forgetting is due to category-based set size, then the magnitude of forgetting of related objects will increase as the number of practice trials increases. If forgetting is recognition induced, the set size of exemplars from any given category should not be predictive of memory for practiced objects. Consistent with this latter hypothesis, additional practice systematically improved memory for practiced objects, but did not systematically affect forgetting of related objects. These results firmly establish that recognition practice induces forgetting of related memories. Future directions and important real-world applications of using recognition to access our visual memories of previously encountered objects are discussed.

  15. Practice guidelines in the context of primary care, learning and usability in the physicians' decision-making process--a qualitative study.

    PubMed

    Ingemansson, Maria; Bastholm-Rahmner, Pia; Kiessling, Anna

    2014-08-20

    Decision-making is central for general practitioners (GPs). Practice guidelines are important tools in this process, but implementing them in the complex context of primary care is a challenge. The purpose of this study was to explore how GPs approach, learn from and use practice guidelines in their day-to-day decision-making process in primary care. A qualitative approach using focus-group interviews was chosen in order to provide in-depth information. The participants were 22 GPs with a median of seven years of experience in primary care, representing seven primary healthcare centres in Stockholm, Sweden in 2011. The interviews focused on how the GPs use guidelines in their decision-making, factors that influence their decision how to approach these guidelines, and how they could encourage the learning process in routine practice. Data were analysed by qualitative content analysis. Meaning units were condensed and grouped in categories. After interpreting the content in the categories, themes were created. Three themes were conceptualized. The first theme emphasized using guidelines through interactive, contextualized dialogue. The categories underpinning this theme, (1) feedback by peer-learning and (2) feedback by collaboration, mutual learning, and equality between specialties, identified important ways to achieve this learning dialogue. Confidence was central in the second theme, learning that establishes confidence to provide high-quality care. Three aspects of confidence were identified in the categories of this theme: (1) confidence by confirmation, (2) confidence by reliability and (3) confidence by evaluation of own results. In the third theme, learning by use of relevant evidence in the decision-making process, we identified two categories, (1) design and layout visualizing the evidence and (2) accessibility adapted to the clinical decision-making process, as prerequisites for using the practice guidelines. Decision-making in primary care is a dual process that involves use of intuitive and analytic thinking in a balanced way in order to provide high-quality care. Key aspects of effective learning in this clinical decision-making process were: contextualized dialogue based on the GPs' own experiences, feedback on their own results, and easy access to short guidelines perceived as trustworthy.

  16. Online learning and control of attraction basins for the development of sensorimotor control strategies.

    PubMed

    de Rengervé, Antoine; Andry, Pierre; Gaussier, Philippe

    2015-04-01

    Imitation and learning from humans require an adequate sensorimotor controller to learn and encode behaviors. We present the Dynamic Muscle Perception-Action (DM-PerAc) model to control a multiple degrees-of-freedom (DOF) robot arm. In the original PerAc model, path-following or place-reaching behaviors correspond to the sensorimotor attractors resulting from the dynamics of learned sensorimotor associations. The DM-PerAc model, inspired by human muscles, permits one to combine impedance-like control with the capability of learning sensorimotor attraction basins. We detail a solution to learn the DM-PerAc visuomotor controller incrementally online. Postural attractors are learned by adapting the muscle activations in the model depending on movement errors. Visuomotor categories merging visual and proprioceptive signals are associated with these muscle activations. Thus, the visual and proprioceptive signals activate the motor action, generating an attractor which satisfies both visual and proprioceptive constraints. This visuomotor controller can serve as a basis for imitative behaviors. In addition, the muscle activation patterns can define directions of movement instead of postural attractors. Such patterns can be used in state-action couples to generate trajectories as in the PerAc model. We discuss a possible extension of the DM-PerAc controller by adapting Fukuyori's controller based on the Langevin equation. This controller can serve not only to reach attractors which were not explicitly learned, but also to learn the state-action couples that define trajectories.

  17. Student Visual Communication of Evolution

    NASA Astrophysics Data System (ADS)

    Oliveira, Alandeom W.; Cook, Kristin

    2017-06-01

    Despite growing recognition of the importance of visual representations to science education, previous research has given attention mostly to verbal modalities of evolution instruction. Visual aspects of classroom learning of evolution are yet to be systematically examined by science educators. The present study attends to this issue by exploring the types of evolutionary imagery deployed by secondary students. Our visual design analysis revealed that students resorted to two larger categories of images when visually communicating evolution: spatial metaphors (images that provided a spatio-temporal account of human evolution as a metaphorical "walk" across time and space) and symbolic representations ("icons of evolution" such as personal portraits of Charles Darwin that simply evoked evolutionary theory rather than metaphorically conveying its conceptual contents). It is argued that students need opportunities to collaboratively critique evolutionary imagery and to extend their visual perception of evolution beyond dominant images.

  18. A framework for medical image retrieval using machine learning and statistical similarity matching techniques with relevance feedback.

    PubMed

    Rahman, Md Mahmudur; Bhattacharya, Prabir; Desai, Bipin C

    2007-01-01

    A content-based image retrieval (CBIR) framework for a diverse collection of medical images of different imaging modalities, anatomic regions with different orientations and biological systems is proposed. The organization of images in such a database (DB) is well defined with predefined semantic categories; hence, it can be useful for category-specific searching. The proposed framework consists of machine learning methods for image prefiltering, similarity matching using statistical distance measures, and a relevance feedback (RF) scheme. To narrow the semantic gap and increase retrieval efficiency, we investigate both supervised and unsupervised learning techniques to associate low-level global image features (e.g., color, texture, and edge) in the projected PCA-based eigenspace with their high-level semantic and visual categories. Specifically, we explore the use of a probabilistic multiclass support vector machine (SVM) and fuzzy c-means (FCM) clustering for categorization and prefiltering of images to reduce the search space. A category-specific statistical similarity matching is then performed at a finer level on the prefiltered images. To incorporate a better perception subjectivity, an RF mechanism is also added to update the query parameters dynamically and adjust the proposed matching functions. Experiments are based on a ground-truth DB consisting of 5000 diverse medical images of 20 predefined categories. Analysis of results based on cross-validation (CV) accuracy and precision-recall for image categorization and retrieval is reported. It demonstrates the improvement, effectiveness, and efficiency achieved by the proposed framework.
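    The prefilter-then-match pipeline can be sketched with a simplified stand-in, assuming nearest-centroid prefiltering in place of the paper's probabilistic SVM / FCM stage and plain Euclidean distance for the finer matching:

```python
import numpy as np

def prefilter(query, centroids, top_k=2):
    """Prefiltering sketch: keep the top_k categories whose feature-space
    centroids lie closest to the query. (The paper instead uses
    probabilistic SVM and fuzzy c-means outputs for this step.)"""
    d = np.linalg.norm(centroids - query, axis=1)
    return np.argsort(d)[:top_k]

def rank_within(query, db_feats, db_cats, keep):
    """Finer-level similarity matching restricted to the prefiltered
    categories: rank the surviving images by Euclidean distance."""
    idx = np.flatnonzero(np.isin(db_cats, keep))
    d = np.linalg.norm(db_feats[idx] - query, axis=1)
    return idx[np.argsort(d)]

centroids = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])  # 3 categories
db_feats  = np.array([[0.2, 0.1], [5.1, 4.8], [9.9, 0.3], [0.4, 0.4]])
db_cats   = np.array([0, 1, 2, 0])
query     = np.array([0.3, 0.2])
keep = prefilter(query, centroids)
print(rank_within(query, db_feats, db_cats, keep))  # [0 3 1]
```

    The point of the prefilter is the reduced search space: the final distance computation runs over only the images surviving the category stage, which is what makes category-specific matching tractable on a 5000-image database.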

  19. Pitch enhancement facilitates word learning across visual contexts

    PubMed Central

    Filippi, Piera; Gingras, Bruno; Fitch, W. Tecumseh

    2014-01-01

    This study investigates word-learning using a new experimental paradigm that integrates three processes: (a) extracting a word out of a continuous sound sequence, (b) inferring its referential meanings in context, (c) mapping the segmented word onto its broader intended referent, such as other objects of the same semantic category, and to novel utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. Here, we combine these strands of investigation into a single experimental approach, in which participants viewed a photograph belonging to one of three semantic categories while hearing a complex, five-word utterance containing a target word. Six between-subjects conditions were tested with 20 adult participants each. In condition 1, the only cue to word-meaning mapping was the co-occurrence of word and referents. This statistical cue was present in all conditions. In condition 2, the target word was sounded at a higher pitch. In condition 3, random words were sounded at a higher pitch, creating an inconsistent cue. In condition 4, the duration of the target word was lengthened. In conditions 5 and 6, an extraneous acoustic cue and a visual cue were associated with the target word, respectively. Performance in this word-learning task was significantly higher than that observed with simple co-occurrence only when pitch prominence consistently marked the target word. We discuss implications for the pragmatic value of pitch marking as well as the relevance of our findings to language acquisition and language evolution. PMID:25566144

  20. Semantic contextual cuing and visual attention.

    PubMed

    Goujon, Annabelle; Didierjean, André; Marmèche, Evelyne

    2009-02-01

    Since M. M. Chun and Y. Jiang's (1998) original study, a large body of research based on the contextual cuing paradigm has shown that the visuocognitive system is capable of capturing certain regularities in the environment in an implicit way. The present study investigated whether regularities based on the semantic category membership of the context can be learned implicitly and whether that learning depends on attention. The contextual cuing paradigm was used with lexical displays in which the semantic category of the contextual words either did or did not predict the target location. Experiments 1 and 2 revealed that implicit contextual cuing effects can be extended to semantic category regularities. Experiments 3 and 4 indicated an implicit contextual cuing effect when the predictive context appeared in an attended color but not when the predictive context appeared in an ignored color. However, when the previously ignored context suddenly became attended, it immediately facilitated performance. In contrast, when the previously attended context suddenly became ignored, no benefit was observed. Results suggest that the expression of implicit semantic knowledge depends on attention but that latent learning can nevertheless take place outside the attentional field.

  1. Children (but not adults) judge similarity in own- and other-race faces by the color of their skin

    PubMed Central

    Balas, Benjamin; Peissig, Jessie; Moulson, Margaret

    2014-01-01

    Face shape and pigmentation are both diagnostic cues for face identification and categorization. In particular, both shape and pigmentation contribute to observers’ categorization of faces by race. Though many theoretical accounts of the behavioral other-race effect either explicitly or implicitly depend on differential use of visual information as a function of category expertise, there is little evidence that observers do in fact differentially rely upon distinct visual cues for own- and other-race faces. Presently, we examined how Asian and Caucasian children (4–6 years old) and adults use 3D shape and 2D pigmentation to make similarity judgments of White, Black, and Asian faces. Children in this age range are both capable of making category judgments about race, but are also sufficiently plastic with regard to the behavioral other-race effect that it seems as though their representations of facial appearance across different categories are still emerging. Using a simple match-to-sample similarity task, we found that children tend to use pigmentation to judge facial similarity more than adults, and also that own- vs. other-group category membership appears to influence how quickly children learn to use shape information more readily. We therefore suggest that children continue to adjust how different visual information is weighted during early and middle childhood and that experience with faces affects the speed at which adult-like weightings are established. PMID:25462031

  2. Representational Distance Learning for Deep Neural Networks

    PubMed Central

    McClure, Patrick; Kriegeskorte, Nikolaus

    2016-01-01

    Deep neural networks (DNNs) provide useful models of visual representational transformations. We present a method that enables a DNN (student) to learn from the internal representational spaces of a reference model (teacher), which could be another DNN or, in the future, a biological brain. Representational spaces of the student and the teacher are characterized by representational distance matrices (RDMs). We propose representational distance learning (RDL), a stochastic gradient descent method that drives the RDMs of the student to approximate the RDMs of the teacher. We demonstrate that RDL is competitive with other transfer learning techniques for two publicly available benchmark computer vision datasets (MNIST and CIFAR-100), while allowing for architectural differences between student and teacher. By pulling the student's RDMs toward those of the teacher, RDL significantly improved visual classification performance when compared to baseline networks that did not use transfer learning. In the future, RDL may enable combined supervised training of deep neural networks using task constraints (e.g., images and category labels) and constraints from brain-activity measurements, so as to build models that replicate the internal representational spaces of biological brains. PMID:28082889
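    The RDL objective, driving the student's RDM toward the teacher's, can be sketched in a few lines of numpy. This assumes correlation distance as the RDM metric and is an illustration of the loss term, not the authors' implementation:

```python
import numpy as np

def rdm(features):
    """Representational distance matrix: pairwise correlation distance
    (1 - Pearson r) between the rows (condition response patterns)."""
    z = features - features.mean(axis=1, keepdims=True)
    z /= np.linalg.norm(z, axis=1, keepdims=True)
    return 1.0 - z @ z.T

def rdl_loss(student_feats, teacher_feats):
    """Mean squared difference between the off-diagonal entries of the
    student and teacher RDMs -- the auxiliary term RDL adds to the task
    loss during stochastic gradient descent."""
    ds, dt = rdm(student_feats), rdm(teacher_feats)
    off_diag = ~np.eye(ds.shape[0], dtype=bool)
    return float(np.mean((ds[off_diag] - dt[off_diag]) ** 2))

# Identical representational geometry gives zero loss.
x = np.random.randn(8, 32)
print(rdl_loss(x, x))  # prints 0.0
```

    Because only the distance matrices are compared, the student and teacher layers may have different dimensionality, which is what allows architectural differences between the two networks. In training, this scalar would be computed on framework tensors and weighted against the classification loss.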

  4. Real-world visual statistics and infants' first-learned object names.

    PubMed

    Clerkin, Elizabeth M; Hart, Elizabeth; Rehg, James M; Yu, Chen; Smith, Linda B

    2017-01-05

    We offer a new solution to the unsolved problem of how infants break into word learning based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8 1/2- to 10 1/2-month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered, with many different objects in view. However, the frequency distribution of object categories was extremely right-skewed, such that a very small set of objects was pervasively present, a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning. This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'.

  5. Impact of audio-visual storytelling in simulation learning experiences of undergraduate nursing students.

    PubMed

    Johnston, Sandra; Parker, Christina N; Fox, Amanda

    2017-09-01

Use of high-fidelity simulation has become increasingly popular in nursing education, to the extent that it is now an integral component of most nursing programs. Anecdotal evidence suggests that students have difficulty engaging with simulation manikins due to their unrealistic appearance. Introducing the manikin as a 'real patient' with the use of an audio-visual narrative may engage students in the simulated learning experience and affect their learning. A paucity of literature currently exists on the use of audio-visual narratives to enhance simulated learning experiences. This study aimed to determine whether viewing an audio-visual narrative during a simulation pre-brief altered undergraduate nursing students' perceptions of the learning experience. A quasi-experimental post-test design was utilised with a convenience sample of final-year baccalaureate nursing students at a large metropolitan university. Participants completed a modified version of the Student Satisfaction with Simulation Experiences survey, a 12-item questionnaire containing questions on the ability to transfer skills learned in simulation to the real clinical world, the realism of the simulation, and the overall value of the learning experience. Descriptive statistics were used to summarise demographic information, and two-tailed, independent-groups t-tests were used to determine statistical differences within the categories. Students reported high levels of value, realism, and transferability in relation to viewing the audio-visual narrative, with a statistically significant result (t=2.38, p<0.02) in the subscale of transferability of learning from simulation to clinical practice. Differences within the age and gender subgroups were not significant but indicated some interesting trends. All students indicated high satisfaction with the simulation in relation to value and realism.
The significant finding for transferability of knowledge is vital to quality educational outcomes. Copyright © 2017. Published by Elsevier Ltd.
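The analysis above rests on a two-tailed, independent-groups t-test. A minimal sketch of that computation, using invented satisfaction scores (the actual survey data are not reported in this record, so the numbers and group labels below are illustrative only):

```python
# Illustration of a pooled-variance, independent-groups t statistic;
# the scores below are invented, not the study's survey data.
from statistics import mean, variance
from math import sqrt

def independent_t(a, b):
    """Pooled-variance t statistic for two independent groups."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

video_group = [4.5, 4.8, 4.2, 4.9, 4.6, 4.7]    # hypothetical: saw narrative
control_group = [4.1, 4.0, 4.3, 3.9, 4.2, 4.0]  # hypothetical: standard pre-brief

t = independent_t(video_group, control_group)
```

The resulting t value would then be compared against the t distribution with na+nb-2 degrees of freedom to obtain the two-tailed p value.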

  6. Imaginative Ideas for the Teacher of Mathematics, Grades K-12. Ranucci's Reservoir. A Collection of Articles by Ernest R. Ranucci.

    ERIC Educational Resources Information Center

    Farrell, Margaret A., Ed.

    This book is a collection of 21 articles by mathematics teacher Ernest Ranucci (1912-1976) grouped into five categories: (1) "Patterns" (for developing inductive reasoning skills); (2) "Mathematics in the World Around Us"; (3) "Spatial Visualization"; (4) "Inventiveness in Geometry"; and (5) "Games to Learn By", which demonstrates that mathematics…

  7. Accessibility Evaluation of Top-Ranking University Websites in World, Oceania, and Arab Categories for Home, Admission, and Course Description Webpages

    ERIC Educational Resources Information Center

    Alahmadi, Tahani; Drew, Steve

    2017-01-01

    Evaluating accessibility is an important equity step in assessing the effectiveness and usefulness of online learning materials for students with disabilities such as visual or hearing impairments. Previous studies in this area have indicated that, over time, university websites have become gradually more inaccessible. This paper relates findings…

  8. Occupational risk identification using hand-held or laptop computers.

    PubMed

    Naumanen, Paula; Savolainen, Heikki; Liesivuori, Jyrki

    2008-01-01

This paper describes the Work Environment Profile (WEP) program and its use in computer-based risk identification. It is installed on a hand-held computer or a laptop and used to identify risks during work site visits. A 5-category system describes the identified risks in 7 groups, i.e., accidents, biological and physical hazards, ergonomic and psychosocial load, chemicals, and information technology hazards; each group contains several qualifying factors. The 5 categories are colour-coded to aid visualization, so risk identification produces visual summary images whose interpretation is facilitated by the colours. The WEP program is a risk-assessment tool that is easy for both experts and nonprofessionals to learn and use. It is well suited to both small and larger enterprises, and considerable time is saved as no paper notes are needed.

  9. Towards a unified theory of neocortex: laminar cortical circuits for vision and cognition.

    PubMed

    Grossberg, Stephen

    2007-01-01

    A key goal of computational neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how laminar neocortical circuits give rise to biological intelligence. These circuits embody two new and revolutionary computational paradigms: Complementary Computing and Laminar Computing. Circuit properties include a novel synthesis of feedforward and feedback processing, of digital and analog processing, and of preattentive and attentive processing. This synthesis clarifies the appeal of Bayesian approaches but has a far greater predictive range that naturally extends to self-organizing processes. Examples from vision and cognition are summarized. A LAMINART architecture unifies properties of visual development, learning, perceptual grouping, attention, and 3D vision. A key modeling theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. It is noted how higher-order attentional constraints can influence multiple cortical regions, and how spatial and object attention work together to learn view-invariant object categories. In particular, a form-fitting spatial attentional shroud can allow an emerging view-invariant object category to remain active while multiple view categories are associated with it during sequences of saccadic eye movements. Finally, the chapter summarizes recent work on the LIST PARSE model of cognitive information processing by the laminar circuits of prefrontal cortex. LIST PARSE models the short-term storage of event sequences in working memory, their unitization through learning into sequence, or list, chunks, and their read-out in planned sequential performance that is under volitional control. LIST PARSE provides a laminar embodiment of Item and Order working memories, also called Competitive Queuing models, that have been supported by both psychophysical and neurobiological data. 
These examples show how variations of a common laminar cortical design can embody properties of visual and cognitive intelligence that seem, at least on the surface, to be mechanistically unrelated.

  10. STDP-based spiking deep convolutional neural networks for object recognition.

    PubMed

    Kheradpisheh, Saeed Reza; Ganjtabesh, Mohammad; Thorpe, Simon J; Masquelier, Timothée

    2018-03-01

Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or intermediate complexity in an unsupervised manner. These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated - using rate-based neural networks trained with back-propagation - that having many layers increases the recognition robustness, an approach known as deep learning. We thus designed a deep SNN, comprising several convolutional (trainable with STDP) and pooling layers. We used a temporal coding scheme where the most strongly activated neurons fire first, and less activated neurons fire later or not at all. The network was exposed to natural images. Thanks to STDP, neurons progressively learned features corresponding to prototypical patterns that were both salient and frequent. Only a few tens of examples per category were required and no label was needed. After learning, the complexity of the extracted features increased along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Coding was very sparse, with only a few thousand spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron. More generally, the activity of a few hundred such neurons contained robust category information, as demonstrated using a classifier on Caltech 101, ETH-80, and MNIST databases. We also demonstrate the superiority of STDP over other unsupervised techniques such as random crops (HMAX) or auto-encoders. Taken together, our results suggest that the combination of STDP with latency coding may be a key to understanding the way that the primate visual system learns, its remarkable processing speed and its low energy consumption.
These mechanisms are also interesting for artificial vision systems, particularly for hardware solutions. Copyright © 2017 Elsevier Ltd. All rights reserved.
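The two ingredients this abstract combines, intensity-to-latency coding and a simplified STDP rule, can be sketched in a few lines. This is a toy illustration under assumed parameters (`a_plus`, `a_minus`, the soft-bound form), not the network or the values used in the paper:

```python
# Toy sketch: latency coding plus a simplified additive STDP update.
# Parameter values and the soft-bound form are illustrative assumptions.
import numpy as np

def latency_code(activations):
    """Stronger activations fire earlier: spike time = activation rank."""
    order = np.argsort(-activations)            # most active neuron first
    times = np.empty_like(order)
    times[order] = np.arange(len(activations))  # time step 0, 1, 2, ...
    return times

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.0075):
    """Potentiate synapses whose presynaptic spike precedes the
    postsynaptic spike, depress the others; soft bounds keep w in [0, 1]."""
    dw = np.where(t_pre <= t_post, a_plus, -a_minus)
    return np.clip(w + dw * w * (1 - w), 0.0, 1.0)

rng = np.random.default_rng(0)
acts = rng.random(8)             # input intensities for 8 afferents
t_pre = latency_code(acts)       # their spike times
w = np.full(8, 0.5)              # initial synaptic weights
w = stdp_update(w, t_pre, t_post=4)  # postsynaptic neuron fires at step 4
```

Under this rule, only afferents that fired before (or at) the postsynaptic spike are strengthened, which is how the network comes to detect recurring early-firing patterns without labels.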

  11. Evidence that primary visual cortex is required for image, orientation, and motion discrimination by rats.

    PubMed

    Petruno, Sarah K; Clark, Robert E; Reinagel, Pamela

    2013-01-01

    The pigmented Long-Evans rat has proven to be an excellent subject for studying visually guided behavior including quantitative visual psychophysics. This observation, together with its experimental accessibility and its close homology to the mouse, has made it an attractive model system in which to dissect the thalamic and cortical circuits underlying visual perception. Given that visually guided behavior in the absence of primary visual cortex has been described in the literature, however, it is an empirical question whether specific visual behaviors will depend on primary visual cortex in the rat. Here we tested the effects of cortical lesions on performance of two-alternative forced-choice visual discriminations by Long-Evans rats. We present data from one highly informative subject that learned several visual tasks and then received a bilateral lesion ablating >90% of primary visual cortex. After the lesion, this subject had a profound and persistent deficit in complex image discrimination, orientation discrimination, and full-field optic flow motion discrimination, compared with both pre-lesion performance and sham-lesion controls. Performance was intact, however, on another visual two-alternative forced-choice task that required approaching a salient visual target. A second highly informative subject learned several visual tasks prior to receiving a lesion ablating >90% of medial extrastriate cortex. This subject showed no impairment on any of the four task categories. Taken together, our data provide evidence that these image, orientation, and motion discrimination tasks require primary visual cortex in the Long-Evans rat, whereas approaching a salient visual target does not.

  12. Innovative intelligent technology of distance learning for visually impaired people

    NASA Astrophysics Data System (ADS)

    Samigulina, Galina; Shayakhmetova, Assem; Nuysuppov, Adlet

    2017-12-01

The aim of the study is to develop innovative intelligent technology and information systems for distance education of people with impaired vision (PIV). To address this problem, a comprehensive approach has been proposed that combines artificial intelligence methods with statistical analysis. Creating an accessible learning environment and identifying the intellectual, physiological, and psychophysiological characteristics of how this category of people perceives and assimilates information are based on a cognitive approach. On the basis of fuzzy logic, an individually oriented learning path for PIV is constructed, with the aim of providing high-quality engineering education using the modern equipment of joint-use laboratories.

  13. Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements

    PubMed Central

    Grossberg, Stephen; Srinivasan, Karthik; Yazdanbakhsh, Arash

    2015-01-01

    How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations. PMID:25642198

  14. Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements.

    PubMed

    Grossberg, Stephen; Srinivasan, Karthik; Yazdanbakhsh, Arash

    2014-01-01

    How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.

  15. Learning about the scale of the solar system using digital planetarium visualizations

    NASA Astrophysics Data System (ADS)

    Yu, Ka Chun; Sahami, Kamran; Dove, James

    2017-07-01

We studied the use of a digital planetarium for teaching relative distances and sizes in introductory undergraduate astronomy classes. Inspired in part by the classic short film The Powers of Ten and by large physical scale models of the Solar System that can be explored on foot, we created lectures using virtual versions of these two pedagogical approaches for classes that saw either an immersive treatment in the planetarium or a non-immersive version in the regular classroom (N = 973 students in total). Students who visited the planetarium not only had the greatest learning gains, but their performance also increased with time, whereas students who saw the same visuals projected onto a flat display in their classroom showed less retention over time. The gains seen in the students who visited the planetarium reveal that this medium is a powerful tool for visualizing scale over multiple orders of magnitude. However, the modest gains for the students in the regular classroom also show the utility of these visualization approaches for the broader category of classroom physics simulations.

  16. Development of a vocabulary of object shapes in a child with a very-early-acquired visual agnosia: a unique case.

    PubMed

    Funnell, Elaine; Wilding, John

    2011-02-01

    We report a longitudinal study of an exceptional child (S.R.) whose early-acquired visual agnosia, following encephalitis at 8 weeks of age, did not prevent her from learning to construct an increasing vocabulary of visual object forms (drawn from different categories), albeit slowly. S.R. had problems perceiving subtle differences in shape; she was unable to segment local letters within global displays; and she would bring complex scenes close to her eyes: a symptom suggestive of an attempt to reduce visual crowding. Investigations revealed a robust ability to use the gestalt grouping factors of proximity and collinearity to detect fragmented forms in noisy backgrounds, compared with a very weak ability to segment fragmented forms on the basis of contrasts of shape. When contrasts in spatial grouping and shape were pitted against each other, shape made little contribution, consistent with problems in perceiving complex scenes, but when shape contrast was varied, and spatial grouping was held constant, S.R. showed the same hierarchy of difficulty as the controls, although her responses were slowed. This is the first report of a child's visual-perceptual development following very early neurological impairments to the visual cortex. Her ability to learn to perceive visual shape following damage at a rudimentary stage of perceptual development contrasts starkly with the loss of such ability in childhood cases of acquired visual agnosia that follow damage to the established perceptual system. Clearly, there is a critical period during which neurological damage to the highly active, early developing visual-perceptual system does not prevent but only impairs further learning.

  17. Toward interactive search in remote sensing imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, Reid B; Hush, Do; Harvey, Neal

    2010-01-01

To move from data to information in almost all science and defense applications requires a human-in-the-loop to validate information products, resolve inconsistencies, and account for incomplete and potentially deceptive sources of information. This is a key motivation for visual analytics, which aims to develop techniques that complement and empower human users. By contrast, the vast majority of algorithms developed in machine learning aim to replace human users in data exploitation. In this paper we describe a recently introduced machine learning problem, called rare category detection, which may be a better match to visual analytic environments. We describe new design criteria for this problem and present comparisons to existing techniques on both synthetic and real-world datasets. We conclude by describing an application in broad-area search of remote sensing imagery.

  18. Differential effect of visual masking in perceptual categorization.

    PubMed

    Hélie, Sébastien; Cousineau, Denis

    2015-06-01

This article explores the visual information used to categorize stimuli drawn from a common stimulus space into verbal and nonverbal categories using 2 experiments. Experiment 1 explores the effect of target duration on verbal and nonverbal categorization, using backward masking to interrupt visual processing. With categories equated for difficulty at long and short target durations, intermediate target durations show an advantage for verbal categorization over nonverbal categorization. Experiment 2 tests whether the results of Experiment 1 can be explained by shorter target durations producing a smaller signal-to-noise ratio in the categorization stimulus. To test this possibility, Experiment 2 used integration masking with the same stimuli, categories, and masks as Experiment 1, with a varying level of mask opacity. As predicted, low mask opacity yielded results similar to long target durations, while high mask opacity yielded results similar to short target durations. Importantly, intermediate mask opacity produced an advantage for verbal categorization over nonverbal categorization, similar to intermediate target durations. These results suggest that verbal and nonverbal categorization are affected differently by manipulations of the signal-to-noise ratio of the stimulus, consistent with multiple-system theories of categorization. The results further suggest that the information used in verbal categorization may be more digital (and more robust to a low signal-to-noise ratio), while the information used in nonverbal categorization may be more analog (and less robust to a low signal-to-noise ratio). The article concludes with a discussion of how these new results affect the use of masking in perceptual categorization and multiple-system theories of perceptual category learning. (c) 2015 APA, all rights reserved.

  19. Human Amygdala Tracks a Feature-Based Valence Signal Embedded within the Facial Expression of Surprise.

    PubMed

    Kim, M Justin; Mattek, Alison M; Bennett, Randi H; Solomon, Kimberly M; Shin, Jin; Whalen, Paul J

    2017-09-27

    Human amygdala function has been traditionally associated with processing the affective valence (negative vs positive) of an emotionally charged event, especially those that signal fear or threat. However, this account of human amygdala function can be explained by alternative views, which posit that the amygdala might be tuned to either (1) general emotional arousal (activation vs deactivation) or (2) specific emotion categories (fear vs happy). Delineating the pure effects of valence independent of arousal or emotion category is a challenging task, given that these variables naturally covary under many circumstances. To circumvent this issue and test the sensitivity of the human amygdala to valence values specifically, we measured the dimension of valence within the single facial expression category of surprise. Given the inherent valence ambiguity of this category, we show that surprised expression exemplars are attributed valence and arousal values that are uniquely and naturally uncorrelated. We then present fMRI data from both sexes, showing that the amygdala tracks these consensus valence values. Finally, we provide evidence that these valence values are linked to specific visual features of the mouth region, isolating the signal by which the amygdala detects this valence information. SIGNIFICANCE STATEMENT There is an open question as to whether human amygdala function tracks the valence value of cues in the environment, as opposed to either a more general emotional arousal value or a more specific emotion category distinction. Here, we demonstrate the utility of surprised facial expressions because exemplars within this emotion category take on valence values spanning the dimension of bipolar valence (positive to negative) at a consistent level of emotional arousal. Functional neuroimaging data showed that amygdala responses tracked the valence of surprised facial expressions, unconfounded by arousal. 
Furthermore, a machine learning classifier identified particular visual features of the mouth region that predicted this valence effect, isolating the specific visual signal that might be driving this neural valence response. Copyright © 2017 the authors.

  20. Bag of Visual Words Model with Deep Spatial Features for Geographical Scene Classification

    PubMed Central

    Wu, Lin

    2017-01-01

With the popular use of geotagged images, more and more research effort has been placed on geographical scene classification, in which valid spatial feature selection can significantly boost final performance. The bag of visual words (BoVW) model can select features well for geographical scene classification; nevertheless, it works effectively only if the provided feature extractor is well matched. In this paper, we use convolutional neural networks (CNNs) to optimize the proposed feature extractor so that it can learn more suitable visual vocabularies from the geotagged images. Our approach achieves better performance than standard BoVW for geographical scene classification on three datasets containing a variety of scene categories. PMID:28706534
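The bag-of-visual-words model the paper builds on has two steps: cluster local descriptors into a visual vocabulary, then represent each image as a histogram of its nearest visual words. A minimal sketch, with random vectors standing in for the CNN-derived local descriptors (an assumption; the paper's extractor is a trained CNN) and a tiny hand-rolled k-means:

```python
# Sketch of a bag-of-visual-words encoder. The "descriptors" are random
# stand-ins for local image features; vocabulary size is arbitrary.
import numpy as np

rng = np.random.default_rng(42)
descriptors_per_image = [rng.normal(size=(50, 8)) for _ in range(10)]
all_desc = np.vstack(descriptors_per_image)

# 1. Build the visual vocabulary by clustering all descriptors (tiny k-means).
k = 16
centers = all_desc[rng.choice(len(all_desc), k, replace=False)]
for _ in range(20):
    dist = np.linalg.norm(all_desc[:, None, :] - centers[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    for j in range(k):
        if np.any(labels == j):           # skip empty clusters
            centers[j] = all_desc[labels == j].mean(axis=0)

# 2. Encode each image as a normalized histogram of visual-word counts.
def bovw_histogram(desc):
    dist = np.linalg.norm(desc[:, None, :] - centers[None, :, :], axis=2)
    words = dist.argmin(axis=1)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

histograms = np.array([bovw_histogram(d) for d in descriptors_per_image])
```

The resulting fixed-length histograms can then be fed to any classifier; the paper's contribution is, in effect, replacing the hand-crafted descriptor stage with CNN features so the learned vocabulary fits the imagery better.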

  1. Development of visual category selectivity in ventral visual cortex does not require visual experience

    PubMed Central

    van den Hurk, Job; Van Baelen, Marc; Op de Beeck, Hans P.

    2017-01-01

    To what extent does functional brain organization rely on sensory input? Here, we show that for the penultimate visual-processing region, ventral-temporal cortex (VTC), visual experience is not the origin of its fundamental organizational property, category selectivity. In the fMRI study reported here, we presented 14 congenitally blind participants with face-, body-, scene-, and object-related natural sounds and presented 20 healthy controls with both auditory and visual stimuli from these categories. Using macroanatomical alignment, response mapping, and surface-based multivoxel pattern analysis, we demonstrated that VTC in blind individuals shows robust discriminatory responses elicited by the four categories and that these patterns of activity in blind subjects could successfully predict the visual categories in sighted controls. These findings were confirmed in a subset of blind participants born without eyes and thus deprived from all light perception since conception. The sounds also could be decoded in primary visual and primary auditory cortex, but these regions did not sustain generalization across modalities. Surprisingly, although not as strong as visual responses, selectivity for auditory stimulation in visual cortex was stronger in blind individuals than in controls. The opposite was observed in primary auditory cortex. Overall, we demonstrated a striking similarity in the cortical response layout of VTC in blind individuals and sighted controls, demonstrating that the overall category-selective map in extrastriate cortex develops independently from visual experience. PMID:28507127

  2. Cortical Representations of Symbols, Objects, and Faces Are Pruned Back during Early Childhood

    PubMed Central

    Pinel, Philippe; Dehaene, Stanislas; Pelphrey, Kevin A.

    2011-01-01

    Regions of human ventral extrastriate visual cortex develop specializations for natural categories (e.g., faces) and cultural artifacts (e.g., words). In adults, category-based specializations manifest as greater neural responses in visual regions of the brain (e.g., fusiform gyrus) to some categories over others. However, few studies have examined how these specializations originate in the brains of children. Moreover, it is as yet unknown whether the development of visual specializations hinges on “increases” in the response to the preferred categories, “decreases” in the responses to nonpreferred categories, or “both.” This question is relevant to a long-standing debate concerning whether neural development is driven by building up or pruning back representations. To explore these questions, we measured patterns of visual activity in 4-year-old children for 4 categories (faces, letters, numbers, and shoes) using functional magnetic resonance imaging. We report 2 key findings regarding the development of visual categories in the brain: 1) the categories “faces” and “symbols” doubly dissociate in the fusiform gyrus before children can read and 2) the development of category-specific responses in young children depends on cortical responses to nonpreferred categories that decrease as preferred category knowledge is acquired. PMID:20457691

  3. Diagnosticity and Prototypicality in Category Learning: A Comparison of Inference Learning and Classification Learning

    ERIC Educational Resources Information Center

    Chin-Parker, Seth; Ross, Brian H.

    2004-01-01

    Category knowledge allows for both the determination of category membership and an understanding of what the members of a category are like. Diagnostic information is used to determine category membership; prototypical information reflects the most likely features given category membership. Two experiments examined 2 means of category learning,…

  4. Understanding Clinical Mammographic Breast Density Assessment: a Deep Learning Perspective.

    PubMed

    Mohamed, Aly A; Luo, Yahong; Peng, Hong; Jankowitz, Rachel C; Wu, Shandong

    2017-09-20

Mammographic breast density has been established as an independent risk marker for developing breast cancer. Breast density assessment is a routine clinical need in breast cancer screening, and the current standard uses the Breast Imaging Reporting and Data System (BI-RADS) criteria, comprising four qualitative categories (i.e., fatty, scattered density, heterogeneously dense, or extremely dense). In each mammogram examination, a breast is typically imaged with two different views, i.e., the mediolateral oblique (MLO) view and the craniocaudal (CC) view. BI-RADS-based breast density assessment is a qualitative process made by visual observation of both the MLO and CC views by radiologists, with notable inter- and intra-reader variability. In order to maintain consistency and accuracy in BI-RADS-based breast density assessment, gaining an understanding of radiologists' reading behaviors is instructive. In this study, we leveraged deep learning to investigate how the MLO and CC view images of a mammogram examination may have been used clinically by radiologists in arriving at a BI-RADS density category. We implemented a convolutional neural network (CNN)-based deep learning model aimed at distinguishing the breast density categories using a large (15,415 images) set of real-world clinical mammogram images. Our results showed that classification of density categories (in terms of area under the receiver operating characteristic curve) using MLO view images is significantly better than using the CC view, indicating that it is most likely the MLO view that radiologists have predominantly used to determine breast density BI-RADS categories. Our study holds potential to further interpret radiologists' reading characteristics, enhance personalized clinical training for radiologists, and ultimately reduce reader variation in breast density assessment.
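The MLO-versus-CC comparison above is made in terms of area under the ROC curve. The rank-based AUC (equivalent to the Mann-Whitney U statistic) can be computed directly; the per-image scores below are invented stand-ins for a CNN's outputs, not the study's data:

```python
# Rank-based AUC: the probability that a randomly chosen positive case
# scores higher than a randomly chosen negative one (ties count half).
# All scores below are hypothetical, for illustration only.
def auc(scores_pos, scores_neg):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for "dense" (positive) vs "fatty"
# (negative) cases, from the MLO and CC views of the same examinations.
mlo_auc = auc([0.9, 0.8, 0.85, 0.7], [0.2, 0.4, 0.3, 0.75])
cc_auc = auc([0.7, 0.6, 0.8, 0.5], [0.4, 0.55, 0.3, 0.65])
```

With such per-view AUCs in hand, the study's kind of conclusion ("the MLO view separates the density categories better") reduces to comparing the two values, with a significance test on the difference.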

  5. Children's Category-Based Inferences Affect Classification

    ERIC Educational Resources Information Center

    Ross, Brian H.; Gelman, Susan A.; Rosengren, Karl S.

    2005-01-01

    Children learn many new categories and make inferences about these categories. Much work has examined how children make inferences on the basis of category knowledge. However, inferences may also affect what is learned about a category. Four experiments examine whether category-based inferences during category learning influence category knowledge…

  6. The transfer of category knowledge by macaques (Macaca mulatta) and humans (Homo sapiens).

    PubMed

    Zakrzewski, Alexandria C; Church, Barbara A; Smith, J David

    2018-02-01

    Cognitive psychologists distinguish implicit, procedural category learning (stimulus-response associations learned outside declarative cognition) from explicit-declarative category learning (conscious category rules). These systems are dissociated by category learning tasks with either a multidimensional, information-integration (II) solution or a unidimensional, rule-based (RB) solution. In the present experiments, humans and two monkeys learned II and RB category tasks fostering implicit and explicit learning, respectively. Then they received occasional transfer trials-never directly reinforced-drawn from untrained regions of the stimulus space. We hypothesized that implicit-procedural category learning-allied to associative learning-would transfer weakly because it is yoked to the training stimuli. This result was confirmed for humans and monkeys. We hypothesized that explicit category learning-allied to abstract category rules-would transfer robustly. This result was confirmed only for humans. That is, humans displayed explicit category knowledge that transferred flawlessly. Monkeys did not. This result illuminates the distinctive abstractness, stimulus independence, and representational portability of humans' explicit category rules. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  7. Task alters category representations in prefrontal but not high-level visual cortex.

    PubMed

    Bugatus, Lior; Weiner, Kevin S; Grill-Spector, Kalanit

    2017-07-15

A central question in neuroscience is how cognitive tasks affect category representations across the human brain. Regions in lateral occipito-temporal cortex (LOTC), ventral temporal cortex (VTC), and ventro-lateral prefrontal cortex (VLPFC) constitute the extended "what" pathway, which is considered instrumental for visual category processing. However, it is unknown (1) whether distributed responses across LOTC, VTC, and VLPFC explicitly represent category, task, or some combination of both, and (2) in what way representations across these subdivisions of the extended "what" pathway may differ. To fill these gaps in knowledge, we scanned 12 participants using fMRI to test the effect of category and task on distributed responses across LOTC, VTC, and VLPFC. Results reveal that task and category modulate responses in both high-level visual regions and prefrontal cortex. However, we found fundamentally different types of representations across the brain. Distributed responses in high-level visual regions are more strongly driven by category than task, and exhibit task-independent category representations. In contrast, distributed responses in prefrontal cortex are more strongly driven by task than category, and contain task-dependent category representations. Together, these findings of differential representations across the brain support a new idea that LOTC and VTC maintain stable category representations allowing efficient processing of visual information, while prefrontal cortex contains flexible representations in which category information may emerge only when relevant to the task. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. An Analysis of Category Management of Service Contracts

    DTIC Science & Technology

    2017-12-01

management teams a way to make informed, data-driven decisions. Data-driven decisions derived from clustering not only align with Category...savings. Furthermore, this methodology provides a data-driven visualization to inform sound business decisions on potential Category Management...Category Management initiatives. The Maptitude software will allow future research to collect data and develop visualizations to inform Category

  9. Procedurally Mediated Social Inferences: The Case of Category Accessibility Effects.

    DTIC Science & Technology

    1984-12-01

New York: Academic. Craik, F. I. M., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning...more "deeply" encoded semantic features (cf. Craik & Lockhart, 1972). (A few theorists assume that visual images may also be used as an alternative...semantically rather than phonemically or graphemically (Craik & Lockhart, 1972). It is this familiar type of declarative memory of which we are usually

  10. Time Course of Visual Attention in Infant Categorization of Cats versus Dogs: Evidence for a Head Bias as Revealed through Eye Tracking

    ERIC Educational Resources Information Center

    Quinn, Paul C.; Doran, Matthew M.; Reiss, Jason E.; Hoffman, James E.

    2009-01-01

    Previous looking time studies have shown that infants use the heads of cat and dog images to form category representations for these animal classes. The present research used an eye-tracking procedure to determine the time course of attention to the head and whether it reflects a preexisting bias or online learning. Six- to 7-month-olds were…

  11. Toward a dual-learning systems model of speech category learning

    PubMed Central

    Chandrasekaran, Bharath; Koslov, Seth R.; Maddox, W. T.

    2014-01-01

    More than two decades of work in vision posits the existence of dual-learning systems of category learning. The reflective system uses working memory to develop and test rules for classifying in an explicit fashion, while the reflexive system operates by implicitly associating perception with actions that lead to reinforcement. Dual-learning systems models hypothesize that in learning natural categories, learners initially use the reflective system and, with practice, transfer control to the reflexive system. The role of reflective and reflexive systems in auditory category learning and more specifically in speech category learning has not been systematically examined. In this article, we describe a neurobiologically constrained dual-learning systems theoretical framework that is currently being developed in speech category learning and review recent applications of this framework. Using behavioral and computational modeling approaches, we provide evidence that speech category learning is predominantly mediated by the reflexive learning system. In one application, we explore the effects of normal aging on non-speech and speech category learning. Prominently, we find a large age-related deficit in speech learning. The computational modeling suggests that older adults are less likely to transition from simple, reflective, unidimensional rules to more complex, reflexive, multi-dimensional rules. In a second application, we summarize a recent study examining auditory category learning in individuals with elevated depressive symptoms. We find a deficit in reflective-optimal and an enhancement in reflexive-optimal auditory category learning. Interestingly, individuals with elevated depressive symptoms also show an advantage in learning speech categories. We end with a brief summary and description of a number of future directions. PMID:25132827

  12. Color categories affect pre-attentive color perception.

    PubMed

    Clifford, Alexandra; Holmes, Amanda; Davies, Ian R L; Franklin, Anna

    2010-10-01

    Categorical perception (CP) of color is the faster and/or more accurate discrimination of colors from different categories than equivalently spaced colors from the same category. Here, we investigate whether color CP at early stages of chromatic processing is independent of top-down modulation from attention. A visual oddball task was employed where frequent and infrequent colored stimuli were either same- or different-category, with chromatic differences equated across conditions. Stimuli were presented peripheral to a central distractor task to elicit an event-related potential (ERP) known as the visual mismatch negativity (vMMN). The vMMN is an index of automatic and pre-attentive visual change detection arising from generating loci in visual cortices. The results revealed a greater vMMN for different-category than same-category change detection when stimuli appeared in the lower visual field, and an absence of attention-related ERP components. The findings provide the first clear evidence for an automatic and pre-attentive categorical code for color. Copyright © 2010 Elsevier B.V. All rights reserved.

  13. Additional Remarks on Designing Category-Level Attributes for Discriminative Visual Recognition

    DTIC Science & Technology

    2013-01-01

Discriminative Visual Recognition ∗ Felix X. Yu†, Liangliang Cao§, Rogerio S. Feris§, John R. Smith§, Shih-Fu Chang† † Columbia University § IBM T. J...for Designing Category-Level Attributes for Discriminative Visual Recognition [3]. We first provide an overview of the proposed approach in...2013 to 00-00-2013 4. TITLE AND SUBTITLE Additional Remarks on Designing Category-Level Attributes for Discriminative Visual Recognition 5a

  14. The development of automaticity in short-term memory search: Item-response learning and category learning.

    PubMed

    Cao, Rui; Nosofsky, Robert M; Shiffrin, Richard M

    2017-05-01

    In short-term-memory (STM)-search tasks, observers judge whether a test probe was present in a short list of study items. Here we investigated the long-term learning mechanisms that lead to the highly efficient STM-search performance observed under conditions of consistent-mapping (CM) training, in which targets and foils never switch roles across trials. In item-response learning, subjects learn long-term mappings between individual items and target versus foil responses. In category learning, subjects learn high-level codes corresponding to separate sets of items and learn to attach old versus new responses to these category codes. To distinguish between these 2 forms of learning, we tested subjects in categorized varied mapping (CV) conditions: There were 2 distinct categories of items, but the assignment of categories to target versus foil responses varied across trials. In cases involving arbitrary categories, CV performance closely resembled standard varied-mapping performance without categories and departed dramatically from CM performance, supporting the item-response-learning hypothesis. In cases involving prelearned categories, CV performance resembled CM performance, as long as there was sufficient practice or steps taken to reduce trial-to-trial category-switching costs. This pattern of results supports the category-coding hypothesis for sufficiently well-learned categories. Thus, item-response learning occurs rapidly and is used early in CM training; category learning is much slower but is eventually adopted and is used to increase the efficiency of search beyond that available from item-response learning. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Visual search for object categories is predicted by the representational architecture of high-level visual cortex

    PubMed Central

    Alvarez, George A.; Nakayama, Ken; Konkle, Talia

    2016-01-01

    Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways when considering both macroscale sectors as well as smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system. NEW & NOTEWORTHY Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. 
These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing. PMID:27832600
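
The representational similarity analysis described in this record can be sketched as follows. This is a minimal illustration with entirely synthetic data: the "neural" patterns and search times are simulated, and the negative relationship between representational dissimilarity and search time is built in by construction rather than taken from the study.

```python
# Illustrative sketch only: both the "neural" patterns and the search times
# are simulated, with the brain/behavior link built in by construction.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_categories, n_voxels = 8, 100

# One simulated response pattern per object category
patterns = rng.normal(size=(n_categories, n_voxels))

# Representational dissimilarity: 1 - Pearson r for each category pair
neural_rdm = pdist(patterns, metric="correlation")

# Simulated behavior: search is slower when target and distractor categories
# are represented more similarly, so RT falls as dissimilarity rises
search_rt = 1.0 - 0.5 * neural_rdm + rng.normal(0.0, 0.05, neural_rdm.size)

rho, p = spearmanr(neural_rdm, search_rt)
print(f"brain/behavior Spearman rho = {rho:.2f}")
```

A real analysis would replace the simulated patterns with measured responses per region and compare the resulting correlations across regions.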

  16. The Processing Speed of Scene Categorization at Multiple Levels of Description: The Superordinate Advantage Revisited.

    PubMed

    Banno, Hayaki; Saiki, Jun

    2015-03-01

Recent studies have sought to determine which levels of categories are processed first in visual scene categorization and have shown that the natural and man-made superordinate-level categories are understood faster than are basic-level categories. The current study examined the robustness of the superordinate-level advantage in a visual scene categorization task. A go/no-go categorization task was evaluated with response time distribution analysis using an ex-Gaussian template. A visual scene was categorized at either the superordinate or the basic level, and two basic-level categories forming a superordinate category were judged as either similar or dissimilar to each other. First, outdoor/indoor and natural/man-made were used as superordinate categories to investigate whether the advantage could be generalized beyond the natural/man-made boundary. Second, the similarity of the image set forming a superordinate category was manipulated. We predicted that decreasing image set similarity within the superordinate-level category would work against the speed advantage. We found that basic-level categorization was faster than outdoor/indoor categorization when the outdoor category comprised dissimilar basic-level categories. Our results indicate that the superordinate-level advantage in visual scene categorization is labile across different categories and category structures. © 2015 SAGE Publications.
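
The ex-Gaussian response time analysis used in this record can be sketched with SciPy, which implements the ex-Gaussian as `exponnorm`. The response times below are simulated and the parameter values are illustrative only, not values from the study.

```python
# Illustrative sketch only: simulated response times, not data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Ex-Gaussian RTs: Gaussian component (mu, sigma) plus exponential tail (tau)
mu, sigma, tau = 0.45, 0.05, 0.15
rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# SciPy parametrizes the ex-Gaussian as exponnorm(K, loc, scale), with
# K = tau / sigma, loc = mu, scale = sigma
K, loc, scale = stats.exponnorm.fit(rts)
tau_hat = K * scale
print(f"mu ~ {loc:.3f}, sigma ~ {scale:.3f}, tau ~ {tau_hat:.3f}")
```

Comparing fitted mu and tau across conditions is what lets such analyses separate a shift of the whole RT distribution from a lengthening of its slow tail.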

  17. The Role of Corticostriatal Systems in Speech Category Learning

    PubMed Central

    Yi, Han-Gyol; Maddox, W. Todd; Mumford, Jeanette A.; Chandrasekaran, Bharath

    2016-01-01

    One of the most difficult category learning problems for humans is learning nonnative speech categories. While feedback-based category training can enhance speech learning, the mechanisms underlying these benefits are unclear. In this functional magnetic resonance imaging study, we investigated neural and computational mechanisms underlying feedback-dependent speech category learning in adults. Positive feedback activated a large corticostriatal network including the dorsolateral prefrontal cortex, inferior parietal lobule, middle temporal gyrus, caudate, putamen, and the ventral striatum. Successful learning was contingent upon the activity of domain-general category learning systems: the fast-learning reflective system, involving the dorsolateral prefrontal cortex that develops and tests explicit rules based on the feedback content, and the slow-learning reflexive system, involving the putamen in which the stimuli are implicitly associated with category responses based on the reward value in feedback. Computational modeling of response strategies revealed significant use of reflective strategies early in training and greater use of reflexive strategies later in training. Reflexive strategy use was associated with increased activation in the putamen. Our results demonstrate a critical role for the reflexive corticostriatal learning system as a function of response strategy and proficiency during speech category learning. Keywords: category learning, fMRI, corticostriatal systems, speech, putamen PMID:25331600

  18. Learning preference as a predictor of academic performance in first year accelerated graduate entry nursing students: a prospective follow-up study.

    PubMed

    Koch, Jane; Salamonson, Yenna; Rolley, John X; Davidson, Patricia M

    2011-08-01

The growth of accelerated graduate entry nursing programs has challenged traditional approaches to teaching and learning. To date, limited research has been undertaken on the role of learning preferences, language proficiency and academic performance in accelerated programs. Sixty-two first year accelerated graduate entry nursing students, in a single cohort at a university in the western region of Sydney, Australia, were surveyed to assess their learning preference using the Visual, Aural, Read/write and Kinaesthetic (VARK) learning preference questionnaire, together with sociodemographic data, English language acculturation and perceived academic control. Six months following course commencement, participants' grade point average (GPA) was obtained as a measure of academic performance. A 93% response rate was achieved. The majority of students (62%) reported preference for multiple approaches to learning, with the kinaesthetic sensory mode a significant (p=0.009) predictor of academic performance. Students who spoke only English at home had higher mean scores on two of the four VARK sensory modalities, visual and kinaesthetic, than students who spoke another language at home. Further research is warranted to investigate the reasons why the kinaesthetic sensory mode is a predictor of academic performance and to what extent the VARK mean scores of the four learning preference(s) change with improved English language proficiency. Copyright © 2010 Elsevier Ltd. All rights reserved.

  19. The learning style preferences of chiropractic students: A cross-sectional study

    PubMed Central

    Whillier, Stephney; Lystad, Reidar P.; Abi-Arrage, David; McPhie, Christopher; Johnston, Samara; Williams, Christopher; Rice, Mark

    2014-01-01

Objective The aims of our study were to measure the learning style preferences of chiropractic students and to assess whether they differ across the 5 years of chiropractic study. Methods A total of 407 (41.4% females) full-degree, undergraduate, and postgraduate students enrolled in an Australian chiropractic program agreed to participate in a cross-sectional survey comprising basic demographic information and the Visual, Aural, Read/Write, Kinesthetic (VARK) questionnaire, which identifies learning preferences on four different subscales: visual, aural, reading/writing, and kinesthetic. Multivariate analysis of variance and the χ2 test were used to check for differences in continuous (VARK scores) and categorical (VARK category preference) outcome variables. Results The majority of chiropractic students (56.0%) were found to be multimodal learners. Compared to the other learning styles preferences, kinesthetic learning was preferred by a significantly greater proportion of students (65.4%, p < .001) and received a significantly greater mean VARK score (5.66 ± 2.47, p < .001). Conclusions To the best of our knowledge, this is the first time chiropractic students have been shown to be largely multimodal learners with a preference for kinesthetic learning. While this knowledge may be beneficial in the structuring of future curricula, more thorough research must be conducted to show any beneficial relationship between learning style preferences and teaching methods. PMID:24350945

  20. Experience improves feature extraction in Drosophila.

    PubMed

    Peng, Yueqing; Xi, Wang; Zhang, Wei; Zhang, Ke; Guo, Aike

    2007-05-09

    Previous exposure to a pattern in the visual scene can enhance subsequent recognition of that pattern in many species from honeybees to humans. However, whether previous experience with a visual feature of an object, such as color or shape, can also facilitate later recognition of that particular feature from multiple visual features is largely unknown. Visual feature extraction is the ability to select the key component from multiple visual features. Using a visual flight simulator, we designed a novel protocol for visual feature extraction to investigate the effects of previous experience on visual reinforcement learning in Drosophila. We found that, after conditioning with a visual feature of objects among combinatorial shape-color features, wild-type flies exhibited poor ability to extract the correct visual feature. However, the ability for visual feature extraction was greatly enhanced in flies trained previously with that visual feature alone. Moreover, we demonstrated that flies might possess the ability to extract the abstract category of "shape" but not a particular shape. Finally, this experience-dependent feature extraction is absent in flies with defective MBs, one of the central brain structures in Drosophila. Our results indicate that previous experience can enhance visual feature extraction in Drosophila and that MBs are required for this experience-dependent visual cognition.

  1. Remembering faces and scenes: The mixed-category advantage in visual working memory.

    PubMed

    Jiang, Yuhong V; Remington, Roger W; Asaad, Anthony; Lee, Hyejin J; Mikkalson, Taylor C

    2016-09-01

    We examined the mixed-category memory advantage for faces and scenes to determine how domain-specific cortical resources constrain visual working memory. Consistent with previous findings, visual working memory for a display of 2 faces and 2 scenes was better than that for a display of 4 faces or 4 scenes. This pattern was unaffected by manipulations of encoding duration. However, the mixed-category advantage was carried solely by faces: Memory for scenes was not better when scenes were encoded with faces rather than with other scenes. The asymmetry between faces and scenes was found when items were presented simultaneously or sequentially, centrally, or peripherally, and when scenes were drawn from a narrow category. A further experiment showed a mixed-category advantage in memory for faces and bodies, but not in memory for scenes and objects. The results suggest that unique category-specific interactions contribute significantly to the mixed-category advantage in visual working memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. Statistical Inference in the Learning of Novel Phonetic Categories

    ERIC Educational Resources Information Center

    Zhao, Yuan

    2010-01-01

    Learning a phonetic category (or any linguistic category) requires integrating different sources of information. A crucial unsolved problem for phonetic learning is how this integration occurs: how can we update our previous knowledge about a phonetic category as we hear new exemplars of the category? One model of learning is Bayesian Inference,…
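
The updating problem this record poses, combining prior knowledge of a phonetic category with new exemplars, can be illustrated with a conjugate normal-normal Bayesian sketch. All numbers below (formant values, variances) are hypothetical, chosen only to show the mechanics of the update.

```python
# Illustrative sketch only: formant values and variances are hypothetical.

mu0, var0 = 500.0, 100.0 ** 2   # prior belief about a category mean (Hz)
exemplar_var = 50.0 ** 2        # assumed within-category variance

def update(mu, var, x):
    """Posterior mean/variance after one exemplar x (normal-normal conjugacy)."""
    post_var = 1.0 / (1.0 / var + 1.0 / exemplar_var)
    post_mu = post_var * (mu / var + x / exemplar_var)
    return post_mu, post_var

mu, var = mu0, var0
for x in [560.0, 540.0, 555.0, 548.0]:   # new exemplars of the category
    mu, var = update(mu, var, x)

print(round(mu, 1), round(var, 1))   # mean shifts toward exemplars, variance shrinks
```

Each exemplar pulls the category mean toward the data in proportion to the relative precisions of prior and likelihood, which is the basic behavior a Bayesian model of phonetic learning exploits.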

  3. Basic level category structure emerges gradually across human ventral visual cortex.

    PubMed

    Iordan, Marius Cătălin; Greene, Michelle R; Beck, Diane M; Fei-Fei, Li

    2015-07-01

    Objects can be simultaneously categorized at multiple levels of specificity ranging from very broad ("natural object") to very distinct ("Mr. Woof"), with a mid-level of generality (basic level: "dog") often providing the most cognitively useful distinction between categories. It is unknown, however, how this hierarchical representation is achieved in the brain. Using multivoxel pattern analyses, we examined how well each taxonomic level (superordinate, basic, and subordinate) of real-world object categories is represented across occipitotemporal cortex. We found that, although in early visual cortex objects are best represented at the subordinate level (an effect mostly driven by low-level feature overlap between objects in the same category), this advantage diminishes compared to the basic level as we move up the visual hierarchy, disappearing in object-selective regions of occipitotemporal cortex. This pattern stems from a combined increase in within-category similarity (category cohesion) and between-category dissimilarity (category distinctiveness) of neural activity patterns at the basic level, relative to both subordinate and superordinate levels, suggesting that successive visual areas may be optimizing basic level representations.
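
The cohesion and distinctiveness measures this record describes can be sketched on synthetic activity patterns: for a well-separated basic level, within-category pattern similarity should exceed between-category similarity. The category names, pattern sizes, and noise levels below are invented for illustration.

```python
# Illustrative sketch only: "dog"/"cat" activity patterns are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_items, n_units = 20, 50

def category_patterns(center, noise=0.5):
    """Simulated activity patterns clustered around a category center."""
    return center + rng.normal(0.0, noise, size=(n_items, n_units))

dogs = category_patterns(rng.normal(size=n_units))
cats = category_patterns(rng.normal(size=n_units))

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Cohesion: mean pattern similarity within a category
within = np.mean([corr(dogs[i], dogs[j])
                  for i in range(n_items) for j in range(i + 1, n_items)])
# Cross-category similarity (lower values = higher distinctiveness)
between = np.mean([corr(d, c) for d in dogs for c in cats])

print(f"within = {within:.2f}, between = {between:.2f}")
```

The study's finding can be read as this within/between gap growing at the basic level as one moves up the visual hierarchy.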

  4. Classification versus inference learning contrasted with real-world categories.

    PubMed

    Jones, Erin L; Ross, Brian H

    2011-07-01

    Categories are learned and used in a variety of ways, but the research focus has been on classification learning. Recent work contrasting classification with inference learning of categories found important later differences in category performance. However, theoretical accounts differ on whether this is due to an inherent difference between the tasks or to the implementation decisions. The inherent-difference explanation argues that inference learners focus on the internal structure of the categories--what each category is like--while classification learners focus on diagnostic information to predict category membership. In two experiments, using real-world categories and controlling for earlier methodological differences, inference learners learned more about what each category was like than did classification learners, as evidenced by higher performance on a novel classification test. These results suggest that there is an inherent difference between learning new categories by classifying an item versus inferring a feature.

  5. Learning about the internal structure of categories through classification and feature inference.

    PubMed

    Jee, Benjamin D; Wiley, Jennifer

    2014-01-01

    Previous research on category learning has found that classification tasks produce representations that are skewed toward diagnostic feature dimensions, whereas feature inference tasks lead to richer representations of within-category structure. Yet, prior studies often measure category knowledge through tasks that involve identifying only the typical features of a category. This neglects an important aspect of a category's internal structure: how typical and atypical features are distributed within a category. The present experiments tested the hypothesis that inference learning results in richer knowledge of internal category structure than classification learning. We introduced several new measures to probe learners' representations of within-category structure. Experiment 1 found that participants in the inference condition learned and used a wider range of feature dimensions than classification learners. Classification learners, however, were more sensitive to the presence of atypical features within categories. Experiment 2 provided converging evidence that classification learners were more likely to incorporate atypical features into their representations. Inference learners were less likely to encode atypical category features, even in a "partial inference" condition that focused learners' attention on the feature dimensions relevant to classification. Overall, these results are contrary to the hypothesis that inference learning produces superior knowledge of within-category structure. Although inference learning promoted representations that included a broad range of category-typical features, classification learning promoted greater sensitivity to the distribution of typical and atypical features within categories.

  6. Relationships between academic performance, SES school type and perceptual-motor skills in first grade South African learners: NW-CHILD study.

    PubMed

    Pienaar, A E; Barhorst, R; Twisk, J W R

    2014-05-01

Perceptual-motor skills contribute to a variety of basic learning skills associated with normal academic success. This study aimed to determine the relationship between academic performance and perceptual-motor skills in first grade South African learners and whether low SES (socio-economic status) school type plays a role in such a relationship. This cross-sectional study of the baseline measurements of the NW-CHILD longitudinal study included a stratified random sample of first grade learners (n = 812; 418 boys and 394 girls), with a mean age of 6.78 years ± 0.49, living in the North West Province (NW) of South Africa. The Beery-Buktenica Developmental Test of Visual-Motor Integration-4 (VMI) was used to assess visual-motor integration, visual perception and hand control, while the Bruininks-Oseretsky Test of Motor Proficiency, short form (BOT2-SF) assessed overall motor proficiency. Academic performance in math, reading and writing was assessed with the Mastery of Basic Learning Areas Questionnaire. Linear mixed models analysis was performed with SPSS to determine possible differences between the different VMI and BOT2-SF standard scores in different math, reading and writing mastery categories ranging from no mastery to outstanding mastery. A multinomial multilevel logistic regression analysis was performed to assess the relationship between a clustered score of academic performance and the different determinants. A strong relationship was established between academic performance and VMI, visual perception, hand control and motor proficiency, with a significant relationship between a clustered academic performance score, visual-motor integration and visual perception. A negative association was established between low SES school type and academic performance, with a common perceptual-motor foundation shared by all basic learning areas. 
Visual-motor integration, visual perception, hand control and motor proficiency are closely related to basic academic skills required in the first formal school year, especially among learners in low SES type schools. © 2013 John Wiley & Sons Ltd.

  7. Learning multisensory representations for auditory-visual transfer of sequence category knowledge: a probabilistic language of thought approach.

    PubMed

    Yildirim, Ilker; Jacobs, Robert A

    2015-06-01

    If a person is trained to recognize or categorize objects or events using one sensory modality, the person can often recognize or categorize those same (or similar) objects and events via a novel modality. This phenomenon is an instance of cross-modal transfer of knowledge. Here, we study the Multisensory Hypothesis which states that people extract the intrinsic, modality-independent properties of objects and events, and represent these properties in multisensory representations. These representations underlie cross-modal transfer of knowledge. We conducted an experiment evaluating whether people transfer sequence category knowledge across auditory and visual domains. Our experimental data clearly indicate that we do. We also developed a computational model accounting for our experimental results. Consistent with the probabilistic language of thought approach to cognitive modeling, our model formalizes multisensory representations as symbolic "computer programs" and uses Bayesian inference to learn these representations. Because the model demonstrates how the acquisition and use of amodal, multisensory representations can underlie cross-modal transfer of knowledge, and because the model accounts for subjects' experimental performances, our work lends credence to the Multisensory Hypothesis. Overall, our work suggests that people automatically extract and represent objects' and events' intrinsic properties, and use these properties to process and understand the same (and similar) objects and events when they are perceived through novel sensory modalities.

  8. Modelling eye movements in a categorical search task

    PubMed Central

    Zelinsky, Gregory J.; Adeli, Hossein; Peng, Yifan; Samaras, Dimitris

    2013-01-01

We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search. PMID:24018720
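
The model's core move, turning distances from a classification boundary into graded evidence for the target category, can be sketched as follows. For self-containment this substitutes a least-squares linear classifier for the paper's support vector machine, and the "features" are random numbers standing in for image patch descriptors, so this is a sketch of the idea rather than the published implementation.

```python
# Illustrative sketch only. A least-squares linear classifier stands in for
# the paper's support vector machine, and the "features" are random numbers,
# not image data.
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 10

# Synthetic features for target ("teddy bear") vs. distractor patches
X_pos = rng.normal(1.0, 1.0, size=(n, d))
X_neg = rng.normal(-1.0, 1.0, size=(n, d))
X = np.vstack([X_pos, X_neg])
y = np.array([1.0] * n + [-1.0] * n)

# Fit a linear boundary by least squares (bias term appended)
Xb = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

dist = Xb @ w                                 # signed distance from the boundary
evidence = 1.0 / (1.0 + np.exp(-4.0 * dist))  # squash into a [0, 1] evidence value

print(evidence[:n].mean() > evidence[n:].mean())  # targets get higher evidence
```

Applied per pixel or patch of a search image, such evidence values form the probability maps that the model uses to prioritize fixation locations.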

  9. Decoding visual object categories in early somatosensory cortex.

    PubMed

    Smith, Fraser W; Goodale, Melvyn A

    2015-04-01

    Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects. © The Author 2013. Published by Oxford University Press.

  10. Decoding Visual Object Categories in Early Somatosensory Cortex

    PubMed Central

    Smith, Fraser W.; Goodale, Melvyn A.

    2015-01-01

    Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects. PMID:24122136
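The multivariate pattern analysis behind the decoding result above classifies distributed voxel patterns by stimulus category. A minimal sketch, assuming a nearest-centroid classifier with leave-one-out cross-validation (the study's actual classifier and validation scheme may differ):

```python
def centroid(patterns):
    """Mean voxel pattern of a set of trials."""
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def dist2(a, b):
    """Squared Euclidean distance between two voxel patterns."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def loo_decode(patterns, labels):
    """Leave-one-out nearest-centroid decoding: each held-out trial is
    classified by the closer class centroid computed from the remaining
    trials. Returns decoding accuracy."""
    correct = 0
    for i, (p, lab) in enumerate(zip(patterns, labels)):
        cents = {}
        for c in set(labels):
            rest = [q for j, (q, l) in enumerate(zip(patterns, labels))
                    if j != i and l == c]
            cents[c] = centroid(rest)
        guess = min(cents, key=lambda c: dist2(p, cents[c]))
        correct += guess == lab
    return correct / len(patterns)

# Toy data: 4 "face" and 4 "house" trials over 3 voxels.
X = [[1, 0, 0], [0.9, 0.1, 0], [1.1, 0, 0.1], [0.8, 0, 0],
     [0, 1, 1], [0.1, 0.9, 1.1], [0, 1.2, 0.9], [0, 1, 0.8]]
y = ["face"] * 4 + ["house"] * 4
print(loo_decode(X, y))  # -> 1.0
```

Above-chance accuracy on held-out trials is what licenses the claim that a region "contains information" about the category.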

  11. High resolution satellite image indexing and retrieval using SURF features and bag of visual words

    NASA Astrophysics Data System (ADS)

    Bouteldja, Samia; Kourgli, Assia

    2017-03-01

    In this paper, we evaluate the performance of SURF descriptor for high resolution satellite imagery (HRSI) retrieval through a BoVW model on a land-use/land-cover (LULC) dataset. Local feature approaches such as SIFT and SURF descriptors can deal with a large variation of scale, rotation and illumination of the images, providing, therefore, a better discriminative power and retrieval efficiency than global features, especially for HRSI which contain a great range of objects and spatial patterns. Moreover, we combine SURF and color features to improve the retrieval accuracy, and we propose to learn a category-specific dictionary for each image category which results in a more discriminative image representation and boosts the image retrieval performance.
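The bag-of-visual-words (BoVW) model used in this record reduces each image to a histogram of quantized local descriptors. A minimal sketch with a hand-made codebook (a real pipeline would extract SURF descriptors and learn the codebook, e.g. with k-means; the 2-D "descriptors" below are purely illustrative):

```python
def quantize(descriptor, codebook):
    """Index of the nearest visual word (Euclidean) for one local descriptor."""
    return min(range(len(codebook)),
               key=lambda k: sum((d - c) ** 2
                                 for d, c in zip(descriptor, codebook[k])))

def bovw_histogram(descriptors, codebook):
    """Bag of visual words: quantize every local descriptor against the
    codebook and return a normalized word-count histogram, the image's
    retrieval signature."""
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[quantize(d, codebook)] += 1
    total = sum(hist)
    return [h / total for h in hist]

# Hypothetical 2-D "descriptors" and a 3-word codebook.
codebook = [[0, 0], [5, 5], [10, 0]]
descs = [[0.2, 0.1], [4.8, 5.1], [5.2, 4.9], [9.7, 0.3]]
print(bovw_histogram(descs, codebook))  # -> [0.25, 0.5, 0.25]
```

The category-specific dictionaries proposed in the paper amount to learning a separate codebook per image category before building these histograms.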

  12. What explains health in persons with visual impairment?

    PubMed Central

    2014-01-01

    Background Visual impairment is associated with important limitations in functioning. The International Classification of Functioning, Disability and Health (ICF) adopted by the World Health Organisation (WHO) relies on a globally accepted framework for classifying problems in functioning and the influence of contextual factors. Its comprehensive perspective, including biological, individual and social aspects of health, enables the ICF to describe the whole health experience of persons with visual impairment. The objectives of this study are (1) to analyze whether the ICF can be used to comprehensively describe the problems in functioning of persons with visual impairment and the environmental factors that influence their lives and (2) to select the ICF categories that best capture self-perceived health of persons with visual impairment. Methods Data from 105 persons with visual impairment were collected, including socio-demographic data, vision-related data, the Extended ICF Checklist and the visual analogue scale of the EuroQoL-5D, to assess self-perceived health. Descriptive statistics and a Group Lasso regression were performed. The main outcome measures were functioning defined as impairments in Body functions and Body structures, limitations in Activities and restrictions in Participation, influencing Environmental factors and self-perceived health. Results In total, 120 ICF categories covering a broad range of Body functions, Body structures, aspects of Activities and Participation and Environmental factors were identified. Thirteen ICF categories that best capture self-perceived health were selected based on the Group Lasso regression. While Activities-and-Participation categories were selected most frequently, the greatest impact on self-perceived health was found in Body-functions categories. Conclusions The ICF can be used as a framework to comprehensively describe the problems of persons with visual impairment and the Environmental factors which influence their lives. There are plenty of ICF categories, Environmental-factors categories in particular, which are relevant to persons with visual impairment but have hardly ever been taken into consideration in the literature or in visual impairment-specific patient-reported outcome measures. PMID:24886326

  13. Ego depletion interferes with rule-defined category learning but not non-rule-defined category learning.

    PubMed

    Minda, John P; Rabi, Rahel

    2015-01-01

    Considerable research on category learning has suggested that many cognitive and environmental factors can have a differential effect on the learning of rule-defined (RD) categories as opposed to the learning of non-rule-defined (NRD) categories. Prior research has also suggested that ego depletion can temporarily reduce the capacity for executive functioning and cognitive flexibility. The present study examined whether temporarily reducing participants' executive functioning via a resource depletion manipulation would differentially impact RD and NRD category learning. Participants were either asked to write a story with no restrictions (the control condition), or without using two common letters (the ego depletion condition). Participants were then asked to learn either a set of RD categories or a set of NRD categories. Resource-depleted participants performed more poorly than controls on the RD task, but did not differ from controls on the NRD task, suggesting that self-regulatory resources are required for successful RD category learning. These results lend support to multiple systems theories and clarify the role of self-regulatory resources within these theories.

  14. Ego depletion interferes with rule-defined category learning but not non-rule-defined category learning

    PubMed Central

    Minda, John P.; Rabi, Rahel

    2015-01-01

    Considerable research on category learning has suggested that many cognitive and environmental factors can have a differential effect on the learning of rule-defined (RD) categories as opposed to the learning of non-rule-defined (NRD) categories. Prior research has also suggested that ego depletion can temporarily reduce the capacity for executive functioning and cognitive flexibility. The present study examined whether temporarily reducing participants’ executive functioning via a resource depletion manipulation would differentially impact RD and NRD category learning. Participants were either asked to write a story with no restrictions (the control condition), or without using two common letters (the ego depletion condition). Participants were then asked to learn either a set of RD categories or a set of NRD categories. Resource-depleted participants performed more poorly than controls on the RD task, but did not differ from controls on the NRD task, suggesting that self-regulatory resources are required for successful RD category learning. These results lend support to multiple systems theories and clarify the role of self-regulatory resources within these theories. PMID:25688220

  15. Beyond sensory images: Object-based representation in the human ventral pathway

    PubMed Central

    Pietrini, Pietro; Furey, Maura L.; Ricciardi, Emiliano; Gobbini, M. Ida; Wu, W.-H. Carolyn; Cohen, Leonardo; Guazzelli, Mario; Haxby, James V.

    2004-01-01

    We investigated whether the topographically organized, category-related patterns of neural response in the ventral visual pathway are a representation of sensory images or a more abstract representation of object form that is not dependent on sensory modality. We used functional MRI to measure patterns of response evoked during visual and tactile recognition of faces and manmade objects in sighted subjects and during tactile recognition in blind subjects. Results showed that visual and tactile recognition evoked category-related patterns of response in a ventral extrastriate visual area in the inferior temporal gyrus that were correlated across modality for manmade objects. Blind subjects also demonstrated category-related patterns of response in this “visual” area, and in more ventral cortical regions in the fusiform gyrus, indicating that these patterns are not due to visual imagery and, furthermore, that visual experience is not necessary for category-related representations to develop in these cortices. These results demonstrate that the representation of objects in the ventral visual pathway is not simply a representation of visual images but, rather, is a representation of more abstract features of object form. PMID:15064396

  16. Similarity relations in visual search predict rapid visual categorization

    PubMed Central

    Mohan, Krithika; Arun, S. P.

    2012-01-01

    How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947
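The central claim — that the time to categorize an object is predictable from its perceived similarity to members within versus outside its category — can be expressed as a simple discriminability index. This is an illustrative formalization with made-up similarity values, not the authors' fitted model:

```python
def mean(xs):
    return sum(xs) / len(xs)

def categorization_index(sim_within, sim_outside):
    """A simple discriminability index: how much more similar an object
    is (on average) to its own category than to non-members. Larger
    values should predict faster, i.e. shorter, categorization times."""
    return mean(sim_within) - mean(sim_outside)

# Hypothetical perceived similarities (e.g. derived from visual-search rates):
typical  = categorization_index([0.9, 0.8, 0.85], [0.2, 0.1, 0.15])
atypical = categorization_index([0.5, 0.4, 0.45], [0.35, 0.3, 0.4])
assert typical > atypical  # atypical items -> weaker index -> slower RTs
print(round(typical, 3), round(atypical, 3))  # -> 0.7 0.1
```

The weaker index for the atypical item reproduces, in miniature, the classic finding that atypical objects take longer to categorize.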

  17. Feature Inference Learning and Eyetracking

    ERIC Educational Resources Information Center

    Rehder, Bob; Colner, Robert M.; Hoffman, Aaron B.

    2009-01-01

    Besides traditional supervised classification learning, people can learn categories by inferring the missing features of category members. It has been proposed that feature inference learning promotes learning a category's internal structure (e.g., its typical features and interfeature correlations) whereas classification promotes the learning of…

  18. Multi-voxel patterns of visual category representation during episodic encoding are predictive of subsequent memory

    PubMed Central

    Kuhl, Brice A.; Rissman, Jesse; Wagner, Anthony D.

    2012-01-01

    Successful encoding of episodic memories is thought to depend on contributions from prefrontal and temporal lobe structures. Neural processes that contribute to successful encoding have been extensively explored through univariate analyses of neuroimaging data that compare mean activity levels elicited during the encoding of events that are subsequently remembered vs. those subsequently forgotten. Here, we applied pattern classification to fMRI data to assess the degree to which distributed patterns of activity within prefrontal and temporal lobe structures elicited during the encoding of word-image pairs were diagnostic of the visual category (Face or Scene) of the encoded image. We then assessed whether representation of category information was predictive of subsequent memory. Classification analyses indicated that temporal lobe structures contained information robustly diagnostic of visual category. Information in prefrontal cortex was less diagnostic of visual category, but was nonetheless associated with highly reliable classifier-based evidence for category representation. Critically, trials associated with greater classifier-based estimates of category representation in temporal and prefrontal regions were associated with a higher probability of subsequent remembering. Finally, consideration of trial-by-trial variance in classifier-based measures of category representation revealed positive correlations between prefrontal and temporal lobe representations, with the strength of these correlations varying as a function of the category of image being encoded. Together, these results indicate that multi-voxel representations of encoded information can provide unique insights into how visual experiences are transformed into episodic memories. PMID:21925190

  19. Consider the category: The effect of spacing depends on individual learning histories.

    PubMed

    Slone, Lauren K; Sandhofer, Catherine M

    2017-07-01

    The spacing effect refers to increased retention following learning instances that are spaced out in time compared with massed together in time. By one account, the advantages of spaced learning should be independent of task particulars and previous learning experiences given that spacing effects have been demonstrated in a variety of tasks across the lifespan. However, by another account, spaced learning should be affected by previous learning because past learning affects the memory and attention processes that form the crux of the spacing effect. The current study investigated whether individuals' learning histories affect the role of spacing in category learning. We examined the effect of spacing on 24 2- to 3.5-year-old children's learning of categories organized by properties to which children's previous learning experiences have biased them to attend (i.e., shape) and properties to which children are less biased to attend (i.e., texture and color). Spaced presentations led to significantly better learning of shape categories, but not of texture or color categories, compared with massed presentations. In addition, generalized estimating equations analyses revealed positive relations between the size of children's "shape-side" productive vocabularies and their shape category learning and between the size of children's "against-the-system" productive vocabularies and their texture category learning. These results suggest that children's attention to and memory for novel object categories are strongly related to their individual word-learning histories. Moreover, children's learned attentional biases affected the types of categories for which spacing facilitated learning. These findings highlight the importance of considering how learners' previous experiences may influence future learning. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. SUSTAIN: a network model of category learning.

    PubMed

    Love, Bradley C; Medin, Douglas L; Gureckis, Todd M

    2004-04-01

    SUSTAIN (Supervised and Unsupervised STratified Adaptive Incremental Network) is a model of how humans learn categories from examples. SUSTAIN initially assumes a simple category structure. If simple solutions prove inadequate and SUSTAIN is confronted with a surprising event (e.g., it is told that a bat is a mammal instead of a bird), SUSTAIN recruits an additional cluster to represent the surprising event. Newly recruited clusters are available to explain future events and can themselves evolve into prototypes-attractors-rules. SUSTAIN's discovery of category substructure is affected not only by the structure of the world but by the nature of the learning task and the learner's goals. SUSTAIN successfully extends category learning models to studies of inference learning, unsupervised learning, category construction, and contexts in which identification learning is faster than classification learning.
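The recruitment rule at the heart of SUSTAIN — start with a simple category structure and add a cluster only when a surprising event occurs — can be sketched as follows. This toy learner omits SUSTAIN's attentional tuning and activation machinery; the squared-distance metric and the learning rate are assumptions made here for illustration:

```python
def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

class ClusterLearner:
    """Minimal SUSTAIN-flavoured learner: classify by nearest cluster;
    on a surprising event (the nearest cluster predicts the wrong label)
    recruit a new cluster for the item, otherwise nudge the nearest
    cluster toward it."""
    def __init__(self, lr=0.1):
        self.clusters = []   # list of (position, label)
        self.lr = lr

    def predict(self, x):
        if not self.clusters:
            return None
        pos, lab = min(self.clusters, key=lambda c: dist2(x, c[0]))
        return lab

    def learn(self, x, label):
        if self.predict(x) != label:            # surprising event: recruit
            self.clusters.append((list(x), label))
        else:                                   # expected: update nearest
            i = min(range(len(self.clusters)),
                    key=lambda j: dist2(x, self.clusters[j][0]))
            pos, lab = self.clusters[i]
            self.clusters[i] = ([p + self.lr * (xi - p)
                                 for p, xi in zip(pos, x)], lab)

m = ClusterLearner()
m.learn([1, 1], "bird")        # nothing known yet: recruit cluster 1
m.learn([1.1, 0.9], "bird")    # predicted correctly: nudge, no recruit
m.learn([0, 0], "mammal")      # surprising (nearest says "bird"): recruit
print(len(m.clusters), m.predict([0.1, 0.1]))  # -> 2 mammal
```

The "bat is a mammal" example from the abstract corresponds to the third call: the nearest cluster predicts the wrong label, so a new cluster is recruited rather than the old one distorted.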

  1. Modelling individual difference in visual categorization.

    PubMed

    Shen, Jianhong; Palmeri, Thomas J

    Recent years have seen growing interest in understanding, characterizing, and explaining individual differences in visual cognition. We focus here on individual differences in visual categorization. Categorization is the fundamental visual ability to group different objects together as the same kind of thing. Research on visual categorization and category learning has been significantly informed by computational modeling, so our review will focus both on how formal models of visual categorization have captured individual differences and how individual differences have informed the development of formal models. We first examine the potential sources of individual differences in leading models of visual categorization, providing a brief review of a range of different models. We then describe several examples of how computational models have captured individual differences in visual categorization. This review also provides a bit of an historical perspective, starting with models that predicted no individual differences, to those that captured group differences, to those that predict true individual differences, and to more recent hierarchical approaches that can simultaneously capture both group and individual differences in visual categorization. Via this selective review, we see how considerations of individual differences can lead to important theoretical insights into how people visually categorize objects in the world around them. We also consider new directions for work examining individual differences in visual categorization.

  2. Modelling individual difference in visual categorization

    PubMed Central

    Shen, Jianhong; Palmeri, Thomas J.

    2016-01-01

    Recent years have seen growing interest in understanding, characterizing, and explaining individual differences in visual cognition. We focus here on individual differences in visual categorization. Categorization is the fundamental visual ability to group different objects together as the same kind of thing. Research on visual categorization and category learning has been significantly informed by computational modeling, so our review will focus both on how formal models of visual categorization have captured individual differences and how individual differences have informed the development of formal models. We first examine the potential sources of individual differences in leading models of visual categorization, providing a brief review of a range of different models. We then describe several examples of how computational models have captured individual differences in visual categorization. This review also provides a bit of an historical perspective, starting with models that predicted no individual differences, to those that captured group differences, to those that predict true individual differences, and to more recent hierarchical approaches that can simultaneously capture both group and individual differences in visual categorization. Via this selective review, we see how considerations of individual differences can lead to important theoretical insights into how people visually categorize objects in the world around them. We also consider new directions for work examining individual differences in visual categorization. PMID:28154496

  3. Relative risk of probabilistic category learning deficits in patients with schizophrenia and their siblings

    PubMed Central

    Weickert, Thomas W.; Goldberg, Terry E.; Egan, Michael F.; Apud, Jose A.; Meeter, Martijn; Myers, Catherine E.; Gluck, Mark A.; Weinberger, Daniel R.

    2010-01-01

    Background While patients with schizophrenia display an overall probabilistic category learning performance deficit, the extent to which this deficit occurs in unaffected siblings of patients with schizophrenia is unknown. There are also discrepant findings regarding probabilistic category learning acquisition rate and performance in patients with schizophrenia. Methods A probabilistic category learning test was administered to 108 patients with schizophrenia, 82 unaffected siblings, and 121 healthy participants. Results Patients with schizophrenia displayed significant differences from their unaffected siblings and healthy participants with respect to probabilistic category learning acquisition rates. Although siblings on the whole failed to differ from healthy participants on strategy and quantitative indices of overall performance and learning acquisition, application of a revised learning criterion enabling classification into good and poor learners based on individual learning curves revealed significant differences between percentages of sibling and healthy poor learners: healthy (13.2%), siblings (34.1%), patients (48.1%), yielding a moderate relative risk. Conclusions These results clarify previous discrepant findings pertaining to probabilistic category learning acquisition rate in schizophrenia and provide the first evidence for the relative risk of probabilistic category learning abnormalities in unaffected siblings of patients with schizophrenia, supporting genetic underpinnings of probabilistic category learning deficits in schizophrenia. These findings also raise questions regarding the contribution of antipsychotic medication to the probabilistic category learning deficit in schizophrenia. The distinction between good and poor learning may be used to inform genetic studies designed to detect schizophrenia risk alleles. PMID:20172502

  4. Incremental Bayesian Category Learning From Natural Language.

    PubMed

    Frermann, Lea; Lapata, Mirella

    2016-08-01

    Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words (e.g., chair is a member of the furniture category). We present a Bayesian model that, unlike previous work, learns both categories and their features in a single process. We model category induction as two interrelated subproblems: (a) the acquisition of features that discriminate among categories, and (b) the grouping of concepts into categories based on those features. Our model learns categories incrementally using particle filters, a sequential Monte Carlo method commonly used for approximate probabilistic inference that sequentially integrates newly observed data and can be viewed as a plausible mechanism for human learning. Experimental results show that our incremental learner obtains meaningful categories which yield a closer fit to behavioral data compared to related models while at the same time acquiring features which characterize the learned categories. (An earlier version of this work was published in Frermann and Lapata.) Copyright © 2015 Cognitive Science Society, Inc.
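Particle filtering, the inference machinery this model relies on, maintains a population of hypotheses that is reweighted by the likelihood of each new observation and then resampled. A generic sketch on a toy estimation problem (the model's actual state space, over category assignments and their features, is far richer than the single parameter used here):

```python
import random

def particle_filter_step(particles, likelihood, rng):
    """One sequential Monte Carlo update: reweight each particle (a
    candidate hypothesis) by the likelihood of the new observation,
    then resample particles in proportion to those weights."""
    weights = [likelihood(p) for p in particles]
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(particles, weights=probs, k=len(particles))

rng = random.Random(0)
# Toy hypothesis space: the probability that a word belongs to category A
# (kept away from 0 and 1 so no observation has zero likelihood).
particles = [i / 20 for i in range(1, 20)]
# Observe 8 A-words and 2 non-A words, integrated one at a time.
for obs in [1] * 8 + [0] * 2:
    particles = particle_filter_step(
        particles, lambda p: p if obs else (1 - p), rng)
posterior_mean = sum(particles) / len(particles)
print(round(posterior_mean, 2))  # posterior mean over surviving hypotheses
```

Because each step touches only the newest observation, the learner is incremental in exactly the sense the abstract describes: it never revisits the full data history.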

  5. The contribution of temporary storage and executive processes to category learning.

    PubMed

    Wang, Tengfei; Ren, Xuezhu; Schweizer, Karl

    2015-09-01

    Three distinctly different working memory processes, temporary storage, mental shifting and inhibition, were proposed to account for individual differences in category learning. A sample of 213 participants completed a classic category learning task and two working memory tasks that were experimentally manipulated for tapping specific working memory processes. Fixed-links models were used to decompose data of the category learning task into two independent components representing basic performance and improvement in performance in category learning. Processes of working memory were also represented by fixed-links models. In a next step the three working memory processes were linked to components of category learning. Results from modeling analyses indicated that temporary storage had a significant effect on basic performance and shifting had a moderate effect on improvement in performance. In contrast, inhibition showed no effect on any component of the category learning task. These results suggest that temporary storage and the shifting process play different roles in the course of acquiring new categories. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Category-Specificity in Visual Object Recognition

    ERIC Educational Resources Information Center

    Gerlach, Christian

    2009-01-01

    Are all categories of objects recognized in the same manner visually? Evidence from neuropsychology suggests they are not: some brain damaged patients are more impaired in recognizing natural objects than artefacts whereas others show the opposite impairment. Category-effects have also been demonstrated in neurologically intact subjects, but the…

  7. The Role of Corticostriatal Systems in Speech Category Learning.

    PubMed

    Yi, Han-Gyol; Maddox, W Todd; Mumford, Jeanette A; Chandrasekaran, Bharath

    2016-04-01

    One of the most difficult category learning problems for humans is learning nonnative speech categories. While feedback-based category training can enhance speech learning, the mechanisms underlying these benefits are unclear. In this functional magnetic resonance imaging study, we investigated neural and computational mechanisms underlying feedback-dependent speech category learning in adults. Positive feedback activated a large corticostriatal network including the dorsolateral prefrontal cortex, inferior parietal lobule, middle temporal gyrus, caudate, putamen, and the ventral striatum. Successful learning was contingent upon the activity of domain-general category learning systems: the fast-learning reflective system, involving the dorsolateral prefrontal cortex that develops and tests explicit rules based on the feedback content, and the slow-learning reflexive system, involving the putamen in which the stimuli are implicitly associated with category responses based on the reward value in feedback. Computational modeling of response strategies revealed significant use of reflective strategies early in training and greater use of reflexive strategies later in training. Reflexive strategy use was associated with increased activation in the putamen. Our results demonstrate a critical role for the reflexive corticostriatal learning system as a function of response strategy and proficiency during speech category learning. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Category transfer in sequential causal learning: the unbroken mechanism hypothesis.

    PubMed

    Hagmayer, York; Meder, Björn; von Sydow, Momme; Waldmann, Michael R

    2011-07-01

    The goal of the present set of studies is to explore the boundary conditions of category transfer in causal learning. Previous research has shown that people are capable of inducing categories based on causal learning input, and they often transfer these categories to new causal learning tasks. However, occasionally learners abandon the learned categories and induce new ones. Whereas previously it has been argued that transfer is only observed with essentialist categories in which the hidden properties are causally relevant for the target effect in the transfer relation, we here propose an alternative explanation, the unbroken mechanism hypothesis. This hypothesis claims that categories are transferred from a previously learned causal relation to a new causal relation when learners assume a causal mechanism linking the two relations that is continuous and unbroken. The findings of two causal learning experiments support the unbroken mechanism hypothesis. Copyright © 2011 Cognitive Science Society, Inc.

  9. Category representations in the brain are both discretely localized and widely distributed.

    PubMed

    Shehzad, Zarrar; McCarthy, Gregory

    2018-06-01

    Whether category information is discretely localized or represented widely in the brain remains a contentious issue. Initial functional MRI studies supported the localizationist perspective that category information is represented in discrete brain regions. More recent fMRI studies using machine learning pattern classification techniques provide evidence for widespread distributed representations. However, these latter studies have not typically accounted for shared information. Here, we find strong support for distributed representations when brain regions are considered separately. However, localized representations are revealed by using analytical methods that separate unique from shared information among brain regions. The distributed nature of shared information and the localized nature of unique information suggest that brain connectivity may encourage spreading of information but category-specific computations are carried out in distinct domain-specific regions. NEW & NOTEWORTHY Whether visual category information is localized in unique domain-specific brain regions or distributed in many domain-general brain regions is hotly contested. We resolve this debate by using multivariate analyses to parse functional MRI signals from different brain regions into unique and shared variance. Our findings support elements of both models and show information is initially localized and then shared among other regions leading to distributed representations being observed.
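Separating unique from shared information among regions, as this record describes, is in the spirit of commonality analysis: the variance a region explains uniquely is the full model's R² minus the R² of a model that omits it, and the remainder of the explained variance is shared. A minimal sketch for two predictors, using ordinary least squares implemented from scratch (illustrative only; the study's actual multivariate method may differ):

```python
def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with intercept, solved via
    the normal equations and Gaussian elimination (pure stdlib)."""
    rows = [[1.0] + list(x) for x in X]
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for col in range(k):                       # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                           # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    pred = [sum(bc * rc for bc, rc in zip(beta, row)) for row in rows]
    ym = sum(y) / len(y)
    ss_res = sum((p - yi) ** 2 for p, yi in zip(pred, y))
    ss_tot = sum((yi - ym) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def partition(xa, xb, y):
    """Commonality analysis for two predictors: variance unique to each
    and variance they share."""
    ra = r_squared([[a] for a in xa], y)
    rb = r_squared([[b] for b in xb], y)
    rab = r_squared([[a, b] for a, b in zip(xa, xb)], y)
    return rab - rb, rab - ra, ra + rb - rab   # unique_a, unique_b, shared

# Toy "regions": y depends on both signals, which are partly correlated,
# so some explained variance is unique to each and some is shared.
ua, ub, shared = partition([0, 1, 2, 3], [0, 1, 0, 1], [0, 3, 4, 7])
print(round(ua, 3), round(ub, 3), round(shared, 3))  # -> 0.64 0.032 0.328
```

The three terms sum to the full model's R², mirroring the paper's point that distributed-looking information can decompose into localized unique contributions plus widely shared variance.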

  10. Deep Residual Network Predicts Cortical Representation and Organization of Visual Features for Rapid Categorization.

    PubMed

    Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming

    2018-02-28

    The brain represents visual objects with topographic cortical patterns. To address how distributed visual representations enable object categorization, we established predictive encoding models based on a deep residual network, and trained them to predict cortical responses to natural movies. Using this predictive model, we mapped human cortical representations to 64,000 visual objects from 80 categories with high throughput and accuracy. Such representations covered both the ventral and dorsal pathways, reflected multiple levels of object features, and preserved semantic relationships between categories. In the entire visual cortex, object representations were organized into three clusters of categories: biological objects, non-biological objects, and background scenes. In a finer scale specific to each cluster, object representations revealed sub-clusters for further categorization. Such hierarchical clustering of category representations was mostly contributed by cortical representations of object features from middle to high levels. In summary, this study demonstrates a useful computational strategy to characterize the cortical organization and representations of visual features for rapid categorization.

  11. Effects of Grammatical Categories on Children's Visual Language Processing: Evidence from Event-Related Brain Potentials

    ERIC Educational Resources Information Center

    Weber-Fox, Christine; Hart, Laura J.; Spruill, John E., III

    2006-01-01

    This study examined how school-aged children process different grammatical categories. Event-related brain potentials elicited by words in visually presented sentences were analyzed according to seven grammatical categories with naturally varying characteristics of linguistic functions, semantic features, and quantitative attributes of length and…

  12. The Conceptual Grouping Effect: Categories Matter (and Named Categories Matter More)

    ERIC Educational Resources Information Center

    Lupyan, Gary

    2008-01-01

    Do conceptual categories affect basic visual processing? A conceptual grouping effect for familiar stimuli is reported using a visual search paradigm. Search through conceptually-homogeneous non-targets was faster and more efficient than search through conceptually-heterogeneous non-targets. This effect cannot be attributed to perceptual factors…

  13. Learning and transfer of category knowledge in an indirect categorization task.

    PubMed

    Helie, Sebastien; Ashby, F Gregory

    2012-05-01

Knowledge representations acquired during category learning experiments are 'tuned' to the task goal. A useful paradigm for studying category representations is indirect category learning. In the present article, we propose a new indirect categorization task called the "same"-"different" categorization task. The same-different categorization task is a regular same-different task, but the question asked of the participants concerns the stimuli's category membership rather than their identity. Experiment 1 explores the possibility of indirectly learning rule-based and information-integration category structures using the new paradigm. The results suggest that there is little learning about the category structures resulting from an indirect categorization task unless the categories can be separated by a one-dimensional rule. Experiment 2 explores whether a category representation learned indirectly can be used in a direct classification task (and vice versa). The results suggest that previous categorical knowledge acquired during a direct classification task can be expressed in the same-different categorization task only when the categories can be separated by a rule that is easily verbalized. Implications of these results for categorization research are discussed.

  14. Decoding visual object categories from temporal correlations of ECoG signals.

    PubMed

    Majima, Kei; Matsuo, Takeshi; Kawasaki, Keisuke; Kawai, Kensuke; Saito, Nobuhito; Hasegawa, Isao; Kamitani, Yukiyasu

    2014-04-15

How visual object categories are represented in the brain is one of the key questions in neuroscience. Studies on low-level visual features have shown that relative timings or phases of neural activity between multiple brain locations encode information. However, whether such temporal patterns of neural activity are used in the representation of visual objects is unknown. Here, we examined whether and how visual object categories could be predicted (or decoded) from temporal patterns of electrocorticographic (ECoG) signals from the temporal cortex in five patients with epilepsy. We used temporal correlations between electrodes as input features, and compared the decoding performance with features defined by spectral power and phase from individual electrodes. While decoding accuracy using power or phase alone was significantly better than chance, correlations alone, or combined with power, outperformed the other features. Decoding performance with correlations was degraded by shuffling the order of trials of the same category in each electrode, indicating that the relative time series between electrodes in each trial is critical. Analysis using a sliding time window revealed that decoding performance with correlations began to rise earlier than that with power. This earlier increase in performance was replicated by a model using phase differences to encode categories. These results suggest that activity patterns arising from interactions between multiple neuronal units carry additional information on visual object categories. Copyright © 2013 Elsevier Inc. All rights reserved.
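The core feature construction here, pairwise temporal correlations between electrodes used as classifier inputs, can be sketched with synthetic data. The nearest-centroid classifier and the two-electrode "trials" below are illustrative stand-ins, not the authors' ECoG recordings or decoder.

```python
import numpy as np

def correlation_features(trial):
    """Upper triangle of the electrode-by-electrode correlation matrix.

    trial: array of shape (n_electrodes, n_timepoints).
    Returns one feature per electrode pair (diagonal excluded).
    """
    c = np.corrcoef(trial)
    iu = np.triu_indices_from(c, k=1)
    return c[iu]

# Synthetic categories: in category A the two electrodes are positively
# correlated within a trial; in category B they are anti-correlated.
rng = np.random.default_rng(0)

def make_trial(sign):
    base = rng.standard_normal(200)
    noise = 0.3 * rng.standard_normal((2, 200))
    return np.vstack([base, sign * base]) + noise

train_a = [correlation_features(make_trial(+1)) for _ in range(20)]
train_b = [correlation_features(make_trial(-1)) for _ in range(20)]
centroid_a = np.mean(train_a, axis=0)
centroid_b = np.mean(train_b, axis=0)

def classify(trial):
    f = correlation_features(trial)
    if np.abs(f - centroid_a).sum() < np.abs(f - centroid_b).sum():
        return "A"
    return "B"

print(classify(make_trial(+1)))  # expected "A"
```

Note that shuffling timepoints across trials (as in the authors' control analysis) would destroy exactly the within-trial correlations these features depend on.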

  15. Heterogeneity in perceptual category learning by high functioning children with autism spectrum disorder

    PubMed Central

    Mercado, Eduardo; Church, Barbara A.; Coutinho, Mariana V. C.; Dovgopoly, Alexander; Lopata, Christopher J.; Toomey, Jennifer A.; Thomeer, Marcus L.

    2015-01-01

    Previous research suggests that high functioning (HF) children with autism spectrum disorder (ASD) sometimes have problems learning categories, but often appear to perform normally in categorization tasks. The deficits that individuals with ASD show when learning categories have been attributed to executive dysfunction, general deficits in implicit learning, atypical cognitive strategies, or abnormal perceptual biases and abilities. Several of these psychological explanations for category learning deficits have been associated with neural abnormalities such as cortical underconnectivity. The present study evaluated how well existing neurally based theories account for atypical perceptual category learning shown by HF children with ASD across multiple category learning tasks involving novel, abstract shapes. Consistent with earlier results, children’s performances revealed two distinct patterns of learning and generalization associated with ASD: one was indistinguishable from performance in typically developing children; the other revealed dramatic impairments. These two patterns were evident regardless of training regimen or stimulus set. Surprisingly, some children with ASD showed both patterns. Simulations of perceptual category learning could account for the two observed patterns in terms of differences in neural plasticity. However, no current psychological or neural theory adequately explains why a child with ASD might show such large fluctuations in category learning ability across training conditions or stimulus sets. PMID:26157368

  16. A role for the developing lexicon in phonetic category acquisition

    PubMed Central

    Feldman, Naomi H.; Griffiths, Thomas L.; Goldwater, Sharon; Morgan, James L.

    2013-01-01

    Infants segment words from fluent speech during the same period when they are learning phonetic categories, yet accounts of phonetic category acquisition typically ignore information about the words in which sounds appear. We use a Bayesian model to illustrate how feedback from segmented words might constrain phonetic category learning by providing information about which sounds occur together in words. Simulations demonstrate that word-level information can successfully disambiguate overlapping English vowel categories. Learning patterns in the model are shown to parallel human behavior from artificial language learning tasks. These findings point to a central role for the developing lexicon in phonetic category acquisition and provide a framework for incorporating top-down constraints into models of category learning. PMID:24219848
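The disambiguation idea in this abstract, that knowing which words a sound appears in can pull apart acoustically overlapping categories, can be shown with a toy Bayesian computation. The category means, standard deviation, word frame, and lexicon below are invented for illustration; this is not the paper's actual model, which jointly learns the lexicon and categories.

```python
import math

# Two overlapping vowel categories on one acoustic dimension
# (means and shared sd are made-up values for illustration).
CATS = {"i": 2.0, "e": 3.0}
SD = 1.5

def gauss(x, mu, sd):
    return math.exp(-((x - mu) ** 2) / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

def posterior(x, word_frame=None, lexicon=None):
    """P(category | acoustics, word frame). Without a lexicon the prior is
    flat; with one, only vowels attested in that frame get prior mass."""
    priors = {c: 1.0 for c in CATS}
    if word_frame is not None and lexicon is not None:
        priors = {c: float(c in lexicon.get(word_frame, CATS)) for c in CATS}
    scores = {c: priors[c] * gauss(x, CATS[c], SD) for c in CATS}
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

# An acoustically ambiguous token, midway between the two category means:
print(posterior(2.5))                    # roughly 50/50 without word context
lexicon = {"b_d": {"i"}}                 # hypothetical: only one word exists
print(posterior(2.5, "b_d", lexicon))    # word context resolves the ambiguity
```

The same token that is unclassifiable from acoustics alone is fully resolved once the word frame rules out one category.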

  17. The evaluation of display symbology - A chronometric study of visual search [on cathode ray tubes]

    NASA Technical Reports Server (NTRS)

    Remington, R.; Williams, D.

    1984-01-01

    Three single-target visual search tasks were used to evaluate a set of CRT symbols for a helicopter traffic display. The search tasks were representative of the kinds of information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. The results show that familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols such as a nearby flashing dot or surrounding square had a greater disruptive effect on the graphic symbols than the alphanumeric characters. The results suggest that a symbol set is like a list that must be learned. Factors that affect the time to respond to items in a list, such as familiarity and visual discriminability, and the division of list items into categories, also affect the time to identify symbols.

  18. Conceptual Distinctiveness Supports Detailed Visual Long-Term Memory for Real-World Objects

    PubMed Central

    Konkle, Talia; Brady, Timothy F.; Alvarez, George A.; Oliva, Aude

    2012-01-01

    Humans have a massive capacity to store detailed information in visual long-term memory. The present studies explored the fidelity of these visual long-term memory representations and examined how conceptual and perceptual features of object categories support this capacity. Observers viewed 2,800 object images with a different number of exemplars presented from each category. At test, observers indicated which of 2 exemplars they had previously studied. Memory performance was high and remained quite high (82% accuracy) with 16 exemplars from a category in memory, demonstrating a large memory capacity for object exemplars. However, memory performance decreased as more exemplars were held in memory, implying systematic categorical interference. Object categories with conceptually distinctive exemplars showed less interference in memory as the number of exemplars increased. Interference in memory was not predicted by the perceptual distinctiveness of exemplars from an object category, though these perceptual measures predicted visual search rates for an object target among exemplars. These data provide evidence that observers’ capacity to remember visual information in long-term memory depends more on conceptual structure than perceptual distinctiveness. PMID:20677899

  19. What you learn is more than what you see: what can sequencing effects tell us about inductive category learning?

    PubMed Central

    Carvalho, Paulo F.; Goldstone, Robert L.

    2015-01-01

Inductive category learning takes place across time. As such, it is not surprising that the sequence in which information is studied has an impact on what is learned and on how efficient learning is. In this paper we review research on different learning sequences and how this impacts learning. We analyze different aspects of interleaved (frequent alternation between categories during study) and blocked study (infrequent alternation between categories during study) that might explain how and when one sequence of study results in improved learning. While these different sequences of study differ in the amount of temporal spacing and temporal juxtaposition between items of different categories, these aspects do not seem to account for the majority of the results available in the literature. However, differences in the type of category being studied and the duration of the retention interval between study and test may play an important role. We conclude that there is no single aspect that is able to account for all the evidence available. Understanding learning as a process of sequential comparisons in time and how different sequences fundamentally alter the statistics of this experience offers a promising framework for understanding sequencing effects in category learning. We use this framework to present novel predictions and hypotheses for future research on sequencing effects in inductive category learning. PMID:25983699

  20. Learning through Feature Prediction: An Initial Investigation into Teaching Categories to Children with Autism through Predicting Missing Features

    ERIC Educational Resources Information Center

    Sweller, Naomi

    2015-01-01

    Individuals with autism have difficulty generalising information from one situation to another, a process that requires the learning of categories and concepts. Category information may be learned through: (1) classifying items into categories, or (2) predicting missing features of category items. Predicting missing features has to this point been…

  1. Toward a unified model of face and object recognition in the human visual system

    PubMed Central

    Wallis, Guy

    2013-01-01

    Our understanding of the mechanisms and neural substrates underlying visual recognition has made considerable progress over the past 30 years. During this period, accumulating evidence has led many scientists to conclude that objects and faces are recognised in fundamentally distinct ways, and in fundamentally distinct cortical areas. In the psychological literature, in particular, this dissociation has led to a palpable disconnect between theories of how we process and represent the two classes of object. This paper follows a trend in part of the recognition literature to try to reconcile what we know about these two forms of recognition by considering the effects of learning. Taking a widely accepted, self-organizing model of object recognition, this paper explains how such a system is affected by repeated exposure to specific stimulus classes. In so doing, it explains how many aspects of recognition generally regarded as unusual to faces (holistic processing, configural processing, sensitivity to inversion, the other-race effect, the prototype effect, etc.) are emergent properties of category-specific learning within such a system. Overall, the paper describes how a single model of recognition learning can and does produce the seemingly very different types of representation associated with faces and objects. PMID:23966963

  2. Is Syntactic-Category Processing Obligatory in Visual Word Recognition? Evidence from Chinese

    ERIC Educational Resources Information Center

    Wong, Andus Wing-Kuen; Chen, Hsuan-Chih

    2012-01-01

    Three experiments were conducted to investigate how syntactic-category and semantic information is processed in visual word recognition. The stimuli were two-character Chinese words in which semantic and syntactic-category ambiguities were factorially manipulated. A lexical decision task was employed in Experiment 1, whereas a semantic relatedness…

  3. When more is less: Feedback effects in perceptual category learning ☆

    PubMed Central

    Maddox, W. Todd; Love, Bradley C.; Glass, Brian D.; Filoteo, J. Vincent

    2008-01-01

    Rule-based and information-integration category learning were compared under minimal and full feedback conditions. Rule-based category structures are those for which the optimal rule is verbalizable. Information-integration category structures are those for which the optimal rule is not verbalizable. With minimal feedback subjects are told whether their response was correct or incorrect, but are not informed of the correct category assignment. With full feedback subjects are informed of the correctness of their response and are also informed of the correct category assignment. An examination of the distinct neural circuits that subserve rule-based and information-integration category learning leads to the counterintuitive prediction that full feedback should facilitate rule-based learning but should also hinder information-integration learning. This prediction was supported in the experiment reported below. The implications of these results for theories of learning are discussed. PMID:18455155

  4. Discontinuous categories affect information-integration but not rule-based category learning.

    PubMed

    Maddox, W Todd; Filoteo, J Vincent; Lauritzen, J Scott; Connally, Emily; Hejl, Kelli D

    2005-07-01

Three experiments were conducted that provide a direct examination of within-category discontinuity manipulations on the implicit, procedural-based learning and the explicit, hypothesis-testing systems proposed in F. G. Ashby, L. A. Alfonso-Reese, A. U. Turken, and E. M. Waldron's (1998) competition between verbal and implicit systems model. Discontinuous categories adversely affected information-integration but not rule-based category learning. Increasing the magnitude of the discontinuity did not lead to a significant decline in performance. The distance to the bound provides a reasonable description of the generalization profile associated with the hypothesis-testing system, whereas the distance to the bound plus the distance to the trained response region provides a reasonable description of the generalization profile associated with the procedural-based learning system. These results suggest that within-category discontinuity impacts information-integration but not rule-based category learning, and they provide information regarding the detailed processing characteristics of each category learning system. ((c) 2005 APA, all rights reserved).

  5. Converging Modalities Ground Abstract Categories: The Case of Politics

    PubMed Central

    Farias, Ana Rita; Garrido, Margarida V.; Semin, Gün R.

    2013-01-01

    Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal. PMID:23593360

  6. Converging modalities ground abstract categories: the case of politics.

    PubMed

    Farias, Ana Rita; Garrido, Margarida V; Semin, Gün R

    2013-01-01

    Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal.

  7. Differential impact of relevant and irrelevant dimension primes on rule-based and information-integration category learning.

    PubMed

    Grimm, Lisa R; Maddox, W Todd

    2013-11-01

    Research has identified multiple category-learning systems with each being "tuned" for learning categories with different task demands and each governed by different neurobiological systems. Rule-based (RB) classification involves testing verbalizable rules for category membership while information-integration (II) classification requires the implicit learning of stimulus-response mappings. In the first study to directly test rule priming with RB and II category learning, we investigated the influence of the availability of information presented at the beginning of the task. Participants viewed lines that varied in length, orientation, and position on the screen, and were primed to focus on stimulus dimensions that were relevant or irrelevant to the correct classification rule. In Experiment 1, we used an RB category structure, and in Experiment 2, we used an II category structure. Accuracy and model-based analyses suggested that a focus on relevant dimensions improves RB task performance later in learning while a focus on an irrelevant dimension improves II task performance early in learning. © 2013.

  8. Understanding visualization: a formal approach using category theory and semiotics.

    PubMed

    Vickers, Paul; Faith, Joe; Rossiter, Nick

    2013-06-01

    This paper combines the vocabulary of semiotics and category theory to provide a formal analysis of visualization. It shows how familiar processes of visualization fit the semiotic frameworks of both Saussure and Peirce, and extends these structures using the tools of category theory to provide a general framework for understanding visualization in practice, including: Relationships between systems, data collected from those systems, renderings of those data in the form of representations, the reading of those representations to create visualizations, and the use of those visualizations to create knowledge and understanding of the system under inspection. The resulting framework is validated by demonstrating how familiar information visualization concepts (such as literalness, sensitivity, redundancy, ambiguity, generalizability, and chart junk) arise naturally from it and can be defined formally and precisely. This paper generalizes previous work on the formal characterization of visualization by, inter alia, Ziemkiewicz and Kosara and allows us to formally distinguish properties of the visualization process that previous work does not.
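The pipeline the paper formalizes, system → data → representation → visualization → knowledge, can be viewed as composition of arrows in a category. The sketch below shows only the composition idea; the stage functions and their names are illustrative inventions, not definitions from the paper.

```python
def compose(*fs):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    def composed(x):
        for f in reversed(fs):
            x = f(x)
        return x
    return composed

# Hypothetical stages of the visualization pipeline as arrows:
measure = lambda system: [system["temp_c"]]          # system -> data
render  = lambda data: [f"bar:{v}" for v in data]    # data -> representation
read    = lambda marks: {"bars": len(marks)}         # representation -> reading

visualize = compose(read, render, measure)
print(visualize({"temp_c": 21}))  # {'bars': 1}
```

Properties such as sensitivity or redundancy can then be stated as properties of these arrows and their composites, which is what makes the categorical framing useful.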

  9. When It Hurts (and Helps) to Try: The Role of Effort in Language Learning

    PubMed Central

    Finn, Amy S.; Lee, Taraz; Kraus, Allison; Hudson Kam, Carla L.

    2014-01-01

    Compared to children, adults are bad at learning language. This is counterintuitive; adults outperform children on most measures of cognition, especially those that involve effort (which continue to mature into early adulthood). The present study asks whether these mature effortful abilities interfere with language learning in adults and further, whether interference occurs equally for aspects of language that adults are good (word-segmentation) versus bad (grammar) at learning. Learners were exposed to an artificial language comprised of statistically defined words that belong to phonologically defined categories (grammar). Exposure occurred under passive or effortful conditions. Passive learners were told to listen while effortful learners were instructed to try to 1) learn the words, 2) learn the categories, or 3) learn the category-order. Effortful learners showed an advantage for learning words while passive learners showed an advantage for learning the categories. Effort can therefore hurt the learning of categories. PMID:25047901

  10. When it hurts (and helps) to try: the role of effort in language learning.

    PubMed

    Finn, Amy S; Lee, Taraz; Kraus, Allison; Hudson Kam, Carla L

    2014-01-01

    Compared to children, adults are bad at learning language. This is counterintuitive; adults outperform children on most measures of cognition, especially those that involve effort (which continue to mature into early adulthood). The present study asks whether these mature effortful abilities interfere with language learning in adults and further, whether interference occurs equally for aspects of language that adults are good (word-segmentation) versus bad (grammar) at learning. Learners were exposed to an artificial language comprised of statistically defined words that belong to phonologically defined categories (grammar). Exposure occurred under passive or effortful conditions. Passive learners were told to listen while effortful learners were instructed to try to 1) learn the words, 2) learn the categories, or 3) learn the category-order. Effortful learners showed an advantage for learning words while passive learners showed an advantage for learning the categories. Effort can therefore hurt the learning of categories.

  11. Color impact in visual attention deployment considering emotional images

    NASA Astrophysics Data System (ADS)

    Chamaret, C.

    2012-03-01

Color is a predominant factor in the human visual attention system. Even if it is not sufficient for a global or complete understanding of a scene, it may affect the deployment of visual attention. We propose to study the impact of color, as well as the emotional aspect of pictures, on the deployment of visual attention. An eye-tracking experiment was conducted in which twenty people watched half of the pictures in the database in full color and the other half in greyscale. Eye fixations on the color and black-and-white images were highly correlated, raising the question of whether such cues should be integrated into the design of visual attention models. Indeed, the predictions of two state-of-the-art computational models show similar results for the two color categories. Similarly, the study of saccade amplitude and fixation duration versus viewing time did not reveal any significant differences between the two categories. In addition, the spatial coordinates of eye fixations provide an interesting indicator for investigating differences in the deployment of visual attention over time and fixation number. The second factor, emotion category, shows evidence of inter-category differences between color and grey eye fixations for passive and positive emotions. The particular content of this category induces a specific viewing behavior, based more on high spatial frequencies, in which the color components influence the deployment of visual attention.

  12. When increasing distraction helps learning: Distractor number and content interact in their effects on memory.

    PubMed

    Nussenbaum, Kate; Amso, Dima; Markant, Julie

    2017-11-01

    Previous work has demonstrated that increasing the number of distractors in a search array can reduce interference from distractor content during target processing. However, it is unclear how this reduced interference influences learning of target information. Here, we investigated how varying the amount and content of distraction present in a learning environment affects visual search and subsequent memory for target items. In two experiments, we demonstrate that the number and content of competing distractors interact in their influence on target selection and memory. Specifically, while increasing the number of distractors present in a search array made target detection more effortful, it did not impair learning and memory for target content. Instead, when the distractors contained category information that conflicted with the target, increasing the number of distractors from one to three actually benefitted learning and memory. These data suggest that increasing numbers of distractors may reduce interference from conflicting conceptual information during encoding.

  13. Visual agnosia and focal brain injury.

    PubMed

    Martinaud, O

Visual agnosia encompasses all disorders of visual recognition, within a selective visual modality, that are not due to an impairment of elementary visual processing or another cognitive deficit. Based on a sequential dichotomy between the perceptual and memory systems, two different categories of visual object agnosia are usually considered: 'apperceptive agnosia' and 'associative agnosia'. Impaired visual recognition within a single category of stimuli is also reported in: (i) visual object agnosia of the ventral pathway, such as prosopagnosia (for faces), pure alexia (for words), or topographagnosia (for landmarks); (ii) visual spatial agnosia of the dorsal pathway, such as cerebral akinetopsia (for movement), or orientation agnosia (for the placement of objects in space). Focal brain injuries provide a unique opportunity to better understand regional brain function, particularly with the use of effective statistical approaches such as voxel-based lesion-symptom mapping (VLSM). The aim of the present work was twofold: (i) to review the various agnosia categories according to the traditional visual dual-pathway model; and (ii) to better assess the anatomical network underlying visual recognition through lesion-mapping studies correlating neuroanatomical and clinical outcomes. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  14. Nonparametric Hierarchical Bayesian Model for Functional Brain Parcellation

    PubMed Central

    Lashkari, Danial; Sridharan, Ramesh; Vul, Edward; Hsieh, Po-Jang; Kanwisher, Nancy; Golland, Polina

    2011-01-01

    We develop a method for unsupervised analysis of functional brain images that learns group-level patterns of functional response. Our algorithm is based on a generative model that comprises two main layers. At the lower level, we express the functional brain response to each stimulus as a binary activation variable. At the next level, we define a prior over the sets of activation variables in all subjects. We use a Hierarchical Dirichlet Process as the prior in order to simultaneously learn the patterns of response that are shared across the group, and to estimate the number of these patterns supported by data. Inference based on this model enables automatic discovery and characterization of salient and consistent patterns in functional signals. We apply our method to data from a study that explores the response of the visual cortex to a collection of images. The discovered profiles of activation correspond to selectivity to a number of image categories such as faces, bodies, and scenes. More generally, our results appear superior to the results of alternative data-driven methods in capturing the category structure in the space of stimuli. PMID:21841977
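The key property of the Hierarchical Dirichlet Process prior used here, that the number of response patterns is estimated from the data rather than fixed in advance, rests on the Chinese restaurant process construction. A minimal sketch of a single-level CRP draw (the hierarchical, group-coupled version in the paper is more involved):

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n items from a Chinese restaurant process.

    Item i joins an existing cluster k with probability proportional to
    that cluster's size, or starts a new cluster with probability
    proportional to the concentration parameter alpha."""
    random.seed(seed)
    tables = []        # tables[k] = number of items in cluster k
    assignment = []
    for i in range(n):
        weights = tables + [alpha]       # existing clusters vs. a new one
        r = random.uniform(0, i + alpha)
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(tables):
            tables.append(1)             # open a new cluster
        else:
            tables[k] += 1
        assignment.append(k)
    return assignment

part = crp_partition(100, alpha=2.0)
print(max(part) + 1)   # number of clusters this draw produced
```

Under this prior, the posterior over partitions (given the binary activation variables) concentrates on however many activation patterns the data actually support.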

  15. Scaffolding in geometry based on self regulated learning

    NASA Astrophysics Data System (ADS)

    Bayuningsih, A. S.; Usodo, B.; Subanti, S.

    2017-12-01

This research aimed to determine the influence of a problem-based learning (PBL) model with a scaffolding technique on junior high school students' learning achievement. The research was conducted at a junior high school in Banyumas. The data were obtained through a mathematics achievement test and a self-regulated learning (SRL) questionnaire, and were analyzed using two-way ANOVA. The results showed that scaffolding has a positive effect on mathematics achievement: achievement under the PBL-with-scaffolding model was better than under PBL alone. Students in the high SRL category showed better mathematics achievement than those in the middle and low SRL categories, and the middle SRL category performed better than the low one. There was also an interaction between the learning model and self-regulated learning in increasing mathematics achievement.

  16. Category specific dysnomia after thalamic infarction: a case-control study.

    PubMed

    Levin, Netta; Ben-Hur, Tamir; Biran, Iftah; Wertman, Eli

    2005-01-01

Category specific naming impairment has been described mainly after cortical lesions. It is thought to result from a lesion in a specific network, reflecting the organization of our semantic knowledge. The deficit usually involves multiple semantic categories whose profile of naming deficit generally obeys the animate/inanimate dichotomy. Thalamic lesions cause general semantic naming deficit, and only rarely a category specific semantic deficit for very limited and highly specific categories. We performed a case-control study on a 56-year-old right-handed man who presented with language impairment following a left anterior thalamic infarction. His naming ability and semantic knowledge were evaluated in the visual, tactile and auditory modalities for stimuli from 11 different categories, and compared to those of five controls. In naming visual stimuli, the patient performed poorly (error rate > 50%) in four categories: vegetables, toys, animals and body parts (average 70.31+/-15%). In each category there was a different dominating error type. He performed better in the other seven categories (tools, clothes, transportation, fruits, electric, furniture, kitchen utensils), averaging 14.28+/-9% errors. Further analysis revealed a dichotomy between naming in animate and inanimate categories in the visual and tactile modalities but not in response to auditory stimuli. Thus, a unique category specific profile of response and naming errors to visual and tactile, but not auditory, stimuli was found after a left anterior thalamic infarction. This might reflect the role of the thalamus not only as a relay station but also as a central integrator of different stages of perceptual and semantic processing.

  17. Feature highlighting enhances learning of a complex natural-science category.

    PubMed

    Miyatsu, Toshiya; Gouravajhala, Reshma; Nosofsky, Robert M; McDaniel, Mark A

    2018-04-26

    Learning naturalistic categories, which tend to have fuzzy boundaries and vary on many dimensions, can often be harder than learning well defined categories. One method for facilitating the category learning of naturalistic stimuli may be to provide explicit feature descriptions that highlight the characteristic features of each category. Although this method is commonly used in textbooks and classrooms, theoretically it remains uncertain whether feature descriptions should advantage learning complex natural-science categories. In three experiments, participants were trained on 12 categories of rocks, either without or with a brief description highlighting key features of each category. After training, they were tested on their ability to categorize both old and new rocks from each of the categories. Providing feature descriptions as a caption under a rock image failed to improve category learning relative to providing only the rock image with its category label (Experiment 1). However, when these same feature descriptions were presented such that they were explicitly linked to the relevant parts of the rock image (feature highlighting), participants showed significantly higher performance on both immediate generalization to new rocks (Experiment 2) and generalization after a 2-day delay (Experiment 3). Theoretical and practical implications are discussed.

  18. Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision.

    PubMed

    Wen, Haiguang; Shi, Junxing; Zhang, Yizhen; Lu, Kun-Han; Cao, Jiayue; Liu, Zhongming

    2017-10-20

    Convolutional neural network (CNN) driven by image recognition has been shown to be able to explain cortical responses to static pictures at ventral-stream areas. Here, we further showed that such CNN could reliably predict and decode functional magnetic resonance imaging data from humans watching natural movies, despite its lack of any mechanism to account for temporal dynamics or feedback processing. Using separate data, encoding and decoding models were developed and evaluated for describing the bi-directional relationships between the CNN and the brain. Through the encoding models, the CNN-predicted areas covered not only the ventral stream, but also the dorsal stream, albeit to a lesser degree; single-voxel response was visualized as the specific pixel pattern that drove the response, revealing the distinct representation of individual cortical location; cortical activation was synthesized from natural images with high-throughput to map category representation, contrast, and selectivity. Through the decoding models, fMRI signals were directly decoded to estimate the feature representations in both visual and semantic spaces, for direct visual reconstruction and semantic categorization, respectively. These results corroborate, generalize, and extend previous findings, and highlight the value of using deep learning, as an all-in-one model of the visual cortex, to understand and decode natural vision.
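    Encoding models of the kind described here map CNN features to voxel responses, which in practice is typically a regularized linear regression fit per voxel. A minimal sketch with synthetic stand-ins for the CNN activations and fMRI responses (all sizes and the ridge penalty are illustrative assumptions):

    ```python
    import numpy as np

    # synthetic stand-ins: 200 time points, 50 CNN features, 10 voxels
    rng = np.random.default_rng(1)
    features = rng.normal(size=(200, 50))          # CNN layer activations per frame
    true_w = rng.normal(size=(50, 10))
    bold = features @ true_w + 0.1 * rng.normal(size=(200, 10))  # simulated fMRI

    def fit_ridge(X, Y, alpha=1.0):
        """Closed-form ridge regression: one weight vector per voxel (column of Y)."""
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

    W = fit_ridge(features[:150], bold[:150])       # train on first 150 samples
    pred = features[150:] @ W                       # predict held-out responses
    r = [np.corrcoef(pred[:, v], bold[150:, v])[0, 1] for v in range(10)]
    print("mean held-out correlation:", np.mean(r))
    ```

    Evaluating the encoding model on held-out data, as above, is what licenses the claim that the features "reliably predict" the responses rather than merely fit them.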

  19. The impact of category structure and training methodology on learning and generalizing within-category representations.

    PubMed

    Ell, Shawn W; Smith, David B; Peralta, Gabriela; Hélie, Sébastien

    2017-08-01

    When interacting with categories, representations focused on within-category relationships are often learned, but the conditions promoting within-category representations and their generalizability are unclear. We report the results of three experiments investigating the impact of category structure and training methodology on the learning and generalization of within-category representations (i.e., correlational structure). Participants were trained on either rule-based or information-integration structures using classification (Is the stimulus a member of Category A or Category B?), concept (e.g., Is the stimulus a member of Category A, Yes or No?), or inference (infer the missing component of the stimulus from a given category) and then tested on either an inference task (Experiments 1 and 2) or a classification task (Experiment 3). For the information-integration structure, within-category representations were consistently learned, could be generalized to novel stimuli, and could be generalized to support inference at test. For the rule-based structure, extended inference training resulted in generalization to novel stimuli (Experiment 2) and inference training resulted in generalization to classification (Experiment 3). These data help to clarify the conditions under which within-category representations can be learned. Moreover, these results make an important contribution in highlighting the impact of category structure and training methodology on the generalization of categorical knowledge.

  20. On Learning Natural-Science Categories That Violate the Family-Resemblance Principle.

    PubMed

    Nosofsky, Robert M; Sanders, Craig A; Gerdom, Alex; Douglas, Bruce J; McDaniel, Mark A

    2017-01-01

    The general view in psychological science is that natural categories obey a coherent, family-resemblance principle. In this investigation, we documented an example of an important exception to this principle: Results of a multidimensional-scaling study of igneous, metamorphic, and sedimentary rocks (Experiment 1) suggested that the structure of these categories is disorganized and dispersed. This finding motivated us to explore what might be the optimal procedures for teaching dispersed categories, a goal that is likely critical to science education in general. Subjects in Experiment 2 learned to classify pictures of rocks into compact or dispersed high-level categories. One group learned the categories through focused high-level training, whereas a second group was required to simultaneously learn classifications at a subtype level. Although high-level training led to enhanced performance when the categories were compact, subtype training was better when the categories were dispersed. We provide an interpretation of the results in terms of an exemplar-memory model of category learning.
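    The multidimensional-scaling step in Experiment 1 can be illustrated with classical (Torgerson) MDS, which recovers low-dimensional coordinates from a matrix of pairwise dissimilarities by double-centering and eigendecomposition. A sketch on toy points standing in for rock-similarity data (not the study's actual ratings):

    ```python
    import numpy as np

    def classical_mds(D, k=2):
        """Classical MDS: recover k-D coordinates from a distance matrix D."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
        B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
        vals, vecs = np.linalg.eigh(B)
        idx = np.argsort(vals)[::-1][:k]         # keep the k largest eigenvalues
        return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

    # toy "stimuli": known 2-D points whose pairwise distances we feed back in
    pts = np.array([[0.0, 0], [1, 0], [0, 1], [2, 2], [3, 1]])
    D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    X = classical_mds(D)
    D_rec = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    print("max distance error:", np.abs(D - D_rec).max())
    ```

    In a scaling solution like this, a "dispersed" category is one whose members' recovered coordinates scatter widely and interleave with other categories rather than forming a compact cluster.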

  1. Visual Working Memory Capacity for Objects from Different Categories: A Face-Specific Maintenance Effect

    ERIC Educational Resources Information Center

    Wong, Jason H.; Peterson, Matthew S.; Thompson, James C.

    2008-01-01

    The capacity of visual working memory was examined when complex objects from different categories were remembered. Previous studies have not examined how visual similarity affects object memory, though it has long been known that similar-sounding phonological information interferes with rehearsal in auditory working memory. Here, experiments…

  2. Electrophysiological evidence for the left-lateralized effect of language on preattentive categorical perception of color

    PubMed Central

    Mo, Lei; Xu, Guiping; Kay, Paul; Tan, Li-Hai

    2011-01-01

    Previous studies have shown that the effect of language on categorical perception of color is stronger when stimuli are presented in the right visual field than in the left. To examine whether this lateralized effect occurs preattentively at an early stage of processing, we monitored the visual mismatch negativity, which is a component of the event-related potential of the brain to an unfamiliar stimulus among a temporally presented series of stimuli. In the oddball paradigm we used, the deviant stimuli were unrelated to the explicit task. A significant interaction between color-pair type (within-category vs. between-category) and visual field (left vs. right) was found. The amplitude of the visual mismatch negativity component evoked by the within-category deviant was significantly smaller than that evoked by the between-category deviant when displayed in the right visual field, but no such difference was observed for the left visual field. This result constitutes electroencephalographic evidence that the lateralized Whorf effect per se occurs out of awareness and at an early stage of processing. PMID:21844340

  3. Threat as a feature in visual semantic object memory.

    PubMed

    Calley, Clifford S; Motes, Michael A; Chiang, H-Sheng; Buhl, Virginia; Spence, Jeffrey S; Abdi, Hervé; Anand, Raksha; Maguire, Mandy; Estevez, Leonardo; Briggs, Richard; Freeman, Thomas; Kraut, Michael A; Hart, John

    2013-08-01

    Threatening stimuli have been found to modulate visual processes related to perception and attention. The present functional magnetic resonance imaging (fMRI) study investigated whether threat modulates visual object recognition of man-made and naturally occurring categories of stimuli. Compared with nonthreatening pictures, threatening pictures of real items elicited larger fMRI BOLD signal changes in medial visual cortices extending inferiorly into the temporo-occipital (TO) "what" pathways. This region elicited greater signal changes for threatening items compared to nonthreatening from both the natural-occurring and man-made stimulus supraordinate categories, demonstrating a featural component to these visual processing areas. Two additional loci of signal changes within more lateral inferior TO areas (bilateral BA18 and 19 as well as the right ventral temporal lobe) were detected for a category-feature interaction, with stronger responses to man-made (category) threatening (feature) stimuli than to natural threats. The findings are discussed in terms of visual recognition of processing efficiently or rapidly groups of items that confer an advantage for survival.

  4. A 'normal' category-specific advantage for naming living things.

    PubMed

    Laws, K R; Neve, C

    1999-10-01

    'Artefactual' accounts of category-specific disorders for living things have highlighted that compared to nonliving things, living things have lower name frequency, lower concept familiarity and greater visual complexity and greater within-category structural similarity or 'visual crowding' [7]. These hypotheses imply that deficits for living things are an exaggeration of some 'normal tendency'. Contrary to these notions, we found that normal subjects were consistently worse at naming nonliving than living things in a speeded presentation paradigm. Moreover, their naming was not predicted by concept familiarity, name frequency or visual complexity; however, a novel measure of visual familiarity (i.e. for the appearance of things) did significantly predict naming. We propose that under speeded conditions, normal subjects find nonliving things harder to name because their representations are less visually predictable than for living things (i.e. nonliving things show greater within-item structural variability). Finally, because nonliving things have multiple representations in the real world, this may lower the probability of finding impaired naming and recognition in this category.

  5. Task-relevant perceptual features can define categories in visual memory too.

    PubMed

    Antonelli, Karla B; Williams, Carrick C

    2017-11-01

    Although Konkle, Brady, Alvarez, and Oliva (2010, Journal of Experimental Psychology: General, 139(3), 558) claim that visual long-term memory (VLTM) is organized on underlying conceptual, not perceptual, information, visual memory results from visual search tasks are not well explained by this theory. We hypothesized that when viewing an object, any task-relevant visual information is critical to the organizational structure of VLTM. In two experiments, we examined the organization of VLTM by measuring the amount of retroactive interference created by objects possessing different combinations of task-relevant features. Based on task instructions, only the conceptual category was task relevant or both the conceptual category and a perceptual object feature were task relevant. Findings indicated that when made task relevant, perceptual object feature information, along with conceptual category information, could affect memory organization for objects in VLTM. However, when perceptual object feature information was task irrelevant, it did not contribute to memory organization; instead, memory defaulted to being organized around conceptual category information. These findings support the theory that a task-defined organizational structure is created in VLTM based on the relevance of particular object features and information.

  6. A Cross-cultural Exploration of Children's Everyday Ideas: Implications for science teaching and learning

    NASA Astrophysics Data System (ADS)

    Wee, Bryan

    2012-03-01

    Children's everyday ideas form critical foundations for science learning yet little research has been conducted to understand and legitimize these ideas, particularly from an international perspective. This paper explores children's everyday ideas about the environment across the US, Singapore and China to understand what they reveal about children's relationship to the environment and discuss its implications for science teaching and learning. A social constructivist lens guides research, and a visual methodology is used to frame children's realities. Participants' ages range from elementary to middle school, and a total of 210 children comprised mainly of Asians and Asian Americans were sampled from urban settings. Drawings are used to elicit children's everyday ideas and analyzed inductively using open coding and categorizing of data. Several categories support existing literature about how children view the environment; however, novel categories such as affect also emerged and lend new insight into the role that language, socio-cultural norms and perhaps ethnicity play in shaping children's everyday ideas. The findings imply the need for (a) a change in the role of science teachers from knowledge providers to social developers, (b) a science curriculum that is specific to learners' experiences in different socio-cultural settings, and (c) a shift away from inter-country comparisons using international science test scores.

  7. Category learning in the color-word contingency learning paradigm.

    PubMed

    Schmidt, James R; Augustinova, Maria; De Houwer, Jan

    2018-04-01

    In the typical color-word contingency learning paradigm, participants respond to the print color of words where each word is presented most often in one color. Learning is indicated by faster and more accurate responses when a word is presented in its usual color, relative to another color. To eliminate the possibility that this effect is driven exclusively by the familiarity of item-specific word-color pairings, we examine whether contingency learning effects can be observed also when colors are related to categories of words rather than to individual words. To this end, the reported experiments used three categories of words (animals, verbs, and professions) that were each predictive of one color. Importantly, each individual word was presented only once, thus eliminating individual color-word contingencies. Nevertheless, for the first time, a category-based contingency effect was observed, with faster and more accurate responses when a category item was presented in the color in which most of the other items of that category were presented. This finding helps to constrain episodic learning models and sets the stage for new research on category-based contingency learning.

  8. Guidance of attention by information held in working memory.

    PubMed

    Calleja, Marissa Ortiz; Rich, Anina N

    2013-05-01

    Information held in working memory (WM) can guide attention during visual search. The authors of recent studies have interpreted the effect of holding verbal labels in WM as guidance of visual attention by semantic information. In a series of experiments, we tested how attention is influenced by visual features versus category-level information about complex objects held in WM. Participants either memorized an object's image or its category. While holding this information in memory, they searched for a target in a four-object search display. On exact-match trials, the memorized item reappeared as a distractor in the search display. On category-match trials, another exemplar of the memorized item appeared as a distractor. On neutral trials, none of the distractors were related to the memorized object. We found attentional guidance in visual search on both exact-match and category-match trials in Experiment 1, in which the exemplars were visually similar. When we controlled for visual similarity among the exemplars by using four possible exemplars (Exp. 2) or by using two exemplars rated as being visually dissimilar (Exp. 3), we found attentional guidance only on exact-match trials when participants memorized the object's image. The same pattern of results held when the target was invariant (Exps. 2-3) and when the target was defined semantically and varied in visual features (Exp. 4). The findings of these experiments suggest that attentional guidance by WM requires active visual information.

  9. Functional Connectivity Between Superior Parietal Lobule and Primary Visual Cortex "at Rest" Predicts Visual Search Efficiency.

    PubMed

    Bueichekú, Elisenda; Ventura-Campos, Noelia; Palomar-García, María-Ángeles; Miró-Padilla, Anna; Parcet, María-Antonia; Ávila, César

    2015-10-01

    Spatiotemporal activity that emerges spontaneously "at rest" has been proposed to reflect individual a priori biases in cognitive processing. This research focused on testing neurocognitive models of visual attention by studying the functional connectivity (FC) of the superior parietal lobule (SPL), given its central role in establishing priority maps during visual search tasks. Twenty-three human participants completed a functional magnetic resonance imaging session that featured a resting-state scan, followed by a visual search task based on the alphanumeric category effect. As expected, the behavioral results showed longer reaction times and more errors for the within-category (i.e., searching a target letter among letters) than the between-category search (i.e., searching a target letter among numbers). The within-category condition was related to greater activation of the superior and inferior parietal lobules, occipital cortex, inferior frontal cortex, dorsal anterior cingulate cortex, and the superior colliculus than the between-category search. The resting-state FC analysis of the SPL revealed a broad network that included connections with the inferotemporal cortex, dorsolateral prefrontal cortex, and dorsal frontal areas like the supplementary motor area and frontal eye field. Notably, the regression analysis revealed that the more efficient participants in the visual search showed stronger FC between the SPL and areas of primary visual cortex (V1) related to the search task. We shed some light on how the SPL establishes a priority map of the environment during visual attention tasks and how FC is a valuable tool for assessing individual differences while performing cognitive tasks.

  10. Mapping the meanings of novel visual symbols by youth with moderate or severe mental retardation.

    PubMed

    Romski, M A; Sevcik, R A; Robinson, B F; Mervis, C B; Bertrand, J

    1996-01-01

    The word-learning ability of 12 school-age subjects with moderate or severe mental retardation was assessed. Subjects had little or no functional speech and used the System for Augmenting Language with visual-graphic symbols for communication. Their ability to fast map novel symbols revealed whether they possessed the novel name-nameless category (N3C) lexical operating principle. On first exposure, 7 subjects were able to map symbol meanings for novel objects. Follow-up assessments indicated that mappers retained comprehension of some of the novel words for delays of up to 15 days and generalized their knowledge to production. Ability to fast map reliably was related to symbol achievement status. Implications for understanding vocabulary acquisition by youth with mental retardation were discussed.

  11. Cogito ergo video: Task-relevant information is involuntarily boosted into awareness.

    PubMed

    Gayet, Surya; Brascamp, Jan W; Van der Stigchel, Stefan; Paffen, Chris L E

    2015-01-01

    Only part of the visual information that impinges on our retinae reaches visual awareness. In a series of three experiments, we investigated how the task relevance of incoming visual information affects its access to visual awareness. On each trial, participants were instructed to memorize one of two presented hues, drawn from different color categories (e.g., red and green), for later recall. During the retention interval, participants were presented with a differently colored grating in each eye such as to elicit binocular rivalry. A grating matched either the task-relevant (memorized) color category or the task-irrelevant (nonmemorized) color category. We found that the rivalrous stimulus that matched the task-relevant color category tended to dominate awareness over the rivalrous stimulus that matched the task-irrelevant color category. This effect of task relevance persisted when participants reported the orientation of the rivalrous stimuli, even though in this case color information was completely irrelevant for the task of reporting perceptual dominance during rivalry. When participants memorized the shape of a colored stimulus, however, its color category did not affect predominance of rivalrous stimuli during retention. Taken together, these results indicate that the selection of task-relevant information is under volitional control but that visual input that matches this information is boosted into awareness irrespective of whether this is useful for the observer.

  12. Perceptual expertise and top-down expectation of musical notation engages the primary visual cortex.

    PubMed

    Wong, Yetta Kwailing; Peng, Cynthia; Fratus, Kristyn N; Woodman, Geoffrey F; Gauthier, Isabel

    2014-08-01

    Most theories of visual processing propose that object recognition is achieved in higher visual cortex. However, we show that category selectivity for musical notation can be observed in the first ERP component called the C1 (measured 40-60 msec after stimulus onset) with music-reading expertise. Moreover, the C1 note selectivity was observed only when the stimulus category was blocked but not when the stimulus category was randomized. Under blocking, the C1 activity for notes predicted individual music-reading ability, and behavioral judgments of musical stimuli reflected music-reading skill. Our results challenge current theories of object recognition, indicating that the primary visual cortex can be selective for musical notation within the initial feedforward sweep of activity with perceptual expertise and with a testing context that is consistent with the expertise training, such as blocking the stimulus category for music reading.

  13. Before the N400: effects of lexical-semantic violations in visual cortex.

    PubMed

    Dikker, Suzanne; Pylkkanen, Liina

    2011-07-01

    There exists an increasing body of research demonstrating that language processing is aided by context-based predictions. Recent findings suggest that the brain generates estimates about the likely physical appearance of upcoming words based on syntactic predictions: words that do not physically look like the expected syntactic category show increased amplitudes in the visual M100 component, the first salient MEG response to visual stimulation. This research asks whether violations of predictions based on lexical-semantic information might similarly generate early visual effects. In a picture-noun matching task, we found early visual effects for words that did not accurately describe the preceding pictures. These results demonstrate that, just like syntactic predictions, lexical-semantic predictions can affect early visual processing around ∼100ms, suggesting that the M100 response is not exclusively tuned to recognizing visual features relevant to syntactic category analysis. Rather, the brain might generate predictions about upcoming visual input whenever it can. However, visual effects of lexical-semantic violations only occurred when a single lexical item could be predicted. We argue that this may be due to the fact that in natural language processing, there is typically no straightforward mapping between lexical-semantic fields (e.g., flowers) and visual or auditory forms (e.g., tulip, rose, magnolia). For syntactic categories, in contrast, certain form features do reliably correlate with category membership. This difference may, in part, explain why certain syntactic effects typically occur much earlier than lexical-semantic effects.

  14. Neural correlates of face gender discrimination learning.

    PubMed

    Su, Junzhu; Tan, Qingleng; Fang, Fang

    2013-04-01

    Using combined psychophysics and event-related potentials (ERPs), we investigated the effect of perceptual learning on face gender discrimination and probe the neural correlates of the learning effect. Human subjects were trained to perform a gender discrimination task with male or female faces. Before and after training, they were tested with the trained faces and other faces with the same and opposite genders. ERPs responding to these faces were recorded. Psychophysical results showed that training significantly improved subjects' discrimination performance and the improvement was specific to the trained gender, as well as to the trained identities. The training effect indicates that learning occurs at two levels-the category level (gender) and the exemplar level (identity). ERP analyses showed that the gender and identity learning was associated with the N170 latency reduction at the left occipital-temporal area and the N170 amplitude reduction at the right occipital-temporal area, respectively. These findings provide evidence for the facilitation model and the sharpening model on neuronal plasticity from visual experience, suggesting a faster processing speed and a sparser representation of face induced by perceptual learning.

  15. Rule-Based and Information-Integration Category Learning in Normal Aging

    ERIC Educational Resources Information Center

    Maddox, W. Todd; Pacheco, Jennifer; Reeves, Maia; Zhu, Bo; Schnyer, David M.

    2010-01-01

    The basal ganglia and prefrontal cortex play critical roles in category learning. Both regions evidence age-related structural and functional declines. The current study examined rule-based and information-integration category learning in a group of older and younger adults. Rule-based learning is thought to involve explicit, frontally mediated…

  16. Metacognitive monitoring during category learning: how success affects future behaviour.

    PubMed

    Doyle, Mario E; Hourihan, Kathleen L

    2016-10-01

    The purpose of this study was to see how people perceive their own learning during a category learning task, and whether their perceptions matched their performance. In two experiments, participants were asked to learn natural categories, of both high and low variability, and make category learning judgements (CLJs). Variability was manipulated by varying the number of exemplars and the number of times each exemplar was presented within each category. Experiment 1 showed that participants were generally overconfident in their knowledge of low variability families, suggesting that they considered repetition to be more useful for learning than it actually was. Also, a correct trial, for a particular category, was more likely to occur if the previous trial was correct. CLJs had the largest increase when a trial was correct following an incorrect trial and the largest decrease when an incorrect trial followed a correct trial. Experiment 2 replicated these results, but also demonstrated that global CLJ ratings showed the same bias towards repetition. These results indicate that we generally identify success as being the biggest determinant of learning, but do not always recognise cues, such as variability, that enhance learning.

  17. Working memory supports inference learning just like classification learning.

    PubMed

    Craig, Stewart; Lewandowsky, Stephan

    2013-08-01

    Recent research has found a positive relationship between people's working memory capacity (WMC) and their speed of category learning. To date, only classification-learning tasks have been considered, in which people learn to assign category labels to objects. It is unknown whether learning to make inferences about category features might also be related to WMC. We report data from a study in which 119 participants undertook classification learning and inference learning, and completed a series of WMC tasks. Working memory capacity was positively related to people's classification and inference learning performance.

  18. Differential item functioning analysis of the Vanderbilt Expertise Test for cars.

    PubMed

    Lee, Woo-Yeol; Cho, Sun-Joo; McGugin, Rankin W; Van Gulick, Ana Beth; Gauthier, Isabel

    2015-01-01

    The Vanderbilt Expertise Test for cars (VETcar) is a test of visual learning for contemporary car models. We used item response theory to assess the VETcar and in particular used differential item functioning (DIF) analysis to ask if the test functions the same way in laboratory versus online settings and for different groups based on age and gender. An exploratory factor analysis found evidence of multidimensionality in the VETcar, although a single dimension was deemed sufficient to capture the recognition ability measured by the test. We selected a unidimensional three-parameter logistic item response model to examine item characteristics and subject abilities. The VETcar had satisfactory internal consistency. A substantial number of items showed DIF at a medium effect size for test setting and for age group, whereas gender DIF was negligible. Because online subjects were on average older than those tested in the lab, we focused on the age groups to conduct a multigroup item response theory analysis. This revealed that most items on the test favored the younger group. DIF could be more the rule than the exception when measuring performance with familiar object categories, therefore posing a challenge for the measurement of either domain-general visual abilities or category-specific knowledge.
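    The unidimensional three-parameter logistic (3PL) model selected for the VETcar defines the probability of a correct response given ability θ, item discrimination a, difficulty b, and a guessing floor c. A minimal sketch of the item response function (the parameter values below are illustrative, not the fitted VETcar estimates):

    ```python
    import numpy as np

    def p_correct_3pl(theta, a, b, c):
        """3PL item response function: P(correct | ability theta)."""
        return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

    theta = np.linspace(-3, 3, 7)
    # hypothetical item: moderate discrimination, guessing floor of 0.25
    print(p_correct_3pl(theta, a=1.2, b=0.5, c=0.25))
    ```

    In these terms, DIF means the same item traces different response curves for two groups matched on θ, for example a lower fitted difficulty b for younger than for older subjects.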

  19. Observation versus classification in supervised category learning.

    PubMed

    Levering, Kimery R; Kurtz, Kenneth J

    2015-02-01

    The traditional supervised classification paradigm encourages learners to acquire only the knowledge needed to predict category membership (a discriminative approach). An alternative that aligns with important aspects of real-world concept formation is learning with a broader focus to acquire knowledge of the internal structure of each category (a generative approach). Our work addresses the impact of a particular component of the traditional classification task: the guess-and-correct cycle. We compare classification learning to a supervised observational learning task in which learners are shown labeled examples but make no classification response. The goals of this work sit at two levels: (1) testing for differences in the nature of the category representations that arise from two basic learning modes; and (2) evaluating the generative/discriminative continuum as a theoretical tool for understanding learning modes and their outcomes. Specifically, we view the guess-and-correct cycle as consistent with a more discriminative approach and therefore expected it to lead to narrower category knowledge. Across two experiments, the observational mode led to greater sensitivity to distributional properties of features and correlations between features. We conclude that a relatively subtle procedural difference in supervised category learning substantially impacts what learners come to know about the categories. The results demonstrate the value of the generative/discriminative continuum as a tool for advancing the psychology of category learning and also provide a valuable constraint for formal models and associated theories.
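    The generative/discriminative contrast invoked here can be made concrete: a generative learner models each category's internal structure (e.g., feature means and correlations), while a discriminative learner fits only the boundary between categories. A minimal sketch on synthetic two-feature categories (all distributions and parameters are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # two categories whose features are correlated (within-category structure)
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    A = rng.multivariate_normal([0, 0], cov, 100)
    B = rng.multivariate_normal([3, 3], cov, 100)
    X = np.vstack([A, B])
    y = np.r_[np.zeros(100), np.ones(100)]

    # generative model: class-conditional Gaussians, which retain the
    # within-category correlational structure
    mu = [A.mean(axis=0), B.mean(axis=0)]
    S = [np.cov(A.T), np.cov(B.T)]

    # discriminative model: a linear boundary via least squares on labels only
    Xb = np.c_[X, np.ones(200)]
    w = np.linalg.lstsq(Xb, 2 * y - 1, rcond=None)[0]
    pred = (Xb @ w > 0).astype(float)

    print("discriminative accuracy:", (pred == y).mean())
    print("generative model's within-category correlation:",
          S[0][0, 1] / np.sqrt(S[0][0, 0] * S[0][1, 1]))
    ```

    Both learners classify well here, but only the generative model stores the feature correlation, which parallels the observational group's greater sensitivity to distributional properties.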

  20. Temporal stability of visually selective responses in intracranial field potentials recorded from human occipital and temporal lobes

    PubMed Central

    Bansal, Arjun K.; Singer, Jedediah M.; Anderson, William S.; Golby, Alexandra; Madsen, Joseph R.

    2012-01-01

    The cerebral cortex needs to maintain information for long time periods while at the same time being capable of learning and adapting to changes. The degree of stability of physiological signals in the human brain in response to external stimuli over temporal scales spanning hours to days remains unclear. Here, we quantitatively assessed the stability across sessions of visually selective intracranial field potentials (IFPs) elicited by brief flashes of visual stimuli presented to 27 subjects. The interval between sessions ranged from hours to multiple days. We considered electrodes that showed robust visual selectivity to different shapes; these electrodes were typically located in the inferior occipital gyrus, the inferior temporal cortex, and the fusiform gyrus. We found that IFP responses showed a strong degree of stability across sessions. This stability was evident in averaged responses as well as single-trial decoding analyses, at the image exemplar level as well as at the category level, across different parts of visual cortex, and for three different visual recognition tasks. These results establish a quantitative evaluation of the degree of stationarity of visually selective IFP responses within and across sessions and provide a baseline for studies of cortical plasticity and for the development of brain-machine interfaces. PMID:22956795

  1. Linguistic experience and audio-visual perception of non-native fricatives.

    PubMed

    Wang, Yue; Behne, Dawn M; Jiang, Haisheng

    2008-09-01

    This study examined the effects of linguistic experience on audio-visual (AV) perception of non-native (L2) speech. Canadian English natives and Mandarin Chinese natives differing in degree of English exposure [long and short length of residence (LOR) in Canada] were presented with English fricatives of three visually distinct places of articulation: interdentals nonexistent in Mandarin and labiodentals and alveolars common in both languages. Stimuli were presented in quiet and in a cafe-noise background in four ways: audio only (A), visual only (V), congruent AV (AVc), and incongruent AV (AVi). Identification results showed that overall performance was better in the AVc than in the A or V condition and better in quiet than in cafe noise. While the Mandarin long LOR group approximated the native English patterns, the short LOR group showed poorer interdental identification, more reliance on visual information, and greater AV-fusion with the AVi materials, indicating the failure of L2 visual speech category formation with the short LOR non-natives and the positive effects of linguistic experience with the long LOR non-natives. These results point to an integrated network in AV speech processing as a function of linguistic background and provide evidence to extend auditory-based L2 speech learning theories to the visual domain.

  2. Learning and retention through predictive inference and classification.

    PubMed

    Sakamoto, Yasuaki; Love, Bradley C

    2010-12-01

    Work in category learning addresses how humans acquire knowledge and, thus, should inform classroom practices. In two experiments, we apply and evaluate intuitions garnered from laboratory-based research in category learning to learning tasks situated in an educational context. In Experiment 1, learning through predictive inference and classification was compared for fifth-grade students using class-related materials. Making inferences about properties of category members and receiving feedback led to the acquisition of both queried (i.e., tested) properties and nonqueried properties that were correlated with a queried property (e.g., even if not queried, students learned about a species' habitat because it correlated with a queried property, like the species' size). In contrast, classifying items according to their species and receiving feedback led to knowledge of only the property most diagnostic of category membership. After a multiple-day delay, the fifth-graders who learned through inference selectively retained information about the queried properties, and the fifth-graders who learned through classification retained information about the diagnostic property, indicating a role for explicit evaluation in establishing memories. Overall, inference learning resulted in fewer errors, better retention, and greater liking of the categories than did classification learning. Experiment 2 revealed that querying a property only a few times was enough to manifest the full benefits of inference learning in undergraduate students. These results suggest that classroom teaching should emphasize reasoning from the category to multiple properties rather than from a set of properties to the category. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  3. Incidental learning of sound categories is impaired in developmental dyslexia.

    PubMed

    Gabay, Yafit; Holt, Lori L

    2015-12-01

    Developmental dyslexia is commonly thought to arise from specific phonological impairments. However, recent evidence is consistent with the possibility that phonological impairments arise as symptoms of an underlying dysfunction of procedural learning. The nature of the link between impaired procedural learning and phonological dysfunction is unresolved. Motivated by the observation that speech processing involves the acquisition of procedural category knowledge, the present study investigates the possibility that procedural learning impairment may affect phonological processing by interfering with the typical course of phonetic category learning. The present study tests this hypothesis while controlling for linguistic experience and possible speech-specific deficits by comparing auditory category learning across artificial, nonlinguistic sounds among dyslexic adults and matched controls in a specialized first-person shooter videogame that has been shown to engage procedural learning. Nonspeech auditory category learning was assessed online via within-game measures and also with a post-training task involving overt categorization of familiar and novel sound exemplars. Each measure reveals that dyslexic participants do not acquire procedural category knowledge as effectively as age- and cognitive-ability matched controls. This difference cannot be explained by differences in perceptual acuity for the sounds. Moreover, poor nonspeech category learning is associated with slower phonological processing. Whereas phonological processing impairments have been emphasized as the cause of dyslexia, the current results suggest that impaired auditory category learning, general in nature and not specific to speech signals, could contribute to phonological deficits in dyslexia with subsequent negative effects on language acquisition and reading. Implications for the neuro-cognitive mechanisms of developmental dyslexia are discussed. Copyright © 2015 Elsevier Ltd. 
All rights reserved.

  4. Incidental Learning of Sound Categories is Impaired in Developmental Dyslexia

    PubMed Central

    Gabay, Yafit; Holt, Lori L.

    2015-01-01

    Developmental dyslexia is commonly thought to arise from specific phonological impairments. However, recent evidence is consistent with the possibility that phonological impairments arise as symptoms of an underlying dysfunction of procedural learning. The nature of the link between impaired procedural learning and phonological dysfunction is unresolved. Motivated by the observation that speech processing involves the acquisition of procedural category knowledge, the present study investigates the possibility that procedural learning impairment may affect phonological processing by interfering with the typical course of phonetic category learning. The present study tests this hypothesis while controlling for linguistic experience and possible speech-specific deficits by comparing auditory category learning across artificial, nonlinguistic sounds among dyslexic adults and matched controls in a specialized first-person shooter videogame that has been shown to engage procedural learning. Nonspeech auditory category learning was assessed online via within-game measures and also with a post-training task involving overt categorization of familiar and novel sound exemplars. Each measure reveals that dyslexic participants do not acquire procedural category knowledge as effectively as age- and cognitive-ability matched controls. This difference cannot be explained by differences in perceptual acuity for the sounds. Moreover, poor nonspeech category learning is associated with slower phonological processing. Whereas phonological processing impairments have been emphasized as the cause of dyslexia, the current results suggest that impaired auditory category learning, general in nature and not specific to speech signals, could contribute to phonological deficits in dyslexia with subsequent negative effects on language acquisition and reading. Implications for the neuro-cognitive mechanisms of developmental dyslexia are discussed. PMID:26409017

  5. Neuronal integration in visual cortex elevates face category tuning to conscious face perception

    PubMed Central

    Fahrenfort, Johannes J.; Snijders, Tineke M.; Heinen, Klaartje; van Gaal, Simon; Scholte, H. Steven; Lamme, Victor A. F.

    2012-01-01

    The human brain has the extraordinary capability to transform cluttered sensory input into distinct object representations. For example, it is able to rapidly and seemingly without effort detect object categories in complex natural scenes. Surprisingly, category tuning is not sufficient to achieve conscious recognition of objects. What neural process beyond category extraction might elevate neural representations to the level where objects are consciously perceived? Here we show that visible and invisible faces produce similar category-selective responses in the ventral visual cortex. The pattern of neural activity evoked by visible faces could be used to decode the presence of invisible faces and vice versa. However, only visible faces caused extensive response enhancements and changes in neural oscillatory synchronization, as well as increased functional connectivity between higher and lower visual areas. We conclude that conscious face perception is more tightly linked to neural processes of sustained information integration and binding than to processes accommodating face category tuning. PMID:23236162

  6. Discovery learning with SAVI approach in geometry learning

    NASA Astrophysics Data System (ADS)

    Sahara, R.; Mardiyana; Saputro, D. R. S.

    2018-05-01

    Geometry is a branch of mathematics that plays an important role in school mathematics learning. This research examines the effect of Discovery Learning with the SAVI approach on geometry learning achievement. The research was conducted at a junior high school in Surakarta city. Research data were obtained through a test and a questionnaire and analyzed using two-way ANOVA. The results showed that Discovery Learning with the SAVI approach has a positive influence on mathematics learning achievement: it yields better mathematics learning outcomes than direct learning. In addition, students in the high self-efficacy category achieved better in mathematics than those in the moderate and low self-efficacy categories, and students in the moderate self-efficacy category achieved better than those in the low self-efficacy category. There is an interaction between Discovery Learning with the SAVI approach and self-efficacy with respect to students' mathematics learning achievement. Therefore, Discovery Learning with the SAVI approach can improve mathematics learning achievement.

  7. Implications of CI therapy for visual deficit training

    PubMed Central

    Taub, Edward; Mark, Victor W.; Uswatte, Gitendra

    2014-01-01

    We address here the question of whether the techniques of Constraint Induced (CI) therapy, a family of treatments that has been employed in the rehabilitation of movement and language after brain damage, might apply to the rehabilitation of such visual deficits as unilateral spatial neglect and visual field deficits. CI therapy has been used successfully for the upper and lower extremities after chronic stroke, cerebral palsy (CP), multiple sclerosis (MS), other central nervous system (CNS) degenerative conditions, resection of motor areas of the brain, focal hand dystonia, and aphasia. Treatments making use of similar methods have proven efficacious for amblyopia. The CI therapy approach consists of four major components: intensive training, training by shaping, a “transfer package” to facilitate the transfer of gains from the treatment setting to everyday activities, and strong discouragement of compensatory strategies. CI therapy is said to be effective because it overcomes learned nonuse, a learned inhibition of movement that follows injury to the CNS. In addition, CI therapy produces substantial increases in the gray matter of motor areas on both sides of the brain. We propose here that these mechanisms are examples of more general processes: learned nonuse being considered parallel to sensory nonuse following damage to sensory areas of the brain, with both having in common diminished neural connections (DNCs) in the nervous system as an underlying mechanism. CI therapy would achieve its therapeutic effect by strengthening the DNCs. Use-dependent cortical reorganization is considered to be an example of the more general neuroplastic mechanism of brain structure repurposing. If the mechanisms involved in these broader categories are involved in each of the deficits being considered, then the principles underlying efficacious treatment in each case may be similar. The lessons learned during CI therapy research might then prove useful for the treatment of visual deficits. PMID:25346665

  8. A deep learning method for classifying mammographic breast density categories.

    PubMed

    Mohamed, Aly A; Berg, Wendie A; Peng, Hong; Luo, Yahong; Jankowitz, Rachel C; Wu, Shandong

    2018-01-01

    Mammographic breast density is an established risk marker for breast cancer and is visually assessed by radiologists in routine mammogram image reading, using four qualitative Breast Imaging Reporting and Data System (BI-RADS) breast density categories. It is particularly difficult for radiologists to consistently distinguish the two most common and most variably assigned BI-RADS categories, i.e., "scattered density" and "heterogeneously dense". The aim of this work was to investigate a deep learning-based breast density classifier to consistently distinguish these two categories, aiming at providing a potential computerized tool to assist radiologists in assigning a BI-RADS category in current clinical workflow. In this study, we constructed a convolutional neural network (CNN)-based model coupled with a large (i.e., 22,000 images) digital mammogram imaging dataset to evaluate the classification performance between the two aforementioned breast density categories. All images were collected from a cohort of 1,427 women who underwent standard digital mammography screening from 2005 to 2016 at our institution. Ground truth for the density categories was based on standard clinical assessment made by board-certified breast imaging radiologists. Effects of direct training from scratch solely using digital mammogram images and transfer learning of a pretrained model on a large nonmedical imaging dataset were evaluated for the specific task of breast density classification. In order to measure the classification performance, the CNN classifier was also tested on a refined version of the mammogram image dataset created by removing some potentially inaccurately labeled images. Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were used to measure the accuracy of the classifier. 
    The AUC was 0.9421 when the CNN model was trained from scratch on our own mammogram images, and the accuracy increased gradually along with an increased size of training samples. Using the pretrained model followed by a fine-tuning process with as few as 500 mammogram images led to an AUC of 0.9265. After removing the potentially inaccurately labeled images, the AUC increased to 0.9882 without and 0.9857 with the pretrained model, both significantly higher (P < 0.001) than when using the full imaging dataset. Our study demonstrated high classification accuracies between two difficult-to-distinguish breast density categories that are routinely assessed by radiologists. We anticipate that our approach will help enhance current clinical assessment of breast density and better support consistent density notification to patients in breast cancer screening. © 2017 American Association of Physicists in Medicine.
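
    As a brief illustration of the AUC metric this record uses to evaluate the classifier (not the authors' code, and with made-up toy scores), the area under the ROC curve equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case, with ties counted as one half:

    ```python
    def roc_auc(labels, scores):
        """AUC via the rank-sum (Mann-Whitney) identity:
        AUC = P(score of random positive > score of random negative),
        ties counted as 1/2. labels are 1 (positive) or 0 (negative)."""
        pos = [s for y, s in zip(labels, scores) if y == 1]
        neg = [s for y, s in zip(labels, scores) if y == 0]
        if not pos or not neg:
            raise ValueError("need at least one positive and one negative")
        wins = 0.0
        for p in pos:
            for n in neg:
                if p > n:
                    wins += 1.0
                elif p == n:
                    wins += 0.5
        return wins / (len(pos) * len(neg))

    # Toy scores for "heterogeneously dense" (1) vs. "scattered density" (0)
    labels = [1, 1, 1, 0, 0, 0]
    scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
    print(round(roc_auc(labels, scores), 4))  # → 0.8889
    ```

    A perfectly separating classifier would score 1.0 on this measure; chance performance is 0.5, which is why the reported AUCs above 0.94 indicate strong discrimination between the two categories.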

  9. Category-based guidance of spatial attention during visual search for feature conjunctions.

    PubMed

    Nako, Rebecca; Grubert, Anna; Eimer, Martin

    2016-10-01

    The question whether alphanumerical category is involved in the control of attentional target selection during visual search remains a contentious issue. We tested whether category-based attentional mechanisms would guide the allocation of attention under conditions where targets were defined by a combination of alphanumerical category and a basic visual feature, and search displays could contain both targets and partially matching distractor objects. The N2pc component was used as an electrophysiological marker of attentional object selection in tasks where target objects were defined by a conjunction of color and category (Experiment 1) or shape and category (Experiment 2). Some search displays contained the target or a nontarget object that matched either the target color/shape or its category among 3 nonmatching distractors. In other displays, the target and a partially matching nontarget object appeared together. N2pc components were elicited not only by targets and by color- or shape-matching nontargets, but also by category-matching nontarget objects, even on trials where a target was present in the same display. On these trials, the summed N2pc components to the 2 types of partially matching nontargets were initially equal in size to the target N2pc, suggesting that attention was allocated simultaneously and independently to all objects with target-matching features during the early phase of attentional processing. Results demonstrate that alphanumerical category is a genuine guiding feature that can operate in parallel with color or shape information to control the deployment of attention during visual search. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  10. A conflict-based model of color categorical perception: evidence from a priming study.

    PubMed

    Hu, Zhonghua; Hanley, J Richard; Zhang, Ruiling; Liu, Qiang; Roberson, Debi

    2014-10-01

    Categorical perception (CP) of color manifests as faster or more accurate discrimination of two shades of color that straddle a category boundary (e.g., one blue and one green) than of two shades from within the same category (e.g., two different shades of green), even when the differences between the pairs of colors are equated according to some objective metric. The results of two experiments provide new evidence for a conflict-based account of this effect, in which CP is caused by competition between visual and verbal/categorical codes on within-category trials. According to this view, conflict arises because the verbal code indicates that the two colors are the same, whereas the visual code indicates that they are different. In Experiment 1, two shades from the same color category were discriminated significantly faster when the previous trial also comprised a pair of within-category colors than when the previous trial comprised a pair from two different color categories. Under the former circumstances, the CP effect disappeared. According to the conflict-based model, response conflict between visual and categorical codes during discrimination of within-category pairs produced an adjustment of cognitive control that reduced the weight given to the categorical code relative to the visual code on the subsequent trial. Consequently, responses on within-category trials were facilitated, and CP effects were reduced. The effectiveness of this conflict-based account was evaluated in comparison with an alternative view that CP reflects temporary warping of perceptual space at the boundaries between color categories.

  11. Developmental Changes between Childhood and Adulthood in Passive Observational and Interactive Feedback-Based Categorization Rule Learning

    ERIC Educational Resources Information Center

    Hammer, Rubi; Kloet, Jim; Booth, James R.

    2016-01-01

    As children start attending school they are more likely to face situations where they have to autonomously learn about novel object categories (e.g. by reading a picture book with descriptions of novel animals). Such autonomous observational category learning (OCL) gradually complements interactive feedback-based category learning (FBCL), where a…

  12. Elevated depressive symptoms enhance reflexive but not reflective auditory category learning.

    PubMed

    Maddox, W Todd; Chandrasekaran, Bharath; Smayda, Kirsten; Yi, Han-Gyol; Koslov, Seth; Beevers, Christopher G

    2014-09-01

    In vision an extensive literature supports the existence of competitive dual-processing systems of category learning that are grounded in neuroscience and are partially-dissociable. The reflective system is prefrontally-mediated and uses working memory and executive attention to develop and test rules for classifying in an explicit fashion. The reflexive system is striatally-mediated and operates by implicitly associating perception with actions that lead to reinforcement. Although categorization is fundamental to auditory processing, little is known about the learning systems that mediate auditory categorization and even less is known about the effects of individual difference in the relative efficiency of the two learning systems. Previous studies have shown that individuals with elevated depressive symptoms show deficits in reflective processing. We exploit this finding to test critical predictions of the dual-learning systems model in audition. Specifically, we examine the extent to which the two systems are dissociable and competitive. We predicted that elevated depressive symptoms would lead to reflective-optimal learning deficits but reflexive-optimal learning advantages. Because natural speech category learning is reflexive in nature, we made the prediction that elevated depressive symptoms would lead to superior speech learning. In support of our predictions, individuals with elevated depressive symptoms showed a deficit in reflective-optimal auditory category learning, but an advantage in reflexive-optimal auditory category learning. In addition, individuals with elevated depressive symptoms showed an advantage in learning a non-native speech category structure. Computational modeling suggested that the elevated depressive symptom advantage was due to faster, more accurate, and more frequent use of reflexive category learning strategies in individuals with elevated depressive symptoms. 
The implications of this work for a dual-process approach to auditory learning and depression are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Elevated Depressive Symptoms Enhance Reflexive but not Reflective Auditory Category Learning

    PubMed Central

    Maddox, W. Todd; Chandrasekaran, Bharath; Smayda, Kirsten; Yi, Han-Gyol; Koslov, Seth; Beevers, Christopher G.

    2014-01-01

    In vision an extensive literature supports the existence of competitive dual-processing systems of category learning that are grounded in neuroscience and are partially-dissociable. The reflective system is prefrontally-mediated and uses working memory and executive attention to develop and test rules for classifying in an explicit fashion. The reflexive system is striatally-mediated and operates by implicitly associating perception with actions that lead to reinforcement. Although categorization is fundamental to auditory processing, little is known about the learning systems that mediate auditory categorization and even less is known about the effects of individual difference in the relative efficiency of the two learning systems. Previous studies have shown that individuals with elevated depressive symptoms show deficits in reflective processing. We exploit this finding to test critical predictions of the dual-learning systems model in audition. Specifically, we examine the extent to which the two systems are dissociable and competitive. We predicted that elevated depressive symptoms would lead to reflective-optimal learning deficits but reflexive-optimal learning advantages. Because natural speech category learning is reflexive in nature, we made the prediction that elevated depressive symptoms would lead to superior speech learning. In support of our predictions, individuals with elevated depressive symptoms showed a deficit in reflective-optimal auditory category learning, but an advantage in reflexive-optimal auditory category learning. In addition, individuals with elevated depressive symptoms showed an advantage in learning a non-native speech category structure. Computational modeling suggested that the elevated depressive symptom advantage was due to faster, more accurate, and more frequent use of reflexive category learning strategies in individuals with elevated depressive symptoms. 
The implications of this work for a dual-process approach to auditory learning and depression are discussed. PMID:25041936

  14. Pigeons acquire multiple categories in parallel via associative learning: A parallel to human word learning?

    PubMed Central

    Wasserman, Edward A.; Brooks, Daniel I.; McMurray, Bob

    2014-01-01

    Might there be parallels between category learning in animals and word learning in children? To examine this possibility, we devised a new associative learning technique for teaching pigeons to sort 128 photographs of objects into 16 human language categories. We found that pigeons learned all 16 categories in parallel, they perceived the perceptual coherence of the different object categories, and they generalized their categorization behavior to novel photographs from the training categories. More detailed analyses of the factors that predict trial-by-trial learning implicated a number of factors that may shape learning. First, we found considerable trial-by-trial dependency of pigeons’ categorization responses, consistent with several recent studies that invoke this dependency to claim that humans acquire words via symbolic or inferential mechanisms; this finding suggests that such dependencies may also arise in associative systems. Second, our trial-by-trial analyses divulged seemingly irrelevant aspects of the categorization task, like the spatial location of the report responses, which influenced learning. Third, those trial-by-trial analyses also supported the possibility that learning may be determined both by strengthening correct stimulus-response associations and by weakening incorrect stimulus-response associations. The parallel between all these findings and important aspects of human word learning suggests that associative learning mechanisms may play a much stronger part in complex human behavior than is commonly believed. PMID:25497520
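
    The strengthening/weakening account described at the end of this record can be sketched with a simple associative update rule. This is a hypothetical illustration, not the authors' model; the function name, learning rate, and stimulus/response labels are invented for the example:

    ```python
    # Sketch of an associative account: a correct response strengthens the
    # winning stimulus-response association toward a ceiling of 1, while an
    # error weakens the association that produced the wrong response toward 0.
    def update(weights, stimulus, response, correct, lr=0.1):
        """weights: dict mapping (stimulus, response) -> associative strength."""
        key = (stimulus, response)
        w = weights.get(key, 0.0)
        if correct:
            weights[key] = w + lr * (1.0 - w)  # strengthen, diminishing returns
        else:
            weights[key] = w - lr * w          # weaken proportionally
        return weights

    w = {}
    update(w, "dog_photo", "peck_key_3", correct=True)
    update(w, "dog_photo", "peck_key_7", correct=False)  # no-op from 0.0
    print(w[("dog_photo", "peck_key_3")])  # → 0.1
    ```

    Under this rule, repeated correct pairings asymptote toward full strength while errors decay competing associations, which is one way an associative system could produce the trial-by-trial dependencies the record describes without invoking symbolic inference.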

  15. Timing of quizzes during learning: Effects on motivation and retention.

    PubMed

    Healy, Alice F; Jones, Matt; Lalchandani, Lakshmi A; Tack, Lindsay Anderson

    2017-06-01

    This article investigates how the timing of quizzes given during learning impacts retention of studied material. We investigated the hypothesis that interspersing quizzes among study blocks increases student engagement, thus improving learning. Participants learned 8 artificial facts about each of 8 plant categories, with the categories blocked during learning. Quizzes about 4 of the 8 facts from each category occurred either immediately after studying the facts for that category (standard) or after studying the facts from all 8 categories (postponed). In Experiment 1, participants were given tests shortly after learning and several days later, including both the initially quizzed and unquizzed facts. Test performance was better in the standard than in the postponed condition, especially for categories learned later in the sequence. This result held even for the facts not quizzed during learning, suggesting that the advantage cannot be due to any direct testing effects. Instead the results support the hypothesis that interrupting learning with quiz questions is beneficial because it can enhance learner engagement. Experiment 2 provided further support for this hypothesis, based on participants' retrospective ratings of their task engagement during the learning phase. These findings have practical implications for when to introduce quizzes in the classroom. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Can Semi-Supervised Learning Explain Incorrect Beliefs about Categories?

    ERIC Educational Resources Information Center

    Kalish, Charles W.; Rogers, Timothy T.; Lang, Jonathan; Zhu, Xiaojin

    2011-01-01

    Three experiments with 88 college-aged participants explored how unlabeled experiences--learning episodes in which people encounter objects without information about their category membership--influence beliefs about category structure. Participants performed a simple one-dimensional categorization task in a brief supervised learning phase, then…

  17. Explanation and Prior Knowledge Interact to Guide Learning

    ERIC Educational Resources Information Center

    Williams, Joseph J.; Lombrozo, Tania

    2013-01-01

    How do explaining and prior knowledge contribute to learning? Four experiments explored the relationship between explanation and prior knowledge in category learning. The experiments independently manipulated whether participants were prompted to explain the category membership of study observations and whether category labels were informative in…

  18. Conventional wisdom: negotiating conventions of reference enhances category learning.

    PubMed

    Voiklis, John; Corter, James E

    2012-01-01

    Collaborators generally coordinate their activities through communication, during which they readily negotiate a shared lexicon for activity-related objects. This social-pragmatic activity both recruits and affects cognitive and social-cognitive processes ranging from selective attention to perspective taking. We ask whether negotiating reference also facilitates category learning, or whether private verbalization yields comparable facilitation. Participants in three referential conditions learned to classify imaginary creatures according to combinations of functional features (nutritive and destructive) that implicitly defined four categories. Remote partners communicated in the Dialogue condition. In the Monologue condition, participants recorded audio descriptions for their own later use. Controls worked silently. Dialogue yielded better category learning, with wider distribution of attention. Monologue offered no benefits over working silently. We conclude that negotiating reference compels collaborators to find communicable structure in their shared activity; this social-pragmatic constraint accelerates category learning and likely provides much of the benefit recently ascribed to learning labeled categories. Copyright © 2012 Cognitive Science Society, Inc.

  19. More than one kind of inference: re-examining what's learned in feature inference and classification.

    PubMed

    Sweller, Naomi; Hayes, Brett K

    2010-08-01

    Three studies examined how task demands that impact on attention to typical or atypical category features shape the category representations formed through classification learning and inference learning. During training categories were learned via exemplar classification or by inferring missing exemplar features. In the latter condition inferences were made about missing typical features alone (typical feature inference) or about both missing typical and atypical features (mixed feature inference). Classification and mixed feature inference led to the incorporation of typical and atypical features into category representations, with both kinds of features influencing inferences about familiar (Experiments 1 and 2) and novel (Experiment 3) test items. Those in the typical inference condition focused primarily on typical features. Together with formal modelling, these results challenge previous accounts that have characterized inference learning as producing a focus on typical category features. The results show that two different kinds of inference learning are possible and that these are subserved by different kinds of category representations.

  20. Is Trypophobia a Phobia?

    PubMed

    Can, Wang; Zhuoran, Zhao; Zheng, Jin

    2017-04-01

    In the past 10 years, thousands of people have claimed to be affected by trypophobia, which is the fear of objects with small holes. Recent research suggests that people do not fear the holes; rather, images of clustered holes, which share basic visual characteristics with venomous organisms, lead to nonconscious fear. In the present study, both self-reported measures and the Preschool Single Category Implicit Association Test were adapted for use with preschoolers to investigate whether discomfort related to trypophobic stimuli was grounded in their visual features or based on a nonconsciously associated fear of venomous animals. The results indicated that trypophobic stimuli were associated with discomfort in children. This discomfort seemed to be related to the typical visual characteristics and pattern properties of trypophobic stimuli rather than to nonconscious associations with venomous animals. The association between trypophobic stimuli and venomous animals vanished when the typical visual characteristics of trypophobic features were removed from colored photos of venomous animals. Thus, the discomfort felt toward trypophobic images might be an instinctive response to their visual characteristics rather than the result of a learned but nonconscious association with venomous animals. It is therefore questionable whether trypophobia is justifiably classified as a phobia.

  1. Using Neuropsychometric Measurements in the Differential Diagnosis of Specific Learning Disability

    PubMed Central

    TURGUT TURAN, Sevil; ERDOĞAN BAKAR, Emel; ERDEN, Gülsen; KARAKAŞ, Sirel

    2016-01-01

    Introduction The aim of this study was to develop a neuropsychometric battery for the differential diagnosis of specific learning disability (SLD), with specific respect to attention deficit hyperactivity disorder (ADHD), and to help resolve the conflicting results in the literature by an integrative utilization of scores on both the Bannatyne categories and neuropsychological tests. Methods The sample included 168 primary school boys who were assigned to SLD (n=21), ADHD (n=45), SLD and ADHD (n=57), and control groups (n=45). The exclusion criteria were a neurological or psychiatric comorbidity other than ADHD, a level of anxiety and/or depression above the cutoff score, medication affecting cognitive processes, visual and/or auditory disorders, and an intelligence level outside the IQ range of 85–129. Psychometric scores were obtained from the SLD Battery and Wechsler Intelligence Scale for Children-Revised in the form of Bannatyne category scores. Neuropsychological scores were from the Visual–Aural Digit Span Test-Form B, Serial Digit Learning Test, Judgment of Line Orientation, and Mangina Test. The battery was called the Integrative Battery of SLD. Results The correctness of estimation for classifying cases into the diagnostic dyads (SLD/ADHD, SLD/SLD+ADHD, and SLD+ADHD/ADHD) by an integrative utilization of both the Bannatyne category scores (n=4) and scores from the four neuropsychological tests (n=10) was 92.4%, 81.4%, and 71.8%, respectively. These proportions were generally higher than those obtained using the Bannatyne category scores alone (86.4%, 75.5%, and 73.1%, respectively). The same trend was seen in the classification of children into diagnostic and control groups. However, the proportion of the correctness of estimation was higher than that obtained for the diagnostic dyads. 
Conclusion When conducted using appropriately chosen research designs and statistical techniques and if confounding variables are sufficiently controlled, a neuropsychometric battery that includes capacities that relate to intelligence (Bannatyne categories) and those that relate to neurocognitive processes (neuropsychological tests) can be useful in the differential diagnosis of SLD. PMID:28360787

  2. A Comparison of the neural correlates that underlie rule-based and information-integration category learning.

    PubMed

    Carpenter, Kathryn L; Wills, Andy J; Benattayallah, Abdelmalek; Milton, Fraser

    2016-10-01

    The influential competition between verbal and implicit systems (COVIS) model proposes that category learning is driven by two competing neural systems: an explicit, verbal system and a procedural-based, implicit system. In the current fMRI study, participants learned either a conjunctive rule-based (RB) category structure, which is believed to engage the explicit system, or an information-integration category structure, which is thought to preferentially recruit the implicit system. The RB and information-integration category structures were matched for participant error rate, the number of relevant stimulus dimensions, and category separation. Under these conditions, considerable overlap in brain activation, including the prefrontal cortex, basal ganglia, and hippocampus, was found between the RB and information-integration category structures. Contrary to the predictions of COVIS, the medial temporal lobes, and in particular the hippocampus, key regions for explicit memory, were found to be more active in the information-integration condition than in the RB condition. No regions were more activated in RB than in information-integration category learning. The implications of these results for theories of category learning are discussed. Hum Brain Mapp 37:3557-3574, 2016. © 2016 Wiley Periodicals, Inc.

  3. Color image definition evaluation method based on deep learning method

    NASA Astrophysics Data System (ADS)

    Liu, Di; Li, YingChun

    2018-01-01

    In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, VGG16 is used as the feature extractor to obtain 4,096-dimensional features from the images; the extracted features and image labels are then used to train the BP neural network, which finally performs the color image definition evaluation. The method was tested on images from the CSIQ database, blurred at different levels to yield 4,000 images divided into three categories, each category representing one blur level. Of the high-dimensional feature vectors, 300 out of every 400 were used to train the VGG16 and BP neural network pipeline, and the remaining 100 samples were used for testing. The experimental results show that the method takes full advantage of the learning and representation capability of deep learning. In contrast to most existing image clarity evaluation methods, which rely on manually designed and extracted features, the proposed method extracts image features automatically and achieves excellent image quality classification accuracy on the test set, with an accuracy rate of 96%. Moreover, the predicted quality levels of the original color images are similar to the perception of the human visual system.
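    The two-stage pipeline described in this abstract (a frozen deep feature extractor feeding a small trained classifier) can be sketched roughly as follows. Here randomly generated vectors stand in for the 4,096-dimensional VGG16 features, and scikit-learn's MLPClassifier stands in for the BP network; both substitutions are ours for illustration, not the paper's implementation.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for the 4,096-dimensional VGG16 features of images
    # blurred at three levels (class 0 = least blurred, class 2 = most blurred).
    n_per_class, n_features = 100, 4096
    X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
                   for c in range(3)])
    y = np.repeat(np.arange(3), n_per_class)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)

    # A single-hidden-layer MLP plays the role of the BP classification network.
    clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=200, random_state=0)
    clf.fit(X_train, y_train)
    accuracy = clf.score(X_test, y_test)
    print(f"blur-level classification accuracy: {accuracy:.2f}")
    ```

    In practice the feature extractor would be the pretrained VGG16 network with its final classification layers removed, applied to the blurred CSIQ images.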

  4. Automatic recognition of severity level for diagnosis of diabetic retinopathy using deep visual features.

    PubMed

    Abbas, Qaisar; Fondon, Irene; Sarmiento, Auxiliadora; Jiménez, Soledad; Alemany, Pedro

    2017-11-01

    Diabetic retinopathy (DR) is a leading cause of blindness among diabetic patients. Ophthalmologists must recognize its severity level to detect and diagnose DR early. However, this is a challenging task for both medical experts and computer-aided diagnosis systems because it requires extensive domain expertise. In this article, a novel automatic recognition system for the five severity levels of diabetic retinopathy (SLDR) is developed, without any pre- or post-processing of the retinal fundus images, through the learning of deep visual features (DVFs). These DVF features are extracted from each image using color-dense scale-invariant and gradient location-orientation histogram techniques. To learn these DVF features, a semi-supervised multilayer deep-learning algorithm is utilized along with a new compressed layer and fine-tuning steps. The SLDR system was evaluated and compared with state-of-the-art techniques using the measures of sensitivity (SE), specificity (SP), and area under the receiver operating characteristic curve (AUC). On 750 fundus images (150 per category), average values of 92.18% SE, 94.50% SP, and 0.924 AUC were obtained. These results demonstrate that the SLDR system is appropriate for early detection of DR and can support effective treatment planning.
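    The SE/SP/AUC measures reported above are standard binary-screening metrics and can be computed as follows; the labels and scores below are made-up illustrative values, not the paper's data.

    ```python
    import numpy as np
    from sklearn.metrics import confusion_matrix, roc_auc_score

    # Hypothetical ground-truth labels (1 = diseased) and classifier scores for
    # a small evaluation set; in the paper these would come from the SLDR system.
    y_true  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
    y_score = np.array([0.1, 0.3, 0.2, 0.6, 0.4, 0.8, 0.7, 0.9, 0.35, 0.85])
    y_pred  = (y_score >= 0.5).astype(int)   # threshold the scores at 0.5

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)   # SE: fraction of diseased eyes detected
    specificity = tn / (tn + fp)   # SP: fraction of healthy eyes correctly passed
    auc = roc_auc_score(y_true, y_score)  # threshold-free ranking quality
    print(sensitivity, specificity, auc)  # 0.8 0.8 0.92
    ```

    SE and SP depend on the chosen decision threshold, whereas the AUC summarizes performance across all thresholds, which is why papers typically report all three.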

  5. Emerging category representation in the visual forebrain hierarchy of pigeons (Columba livia).

    PubMed

    Azizi, Amir Hossein; Pusch, Roland; Koenen, Charlotte; Klatt, Sebastian; Bröcker, Franziska; Thiele, Samuel; Kellermann, Janosch; Güntürkün, Onur; Cheng, Sen

    2018-06-06

    Recognizing and categorizing visual stimuli are cognitive functions vital for survival, and an important feature of visual systems in primates as well as in birds. Visual stimuli are processed along the ventral visual pathway. At every stage in the hierarchy, neurons respond selectively to more complex features, transforming the population representation of the stimuli; it is therefore easier to read out category information in higher visual areas. While explicit category representations have been observed in the primate brain, less is known about equivalent processes in the avian brain. Even though their brain anatomies are radically different, it has been hypothesized that visual object representations are comparable across mammals and birds. In the present study, we investigated category representations in the pigeon visual forebrain using recordings from single cells responding to photographs of real-world objects. Using a linear classifier, we found that the population activity in the visual associative area mesopallium ventrolaterale (MVL) distinguishes between animate and inanimate objects, although this distinction is not required by the task. By contrast, a population of cells in the entopallium, a region that is lower in the hierarchy of visual areas and that is related to the primate extrastriate cortex, lacked this information. A model that pools the responses of simple cells, which function as edge detectors, can account for the animate vs. inanimate categorization in the MVL, but performance in the model is based on different features than in MVL. Therefore, processing in MVL cells is very likely more abstract than simple computations on the output of edge detectors. Copyright © 2018. Published by Elsevier B.V.
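    A minimal sketch of this kind of linear population read-out, using simulated firing rates and a scikit-learn classifier (the real recordings, neuron counts, and classifier details are the authors'; everything below is illustrative):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)

    # Simulated trial-by-neuron firing-rate matrix for an MVL-like population:
    # 80 trials x 50 neurons, where 10 neurons fire slightly more strongly for
    # animate (label 1) than for inanimate (label 0) stimuli.
    n_trials, n_neurons, n_informative = 80, 50, 10
    labels = rng.integers(0, 2, size=n_trials)
    rates = rng.normal(10.0, 2.0, size=(n_trials, n_neurons))
    rates[:, :n_informative] += 2.0 * labels[:, None]  # category signal

    # Linear read-out of the animate/inanimate distinction from population activity.
    decoder = LogisticRegression(max_iter=1000)
    scores = cross_val_score(decoder, rates, labels, cv=5)
    print(f"cross-validated decoding accuracy: {scores.mean():.2f}")
    ```

    Above-chance cross-validated accuracy of such a decoder is what licenses the claim that the population "distinguishes" the two categories, even when no single neuron does so reliably on its own.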

  6. Delayed Feedback Disrupts the Procedural-Learning System but Not the Hypothesis-Testing System in Perceptual Category Learning

    ERIC Educational Resources Information Center

    Maddox, W. Todd; Ing, A. David

    2005-01-01

    W. T. Maddox, F. G. Ashby, and C. J. Bohil (2003) found that delayed feedback adversely affects information-integration but not rule-based category learning in support of a multiple-systems approach to category learning. However, differences in the number of stimulus dimensions relevant to solving the task and perceptual similarity failed to rule…

  7. Brief daily exposures to Asian females reverses perceptual narrowing for Asian faces in Caucasian infants

    PubMed Central

    Anzures, Gizelle; Wheeler, Andrea; Quinn, Paul C.; Pascalis, Olivier; Slater, Alan M.; Heron-Delaney, Michelle; Tanaka, James W.; Lee, Kang

    2012-01-01

    Perceptual narrowing in the visual, auditory, and multisensory domains has its developmental origins in infancy. The present study shows that experimentally induced experience can reverse the effects of perceptual narrowing on infants’ visual recognition memory of other-race faces. Caucasian 8- to 10-month-olds who could not discriminate between novel and familiarized Asian faces at the beginning of testing were given brief daily experience with Asian female faces in the experimental condition and Caucasian female faces in the control condition. At the end of three weeks, only infants who received daily experience with Asian females showed above-chance recognition of novel Asian female and male faces. Further, infants in the experimental condition showed greater efficiency in learning novel Asian females compared to infants in the control condition. Thus, visual experience with a novel stimulus category can reverse the effects of perceptual narrowing in infancy via improved stimulus recognition and encoding. PMID:22625845

  8. Problem based learning with scaffolding technique on geometry

    NASA Astrophysics Data System (ADS)

    Bayuningsih, A. S.; Usodo, B.; Subanti, S.

    2018-05-01

    Geometry, as one of the branches of mathematics, has an important role in the study of mathematics. This research aims to explore the effectiveness of Problem Based Learning (PBL) with a scaffolding technique, viewed from self-regulated learning, on students' mathematics learning achievement. The research data were obtained through a mathematics learning achievement test and a self-regulated learning (SRL) questionnaire. This quasi-experimental research was conducted with junior high school students in Banyumas, Central Java. The results showed that the problem-based learning model with a scaffolding technique is more effective at raising students' mathematics learning achievement than direct learning (DL), because in the PBL model students are better able to think actively and creatively. Students in the high SRL category had better mathematics learning achievement than those in the middle and low SRL categories, and the middle SRL category in turn outperformed the low SRL category. Thus, there are interactions between the learning model and self-regulated learning in increasing mathematics learning achievement.

  9. The emergence of the visual word form: Longitudinal evolution of category-specific ventral visual areas during reading acquisition

    PubMed Central

    Monzalvo, Karla; Dehaene, Stanislas

    2018-01-01

    How does education affect cortical organization? All literate adults possess a region specialized for letter strings, the visual word form area (VWFA), within the mosaic of ventral regions involved in processing other visual categories such as objects, places, faces, or body parts. Therefore, the acquisition of literacy may induce a reorientation of cortical maps towards letters at the expense of other categories such as faces. To test this cortical recycling hypothesis, we studied how the visual cortex of individual children changes during the first months of reading acquisition. Ten 6-year-old children were scanned longitudinally 6 or 7 times with functional magnetic resonance imaging (fMRI) before and throughout the first year of school. Subjects were exposed to a variety of pictures (words, numbers, tools, houses, faces, and bodies) while performing an unrelated target-detection task. Behavioral assessment indicated a sharp rise in grapheme–phoneme knowledge and reading speed in the first trimester of school. Concurrently, voxels specific to written words and digits emerged at the VWFA location. The responses to other categories remained largely stable, although right-hemispheric face-related activity increased in proportion to reading scores. Retrospective examination of the VWFA voxels prior to reading acquisition showed that reading encroaches on voxels that are initially weakly specialized for tools and close to but distinct from those responsive to faces. Remarkably, those voxels appear to keep their initial category selectivity while acquiring an additional and stronger responsivity to words. We propose a revised model of the neuronal recycling process in which new visual categories invade weakly specified cortex while leaving previously stabilized cortical responses unchanged. PMID:29509766

  10. The emergence of the visual word form: Longitudinal evolution of category-specific ventral visual areas during reading acquisition.

    PubMed

    Dehaene-Lambertz, Ghislaine; Monzalvo, Karla; Dehaene, Stanislas

    2018-03-01

    How does education affect cortical organization? All literate adults possess a region specialized for letter strings, the visual word form area (VWFA), within the mosaic of ventral regions involved in processing other visual categories such as objects, places, faces, or body parts. Therefore, the acquisition of literacy may induce a reorientation of cortical maps towards letters at the expense of other categories such as faces. To test this cortical recycling hypothesis, we studied how the visual cortex of individual children changes during the first months of reading acquisition. Ten 6-year-old children were scanned longitudinally 6 or 7 times with functional magnetic resonance imaging (fMRI) before and throughout the first year of school. Subjects were exposed to a variety of pictures (words, numbers, tools, houses, faces, and bodies) while performing an unrelated target-detection task. Behavioral assessment indicated a sharp rise in grapheme-phoneme knowledge and reading speed in the first trimester of school. Concurrently, voxels specific to written words and digits emerged at the VWFA location. The responses to other categories remained largely stable, although right-hemispheric face-related activity increased in proportion to reading scores. Retrospective examination of the VWFA voxels prior to reading acquisition showed that reading encroaches on voxels that are initially weakly specialized for tools and close to but distinct from those responsive to faces. Remarkably, those voxels appear to keep their initial category selectivity while acquiring an additional and stronger responsivity to words. We propose a revised model of the neuronal recycling process in which new visual categories invade weakly specified cortex while leaving previously stabilized cortical responses unchanged.

  11. Behavioral demand modulates object category representation in the inferior temporal cortex

    PubMed Central

    Emadi, Nazli

    2014-01-01

    Visual object categorization is a critical task in our daily life. Many studies have explored category representation in the inferior temporal (IT) cortex at the level of single neurons and populations. However, it is not clear how behavioral demands modulate this category representation. Here, we recorded from single IT neurons in monkeys performing two different tasks with identical visual stimuli: passive fixation and body/object categorization. We found that the category selectivity of IT neurons was improved in the categorization task compared with the passive task, in which reward was not contingent on image category. The improvement was the result of a larger rate enhancement for the preferred category and smaller response variability for both preferred and nonpreferred categories. These specific modulations in the responses of IT category neurons enhanced the signal-to-noise ratio of the neural responses, allowing better discrimination between the preferred and nonpreferred categories. Our results provide new insight into the adaptable category representation in the IT cortex, which depends on behavioral demands. PMID:25080572
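    The signal-to-noise logic in this abstract (a larger rate difference plus lower variability both improve discriminability) can be illustrated with a d'-style index. The spike counts below are hypothetical, and d' is one common formalization, not necessarily the paper's exact measure.

    ```python
    import numpy as np

    def dprime(pref, nonpref):
        """Discriminability of preferred vs. nonpreferred-category responses:
        mean rate difference scaled by pooled response variability."""
        pooled_sd = np.sqrt((np.var(pref, ddof=1) + np.var(nonpref, ddof=1)) / 2)
        return (np.mean(pref) - np.mean(nonpref)) / pooled_sd

    # Hypothetical spike counts for one IT neuron in the two tasks.
    passive_pref = np.array([12, 10, 11, 13, 9])
    passive_non  = np.array([8, 9, 7, 10, 8])
    categ_pref   = np.array([15, 14, 16, 15, 14])   # enhanced preferred rate
    categ_non    = np.array([8, 8, 7, 9, 8])        # reduced variability

    # Both modulations push d' (and hence the signal-to-noise ratio) up in
    # the categorization task relative to passive fixation.
    print(dprime(passive_pref, passive_non), dprime(categ_pref, categ_non))
    ```

    In this toy example the categorization-task d' is several times larger than the passive-fixation d', mirroring the qualitative effect reported in the abstract.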

  12. Prevalence of oral health status in visually impaired children.

    PubMed

    Reddy, Kvkk; Sharma, A

    2011-01-01

    The epidemiological investigation was carried out among 228 children selected from two schools of similar socioeconomic strata in and around Chennai city. The study population consisted of 128 visually impaired and 100 normal school-going children in the age group of 6-15 years. The examination procedure and criteria were those recommended by the WHO in 1997. The mean DMFT/deft scores were 1.1/0.17 in visually impaired children and 0.87/0.47 in normal children, respectively. Mean oral hygiene values were 0.19 and 0.67 in the good category, 0.22 and 0.1 in the fair category, and 0.40 and 0.23 in the poor category, in visually impaired and normal children, respectively. The proportions of children who had experienced trauma were 0.29 and 0.13 in visually impaired and normal children, respectively. The conclusions drawn from this study were that there was a greater prevalence of dental caries, poorer oral hygiene, and a higher incidence of trauma in visually impaired children.

  13. An Examination of Strategy Implementation during Abstract Nonlinguistic Category Learning in Aphasia

    ERIC Educational Resources Information Center

    Vallila-Rohter, Sofia; Kiran, Swathi

    2015-01-01

    Purpose: Our purpose was to study strategy use during nonlinguistic category learning in aphasia. Method: Twelve control participants without aphasia and 53 participants with aphasia (PWA) completed a computerized feedback-based category learning task consisting of training and testing phases. Accuracy rates of categorization in testing phases…

  14. Semantic Categories and Context in L2 Vocabulary Learning

    ERIC Educational Resources Information Center

    Bolger, Patrick; Zapata, Gabriela

    2011-01-01

    This article extends recent findings that presenting semantically related vocabulary simultaneously inhibits learning. It does so by adding story contexts. Participants learned 32 new labels for known concepts from four different semantic categories in stories that were either semantically related (one category per story) or semantically unrelated…

  15. Rule-Based Category Learning in Down Syndrome

    ERIC Educational Resources Information Center

    Phillips, B. Allyson; Conners, Frances A.; Merrill, Edward; Klinger, Mark R.

    2014-01-01

    Rule-based category learning was examined in youths with Down syndrome (DS), youths with intellectual disability (ID), and typically developing (TD) youths. Two tasks measured category learning: the Modified Card Sort task (MCST) and the Concept Formation test of the Woodcock-Johnson-III (Woodcock, McGrew, & Mather, 2001). In regression-based…

  16. Study preferences for exemplar variability in self-regulated category learning.

    PubMed

    Wahlheim, Christopher N; DeSoto, K Andrew

    2017-02-01

    Increasing exemplar variability during category learning can enhance classification of novel exemplars from studied categories. Four experiments examined whether participants preferred variability when making study choices with the goal of later classifying novel exemplars. In Experiments 1-3, participants were familiarised with exemplars of birds from multiple categories prior to making category-level assessments of learning and subsequent choices about whether to receive more variability or repetitions of exemplars during study. After study, participants classified novel exemplars from studied categories. The majority of participants showed a consistent preference for variability in their study, but choices were not related to category-level assessments of learning. Experiment 4 provided evidence that study preferences were based primarily on theoretical beliefs in that most participants indicated a preference for variability on questionnaires that did not include prior experience with exemplars. Potential directions for theoretical development and applications to education are discussed.

  17. The Neural Correlates of Desire

    PubMed Central

    Kawabata, Hideaki; Zeki, Semir

    2008-01-01

    In an event-related fMRI study, we scanned eighteen normal human subjects while they viewed three categories of pictures (events, objects and persons) which they classified according to desirability (desirable, indifferent or undesirable). Each category produced activity in a distinct part of the visual brain, thus reflecting its functional specialization. We used conjunction analysis to learn whether there is a brain area which is always active when a desirable picture is viewed, regardless of the category to which it belongs. The conjunction analysis of the contrast desirable > undesirable revealed activity in the superior orbito-frontal cortex. This activity bore a positive linear relationship to the declared level of desirability. The conjunction analysis of desirable > indifferent revealed activity in the mid-cingulate cortex and in the anterior cingulate cortex. In the former, activity was greater for desirable and undesirable stimuli than for stimuli classed as indifferent. Other conjunction analyses produced no significant effects. These results show that categorizing any stimulus according to its desirability activates three different brain areas: the superior orbito-frontal, the mid-cingulate, and the anterior cingulate cortices. PMID:18728753

  18. Decoding the time-course of object recognition in the human brain: From visual features to categorical decisions.

    PubMed

    Contini, Erika W; Wardle, Susan G; Carlson, Thomas A

    2017-10-01

    Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
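    Time-resolved decoding of the kind reviewed here is typically implemented by training and cross-validating one classifier per timepoint of the M/EEG epoch. A sketch on simulated data follows; all dimensions and the placement of the category signal are invented for illustration.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)

    # Simulated M/EEG epochs: trials x sensors x timepoints, with category
    # information injected only from timepoint 12 onward, mimicking abstract
    # category structure that emerges later than low-level feature information.
    n_trials, n_sensors, n_times = 60, 30, 20
    y = rng.integers(0, 2, size=n_trials)
    X = rng.normal(size=(n_trials, n_sensors, n_times))
    X[:, :10, 12:] += 1.0 * y[:, None, None]  # late-window category signal

    # Time-resolved decoding: one cross-validated classifier per timepoint.
    accuracy = np.array([
        cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
        for t in range(n_times)
    ])
    print("early window:", accuracy[:12].mean().round(2),
          "late window:", accuracy[12:].mean().round(2))
    ```

    Plotting such an accuracy curve against time is what reveals when decodable information arises; temporal generalization analyses extend this by training at one timepoint and testing at all others.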

  19. [Application of Ocular Trauma Score in Mechanical Ocular Injury in Forensic Medicine].

    PubMed

    Xiang, Jian; Guo, Zhao-ming; Wang, Xu; Yu, Li-li; Liu, Hui

    2015-10-01

    To evaluate the value of the ocular trauma score (OTS) for assessing the prognosis of mechanical ocular injury cases, four hundred and eleven cases of mechanical ocular trauma were retrospectively reviewed. Of the 449 eyes, 317 had closed globe injury and 132 had open globe injury. OTS variables included numerical values for initial visual acuity, rupture, endophthalmitis, perforating or penetrating injury, retinal detachment, and relative afferent pupillary defect. The differences between the distribution of the final visual acuity and the probability of the standard final visual acuity were compared to analyze the correlation between OTS category and final visual acuity, and the different types of ocular trauma were compared. Compared with the distribution of final visual acuity in the standard OTS score, the ratio in the OTS-3 category was statistically different in the present study, with no differences found in the other categories. Final visual acuity showed a strong linear correlation with OTS category (r = 0.71) and total score (r = 0.73). Compared with closed globe injury, open globe injury was generally associated with a lower total score and poorer prognosis, and rupture injury had a poorer prognosis than penetrating injury. The use of the OTS in patients with ocular trauma can provide reliable information for the evaluation of prognosis in forensic medicine.
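    The OTS calculation underlying this study can be sketched as follows. The raw-point values and category boundaries below follow the commonly tabulated OTS of Kuhn et al. (2002) as we recall them; verify against the original publication before any clinical or forensic use.

    ```python
    # Raw points assigned by initial visual acuity (as commonly tabulated).
    VISION_POINTS = {
        "NLP": 60,            # no light perception
        "LP/HM": 70,          # light perception / hand motion
        "1/200-19/200": 80,
        "20/200-20/50": 90,
        ">=20/40": 100,
    }
    # Points subtracted for each OTS variable present in the injured eye.
    DEDUCTIONS = {
        "globe_rupture": 23,
        "endophthalmitis": 17,
        "perforating_injury": 14,
        "retinal_detachment": 11,
        "afferent_pupillary_defect": 10,
    }

    def ots(initial_vision, findings):
        """Return (raw score, OTS category 1-5) for one injured eye."""
        score = VISION_POINTS[initial_vision] - sum(DEDUCTIONS[f] for f in findings)
        for hi, cat in [(44, 1), (65, 2), (80, 3), (91, 4), (100, 5)]:
            if score <= hi:
                return score, cat

    # Example: open globe injury with rupture and retinal detachment.
    print(ots("LP/HM", ["globe_rupture", "retinal_detachment"]))
    ```

    Each OTS category is associated with a published distribution of likely final visual acuities, which is what the study compares its observed outcomes against.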

  20. Analogical reasoning in amazons.

    PubMed

    Obozova, Tanya; Smirnova, Anna; Zorina, Zoya; Wasserman, Edward

    2015-11-01

    Two juvenile orange-winged amazons (Amazona amazonica) were initially trained to match visual stimuli by color, shape, and number of items, but not by size. After learning these three identity matching-to-sample tasks, the parrots transferred discriminative responding to new stimuli from the same categories that had been used in training (other colors, shapes, and numbers of items) as well as to stimuli from a different category (stimuli varying in size). In the critical testing phase, both parrots exhibited reliable relational matching-to-sample (RMTS) behavior, suggesting that they perceived and compared the relationship between objects in the sample stimulus pair to the relationship between objects in the comparison stimulus pairs, even though no physical matches were possible between items in the sample and comparison pairs. The parrots spontaneously exhibited this higher-order relational responding without having ever before been trained on RMTS tasks, therefore joining apes and crows in displaying this abstract cognitive behavior.

  1. Shape Similarity, Better than Semantic Membership, Accounts for the Structure of Visual Object Representations in a Population of Monkey Inferotemporal Neurons

    PubMed Central

    DiCarlo, James J.; Zecchina, Riccardo; Zoccolan, Davide

    2013-01-01

    The anterior inferotemporal cortex (IT) is the highest stage along the hierarchy of visual areas that, in primates, processes visual objects. Although several lines of evidence suggest that IT primarily represents visual shape information, some recent studies have argued that neuronal ensembles in IT code the semantic membership of visual objects (i.e., represent conceptual classes such as animate and inanimate objects). In this study, we investigated to what extent semantic, rather than purely visual, information is represented in IT by performing a multivariate analysis of IT responses to a set of visual objects. By relying on a variety of machine-learning approaches (including a cutting-edge clustering algorithm recently developed in the domain of statistical physics), we found that, in most instances, the IT representation of visual objects is accounted for by their similarity at the level of shape or, more surprisingly, low-level visual properties. Only in a few cases did we observe IT representations of semantic classes that were not explainable by the visual similarity of their members. Overall, these findings reassert the primary function of IT as a conveyor of explicit visual shape information, and reveal that low-level visual properties are represented in IT to a greater extent than previously appreciated. In addition, our work demonstrates how combining a variety of state-of-the-art multivariate approaches, and carefully estimating the contribution of shape similarity to the representation of object categories, can substantially advance our understanding of the neuronal coding of visual objects in cortex. PMID:23950700

  2. Category Membership and Semantic Coding in the Cerebral Hemispheres.

    PubMed

    Turner, Casey E; Kellogg, Ronald T

    2016-01-01

    Although a gradient of category membership seems to form the internal structure of semantic categories, it is unclear whether the 2 hemispheres of the brain differ in terms of this gradient. The 2 experiments reported here examined this empirical question and explored alternative theoretical interpretations. Participants viewed category names centrally and determined whether a closely related or distantly related word presented to either the left visual field/right hemisphere (LVF/RH) or the right visual field/left hemisphere (RVF/LH) was a member of the category. Distantly related words were categorized more slowly in the LVF/RH relative to the RVF/LH, with no difference for words close to the prototype. This finding resolved past mixed results, showing an unambiguous typicality effect for both visual field presentations. Furthermore, we examined items near the fuzzy border that were sometimes rejected as nonmembers of the category and found that both hemispheres use the same category boundary. In Experiment 2, we presented 2 target words to be categorized, with the expectation of augmenting the speed advantage for the RVF/LH if the 2 hemispheres differ structurally. Instead the results showed a weakening of the hemispheric difference, arguing against a structural explanation in favor of a processing one.

  3. Pigeons and humans use action and pose information to categorize complex human behaviors.

    PubMed

    Qadri, Muhammad A J; Cook, Robert G

    2017-02-01

    The biological mechanisms used to categorize and recognize behaviors are poorly understood in both human and non-human animals. Using animated digital models, we have recently shown that pigeons can categorize different locomotive animal gaits and types of complex human behaviors. In the current experiments, pigeons (go/no-go task) and humans (choice task) both learned to conditionally categorize two categories of human behaviors that did not repeat and comprised the coordinated motions of multiple limbs. These "martial arts" and "Indian dance" action sequences were depicted by a digital human model. Depending upon whether the model was in motion or not, each species was required to engage in different and opposing responses to the two behavioral categories. Both species learned to conditionally and correctly act on this dynamic and static behavioral information, indicating that both species use a combination of static pose cues that are available from stimulus onset in addition to less rapidly available action information in order to successfully discriminate between the behaviors. Human participants additionally demonstrated a bias towards the dynamic information in the display when re-learning the task. Theories that rely on generalized, non-specific visual mechanisms involving channels for motion and static cues offer a parsimonious account of how humans and pigeons recognize and categorize behaviors within and across species. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Technology-Assisted Learning: A Longitudinal Field Study of Knowledge Category, Learning Effectiveness and Satisfaction in Language Learning

    ERIC Educational Resources Information Center

    Hui, W.; Hu, P. J.-H.; Clark, T. H. K.; Tam, K. Y.; Milton, J.

    2008-01-01

    A field experiment compares the effectiveness and satisfaction associated with technology-assisted learning with those of face-to-face learning. The empirical evidence suggests that technology-assisted learning effectiveness depends on the target knowledge category. Building on Kolb's experiential learning model, we show that technology-assisted…

  5. Discrimination of artificial categories structured by family resemblances: a comparative study in people (Homo sapiens) and pigeons (Columba livia).

    PubMed

    Makino, Hiroshi; Jitsumori, Masako

    2007-02-01

    Adult humans (Homo sapiens) and pigeons (Columba livia) were trained to discriminate artificial categories that the authors created by mimicking 2 properties of natural categories. One was a family resemblance relationship: The highly variable exemplars, including those that did not have features in common, were structured by a similarity network with the features correlating to one another in each category. The other was a polymorphous rule: No single feature was essential for distinguishing the categories, and all the features overlapped between the categories. Pigeons learned the categories with ease and then showed a prototype effect in accord with the degrees of family resemblance for novel stimuli. Some evidence was also observed for interactive effects of learning of individual exemplars and feature frequencies. Humans had difficulty in learning the categories. The participants who learned the categories generally responded to novel stimuli in an all-or-none fashion on the basis of their acquired classification decision rules. The processes that underlie the classification performances of the 2 species are discussed.

  6. When does fading enhance perceptual category learning?

    PubMed

    Pashler, Harold; Mozer, Michael C

    2013-07-01

    Training that uses exaggerated versions of a stimulus discrimination (fading) has sometimes been found to enhance category learning, mostly in studies involving animals and impaired populations. However, little is known about whether and when fading facilitates learning for typical individuals. This issue was explored in 7 experiments. In Experiments 1 and 2, observers discriminated stimuli based on a single sensory continuum (time duration and line length, respectively). Adaptive fading dramatically improved performance in training (unsurprisingly) but did not enhance learning as assessed in a final test. The same was true for nonadaptive linear fading (Experiment 3). However, when variation in length (predicting category membership) was embedded among other (category-irrelevant) variation, fading dramatically enhanced not only performance in training but also learning as assessed in a final test (Experiments 4 and 5). Fading also helped learners to acquire a color saturation discrimination amid category-irrelevant variation in hue and brightness, although this learning proved transitory after feedback was withdrawn (Experiment 7). Theoretical implications are discussed, and we argue that fading should have practical utility in naturalistic category learning tasks, which involve extremely high dimensional stimuli and many irrelevant dimensions. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  7. An Examination of Strategy Implementation During Abstract Nonlinguistic Category Learning in Aphasia.

    PubMed

    Vallila-Rohter, Sofia; Kiran, Swathi

    2015-08-01

    Our purpose was to study strategy use during nonlinguistic category learning in aphasia. Twelve control participants without aphasia and 53 participants with aphasia (PWA) completed a computerized feedback-based category learning task consisting of training and testing phases. Accuracy rates of categorization in testing phases were calculated. To evaluate strategy use, strategy analyses were conducted over training and testing phases. Participant data were compared with model data that simulated complex multi-cue, single feature, and random pattern strategies. Learning success and strategy use were evaluated within the context of standardized cognitive-linguistic assessments. Categorization accuracy was higher among control participants than among PWA. The majority of control participants implemented suboptimal or optimal multi-cue and single-feature strategies by testing phases of the experiment. In contrast, a large subgroup of PWA implemented random patterns, or no strategy, during both training and testing phases of the experiment. Person-to-person variability arises not only in category learning ability but also in the strategies implemented to complete category learning tasks. PWA less frequently developed effective strategies during category learning tasks than control participants. Certain PWA may have impairments of strategy development or feedback processing that are not captured by standardized language assessments or the cognitive abilities currently probed.
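
    The strategy analysis described here, comparing participant responses against simulated multi-cue, single-feature, and random-pattern strategies, can be sketched as a simple agreement score between observed responses and each model's predictions. Everything below (the cue coding, the majority rule, the lapse model, the trial count) is a fabricated illustration, not the study's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fabricated task: 200 trials, 4 binary cues; the optimal category is the
# majority of cues (a multi-cue rule). A single-feature strategy relies on
# cue 0 alone; a random strategy guesses.
n_trials = 200
stimuli = rng.integers(0, 2, size=(n_trials, 4))
multi_cue_pred = (stimuli.sum(axis=1) >= 2).astype(int)
single_feature_pred = stimuli[:, 0]

# Simulated participant: follows cue 0 with a 5% lapse rate.
lapses = rng.random(n_trials) < 0.05
responses = np.where(lapses, 1 - single_feature_pred, single_feature_pred)

def agreement(resp, pred):
    """Proportion of trials on which responses match a strategy's predictions."""
    return float(np.mean(resp == pred))

fits = {
    "multi-cue": agreement(responses, multi_cue_pred),
    "single-feature": agreement(responses, single_feature_pred),
    "random": 0.5,   # expected chance agreement for a guessing model
}
best_strategy = max(fits, key=fits.get)
print(best_strategy, fits)
```

    A participant whose best-fitting model is "random" (agreement near 0.5 for every structured strategy) would correspond to the no-strategy subgroup the abstract describes.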

  8. Learning Vowel Categories from Maternal Speech in Gurindji Kriol

    ERIC Educational Resources Information Center

    Jones, Caroline; Meakins, Felicity; Muawiyath, Shujau

    2012-01-01

    Distributional learning is a proposal for how infants might learn early speech sound categories from acoustic input before they know many words. When categories in the input differ greatly in relative frequency and overlap in acoustic space, research in bilingual development suggests that this affects the course of development. In the present…

  9. Characterizing Rule-Based Category Learning Deficits in Patients with Parkinson's Disease

    ERIC Educational Resources Information Center

    Filoteo, J. Vincent; Maddox, W. Todd; Ing, A. David; Song, David D.

    2007-01-01

    Parkinson's disease (PD) patients and normal controls were tested in three category learning experiments to determine if previously observed rule-based category learning impairments in PD patients were due to deficits in selective attention or working memory. In Experiment 1, optimal categorization required participants to base their decision on a…

  10. Pamphlet Library [for Working with Multihandicapped, Visually-Impaired Individuals].

    ERIC Educational Resources Information Center

    Boston Center for Blind Children, MA.

    The Boston Center for Blind Children has prepared an annotated bibliography of pamphlets intended to be useful to persons working with multiply-handicapped, visually-impaired individuals. The pamphlets are organized under the following categories (number of entries in each category is listed in parentheses): bibliographies (4), epilepsy (4), facts…

  11. Aversive Learning Modulates Cortical Representations of Object Categories

    PubMed Central

    Dunsmoor, Joseph E.; Kragel, Philip A.; Martin, Alex; LaBar, Kevin S.

    2014-01-01

    Experimental studies of conditioned learning reveal activity changes in the amygdala and unimodal sensory cortex underlying fear acquisition to simple stimuli. However, real-world fears typically involve complex stimuli represented at the category level. A consequence of category-level representations of threat is that aversive experiences with particular category members may lead one to infer that related exemplars likewise pose a threat, despite variations in physical form. Here, we examined the effect of category-level representations of threat on human brain activation using 2 superordinate categories (animals and tools) as conditioned stimuli. Hemodynamic activity in the amygdala and category-selective cortex was modulated by the reinforcement contingency, leading to widespread fear of different exemplars from the reinforced category. Multivariate representational similarity analyses revealed that activity patterns in the amygdala and object-selective cortex were more similar among exemplars from the threat versus safe category. Learning to fear animate objects was additionally characterized by enhanced functional coupling between the amygdala and fusiform gyrus. Finally, hippocampal activity co-varied with object typicality and amygdala activation early during training. These findings provide novel evidence that aversive learning can modulate category-level representations of object concepts, thereby enabling individuals to express fear to a range of related stimuli. PMID:23709642
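
    The representational similarity result reported here (greater pattern similarity among exemplars of the threat category) can be illustrated with simulated data. In this hedged sketch, a shared "threat axis" component is simply assumed to be added to the animal patterns after learning; the voxel counts and noise levels are arbitrary inventions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fabricated voxel patterns: 10 "animal" (threat) and 10 "tool" (safe)
# exemplars over 50 voxels, with a shared component added to animals.
n_vox = 50
threat_axis = rng.normal(size=n_vox)
animal_patterns = 0.8 * threat_axis + rng.normal(size=(10, n_vox))
tool_patterns = rng.normal(size=(10, n_vox))
patterns = np.vstack([animal_patterns, tool_patterns])
labels = np.array([0] * 10 + [1] * 10)   # 0 = threat, 1 = safe

sim = np.corrcoef(patterns)              # pattern-to-pattern correlations

# Mean similarity among threat exemplars vs. across the two categories.
within_threat = sim[np.ix_(labels == 0, labels == 0)][np.triu_indices(10, 1)].mean()
iu = np.triu_indices(20, k=1)
same_cat = (labels[:, None] == labels[None, :])[iu]
between = sim[iu][~same_cat].mean()
print(f"within-threat: {within_threat:.2f}, between-category: {between:.2f}")
```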

  12. How learning one category influences the learning of another: intercategory generalization based on analogy and specific stimulus information.

    PubMed

    Nahinsky, Irwin D; Lucas, Barbara A; Edgell, Stephen E; Overfelt, Joseph; Loeb, Richard

    2004-01-01

    We investigated the effect of learning one category structure on the learning of a related category structure. Photograph-name combinations, called identifiers, were associated with values of four demographic attributes. Two problems were related by analogous demographic attributes, common identifiers, or both to examine the impact of common identifiers, related general characteristics, and the interaction of the two variables in mediating learning transfer from one category structure to another. Problems sharing the same identifier information prompted greater positive transfer than those not sharing the same identifier information. In contrast, analogous defining characteristics in the two problems did not facilitate transfer. We computed correlations between responses to first-problem stimuli and responses to analogous second-problem stimuli for each participant. The analogous characteristics produced a tendency to respond in the same way to corresponding stimuli in the two problems. The results support an alignment between category structures related by analogous defining characteristics, which is facilitated by specific identifier information shared by two category structures.

  13. When More Is Less: Feedback Effects in Perceptual Category Learning

    ERIC Educational Resources Information Center

    Maddox, W. Todd; Love, Bradley C.; Glass, Brian D.; Filoteo, J. Vincent

    2008-01-01

    Rule-based and information-integration category learning were compared under minimal and full feedback conditions. Rule-based category structures are those for which the optimal rule is verbalizable. Information-integration category structures are those for which the optimal rule is not verbalizable. With minimal feedback subjects are told whether…

  14. Incremental Bayesian Category Learning from Natural Language

    ERIC Educational Resources Information Center

    Frermann, Lea; Lapata, Mirella

    2016-01-01

    Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words (e.g., "chair" is a member of the furniture category). We present a Bayesian model that, unlike…

  15. Discovering and visualizing indirect associations between biomedical concepts

    PubMed Central

    Tsuruoka, Yoshimasa; Miwa, Makoto; Hamamoto, Kaisei; Tsujii, Jun'ichi; Ananiadou, Sophia

    2011-01-01

    Motivation: Discovering useful associations between biomedical concepts has been one of the main goals in biomedical text-mining, and understanding their biomedical contexts is crucial in the discovery process. Hence, we need a text-mining system that helps users explore various types of (possibly hidden) associations in an easy and comprehensible manner. Results: This article describes FACTA+, a real-time text-mining system for finding and visualizing indirect associations between biomedical concepts from MEDLINE abstracts. The system can be used as a text search engine like PubMed with additional features to help users discover and visualize indirect associations between important biomedical concepts such as genes, diseases and chemical compounds. FACTA+ inherits all functionality from its predecessor, FACTA, and extends it by incorporating three new features: (i) detecting biomolecular events in text using a machine learning model, (ii) discovering hidden associations using co-occurrence statistics between concepts, and (iii) visualizing associations to improve the interpretability of the output. To the best of our knowledge, FACTA+ is the first real-time web application that offers the functionality of finding concepts involving biomolecular events and visualizing indirect associations of concepts with both their categories and importance. Availability: FACTA+ is available as a web application at http://refine1-nactem.mc.man.ac.uk/facta/, and its visualizer is available at http://refine1-nactem.mc.man.ac.uk/facta-visualizer/. Contact: tsuruoka@jaist.ac.jp PMID:21685059
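
    The co-occurrence approach to hidden associations that FACTA+ uses can be approximated with a minimal pivot-concept heuristic: two concepts that never co-occur directly may still be linked through shared intermediate concepts. The toy corpus and scoring rule below are illustrative assumptions, not FACTA+'s actual statistics.

```python
from collections import defaultdict
from itertools import combinations

# Toy document set: each "abstract" is the set of concepts it mentions.
docs = [
    {"geneA", "disease1"},
    {"geneA", "disease1", "drugX"},
    {"drugX", "disease2"},
    {"disease1", "drugX"},
    {"disease1", "disease2"},
]

# Direct co-occurrence counts between concept pairs.
cooc = defaultdict(int)
for doc in docs:
    for pair in combinations(sorted(doc), 2):
        cooc[pair] += 1

def direct(a, b):
    """Number of documents in which a and b co-occur."""
    return cooc[tuple(sorted((a, b)))]

def indirect(a, b):
    """Score an indirect a-b association by summing over pivot concepts c
    that co-occur with both a and b (a simple illustrative heuristic)."""
    concepts = set().union(*docs) - {a, b}
    return sum(min(direct(a, c), direct(c, b)) for c in concepts)

# geneA and disease2 never co-occur directly, yet are linked via pivots.
print(direct("geneA", "disease2"), indirect("geneA", "disease2"))
```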

  16. The time course of explicit and implicit categorization.

    PubMed

    Smith, J David; Zakrzewski, Alexandria C; Herberger, Eric R; Boomer, Joseph; Roeder, Jessica L; Ashby, F Gregory; Church, Barbara A

    2015-10-01

    Contemporary theory in cognitive neuroscience distinguishes, among the processes and utilities that serve categorization, explicit and implicit systems of category learning that learn, respectively, category rules by active hypothesis testing or adaptive behaviors by association and reinforcement. Little is known about the time course of categorization within these systems. Accordingly, the present experiments contrasted tasks that fostered explicit categorization (because they had a one-dimensional, rule-based solution) or implicit categorization (because they had a two-dimensional, information-integration solution). In Experiment 1, participants learned categories under unspeeded or speeded conditions. In Experiment 2, they applied previously trained category knowledge under unspeeded or speeded conditions. Speeded conditions selectively impaired implicit category learning and implicit mature categorization. These results illuminate the processing dynamics of explicit/implicit categorization.
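
    The two task types contrasted here have a standard geometric construction: rule-based (RB) structures are separable by a verbalizable one-dimensional criterion, while information-integration (II) structures place the optimal boundary on a diagonal that combines both dimensions. A minimal sketch of such stimulus distributions, with arbitrary means and variances:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-dimensional stimuli (e.g., bar length and orientation), 1000 per category.
n = 1000

# Rule-based (RB): a verbalizable criterion on dimension 0 alone.
rb_a = rng.normal([-1.0, 0.0], 1.0, size=(n, 2))
rb_b = rng.normal([+1.0, 0.0], 1.0, size=(n, 2))

# Information-integration (II): the optimal boundary is the diagonal
# x0 = x1, which requires combining both dimensions predecisionally.
ii_a = rng.normal([-1.0, +1.0], 1.0, size=(n, 2))
ii_b = rng.normal([+1.0, -1.0], 1.0, size=(n, 2))

# Accuracy of a one-dimensional rule (classify by the sign of dimension 0).
rb_acc = np.mean(np.concatenate([rb_a[:, 0] < 0, rb_b[:, 0] >= 0]))
# The same 1-D rule on the II structure is suboptimal relative to the
# diagonal rule x0 - x1 < 0.
ii_1d = np.mean(np.concatenate([ii_a[:, 0] < 0, ii_b[:, 0] >= 0]))
ii_diag = np.mean(np.concatenate([ii_a[:, 0] - ii_a[:, 1] < 0,
                                  ii_b[:, 0] - ii_b[:, 1] >= 0]))
print(f"RB 1-D rule: {rb_acc:.2f}, II 1-D rule: {ii_1d:.2f}, "
      f"II diagonal: {ii_diag:.2f}")
```

    The gap between the 1-D and diagonal accuracies on the II structure is what makes the task resistant to explicit verbal rules.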

  17. Honeybees in a virtual reality environment learn unique combinations of colour and shape.

    PubMed

    Rusch, Claire; Roth, Eatai; Vinauger, Clément; Riffell, Jeffrey A

    2017-10-01

    Honeybees are well-known models for the study of visual learning and memory. Whereas most of our knowledge of learned responses comes from experiments using free-flying bees, a tethered preparation would allow fine-scale control of the visual stimuli as well as accurate characterization of the learned responses. Unfortunately, conditioning procedures using visual stimuli in tethered bees have been limited in their efficacy. In this study, using a novel virtual reality environment and a differential training protocol in tethered walking bees, we show that the majority of honeybees learn visual stimuli, and need only six paired training trials to do so. We found that bees readily learn visual stimuli that differ in both shape and colour. However, bees learn certain components over others (colour versus shape), and visual stimuli are learned in a non-additive manner with the interaction of specific colour and shape combinations being crucial for learned responses. To better understand which components of the visual stimuli the bees learned, the shape-colour association of the stimuli was reversed either during or after training. Results showed that maintaining the visual stimuli in training and testing phases was necessary to elicit visual learning, suggesting that bees learn multiple components of the visual stimuli. Together, our results demonstrate a protocol for visual learning in restrained bees that provides a powerful tool for understanding how components of a visual stimulus elicit learned responses as well as elucidating how visual information is processed in the honeybee brain. © 2017. Published by The Company of Biologists Ltd.

  18. Learning and Retention through Predictive Inference and Classification

    ERIC Educational Resources Information Center

    Sakamoto, Yasuaki; Love, Bradley C.

    2010-01-01

    Work in category learning addresses how humans acquire knowledge and, thus, should inform classroom practices. In two experiments, we apply and evaluate intuitions garnered from laboratory-based research in category learning to learning tasks situated in an educational context. In Experiment 1, learning through predictive inference and…

  19. Testing Prepares Students to Learn Better: The Forward Effect of Testing in Category Learning

    ERIC Educational Resources Information Center

    Lee, Hee Seung; Ahn, Dahwi

    2018-01-01

    The forward effect of testing occurs when testing on previously studied information facilitates subsequent learning. The present research investigated whether interim testing on initially studied materials enhances the learning of new materials in category learning and examined the metacognitive judgments of such learning. Across the 4…

  20. Word-level information influences phonetic learning in adults and infants

    PubMed Central

    Feldman, Naomi H.; Myers, Emily B.; White, Katherine S.; Griffiths, Thomas L.; Morgan, James L.

    2013-01-01

    Infants begin to segment words from fluent speech during the same time period that they learn phonetic categories. Segmented words can provide a potentially useful cue for phonetic learning, yet accounts of phonetic category acquisition typically ignore the contexts in which sounds appear. We present two experiments to show that, contrary to the assumption that phonetic learning occurs in isolation, learners are sensitive to the words in which sounds appear and can use this information to constrain their interpretation of phonetic variability. Experiment 1 shows that adults use word-level information in a phonetic category learning task, assigning acoustically similar vowels to different categories more often when those sounds consistently appear in different words. Experiment 2 demonstrates that eight-month-old infants similarly pay attention to word-level information and that this information affects how they treat phonetic contrasts. These findings suggest that phonetic category learning is a rich, interactive process that takes advantage of many different types of cues that are present in the input. PMID:23562941

  1. Emerging Object Representations in the Visual System Predict Reaction Times for Categorization

    PubMed Central

    Ritchie, J. Brendan; Tovar, David A.; Carlson, Thomas A.

    2015-01-01

    Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis, or “brain decoding”, methods to magnetoencephalography (MEG) data has allowed researchers to characterize, in high temporal resolution, the emerging representation of object categories that underlie our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain’s transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behavior? Recently it has been proposed that the distance of an exemplar representation from a categorical boundary in an activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time varying neural signal, the optimal time to “read out” category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary through activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behavior, and support the hypothesis that the structure of the representation for objects in the visual system is partially constitutive of the decision process in recognition. PMID:26107634
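
    The distance hypothesis described here can be sketched with a linear decoder: each trial's sensor pattern is projected onto the classifier weights, and the absolute decision value serves as the distance from the boundary. The decoder weights, the noise model, and the assumed negative distance-RT relationship below are all fabricated for illustration; this is not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(4)

# Fabricated single-trial patterns: 200 trials over 30 sensors, with a
# category signal along a fixed weight vector w (a "decoder" assumed to
# have been trained elsewhere).
n_trials, n_sensors = 200, 30
w = rng.normal(size=n_sensors)
w /= np.linalg.norm(w)
labels = rng.integers(0, 2, n_trials) * 2 - 1              # -1 / +1
patterns = labels[:, None] * w + rng.normal(size=(n_trials, n_sensors))

# Distance from the decision boundary w.x = 0 is the absolute decision value.
distance = np.abs(patterns @ w)

# Simulated RTs: assumed to decrease with boundary distance, plus noise.
rt = 600 - 80 * distance + rng.normal(0, 40, n_trials)

r = np.corrcoef(distance, rt)[0, 1]
print(f"distance-RT correlation: r = {r:.2f}")
```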

  2. The correlation between visual acuity and color vision as an indicator of the cause of visual loss.

    PubMed

    Almog, Yehoshua; Nemet, Arie

    2010-06-01

    To explore the correlation between visual acuity (VA) and color vision and to establish a guide for the diagnosis of the cause of visual loss based on this correlation. Retrospective comparative evaluation of a diagnostic test. A total of 259 patients with visual impairment caused by 1 of 4 possible disease categories were included. Patients were divided into 4 groups according to the etiology of visual loss: 1) optic neuropathies, 2) macular diseases, 3) media opacities, and 4) amblyopia. The best-corrected VA was established, and a standard 15-plate Ishihara color test was administered and correlated with the VA in every group separately. Correlation between the VA and the color vision across the different etiologies was evaluated. The frequency of each combination of color vision and VA in every disease category was established. VA is correlated with color vision in all 4 disease categories. For the same degree of VA loss, patients with optic neuropathy are the most likely, and patients with amblyopia the least likely, to have a significant color vision loss. Patients with optic neuropathy had considerably worse average color vision (6.7/15) compared to patients in the other 3 disease categories: 11.1/15 (macular diseases), 13.2/15 (media opacities), and 13.4/15 (amblyopia). Diseases of the optic nerve affect color vision earlier and more profoundly than other diseases. When the cause of visual loss is uncertain, the correlation between the severity of color vision loss and VA loss can suggest the etiology of the visual loss. Copyright 2010 Elsevier Inc. All rights reserved.

  3. Increases in Functional Connectivity Between Prefrontal Cortex and Striatum During Category Learning

    PubMed Central

    Antzoulatos, Evan G.; Miller, Earl K.

    2014-01-01

    Functional connectivity between the prefrontal cortex (PFC) and striatum (STR) is thought to be critical for cognition, and has been linked to conditions like autism and schizophrenia. We recorded from multiple electrodes in PFC and STR while monkeys acquired new categories. Category learning was accompanied by an increase in beta-band synchronization of LFPs between, but not within, the PFC and STR. After learning, different pairs of PFC-STR electrodes showed stronger synchrony for one or the other category, suggesting category-specific functional circuits. This category-specific synchrony was also seen between PFC spikes and STR LFPs, but not the reverse, reflecting the direct monosynaptic connections from the PFC to STR. However, causal connectivity analyses suggested that the polysynaptic connections from STR to the PFC exerted a stronger overall influence. This supports models positing that the basal ganglia “train” the PFC. Category learning may depend on the formation of functional circuits between the PFC and STR. PMID:24930701
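
    Beta-band synchronization between two LFP channels is typically quantified with spectral coherence. The sketch below simulates two channels that share a 20 Hz component and estimates their coherence with Welch's method; the sampling rate, noise level, and band limits are arbitrary choices, not values from the study.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(5)

# Two simulated LFP channels (e.g., one PFC and one striatal site) that
# share a common 20 Hz (beta-band) component plus independent noise.
fs = 1000                       # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)    # 10 s of data
beta = np.sin(2 * np.pi * 20 * t)
lfp_pfc = beta + rng.normal(0, 1.0, t.size)
lfp_str = beta + rng.normal(0, 1.0, t.size)

# Magnitude-squared coherence via Welch's method.
f, cxy = coherence(lfp_pfc, lfp_str, fs=fs, nperseg=1024)

# Coherence should peak inside the beta band (~13-30 Hz).
peak_hz = f[np.argmax(cxy)]
print(f"peak coherence at {peak_hz:.1f} Hz (max = {cxy.max():.2f})")
```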

  4. Human Category Learning 2.0

    PubMed Central

    Ashby, F. Gregory; Maddox, W. Todd

    2010-01-01

    During the 1990s and early 2000s, cognitive neuroscience investigations of human category learning focused on the primary goal of showing that humans have multiple category learning systems and on the secondary goals of identifying key qualitative properties of each system and of roughly mapping out the neural networks that mediate each system. Many researchers now accept the strength of the evidence supporting multiple systems, and as a result, during the past few years, work has begun on the second generation of research questions – that is, on questions that begin with the assumption that humans have multiple category learning systems. This article reviews much of this second generation of research. Topics covered include: 1) How do the various systems interact? 2) Are there different neural systems for categorization and category representation? 3) How does automaticity develop in each system? and 4) Exactly how does each system learn? PMID:21182535

  5. Human category learning 2.0.

    PubMed

    Ashby, F Gregory; Maddox, W Todd

    2011-04-01

    During the 1990s and early 2000s, cognitive neuroscience investigations of human category learning focused on the primary goal of showing that humans have multiple category-learning systems and on the secondary goals of identifying key qualitative properties of each system and of roughly mapping out the neural networks that mediate each system. Many researchers now accept the strength of the evidence supporting multiple systems, and as a result, during the past few years, work has begun on the second generation of research questions, that is, on questions that begin with the assumption that humans have multiple category-learning systems. This article reviews much of this second generation of research. Topics covered include (1) How do the various systems interact? (2) Are there different neural systems for categorization and category representation? (3) How does automaticity develop in each system? and (4) Exactly how does each system learn? © 2010 New York Academy of Sciences.

  6. Differential item functioning analysis of the Vanderbilt Expertise Test for cars

    PubMed Central

    Lee, Woo-Yeol; Cho, Sun-Joo; McGugin, Rankin W.; Van Gulick, Ana Beth; Gauthier, Isabel

    2015-01-01

    The Vanderbilt Expertise Test for cars (VETcar) is a test of visual learning for contemporary car models. We used item response theory to assess the VETcar and in particular used differential item functioning (DIF) analysis to ask if the test functions the same way in laboratory versus online settings and for different groups based on age and gender. An exploratory factor analysis found evidence of multidimensionality in the VETcar, although a single dimension was deemed sufficient to capture the recognition ability measured by the test. We selected a unidimensional three-parameter logistic item response model to examine item characteristics and subject abilities. The VETcar had satisfactory internal consistency. A substantial number of items showed DIF at a medium effect size for test setting and for age group, whereas gender DIF was negligible. Because online subjects were on average older than those tested in the lab, we focused on the age groups to conduct a multigroup item response theory analysis. This revealed that most items on the test favored the younger group. DIF could be more the rule than the exception when measuring performance with familiar object categories, therefore posing a challenge for the measurement of either domain-general visual abilities or category-specific knowledge. PMID:26418499
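
    The three-parameter logistic (3PL) item response model used here has a standard closed form: the probability of a correct response rises from a guessing floor c toward 1 as ability theta exceeds item difficulty b, with slope governed by discrimination a. A minimal sketch, with made-up item parameters:

```python
import numpy as np

def p_correct(theta, a, b, c):
    """3PL item response function: probability of a correct response for
    ability theta, item discrimination a, difficulty b, guessing floor c."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

# A made-up VETcar-style item: moderate discrimination, above-average
# difficulty, and a 1/3 guessing floor (as in a three-alternative trial).
a, b, c = 1.2, 0.5, 1 / 3

for theta in (-2.0, 0.0, 2.0):
    print(f"theta={theta:+.1f}: P(correct)={p_correct(theta, a, b, c):.2f}")

# DIF in IRT terms: the same item behaving differently across groups,
# e.g., effectively easier (lower b) for the younger group.
p_young = p_correct(0.0, a, b - 0.5, c)
p_older = p_correct(0.0, a, b, c)
print(f"younger: {p_young:.2f}, older: {p_older:.2f}")
```

    An item shows DIF when subjects of equal ability but different group membership have different response probabilities, as the last two lines illustrate.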

  7. Overcoming default categorical bias in spatial memory.

    PubMed

    Sampaio, Cristina; Wang, Ranxiao Frances

    2010-12-01

    In the present study, we investigated whether a strong default categorical bias can be overcome in spatial memory by using alternative membership information. In three experiments, we tested location memory in a circular space while providing participants with an alternative categorization. We found that visual presentation of the boundaries of the alternative categories (Experiment 1) did not induce the use of the alternative categories in estimation. In contrast, visual cuing of the alternative category membership of a target (Experiment 2) and unique target feature information associated with each alternative category (Experiment 3) successfully led to the use of the alternative categories in estimation. Taken together, the results indicate that default categorical bias in spatial memory can be overcome when appropriate cues are provided. We discuss how these findings expand the category adjustment model (Huttenlocher, Hedges, & Duncan, 1991) in spatial memory by proposing a retrieval-based category adjustment (RCA) model.
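
    The category adjustment idea referenced here is often formalized as an inverse-variance weighted average of a noisy fine-grained memory and a category prototype. The function below is an illustrative sketch of that weighting, with invented parameter values; it is not the published RCA model itself.

```python
def weighted_estimate(fine_grained, prototype, sigma_memory, sigma_category):
    """Inverse-variance weighting of a noisy fine-grained memory and a
    category prototype: the noisier the memory, the stronger the pull
    (bias) toward the prototype. Illustrative sketch only."""
    w = sigma_category ** 2 / (sigma_memory ** 2 + sigma_category ** 2)
    return w * fine_grained + (1 - w) * prototype

# Remembered location at 30 deg, category prototype at 45 deg (invented).
precise = weighted_estimate(30.0, 45.0, sigma_memory=2.0, sigma_category=10.0)
noisy = weighted_estimate(30.0, 45.0, sigma_memory=8.0, sigma_category=10.0)
print(f"precise memory: {precise:.1f} deg, noisy memory: {noisy:.1f} deg")
```

    Swapping in an alternative prototype, as the cuing manipulations in the abstract do, simply changes the value the estimate is pulled toward.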

  8. The Transformative Experience in Engineering Education

    NASA Astrophysics Data System (ADS)

    Goodman, Katherine Ann

    This research evaluates the usefulness of transformative experience (TE) in engineering education. With TE, students 1) apply ideas from coursework to everyday experiences without prompting (motivated use); 2) see everyday situations through the lens of course content (expanded perception); and 3) value course content in new ways because it enriches everyday affective experience (affective value). In a three-part study, we examine how engineering educators can promote student progress toward TE and reliably measure that progress. For the first study, we select a mechanical engineering technical elective, Flow Visualization, that had evidence of promoting expanded perception of fluid physics. Through student surveys and interviews, we compare this elective to the required Fluid Mechanics course. We found student interest in fluids fell into four categories: complexity, application, ubiquity, and aesthetics. Fluid Mechanics promotes interest from application, while Flow Visualization promotes interest based on ubiquity and aesthetics. Coding for expanded perception, we found it associated with students' engineering identity, rather than a specific course. In our second study, we replicate atypical teaching methods from Flow Visualization in a new design course: Aesthetics of Design. Coding of surveys and interviews reveals that open-ended assignments and supportive teams lead to increased ownership of projects, which fuels risk-taking, and produces increased confidence as an engineer. The third study seeks to establish parallels between expanded perception and measurable perceptual expertise. Our visual expertise experiment uses fluid flow images with both novices and experts (students who had passed fluid mechanics). After training, subjects sort images into laminar and turbulent categories. The results demonstrate that novices learned to sort the flow stimuli in ways similar to subjects in prior perceptual expertise studies. In contrast, the experts' significantly better results suggest they are accessing conceptual fluids knowledge to perform this new, visual task. The ability to map concepts onto visual information is likely a necessary step toward expanded perception. Our findings suggest that open-ended aesthetic experiences with engineering content unexpectedly support engineering identity development, and that visual tasks could be developed to measure conceptual understanding, promoting expanded perception. Overall, we find TE a productive theoretical framework for engineering education research.

  9. Transfer between local and global processing levels by pigeons (Columba livia) and humans (Homo sapiens) in exemplar- and rule-based categorization tasks.

    PubMed

    Aust, Ulrike; Braunöder, Elisabeth

    2015-02-01

    The present experiment investigated pigeons' and humans' processing styles (local or global) in an exemplar-based visual categorization task, in which category membership of every stimulus had to be learned individually, and in a rule-based task, in which category membership was defined by a perceptual rule. Group Intact was trained with the original pictures (providing both intact local and global information), Group Scrambled was trained with scrambled versions of the same pictures (impairing global information), and Group Blurred was trained with blurred versions (impairing local information). Subsequently, all subjects were tested for transfer to the 2 untrained presentation modes. Humans outperformed pigeons in learning speed and accuracy as well as transfer performance, and showed good learning irrespective of group assignment, whereas the pigeons of Group Blurred took longer to learn the training tasks than the pigeons of Groups Intact and Scrambled. Also, whereas humans generalized equally well to any novel presentation mode, pigeons' transfer from and to blurred stimuli was impaired. Both species showed faster learning and, for the most part, better transfer in the rule-based than in the exemplar-based task, but there was no evidence that the processing mode used depended on the type of task (exemplar- or rule-based). Whereas pigeons relied on local information throughout, humans showed no preference for either processing level. Additional tests with grayscale versions of the training stimuli, with versions that were both blurred and scrambled, and with novel instances of the rule-based task confirmed and further extended these findings. PsycINFO Database Record (c) 2015 APA, all rights reserved.
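The scrambled and blurred presentation modes described above are straightforward to reproduce. A minimal sketch (the tile size and blur kernel below are arbitrary choices for illustration, not values from the study): tile scrambling preserves local detail while destroying the global configuration, and box-filter blurring does the opposite.

```python
import numpy as np

def scramble(img, block=8, seed=0):
    """Shuffle fixed-size tiles of a square grayscale image: local detail
    inside each tile survives, but the global configuration is destroyed."""
    h, w = img.shape
    assert h % block == 0 and w % block == 0
    tiles = [img[r:r + block, c:c + block]
             for r in range(0, h, block)
             for c in range(0, w, block)]
    order = np.random.default_rng(seed).permutation(len(tiles))
    out = np.empty_like(img)
    i = 0
    for r in range(0, h, block):
        for c in range(0, w, block):
            out[r:r + block, c:c + block] = tiles[order[i]]
            i += 1
    return out

def blur(img, k=5):
    """Box-filter blur: removes fine local detail, keeps the global layout."""
    h, w = img.shape
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for r in range(h):
        for c in range(w):
            out[r, c] = padded[r:r + k, c:c + k].mean()
    return out
```

Applying both functions to the same stimulus yields the "blurred and scrambled" condition used in the additional tests.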

  10. Picture Detection in Rapid Serial Visual Presentation: Features or Identity?

    ERIC Educational Resources Information Center

    Potter, Mary C.; Wyble, Brad; Pandav, Rijuta; Olejarczyk, Jennifer

    2010-01-01

    A pictured object can be readily detected in a rapid serial visual presentation sequence when the target is specified by a superordinate category name such as "animal" or "vehicle". Are category features the initial basis for detection, with identification of the specific object occurring in a second stage (Evans &…

  11. An Examination of Strategy Implementation During Abstract Nonlinguistic Category Learning in Aphasia

    PubMed Central

    Kiran, Swathi

    2015-01-01

    Purpose: Our purpose was to study strategy use during nonlinguistic category learning in aphasia. Method: Twelve control participants without aphasia and 53 participants with aphasia (PWA) completed a computerized feedback-based category learning task consisting of training and testing phases. Accuracy rates of categorization in testing phases were calculated. To evaluate strategy use, strategy analyses were conducted over training and testing phases. Participant data were compared with model data that simulated complex multi-cue, single-feature, and random-pattern strategies. Learning success and strategy use were evaluated within the context of standardized cognitive-linguistic assessments. Results: Categorization accuracy was higher among control participants than among PWA. The majority of control participants implemented suboptimal or optimal multi-cue and single-feature strategies by the testing phases of the experiment. In contrast, a large subgroup of PWA implemented random patterns, or no strategy, during both training and testing phases of the experiment. Conclusions: Person-to-person variability arises not only in category learning ability but also in the strategies implemented to complete category learning tasks. PWA developed effective strategies during category learning tasks less frequently than control participants did. Certain PWA may have impairments of strategy development or feedback processing that are not captured by standardized language assessments or by the cognitive abilities currently probed. PMID:25908438

  12. Effect of explicit dimension instruction on speech category learning

    PubMed Central

    Chandrasekaran, Bharath; Yi, Han-Gyol; Smayda, Kirsten E.; Maddox, W. Todd

    2015-01-01

    Learning non-native speech categories is often considered a challenging task in adulthood. This difficulty is driven by cross-language differences in weighting critical auditory dimensions that differentiate speech categories. For example, previous studies have shown that differentiating Mandarin tonal categories requires attending to dimensions related to pitch height and direction. Relative to native speakers of Mandarin, the pitch direction dimension is under-weighted by native English speakers. In the current study, we examined the effect of explicit instructions (dimension instruction) on native English speakers' Mandarin tone category learning within the framework of a dual-learning systems (DLS) model. This model predicts that successful speech category learning is initially mediated by an explicit, reflective learning system that frequently utilizes unidimensional rules, with an eventual switch to a more implicit, reflexive learning system that utilizes multidimensional rules. Participants were explicitly instructed to focus on and/or ignore the pitch height dimension or the pitch direction dimension, or were given no explicit prime. Our results show that instructions directing participants to focus on pitch direction, and instructions diverting attention away from pitch height, resulted in enhanced tone categorization. Computational modeling of participant responses suggested that instruction related to pitch direction led to faster and more frequent use of multidimensional reflexive strategies, and enhanced perceptual selectivity along the previously under-weighted pitch direction dimension. PMID:26542400

  13. Learning Object Names at Different Hierarchical Levels Using Cross-Situational Statistics.

    PubMed

    Chen, Chi-Hsin; Zhang, Yayun; Yu, Chen

    2018-05-01

    Objects in the world usually have names at different hierarchical levels (e.g., beagle, dog, animal). This research investigates adults' ability to use cross-situational statistics to simultaneously learn object labels at individual and category levels. The results revealed that adults were able to use co-occurrence information to learn hierarchical labels in contexts where the labels for individual objects and labels for categories were presented in completely separated blocks, in interleaved blocks, or mixed in the same trial. Temporal presentation schedules significantly affected the learning of individual object labels, but not the learning of category labels. Learners' subsequent generalization of category labels indicated sensitivity to the structure of statistical input. Copyright © 2017 Cognitive Science Society, Inc.
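The co-occurrence bookkeeping behind cross-situational learning can be sketched in a few lines. The toy corpus, object names, and 0.7 threshold below are invented for illustration, not the study's materials: the idea is that a word whose co-occurrence counts concentrate on one object kind behaves like an individual-level label, while a word whose counts spread across several kinds behaves like a category-level label.

```python
from collections import Counter, defaultdict

# Each trial: a label heard and the objects seen; which label goes with
# which object is ambiguous within any single trial.
trials = [
    ("beagle", ["beagle_1", "cat_1"]),
    ("beagle", ["beagle_2", "poodle_1"]),
    ("poodle", ["poodle_1", "cat_2"]),
    ("poodle", ["poodle_2", "beagle_1"]),
    ("cat",    ["cat_1", "poodle_2"]),
    ("cat",    ["cat_2", "beagle_2"]),
    ("dog",    ["beagle_1", "cat_1"]),
    ("dog",    ["poodle_1", "cat_2"]),
    ("dog",    ["beagle_2", "poodle_2"]),
    ("dog",    ["beagle_1", "poodle_1"]),
]

# Accumulate word-by-kind co-occurrence counts across all trials.
cooc = defaultdict(Counter)
for word, objects in trials:
    for obj in objects:
        kind = obj.split("_")[0]        # 'beagle_1' -> 'beagle'
        cooc[word][kind] += 1

def referents(word, threshold=0.7):
    """Kinds whose counts come within `threshold` of the best competitor."""
    counts = cooc[word]
    top = max(counts.values())
    return {k for k, c in counts.items() if c >= threshold * top}
```

After aggregating, "beagle" resolves to a single kind while "dog" spreads over both dog kinds, mirroring learning at the individual versus category level.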

  14. The naming impairment of living and nonliving items in Alzheimer's disease.

    PubMed

    Montanes, P; Goldblum, M C; Boller, F

    1995-01-01

    Several studies of semantic abilities in Dementia of the Alzheimer Type (DAT) suggest that semantic disorders in DAT may affect specific categories of knowledge. In particular, a category-specific semantic impairment selectively affecting living things has frequently been reported in association with DAT. We report here results from two naming tasks administered to 25 DAT patients and to two subgroups within this population. The first naming task used 48 black-and-white line drawings from Snodgrass and Vanderwart (1980), which controlled the visual complexity of stimuli from living and nonliving categories. The second task used 44 colored pictures (to assess the influence of word frequency in living vs. nonliving categories). Within the set of black-and-white pictures, both DAT patients and controls obtained significantly lower scores on stimuli of high visual complexity than on stimuli of low visual complexity. A clear effect of semantic category emerged for DAT patients and controls, with lower performance on the living category. Within the colored set, pictures corresponding to high-frequency words gave rise to significantly higher scores than pictures corresponding to low-frequency words. No significant difference emerged between living and nonliving categories, either in DAT patients or in controls. Across the two tasks, the two subgroups of DAT patients presented different profiles of performance and error type. As color constitutes the main difference between the two sets of pictures, our results point to the relevance of this cue in the processing of semantic information, with visual complexity and frequency also being very relevant.

  15. The Role of Bottom-Up Processing in Perceptual Categorization by 3- to 4-Month-Old Infants: Simulations and Data

    ERIC Educational Resources Information Center

    French, Robert M.; Mareschal, Denis; Mermillod, Martial; Quinn, Paul C.

    2004-01-01

    Disentangling bottom-up and top-down processing in adult category learning is notoriously difficult. Studying category learning in infancy provides a simple way of exploring category learning while minimizing the contribution of top-down information. Three- to 4-month-old infants presented with cat or dog images will form a perceptual category…

  16. Combining features from ERP components in single-trial EEG for discriminating four-category visual objects.

    PubMed

    Wang, Changming; Xiong, Shi; Hu, Xiaoping; Yao, Li; Zhang, Jiacai

    2012-10-01

    Categories of images containing visual objects can be successfully recognized using single-trial electroencephalography (EEG) measured while subjects view the images. Previous studies have shown that task-related information contained in event-related potential (ERP) components can discriminate two or three categories of object images. In this study, we investigated whether four categories of objects (human faces, buildings, cats and cars) could be mutually discriminated using single-trial EEG data. Here, the EEG waveforms acquired while subjects were viewing the four categories of object images were segmented into several ERP components (P1, N1, P2a and P2b), and Fisher linear discriminant analysis (Fisher-LDA) was then used to classify EEG features extracted from the ERP components. First, we compared classification results using features from single ERP components, and found that the N1 component achieved the highest classification accuracies. Second, we discriminated the four categories of objects using combined features from multiple ERP components, and showed that combining ERP components improved four-category classification accuracy by exploiting the complementary discriminative information in the components. These findings confirm that four categories of object images can be discriminated with single-trial EEG and can guide the selection of effective EEG features for classifying visual objects.
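The Fisher-LDA classification stage described above can be sketched on simulated data. The feature dimensionality, class separations, and noise level below are invented for illustration (the study extracted its features from P1, N1, P2a, and P2b components of recorded EEG); the discriminant itself is the standard multi-class Fisher construction followed by nearest-class-mean assignment.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_trials(n_per_class=50, n_features=8):
    """Simulate single-trial feature vectors for four object categories
    (faces, buildings, cats, cars). Class means and noise level are
    arbitrary choices for illustration, not values from the study."""
    X, y = [], []
    for label in range(4):
        mean = rng.normal(0.0, 1.0, n_features)
        X.append(mean + rng.normal(0.0, 1.5, (n_per_class, n_features)))
        y.append(np.full(n_per_class, label))
    return np.vstack(X), np.concatenate(y)

def fisher_lda_fit(X, y, n_comp=3):
    """Multi-class Fisher discriminant: find directions that maximize
    between-class scatter relative to within-class scatter."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Solve Sw^{-1} Sb w = lambda w and keep the leading directions.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1][:n_comp]
    W = evecs.real[:, order]
    means = {c: (X[y == c] @ W).mean(axis=0) for c in classes}
    return W, means

def predict(X, W, means):
    """Assign each trial to the nearest class mean in discriminant space."""
    Z = X @ W
    labels = list(means)
    dists = np.stack([np.linalg.norm(Z - means[c], axis=1) for c in labels])
    return np.array(labels)[dists.argmin(axis=0)]
```

Concatenating feature vectors from several components before fitting is the "combined features" variant; with complementary information in the components, the combined classifier beats any single-component one.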

  17. The Time Course of Explicit and Implicit Categorization

    PubMed Central

    Zakrzewski, Alexandria C.; Herberger, Eric; Boomer, Joseph; Roeder, Jessica; Ashby, F. Gregory; Church, Barbara A.

    2015-01-01

    Contemporary theory in cognitive neuroscience distinguishes, among the processes that serve categorization, explicit and implicit systems of category learning: the explicit system learns category rules by active hypothesis testing, whereas the implicit system learns adaptive behaviors by association and reinforcement. Little is known about the time course of categorization within these systems. Accordingly, the present experiments contrasted tasks that fostered explicit categorization (because they had a one-dimensional, rule-based solution) or implicit categorization (because they had a two-dimensional, information-integration solution). In Experiment 1, participants learned categories under unspeeded or speeded conditions. In Experiment 2, they applied previously trained category knowledge under unspeeded or speeded conditions. Speeded conditions selectively impaired implicit category learning and implicit mature categorization. These results illuminate the processing dynamics of explicit/implicit categorization. PMID:26025556

  18. Differences in perceptual learning transfer as a function of training task.

    PubMed

    Green, C Shawn; Kattner, Florian; Siegel, Max H; Kersten, Daniel; Schrater, Paul R

    2015-01-01

    A growing body of research--including results from behavioral psychology, human structural and functional imaging, single-cell recordings in nonhuman primates, and computational modeling--suggests that perceptual learning effects are best understood as a change in the ability of higher-level integration or association areas to read out sensory information in the service of particular decisions. Work in this vein has argued that, depending on the training experience, the "rules" for this read-out can either be applicable to new contexts (thus engendering learning generalization) or can apply only to the exact training context (thus resulting in learning specificity). Here we contrast learning tasks designed to promote either stimulus-specific or stimulus-general rules. Specifically, we compare learning transfer across visual orientation following training on three different tasks: an orientation categorization task (which permits an orientation-specific learning solution), an orientation estimation task (which requires an orientation-general learning solution), and an orientation categorization task in which the relevant category boundary shifts on every trial (which lies somewhere between the two tasks above). While the simple orientation-categorization training task resulted in orientation-specific learning, the estimation and moving categorization tasks resulted in significant orientation learning generalization. The general framework tested here--that task specificity or generality can be predicted via an examination of the optimal learning solution--may be useful in building future training paradigms with certain desired outcomes.

  19. Adults' and Children's Understanding of How Expertise Influences Learning.

    PubMed

    Danovitch, Judith H; Shenouda, Christine K

    2018-01-01

    Adults and children use information about expertise to infer what a person is likely to know, but it is unclear whether they realize that expertise also has implications for learning. We explore adults' and children's understanding that expertise in a particular category supports learning about a closely related category. In four experiments, 5-year-olds and adults (n = 160) judged which of two people would be better at learning about a new category. When faced with an expert and a nonexpert, adults consistently indicated that expertise supports learning in a closely related category; however, children's judgments were inconsistent and were strongly influenced by the description of the nonexpert. The results suggest that although children understand what it means to be an expert, they may judge an individual's learning capacity based on different considerations than adults.

  20. Exploiting range imagery: techniques and applications

    NASA Astrophysics Data System (ADS)

    Armbruster, Walter

    2009-07-01

    Practically no applications exist for which automatic processing of 2D intensity imagery can equal human visual perception. This is not the case for range imagery. The paper gives examples of 3D laser radar applications, for which automatic data processing can exceed human visual cognition capabilities and describes basic processing techniques for attaining these results. The examples are drawn from the fields of helicopter obstacle avoidance, object detection in surveillance applications, object recognition at high range, multi-object-tracking, and object re-identification in range image sequences. Processing times and recognition performances are summarized. The techniques used exploit the bijective continuity of the imaging process as well as its independence of object reflectivity, emissivity and illumination. This allows precise formulations of the probability distributions involved in figure-ground segmentation, feature-based object classification and model based object recognition. The probabilistic approach guarantees optimal solutions for single images and enables Bayesian learning in range image sequences. Finally, due to recent results in 3D-surface completion, no prior model libraries are required for recognizing and re-identifying objects of quite general object categories, opening the way to unsupervised learning and fully autonomous cognitive systems.

  1. Level conceptual change pre-service elementary teachers on electric current conceptions through visual multimedia supported conceptual change

    NASA Astrophysics Data System (ADS)

    Hermita, N.; Suhandi, A.; Syaodih, E.; Samsudin, A.; Marhadi, H.; Sapriadil, S.; Zaenudin, Z.; Rochman, C.; Mansur, M.; Wibowo, F. C.

    2018-05-01

    Conceptual change is currently among the most important issues in science education, especially in elementary education. The aim of this research was to raise the level of conceptual change on electric current conceptions through Visual Multimedia Supported Conceptual Change Text (VMMSCCText). We used a research and development method, 3D-1I, which stands for Define, Design, Development, and Implementation. Twenty-seven pre-service elementary teachers were involved in the research. The function of a battery in an electric circuit is one of the concepts the students were expected to learn. The collected data show approximately 0% static, 0% disorientation, 55.6% reconstruction, and 25.9% construction. We conclude that implementing VMMSCCText with pre-service elementary teachers raised their level of conceptual change.

  2. Visual mismatch negativity and categorization.

    PubMed

    Czigler, István

    2014-07-01

    Visual mismatch negativity (vMMN) component of event-related potentials is elicited by stimuli violating the category rule of stimulus sequences, even if such stimuli are outside the focus of attention. Category-related vMMN emerges to colors, and color-related vMMN is sensitive to language-related effects. A higher-order perceptual category, bilateral symmetry is also represented in the memory processes underlying vMMN. As a relatively large body of research shows, violating the emotional category of human faces elicits vMMN. Another face-related category sensitive to the violation of regular presentation is gender. Finally, vMMN was elicited to the laterality of hands. As results on category-related vMMN show, stimulus representation in the non-conscious change detection system is fairly complex, and it is not restricted to the registration of elementary perceptual regularities.

  3. Category search speeds up face-selective fMRI responses in a non-hierarchical cortical face network.

    PubMed

    Jiang, Fang; Badler, Jeremy B; Righi, Giulia; Rossion, Bruno

    2015-05-01

    The human brain is extremely efficient at detecting faces in complex visual scenes, but the spatio-temporal dynamics of this remarkable ability, and how it is influenced by category-search, remain largely unknown. In the present study, human subjects were shown gradually-emerging images of faces or cars in visual scenes, while neural activity was recorded using functional magnetic resonance imaging (fMRI). Category search was manipulated by the instruction to indicate the presence of either a face or a car, in different blocks, as soon as an exemplar of the target category was detected in the visual scene. The category selectivity of most face-selective areas was enhanced when participants were instructed to report the presence of faces in gradually decreasing noise stimuli. Conversely, the same regions showed much less selectivity when participants were instructed instead to detect cars. When "face" was the target category, the fusiform face area (FFA) showed consistently earlier differentiation of face versus car stimuli than did the "occipital face area" (OFA). When "car" was the target category, only the FFA showed differentiation of face versus car stimuli. These observations provide further challenges for hierarchical models of cortical face processing and show that during gradual revealing of information, selective category-search may decrease the required amount of information, enhancing and speeding up category-selective responses in the human brain. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Models as Relational Categories

    NASA Astrophysics Data System (ADS)

    Kokkonen, Tommi

    2017-11-01

    Model-based learning (MBL) has an established position within science education. It has been found to enhance conceptual understanding and provide a way for engaging students in authentic scientific activity. Despite ample research, few studies have examined the cognitive processes involved in learning scientific concepts within MBL. On the other hand, recent research within cognitive science has examined the learning of so-called relational categories, whose membership is determined on the basis of a common relational structure. In this theoretical paper, I argue that viewing models as relational categories provides a well-motivated cognitive basis for MBL. I discuss the different roles of models and modeling within MBL (using ready-made models, constructive modeling, and generative modeling) and discern the related cognitive aspects brought forward by the reinterpretation of models as relational categories. I will argue that relational knowledge is vital in learning novel models and in the transfer of learning. Moreover, relational knowledge underlies the coherent, hierarchical knowledge of experts. Lastly, I will examine how the format of external representations may affect the learning of models and the relevant relations. The nature of the learning mechanisms underlying students' mental representations of models is an interesting open question to be examined. Furthermore, how expert-like knowledge develops, and how best to support it, are in need of more research. The discussion and conceptualization of models as relational categories allows discerning students' mental representations of models in terms of evolving relational structures in greater detail than previously done.

  5. A Brief Review of Facial Emotion Recognition Based on Visual Information.

    PubMed

    Ko, Byoung Chul

    2018-01-30

    Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Although FER can be conducted using multiple sensors, this review focuses on studies that exclusively use facial images, because visual expressions are one of the main information channels in interpersonal communication. This paper provides a brief review of research in the field of FER conducted over the past decades. First, conventional FER approaches are described along with a summary of the representative categories of FER systems and their main algorithms. Deep-learning-based FER approaches using deep networks enabling "end-to-end" learning are then presented. This review also covers an up-to-date hybrid deep-learning approach that combines a convolutional neural network (CNN) for the spatial features of an individual frame with long short-term memory (LSTM) for the temporal features of consecutive frames. In the later part of this paper, a brief review of publicly available evaluation metrics is given, along with a comparison against benchmark results, which are a standard for the quantitative comparison of FER studies. This review can serve as a brief guidebook for newcomers to the field of FER, providing basic knowledge and a general understanding of the latest state-of-the-art studies, as well as for experienced researchers looking for productive directions for future work.

  6. A Brief Review of Facial Emotion Recognition Based on Visual Information

    PubMed Central

    2018-01-01

    Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Although FER can be conducted using multiple sensors, this review focuses on studies that exclusively use facial images, because visual expressions are one of the main information channels in interpersonal communication. This paper provides a brief review of research in the field of FER conducted over the past decades. First, conventional FER approaches are described along with a summary of the representative categories of FER systems and their main algorithms. Deep-learning-based FER approaches using deep networks enabling “end-to-end” learning are then presented. This review also covers an up-to-date hybrid deep-learning approach that combines a convolutional neural network (CNN) for the spatial features of an individual frame with long short-term memory (LSTM) for the temporal features of consecutive frames. In the later part of this paper, a brief review of publicly available evaluation metrics is given, along with a comparison against benchmark results, which are a standard for the quantitative comparison of FER studies. This review can serve as a brief guidebook for newcomers to the field of FER, providing basic knowledge and a general understanding of the latest state-of-the-art studies, as well as for experienced researchers looking for productive directions for future work. PMID:29385749

  7. A Note on DeCaro, Thomas, and Beilock (2008): Further Data Demonstrate Complexities in the Assessment of Information-Integration Category Learning

    ERIC Educational Resources Information Center

    Tharp, Ian J.; Pickering, Alan D.

    2009-01-01

    DeCaro et al. [DeCaro, M. S., Thomas, R. D., & Beilock, S. L. (2008). "Individual differences in category learning: Sometimes less working memory capacity is better than more." "Cognition, 107"(1), 284-294] explored how individual differences in working memory capacity differentially mediate the learning of distinct category structures.…

  8. Verb Learning in 14- and 18-Month-Old English-Learning Infants

    ERIC Educational Resources Information Center

    He, Angela Xiaoxue; Lidz, Jeffrey

    2017-01-01

    The present study investigates English-learning infants' early understanding of the link between the grammatical category "verb" and the conceptual category "event," and their ability to recruit morphosyntactic information online to learn novel verb meanings. We report two experiments using an infant-controlled…

  9. Transcranial infrared laser stimulation improves rule-based, but not information-integration, category learning in humans.

    PubMed

    Blanco, Nathaniel J; Saucedo, Celeste L; Gonzalez-Lima, F

    2017-03-01

    This is the first randomized, controlled study comparing the cognitive effects of transcranial laser stimulation on category learning tasks. Transcranial infrared laser stimulation is a new non-invasive form of brain stimulation that shows promise for wide-ranging experimental and neuropsychological applications. It involves using infrared laser to enhance cerebral oxygenation and energy metabolism through upregulation of the respiratory enzyme cytochrome oxidase, the primary infrared photon acceptor in cells. Previous research found that transcranial infrared laser stimulation aimed at the prefrontal cortex can improve sustained attention, short-term memory, and executive function. In this study, we directly investigated the influence of transcranial infrared laser stimulation on two neurobiologically dissociable systems of category learning: a prefrontal cortex mediated reflective system that learns categories using explicit rules, and a striatally mediated reflexive learning system that forms gradual stimulus-response associations. Participants (n=118) received either active infrared laser to the lateral prefrontal cortex or sham (placebo) stimulation, and then learned one of two category structures-a rule-based structure optimally learned by the reflective system, or an information-integration structure optimally learned by the reflexive system. We found that prefrontal rule-based learning was substantially improved following transcranial infrared laser stimulation as compared to placebo (treatment X block interaction: F(1, 298)=5.117, p=0.024), while information-integration learning did not show significant group differences (treatment X block interaction: F(1, 288)=1.633, p=0.202). These results highlight the exciting potential of transcranial infrared laser stimulation for cognitive enhancement and provide insight into the neurobiological underpinnings of category learning. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Visual Images of Subjective Perception of Time in a Literary Text

    ERIC Educational Resources Information Center

    Nesterik, Ella V.; Issina, Gaukhar I.; Pecherskikh, Taliya F.; Belikova, Oxana V.

    2016-01-01

    The article is devoted to the subjective perception of time, or psychological time, as a text category and a literary image. It focuses on the visual images that are characteristic of different types of literary time--accelerated, decelerated and frozen (vanished). The research is based on the assumption that the category of subjective perception…

  11. The Precategorical Nature of Visual Short-Term Memory

    ERIC Educational Resources Information Center

    Quinlan, Philip T.; Cohen, Dale J.

    2016-01-01

    We conducted a series of recognition experiments that assessed whether visual short-term memory (VSTM) is sensitive to shared category membership of to-be-remembered (tbr) images of common objects. In Experiment 1 some of the tbr items shared the same basic level category (e.g., hand axe): Such items were no better retained than others. In the…

  12. Brief Report: Simulations Suggest Heterogeneous Category Learning and Generalization in Children with Autism is a Result of Idiosyncratic Perceptual Transformations.

    PubMed

    Mercado, Eduardo; Church, Barbara A

    2016-08-01

    Children with autism spectrum disorder (ASD) sometimes have difficulties learning categories. Past computational work suggests that such deficits may result from atypical representations in cortical maps. Here we use neural networks to show that idiosyncratic transformations of inputs can result in the formation of feature maps that impair category learning for some inputs, but not for other closely related inputs. These simulations suggest that large inter- and intra-individual variations in learning capacities shown by children with ASD across similar categorization tasks may similarly result from idiosyncratic perceptual encoding that is resistant to experience-dependent changes. If so, then both feedback- and exposure-based category learning should lead to heterogeneous, stimulus-dependent deficits in children with ASD.

  13. Attribute conjunctions and the part configuration advantage in object category learning.

    PubMed

    Saiki, J; Hummel, J E

    1996-07-01

    Five experiments demonstrated that in object category learning people are particularly sensitive to conjunctions of part shapes and relative locations. Participants learned categories defined by a part's shape and color (part-color conjunctions) or by a part's shape and its location relative to another part (part-location conjunctions). The statistical properties of the categories were identical across these conditions, as were the salience of color and relative location. Participants were better at classifying objects defined by part-location conjunctions than objects defined by part-color conjunctions. Subsequent experiments revealed that this effect was not due to the specific color manipulation or the role of location per se. These results suggest that the shape bias in object categorization is at least partly due to sensitivity to part-location conjunctions and suggest a new processing constraint on category learning.

  14. Creating objects and object categories for studying perception and perceptual learning.

    PubMed

    Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay

    2012-11-02

    In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties. Many innovative and useful methods currently exist for creating novel objects and object categories (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection. 
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
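The virtual phylogenesis (VP) idea above — growing object categories by simulated descent with modification — can be caricatured in a few lines. This is a toy sketch only: the function name, parameters, and the use of a flat "shape vector" are hypothetical stand-ins for the paper's actual 3-D digital-embryo representation.

```python
import random

def virtual_phylogenesis(ancestor, n_generations, n_children, mutation_sd, rng):
    """Toy sketch of the VP idea (all names/parameters hypothetical):
    grow a lineage by copying each shape vector with small Gaussian
    mutations; the resulting leaves form a category of related shapes
    whose variability arises from the process, not from the experimenter."""
    generation = [ancestor]
    for _ in range(n_generations):
        next_gen = []
        for parent in generation:
            for _ in range(n_children):
                # Each child inherits the parent's shape plus random mutation.
                child = [g + rng.gauss(0, mutation_sd) for g in parent]
                next_gen.append(child)
        generation = next_gen
    return generation  # leaves: exemplars sharing category-typical structure

# Usage: two lineages grown from different ancestors yield two categories.
rng = random.Random(0)
category_a = virtual_phylogenesis([0.0] * 4, 3, 2, 0.1, rng)
```

Running VP twice from different ancestors gives two "species" of shapes; within-category similarity falls off with lineage distance, which is the property the paper exploits to tune task difficulty.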

  15. Field-Dependence/Independence and Active Learning of Verbal and Geometric Material.

    ERIC Educational Resources Information Center

    Reardon, Richard; And Others

    1982-01-01

    Field-dependent and independent subjects sorted geometric and verbal material according to category exemplars, forcing active learning, and then recalled the category locations. Field-independent individuals generally performed better on learning and memory tasks with a more active approach. Active versus passive learning styles are discussed.…

  16. Associations between Chinese EFL Graduate Students' Beliefs and Language Learning Strategies

    ERIC Educational Resources Information Center

    Tang, Mailing; Tian, Jianrong

    2015-01-01

    This study, using Horwitz's Beliefs about Language Learning Inventory and Oxford's Strategy Inventory for Language Learning, investigated learners' beliefs about language learning and their choice of strategy categories among 546 graduate students in China. The correlation between learners' beliefs and their strategy categories use was examined.…

  17. Supralinear and Supramodal Integration of Visual and Tactile Signals in Rats: Psychophysics and Neuronal Mechanisms.

    PubMed

    Nikbakht, Nader; Tafreshiha, Azadeh; Zoccolan, Davide; Diamond, Mathew E

    2018-02-07

    To better understand how object recognition can be triggered independently of the sensory channel through which information is acquired, we devised a task in which rats judged the orientation of a raised, black and white grating. They learned to recognize two categories of orientation: 0° ± 45° ("horizontal") and 90° ± 45° ("vertical"). Each trial required a visual (V), a tactile (T), or a visual-tactile (VT) discrimination; VT performance was better than that predicted by optimal linear combination of V and T signals, indicating synergy between sensory channels. We examined posterior parietal cortex (PPC) and uncovered key neuronal correlates of the behavioral findings: PPC carried both graded information about object orientation and categorical information about the rat's upcoming choice; single neurons exhibited identical responses under the three modality conditions. Finally, a linear classifier of neuronal population firing replicated the behavioral findings. Taken together, these findings suggest that PPC is involved in the supramodal processing of shape. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
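The "linear classifier of neuronal population firing" mentioned in the abstract can be illustrated with a minimal mistake-driven linear readout. This is a generic perceptron sketch, not the authors' implementation; the training data and function name are invented for illustration.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Minimal linear readout (a generic stand-in for the paper's
    population classifier): learn weights so that sign(w . r + b)
    predicts the orientation category (+1 = vertical, -1 = horizontal)
    from a population firing-rate vector r."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for r, y in zip(samples, labels):  # y in {-1, +1}
            pred = 1 if sum(wi * ri for wi, ri in zip(w, r)) + b > 0 else -1
            if pred != y:                  # update weights only on mistakes
                w = [wi + lr * y * ri for wi, ri in zip(w, r)]
                b += lr * y
    return w, b

# Usage with toy two-neuron "population" responses (illustrative values):
samples = [(1.0, 0.9), (0.9, 1.1), (-1.0, -0.8), (-1.1, -0.9)]
labels = [1, 1, -1, -1]
w, b = train_perceptron(samples, labels)
```

In the study's logic, one such readout trained on PPC population activity would be tested separately on V, T, and VT trials to ask whether its accuracy mirrors the rats' supralinear multisensory behavior.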

  18. Colour categories are reflected in sensory stages of colour perception when stimulus issues are resolved.

    PubMed

    Forder, Lewis; He, Xun; Franklin, Anna

    2017-01-01

    Debate exists about the time course of the effect of colour categories on visual processing. We investigated the effect of colour categories for two groups who differed in whether they categorised a blue-green boundary colour as the same- or different-category to a reliably-named blue colour and a reliably-named green colour. Colour differences were equated in just-noticeable differences to be equally discriminable. We analysed event-related potentials for these colours elicited on a passive visual oddball task and investigated the time course of categorical effects on colour processing. Support for category effects was found 100 ms after stimulus onset, and over frontal sites around 250 ms, suggesting that colour naming affects both early sensory and later stages of chromatic processing.

  19. Colour categories are reflected in sensory stages of colour perception when stimulus issues are resolved

    PubMed Central

    He, Xun; Franklin, Anna

    2017-01-01

    Debate exists about the time course of the effect of colour categories on visual processing. We investigated the effect of colour categories for two groups who differed in whether they categorised a blue-green boundary colour as the same- or different-category to a reliably-named blue colour and a reliably-named green colour. Colour differences were equated in just-noticeable differences to be equally discriminable. We analysed event-related potentials for these colours elicited on a passive visual oddball task and investigated the time course of categorical effects on colour processing. Support for category effects was found 100 ms after stimulus onset, and over frontal sites around 250 ms, suggesting that colour naming affects both early sensory and later stages of chromatic processing. PMID:28542426

  20. Categorical Effects in Children's Colour Search: A Cross-Linguistic Comparison

    ERIC Educational Resources Information Center

    Daoutis, Christine A.; Franklin, Anna; Riddett, Amy; Clifford, Alexandra; Davies, Ian R. L.

    2006-01-01

    In adults, visual search for a colour target is facilitated if the target and distractors fall in different colour categories (e.g. Daoutis, Pilling, & Davies, in press). The present study explored category effects in children's colour search. The relationship between linguistic colour categories and perceptual categories was addressed by…

  1. Influence of maneuverability on helicopter combat effectiveness

    NASA Technical Reports Server (NTRS)

    Falco, M.; Smith, R.

    1982-01-01

    A computational procedure employing a stochastic learning method in conjunction with dynamic simulation of helicopter flight and weapon system operation was used to derive helicopter maneuvering strategies. The derived strategies maximize either survival or kill probability and are in the form of a feedback control based upon threat visual or warning system cues. Maneuverability parameters implicit in the strategy development include maximum longitudinal acceleration and deceleration, maximum sustained and transient load factor turn rate at forward speed, and maximum pedal turn rate and lateral acceleration at hover. Results are presented in terms of probability of kill for all combat initial conditions for two threat categories.

  2. A Novel Clustering Method Curbing the Number of States in Reinforcement Learning

    NASA Astrophysics Data System (ADS)

    Kotani, Naoki; Nunobiki, Masayuki; Taniguchi, Kenji

    We propose an efficient state-space construction method for reinforcement learning. Our method controls the number of categories by improving the clustering method of Fuzzy ART, an autonomous state-space construction technique. The proposed method represents each weight vector as the mean of its input vectors in order to curb the creation of new categories, and it eliminates categories with low state values to limit the total number of categories. As the state value is updated, the category size shrinks so that the policy is learned more precisely. We verified the effectiveness of the proposed method in simulations of a reaching problem for a two-link robot arm, confirming that the number of categories was reduced and that the agent achieved the complex task quickly.
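The core mechanism in this abstract — categories whose weight vector is the running mean of assigned inputs, plus pruning of low-value categories — can be sketched compactly. This is a simplified illustration, not the authors' Fuzzy ART variant; the class and field names, the distance-threshold matching rule, and the parameters are all assumptions.

```python
import math

class MeanClusterer:
    """Sketch of the abstract's idea (names and matching rule hypothetical):
    each category's weight vector is the incremental mean of the inputs
    assigned to it, and categories whose state value stays low are pruned
    to curb state-space growth."""

    def __init__(self, radius, min_value):
        self.radius = radius        # max distance for an input to join a category
        self.min_value = min_value  # categories below this state value are removed
        self.categories = []        # each: {"mean": [...], "n": int, "value": float}

    def assign(self, x):
        # Find the nearest existing category within the radius.
        best, best_d = None, self.radius
        for c in self.categories:
            d = math.dist(c["mean"], x)
            if d <= best_d:
                best, best_d = c, d
        if best is None:
            # No match: create a new category seeded at this input.
            best = {"mean": list(x), "n": 1, "value": 0.0}
            self.categories.append(best)
        else:
            # Update the category mean incrementally (no stored history needed).
            best["n"] += 1
            best["mean"] = [m + (xi - m) / best["n"]
                            for m, xi in zip(best["mean"], x)]
        return best

    def prune(self):
        # Drop categories whose learned state value never rose above threshold.
        self.categories = [c for c in self.categories if c["value"] >= self.min_value]
```

Because the mean update pulls a category toward the inputs it absorbs, nearby observations merge into one state instead of spawning new categories — the mechanism the abstract credits for the reduced state count.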

  3. Effects of grammatical categories on children's visual language processing: Evidence from event-related brain potentials.

    PubMed

    Weber-Fox, Christine; Hart, Laura J; Spruill, John E

    2006-07-01

    This study examined how school-aged children process different grammatical categories. Event-related brain potentials elicited by words in visually presented sentences were analyzed according to seven grammatical categories with naturally varying characteristics of linguistic functions, semantic features, and quantitative attributes of length and frequency. The categories included nouns, adjectives, verbs, pronouns, conjunctions, prepositions, and articles. The findings indicate that by the age of 9-10 years, children exhibit robust neural indicators differentiating grammatical categories; however, it is also evident that development of language processing is not yet adult-like at this age. The current findings are consistent with the hypothesis that for beginning readers a variety of cues and characteristics interact to affect processing of different grammatical categories and indicate the need to take into account linguistic functions, prosodic salience, and grammatical complexity as they relate to the development of language abilities.

  4. View-invariant object category learning, recognition, and search: how spatial and object attention are coordinated using surface-based attentional shrouds.

    PubMed

    Fazl, Arash; Grossberg, Stephen; Mingolla, Ennio

    2009-02-01

    How does the brain learn to recognize an object from multiple viewpoints while scanning a scene with eye movements? How does the brain avoid the problem of erroneously classifying parts of different objects together? How are attention and eye movements intelligently coordinated to facilitate object learning? A neural model provides a unified mechanistic explanation of how spatial and object attention work together to search a scene and learn what is in it. The ARTSCAN model predicts how an object's surface representation generates a form-fitting distribution of spatial attention, or "attentional shroud". All surface representations dynamically compete for spatial attention to form a shroud. The winning shroud persists during active scanning of the object. The shroud maintains sustained activity of an emerging view-invariant category representation while multiple view-specific category representations are learned and are linked through associative learning to the view-invariant object category. The shroud also helps to restrict scanning eye movements to salient features on the attended object. Object attention plays a role in controlling and stabilizing the learning of view-specific object categories. Spatial attention hereby coordinates the deployment of object attention during object category learning. Shroud collapse releases a reset signal that inhibits the active view-invariant category in the What cortical processing stream. Then a new shroud, corresponding to a different object, forms in the Where cortical processing stream, and search using attention shifts and eye movements continues to learn new objects throughout a scene. The model mechanistically clarifies basic properties of attention shifts (engage, move, disengage) and inhibition of return. 
It simulates human reaction time data about object-based spatial attention shifts, and learns with 98.1% accuracy and a compression of 430 on a letter database whose letters vary in size, position, and orientation. The model provides a powerful framework for unifying many data about spatial and object attention, and their interactions during perception, cognition, and action.

  5. Human Factors Engineering. Student Supplement,

    DTIC Science & Technology

    1981-08-01

    a job TASK TAXONOMY: a classification scheme for the different levels of activities in a system, i.e., job, task, sub-task, etc. TASK ANALYSIS… with the classification of learning objectives by learning category so as to identify learning guidelines necessary for optimum learning… correct… the sequencing of all dependent tasks… the classification of learning objectives by learning category and the identification of

  6. Encodings of implied motion for animate and inanimate object categories in the two visual pathways.

    PubMed

    Lu, Zhengang; Li, Xueting; Meng, Ming

    2016-01-15

    Previous research has proposed two separate pathways for visual processing: the dorsal pathway for "where" information vs. the ventral pathway for "what" information. Interestingly, the middle temporal cortex (MT) in the dorsal pathway is involved in representing implied motion from still pictures, suggesting an interaction between motion and object related processing. However, the relationship between how the brain encodes implied motion and how the brain encodes object/scene categories is unclear. To address this question, fMRI was used to measure activity along the two pathways corresponding to different animate and inanimate categories of still pictures with different levels of implied motion speed. In the visual areas of both pathways, activity induced by pictures of humans and animals was hardly modulated by the implied motion speed. By contrast, activity in these areas correlated with the implied motion speed for pictures of inanimate objects and scenes. The interaction between implied motion speed and stimuli category was significant, suggesting different encoding mechanisms of implied motion for animate-inanimate distinction. Further multivariate pattern analysis of activity in the dorsal pathway revealed significant effects of stimulus category that are comparable to the ventral pathway. Moreover, still pictures of inanimate objects/scenes with higher implied motion speed evoked activation patterns that were difficult to differentiate from those evoked by pictures of humans and animals, indicating a functional role of implied motion in the representation of object categories. These results provide novel evidence to support integrated encoding of motion and object categories, suggesting a rethink of the relationship between the two visual pathways. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Distributional Language Learning: Mechanisms and Models of Category Formation.

    PubMed

    Aslin, Richard N; Newport, Elissa L

    2014-09-01

    In the past 15 years, a substantial body of evidence has confirmed that a powerful distributional learning mechanism is present in infants, children, adults and (at least to some degree) in nonhuman animals as well. The present article briefly reviews this literature and then examines some of the fundamental questions that must be addressed for any distributional learning mechanism to operate effectively within the linguistic domain. In particular, how does a naive learner determine the number of categories that are present in a corpus of linguistic input and what distributional cues enable the learner to assign individual lexical items to those categories? Contrary to the hypothesis that distributional learning and category (or rule) learning are separate mechanisms, the present article argues that these two seemingly different processes---acquiring specific structure from linguistic input and generalizing beyond that input to novel exemplars---actually represent a single mechanism. Evidence in support of this single-mechanism hypothesis comes from a series of artificial grammar-learning studies that not only demonstrate that adults can learn grammatical categories from distributional information alone, but that the specific patterning of distributional information among attested utterances in the learning corpus enables adults to generalize to novel utterances or to restrict generalization when unattested utterances are consistently absent from the learning corpus. Finally, a computational model of distributional learning that accounts for the presence or absence of generalization is reviewed and the implications of this model for linguistic-category learning are summarized.
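The question the abstract poses — how a naive learner assigns lexical items to categories from distributional cues alone — has a very simple baseline: group word types by the contexts they occur in. The sketch below is a deliberately crude illustration of that idea (the function name and the exact-context-match grouping rule are mine, not the authors'); real models generalize over partially overlapping contexts.

```python
from collections import defaultdict

def distributional_categories(corpus):
    """Toy distributional learner (grouping rule hypothetical): collect,
    for each word type, the set of (previous, next) contexts it occurs in,
    then put words with identical context sets into the same candidate
    category."""
    contexts = defaultdict(set)
    for sentence in corpus:
        padded = ["<s>"] + sentence + ["</s>"]
        for i in range(1, len(padded) - 1):
            contexts[padded[i]].add((padded[i - 1], padded[i + 1]))
    # Invert: words sharing the same context set fall into one group.
    groups = defaultdict(list)
    for word, ctx in contexts.items():
        groups[frozenset(ctx)].append(word)
    return [sorted(g) for g in groups.values()]

# Usage: "cat" and "dog" share the frame the __ sleeps, so they group.
corpus = [["the", "cat", "sleeps"], ["the", "dog", "sleeps"]]
categories = distributional_categories(corpus)
```

Even this exact-match rule shows why the number of induced categories depends on the corpus: more shared frames merge words, while unattested frames keep them apart — the generalization/restriction trade-off the article analyzes.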

  8. Auditory working memory predicts individual differences in absolute pitch learning.

    PubMed

    Van Hedger, Stephen C; Heald, Shannon L M; Koch, Rachelle; Nusbaum, Howard C

    2015-07-01

    Absolute pitch (AP) is typically defined as the ability to label an isolated tone as a musical note in the absence of a reference tone. At first glance the acquisition of AP note categories seems like a perceptual learning task, since individuals must assign a category label to a stimulus based on a single perceptual dimension (pitch) while ignoring other perceptual dimensions (e.g., loudness, octave, instrument). AP, however, is rarely discussed in terms of domain-general perceptual learning mechanisms. This is because AP is typically assumed to depend on a critical period of development, in which early exposure to pitches and musical labels is thought to be necessary for the development of AP precluding the possibility of adult acquisition of AP. Despite this view of AP, several previous studies have found evidence that absolute pitch category learning is, to an extent, trainable in a post-critical period adult population, even if the performance typically achieved by this population is below the performance of a "true" AP possessor. The current studies attempt to understand the individual differences in learning to categorize notes using absolute pitch cues by testing a specific prediction regarding cognitive capacity related to categorization - to what extent does an individual's general auditory working memory capacity (WMC) predict the success of absolute pitch category acquisition. Since WMC has been shown to predict performance on a wide variety of other perceptual and category learning tasks, we predict that individuals with higher WMC should be better at learning absolute pitch note categories than individuals with lower WMC. Across two studies, we demonstrate that auditory WMC predicts the efficacy of learning absolute pitch note categories. These results suggest that a higher general auditory WMC might underlie the formation of absolute pitch categories for post-critical period adults. 
Implications for understanding the mechanisms that underlie the phenomenon of AP are also discussed. Copyright © 2015. Published by Elsevier B.V.

  9. Effect of between-category similarity on basic-level superiority in pigeons

    PubMed Central

    Lazareva, Olga F.; Soto, Fabián A.; Wasserman, Edward A.

    2010-01-01

    Children categorize stimuli at the basic level faster than at the superordinate level. We hypothesized that between-category similarity may affect this basic-level superiority effect. Dissimilar categories may be easy to distinguish at the basic level but be difficult to group at the superordinate level, whereas similar categories may be easy to group at the superordinate level but be difficult to distinguish at the basic level. Consequently, similar basic-level categories may produce a superordinate-before-basic learning trend, whereas dissimilar basic-level categories may result in a basic-before-superordinate learning trend. We tested this hypothesis in pigeons by constructing superordinate-level categories out of basic-level categories with known similarity. In Experiment 1, we experimentally evaluated the between-category similarity of four basic-level photographic categories using multiple fixed interval-extinction training (Astley & Wasserman, 1992). We used the resultant similarity matrices in Experiment 2 to construct two superordinate-level categories from basic-level categories with high between-category similarity (cars and persons; chairs and flowers). We then trained pigeons to concurrently classify those photographs into either the proper basic-level category or the proper superordinate-level category. Under these conditions, the pigeons learned the superordinate-level discrimination faster than the basic-level discrimination, confirming our hypothesis that basic-level superiority is affected by between-category similarity. PMID:20600696

  10. A Neural Basis of Facial Action Recognition in Humans

    PubMed Central

    Srinivasan, Ramprakash; Golomb, Julie D.

    2016-01-01

    By combining different facial muscle actions, called action units, humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science and social psychology have long hypothesized that the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional magnetic resonance imaging and an innovative machine learning analysis approach, we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, multivoxel pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, facilitating the estimation of the perceived action units on participants not used to train the multivoxel decoder. Furthermore, this coding of action units was identified when participants attended to the emotion category of the facial expression, suggesting an interaction between the visual analysis of action units and emotion categorization as predicted by the computational models mentioned above. These results provide the first evidence for a representation of action units in the brain and suggest a mechanism for the analysis of large numbers of facial actions and a loss of this capacity in psychopathologies. SIGNIFICANCE STATEMENT Computational models and studies in cognitive and social psychology propound that visual recognition of facial expressions requires an intermediate step to identify visible facial changes caused by the movement of specific facial muscles. Because facial expressions are indeed created by moving one's facial muscles, it is logical to assume that our visual system solves this inverse problem. 
Here, using an innovative machine learning method and neuroimaging data, we identify for the first time a brain region responsible for the recognition of actions associated with specific facial muscles. Furthermore, this representation is preserved across subjects. Our machine learning analysis does not require mapping the data to a standard brain and may serve as an alternative to hyperalignment. PMID:27098688

  11. Do Object-Category Selective Regions in the Ventral Visual Stream Represent Perceived Distance Information?

    ERIC Educational Resources Information Center

    Amit, Elinor; Mehoudar, Eyal; Trope, Yaacov; Yovel, Galit

    2012-01-01

    It is well established that scenes and objects elicit a highly selective response in specific brain regions in the ventral visual cortex. An inherent difference between these categories that has not been explored yet is their perceived distance from the observer (i.e. scenes are distal whereas objects are proximal). The current study aimed to test…

  12. Caudate nucleus reactivity predicts perceptual learning rate for visual feature conjunctions.

    PubMed

    Reavis, Eric A; Frank, Sebastian M; Tse, Peter U

    2015-04-15

    Useful information in the visual environment is often contained in specific conjunctions of visual features (e.g., color and shape). The ability to quickly and accurately process such conjunctions can be learned. However, the neural mechanisms responsible for such learning remain largely unknown. It has been suggested that some forms of visual learning might involve the dopaminergic neuromodulatory system (Roelfsema et al., 2010; Seitz and Watanabe, 2005), but this hypothesis has not yet been directly tested. Here we test the hypothesis that learning visual feature conjunctions involves the dopaminergic system, using functional neuroimaging, genetic assays, and behavioral testing techniques. We use a correlative approach to evaluate potential associations between individual differences in visual feature conjunction learning rate and individual differences in dopaminergic function as indexed by neuroimaging and genetic markers. We find a significant correlation between activity in the caudate nucleus (a component of the dopaminergic system connected to visual areas of the brain) and visual feature conjunction learning rate. Specifically, individuals who showed a larger difference in activity between positive and negative feedback on an unrelated cognitive task, indicative of a more reactive dopaminergic system, learned visual feature conjunctions more quickly than those who showed a smaller activity difference. This finding supports the hypothesis that the dopaminergic system is involved in visual learning, and suggests that visual feature conjunction learning could be closely related to associative learning. However, no significant, reliable correlations were found between feature conjunction learning and genotype or dopaminergic activity in any other regions of interest. Copyright © 2015 Elsevier Inc. All rights reserved.
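The "correlative approach" described above boils down to computing a correlation between two per-subject measures: a caudate feedback-contrast value and a conjunction-learning rate. A plain Pearson correlation, written out from its definition, is the statistic involved (the pairing of measures is as described in the abstract; the sample values in the usage line are invented).

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient from its definition:
    r = cov(x, y) / (sd(x) * sd(y)). In the study's design, xs would be
    per-subject neural reactivity values and ys per-subject learning rates."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Usage with made-up values: a positive r would indicate that more
# feedback-reactive subjects learned conjunctions faster.
r = pearson_r([0.2, 0.5, 0.9, 1.4], [0.10, 0.18, 0.25, 0.40])
```

Note that with small samples such a correlation is fragile, which is one reason the abstract hedges about the null results in other regions of interest.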

  13. Event-related potentials reveal linguistic suppression effect but not enhancement effect on categorical perception of color.

    PubMed

    Lu, Aitao; Yang, Ling; Yu, Yanping; Zhang, Meichao; Shao, Yulan; Zhang, Honghong

    2014-08-01

    The present study used the event-related potential technique to investigate the nature of the linguistic effect on color perception. Four types of stimuli based on hue differences between a target color and a preceding color were used: zero hue step within-category color (0-WC); one hue step within-category color (1-WC); one hue step between-category color (1-BC); and two hue step between-category color (2-BC). The ERP results showed no significant effect of stimulus type in the 100-200 ms time window. However, in the 200-350 ms time window, ERP responses to the 1-WC target color overlapped with those to the 0-WC target color for right visual field (RVF) but not left visual field (LVF) presentation. For the 1-BC condition, ERP amplitudes were comparable in the two visual fields, both being significantly different from the 0-WC condition. The 2-BC condition showed the same pattern as the 1-BC condition. These results suggest that the categorical perception of color in the RVF is due to linguistic suppression of within-category color discrimination rather than between-category color enhancement, and that the effect is independent of early perceptual processes. © 2014 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  14. Non-linguistic learning in aphasia: Effects of training method and stimulus characteristics

    PubMed Central

    Vallila-Rohter, Sofia; Kiran, Swathi

    2013-01-01

    Purpose The purpose of the current study was to explore non-linguistic learning ability in patients with aphasia, examining the impact of stimulus typicality and feedback on success with learning. Method Eighteen patients with aphasia and eight healthy controls participated in this study. All participants completed four computerized, non-linguistic category-learning tasks. We probed learning ability under two methods of instruction: feedback-based (FB) and paired-associate (PA). We also examined the impact of task complexity on learning ability, comparing two stimulus conditions: typical (Typ) and atypical (Atyp). Performance was compared between groups and across conditions. Results Results demonstrated that healthy controls were able to successfully learn categories under all conditions. For our patients with aphasia, two patterns of performance arose. One subgroup of patients was able to maintain learning across task manipulations and conditions. The other subgroup of patients demonstrated a sensitivity to task complexity, learning successfully only in the typical training conditions. Conclusions Results support the hypothesis that impairments of general learning are present in aphasia. Some patients demonstrated the ability to extract category information under complex training conditions, while others learned only under conditions that were simplified and emphasized salient category features. Overall, the typical training condition facilitated learning for all participants. Findings have implications for therapy, which are discussed. PMID:23695914

  15. The impact of CmapTools utilization towards students' conceptual change on optics topic

    NASA Astrophysics Data System (ADS)

    Rofiuddin, Muhammad Rifqi; Feranie, Selly

    2017-05-01

    Science teachers need to help students identify their prior ideas and modify them based on scientific knowledge. This process is called conceptual change. One essential tool for analyzing students' conceptual change is the concept map. Concept maps are graphical representations of knowledge comprising concepts and the relationships between them. Concept map construction was implemented here by adapting technology to support the learning process, in line with Educational Ministry Regulation No. 68 of 2013. The Institute for Human and Machine Cognition (IHMC) has developed CmapTools, client-server software for easily constructing and visualizing concept maps. This research aims to investigate secondary students' conceptual change after experiencing a five-stage conceptual teaching model utilizing CmapTools in learning optics. A weak-experimental method with a one-group pretest-posttest design was used to collect preliminary and post concept maps as qualitative data. The sample was purposively drawn from 8th-grade students (n = 22) at a private school in Bandung, West Java. Conceptual change, based on comparison of the preliminary and post concept map constructions, was assessed with a rubric for concept map scoring and structure. Results show a significant conceptual change of 50.92%, elaborated across concept map elements: propositions and hierarchical levels in the high category, cross-links in the medium category, and specific examples in the low category. These results are supported by the students' positive responses to CmapTools, indicating improved motivation, interest, and behavior toward physics lessons.
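    Concept-map rubrics of the kind described above are usually variants of the Novak & Gowin structural scheme. As a hedged illustration only (the study's exact weights are not given in the abstract; the weights below are the commonly cited Novak & Gowin defaults, and the function names are ours):

```python
def concept_map_score(propositions, hierarchy_levels, cross_links, examples):
    """Structural concept-map score in the style of Novak & Gowin (1984).

    Common default weights: 1 point per valid proposition, 5 per level
    of hierarchy, 10 per valid cross-link, 1 per specific example.
    The study's actual rubric may differ.
    """
    return propositions * 1 + hierarchy_levels * 5 + cross_links * 10 + examples * 1


def percent_change(pre_score, post_score):
    """Relative change from preliminary to post concept map, in percent."""
    return 100.0 * (post_score - pre_score) / pre_score


# Hypothetical pre/post maps for a single student:
pre = concept_map_score(propositions=8, hierarchy_levels=2, cross_links=0, examples=2)
post = concept_map_score(propositions=12, hierarchy_levels=3, cross_links=1, examples=4)
```

    Comparing `pre` and `post` scores element by element (propositions, hierarchy, cross-links, examples) yields the per-element change categories the abstract reports.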

  16. Visual memory and learning in extremely low-birth-weight/extremely preterm adolescents compared with controls: a geographic study.

    PubMed

    Molloy, Carly S; Wilson-Ching, Michelle; Doyle, Lex W; Anderson, Vicki A; Anderson, Peter J

    2014-04-01

    Contemporary data on visual memory and learning in survivors born extremely preterm (EP; <28 weeks gestation) or with extremely low birth weight (ELBW; <1,000 g) are lacking. Geographically determined cohort study of 298 consecutive EP/ELBW survivors born in 1991 and 1992, and 262 randomly selected normal-birth-weight controls. Visual learning and memory data were available for 221 (74.2%) EP/ELBW subjects and 159 (60.7%) controls. EP/ELBW adolescents exhibited significantly poorer performance across visual memory and learning variables compared with controls. Visual learning and delayed visual memory were particularly problematic and remained so after controlling for visual-motor integration and visual perception and excluding adolescents with neurosensory disability, and/or IQ <70. Male EP/ELBW adolescents or those treated with corticosteroids had poorer outcomes. EP/ELBW adolescents have poorer visual memory and learning outcomes compared with controls, which cannot be entirely explained by poor visual perceptual or visual constructional skills or intellectual impairment.

  17. Temporal properties of material categorization and material rating: visual vs non-visual material features.

    PubMed

    Nagai, Takehiro; Matsushima, Toshiki; Koida, Kowa; Tani, Yusuke; Kitazaki, Michiteru; Nakauchi, Shigeki

    2015-10-01

    Humans can visually recognize material categories of objects, such as glass, stone, and plastic, easily. However, little is known about the kinds of surface quality features that contribute to such material class recognition. In this paper, we examine the relationship between perceptual surface features and material category discrimination performance for pictures of materials, focusing on temporal aspects, including reaction time and effects of stimulus duration. The stimuli were pictures of objects with an identical shape but made of different materials that could be categorized into seven classes (glass, plastic, metal, stone, wood, leather, and fabric). In a pre-experiment, observers rated the pictures on nine surface features, including visual (e.g., glossiness and transparency) and non-visual features (e.g., heaviness and warmness), on a 7-point scale. In the main experiments, observers judged whether two simultaneously presented pictures were classified as the same or different material category. Reaction times and effects of stimulus duration were measured. The results showed that visual feature ratings were correlated with material discrimination performance for short reaction times or short stimulus durations, while non-visual feature ratings were correlated only with performance for long reaction times or long stimulus durations. These results suggest that the mechanisms underlying visual and non-visual feature processing may differ in terms of processing time, although the cause is unclear. Visual surface features may mainly contribute to material recognition in daily life, while non-visual features may contribute only weakly, if at all. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Temporal evolution of brain reorganization under cross-modal training: insights into the functional architecture of encoding and retrieval networks

    NASA Astrophysics Data System (ADS)

    Likova, Lora T.

    2015-03-01

    This study is based on the recent discovery of massive and well-structured cross-modal memory activation generated in the primary visual cortex (V1) of totally blind people as a result of novel training in drawing without any vision (Likova, 2012). This unexpected functional reorganization of primary visual cortex was obtained after undergoing only a week of training by the novel Cognitive-Kinesthetic Method, and was consistent across pilot groups of different categories of visual deprivation: congenitally blind, late-onset blind and blindfolded (Likova, 2014). These findings led us to implicate V1 as the implementation of the theoretical visuo-spatial 'sketchpad' for working memory in the human brain. Since neither the source nor the subsequent 'recipient' of this non-visual memory information in V1 is known, these results raise a number of important questions about the underlying functional organization of the respective encoding and retrieval networks in the brain. To address these questions, an individual totally blind from birth was given a week of Cognitive-Kinesthetic training, accompanied by functional magnetic resonance imaging (fMRI) both before and just after training, and again after a two-month consolidation period. The results revealed a remarkable temporal sequence of training-based response reorganization in both the hippocampal complex and the temporal-lobe object processing hierarchy over the prolonged consolidation period. In particular, a pattern of profound learning-based transformations in the hippocampus was strongly reflected in V1, with the retrieval function showing massive growth as result of the Cognitive-Kinesthetic memory training and consolidation, while the initially strong hippocampal response during tactile exploration and encoding became non-existent. 
Furthermore, after training, an alternating patch structure in the form of a cascade of discrete ventral regions underwent radical transformations to reach complete functional specialization in terms of either encoding or retrieval as a function of the stage of learning. Moreover, several distinct patterns of learning-evolution emerged within the patches as a function of their anatomical location, implying a complex reorganization of the object-processing sub-networks over the learning period. These first findings of complex patterns of training-based encoding/retrieval reorganization thus have broad implications for a newly emerging view of perception/memory interactions and their reorganization through the learning process. Note that the temporal evolution of these forms of extended functional reorganization could not be uncovered with the conventional assessment paradigms used in traditional approaches to functional mapping, which may therefore have to be revisited. Moreover, as the present results were obtained in learning under life-long blindness, they imply modality-independent operations, transcending the usual tight association with visual processing. The present approach of memory-drawing training in blindness has the dual advantage of being both non-visual and a causal intervention, which makes it a promising 'scalpel' for disentangling interactions among diverse cognitive functions.

  19. The role of visual representation in physics learning: dynamic versus static visualization

    NASA Astrophysics Data System (ADS)

    Suyatna, Agus; Anggraini, Dian; Agustina, Dina; Widyastuti, Dini

    2017-11-01

    This study aims to examine the role of visual representation in physics learning and to compare the learning outcomes of using dynamic versus static visualization media. The study was conducted as a quasi-experiment with a pretest-posttest control group design. The sample comprised students in six classes at a state senior high school in Lampung Province. The experimental classes learned with dynamic visualization media and the control classes with static visualization media. Both groups were given pre-tests and post-tests with the same instruments. Data were analyzed with N-gain analysis, a normality test, a homogeneity test, and a mean difference test. The results showed a significant increase in mean learning outcomes (N-gain; p < 0.05) in both the experimental and control classes. The average learning outcomes of students using dynamic visualization media were significantly higher than those of the classes using static visualization media. This follows from the characteristics of visual representation: each type of visualization supports understanding differently. Dynamic visual media are more suitable for explaining material related to movement or for describing a process, whereas static visual media are appropriate for non-moving physical phenomena and for observation over longer periods.
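    The N-gain analysis mentioned above usually refers to Hake's normalized gain, ⟨g⟩ = (post − pre) / (max − pre): the fraction of the possible improvement that was actually achieved. A minimal sketch (the function names and the 0.3/0.7 interpretation thresholds are the widely used conventions, not values taken from this paper):

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: achieved gain over maximum possible gain."""
    if pre >= max_score:
        raise ValueError("pre-test score already at ceiling")
    return (post - pre) / (max_score - pre)


def gain_category(g):
    """Conventional interpretation bands for <g>."""
    if g >= 0.7:
        return "high"
    if g >= 0.3:
        return "medium"
    return "low"


# A class averaging 40 on the pre-test and 70 on the post-test
# gained 30 of the 60 available points:
g = normalized_gain(pre=40.0, post=70.0)
```

    Comparing mean ⟨g⟩ between the dynamic- and static-media classes (after the normality and homogeneity checks) is what the mean difference test in the abstract operates on.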

  20. Investigating category- and shape-selective neural processing in ventral and dorsal visual stream under interocular suppression.

    PubMed

    Ludwig, Karin; Kathmann, Norbert; Sterzer, Philipp; Hesselmann, Guido

    2015-01-01

    Recent behavioral and neuroimaging studies using continuous flash suppression (CFS) have suggested that action-related processing in the dorsal visual stream might be independent of perceptual awareness, in line with the "vision-for-perception" versus "vision-for-action" distinction of the influential dual-stream theory. It remains controversial if evidence suggesting exclusive dorsal stream processing of tool stimuli under CFS can be explained by their elongated shape alone or by action-relevant category representations in dorsal visual cortex. To approach this question, we investigated category- and shape-selective functional magnetic resonance imaging-blood-oxygen level-dependent responses in both visual streams using images of faces and tools. Multivariate pattern analysis showed enhanced decoding of elongated relative to non-elongated tools, both in the ventral and dorsal visual stream. The second aim of our study was to investigate whether the depth of interocular suppression might differentially affect processing in dorsal and ventral areas. However, parametric modulation of suppression depth by varying the CFS mask contrast did not yield any evidence for differential modulation of category-selective activity. Together, our data provide evidence for shape-selective processing under CFS in both dorsal and ventral stream areas and, therefore, do not support the notion that dorsal "vision-for-action" processing is exclusively preserved under interocular suppression. © 2014 Wiley Periodicals, Inc.

  1. Aging reduces neural specialization in ventral visual cortex

    PubMed Central

    Park, Denise C.; Polk, Thad A.; Park, Rob; Minear, Meredith; Savage, Anna; Smith, Mason R.

    2004-01-01

    The present study investigated whether neural structures become less functionally differentiated and specialized with age. We studied ventral visual cortex, an area of the brain that responds selectively to visual categories (faces, places, and words) in young adults, and that shows little atrophy with age. Functional MRI was used to estimate neural activity in this cortical area, while young and old adults viewed faces, houses, pseudowords, and chairs. The results demonstrated significantly less neural specialization for these stimulus categories in older adults across a range of analyses. PMID:15322270

  2. Associative visual learning by tethered bees in a controlled visual environment.

    PubMed

    Buatois, Alexis; Pichot, Cécile; Schultheiss, Patrick; Sandoz, Jean-Christophe; Lazzari, Claudio R; Chittka, Lars; Avarguès-Weber, Aurore; Giurfa, Martin

    2017-10-10

    Free-flying honeybees exhibit remarkable cognitive capacities but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed at establishing a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video projected in front of them. Freely flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of them (CS+) was paired with sucrose and the other with quinine solution (CS-). Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful as bees exhibited robust discrimination and preferred the CS+ to the CS- after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS- also affects learning as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.

  3. The helpfulness of category labels in semi-supervised learning depends on category structure.

    PubMed

    Vong, Wai Keen; Navarro, Daniel J; Perfors, Amy

    2016-02-01

    The study of semi-supervised category learning has generally focused on how unlabeled information, presented in addition to labeled information, might benefit category learning. The literature is also somewhat contradictory, sometimes appearing to show a benefit of unlabeled information and sometimes not. In this paper, we frame the problem differently, focusing on when labels might be helpful to a learner who has access to plentiful unlabeled information. Using an unconstrained free-sorting categorization experiment, we show that labels are useful to participants only when the category structure is ambiguous, and that people's responses are driven by the specific set of labels they see. We present an extension of Anderson's Rational Model of Categorization that captures this effect.

  4. Visual hallucinatory syndromes and the anatomy of the visual brain.

    PubMed

    Santhouse, A M; Howard, R J; ffytche, D H

    2000-10-01

    We have set out to identify phenomenological correlates of cerebral functional architecture within Charles Bonnet syndrome (CBS) hallucinations by looking for associations between specific hallucination categories. Thirty-four CBS patients were examined with a structured interview/questionnaire to establish the presence of 28 different pathological visual experiences. Associations between categories of pathological experience were investigated by an exploratory factor analysis. Twelve of the pathological experiences partitioned into three segregated syndromic clusters. The first cluster consisted of hallucinations of extended landscape scenes and small figures in costumes with hats; the second, hallucinations of grotesque, disembodied and distorted faces with prominent eyes and teeth; and the third, visual perseveration and delayed palinopsia. The three visual psycho-syndromes mirror the segregation of hierarchical visual pathways into streams and suggest a novel theoretical framework for future research into the pathophysiology of neuropsychiatric syndromes.

  5. Reliability analysis of visual ranking of coronary artery calcification on low-dose CT of the thorax for lung cancer screening: comparison with ECG-gated calcium scoring CT.

    PubMed

    Kim, Yoon Kyung; Sung, Yon Mi; Cho, So Hyun; Park, Young Nam; Choi, Hye-Young

    2014-12-01

    Coronary artery calcification (CAC) is frequently detected on low-dose CT (LDCT) of the thorax. Concurrent assessment of CAC and lung cancer screening using LDCT is beneficial in terms of cost and radiation dose reduction. The aim of our study was to evaluate the reliability of visual ranking of positive CAC on LDCT compared to Agatston score (AS) on electrocardiogram (ECG)-gated calcium scoring CT. We studied 576 patients who were consecutively registered for health screening and undergoing both LDCT and ECG-gated calcium scoring CT. We excluded subjects with an AS of zero. The final study cohort included 117 patients with CAC (97 men; mean age, 53.4 ± 8.5). AS was used as the gold standard (mean score 166.0; range 0.4-3,719.3). Two board-certified radiologists and two radiology residents participated in an observer performance study. Visual ranking of CAC was performed according to four categories (1-10, 11-100, 101-400, and 401 or higher) for coronary artery disease risk stratification. Weighted kappa statistics were used to measure the degree of reliability on visual ranking of CAC on LDCT. The degree of reliability on visual ranking of CAC on LDCT compared to ECG-gated calcium scoring CT was excellent for board-certified radiologists and good for radiology residents. A high degree of association was observed with 71.6% of visual rankings in the same category as the Agatston category and 98.9% varying by no more than one category. Visual ranking of positive CAC on LDCT is reliable for predicting AS rank categorization.
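    The weighted kappa statistic used in this study measures chance-corrected agreement between two ordinal ratings, here the four Agatston risk categories, penalizing disagreements in proportion to how far apart the categories are. A minimal sketch of linearly weighted Cohen's kappa, assuming categories are coded 0-3 (the variable names and toy data are ours, not the study's):

```python
import numpy as np

def weighted_kappa(ratings_a, ratings_b, n_categories, quadratic=False):
    """Weighted Cohen's kappa for two raters on an ordinal scale (codes 0..n-1)."""
    # observed joint proportions
    obs = np.zeros((n_categories, n_categories))
    for a, b in zip(ratings_a, ratings_b):
        obs[a, b] += 1
    obs /= obs.sum()
    # disagreement weights grow with the distance between categories
    idx = np.arange(n_categories)
    dist = np.abs(idx[:, None] - idx[None, :]) / (n_categories - 1)
    w = dist ** 2 if quadratic else dist
    # expected joint proportions under independence of the marginals
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    return 1.0 - (w * obs).sum() / (w * exp).sum()


# Toy data: visual rank on LDCT vs Agatston category on gated CT,
# four categories (1-10, 11-100, 101-400, >400) coded 0-3:
visual = [0, 1, 1, 2, 3, 0, 2, 3]
agatston = [0, 1, 2, 2, 3, 0, 2, 3]
kappa = weighted_kappa(visual, agatston, n_categories=4)
```

    With a single one-category disagreement out of eight pairs, `kappa` comes out high, matching the abstract's observation that almost all visual rankings fell within one category of the Agatston category.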

  6. Similarity-Dissimilarity Competition in Disjunctive Classification Tasks

    PubMed Central

    Mathy, Fabien; Haladjian, Harry H.; Laurent, Eric; Goldstone, Robert L.

    2013-01-01

    Typical disjunctive artificial classification tasks require participants to sort stimuli according to rules such as “x likes cars only when black and coupe OR white and SUV.” For categories like this, increasing the salience of the diagnostic dimensions has two simultaneous effects: increasing the distance between members of the same category and increasing the distance between members of opposite categories. Potentially, these two effects respectively hinder and facilitate classification learning, leading to competing predictions. Increasing salience may lead members of the same category to be considered less similar, while members of separate categories may be considered more dissimilar. This implies a similarity-dissimilarity competition between two basic classification processes. When focusing on sub-category similarity, one would expect classification to be more difficult when members of the same category become less similar (disregarding the increase in between-category dissimilarity); however, the increase in between-category dissimilarity predicts less difficult classification. Our categorization study suggests that participants rely more on dissimilarities between opposite categories than on similarities within sub-categories. We connect our results to rule- and exemplar-based classification models. The pattern of influences of within- and between-category similarities is challenging for simple single-process categorization systems based on rules or exemplars. Instead, our results suggest either that these processes should be integrated in a hybrid model, or that category learning operates by forming clusters within each category. PMID:23403979

  7. Four types of ensemble coding in data visualizations.

    PubMed

    Szafir, Danielle Albers; Haroz, Steve; Gleicher, Michael; Franconeri, Steven

    2016-01-01

    Ensemble coding supports rapid extraction of visual statistics about distributed visual information. Researchers typically study this ability with the goal of drawing conclusions about how such coding extracts information from natural scenes. Here we argue that a second domain can serve as another strong inspiration for understanding ensemble coding: graphs, maps, and other visual presentations of data. Data visualizations allow observers to leverage their ability to perform visual ensemble statistics on distributions of spatial or featural visual information to estimate actual statistics on data. We survey the types of visual statistical tasks that occur within data visualizations across everyday examples, such as scatterplots, and more specialized images, such as weather maps or depictions of patterns in text. We divide these tasks into four categories: identification of sets of values, summarization across those values, segmentation of collections, and estimation of structure. We point to unanswered questions for each category and give examples of such cross-pollination in the current literature. Increased collaboration between the data visualization and perceptual psychology research communities can inspire new solutions to challenges in visualization while simultaneously exposing unsolved problems in perception research.

  8. Prior knowledge of category size impacts visual search.

    PubMed

    Wu, Rachel; McGee, Brianna; Echiverri, Chelsea; Zinszer, Benjamin D

    2018-03-30

    Prior research has shown that category search can be similar to one-item search (as measured by the N2pc ERP marker of attentional selection) for highly familiar, smaller categories (e.g., letters and numbers) because the finite set of items in a category can be grouped into one unit to guide search. Other studies have shown that larger, more broadly defined categories (e.g., healthy food) also can elicit N2pc components during category search, but the amplitude of these components is typically attenuated. Two experiments investigated whether the perceived size of a familiar category impacts category and exemplar search. We presented participants with 16 familiar company logos: 8 from a smaller category (social media companies) and 8 from a larger category (entertainment/recreation manufacturing companies). The ERP results from Experiment 1 revealed that, in a two-item search array, search was more efficient for the smaller category of logos compared to the larger category. In a four-item search array (Experiment 2), where two of the four items were placeholders, search was largely similar between the category types, but there was more attentional capture by nontarget members from the same category as the target for smaller rather than larger categories. These results support a growing literature on how prior knowledge of categories affects attentional selection and capture during visual search. We discuss the implications of these findings in relation to assessing cognitive abilities across the lifespan, given that prior knowledge typically increases with age. © 2018 Society for Psychophysiological Research.

  9. Developing physics learning media using 3D cartoon

    NASA Astrophysics Data System (ADS)

    Wati, M.; Hartini, S.; Hikmah, N.; Mahtari, S.

    2018-03-01

    This study focuses on developing physics learning media using 3D cartoons on the topic of static fluids. The purpose of this study is to describe: (1) the validity of the learning media, (2) the practicality of the learning media, and (3) the effectiveness of the learning media. This is a research-and-development study using the ADDIE model. The media were implemented with a class XI science group at SMAN 1 Pulau Laut Timur. Data were obtained from learning-media validation sheets, a questionnaire, and a test of learning outcomes. The results showed that the media were rated: (1) valid, (2) practical, and (3) effective. It is concluded that learning with 3D cartoons on the static fluid topic is suitable for classroom use.

  10. Re-Evaluating Dissociations between Implicit and Explicit Category Learning: An Event-Related fMRI Study

    ERIC Educational Resources Information Center

    Gureckis, Todd M.; James, Thomas W.; Nosofsky, Robert M.

    2011-01-01

    Recent fMRI studies have found that distinct neural systems may mediate perceptual category learning under implicit and explicit learning conditions. In these previous studies, however, different stimulus-encoding processes may have been associated with implicit versus explicit learning. The present design was aimed at decoupling the influence of…

  11. More Is Generally Better: Higher Working Memory Capacity Does Not Impair Perceptual Category Learning

    ERIC Educational Resources Information Center

    Kalish, Michael L.; Newell, Ben R.; Dunn, John C.

    2017-01-01

    It is sometimes supposed that category learning involves competing explicit and procedural systems, with only the former reliant on working memory capacity (WMC). In 2 experiments participants were trained for 3 blocks on both filtering (often said to be learned explicitly) and condensation (often said to be learned procedurally) category…

  12. The Sequence of Study Changes What Information Is Attended to, Encoded, and Remembered during Category Learning

    ERIC Educational Resources Information Center

    Carvalho, Paulo F.; Goldstone, Robert L.

    2017-01-01

    The sequence of study influences how we learn. Previous research has identified different sequences as potentially beneficial for learning in different contexts and with different materials. Here we investigate the mechanisms involved in inductive category learning that give rise to these sequencing effects. Across 3 experiments we show evidence…

  13. Using reusable learning objects (RLOs) in wound care education: Undergraduate student nurse's evaluation of their learning gain.

    PubMed

    Redmond, Catherine; Davies, Carmel; Cornally, Deirdre; Adam, Ewa; Daly, Orla; Fegan, Marianne; O'Toole, Margaret

    2018-01-01

    Both nationally and internationally, concerns have been expressed over the adequacy of preparation of undergraduate nurses for the clinical skill of wound care. This project describes the educational evaluation of a series of Reusable Learning Objects (RLOs) as a blended learning approach to facilitate undergraduate nursing students' learning of wound care for competence development. Constructivism Learning Theory and the Cognitive Theory of Multimedia Learning informed the design of the RLOs, promoting active learner approaches. Clinically based case studies and visual data from two large university teaching hospitals provided the authentic learning materials required. Interactive exercises and formative feedback were incorporated into the educational resource. Student-perceived learning gains in terms of knowledge, ability and attitudes were measured using a quantitative pre- and posttest Wound Care Competency Outcomes Questionnaire. The RLO CETL Questionnaire was used to identify perceived learning enablers. Statistical and deductive thematic analyses inform the findings. Students (n=192) reported that their ability to meet the competency outcomes for wound care had increased significantly after engaging with the RLOs. Students rated the RLOs highly across all categories of perceived usefulness, impact, access and integration. These findings provide evidence that the use of RLOs for both knowledge-based and performance-based learning is effective. RLOs, when designed using clinically real case scenarios, reflect the true complexities of wound care and offer innovative interventions in nursing curricula. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Comparison Promotes Learning and Transfer of Relational Categories

    ERIC Educational Resources Information Center

    Kurtz, Kenneth J.; Boukrina, Olga; Gentner, Dedre

    2013-01-01

    We investigated the effect of co-presenting training items during supervised classification learning of novel relational categories. Strong evidence exists that comparison induces a structural alignment process that renders common relational structure more salient. We hypothesized that comparisons between exemplars would facilitate learning and…

  15. Category Representation for Classification and Feature Inference

    ERIC Educational Resources Information Center

    Johansen, Mark K.; Kruschke, John K.

    2005-01-01

    This research's purpose was to contrast the representations resulting from learning of the same categories by either classifying instances or inferring instance features. Prior inference learning research, particularly T. Yamauchi and A. B. Markman (1998), has suggested that feature inference learning fosters prototype representation, whereas…

  16. Repetitive Transcranial Direct Current Stimulation Induced Excitability Changes of Primary Visual Cortex and Visual Learning Effects-A Pilot Study.

    PubMed

    Sczesny-Kaiser, Matthias; Beckhaus, Katharina; Dinse, Hubert R; Schwenkreis, Peter; Tegenthoff, Martin; Höffken, Oliver

    2016-01-01

    Studies on noninvasive motor cortex stimulation and motor learning demonstrated cortical excitability as a marker for a learning effect. Transcranial direct current stimulation (tDCS) is a non-invasive tool to modulate cortical excitability. It is as yet unknown how tDCS-induced excitability changes and perceptual learning in visual cortex correlate. Our study aimed to examine the influence of tDCS on visual perceptual learning in healthy humans. Additionally, we measured excitability in primary visual cortex (V1). We hypothesized that anodal tDCS would improve and cathodal tDCS would have minor or no effects on visual learning. Anodal, cathodal or sham tDCS were applied over V1 in a randomized, double-blinded design over four consecutive days (n = 30). During 20 min of tDCS, subjects had to learn a visual orientation-discrimination task (ODT). Excitability parameters were measured by analyzing paired-stimulation behavior of visual-evoked potentials (ps-VEP) and by measuring phosphene thresholds (PTs) before and after the stimulation period of 4 days. Compared with sham-tDCS, anodal tDCS led to an improvement of visual discrimination learning (p < 0.003). We found reduced PTs and increased ps-VEP ratios indicating increased cortical excitability after anodal tDCS (PT: p = 0.002, ps-VEP: p = 0.003). Correlation analysis within the anodal tDCS group revealed no significant correlation between PTs and learning effect. For cathodal tDCS, no significant effects on learning or on excitability could be seen. Our results showed that anodal tDCS over V1 resulted in improved visual perceptual learning and increased cortical excitability. tDCS is a promising tool to alter V1 excitability and, hence, perceptual visual learning.

  17. Hue distinctiveness overrides category in determining performance in multiple object tracking.

    PubMed

    Sun, Mengdan; Zhang, Xuemin; Fan, Lingxia; Hu, Luming

    2018-02-01

    The visual distinctiveness between targets and distractors can significantly facilitate performance in multiple object tracking (MOT), in which color is a feature that has been commonly used. However, the processing of color can be more than "visual." Color is continuous in chromaticity, while it is commonly grouped into discrete categories (e.g., red, green). Evidence from color perception suggested that color categories may have a unique role in visual tasks independent of its chromatic appearance. Previous MOT studies have not examined the effect of chromatic and categorical distinctiveness on tracking separately. The current study aimed to reveal how chromatic (hue) and categorical distinctiveness of color between the targets and distractors affects tracking performance. With four experiments, we showed that tracking performance was largely facilitated by the increasing hue distance between the target set and the distractor set, suggesting that perceptual grouping was formed based on hue distinctiveness to aid tracking. However, we found no color categorical effect, because tracking performance was not significantly different when the targets and distractors were from the same or different categories. It was concluded that the chromatic distinctiveness of color overrides category in determining tracking performance, suggesting a dominant role of perceptual feature in MOT.

  18. Visual and semantic processing of living things and artifacts: an FMRI study.

    PubMed

    Zannino, Gian Daniele; Buccione, Ivana; Perri, Roberta; Macaluso, Emiliano; Lo Gerfo, Emanuele; Caltagirone, Carlo; Carlesimo, Giovanni A

    2010-03-01

    We carried out an fMRI study with a twofold purpose: to investigate the relationship between networks dedicated to semantic and visual processing and to address the issue of whether semantic memory is subserved by a unique network or by different subsystems, according to semantic category or feature type. To achieve our goals, we administered a word-picture matching task, with within-category foils, to 15 healthy subjects during scanning. Semantic distance between the target and the foil and semantic domain of the target-foil pairs were varied orthogonally. Our results suggest that an amodal, undifferentiated network for the semantic processing of living things and artifacts is located in the anterolateral aspects of the temporal lobes; in fact, activity in this substrate was driven by semantic distance, not by semantic category. By contrast, activity in ventral occipito-temporal cortex was driven by category, not by semantic distance. We interpret the latter finding as the effect exerted by systematic differences between living things and artifacts at the level of their structural representations and possibly of their lower-level visual features. Finally, we attempt to reconcile contrasting data in the neuropsychological and functional imaging literature on semantic substrate and category specificity.

  19. Visual discrimination transfer and modulation by biogenic amines in honeybees.

    PubMed

    Vieira, Amanda Rodrigues; Salles, Nayara; Borges, Marco; Mota, Theo

    2018-05-10

    For more than a century, visual learning and memory have been studied in the honeybee Apis mellifera using operant appetitive conditioning. Although honeybees show impressive visual learning capacities in this well-established protocol, operant training of free-flying animals cannot be combined with invasive protocols for studying the neurobiological basis of visual learning. In view of this, different attempts have been made to develop new classical conditioning protocols for studying visual learning in harnessed honeybees, though learning performance remains considerably poorer than that for free-flying animals. Here, we investigated the ability of honeybees to use visual information acquired during classical conditioning in a new operant context. We performed differential visual conditioning of the proboscis extension reflex (PER) followed by visual orientation tests in a Y-maze. Classical conditioning and Y-maze retention tests were performed using the same pair of perceptually isoluminant chromatic stimuli, to avoid the influence of phototaxis during free-flying orientation. Visual discrimination transfer was clearly observed, with pre-trained honeybees significantly orienting their flights towards the former positive conditioned stimulus (CS+), thus showing that visual memories acquired by honeybees are resistant to context changes between conditioning and the retention test. We combined this visual discrimination approach with selective pharmacological injections to evaluate the effect of dopamine and octopamine in appetitive visual learning. Both octopaminergic and dopaminergic antagonists impaired visual discrimination performance, suggesting that both these biogenic amines modulate appetitive visual learning in honeybees. Our study brings new insight into cognitive and neurobiological mechanisms underlying visual learning in honeybees. © 2018. Published by The Company of Biologists Ltd.

  20. Dorsal hippocampus is necessary for visual categorization in rats.

    PubMed

    Kim, Jangjin; Castro, Leyre; Wasserman, Edward A; Freeman, John H

    2018-02-23

    The hippocampus may play a role in categorization because of the need to differentiate stimulus categories (pattern separation) and to recognize category membership of stimuli from partial information (pattern completion). We hypothesized that the hippocampus would be more crucial for categorization of low-density (few relevant features) stimuli, due to the higher demand on pattern separation and pattern completion, than for categorization of high-density (many relevant features) stimuli. Using a touchscreen apparatus, rats were trained to categorize multiple abstract stimuli into two different categories. Each stimulus was a pentagonal configuration of five visual features; some of the visual features were relevant for defining the category whereas others were irrelevant. Two groups of rats were trained with either a high (dense, n = 8) or low (sparse, n = 8) number of category-relevant features. Upon reaching criterion discrimination (≥75% correct on 2 consecutive days), bilateral cannulas were implanted in the dorsal hippocampus. The rats were then given either vehicle or muscimol infusions into the hippocampus just prior to various testing sessions. They were tested with: the previously trained stimuli (trained), novel stimuli involving new irrelevant features (novel), stimuli involving relocated features (relocation), and a single relevant feature (singleton). In training, the dense group reached criterion faster than the sparse group, indicating that the sparse task was more difficult than the dense task. In testing, accuracy of both groups was equally high for trained and novel stimuli. However, both groups showed impaired accuracy in the relocation and singleton conditions, with a greater deficit in the sparse group. The testing data indicate that rats encode both the relevant features and the spatial locations of the features. Hippocampal inactivation impaired visual categorization regardless of the density of the category-relevant features for the trained, novel, relocation, and singleton stimuli. Hippocampus-mediated pattern completion and pattern separation mechanisms may be necessary for visual categorization involving overlapping irrelevant features. © 2018 Wiley Periodicals, Inc.

  1. SUSTAIN: A Network Model of Category Learning

    ERIC Educational Resources Information Center

    Love, Bradley C.; Medin, Douglas L.; Gureckis, Todd M.

    2004-01-01

    SUSTAIN (Supervised and Unsupervised STratified Adaptive Incremental Network) is a model of how humans learn categories from examples. SUSTAIN initially assumes a simple category structure. If simple solutions prove inadequate and SUSTAIN is confronted with a surprising event (e.g., it is told that a bat is a mammal instead of a bird), SUSTAIN…

  2. Identifying Strategy Use in Category Learning Tasks: A Case for More Diagnostic Data and Models

    ERIC Educational Resources Information Center

    Donkin, Chris; Newell, Ben R.; Kalish, Mike; Dunn, John C.; Nosofsky, Robert M.

    2015-01-01

    The strength of conclusions about the adoption of different categorization strategies--and their implications for theories about the cognitive and neural bases of category learning--depends heavily on the techniques for identifying strategy use. We examine performance in an often-used "information-integration" category structure and…

  3. Deep Learning and Developmental Learning: Emergence of Fine-to-Coarse Conceptual Categories at Layers of Deep Belief Network.

    PubMed

    Sadeghi, Zahra

    2016-09-01

    In this paper, I investigate conceptual categories derived from developmental processing in a deep neural network. The similarity matrices of the deep representations at each layer of the network are computed and compared with those of the raw representation. While the clusters generated by the raw representation stand at the basic level of abstraction, the conceptual categories obtained from the deep representations show a bottom-up transition procedure. The results demonstrate a developmental course of learning, from specific to general levels of abstraction, through the learned layers of representation in a deep belief network. © The Author(s) 2016.
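
    The layer-wise analysis summarized above can be illustrated with a minimal sketch: compute a pairwise similarity matrix over stimulus representations at each layer, then compare how the matrices group the stimuli. The data, array shapes, and choice of cosine similarity below are illustrative assumptions, not the paper's actual network or stimuli.

```python
import numpy as np

def similarity_matrix(activations):
    # Pairwise cosine similarity between stimulus representations.
    # activations: (n_stimuli, n_units) array for a single layer.
    norms = np.linalg.norm(activations, axis=1, keepdims=True)
    unit = activations / np.clip(norms, 1e-12, None)
    return unit @ unit.T

# Toy stand-in: a high-dimensional "raw" representation and a smaller,
# more abstract "deep" layer representation of the same 4 stimuli.
rng = np.random.default_rng(0)
raw = rng.normal(size=(4, 32))
deep = rng.normal(size=(4, 8))

sim_raw = similarity_matrix(raw)    # 4x4 similarity matrix at the input
sim_deep = similarity_matrix(deep)  # 4x4 similarity matrix at a deep layer
```

    Clustering each matrix (e.g., hierarchically) and inspecting the granularity of the resulting groups is one way to quantify the specific-to-general transition across layers.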

  4. Continuous executive function disruption interferes with application of an information integration categorization strategy.

    PubMed

    Miles, Sarah J; Matsuki, Kazunaga; Minda, John Paul

    2014-07-01

    Category learning is often characterized as being supported by two separate learning systems. A verbal system learns rule-defined (RD) categories that can be described using a verbal rule and relies on executive functions (EFs) to learn via hypothesis testing. A nonverbal system learns non-rule-defined (NRD) categories that cannot be described by a verbal rule and uses automatic, procedural learning. The verbal system is dominant in that adults tend to use it during initial learning but may switch to the nonverbal system when the verbal system is unsuccessful. The nonverbal system has traditionally been thought to operate independently of EFs, but recent studies suggest that EFs may play a role in the nonverbal system-specifically, to facilitate the transition away from the verbal system. Accordingly, continuously interfering with EFs during the categorization process, so that EFs are never fully available to facilitate the transition, may be more detrimental to the nonverbal system than is temporary EF interference. Participants learned an NRD or an RD category while EFs were untaxed, taxed temporarily, or taxed continuously. When EFs were continuously taxed during NRD categorization, participants were less likely to use a nonverbal categorization strategy than when EFs were temporarily taxed, suggesting that when EFs were unavailable, the transition to the nonverbal system was hindered. For the verbal system, temporary and continuous interference had similar effects on categorization performance and on strategy use, illustrating that EFs play an important but different role in each of the category-learning systems.

  5. Visual and Verbal Learning Deficits in Veterans with Alcohol and Substance Use Disorders

    PubMed Central

    Bell, Morris D.; Vissicchio, Nicholas A.; Weinstein, Andrea J.

    2015-01-01

    Background: This study examined visual and verbal learning in the early phase of recovery for 48 Veterans with alcohol use (AUD) and substance use disorders (SUD, primarily cocaine and opiate abusers). Previous studies have demonstrated visual and verbal learning deficits in AUD; however, little is known about the differences between AUD and SUD in these domains. Since the DSM-5 specifically identifies problems with learning in AUD and not in SUD, and problems with visual and verbal learning have been more prevalent in the literature for AUD than SUD, we predicted that people with AUD would be more impaired on measures of visual and verbal learning than people with SUD. Methods: Participants were enrolled in a comprehensive rehabilitation program and were assessed within the first 5 weeks of abstinence. Verbal learning was measured using the Hopkins Verbal Learning Test (HVLT) and visual learning was assessed using the Brief Visuospatial Memory Test (BVMT). Results: A significantly greater decline in verbal learning on the HVLT across the three learning trials was found for AUD participants but not for SUD participants (F=4.653, df=48, p=.036). Visual learning was less impaired than verbal learning across learning trials for both diagnostic groups (F=0.197, df=48, p=.674); there was no significant difference between groups on visual learning (F=0.401, df=14, p=.538). Discussion: Older Veterans in the early phase of recovery from AUD may have difficulty learning new verbal information. Deficits in verbal learning may reduce the effectiveness of verbally-based interventions such as psycho-education. PMID:26684868

  6. Visual and verbal learning deficits in Veterans with alcohol and substance use disorders.

    PubMed

    Bell, Morris D; Vissicchio, Nicholas A; Weinstein, Andrea J

    2016-02-01

    This study examined visual and verbal learning in the early phase of recovery for 48 Veterans with alcohol use (AUD) and substance use disorders (SUD, primarily cocaine and opiate abusers). Previous studies have demonstrated visual and verbal learning deficits in AUD; however, little is known about the differences between AUD and SUD in these domains. Since the DSM-5 specifically identifies problems with learning in AUD and not in SUD, and problems with visual and verbal learning have been more prevalent in the literature for AUD than SUD, we predicted that people with AUD would be more impaired on measures of visual and verbal learning than people with SUD. Participants were enrolled in a comprehensive rehabilitation program and were assessed within the first 5 weeks of abstinence. Verbal learning was measured using the Hopkins Verbal Learning Test (HVLT) and visual learning was assessed using the Brief Visuospatial Memory Test (BVMT). Results indicated significantly greater decline in verbal learning on the HVLT across the three learning trials for AUD participants but not for SUD participants (F=4.653, df=48, p=0.036). Visual learning was less impaired than verbal learning across learning trials for both diagnostic groups (F=0.197, df=48, p=0.674); there was no significant difference between groups on visual learning (F=0.401, df=14, p=0.538). Older Veterans in the early phase of recovery from AUD may have difficulty learning new verbal information. Deficits in verbal learning may reduce the effectiveness of verbally-based interventions such as psycho-education. Published by Elsevier Ireland Ltd.

  7. Feature diagnosticity and task context shape activity in human scene-selective cortex.

    PubMed

    Lowe, Matthew X; Gallivan, Jason P; Ferber, Susanne; Cant, Jonathan S

    2016-01-15

    Scenes are constructed from multiple visual features, yet previous research investigating scene processing has often focused on the contributions of single features in isolation. In the real world, features rarely exist independently of one another and likely converge to inform scene identity in unique ways. Here, we utilize fMRI and pattern classification techniques to examine the interactions between task context (i.e., attend to diagnostic global scene features; texture or layout) and high-level scene attributes (content and spatial boundary) to test the novel hypothesis that scene-selective cortex represents multiple visual features, the importance of which varies according to their diagnostic relevance across scene categories and task demands. Our results show for the first time that scene representations are driven by interactions between multiple visual features and high-level scene attributes. Specifically, univariate analysis of scene-selective cortex revealed that task context and feature diagnosticity shape activity differentially across scene categories. Examination using multivariate decoding methods revealed results consistent with univariate findings, but also evidence for an interaction between high-level scene attributes and diagnostic visual features within scene categories. Critically, these findings suggest visual feature representations are not distributed uniformly across scene categories but are shaped by task context and feature diagnosticity. Thus, we propose that scene-selective cortex constructs a flexible representation of the environment by integrating multiple diagnostically relevant visual features, the nature of which varies according to the particular scene being perceived and the goals of the observer. Copyright © 2015 Elsevier Inc. All rights reserved.
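
    The pattern-classification approach mentioned above can be sketched with a toy decoder. This is a deliberately simplified stand-in (a nearest-centroid classifier on synthetic "voxel" patterns); the study's actual MVPA pipeline, preprocessing, and data are not reproduced here.

```python
import numpy as np

def nearest_centroid_decode(train_X, train_y, test_X):
    # Assign each test pattern to the class whose mean training
    # pattern (centroid) is nearest in Euclidean distance.
    classes = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

# Synthetic "voxel" patterns for two scene categories whose mean
# activity levels differ.
rng = np.random.default_rng(1)
n_trials, n_voxels = 20, 50
cat_a = rng.normal(0.0, 1.0, size=(n_trials, n_voxels))
cat_b = rng.normal(1.5, 1.0, size=(n_trials, n_voxels))
X = np.vstack([cat_a, cat_b])
y = np.array([0] * n_trials + [1] * n_trials)

pred = nearest_centroid_decode(X, y, X)  # decode (here, on the training data)
accuracy = (pred == y).mean()
```

    In practice, decoding accuracy is estimated with cross-validation on held-out trials; above-chance accuracy is taken as evidence that the region's activity patterns carry category information.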

  8. Distributional learning aids linguistic category formation in school-age children.

    PubMed

    Hall, Jessica; Owen Van Horne, Amanda; Farmer, Thomas

    2018-05-01

    The goal of this study was to determine if typically developing children could form grammatical categories from distributional information alone. Twenty-seven children aged six to nine listened to an artificial grammar which contained strategic gaps in its distribution. At test, we compared how children rated novel sentences that fit the grammar to sentences that were ungrammatical. Sentences could be distinguished only through the formation of categories of words with shared distributional properties. Children's ratings revealed that they could discriminate grammatical and ungrammatical sentences. These data lend support to the hypothesis that distributional learning is a potential mechanism for learning grammatical categories in a first language.

  9. When Does Fading Enhance Perceptual Category Learning?

    ERIC Educational Resources Information Center

    Pashler, Harold; Mozer, Michael C.

    2013-01-01

    Training that uses exaggerated versions of a stimulus discrimination (fading) has sometimes been found to enhance category learning, mostly in studies involving animals and impaired populations. However, little is known about whether and when fading facilitates learning for typical individuals. This issue was explored in 7 experiments. In…

  10. Learning to learn causal models.

    PubMed

    Kemp, Charles; Goodman, Noah D; Tenenbaum, Joshua B

    2010-09-01

    Learning to understand a single causal system can be an achievement, but humans must learn about multiple causal systems over the course of a lifetime. We present a hierarchical Bayesian framework that helps to explain how learning about several causal systems can accelerate learning about systems that are subsequently encountered. Given experience with a set of objects, our framework learns a causal model for each object and a causal schema that captures commonalities among these causal models. The schema organizes the objects into categories and specifies the causal powers and characteristic features of these categories and the characteristic causal interactions between categories. A schema of this kind allows causal models for subsequent objects to be rapidly learned, and we explore this accelerated learning in four experiments. Our results confirm that humans learn rapidly about the causal powers of novel objects, and we show that our framework accounts better for our data than alternative models of causal learning. Copyright © 2010 Cognitive Science Society, Inc.

  11. A Preliminary Analysis of the Theoretical Parameters of Organizational Learning.

    DTIC Science & Technology

    1995-09-01

    PARAMETERS OF ORGANIZATIONAL LEARNING. Thesis presented to the Faculty of the Graduate School of Logistics and Acquisition Management of the Air...Organizational Learning Parameters in the Knowledge Acquisition Category...Organizational Learning Parameters in the Information Distribution Category...Learning Refined Scale...Composition of Refined Scale 4: Knowledge Flow...Cronbach's Alpha Statistics for the Complete Knowledge Flow

  12. Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning

    ERIC Educational Resources Information Center

    Siu, Kin Wai Michael; Lam, Mei Seung

    2012-01-01

    Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…

  13. Effects of Computer-Based Visual Representation on Mathematics Learning and Cognitive Load

    ERIC Educational Resources Information Center

    Yung, Hsin I.; Paas, Fred

    2015-01-01

    Visual representation has been recognized as a powerful learning tool in many learning domains. Based on the assumption that visual representations can support deeper understanding, we examined the effects of visual representations on learning performance and cognitive load in the domain of mathematics. An experimental condition with visual…

  14. Threat captures attention but does not affect learning of contextual regularities.

    PubMed

    Yamaguchi, Motonori; Harwood, Sarah L

    2017-04-01

    Some of the stimulus features that guide visual attention are abstract properties of objects such as potential threat to one's survival, whereas others are complex configurations such as visual contexts that are learned through past experiences. The present study investigated the two functions that guide visual attention, threat detection and learning of contextual regularities, in visual search. Search arrays contained images of threat and non-threat objects, and their locations were fixed on some trials but random on other trials. Although they were irrelevant to the visual search task, threat objects facilitated attention capture and impaired attention disengagement. Search time improved for fixed configurations more than for random configurations, reflecting learning of visual contexts. Nevertheless, threat detection had little influence on learning of the contextual regularities. The results suggest that factors guiding visual attention are different from factors that influence learning to guide visual attention.

  15. Hippocampal BOLD response during category learning predicts subsequent performance on transfer generalization.

    PubMed

    Fera, Francesco; Passamonti, Luca; Herzallah, Mohammad M; Myers, Catherine E; Veltri, Pierangelo; Morganti, Giuseppina; Quattrone, Aldo; Gluck, Mark A

    2014-07-01

    To test a prediction of our previous computational model of cortico-hippocampal interaction (Gluck and Myers [1993, 2001]) for characterizing individual differences in category learning, we studied young healthy subjects using an fMRI-adapted category-learning task that has two phases, an initial phase in which associations are learned through trial-and-error feedback followed by a generalization phase in which previously learned rules can be applied to novel associations (Myers et al. [2003]). As expected by our model, we found a negative correlation between learning-related hippocampal responses and accuracy during transfer, demonstrating that hippocampal adaptation during learning is associated with better behavioral scores during transfer generalization. In addition, we found an inverse relationship between Blood Oxygenation Level Dependent (BOLD) activity in the striatum and that in the hippocampal formation and the orbitofrontal cortex during the initial learning phase. Conversely, activity in the dorsolateral prefrontal cortex, orbitofrontal cortex and parietal lobes dominated over that of the hippocampal formation during the generalization phase. These findings provide evidence in support of theories of the neural substrates of category learning which argue that the hippocampal region plays a critical role during learning for appropriately encoding and representing newly learned information so that this learning can be successfully applied and generalized to subsequent novel task demands. Copyright © 2013 Wiley Periodicals, Inc.

  16. Test of a potential link between analytic and nonanalytic category learning and automatic, effortful processing.

    PubMed

    Tracy, J I; Pinsk, M; Helverson, J; Urban, G; Dietz, T; Smith, D J

    2001-08-01

    The link between automatic and effortful processing and nonanalytic and analytic category learning was evaluated in a sample of 29 college undergraduates using declarative memory, semantic category search, and pseudoword categorization tasks. Automatic and effortful processing measures were hypothesized to be associated with nonanalytic and analytic categorization, respectively. Results suggested that contrary to prediction strong criterion-attribute (analytic) responding on the pseudoword categorization task was associated with strong automatic, implicit memory encoding of frequency-of-occurrence information. Data are discussed in terms of the possibility that criterion-attribute category knowledge, once established, may be expressed with few attentional resources. The data indicate that attention resource requirements, even for the same stimuli and task, vary depending on the category rule system utilized. Also, the automaticity emerging from familiarity with analytic category exemplars is very different from the automaticity arising from extensive practice on a semantic category search task. The data do not support any simple mapping of analytic and nonanalytic forms of category learning onto the automatic and effortful processing dichotomy and challenge simple models of brain asymmetries for such procedures. Copyright 2001 Academic Press.

  17. A comparison of different category scales for estimating disease severity

    USDA-ARS?s Scientific Manuscript database

    Plant pathologists most often obtain quantitative information on disease severity using visual assessments. Category scales are widely used for assessing disease severity, including for screening germplasm. The most widely used category scale is the Horsfall-Barratt (H-B) scale, but reports show tha...

  18. Enhanced recognition memory in grapheme-color synaesthesia for different categories of visual stimuli

    PubMed Central

    Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas

    2013-01-01

    Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status. PMID:24187542

  19. Enhanced recognition memory in grapheme-color synaesthesia for different categories of visual stimuli.

    PubMed

    Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas

    2013-01-01

    Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status.

  20. Attention during natural vision warps semantic representation across the human brain.

    PubMed

    Çukur, Tolga; Nishimoto, Shinji; Huth, Alexander G; Gallant, Jack L

    2013-06-01

    Little is known about how attention changes the cortical representation of sensory information in humans. On the basis of neurophysiological evidence, we hypothesized that attention causes tuning changes to expand the representation of attended stimuli at the cost of unattended stimuli. To investigate this issue, we used functional magnetic resonance imaging to measure how semantic representation changed during visual search for different object categories in natural movies. We found that many voxels across occipito-temporal and fronto-parietal cortex shifted their tuning toward the attended category. These tuning shifts expanded the representation of the attended category and of semantically related, but unattended, categories, and compressed the representation of categories that were semantically dissimilar to the target. Attentional warping of semantic representation occurred even when the attended category was not present in the movie; thus, the effect was not a target-detection artifact. These results suggest that attention dynamically alters visual representation to optimize processing of behaviorally relevant objects during natural vision.

  1. Reward Selectively Modulates the Lingering Neural Representation of Recently Attended Objects in Natural Scenes.

    PubMed

    Hickey, Clayton; Peelen, Marius V

    2017-08-02

    Theories of reinforcement learning and approach behavior suggest that reward can increase the perceptual salience of environmental stimuli, ensuring that potential predictors of outcome are noticed in the future. However, outcome commonly follows visual processing of the environment, occurring even when potential reward cues have long disappeared. How can reward feedback retroactively cause now-absent stimuli to become attention-drawing in the future? One possibility is that reward and attention interact to prime lingering visual representations of attended stimuli that sustain through the interval separating stimulus and outcome. Here, we test this idea using multivariate pattern analysis of fMRI data collected from male and female humans. While in the scanner, participants searched for examples of target categories in briefly presented pictures of cityscapes and landscapes. Correct task performance was followed by reward feedback that could randomly have either high or low magnitude. Analysis showed that high-magnitude reward feedback boosted the lingering representation of target categories while reducing the representation of nontarget categories. The magnitude of this effect in each participant predicted the behavioral impact of reward on search performance in subsequent trials. Other analyses show that sensitivity to reward-as expressed in a personality questionnaire and in reactivity to reward feedback in the dopaminergic midbrain-predicted reward-elicited variance in lingering target and nontarget representations. Credit for rewarding outcome thus appears to be assigned to the target representation, causing the visual system to become sensitized for similar objects in the future. SIGNIFICANCE STATEMENT How do reward-predictive visual stimuli become salient and attention-drawing? In the real world, reward cues precede outcome and reward is commonly received long after potential predictors have disappeared. How can the representation of environmental stimuli be affected by outcome that occurs later in time? Here, we show that reward acts on lingering representations of environmental stimuli that sustain through the interval between stimulus and outcome. Using naturalistic scene stimuli and multivariate pattern analysis of fMRI data, we show that reward boosts the representation of attended objects and reduces the representation of unattended objects. This interaction of attention and reward processing acts to prime vision for stimuli that may serve to predict outcome. Copyright © 2017 the authors 0270-6474/17/377297-08$15.00/0.

  2. The Effect of Feedback Delay and Feedback Type on Perceptual Category Learning: The Limits of Multiple Systems

    ERIC Educational Resources Information Center

    Dunn, John C.; Newell, Ben R.; Kalish, Michael L.

    2012-01-01

    Evidence that learning rule-based (RB) and information-integration (II) category structures can be dissociated across different experimental variables has been used to support the view that such learning is supported by multiple learning systems. Across 4 experiments, we examined the effects of 2 variables, the delay between response and feedback…

  3. Enhanced procedural learning of speech sound categories in a genetic variant of FOXP2.

    PubMed

    Chandrasekaran, Bharath; Yi, Han-Gyol; Blanco, Nathaniel J; McGeary, John E; Maddox, W Todd

    2015-05-20

    A mutation of the forkhead box protein P2 (FOXP2) gene is associated with severe deficits in human speech and language acquisition. In rodents, the humanized form of FOXP2 promotes faster switching from declarative to procedural learning strategies when the two learning systems compete. Here, we examined a polymorphism of FOXP2 (rs6980093) in humans (214 adults; 111 females) for associations with non-native speech category learning success. Neurocomputational modeling results showed that individuals with the GG genotype shifted faster to procedural learning strategies, which are optimal for the task. These findings support an adaptive role for the FOXP2 gene in modulating the function of neural learning systems that have a direct bearing on human speech category learning. Copyright © 2015 the authors 0270-6474/15/357808-05$15.00/0.

  4. Prefrontal Contributions to Rule-Based and Information-Integration Category Learning

    ERIC Educational Resources Information Center

    Schnyer, David M.; Maddox, W. Todd; Ell, Shawn; Davis, Sarah; Pacheco, Jenni; Verfaellie, Mieke

    2009-01-01

    Previous research revealed that the basal ganglia play a critical role in category learning [Ell, S. W., Marchant, N. L., & Ivry, R. B. (2006). "Focal putamen lesions impair learning in rule-based, but not information-integration categorization tasks." "Neuropsychologia", 44(10), 1737-1751; Maddox, W. T. & Filoteo, J.…

  5. Learning Problems and Classroom Instruction.

    ERIC Educational Resources Information Center

    Adelman, Howard S.

    Defined are categories of learning disabilities (LD) that can be remediated in regular public school classes, and offered are remedial approaches. Stressed in four studies is the heterogeneity of LD problems. Suggested is grouping LD children into three categories: no disorder (problem is from the learning environment); minor disorder (problem is…

  6. Statistical Learning of Phonetic Categories: Insights from a Computational Approach

    ERIC Educational Resources Information Center

    McMurray, Bob; Aslin, Richard N.; Toscano, Joseph C.

    2009-01-01

    Recent evidence (Maye, Werker & Gerken, 2002) suggests that statistical learning may be an important mechanism for the acquisition of phonetic categories in the infant's native language. We examined the sufficiency of this hypothesis and its implications for development by implementing a statistical learning mechanism in a computational model…
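    The record above is truncated, but the statistical learning mechanism it refers to is distributional: phonetic categories emerge, without labels, from the modes of a bimodal acoustic cue distribution. A minimal, hypothetical sketch of that idea, fitting two Gaussians to synthetic voice-onset-time data by expectation-maximization (all numbers invented; this is not the authors' implementation):

```python
import math
import random

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def em_two_gaussians(data, iters=50):
    """Fit a two-component 1-D Gaussian mixture by expectation-maximization.

    This mimics distributional learning: with no labels, each component
    settles on one mode of a bimodal phonetic cue distribution.
    """
    mu = [min(data), max(data)]   # start the means far apart
    sigma = [10.0, 10.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each data point
        resp = []
        for x in data:
            p = [w[k] * gaussian_pdf(x, mu[k], sigma[k]) for k in range(2)]
            total = sum(p)
            resp.append([pk / total for pk in p])
        # M-step: re-estimate mixture weights, means, and spreads
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            sigma[k] = max(1e-3, math.sqrt(
                sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk))
    return mu, sigma, w

random.seed(0)
# Synthetic voice-onset-time values (ms): a /b/-like mode near 0 ms and
# a /p/-like mode near 50 ms -- illustrative values, not infant data.
vot = [random.gauss(0, 5) for _ in range(200)] + [random.gauss(50, 8) for _ in range(200)]
mu, sigma, w = em_two_gaussians(vot)
print(sorted(round(m, 1) for m in mu))  # two means, one near each mode
```

    The learner recovers two category centers from the raw cue distribution alone, which is the core claim being tested for sufficiency in the cited model.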

  7. Establishing Visual Category Boundaries between Objects: A PET Study

    ERIC Educational Resources Information Center

    Saumier, Daniel; Chertkow, Howard; Arguin, Martin; Whatmough, Cristine

    2005-01-01

    Individuals with Alzheimer's disease (AD) often have problems in recognizing common objects. This visual agnosia may stem from difficulties in establishing appropriate visual boundaries between visually similar objects. In support of this hypothesis, Saumier, Arguin, Chertkow, and Renfrew (2001) showed that AD subjects have difficulties in…

  8. Visual Literacy. . .An Overview of Theory and Practice.

    ERIC Educational Resources Information Center

    DeSantis, Lucille Burbank; Pett, Dennis W.

    Visual Literacy is a field that encompasses a variety of theoretical constructs and practical considerations relating to communicating with visual signs. The theoretical constructs that influence visual communication primarily fall into two closely interrelated categories: those that relate to the individuals involved in the communication process,…

  9. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers.

    PubMed

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation.

  10. The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex

    PubMed Central

    Leibo, Joel Z.; Liao, Qianli; Anselmi, Fabio; Poggio, Tomaso

    2015-01-01

    Is visual cortex made up of general-purpose information processing machinery, or does it consist of a collection of specialized modules? If prior knowledge, acquired from learning a set of objects, is only transferable to new objects that share properties with the old, then the recognition system’s optimal organization must be one containing specialized modules for different object classes. Our analysis starts from a premise we call the invariance hypothesis: that the computational goal of the ventral stream is to compute an invariant-to-transformations and discriminative signature for recognition. The key condition enabling approximate transfer of invariance without sacrificing discriminability turns out to be that the learned and novel objects transform similarly. This implies that the optimal recognition system must contain subsystems trained only with data from similarly-transforming objects and suggests a novel interpretation of domain-specific regions like the fusiform face area (FFA). Furthermore, we can define an index of transformation-compatibility, computable from videos, that can be combined with information about the statistics of natural vision to yield predictions for which object categories ought to have domain-specific regions in agreement with the available data. The result is a unifying account linking the large literature on view-based recognition with the wealth of experimental evidence concerning domain-specific regions. PMID:26496457

  11. Category-contingent face adaptation for novel colour categories: Contingent effects are seen only after social or meaningful labelling.

    PubMed

    Little, Anthony C; DeBruine, Lisa M; Jones, Benedict C

    2011-01-01

    A face appears normal when it approximates the average of a population. Consequently, exposure to faces biases perceptions of subsequently viewed faces such that faces similar to those recently seen are perceived as more normal. Simultaneously inducing such aftereffects in opposite directions for two groups of faces indicates somewhat discrete representations for those groups. Here we examine how labelling influences the perception of category in faces differing in colour. We show category-contingent aftereffects following exposure to faces differing in eye spacing (wide versus narrow) for blue versus red faces when such groups are consistently labelled with socially meaningful labels (Extravert versus Introvert; Soldier versus Builder). Category-contingent aftereffects were not seen using identical methodology when labels were not meaningful or were absent. These data suggest that human representations of faces can be rapidly tuned to code for meaningful social categories and that such tuning requires both a label and an associated visual difference. Results highlight the flexibility of the cognitive visual system to discriminate categories even in adulthood. Copyright © 2010 Elsevier B.V. All rights reserved.

  12. Neural Changes Associated with Nonspeech Auditory Category Learning Parallel Those of Speech Category Acquisition

    ERIC Educational Resources Information Center

    Liu, Ran; Holt, Lori L.

    2011-01-01

    Native language experience plays a critical role in shaping speech categorization, but the exact mechanisms by which it does so are not well understood. Investigating category learning of nonspeech sounds with which listeners have no prior experience allows their experience to be systematically controlled in a way that is impossible to achieve by…

  13. The Advantage of Mixing Examples in Inductive Learning: A Comparison of Three Hypotheses

    ERIC Educational Resources Information Center

    Guzman-Munoz, Francisco Javier

    2017-01-01

    Mixing examples of different categories (interleaving) has been shown to promote inductive learning as compared with presenting examples of the same category together (massing). In three studies, we tested whether the advantage of interleaving is exclusively due to the mixing of examples from different categories or to the temporal gap introduced…

  14. Is it a bird? Is it a plane? Ultra-rapid visual categorisation of natural and artifactual objects.

    PubMed

    VanRullen, R; Thorpe, S J

    2001-01-01

    Visual processing is known to be very fast in ultra-rapid categorisation tasks where the subject has to decide whether a briefly flashed image belongs to a target category or not. Human subjects can respond in under 400 ms, and event-related-potential studies have shown that the underlying processing can be done in less than 150 ms. Monkeys trained to perform the same task have proved even faster. However, most of these experiments have only been done with biologically relevant target categories such as animals or food. Here we performed the same study on human subjects, alternating between a task in which the target category was 'animal', and a task in which the target category was 'means of transport'. In the latter task, the natural images contained clearly artificial targets as varied as cars, trucks, trains, boats, aircraft, and hot-air balloons. However, the subjects performed almost identically in both tasks, with reaction times not significantly longer in the 'means of transport' task. These reaction times were much shorter than in any previous study on natural-image processing. We conclude that, at least for these two superordinate categories, the speed of ultra-rapid visual categorisation of natural scenes does not depend on the target category, and that this processing could rely primarily on feed-forward, automatic mechanisms.

  15. Colouring the Gaps in Learning Design: Aesthetics and the Visual in Learning

    ERIC Educational Resources Information Center

    Carroll, Fiona; Kop, Rita

    2016-01-01

    The visual is a dominant mode of information retrieval and understanding however, the focus on the visual dimension of Technology Enhanced Learning (TEL) is still quite weak in relation to its predominant focus on usability. To accommodate the future needs of the visual learner, designers of e-learning environments should advance the current…

  16. Visual and Verbal Learning in a Genetic Metabolic Disorder

    ERIC Educational Resources Information Center

    Spilkin, Amy M.; Ballantyne, Angela O.; Trauner, Doris A.

    2009-01-01

    Visual and verbal learning in a genetic metabolic disorder (cystinosis) were examined in the following three studies. The goal of Study I was to provide a normative database and establish the reliability and validity of a new test of visual learning and memory (Visual Learning and Memory Test; VLMT) that was modeled after a widely used test of…

  17. [Cognitive Functions in the Prefrontal Association Cortex; Transitive Inference and the Lateral Prefrontal Cortex].

    PubMed

    Tanaka, Shingo; Oguchi, Mineki; Sakagami, Masamichi

    2016-11-01

    To behave appropriately in a complex and uncertain world, the brain makes use of several distinct learning systems. One such system is called the "model-free process", via which conditioning allows the association between a stimulus or response and a given reward to be learned. Another system is called the "model-based process". Via this process, the state transition between a stimulus and a response is learned so that the brain is able to plan actions prior to their execution. Several studies have tried to relate the difference between model-based and model-free processes to the difference in functions of the lateral prefrontal cortex (LPFC) and the striatum. Here, we describe a series of studies that demonstrate the ability of LPFC neurons to categorize visual stimuli by their associated behavioral responses and to generate abstract information. If LPFC neurons utilize abstract code to associate a stimulus with a reward, they should be able to infer similar relationships between other stimuli of the same category and their rewards without direct experience of these stimulus-reward contingencies. We propose that this ability of LPFC neurons to utilize abstract information can contribute to the model-based learning process.
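    The model-free versus model-based distinction described above can be made concrete in a toy task: a model-free learner caches stimulus-reward values through conditioning, while a model-based learner evaluates actions through a learned state-transition model before acting. The task, rewards, and parameters below are hypothetical, not from the cited studies:

```python
import random

# Hypothetical one-choice task: each action in state "s0" leads
# deterministically to a terminal state carrying a fixed reward.
TRANSITIONS = {("s0", "left"): "sL", ("s0", "right"): "sR"}
REWARDS = {"sL": 0.0, "sR": 1.0}

def model_free(episodes=200, alpha=0.2):
    """Model-free process: cache action values by a delta rule,
    with no representation of where each action leads."""
    q = {"left": 0.0, "right": 0.0}
    for _ in range(episodes):
        action = random.choice(["left", "right"])        # explore
        reward = REWARDS[TRANSITIONS[("s0", action)]]
        q[action] += alpha * (reward - q[action])        # incremental update
    return q

def model_based():
    """Model-based process: plan by running each action through the
    learned state-transition model instead of relying on cached values."""
    return {a: REWARDS[nxt] for (_, a), nxt in TRANSITIONS.items()}

random.seed(1)
print(model_free())   # cached values converge toward the true rewards
print(model_based())  # planning reads the values off the model directly
```

    The cached values only approach the truth after many rewarded experiences, whereas the model-based evaluation is correct as soon as the transition structure is known, which is why the two processes can dissociate behaviorally.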

  18. The role of object categories in hybrid visual and memory search

    PubMed Central

    Cunningham, Corbin A.; Wolfe, Jeremy M.

    2014-01-01

    In hybrid search, observers (Os) search for any of several possible targets in a visual display containing distracting items and, perhaps, a target. Wolfe (2012) found that response times (RT) in such tasks increased linearly with increases in the number of items in the display. However, RT increased linearly with the log of the number of items in the memory set. In earlier work, all items in the memory set were unique instances (e.g. this apple in this pose). Typical real world tasks involve more broadly defined sets of stimuli (e.g. any “apple” or, perhaps, “fruit”). The present experiments show how sets or categories of targets are handled in joint visual and memory search. In Experiment 1, searching for a digit among letters was not like searching for targets from a 10-item memory set, though searching for targets from an N-item memory set of arbitrary alphanumeric characters was like searching for targets from an N-item memory set of arbitrary objects. In Experiment 2, Os searched for any instance of N sets or categories held in memory. This hybrid search was harder than search for specific objects. However, memory search remained logarithmic. Experiment 3 illustrates the interaction of visual guidance and memory search when a subset of visual stimuli is drawn from a target category. Furthermore, we outline a conceptual model, supported by our results, defining the core components that would be necessary to support such categorical hybrid searches. PMID:24661054
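    The set-size effects this abstract reports amount to a simple regression form: RT grows linearly with the number of display items but only with the logarithm of the number of memorized targets. A sketch with illustrative coefficients (invented for demonstration, not Wolfe's fitted values):

```python
import math

def predicted_rt(visual_n, memory_n, base=400.0, slope_visual=40.0, slope_memory=50.0):
    """Hybrid-search RT model in the form the abstract describes:
    linear in display set size, logarithmic in memory set size.
    All coefficients here are illustrative placeholders (ms)."""
    return base + slope_visual * visual_n + slope_memory * math.log2(memory_n)

# Doubling the display adds a constant cost per extra item...
print(predicted_rt(16, 4) - predicted_rt(8, 4))   # 320.0 ms
# ...but doubling the memory set adds only one log step.
print(predicted_rt(8, 8) - predicted_rt(8, 4))    # 50.0 ms
```

    The asymmetry is the point: memory set size can grow large at modest cost, which is what makes categorical hybrid search feasible at all.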

  19. Selectivity in Postencoding Connectivity with High-Level Visual Cortex Is Associated with Reward-Motivated Memory

    PubMed Central

    Murty, Vishnu P.; Tompary, Alexa; Adcock, R. Alison

    2017-01-01

    Reward motivation has been demonstrated to enhance declarative memory by facilitating systems-level consolidation. Although high-reward information is often intermixed with lower reward information during an experience, memory for high value information is prioritized. How is this selectivity achieved? One possibility is that postencoding consolidation processes bias memory strengthening to those representations associated with higher reward. To test this hypothesis, we investigated the influence of differential reward motivation on the selectivity of postencoding markers of systems-level memory consolidation. Human participants encoded intermixed, trial-unique memoranda that were associated with either high or low value during fMRI acquisition. Encoding was interleaved with periods of rest, allowing us to investigate experience-dependent changes in connectivity as they related to later memory. Behaviorally, we found that reward motivation enhanced 24 h associative memory. Analysis of patterns of postencoding connectivity showed that, even though learning trials were intermixed, there was significantly greater connectivity with regions of high-level, category-selective visual cortex associated with high-reward trials. Specifically, increased connectivity of category-selective visual cortex with both the VTA and the anterior hippocampus predicted associative memory for high- but not low-reward memories. Critically, these results were independent of encoding-related connectivity and univariate activity measures. Thus, these findings support a model by which the selective stabilization of memories for salient events is supported by postencoding interactions with sensory cortex associated with reward. SIGNIFICANCE STATEMENT Reward motivation is thought to promote memory by supporting memory consolidation. Yet, little is known as to how the brain selects relevant information for subsequent consolidation based on reward.
We show that experience-dependent changes in connectivity of both the anterior hippocampus and the VTA with high-level visual cortex selectively predicts memory for high-reward memoranda at a 24 h delay. These findings provide evidence for a novel mechanism guiding the consolidation of memories for valuable events, namely, postencoding interactions between neural systems supporting mesolimbic dopamine activation, episodic memory, and perception. PMID:28100737

  20. Brain connections of words, perceptions and actions: A neurobiological model of spatio-temporal semantic activation in the human cortex.

    PubMed

    Tomasello, Rosario; Garagnani, Max; Wennekers, Thomas; Pulvermüller, Friedemann

    2017-04-01

    Neuroimaging and patient studies show that different areas of cortex respectively specialize for general and selective, or category-specific, semantic processing. Why are there both semantic hubs and category-specificity, and how come that they emerge in different cortical regions? Can the activation time-course of these areas be predicted and explained by brain-like network models? In this present work, we extend a neurocomputational model of human cortical function to simulate the time-course of cortical processes of understanding meaningful concrete words. The model implements frontal and temporal cortical areas for language, perception, and action along with their connectivity. It uses Hebbian learning to semantically ground words in aspects of their referential object- and action-related meaning. Compared with earlier proposals, the present model incorporates additional neuroanatomical links supported by connectivity studies and downscaled synaptic weights in order to control for functional between-area differences purely due to the number of in- or output links of an area. We show that learning of semantic relationships between words and the objects and actions these symbols are used to speak about, leads to the formation of distributed circuits, which all include neuronal material in connector hub areas bridging between sensory and motor cortical systems. Therefore, these connector hub areas acquire a role as semantic hubs. By differentially reaching into motor or visual areas, the cortical distributions of the emergent 'semantic circuits' reflect aspects of the represented symbols' meaning, thus explaining category-specificity. The improved connectivity structure of our model entails a degree of category-specificity even in the 'semantic hubs' of the model. 
The relative time-course of activation of these areas is typically fast and near-simultaneous, with semantic hubs central to the network structure activating before modality-preferential areas carrying semantic information. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. A qualitative inquiry into the effects of visualization on high school chemistry students' learning process of molecular structure

    NASA Astrophysics Data System (ADS)

    Deratzou, Susan

    This research studies the process of high school chemistry students visualizing chemical structures and its role in learning chemical bonding and molecular structure. Minimal research exists with high school chemistry students, and more research is necessary (Gabel & Sherwood, 1980; Seddon & Moore, 1986; Seddon, Tariq, & Dos Santos Veiga, 1984). Using visualization tests (Ekstrom, French, Harman, & Dermen, 1990a), a learning style inventory (Brown & Cooper, 1999), and observations through a case study design, this study found visual learners performed better, but needed more practice and training. Statistically, all five pre- and post-test visualization test comparisons were highly significant in the two-tailed t-test (p < .01). The research findings are: (1) Students who tested high in the Visual (Language and/or Numerical) and Tactile Learning Styles (and Social Learning) had an advantage. Students who learned the chemistry concepts more effectively were better at visualizing structures and using molecular models to enhance their knowledge. (2) Students showed improvement in learning after visualization practice. Training in visualization would improve students' visualization abilities and provide them with a way to think about these concepts. (3) Conceptualization of concepts indicated that visualizing ability was critical and that it could be acquired. Support for this finding was provided by pre- and post-Visualization Test data with a highly significant t-test. (4) Various molecular animation programs and websites were found to be effective. (5) Visualization and modeling of structures encompassed both two- and three-dimensional space. The Visualization Test findings suggested that the students performed better with basic rotation of structures as compared to two- and three-dimensional objects. (6) Data from observations suggest that teaching style was an important factor in student learning of molecular structure.
(7) Students did learn the chemistry concepts. Based on the Visualization Test results, which showed that most of the students performed better on the post-test, the visualization experience and the abstract nature of the content allowed them to transfer some of their chemical understanding and practice to non-chemical structures. Finally, implications for teaching of chemistry, students learning chemistry, curriculum, and research for the field of chemical education were discussed.

  2. Economic impact of stimulated technological activity. Part 3: Case study, knowledge additions and earth links from space crew systems

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A case study of knowledge contributions from the crew life support aspect of the manned space program is reported. The new information needed to be learned, the solutions developed, and the relation of new knowledge gained to earthly problems were investigated. Illustrations are given in the following categories: supplying atmosphere for spacecraft; providing carbon dioxide removal and recycling; providing contaminant control and removal; maintaining the body's thermal balance; protecting against the space hazards of decompression, radiation, and meteorites; minimizing fire and blast hazards; providing adequate light and conditions for adequate visual performance; providing mobility and work physiology; and providing adequate habitability.

  3. How may the basal ganglia contribute to auditory categorization and speech perception?

    PubMed Central

    Lim, Sung-Joo; Fiez, Julie A.; Holt, Lori L.

    2014-01-01

    Listeners must accomplish two complementary perceptual feats in extracting a message from speech. They must discriminate linguistically-relevant acoustic variability and generalize across irrelevant variability. Said another way, they must categorize speech. Since the mapping of acoustic variability is language-specific, these categories must be learned from experience. Thus, understanding how, in general, the auditory system acquires and represents categories can inform us about the toolbox of mechanisms available to speech perception. This perspective invites consideration of findings from cognitive neuroscience literatures outside of the speech domain as a means of constraining models of speech perception. Although neurobiological models of speech perception have mainly focused on cerebral cortex, research outside the speech domain is consistent with the possibility of significant subcortical contributions in category learning. Here, we review the functional role of one such structure, the basal ganglia. We examine research from animal electrophysiology, human neuroimaging, and behavior to consider characteristics of basal ganglia processing that may be advantageous for speech category learning. We also present emerging evidence for a direct role for basal ganglia in learning auditory categories in a complex, naturalistic task intended to model the incidental manner in which speech categories are acquired. To conclude, we highlight new research questions that arise in incorporating the broader neuroscience research literature in modeling speech perception, and suggest how understanding contributions of the basal ganglia can inform attempts to optimize training protocols for learning non-native speech categories in adulthood. PMID:25136291

  4. Creating Objects and Object Categories for Studying Perception and Perceptual Learning

    PubMed Central

    Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay

    2012-01-01

    In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties [1]. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties [2]. Many innovative and useful methods currently exist for creating novel objects and object categories [3-6] (also see refs. [7,8]). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter [5,9,10], and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects [11-13]. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis [14]. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection [9,12,13]. 
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics [15,16]. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects [9,13]. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis. PMID:23149420

  5. 76 FR 51002 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-17

    ... with visual dysfunction related to traumatic brain injury, with an eye injury and a visual acuity in... of visual field in the injured eye. Categories of records in the system: Individual's full name... interventions or other operative procedures, follow up services and treatment, visual outcomes, and records with...

  6. A Prospective Curriculum Using Visual Literacy.

    ERIC Educational Resources Information Center

    Hortin, John A.

    This report describes the uses of visual literacy programs in the schools and outlines four categories for incorporating training in visual thinking into school curriculums as part of the back to basics movement in education. The report recommends that curriculum writers include materials pertaining to: (1) reading visual language and…

  7. Visual Perceptual Learning and Models.

    PubMed

    Dosher, Barbara; Lu, Zhong-Lin

    2017-09-15

    Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.
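    The reweighting account this review describes holds that the sensory representation stays fixed while learning adjusts the readout weights feeding the decision. A minimal delta-rule sketch of that idea; the two-channel encoding, noise levels, and learning rate below are hypothetical illustrations, not a model from the review:

```python
import random

random.seed(0)

def make_trial():
    """Hypothetical two-channel sensory code: channel 0 carries the
    signal (plus noise), channel 1 is task-irrelevant noise."""
    label = random.choice([-1, 1])
    informative = label + random.gauss(0, 0.5)
    irrelevant = random.gauss(0, 1.0)
    return [informative, irrelevant], label

def train(trials=2000, lr=0.05):
    """Delta-rule reweighting: the sensory channels never change;
    learning only adjusts the readout weights to the decision."""
    w = [0.0, 0.0]
    for _ in range(trials):
        x, label = make_trial()
        decision = w[0] * x[0] + w[1] * x[1]
        error = label - decision
        for i in range(2):
            w[i] += lr * error * x[i]   # strengthen useful channels
    return w

w = train()
print(w)  # the informative channel ends up with much the larger weight
```

    Performance improves because the decision progressively ignores the noisy channel, which is the sense in which learning can occur without any change in early sensory tuning.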

  8. Creating visual explanations improves learning.

    PubMed

    Bobek, Eliza; Tversky, Barbara

    2016-01-01

Many topics in science are notoriously difficult for students to learn. Mechanisms and processes outside student experience present particular challenges. While instruction typically involves visualizations, students usually explain in words. Because visual explanations can show parts and processes of complex systems directly, creating them should have benefits beyond creating verbal explanations. We compared learning from creating visual or verbal explanations for two STEM domains, a mechanical system (bicycle pump) and a chemical system (bonding). Both kinds of explanations were analyzed for content, and learning was assessed by a post-test. For the mechanical system, creating a visual explanation increased understanding particularly for participants of low spatial ability. For the chemical system, creating both visual and verbal explanations improved learning without new teaching. Creating a visual explanation was superior and benefitted participants of both high and low spatial ability. Visual explanations often included crucial yet invisible features. The greater effectiveness of visual explanations appears attributable to the checks they provide for completeness and coherence as well as to their roles as platforms for inference. The benefits should generalize to other domains like the social sciences, history, and archeology where important information can be visualized. Together, the findings provide support for the use of learner-generated visual explanations as a powerful learning tool.

  9. Time limits during visual foraging reveal flexible working memory templates.

    PubMed

    Kristjánsson, Tómas; Thornton, Ian M; Kristjánsson, Árni

    2018-06-01

    During difficult foraging tasks, humans rarely switch between target categories, but switch frequently during easier foraging. Does this reflect fundamental limits on visual working memory (VWM) capacity or simply strategic choice due to effort? Our participants performed time-limited or unlimited foraging tasks where they tapped stimuli from 2 target categories while avoiding items from 2 distractor categories. These time limits should have no effect if capacity imposes limits on VWM representations but more flexible VWM could allow observers to use VWM according to task demands in each case. We found that with time limits, participants switched more frequently and switch-costs became much smaller than during unlimited foraging. Observers can therefore switch between complex (conjunction) target categories when needed. We propose that while maintaining many complex templates in working memory is effortful and observers avoid this, they can do so if this fits task demands, showing the flexibility of working memory representations used for visual exploration. This is in contrast with recent proposals, and we discuss the implications of these findings for theoretical accounts of working memory. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  10. The influence of visual ability on learning and memory performance in 13 strains of mice.

    PubMed

    Brown, Richard E; Wong, Aimée A

    2007-03-01

    We calculated visual ability in 13 strains of mice (129SI/Sv1mJ, A/J, AKR/J, BALB/cByJ, C3H/HeJ, C57BL/6J, CAST/EiJ, DBA/2J, FVB/NJ, MOLF/EiJ, SJL/J, SM/J, and SPRET/EiJ) on visual detection, pattern discrimination, and visual acuity and tested these and other mice of the same strains in a behavioral test battery that evaluated visuo-spatial learning and memory, conditioned odor preference, and motor learning. Strain differences in visual acuity accounted for a significant proportion of the variance between strains in measures of learning and memory in the Morris water maze. Strain differences in motor learning performance were not influenced by visual ability. Conditioned odor preference was enhanced in mice with visual defects. These results indicate that visual ability must be accounted for when testing for strain differences in learning and memory in mice because differences in performance in many tasks may be due to visual deficits rather than differences in higher order cognitive functions. These results have significant implications for the search for the neural and genetic basis of learning and memory in mice.

  11. Neural differentiation of lexico-syntactic categories or semantic features? event-related potential evidence for both.

    PubMed

    Kellenbach, Marion L; Wijers, Albertus A; Hovius, Marjolijn; Mulder, Juul; Mulder, Gijsbertus

    2002-05-15

    Event-related potentials (ERPs) were used to investigate whether processing differences between nouns and verbs can be accounted for by the differential salience of visual-perceptual and motor attributes in their semantic specifications. Three subclasses of nouns and verbs were selected, which differed in their semantic attribute composition (abstract, high visual, high visual and motor). Single visual word presentation with a recognition memory task was used. While multiple robust and parallel ERP effects were observed for both grammatical class and attribute type, there were no interactions between these. This pattern of effects provides support for lexical-semantic knowledge being organized in a manner that takes account both of category-based (grammatical class) and attribute-based distinctions.

  12. Recognition and attention guidance during contextual cueing in real-world scenes: evidence from eye movements.

    PubMed

    Brockmole, James R; Henderson, John M

    2006-07-01

    When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.

  13. Who needs a referee? How incorrect basketball actions are automatically detected by basketball players' brain

    PubMed Central

    Proverbio, Alice Mado; Crotti, Nicola; Manfredi, Mirella; Adorni, Roberta; Zani, Alberto

    2012-01-01

    While the existence of a mirror neuron system (MNS) representing and mirroring simple purposeful actions (such as reaching) is known, neural mechanisms underlying the representation of complex actions (such as ballet, fencing, etc.) that are learned by imitation and exercise are not well understood. In this study, correct and incorrect basketball actions were visually presented to professional basketball players and naïve viewers while their EEG was recorded. The participants had to respond to rare targets (unanimated scenes). No category or group differences were found at perceptual level, ruling out the possibility that correct actions might be more visually familiar. Large, anterior N400 responses of event-related brain potentials to incorrectly performed basketball actions were recorded in skilled brains only. The swLORETA inverse solution for incorrect–correct contrast showed that the automatic detection of action ineffectiveness/incorrectness involved the fronto/parietal MNS, the cerebellum, the extra-striate body area, and the superior temporal sulcus. PMID:23181191

  14. Perceptual learning in children with visual impairment improves near visual acuity.

    PubMed

    Huurneman, Bianca; Boonstra, F Nienke; Cox, Ralf F A; van Rens, Ger; Cillessen, Antonius H N

    2013-09-17

This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. Participants were 45 children with visual impairment and 29 children with normal vision. Children with visual impairment were divided into three groups: a magnifier group (n = 12), a crowded perceptual learning group (n = 18), and an uncrowded perceptual learning group (n = 15). Children with normal vision also were divided into three groups, but were measured only at baseline. Dependent variables were single near visual acuity (NVA), crowded NVA, LH line 50% crowding NVA, number of trials, accuracy, performance time, number of small errors, and number of large errors. Children with visual impairment trained during six weeks, two times per week, for 30 minutes (12 training sessions). After training, children showed significant improvement of NVA in addition to specific improvements on the training task. The crowded perceptual learning group showed the largest acuity improvements (1.7 logMAR lines on the crowded chart, P < 0.001). Only the children in the crowded perceptual learning group showed improvements on all NVA charts. Children with visual impairment benefit from perceptual training. While task-specific improvements were observed in all training groups, transfer to crowded NVA was largest in the crowded perceptual learning group. To our knowledge, this is the first study to provide evidence for the improvement of NVA by perceptual learning in children with visual impairment. (http://www.trialregister.nl number, NTR2537.).

  15. Schematic Influences on Category Learning and Recognition Memory

    ERIC Educational Resources Information Center

    Sakamoto, Yasuaki; Love, Bradley C.

    2004-01-01

    The results from 3 category learning experiments suggest that items are better remembered when they violate a salient knowledge structure such as a rule. The more salient the knowledge structure, the stronger the memory for deviant items. The effect of learning errors on subsequent recognition appears to be mediated through the imposed knowledge…

  16. Categorical Structure among Shared Features in Networks of Early-Learned Nouns

    ERIC Educational Resources Information Center

    Hills, Thomas T.; Maouene, Mounir; Maouene, Josita; Sheya, Adam; Smith, Linda

    2009-01-01

    The shared features that characterize the noun categories that young children learn first are a formative basis of the human category system. To investigate the potential categorical information contained in the features of early-learned nouns, we examine the graph-theoretic properties of noun-feature networks. The networks are built from the…

  17. Mapping the “What” and “Where” Visual Cortices and Their Atrophy in Alzheimer's Disease: Combined Activation Likelihood Estimation with Voxel-Based Morphometry

    PubMed Central

    Deng, Yanjia; Shi, Lin; Lei, Yi; Liang, Peipeng; Li, Kuncheng; Chu, Winnie C. W.; Wang, Defeng

    2016-01-01

    The human cortical regions for processing high-level visual (HLV) functions of different categories remain ambiguous, especially in terms of their conjunctions and specifications. Moreover, the neurobiology of declined HLV functions in patients with Alzheimer's disease (AD) has not been fully investigated. This study provides a functionally sorted overview of HLV cortices for processing “what” and “where” visual perceptions and it investigates their atrophy in AD and MCI patients. Based upon activation likelihood estimation (ALE), brain regions responsible for processing five categories of visual perceptions included in “what” and “where” visions (i.e., object, face, word, motion, and spatial visions) were analyzed, and subsequent contrast analyses were performed to show regions with conjunctive and specific activations for processing these visual functions. Next, based on the resulting ALE maps, the atrophy of HLV cortices in AD and MCI patients was evaluated using voxel-based morphometry. Our ALE results showed brain regions for processing visual perception across the five categories, as well as areas of conjunction and specification. Our comparisons of gray matter (GM) volume demonstrated atrophy of three “where” visual cortices in late MCI group and extensive atrophy of HLV cortices (25 regions in both “what” and “where” visual cortices) in AD group. In addition, the GM volume of atrophied visual cortices in AD and MCI subjects was found to be correlated to the deterioration of overall cognitive status and to the cognitive performances related to memory, execution, and object recognition functions. In summary, these findings may add to our understanding of HLV network organization and of the evolution of visual perceptual dysfunction in AD as the disease progresses. PMID:27445770

  18. Comparison promotes learning and transfer of relational categories.

    PubMed

    Kurtz, Kenneth J; Boukrina, Olga; Gentner, Dedre

    2013-07-01

    We investigated the effect of co-presenting training items during supervised classification learning of novel relational categories. Strong evidence exists that comparison induces a structural alignment process that renders common relational structure more salient. We hypothesized that comparisons between exemplars would facilitate learning and transfer of categories that cohere around a common relational property. The effect of comparison was investigated using learning trials that elicited a separate classification response for each item in presentation pairs that could be drawn from the same or different categories. This methodology ensures consideration of both items and invites comparison through an implicit same-different judgment inherent in making the two responses. In a test phase measuring learning and transfer, the comparison group significantly outperformed a control group receiving an equivalent training session of single-item classification learning. Comparison-based learners also outperformed the control group on a test of far transfer, that is, the ability to accurately classify items from a novel domain that was relationally alike, but surface-dissimilar, to the training materials. Theoretical and applied implications of this comparison advantage are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  19. Integrating visual learning within a model-based ATR system

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark; Nebrich, Mark

    2017-05-01

    Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.

  20. Visual Learning in Application of Integration

    NASA Astrophysics Data System (ADS)

    Bt Shafie, Afza; Barnachea Janier, Josefina; Bt Wan Ahmad, Wan Fatimah

Innovative use of technology can improve the way Mathematics is taught and can enhance students' learning of concepts through visualization. Visualization in Mathematics refers to the use of text, pictures, graphs, and animations to hold learners' attention so that they learn the concepts. This paper describes the use of a developed multimedia courseware as an effective tool for visual learning of mathematics. The focus is on the application of integration, a topic in Engineering Mathematics 2, a course offered to foundation students at Universiti Teknologi PETRONAS. A questionnaire was distributed to obtain feedback on the visual representation and on students' attitudes towards using visual representation as a learning tool. The questionnaire consists of three sections: courseware design (Part A), courseware usability (Part B), and attitudes towards using the courseware (Part C). The results showed that students found the use of visual representation beneficial in learning the topic.

  1. Mental Health Risk Adjustment with Clinical Categories and Machine Learning.

    PubMed

    Shrestha, Akritee; Bergquist, Savannah; Montz, Ellen; Rose, Sherri

    2017-12-15

To propose nonparametric ensemble machine learning for mental health and substance use disorders (MHSUD) spending risk adjustment formulas, including considering Clinical Classification Software (CCS) categories as diagnostic covariates over the commonly used Hierarchical Condition Category (HCC) system. 2012-2013 Truven MarketScan database. We implement 21 algorithms to predict MHSUD spending, as well as a weighted combination of these algorithms called super learning. The algorithm collection included seven unique algorithms that were supplied with three differing sets of MHSUD-related predictors alongside demographic covariates: HCC, CCS, and HCC + CCS diagnostic variables. Performance was evaluated based on cross-validated R² and predictive ratios. Results show that super learning had the best performance based on both metrics. The top single algorithm was random forests, which improved on ordinary least squares regression by 10 percent with respect to relative efficiency. CCS category-based formulas were generally more predictive of MHSUD spending compared to HCC-based formulas. Literature supports the potential benefit of implementing a separate MHSUD spending risk adjustment formula. Our results suggest there is an incentive to explore machine learning for MHSUD-specific risk adjustment, as well as considering CCS categories over HCCs. © Health Research and Educational Trust.
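Super learning is a stacking procedure: each candidate algorithm's out-of-fold predictions are combined with weights chosen to minimize cross-validated error. A minimal sketch with two stand-in base learners, a grand-mean predictor and a one-variable least-squares fit (the study itself used 21 algorithms on claims data; everything here is illustrative):

```python
def cv_stack(x, y, k=5):
    """Toy super learner: weight two base learners by cross-validated error."""
    def fit_mean(xs, ys):
        m = sum(ys) / len(ys)
        return lambda x_: m
    def fit_linreg(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((a - mx) ** 2 for a in xs) or 1e-12
        b = sum((a - mx) * (c - my) for a, c in zip(xs, ys)) / sxx
        return lambda x_: my + b * (x_ - mx)
    learners = [fit_mean, fit_linreg]
    # Collect out-of-fold predictions for each base learner.
    oof = [[0.0] * len(x) for _ in learners]
    for f in range(k):
        test = set(range(f, len(x), k))
        tr_x = [a for i, a in enumerate(x) if i not in test]
        tr_y = [c for i, c in enumerate(y) if i not in test]
        for j, fit in enumerate(learners):
            g = fit(tr_x, tr_y)
            for i in test:
                oof[j][i] = g(x[i])
    # Meta-step: grid-search convex weights minimizing out-of-fold squared error.
    best_w, best_err = 0.0, float("inf")
    for step in range(21):
        w = step / 20.0  # weight on the linear learner
        err = sum((w * oof[1][i] + (1 - w) * oof[0][i] - y[i]) ** 2
                  for i in range(len(x)))
        if err < best_err:
            best_w, best_err = w, err
    # Refit both learners on all data; blend with the chosen weights.
    full = [fit(x, y) for fit in learners]
    return (lambda x_: best_w * full[1](x_) + (1 - best_w) * full[0](x_)), best_w
```

On data with a real linear signal, the meta-step puts nearly all weight on the regression learner; on pure noise it falls back toward the mean, which is the point of weighting by cross-validated rather than in-sample error.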

  2. Adaptive categorization of ART networks in robot behavior learning using game-theoretic formulation.

    PubMed

    Fung, Wai-keung; Liu, Yun-hui

    2003-12-01

Adaptive Resonance Theory (ART) networks are employed in robot behavior learning. Online robot behavior learning presents two difficulties: (1) memory requirements grow exponentially with time, and (2) it is difficult for operators to specify learning-task accuracy and to control learning attention before learning begins. To remedy these difficulties, this paper introduces an adaptive categorization mechanism in ART networks for categorizing perceptual and action patterns. A game-theoretic formulation of adaptive categorization is proposed in which the vigilance parameter is adapted to control the size of the categories formed. The proposed vigilance parameter update rule helps improve categorization performance by stabilizing the number of categories and solves the problem of selecting the initial vigilance parameter prior to pattern categorization in traditional ART networks. Behavior learning experiments with a physical robot demonstrate the effectiveness of the proposed adaptive categorization mechanism in ART networks.
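The abstract does not give the game-theoretic update rule itself, but the role the vigilance parameter plays can be sketched with a bare-bones, fuzzy-ART-style categorizer (hypothetical values throughout): higher vigilance forces an input to match a category prototype more closely before joining it, so more, smaller categories form.

```python
def art_categorize(patterns, vigilance, beta=1.0):
    """Minimal fuzzy-ART-style clustering; vigilance controls category size."""
    categories = []  # each category is a prototype vector
    labels = []
    for p in patterns:
        best, best_match = None, -1.0
        for j, w in enumerate(categories):
            # Fuzzy AND (component-wise min) measures how well w matches p.
            match = sum(min(a, b) for a, b in zip(p, w)) / (sum(p) or 1e-12)
            if match >= vigilance and match > best_match:
                best, best_match = j, match
        if best is None:
            categories.append(list(p))      # vigilance test failed: new category
        else:
            w = categories[best]            # resonance: move prototype toward p
            categories[best] = [(1 - beta) * a + beta * min(a, b)
                                for a, b in zip(w, p)]
        labels.append(best if best is not None else len(categories) - 1)
    return categories, labels
```

With the same inputs, a low vigilance yields a few coarse categories and a high vigilance yields one category per distinct pattern, which is why adapting vigilance online amounts to adapting category granularity.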

  3. Neurophysiological Evidence for Categorical Perception of Color

    ERIC Educational Resources Information Center

    Holmes, Amanda; Franklin, Anna; Clifford, Alexandra; Davies, Ian

    2009-01-01

    The aim of this investigation was to examine the time course and the relative contributions of perceptual and post-perceptual processes to categorical perception (CP) of color. A visual oddball task was used with standard and deviant stimuli from same (within-category) or different (between-category) categories, with chromatic separations for…

  4. 77 FR 9163 - Removal of Category IIIa, IIIb, and IIIc Definitions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-16

    ... to Docket Operations at 202-493-2251. Privacy: The FAA will post all comments it receives, without..., and IIIc operations. Category III aircraft operations are precision approach and landing operations... approach and landing with a runway visual range (RVR) below 1000 feet is considered a Category III...

  5. The Use of Aftereffects in the Study of Relationships among Emotion Categories

    ERIC Educational Resources Information Center

    Rutherford, M. D.; Chattha, Harnimrat Monica; Krysko, Kristen M.

    2008-01-01

    The perception of visual aftereffects has been long recognized, and these aftereffects reveal a relationship between perceptual categories. Thus, emotional expression aftereffects can be used to map the categorical relationships among emotion percepts. One might expect a symmetric relationship among categories, but an evolutionary, functional…

  6. Learning of grammar-like visual sequences by adults with and without language-learning disabilities.

    PubMed

    Aguilar, Jessica M; Plante, Elena

    2014-08-01

Two studies examined learning of grammar-like visual sequences to determine whether a general deficit in statistical learning characterizes adults with language-learning disabilities. Furthermore, we tested the hypothesis that difficulty in sustaining attention during the learning task might account for differences in statistical learning. In Study 1, adults with normal language (NL) or language-learning disability (LLD) were familiarized with the visual artificial grammar and then tested using items that conformed to or deviated from the grammar. In Study 2, a second sample of adults with NL and LLD were presented auditory word pairs with weak semantic associations (e.g., groom + clean) along with the visual learning task. Participants were instructed to attend to the visual sequences and to ignore the auditory stimuli. Incidental encoding of these words would indicate reduced attention to the primary task. In Studies 1 and 2, both groups demonstrated learning and generalization of the artificial grammar. In Study 2, neither the NL nor the LLD group appeared to encode the words presented during the learning phase. The results argue against a general deficit in statistical learning for individuals with LLD and demonstrate that both NL and LLD learners can ignore extraneous auditory stimuli during visual learning.
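An artificial grammar of this kind is typically a small finite-state machine: grammatical items are random walks through it, and deviant items break a transition. A sketch with a made-up Reber-style grammar (the study's actual grammar and symbol set are not given in the abstract):

```python
import random

# Hypothetical finite-state grammar: each state maps to the symbols it may
# emit and the state each symbol leads to; None marks the accepting end.
GRAMMAR = {
    0: [("T", 1), ("V", 2)],
    1: [("P", 1), ("X", 3)],
    2: [("X", 2), ("V", 3)],
    3: [("S", None)],
}

def generate(rng):
    """Emit one grammatical string by a random walk through the grammar."""
    state, out = 0, []
    while state is not None:
        sym, state = rng.choice(GRAMMAR[state])
        out.append(sym)
    return "".join(out)

def conforms(s):
    """Check whether a string could have been produced by the grammar."""
    states = {0}
    for ch in s:
        states = {nxt for st in states if st is not None
                  for sym, nxt in GRAMMAR.get(st, []) if sym == ch}
        if not states:
            return False
    return None in states
```

Test items in such paradigms are then drawn from `generate` (conforming) or edited to violate one transition (deviant), and learning is measured as above-chance endorsement of the former.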

  7. Category Specificity in Normal Episodic Learning: Applications to Object Recognition and Category-Specific Agnosia

    ERIC Educational Resources Information Center

    Bukach, Cindy M.; Bub, Daniel N.; Masson, Michael E. J.; Lindsay, D. Stephen

    2004-01-01

    Studies of patients with category-specific agnosia (CSA) have given rise to multiple theories of object recognition, most of which assume the existence of a stable, abstract semantic memory system. We applied an episodic view of memory to questions raised by CSA in a series of studies examining normal observers' recall of newly learned attributes…

  8. Visual Learning Alters the Spontaneous Activity of the Resting Human Brain: An fNIRS Study

    PubMed Central

    Niu, Haijing; Li, Hao; Sun, Li; Su, Yongming; Huang, Jing; Song, Yan

    2014-01-01

    Resting-state functional connectivity (RSFC) has been widely used to investigate spontaneous brain activity that exhibits correlated fluctuations. RSFC has been found to be changed along the developmental course and after learning. Here, we investigated whether and how visual learning modified the resting oxygenated hemoglobin (HbO) functional brain connectivity by using functional near-infrared spectroscopy (fNIRS). We demonstrate that after five days of training on an orientation discrimination task constrained to the right visual field, resting HbO functional connectivity and directed mutual interaction between high-level visual cortex and frontal/central areas involved in the top-down control were significantly modified. Moreover, these changes, which correlated with the degree of perceptual learning, were not limited to the trained left visual cortex. We conclude that the resting oxygenated hemoglobin functional connectivity could be used as a predictor of visual learning, supporting the involvement of high-level visual cortex and the involvement of frontal/central cortex during visual perceptual learning. PMID:25243168

  10. Perceptual learning increases the strength of the earliest signals in visual cortex.

    PubMed

    Bao, Min; Yang, Lin; Rios, Cristina; He, Bin; Engel, Stephen A

    2010-11-10

    Training improves performance on most visual tasks. Such perceptual learning can modify how information is read out from, and represented in, later visual areas, but effects on early visual cortex are controversial. In particular, it remains unknown whether learning can reshape neural response properties in early visual areas independent from feedback arising in later cortical areas. Here, we tested whether learning can modify feedforward signals in early visual cortex as measured by the human electroencephalogram. Fourteen subjects were trained for >24 d to detect a diagonal grating pattern in one quadrant of the visual field. Training improved performance, reducing the contrast needed for reliable detection, and also reliably increased the amplitude of the earliest component of the visual evoked potential, the C1. Control orientations and locations showed smaller effects of training. Because the C1 arises rapidly and has a source in early visual cortex, our results suggest that learning can increase early visual area response through local receptive field changes without feedback from later areas.

  11. Unconscious symmetrical inferences: A role of consciousness in event integration.

    PubMed

    Alonso, Diego; Fuentes, Luis J; Hommel, Bernhard

    2006-06-01

    Explicit and implicit learning have been attributed to different learning processes that create different types of knowledge structures. Consistent with that claim, our study provides evidence that people integrate stimulus events differently when consciously aware versus unaware of the relationship between the events. In a first, acquisition phase participants sorted words into two categories (A and B), which were fully predicted by task-irrelevant primes-the labels of two other, semantically unrelated categories (C and D). In a second, test phase participants performed a lexical decision task, in which all word stimuli stemmed from the previous prime categories (C and D) and the (now nonpredictive) primes were the labels of the previous target categories (A and B). Reliable priming effects in the second phase demonstrated that bidirectional associations between the respective categories had been formed in the acquisition phase (A<-->C and B<-->D), but these effects were found only in participants that were unaware of the relationship between the categories! We suggest that unconscious, implicit learning of event relationships results in the rather unsophisticated integration (i.e., bidirectional association) of the underlying event representations, whereas explicit learning takes the meaning of the order of the events into account, and thus creates unidirectional associations.

  12. Cognitive Strategies for Learning from Static and Dynamic Visuals.

    ERIC Educational Resources Information Center

    Lewalter, D.

    2003-01-01

    Studied the effects of including static or dynamic visuals in an expository text on a learning outcome and the use of learning strategies when working with these visuals. Results for 60 undergraduates for both types of illustration indicate different frequencies in the use of learning strategies relevant for the learning outcome. (SLD)

  13. The Effects of Similarity on High-Level Visual Working Memory Processing.

    PubMed

    Yang, Li; Mo, Lei

    2017-01-01

Similarity has been observed to have opposite effects on visual working memory (VWM) for complex images. How can these discrepant results be reconciled? To answer this question, we used a change-detection paradigm to test visual working memory performance for multiple real-world objects. We found that working memory for moderate similarity items was worse than that for either high or low similarity items. This pattern was unaffected by manipulations of stimulus type (faces vs. scenes), encoding duration (limited vs. self-paced), and presentation format (simultaneous vs. sequential). We also found that the similarity effects differed in strength in different categories (scenes vs. faces). These results suggest that complex real-world objects are represented using a centre-surround inhibition organization. These results support the category-specific cortical resource theory and further suggest that centre-surround inhibition organization may differ by category.

  14. The role of feedback contingency in perceptual category learning.

    PubMed

    Ashby, F Gregory; Vucovich, Lauren E

    2016-11-01

    Feedback is highly contingent on behavior if it eventually becomes easy to predict, and weakly contingent on behavior if it remains difficult or impossible to predict even after learning is complete. Many studies have demonstrated that humans and nonhuman animals are highly sensitive to feedback contingency, but no known studies have examined how feedback contingency affects category learning, and current theories assign little or no importance to this variable. Two experiments examined the effects of contingency degradation on rule-based and information-integration category learning. In rule-based tasks, optimal accuracy is possible with a simple explicit rule, whereas optimal accuracy in information-integration tasks requires integrating information from 2 or more incommensurable perceptual dimensions. In both experiments, participants each learned rule-based or information-integration categories under either high or low levels of feedback contingency. The exact same stimuli were used in all 4 conditions, and optimal accuracy was identical in every condition. Learning was good in both high-contingency conditions, but most participants showed little or no evidence of learning in either low-contingency condition. Possible causes of these effects, as well as their theoretical implications, are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. The Role of Feedback Contingency in Perceptual Category Learning

    PubMed Central

    Ashby, F. Gregory; Vucovich, Lauren E.

    2016-01-01

    Feedback is highly contingent on behavior if it eventually becomes easy to predict, and weakly contingent on behavior if it remains difficult or impossible to predict even after learning is complete. Many studies have demonstrated that humans and nonhuman animals are highly sensitive to feedback contingency, but no known studies have examined how feedback contingency affects category learning, and current theories assign little or no importance to this variable. Two experiments examined the effects of contingency degradation on rule-based and information-integration category learning. In rule-based tasks, optimal accuracy is possible with a simple explicit rule, whereas optimal accuracy in information-integration tasks requires integrating information from two or more incommensurable perceptual dimensions. In both experiments, participants each learned rule-based or information-integration categories under either high or low levels of feedback contingency. The exact same stimuli were used in all four conditions and optimal accuracy was identical in every condition. Learning was good in both high-contingency conditions, but most participants showed little or no evidence of learning in either low-contingency condition. Possible causes of these effects are discussed, as well as their theoretical implications. PMID:27149393
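
The notion of feedback contingency in these two records can be made concrete with a small simulation (ours, not the authors' procedure): contingency is operationalised here as how well the feedback signal can be predicted from the response. Under degraded contingency, "correct" feedback arrives at a fixed rate regardless of what the participant did.

```python
# Toy contrast between high- and low-contingency feedback schedules.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
correct = rng.random(n) < 0.8          # whether each response was objectively correct

# High contingency: feedback simply reports correctness.
fb_high = correct.copy()

# Degraded contingency: "correct" feedback is delivered at a fixed rate,
# independently of the response.
fb_low = rng.random(n) < 0.8

def predictability(response_correct, feedback):
    """Accuracy of the best response-based prediction of the feedback."""
    p_fb_given_correct = feedback[response_correct].mean()
    p_fb_given_error = feedback[~response_correct].mean()
    pred = np.where(response_correct, p_fb_given_correct > 0.5, p_fb_given_error > 0.5)
    return (pred == feedback).mean()

print(predictability(correct, fb_high))  # 1.0: feedback fully predictable
print(predictability(correct, fb_low))   # ~0.8: no better than the base rate
```

Note that overall reinforcement rate and optimal accuracy are matched across the two schedules, mirroring the design point in the abstract: only the predictability of feedback from behavior differs.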

  16. Media/Device Configurations for Platoon Leader Tactical Training

    DTIC Science & Technology

    1985-02-01

    …munication and visual communication signals. Inputs to the Platoon Leader: the device should simulate the real-time receipt of all tactical voice…communication, audio and visual battlefield cues, and visual communication signals. Table 4 (Continued) Functional Capability Categories and…battlefield cues, and visual communication signals. 0.8 Receipt of limited tactical voice communication, plus audio and visual battlefield cues, and visual

  17. Selective impairment of living things and musical instruments on a verbal 'Semantic Knowledge Questionnaire' in a case of apperceptive visual agnosia.

    PubMed

    Masullo, Carlo; Piccininni, Chiara; Quaranta, Davide; Vita, Maria Gabriella; Gaudino, Simona; Gainotti, Guido

    2012-10-01

    Semantic memory was investigated in a patient (MR) affected by a severe apperceptive visual agnosia, due to an ischemic cerebral lesion bilaterally affecting the infero-mesial parts of the temporo-occipital cortices. The study was conducted by means of a Semantic Knowledge Questionnaire (Laiacona, Barbarotto, Trivelli, & Capitani, 1993), which takes separately into account four categories of living beings (animals, fruits, vegetables and body parts) and four of artefacts (furniture, tools, vehicles and musical instruments), does not require a visual analysis, and allows one to distinguish errors concerning super-ordinate categorization, perceptual features and functional/encyclopedic knowledge. When the total number of errors obtained on all the categories of living and non-living beings was considered, a non-significant trend toward a higher number of errors on living stimuli was observed. This difference, however, became significant when body parts and musical instruments were excluded from the analysis. Furthermore, the number of errors obtained on musical instruments was similar to that obtained on the living categories of animals, fruits and vegetables, and significantly higher than that obtained on the other artefact categories. This difference remained significant when familiarity, frequency of use and prototypicality of each stimulus entered into a logistic regression analysis. On the other hand, a separate analysis of errors obtained on questions exploring super-ordinate categorization, perceptual features and functional/encyclopedic attributes showed that the differences between living and non-living stimuli, and between musical instruments and other artefact categories, were mainly due to errors on questions exploring perceptual features.
All these data are at variance with the 'domains of knowledge' hypothesis, which assumes that the breakdown of different categories of living and non-living things respects the distinction between biological entities and artefacts, and they support models assuming that 'category-specific semantic disorders' are the by-product of the differential weighting that visual-perceptual and functional (or action-related) attributes have in the construction of different biological and artefact categories. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Progress with modeling activity landscapes in drug discovery.

    PubMed

    Vogt, Martin

    2018-04-19

    Activity landscapes (ALs) are representations and models of compound data sets annotated with a target-specific activity. In contrast to quantitative structure-activity relationship (QSAR) models, ALs aim at characterizing structure-activity relationships (SARs) on a large-scale level encompassing all active compounds for specific targets. The popularity of AL modeling has grown substantially with the public availability of large activity-annotated compound data sets. AL modeling crucially depends on molecular representations and similarity metrics used to assess structural similarity. Areas covered: The concepts of AL modeling are introduced and its basis in quantitatively assessing molecular similarity is discussed. The different types of AL modeling approaches are introduced. AL designs can broadly be divided into three categories: compound-pair based, dimensionality reduction, and network approaches. Recent developments for each of these categories are discussed focusing on the application of mathematical, statistical, and machine learning tools for AL modeling. AL modeling using chemical space networks is covered in more detail. Expert opinion: AL modeling has remained a largely descriptive approach for the analysis of SARs. Beyond mere visualization, the application of analytical tools from statistics, machine learning and network theory has aided in the sophistication of AL designs and provides a step forward in transforming ALs from descriptive to predictive tools. To this end, optimizing representations that encode activity relevant features of molecules might prove to be a crucial step.

  19. Visual Aversive Learning Compromises Sensory Discrimination.

    PubMed

    Shalev, Lee; Paz, Rony; Avidan, Galia

    2018-03-14

    Aversive learning is thought to modulate perceptual thresholds, which can lead to overgeneralization. However, it remains undetermined whether this modulation is domain specific or a general effect. Moreover, despite the unique role of the visual modality in human perception, it is unclear whether this aspect of aversive learning exists in this modality. The current study was designed to examine the effect of visual aversive outcomes on the perception of basic visual and auditory features. We tested the ability of healthy participants, both males and females, to discriminate between neutral stimuli, before and after visual learning. In each experiment, neutral stimuli were associated with aversive images in an experimental group and with neutral images in a control group. Participants demonstrated a deterioration in discrimination (higher discrimination thresholds) only after aversive learning. This deterioration was measured for both auditory (tone frequency) and visual (orientation and contrast) features. The effect was replicated in five different experiments and lasted for at least 24 h. fMRI neural responses and pupil size were also measured during learning. We showed an increase in neural activations in the anterior cingulate cortex, insula, and amygdala during aversive compared with neutral learning. Interestingly, the early visual cortex showed increased brain activity during aversive compared with neutral context trials, with identical visual information. Our findings imply the existence of a central multimodal mechanism, which modulates early perceptual properties, following exposure to negative situations. Such a mechanism could contribute to abnormal responses that underlie anxiety states, even in new and safe environments. SIGNIFICANCE STATEMENT Using a visual aversive-learning paradigm, we found deteriorated discrimination abilities for visual and auditory stimuli that were associated with visual aversive stimuli. 
We showed increased neural activations in the anterior cingulate cortex, insula, and amygdala during aversive learning, compared with neutral learning. Importantly, similar findings were also evident in the early visual cortex during trials with aversive/neutral context, but with identical visual information. The demonstration of this phenomenon in the visual modality is important, as it provides support to the notion that aversive learning can influence perception via a central mechanism, independent of input modality. Given the dominance of the visual system in human perception, our findings hold relevance to daily life, as well as imply a potential etiology for anxiety disorders. Copyright © 2018 the authors 0270-6474/18/382766-14$15.00/0.

  20. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding

    PubMed Central

    Desantis, Andrea; Haggard, Patrick

    2016-01-01

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063

  1. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-12-16

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events.

  2. Interactions between statistical and semantic information in infant language development

    PubMed Central

    Lany, Jill; Saffran, Jenny R.

    2013-01-01

    Infants can use statistical regularities to form rudimentary word categories (e.g. noun, verb) and to learn the meanings common to words from those categories. Using an artificial language methodology, we probed the mechanisms by which two types of statistical cues (distributional and phonological regularities) affect word learning. Because linking distributional cues versus phonological information to semantics makes different computational demands on learners, we also tested whether their use is related to language proficiency. We found that 22-month-old infants with smaller vocabularies generalized using phonological cues; however, infants with larger vocabularies showed the opposite pattern of results, generalizing based on distributional cues. These findings suggest that both phonological and distributional cues marking word categories promote early word learning. Moreover, while correlations between these cues are important to forming word categories, we found that infants’ weighting of these cues in subsequent word-learning tasks changes over the course of early language development. PMID:21884336

  3. 9 CFR 160.1 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    .... Visual study of the physical appearance, physical condition, and behavior of animals (singly or in groups... other than Category II animals, e.g., cats and dogs. Category II animals. Food and fiber animal species...

  4. Learning to associate orientation with color in early visual areas by associative decoded fMRI neurofeedback

    PubMed Central

    Amano, Kaoru; Shibata, Kazuhisa; Kawato, Mitsuo; Sasaki, Yuka; Watanabe, Takeo

    2016-01-01

    Associative learning is an essential brain process where the contingency of different items increases after training. Associative learning has been found to occur in many brain regions [1-4]. However, there is no clear evidence that associative learning of visual features occurs in early visual areas, although a number of studies have indicated that learning of a single visual feature (perceptual learning) involves early visual areas [5-8]. Here, via decoded functional magnetic resonance imaging (fMRI) neurofeedback, termed “DecNef” [9], we tested whether associative learning of color and orientation can be created in early visual areas. During three days' training, DecNef induced fMRI signal patterns that corresponded to a specific target color (red) mostly in early visual areas while a vertical achromatic grating was physically presented to participants. As a result, participants came to perceive “red” significantly more frequently than “green” in an achromatic vertical grating. This effect was also observed 3 to 5 months after the training. These results suggest that long-term associative learning of the two different visual features such as color and orientation was created most likely in early visual areas. This newly extended technique that induces associative learning is called “A(ssociative)-DecNef” and may be used as an important tool for understanding and modifying brain functions, since associations are fundamental and ubiquitous functions in the brain. PMID:27374335

  5. Learning to Associate Orientation with Color in Early Visual Areas by Associative Decoded fMRI Neurofeedback.

    PubMed

    Amano, Kaoru; Shibata, Kazuhisa; Kawato, Mitsuo; Sasaki, Yuka; Watanabe, Takeo

    2016-07-25

    Associative learning is an essential brain process where the contingency of different items increases after training. Associative learning has been found to occur in many brain regions [1-4]. However, there is no clear evidence that associative learning of visual features occurs in early visual areas, although a number of studies have indicated that learning of a single visual feature (perceptual learning) involves early visual areas [5-8]. Here, via decoded fMRI neurofeedback termed "DecNef" [9], we tested whether associative learning of orientation and color can be created in early visual areas. During 3 days of training, DecNef induced fMRI signal patterns that corresponded to a specific target color (red) mostly in early visual areas while a vertical achromatic grating was physically presented to participants. As a result, participants came to perceive "red" significantly more frequently than "green" in an achromatic vertical grating. This effect was also observed 3-5 months after the training. These results suggest that long-term associative learning of two different visual features such as orientation and color was created, most likely in early visual areas. This newly extended technique that induces associative learning is called "A-DecNef," and it may be used as an important tool for understanding and modifying brain functions because associations are fundamental and ubiquitous functions in the brain. Copyright © 2016 Elsevier Ltd. All rights reserved.
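
The decoded-neurofeedback (DecNef) loop described in these two records can be sketched schematically. The code below is our illustration on synthetic data, with a nearest-centroid decoder standing in for the sparse logistic decoders typically used in this literature: a decoder trained on "red" vs. "green" voxel patterns scores each new pattern, and the decoded likelihood of the target class determines the feedback.

```python
# Schematic DecNef sketch on synthetic "voxel" data (our illustration).
import numpy as np

rng = np.random.default_rng(1)
n_vox = 50

# Synthetic mean patterns for the two colour classes, plus noisy training trials.
mu_red, mu_green = rng.normal(0, 1, n_vox), rng.normal(0, 1, n_vox)
red_trials = mu_red + rng.normal(0, 1.0, (100, n_vox))
green_trials = mu_green + rng.normal(0, 1.0, (100, n_vox))

# Nearest-centroid decoder (a stand-in for the sparse logistic decoder).
c_red, c_green = red_trials.mean(0), green_trials.mean(0)

def p_target(pattern, target=c_red, other=c_green):
    """Softmax likelihood that a pattern belongs to the target ("red") class."""
    d_t = np.sum((pattern - target) ** 2)
    d_o = np.sum((pattern - other) ** 2)
    return 1.0 / (1.0 + np.exp(0.5 * (d_t - d_o)))

# Feedback on each induction trial is proportional to the decoded likelihood,
# rewarding patterns that resemble the target class.
trial = mu_red + rng.normal(0, 1.0, n_vox)   # a pattern resembling "red"
reward = p_target(trial)
assert reward > 0.5
```

The key design feature carried over from the abstract is that the participant never sees red: only the decoder's output, computed from the induced activity pattern, drives the reward.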

  6. Word Learning: Homophony and the Distribution of Learning Exemplars

    ERIC Educational Resources Information Center

    Dautriche, Isabelle; Chemla, Emmanuel; Christophe, Anne

    2016-01-01

    How do children infer the meaning of a word? Current accounts of word learning assume that children expect a word to map onto exactly one concept whose members form a coherent category. If this assumption was strictly true, children should infer that a homophone, such as "bat," refers to a single superordinate category that encompasses…

  7. The Impact of Personality Traits on the Affective Category of English Language Learning Strategies

    ERIC Educational Resources Information Center

    Fazeli, Seyed Hossein

    2011-01-01

    The present study aims at discovering the impact of personality traits in the prediction use of the Affective English Language Learning Strategies (AELLSs) for learners of English as a foreign language. Four instruments were used, which were Adapted Inventory for Affective English Language Learning Strategies based on Affective category of…

  8. Tiger salamanders' (Ambystoma tigrinum) response learning and usage of visual cues.

    PubMed

    Kundey, Shannon M A; Millar, Roberto; McPherson, Justin; Gonzalez, Maya; Fitz, Aleyna; Allen, Chadbourne

    2016-05-01

    We explored tiger salamanders' (Ambystoma tigrinum) learning to execute a response within a maze as proximal visual cue conditions varied. In Experiment 1, salamanders learned to turn consistently in a T-maze for reinforcement before the maze was rotated. All learned the initial task and executed the trained turn during test, suggesting that they learned to demonstrate the reinforced response during training and continued to perform it during test. In a second experiment utilizing a similar procedure, two visual cues were placed consistently at the maze junction. Salamanders were reinforced for turning towards one cue. Cue placement was reversed during test. All learned the initial task, but executed the trained turn rather than turning towards the visual cue during test, evidencing response learning. In Experiment 3, we investigated whether a compound visual cue could control salamanders' behaviour when it was the only cue predictive of reinforcement in a cross-maze by varying start position and cue placement. All learned to turn in the direction indicated by the compound visual cue, indicating that visual cues can come to control their behaviour. Following training, testing revealed that salamanders attended to foreground stimuli over background features. Overall, these results suggest that salamanders favour learning to execute responses over learning to use visual cues, but can use visual cues if required. Our success with this paradigm offers the potential in future studies to explore salamanders' cognition further, as well as to shed light on how features of the tiger salamander's life history (e.g. hibernation and metamorphosis) impact cognition.

  9. Aminergic neuromodulation of associative visual learning in harnessed honey bees.

    PubMed

    Mancini, Nino; Giurfa, Martin; Sandoz, Jean-Christophe; Avarguès-Weber, Aurore

    2018-05-21

    The honey bee Apis mellifera is a major insect model for studying visual cognition. Free-flying honey bees learn to associate different visual cues with a sucrose reward and may deploy sophisticated cognitive strategies to this end. Yet, the neural bases of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but training them to respond appetitively to visual stimuli paired with sucrose reward is difficult. Here we succeeded in coupling visual conditioning in harnessed bees with pharmacological analyses on the role of octopamine (OA), dopamine (DA) and serotonin (5-HT) in visual learning. We also studied if and how these biogenic amines modulate sucrose responsiveness and phototaxis behaviour as intact reward and visual perception are essential prerequisites for appetitive visual learning. Our results suggest that both octopaminergic and dopaminergic signaling mediate either the appetitive sucrose signaling or the association between color and sucrose reward in the bee brain. Enhancing and inhibiting serotonergic signaling both compromised learning performances, probably via an impairment of visual perception. We thus provide a first analysis of the role of aminergic signaling in visual learning and retention in the honey bee and discuss further research trends necessary to understand the neural bases of visual cognition in this insect. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Advances and limitations of visual conditioning protocols in harnessed bees.

    PubMed

    Avarguès-Weber, Aurore; Mota, Theo

    2016-10-01

    Bees are excellent invertebrate models for studying visual learning and memory mechanisms, because of their sophisticated visual system and impressive cognitive capacities associated with a relatively simple brain. Visual learning in free-flying bees has been traditionally studied using an operant conditioning paradigm. This well-established protocol, however, can hardly be combined with invasive procedures for studying the neurobiological basis of visual learning. Different efforts have been made to develop protocols in which harnessed honey bees could associate visual cues with reinforcement, though learning performances remain poorer than those obtained with free-flying animals. Especially in the last decade, the intention of improving visual learning performances of harnessed bees led many authors to adopt distinct visual conditioning protocols, altering parameters like harnessing method, nature and duration of visual stimulation, number of trials, and inter-trial intervals, among others. As a result, the literature provides data that are hardly comparable and sometimes contradictory. In the present review, we provide an extensive analysis of the literature available on visual conditioning of harnessed bees, with special emphasis on the comparison of the diverse conditioning parameters adopted by different authors. Together with this comparative overview, we discuss how these diverse conditioning parameters could modulate visual learning performances of harnessed bees. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. The effectiveness of physics learning material based on South Kalimantan local wisdom

    NASA Astrophysics Data System (ADS)

    Hartini, Sri; Misbah; Helda; Dewantara, Dewi

    2017-08-01

    Local wisdom is an essential element to incorporate into the learning process. However, there are no Physics learning materials that contain South Kalimantan local wisdom. Therefore, it is necessary to develop Physics learning material based on South Kalimantan local wisdom. The objective of this research is to produce a product in the form of learning material based on South Kalimantan local wisdom that is feasible and effective, judged by the validity, practicality and effectiveness of the learning material and the achievement of the waja sampai kaputing (wasaka) character. This research is a research-and-development study that follows the ADDIE model. Data were obtained through a validation sheet for the learning material, a questionnaire, a test of learning outcomes and a character-assessment sheet. The results showed that (1) the validity category of the learning material was very valid, (2) the practicality category was very practical, (3) the effectiveness category was very effective, and (4) the achievement of the wasaka character was very good. In conclusion, the Physics learning materials based on South Kalimantan local wisdom are feasible and effective for use in learning activities.

  12. A new selective developmental deficit: Impaired object recognition with normal face recognition.

    PubMed

    Germine, Laura; Cashdollar, Nathan; Düzel, Emrah; Duchaine, Bradley

    2011-05-01

    Studies of developmental deficits in face recognition, or developmental prosopagnosia, have shown that individuals who have not suffered brain damage can show face recognition impairments coupled with normal object recognition (Duchaine and Nakayama, 2005; Duchaine et al., 2006; Nunn et al., 2001). However, no developmental cases with the opposite dissociation - normal face recognition with impaired object recognition - have been reported. The existence of a case of non-face developmental visual agnosia would indicate that the development of normal face recognition mechanisms does not rely on the development of normal object recognition mechanisms. To see whether a developmental variant of non-face visual object agnosia exists, we conducted a series of web-based object and face recognition tests to screen for individuals showing object recognition memory impairments but not face recognition impairments. Through this screening process, we identified AW, an otherwise normal 19-year-old female, who was then tested in the lab on face and object recognition tests. AW's performance was impaired in within-class visual recognition memory across six different visual categories (guns, horses, scenes, tools, doors, and cars). In contrast, she scored normally on seven tests of face recognition, tests of memory for two other object categories (houses and glasses), and tests of recall memory for visual shapes. Testing confirmed that her impairment was not related to a general deficit in lower-level perception, object perception, basic-level recognition, or memory. AW's results provide the first neuropsychological evidence that recognition memory for non-face visual object categories can be selectively impaired in individuals without brain damage or other memory impairment. 
These results indicate that the development of recognition memory for faces does not depend on intact object recognition memory and provide further evidence for category-specific dissociations in visual recognition. Copyright © 2010 Elsevier Srl. All rights reserved.

  13. Why some colors appear more memorable than others: A model combining categories and particulars in color working memory.

    PubMed

    Bae, Gi-Yeul; Olkkonen, Maria; Allred, Sarah R; Flombaum, Jonathan I

    2015-08-01

    Categorization with basic color terms is an intuitive and universal aspect of color perception. Yet research on visual working memory capacity has largely assumed that only continuous estimates within color space are relevant to memory. As a result, the influence of color categories on working memory remains unknown. We propose a dual content model of color representation in which color matches to objects that are either present (perception) or absent (memory) integrate category representations along with estimates of specific values on a continuous scale ("particulars"). We develop and test the model through 4 experiments. In a first experiment pair, participants reproduce a color target, both with and without a delay, using a recently influential estimation paradigm. In a second experiment pair, we use standard methods in color perception to identify boundary and focal colors in the stimulus set. The main results are that responses drawn from working memory are significantly biased away from category boundaries and toward category centers. Importantly, the same pattern of results is present without a memory delay. The proposed dual content model parsimoniously explains these results, and it should replace prevailing single content models in studies of visual working memory. More broadly, the model and the results demonstrate how the main consequence of visual working memory maintenance is the amplification of category related biases and stimulus-specific variability that originate in perception. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
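
The dual content model's central prediction, that reports are biased toward category centres and away from boundaries, follows from mixing a categorical estimate with a noisy particular. A minimal sketch of that mixing (our simplification, with hypothetical hue values in degrees):

```python
# Minimal dual-content sketch: report = weighted mix of a noisy "particular"
# and the centre of the inferred category.
import numpy as np

rng = np.random.default_rng(2)
centers = np.array([30.0, 90.0])       # hypothetical hue centres of two categories
boundary = 60.0                        # hypothetical boundary between them

def report(hue, w_cat=0.3, noise_sd=5.0, n=5000):
    """Mean reported hue: (1 - w_cat) * noisy particular + w_cat * category centre."""
    particulars = hue + rng.normal(0, noise_sd, n)
    cat_centers = np.where(particulars < boundary, centers[0], centers[1])
    return ((1 - w_cat) * particulars + w_cat * cat_centers).mean()

# A stimulus near the boundary is pulled toward its category centre:
near_boundary = report(50.0)
assert near_boundary < 50.0            # biased away from the 60-degree boundary
# A stimulus at the category centre shows little bias:
assert abs(report(30.0) - 30.0) < 1.0
```

Because the categorical component is present at encoding, the same bias appears with and without a delay, matching the no-delay result in the abstract; a longer delay would be modelled as increasing `w_cat` or `noise_sd`.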

  14. Category Learning Strategies in Younger and Older Adults: Rule Abstraction and Memorization

    PubMed Central

    Wahlheim, Christopher N.; McDaniel, Mark A.; Little, Jeri L.

    2016-01-01

    Despite the fundamental role of category learning in cognition, few studies have examined how this ability differs between younger and older adults. The present experiment examined possible age differences in category learning strategies and their effects on learning. Participants were trained on a category determined by a disjunctive rule applied to relational features. The utilization of rule- and exemplar-based strategies was indexed by self-reports and transfer performance. Based on self-reported strategies, both age groups had comparable frequencies of rule- and exemplar-based learners, but older adults had a higher frequency of intermediate learners (i.e., learners not identifying with a reliance on either rule- or exemplar-based strategies). Training performance was higher for younger than older adults regardless of the strategy utilized, showing that older adults were impaired in their ability to learn the correct rule or to remember exemplar-label associations. Transfer performance converged with strategy reports in showing higher fidelity category representations for younger adults. Younger adults with high working memory capacity were more likely to use an exemplar-based strategy, and older adults with high working memory capacity showed better training performance. Age groups did not differ in their self-reported memory beliefs, and these beliefs did not predict training strategies or performance. Overall, the present results contradict earlier findings that older adults prefer rule- to exemplar-based learning strategies, presumably to compensate for memory deficits. PMID:26950225
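
The rule-based and exemplar-based strategies contrasted in this record make different predictions for transfer items. A compact sketch with hypothetical binary-feature stimuli (the specific rule and exemplars are ours for illustration): the rule strategy applies a disjunction over features, while the exemplar strategy classifies by summed similarity to stored items, in the spirit of the generalized context model.

```python
# Rule vs. exemplar strategies on hypothetical binary-feature stimuli.
import numpy as np

def rule_strategy(x):
    """Disjunctive rule: category 1 if feature 0 OR feature 2 is present."""
    return int(x[0] == 1 or x[2] == 1)

def exemplar_strategy(x, exemplars, labels, c=2.0):
    """GCM-style: weight each stored exemplar by exp(-c * city-block distance)."""
    d = np.abs(exemplars - x).sum(axis=1)
    sim = np.exp(-c * d)
    return int(sim[labels == 1].sum() > sim[labels == 0].sum())

exemplars = np.array([[1,0,0,1], [0,1,1,0], [1,1,0,0], [0,0,0,1]])
labels = np.array([1, 1, 0, 0])        # category labels for the stored items

probe = np.array([1, 1, 0, 1])         # a transfer item the strategies disagree on
print(rule_strategy(probe))                          # 1: satisfies the rule
print(exemplar_strategy(probe, exemplars, labels))   # 0: closer overall to category-0 items
```

Transfer items like this probe are what let self-reports be corroborated behaviorally: a learner's pattern of responses on items where the two strategies diverge indicates which one they relied on.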

  15. Separability of Abstract-Category and Specific-Exemplar Visual Object Subsystems: Evidence from fMRI Pattern Analysis

    PubMed Central

    McMenamin, Brenton W.; Deason, Rebecca G.; Steele, Vaughn R.; Koutstaal, Wilma; Marsolek, Chad J.

    2014-01-01

    Previous research indicates that dissociable neural subsystems underlie abstract-category (AC) recognition and priming of objects (e.g., cat, piano) and specific-exemplar (SE) recognition and priming of objects (e.g., a calico cat, a different calico cat, a grand piano, etc.). However, the degree of separability between these subsystems is not known, despite the importance of this issue for assessing relevant theories. Visual object representations are widely distributed in visual cortex, thus a multivariate pattern analysis (MVPA) approach to analyzing functional magnetic resonance imaging (fMRI) data may be critical for assessing the separability of different kinds of visual object processing. Here we examined the neural representations of visual object categories and visual object exemplars using multi-voxel pattern analyses of brain activity elicited in visual object processing areas during a repetition-priming task. In the encoding phase, participants viewed visual objects and the printed names of other objects. In the subsequent test phase, participants identified objects that were either same-exemplar primed, different-exemplar primed, word-primed, or unprimed. In visual object processing areas, classifiers were trained to distinguish same-exemplar primed objects from word-primed objects. Then, the abilities of these classifiers to discriminate different-exemplar primed objects and word-primed objects (reflecting AC priming) and to discriminate same-exemplar primed objects and different-exemplar primed objects (reflecting SE priming) were assessed. Results indicated that (a) repetition priming in occipital-temporal regions is organized asymmetrically, such that AC priming is more prevalent in the left hemisphere and SE priming is more prevalent in the right hemisphere, and (b) AC and SE subsystems are weakly modular, not strongly modular or unified. PMID:25528436

  17. Words, shape, visual search and visual working memory in 3-year-old children.

    PubMed

    Vales, Catarina; Smith, Linda B

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: by influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that, even in very young children, heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.

  18. Conditions for the Effectiveness of Multiple Visual Representations in Enhancing STEM Learning

    ERIC Educational Resources Information Center

    Rau, Martina A.

    2017-01-01

    Visual representations play a critical role in enhancing science, technology, engineering, and mathematics (STEM) learning. Educational psychology research shows that adding visual representations to text can enhance students' learning of content knowledge, compared to text-only. But should students learn with a single type of visual…

  19. Taxonomic and ad hoc categorization within the two cerebral hemispheres.

    PubMed

    Shen, Yeshayahu; Aharoni, Bat-El; Mashal, Nira

    2015-01-01

    A typicality effect refers to categorization that is performed more quickly or more accurately for typical than for atypical members of a given category. Previous studies reported a typicality effect for category members presented in the left visual field/right hemisphere (RH), suggesting that the RH applies a similarity-based categorization strategy. However, findings regarding the typicality effect within the left hemisphere (LH) are less conclusive. The current study tested the pattern of typicality effects within each hemisphere for both taxonomic and ad hoc categories, using words presented to the left or right visual fields. Experiment 1 tested typical and atypical members of taxonomic categories as well as non-members, and Experiment 2 tested typical and atypical members of ad hoc categories as well as non-members. The results revealed a typicality effect in both hemispheres and in both types of categories. Furthermore, the RH categorized atypical stimuli more accurately than did the LH. Our findings suggest that both hemispheres rely on a similarity-based categorization strategy, but the coarse semantic coding of the RH seems to facilitate the categorization of atypical members.

  20. Noun and knowledge retrieval for biological and non-biological entities following right occipitotemporal lesions.

    PubMed

    Bruffaerts, Rose; De Weer, An-Sofie; De Grauwe, Sophie; Thys, Miek; Dries, Eva; Thijs, Vincent; Sunaert, Stefan; Vandenbulcke, Mathieu; De Deyne, Simon; Storms, Gerrit; Vandenberghe, Rik

    2014-09-01

    We investigated the critical contribution of right ventral occipitotemporal cortex to knowledge of visual and functional-associative attributes of biological and non-biological entities and how this relates to category-specificity during confrontation naming. In a consecutive series of 7 patients with lesions confined to right ventral occipitotemporal cortex, we conducted an extensive assessment of oral generation of visual-sensory and functional-associative features in response to the names of biological and nonbiological entities. Subjects also performed a confrontation naming task for these categories. Our main novel finding related to a unique case with a small lesion confined to right medial fusiform gyrus who showed disproportionate naming impairment for nonbiological versus biological entities, specifically for tools. Generation of visual and functional-associative features was preserved for biological and non-biological entities. In two other cases, who had a relatively small posterior lesion restricted to primary visual and posterior fusiform cortex, retrieval of visual attributes was disproportionately impaired compared to functional-associative attributes, in particular for biological entities. However, these cases did not show a category-specific naming deficit. Two final cases with the largest lesions showed a classical dissociation between biological versus nonbiological entities during naming, with normal feature generation performance. This is the first lesion-based evidence of a critical contribution of the right medial fusiform cortex to tool naming. Second, dissociations along the dimension of attribute type during feature generation do not co-occur with category-specificity during naming in the current patient sample. Copyright © 2014 Elsevier Ltd. All rights reserved.
