Changing viewer perspectives reveals constraints to implicit visual statistical learning.
Jiang, Yuhong V; Swallow, Khena M
2014-10-07
Statistical learning, the learning of environmental regularities to guide behavior, likely plays an important role in natural human behavior. One potential use is in search for valuable items. Because visual statistical learning can be acquired quickly and without intention or awareness, it could optimize search and thereby conserve energy. For this to be true, however, visual statistical learning needs to be viewpoint invariant, facilitating search even when people walk around. To test whether implicit visual statistical learning of spatial information is viewpoint independent, we asked participants to perform a visual search task from variable locations around a monitor placed flat on a stand. Unbeknownst to participants, the target was more often in some locations than in others. In contrast to previous research on stationary observers, visual statistical learning failed to produce a search advantage for targets in high-probability regions that were stable within the environment but variable relative to the viewer. This failure was observed even when conditions for spatial updating were optimized. However, learning was successful when the rich locations were referenced relative to the viewer. We conclude that changing viewer perspective disrupts implicit learning of the target's location probability. This form of learning shows limited integration with spatial updating or spatiotopic representations. © 2014 ARVO.
Jeste, Shafali S; Kirkham, Natasha; Senturk, Damla; Hasenstab, Kyle; Sugar, Catherine; Kupelian, Chloe; Baker, Elizabeth; Sanders, Andrew J; Shimizu, Christina; Norona, Amanda; Paparella, Tanya; Freeman, Stephanny F N; Johnson, Scott P
2015-01-01
Statistical learning is characterized by detection of regularities in one's environment without an awareness or intention to learn, and it may play a critical role in language and social behavior. Accordingly, in this study we investigated the electrophysiological correlates of visual statistical learning in young children with autism spectrum disorder (ASD) using an event-related potential shape learning paradigm, and we examined the relation between visual statistical learning and cognitive function. Compared to typically developing (TD) controls, the ASD group as a whole showed reduced evidence of learning as defined by N1 (early visual discrimination) and P300 (attention to novelty) components. Upon further analysis, in the ASD group there was a positive correlation between N1 amplitude difference and non-verbal IQ, and a positive correlation between P300 amplitude difference and adaptive social function. Children with ASD and a high non-verbal IQ and high adaptive social function demonstrated a distinctive pattern of learning. This is the first study to identify electrophysiological markers of visual statistical learning in children with ASD. Through this work we have demonstrated heterogeneity in statistical learning in ASD that maps onto non-verbal cognition and adaptive social function. © 2014 John Wiley & Sons Ltd.
Musicians' edge: A comparison of auditory processing, cognitive abilities and statistical learning.
Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Demuth, Katherine; Arciuli, Joanne
2016-12-01
It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians' advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N = 17) and non-musicians (N = 18). Musicians had significantly better performance than non-musicians on frequency discrimination and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians' superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli. Copyright © 2016 Elsevier B.V. All rights reserved.
Learning of grammar-like visual sequences by adults with and without language-learning disabilities.
Aguilar, Jessica M; Plante, Elena
2014-08-01
Two studies examined learning of grammar-like visual sequences to determine whether a general deficit in statistical learning characterizes this population. Furthermore, we tested the hypothesis that difficulty in sustaining attention during the learning task might account for differences in statistical learning. In Study 1, adults with normal language (NL) or language-learning disability (LLD) were familiarized with the visual artificial grammar and then tested using items that conformed to or deviated from the grammar. In Study 2, a second sample of adults with NL and LLD were presented auditory word pairs with weak semantic associations (e.g., groom + clean) along with the visual learning task. Participants were instructed to attend to visual sequences and to ignore the auditory stimuli. Incidental encoding of these words would indicate reduced attention to the primary task. In Studies 1 and 2, both groups demonstrated learning and generalization of the artificial grammar. In Study 2, neither the NL nor the LLD group appeared to encode the words presented during the learning phase. The results argue against a general deficit in statistical learning for individuals with LLD and demonstrate that both NL and LLD learners can ignore extraneous auditory stimuli during visual learning.
Infants' statistical learning: 2- and 5-month-olds' segmentation of continuous visual sequences.
Slone, Lauren Krogh; Johnson, Scott P
2015-05-01
Past research suggests that infants have powerful statistical learning abilities; however, studies of infants' visual statistical learning offer differing accounts of the developmental trajectory of and constraints on this learning. To elucidate this issue, the current study tested the hypothesis that young infants' segmentation of visual sequences depends on redundant statistical cues to segmentation. A sample of 20 2-month-olds and 20 5-month-olds observed a continuous sequence of looming shapes in which unit boundaries were defined by both transitional probability and co-occurrence frequency. Following habituation, only 5-month-olds showed evidence of statistically segmenting the sequence, looking longer to a statistically improbable shape pair than to a probable pair. These results reaffirm the power of statistical learning in infants as young as 5 months but also suggest considerable development of statistical segmentation ability between 2 and 5 months of age. Moreover, the results do not support the idea that infants' ability to segment visual sequences based on transitional probabilities and/or co-occurrence frequencies is functional at the onset of visual experience, as has been suggested previously. Rather, this type of statistical segmentation appears to be constrained by the developmental state of the learner. Factors contributing to the development of statistical segmentation ability during early infancy, including memory and attention, are discussed. Copyright © 2015 Elsevier Inc. All rights reserved.
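As an illustrative aside, the two segmentation cues named in this abstract (transitional probability and co-occurrence frequency) are easy to make concrete. The following sketch uses an invented letter stream in place of looming shapes; all values are toy numbers, not data from the study:

```python
from collections import Counter

def transitional_probabilities(seq):
    """P(next | current) for each adjacent pair observed in the sequence."""
    pair_counts = Counter(zip(seq, seq[1:]))
    first_counts = Counter(seq[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# A toy stream built by concatenating two "units" (AB and CD) in varying order:
stream = list("ABCDABABCDCDAB")
tps = transitional_probabilities(stream)
cooccurrence = Counter(zip(stream, stream[1:]))

# Within-unit transitions (A->B, C->D) have TP = 1.0 and high co-occurrence
# counts; between-unit transitions are weaker, marking the unit boundaries.
```

A statistically improbable "part pair" (e.g., B followed by C) spans a unit boundary: it does occur, but with lower transitional probability than a within-unit pair, which is the contrast infants are tested on after habituation.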
Siegelman, Noam; Bogaerts, Louisa; Kronenfeld, Ofer; Frost, Ram
2017-10-07
From a theoretical perspective, most discussions of statistical learning (SL) have focused on the possible "statistical" properties that are the object of learning. Much less attention has been given to defining what "learning" is in the context of "statistical learning." One major difficulty is that SL research has been monitoring participants' performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative-forced-choice questions, which follow a brief visual or auditory familiarization stream. Is that all there is to characterizing SL abilities? Here we adopt a novel perspective for investigating the processing of regularities in the visual modality. By tracking online performance in a self-paced SL paradigm, we focus on the trajectory of learning. In a set of three experiments we show that this paradigm provides a reliable and valid signature of SL performance, and it offers important insights for understanding how statistical regularities are perceived and assimilated in the visual modality. This demonstrates the promise of integrating different operational measures into our theory of SL. © 2017 Cognitive Science Society, Inc.
Second Language Experience Facilitates Statistical Learning of Novel Linguistic Materials.
Potter, Christine E; Wang, Tianlin; Saffran, Jenny R
2017-04-01
Recent research has begun to explore individual differences in statistical learning, and how those differences may be related to other cognitive abilities, particularly their effects on language learning. In this research, we explored a different type of relationship between language learning and statistical learning: the possibility that learning a new language may also influence statistical learning by changing the regularities to which learners are sensitive. We tested two groups of participants, Mandarin Learners and Naïve Controls, at two time points, 6 months apart. At each time point, participants performed two different statistical learning tasks: an artificial tonal language statistical learning task and a visual statistical learning task. Only the Mandarin-learning group showed significant improvement on the linguistic task, whereas both groups improved equally on the visual task. These results support the view that there are multiple influences on statistical learning. Domain-relevant experiences may affect the regularities that learners can discover when presented with novel stimuli. Copyright © 2016 Cognitive Science Society, Inc.
Statistical learning and auditory processing in children with music training: An ERP study.
Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Ibrahim, Ronny; Arciuli, Joanne
2017-07-01
The question of whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with music training versus those without music training across a range of auditory and cognitive measures, including the ability to implicitly detect statistical regularities in input (statistical learning). Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained and age-matched untrained children between the ages of 9 and 11 years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks. Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (initial stimulus) elicited larger responses in the musically trained children during both auditory and visual statistical learning tasks. In addition, children's music skills were associated with performance on auditory and visual behavioural statistical learning tasks. Our data suggest that individual differences in musical skills are associated with children's ability to detect regularities. The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
Implicit Statistical Learning and Language Skills in Bilingual Children
ERIC Educational Resources Information Center
Yim, Dongsun; Rudoy, John
2013-01-01
Purpose: Implicit statistical learning in 2 nonlinguistic domains (visual and auditory) was used to investigate (a) whether linguistic experience influences the underlying learning mechanism and (b) whether there are modality constraints in predicting implicit statistical learning with age and language skills. Method: Implicit statistical learning…
Learning Across Senses: Cross-Modal Effects in Multisensory Statistical Learning
Mitchel, Aaron D.; Weiss, Daniel J.
2014-01-01
It is currently unknown whether statistical learning is supported by modality-general or modality-specific mechanisms. One issue within this debate concerns the independence of learning in one modality from learning in other modalities. In the present study, the authors examined the extent to which statistical learning across modalities is independent by simultaneously presenting learners with auditory and visual streams. After establishing baseline rates of learning for each stream independently, they systematically varied the amount of audiovisual correspondence across 3 experiments. They found that learners were able to segment both streams successfully only when the boundaries of the audio and visual triplets were in alignment. This pattern of results suggests that learners are able to extract multiple statistical regularities across modalities provided that there is some degree of cross-modal coherence. They discuss the implications of their results in light of recent claims that multisensory statistical learning is guided by modality-independent mechanisms.
Statistical learning of multisensory regularities is enhanced in musicians: An MEG study.
Paraskevopoulos, Evangelos; Chalas, Nikolas; Kartsidis, Panagiotis; Wollbrink, Andreas; Bamidis, Panagiotis
2018-07-15
The present study used magnetoencephalography (MEG) to identify the neural correlates of audiovisual statistical learning, while disentangling the differential contributions of uni- and multi-modal statistical mismatch responses in humans. The applied paradigm was based on a combination of a statistical learning paradigm and a multisensory oddball one, combining an audiovisual, an auditory and a visual stimulation stream, along with the corresponding deviances. Plasticity effects due to musical expertise were investigated by comparing the behavioral and MEG responses of musicians to non-musicians. The behavioral results indicated that the learning was successful for both musicians and non-musicians. The unimodal MEG responses are consistent with previous studies, revealing the contribution of Heschl's gyrus for the identification of auditory statistical mismatches and the contribution of medial temporal and visual association areas for the visual modality. The cortical network underlying audiovisual statistical learning was found to be partly common and partly distinct from the corresponding unimodal networks, comprising right temporal and left inferior frontal sources. Musicians showed enhanced activation in superior temporal and superior frontal gyrus. Connectivity and information processing flow amongst the sources comprising the cortical network of audiovisual statistical learning, as estimated by transfer entropy, was reorganized in musicians, indicating enhanced top-down processing. This neuroplastic effect showed a cross-modal stability between the auditory and audiovisual modalities. Copyright © 2018 Elsevier Inc. All rights reserved.
Hall, Michelle G; Mattingley, Jason B; Dux, Paul E
2015-08-01
The brain exploits redundancies in the environment to efficiently represent the complexity of the visual world. One example of this is ensemble processing, which provides a statistical summary of elements within a set (e.g., mean size). Another is statistical learning, which involves the encoding of stable spatial or temporal relationships between objects. It has been suggested that ensemble processing over arrays of oriented lines disrupts statistical learning of structure within the arrays (Zhao, Ngo, McKendrick, & Turk-Browne, 2011). Here we asked whether ensemble processing and statistical learning are mutually incompatible, or whether this disruption might occur because ensemble processing encourages participants to process the stimulus arrays in a way that impedes statistical learning. In Experiment 1, we replicated Zhao and colleagues' finding that ensemble processing disrupts statistical learning. In Experiments 2 and 3, we found that statistical learning was unimpaired by ensemble processing when task demands necessitated (a) focal attention to individual items within the stimulus arrays and (b) the retention of individual items in working memory. Together, these results are consistent with an account suggesting that ensemble processing and statistical learning can operate over the same stimuli given appropriate stimulus processing demands during exposure to regularities. © 2015 APA, all rights reserved.
A computational visual saliency model based on statistics and machine learning.
Lin, Ru-Je; Lin, Wei-Song
2014-08-01
Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases. © 2014 ARVO.
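The combination step described in this abstract (three property maps merged by a simple intersection) can be sketched minimally. The maps below are invented numbers, not the paper's SVR-based computations; the intersection is read here as an elementwise product of same-shaped maps, renormalized to sum to one:

```python
def normalize(m):
    """Scale a 2-D map so its entries sum to 1 (no-op if the map is all zero)."""
    total = sum(sum(row) for row in m)
    return [[v / total for v in row] for row in m] if total else m

def intersect(*maps):
    """Elementwise product of same-shaped 2-D maps: an 'intersection' in which
    a region scores high only if it scores high on every property."""
    out = [[1.0] * len(maps[0][0]) for _ in maps[0]]
    for m in maps:
        for i, row in enumerate(m):
            for j, v in enumerate(row):
                out[i][j] *= v
    return normalize(out)

# Invented 2x2 property maps standing in for Feature-Prior, Position-Prior,
# and Feature-Distribution over image regions:
feature_prior   = [[0.1, 0.9], [0.2, 0.8]]
position_prior  = [[0.5, 0.5], [0.9, 0.1]]
feature_distrib = [[0.3, 0.7], [0.6, 0.4]]
saliency = intersect(feature_prior, position_prior, feature_distrib)
```

Under these toy values, the top-right region dominates the resulting saliency map because it scores well on all three properties at once.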
The Developing Infant Creates a Curriculum for Statistical Learning.
Smith, Linda B; Jayaraman, Swapnaa; Clerkin, Elizabeth; Yu, Chen
2018-04-01
New efforts are using head cameras and eye-trackers worn by infants to capture everyday visual environments from the point of view of the infant learner. From this vantage point, the training sets for statistical learning develop as the sensorimotor abilities of the infant develop, yielding a series of ordered datasets for visual learning that differ in content and structure between timepoints but are highly selective at each timepoint. These changing environments may constitute a developmentally ordered curriculum that optimizes learning across many domains. Future advances in computational models will be necessary to connect the developmentally changing content and statistics of infant experience to the internal machinery that does the learning. Copyright © 2018 Elsevier Ltd. All rights reserved.
Modeling the Development of Audiovisual Cue Integration in Speech Perception.
Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C
2017-03-21
Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
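The cue-combination idea in this abstract can be sketched concretely. The example below is not the authors' GMM simulation: it assumes two already-learned Gaussian cue distributions per phonological category (all parameter values invented) and classifies a token by the product of its independent cue likelihoods:

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Invented category parameters: an auditory cue (e.g., VOT in ms) and a
# visual cue (e.g., degree of lip aperture), each as (mean, sd).
categories = {
    "/b/": {"aud": (0.0, 10.0), "vis": (2.0, 1.0)},
    "/p/": {"aud": (50.0, 10.0), "vis": (5.0, 1.0)},
}

def classify(aud_cue, vis_cue):
    """Pick the category with the highest joint (independent-cue) likelihood."""
    scores = {
        c: gauss(aud_cue, *p["aud"]) * gauss(vis_cue, *p["vis"])
        for c, p in categories.items()
    }
    return max(scores, key=scores.get)

# A clear auditory /p/ token paired with ambiguous visual input:
print(classify(48.0, 3.5))  # -> "/p/" under these invented parameters
```

Because the cue with the tighter distribution contributes a sharper likelihood, this scheme implicitly weights cues by their reliability, which is the sense in which cue weights follow from distributional statistics.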
Domain general learning: Infants use social and non-social cues when learning object statistics
Barry, Ryan A.; Graf Estes, Katharine; Rivera, Susan M.
2015-01-01
Previous research has shown that infants can learn from social cues. But is a social cue more effective at directing learning than a non-social cue? This study investigated whether 9-month-old infants (N = 55) could learn a visual statistical regularity in the presence of a distracting visual sequence when attention was directed by either a social cue (a person) or a non-social cue (a rectangle). The results show that both social and non-social cues can guide infants' attention to a visual shape sequence (and away from a distracting sequence). The social cue more effectively directed attention than the non-social cue during the familiarization phase, but the social cue did not result in significantly stronger learning than the non-social cue. The findings suggest that domain general attention mechanisms allow for the comparable learning seen in both conditions.
Statistical Regularities Attract Attention when Task-Relevant.
Alamia, Andrea; Zénon, Alexandre
2016-01-01
Visual attention seems essential for learning the statistical regularities in our environment, a process known as statistical learning. However, how attention is allocated when exploring a novel visual scene whose statistical structure is unknown remains unclear. In order to address this question, we investigated visual attention allocation during a task in which we manipulated the conditional probability of occurrence of colored stimuli, unbeknownst to the subjects. Participants were instructed to detect a target colored dot among two dots moving along separate circular paths. We evaluated implicit statistical learning, i.e., the effect of color predictability on reaction times (RTs), and recorded eye position concurrently. Attention allocation was indexed by comparing the Mahalanobis distance between the position, velocity and acceleration of the eyes and the two colored dots. We found that learning the conditional probabilities occurred very early during the course of the experiment, as shown by the fact that, starting already from the first block, predictable stimuli were detected with shorter RTs than unpredictable ones. In terms of attentional allocation, we found that the predictive stimulus attracted gaze only when it was informative about the occurrence of the target but not when it predicted the occurrence of a task-irrelevant stimulus. This suggests that attention allocation was influenced by regularities only when they were instrumental in performing the task. Moreover, we found that the attentional bias towards task-relevant predictive stimuli occurred at a very early stage of learning, concomitantly with the first effects of learning on RTs. In conclusion, these results show that statistical regularities capture visual attention only after a few occurrences, provided these regularities are instrumental to perform the task.
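As a toy illustration of the gaze-to-stimulus comparison described here, the following computes a Mahalanobis distance for 2-D positions only (the study also compared velocity and acceleration); the coordinates and the assumed gaze-noise covariance are invented:

```python
def mahalanobis_2d(x, mean, cov):
    """Mahalanobis distance of a 2-D point x from a distribution (mean, cov)."""
    dx = [x[0] - mean[0], x[1] - mean[1]]
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]  # 2x2 matrix inverse
    # dx^T * inv(cov) * dx
    m = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return m ** 0.5

# Invented numbers: current gaze sample vs. the two moving dots' positions.
gaze = (0.1, 0.2)
dot_a, dot_b = (0.0, 0.0), (1.0, 1.0)
cov = [[0.05, 0.0], [0.0, 0.05]]  # assumed isotropic gaze-noise covariance
closer = "A" if mahalanobis_2d(gaze, dot_a, cov) < mahalanobis_2d(gaze, dot_b, cov) else "B"
```

Unlike plain Euclidean distance, the Mahalanobis distance scales each dimension by the measurement noise, which is why it is a natural index of which stimulus the eyes are tracking.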
Mutual interference between statistical summary perception and statistical learning.
Zhao, Jiaying; Ngo, Nhi; McKendrick, Ryan; Turk-Browne, Nicholas B
2011-09-01
The visual system is an efficient statistician, extracting statistical summaries over sets of objects (statistical summary perception) and statistical regularities among individual objects (statistical learning). Although these two kinds of statistical processing have been studied extensively in isolation, their relationship is not yet understood. We first examined how statistical summary perception influences statistical learning by manipulating the task that participants performed over sets of objects containing statistical regularities (Experiment 1). Participants who performed a summary task showed no statistical learning of the regularities, whereas those who performed control tasks showed robust learning. We then examined how statistical learning influences statistical summary perception by manipulating whether the sets being summarized contained regularities (Experiment 2) and whether such regularities had already been learned (Experiment 3). The accuracy of summary judgments improved when regularities were removed and when learning had occurred in advance. In sum, calculating summary statistics impeded statistical learning, and extracting statistical regularities impeded statistical summary perception. This mutual interference suggests that statistical summary perception and statistical learning are fundamentally related.
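The two kinds of statistical processing contrasted in this abstract can be sketched side by side. The displays below are invented orientation sets: a summary statistic collapses each display into one number, whereas statistical learning tracks which individual items recur together across displays:

```python
from collections import Counter
from statistics import mean

# Each display is a set of line orientations (invented numbers, in degrees).
displays = [[10, 30, 50, 70], [20, 40, 60, 80], [10, 30, 50, 70]]

# Statistical summary perception: one statistic per display (e.g., mean).
summaries = [mean(d) for d in displays]

# Statistical learning: regularities among individual items across displays,
# e.g., counting which orientation pairs repeatedly co-occur in a display.
pair_counts = Counter(
    (a, b) for d in displays for a in d for b in d if a < b
)
```

The mutual interference reported in the abstract amounts to saying that computing `summaries` (which discards item identities) and accumulating `pair_counts` (which depends on them) cannot both proceed efficiently over the same exposure.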
Neger, Thordis M; Rietveld, Toni; Janse, Esther
2014-01-01
Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with 60 meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory, and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly.
TRACX2: a connectionist autoencoder using graded chunks to model infant visual statistical learning.
Mareschal, Denis; French, Robert M
2017-01-05
Even newborn infants are able to extract structure from a stream of sensory inputs; yet how this is achieved remains largely a mystery. We present a connectionist autoencoder model, TRACX2, that learns to extract sequence structure by gradually constructing chunks, storing these chunks in a distributed manner across its synaptic weights and recognizing these chunks when they re-occur in the input stream. Chunks are graded rather than all-or-nothing in nature. As chunks are learnt their component parts become more and more tightly bound together. TRACX2 successfully models the data from five experiments from the infant visual statistical learning literature, including tasks involving forward and backward transitional probabilities, low-salience embedded chunk items, part-sequences and illusory items. The model also captures performance differences across ages through the tuning of a single learning-rate parameter. These results suggest that infant statistical learning is underpinned by the same domain-general learning mechanism that operates in auditory statistical learning and, potentially, in adult artificial grammar learning. This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).
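The chunk-recognition intuition behind this record can be sketched in a few lines. The code below is not the TRACX2 architecture itself (which recursively feeds chunk representations back as input); it is a minimal autoencoder demo, under assumed stimuli, showing that reconstruction error drops for frequently recurring (within-chunk) bigrams relative to rare (between-chunk) bigrams, which is the recognition signal TRACX2 uses to decide when a chunk has formed.

```python
# Minimal sketch (hypothetical stimuli, not TRACX2 itself): a tiny
# autoencoder trained on bigrams from a pair-structured token stream.
# Within-pair bigrams recur often, so their reconstruction error falls
# below that of rare between-pair bigrams.
import numpy as np

rng = np.random.default_rng(0)
pairs = [(0, 1), (2, 3), (4, 5)]          # three two-token "words"
stream = [t for _ in range(1500) for t in pairs[rng.integers(3)]]

def onehot(i, n=6):
    v = np.zeros(n)
    v[i] = 1.0
    return v

# Single-hidden-layer autoencoder on concatenated bigram vectors.
W1 = rng.normal(0, 0.1, (12, 8))
W2 = rng.normal(0, 0.1, (8, 12))
lr = 0.1
for a, b in zip(stream, stream[1:]):
    x = np.concatenate([onehot(a), onehot(b)])
    h = np.tanh(x @ W1)
    err = (h @ W2) - x                    # reconstruction error
    g2 = np.outer(h, err)
    g1 = np.outer(x, (err @ W2.T) * (1 - h**2))
    W1 -= lr * g1
    W2 -= lr * g2

def recon_error(a, b):
    x = np.concatenate([onehot(a), onehot(b)])
    h = np.tanh(x @ W1)
    return float(np.sum((h @ W2 - x) ** 2))

within = np.mean([recon_error(a, b) for a, b in pairs])
between = np.mean([recon_error(b, a2) for _, b in pairs
                   for a2, _ in pairs])
print(within, between)  # familiar chunks reconstruct better
```

The graded, frequency-driven drop in error is what makes chunks "graded rather than all-or-nothing" in such models.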
Real-world visual statistics and infants' first-learned object names
Clerkin, Elizabeth M.; Hart, Elizabeth; Rehg, James M.; Yu, Chen; Smith, Linda B.
2017-01-01
We offer a new solution to the unsolved problem of how infants break into word learning based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8 1/2 to 10 1/2 month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered with many different objects in view. However, the frequency distribution of object categories was extremely right skewed such that a very small set of objects was pervasively present—a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning. This article is part of the themed issue ‘New frontiers for statistical learning in the cognitive sciences’. PMID:27872373
Impact of feature saliency on visual category learning.
Hammer, Rubi
2015-01-01
People have to sort numerous objects into a large number of meaningful categories while operating in varying contexts. This requires identifying the visual features that best predict the 'essence' of objects (e.g., edibility), rather than categorizing objects based on the most salient features in a given context. To gain this capacity, visual category learning (VCL) relies on multiple cognitive processes. These may include unsupervised statistical learning, which requires observing multiple objects in order to learn the statistics of their features. Other learning processes enable incorporating different sources of supervisory information, alongside the visual features of the categorized objects, from which the categorical relations between a few objects can be deduced. These deductions enable inferring that objects from the same category may differ from one another in some high-saliency feature dimensions, whereas lower-saliency feature dimensions can best differentiate objects from distinct categories. Here I illustrate how feature saliency affects VCL, while also discussing the kinds of supervisory information that enable reflective categorization. Arguably, the principles debated here are often ignored in categorization studies.
Global Statistical Learning in a Visual Search Task
ERIC Educational Resources Information Center
Jones, John L.; Kaschak, Michael P.
2012-01-01
Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials,…
NASA Astrophysics Data System (ADS)
Samigulina, Galina A.; Shayakhmetova, Assem S.
2016-11-01
The research objective is the creation of an intelligent, innovative technology and an information Smart-system of distance learning for visually impaired people. The organization of an accessible environment in which visually impaired people can receive a quality education, and their social adaptation in society, are important and topical issues in modern education. The proposed Smart-system of distance learning for visually impaired people can significantly improve the efficiency and quality of education for this category of learners. The scientific novelty of the proposed Smart-system lies in using intelligent and statistical methods for processing multi-dimensional data, while taking into account the psycho-physiological characteristics of how visually impaired people perceive and comprehend learning information.
A Developmental Approach to Machine Learning?
Smith, Linda B.; Slone, Lauren K.
2017-01-01
Visual learning depends on both the algorithms and the training material. This essay considers the natural statistics of infant- and toddler-egocentric vision. These natural training sets for human visual object recognition are very different from the training data fed into machine vision systems. Rather than equal experiences with all kinds of things, toddlers experience extremely skewed distributions with many repeated occurrences of a very few things. And though highly variable when considered as a whole, individual views of things are experienced in a specific order – with slow, smooth visual changes moment-to-moment, and developmentally ordered transitions in scene content. We propose that the skewed, ordered, biased visual experiences of infants and toddlers are the training data that allow human learners to develop a way to recognize everything, both the pervasively present entities and the rarely encountered ones. The joint consideration of real-world statistics for learning by researchers of human and machine learning seems likely to bring advances in both disciplines. PMID:29259573
ERIC Educational Resources Information Center
Turk-Browne, Nicholas B.; Scholl, Brian J.; Chun, Marvin M.; Johnson, Marcia K.
2009-01-01
Our environment contains regularities distributed in space and time that can be detected by way of statistical learning. This unsupervised learning occurs without intent or awareness, but little is known about how it relates to other types of learning, how it affects perceptual processing, and how quickly it can occur. Here we use fMRI during…
Right Hemisphere Dominance in Visual Statistical Learning
ERIC Educational Resources Information Center
Roser, Matthew E.; Fiser, Jozsef; Aslin, Richard N.; Gazzaniga, Michael S.
2011-01-01
Several studies report a right hemisphere advantage for visuospatial integration and a left hemisphere advantage for inferring conceptual knowledge from patterns of covariation. The present study examined hemispheric asymmetry in the implicit learning of new visual feature combinations. A split-brain patient and normal control participants viewed…
Infants are superior in implicit crossmodal learning and use other learning mechanisms than adults
von Frieling, Marco; Röder, Brigitte
2017-01-01
During development, internal models of the sensory world must be acquired and then continuously adapted. We used event-related potentials (ERP) to test the hypothesis that infants extract crossmodal statistics implicitly while adults learn them only when task relevant. Participants were passively exposed to frequent standard audio-visual combinations (A1V1, A2V2, p=0.35 each), rare recombinations of these standard stimuli (A1V2, A2V1, p=0.10 each), and a rare audio-visual deviant with infrequent auditory and visual elements (A3V3, p=0.10). While both six-month-old infants and adults differentiated between rare deviants and standards at early neural processing stages, only infants were sensitive to crossmodal statistics, as indicated by a late ERP difference between standard and recombined stimuli. A second experiment revealed that adults differentiated recombined and standard combinations when crossmodal combinations were task relevant. These results demonstrate a heightened sensitivity to crossmodal statistics in infants and a change in learning mode from infancy to adulthood. PMID:28949291
Spatial Probability Cuing and Right Hemisphere Damage
ERIC Educational Resources Information Center
Shaqiri, Albulena; Anderson, Britt
2012-01-01
In this experiment we studied statistical learning, inter-trial priming, and visual attention. We assessed healthy controls and right brain damaged (RBD) patients with and without neglect, on a simple visual discrimination task designed to measure priming effects and probability learning. All participants showed a preserved priming effect for item…
ERIC Educational Resources Information Center
Chen, Chi-hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen
2017-01-01
Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories…
What predicts successful literacy acquisition in a second language?
Frost, Ram; Siegelman, Noam; Narkiss, Alona; Afek, Liron
2013-01-01
We examined whether success (or failure) in assimilating the structure of a second language could be predicted by general statistical learning abilities that are non-linguistic in nature. We employed a visual statistical learning (VSL) task, monitoring our participants’ implicit learning of the transitional probabilities of visual shapes. A pretest revealed that performance in the VSL task is not correlated with abilities related to a general G factor or working memory. We found that native speakers of English who picked up the implicit statistical structure embedded in the continuous stream of shapes, on average, better assimilated the Semitic structure of Hebrew words. Our findings thus suggest that languages and their writing systems are characterized by idiosyncratic correlations of form and meaning, and these are picked up in the process of literacy acquisition, as they are picked up in any other type of learning, for the purpose of making sense of the environment. PMID:23698615
Johnston, Sandra; Parker, Christina N; Fox, Amanda
2017-09-01
Use of high fidelity simulation has become increasingly popular in nursing education, to the extent that it is now an integral component of most nursing programs. Anecdotal evidence suggests that students have difficulty engaging with simulation manikins due to their unrealistic appearance. Introducing the manikin as a 'real patient' with the use of an audio-visual narrative may engage students in the simulated learning experience and affect their learning. A paucity of literature currently exists on the use of audio-visual narratives to enhance simulated learning experiences. This study aimed to determine if viewing an audio-visual narrative during a simulation pre-brief altered undergraduate nursing student perceptions of the learning experience. A quasi-experimental post-test design was utilised with a convenience sample of final year baccalaureate nursing students at a large metropolitan university. Participants completed a modified version of the Student Satisfaction with Simulation Experiences survey. This 12-item questionnaire contained questions relating to the ability to transfer skills learned in simulation to the real clinical world, the realism of the simulation, and the overall value of the learning experience. Descriptive statistics were used to summarise demographic information, and two-tailed, independent group t-tests were used to determine statistical differences within the categories. Findings indicated that students reported high levels of value, realism and transferability in relation to viewing an audio-visual narrative. A statistically significant result (t=2.38, p<0.02) was evident in the subscale of transferability of learning from simulation to clinical practice. The subgroups of age and gender, although not statistically significant, showed some interesting results. All students indicated high satisfaction with the simulation in relation to value and realism. The significant finding in relation to transferability of knowledge is vital to quality educational outcomes. Copyright © 2017. Published by Elsevier Ltd.
Visual Prediction in Infancy: What Is the Association with Later Vocabulary?
ERIC Educational Resources Information Center
Ellis, Erica M.; Gonzalez, Marybel Robledo; Deák, Gedeon O.
2014-01-01
Young infants can learn statistical regularities and patterns in sequences of events. Studies have demonstrated a relationship between early sequence learning skills and later development of cognitive and language skills. We investigated the relation between infants' visual response speed to novel event sequences, and their later receptive and…
Integrating Statistical Visualization Research into the Political Science Classroom
ERIC Educational Resources Information Center
Draper, Geoffrey M.; Liu, Baodong; Riesenfeld, Richard F.
2011-01-01
The use of computer software to facilitate learning in political science courses is well established. However, the statistical software packages used in many political science courses can be difficult to use and counter-intuitive. We describe the results of a preliminary user study suggesting that visually-oriented analysis software can help…
Mere exposure alters category learning of novel objects.
Folstein, Jonathan R; Gauthier, Isabel; Palmeri, Thomas J
2010-01-01
We investigated how mere exposure to complex objects with correlated or uncorrelated object features affects later category learning of new objects not seen during exposure. Correlations among pre-exposed object dimensions influenced later category learning. Unlike other published studies, the collection of pre-exposed objects provided no information regarding the categories to be learned, ruling out unsupervised or incidental category learning during pre-exposure. Instead, results are interpreted with respect to statistical learning mechanisms, providing one of the first demonstrations of how statistical learning can influence visual object learning.
Visual learning in drosophila: application on a roving robot and comparisons
NASA Astrophysics Data System (ADS)
Arena, P.; De Fiore, S.; Patané, L.; Termini, P. S.; Strauss, R.
2011-05-01
Visual learning is an important aspect of fly life. Flies are able to extract visual cues from objects, such as colors, vertical and horizontal distributedness, and others, that can be used to learn to associate a meaning with specific features (i.e., a reward or a punishment). Interesting biological experiments show trained stationary flying flies avoiding flying towards specific visual objects appearing in the surrounding environment. Wild-type flies effectively learn to avoid those objects, but this is not the case for the learning mutant rutabaga, which is defective in the cyclic-AMP-dependent pathway for plasticity. A bio-inspired architecture has been proposed to model the fly behavior, and experiments on roving robots were performed. Statistical comparisons were carried out, and the mutant-like effect on the model was also investigated.
Learning-dependent plasticity with and without training in the human brain.
Zhang, Jiaxiang; Kourtzi, Zoe
2010-07-27
Long-term experience through development and evolution and shorter-term training in adulthood have both been suggested to contribute to the optimization of visual functions that mediate our ability to interpret complex scenes. However, the brain plasticity mechanisms that mediate the detection of objects in cluttered scenes remain largely unknown. Here, we combine behavioral and functional MRI (fMRI) measurements to investigate the human-brain mechanisms that mediate our ability to learn statistical regularities and detect targets in clutter. We show two different routes to visual learning in clutter with discrete brain plasticity signatures. Specifically, opportunistic learning of regularities typical in natural contours (i.e., collinearity) can occur simply through frequent exposure, generalize across untrained stimulus features, and shape processing in occipitotemporal regions implicated in the representation of global forms. In contrast, learning to integrate discontinuities (i.e., elements orthogonal to contour paths) requires task-specific training (bootstrap-based learning), is stimulus-dependent, and enhances processing in intraparietal regions implicated in attention-gated learning. We propose that long-term experience with statistical regularities may facilitate opportunistic learning of collinear contours, whereas learning to integrate discontinuities entails bootstrap-based training for the detection of contours in clutter. These findings provide insights in understanding how long-term experience and short-term training interact to shape the optimization of visual recognition processes.
Bayesian learning of visual chunks by human observers
Orbán, Gergő; Fiser, József; Aslin, Richard N.; Lengyel, Máté
2008-01-01
Efficient and versatile processing of any hierarchically structured information requires a learning mechanism that combines lower-level features into higher-level chunks. We investigated this chunking mechanism in humans with a visual pattern-learning paradigm. We developed an ideal learner based on Bayesian model comparison that extracts and stores only those chunks of information that are minimally sufficient to encode a set of visual scenes. Our ideal Bayesian chunk learner not only reproduced the results of a large set of previous empirical findings in the domain of human pattern learning but also made a key prediction that we confirmed experimentally. In accordance with Bayesian learning but contrary to associative learning, human performance was well above chance when pair-wise statistics in the exemplars contained no relevant information. Thus, humans extract chunks from complex visual patterns by generating accurate yet economical representations and not by encoding the full correlational structure of the input. PMID:18268353
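The Bayesian model comparison at the heart of this record can be illustrated with a toy calculation. The numbers and the Beta(1,1) priors below are assumptions for illustration only, not the paper's ideal learner (which compares far richer chunk inventories): when two shapes always co-occur, the marginal likelihood of a "chunk" model, which posits a single appearance rate for the pair, exceeds that of an "independent elements" model, which must explain the perfect co-occurrence by coincidence.

```python
# Hedged numeric sketch of Bayesian model comparison for chunking
# (illustration only, with made-up counts). Shapes A and B appear
# together in k of N scenes and never apart. With a uniform Beta(1,1)
# prior on appearance rates, each element contributes one
# Beta-function term to the marginal likelihood.
from math import lgamma

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_evidence_element(k, N):
    # marginal likelihood of one element appearing in k of N scenes,
    # integrating its rate against a Beta(1,1) prior
    return log_beta(k + 1, N - k + 1)

k, N = 12, 20
# Independent model: A and B each have their own rate and, by chance,
# happen to appear in exactly the same scenes.
log_ev_indep = 2 * log_evidence_element(k, N)
# Chunk model: "AB" is a single unit with one rate.
log_ev_chunk = log_evidence_element(k, N)

print(log_ev_chunk - log_ev_indep)  # positive: chunking wins
```

Because each log-evidence term is negative, halving the number of independent elements raises the total evidence, which is the "accurate yet economical" pressure the abstract describes.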
Side Effects of Being Blue: Influence of Sad Mood on Visual Statistical Learning
Bertels, Julie; Demoulin, Catherine; Franco, Ana; Destrebecqz, Arnaud
2013-01-01
It is well established that mood influences many cognitive processes, such as learning and executive functions. Although statistical learning is assumed to be part of our daily life, as mood is, the influence of mood on statistical learning has never been investigated before. In the present study, a sad vs. neutral mood was induced in participants through listening to stories while they were exposed to a stream of visual shapes made up of the repeated presentation of four triplets, namely sequences of three shapes presented in a fixed order. Given that the inter-stimulus interval was held constant within and between triplets, the only cues available for triplet segmentation were the transitional probabilities between shapes. Direct and indirect measures of learning taken either immediately or 20 minutes after the exposure/mood induction phase revealed that participants learned the statistical regularities between shapes. Interestingly, although participants from the sad and neutral groups performed similarly in these tasks, subjective measures (confidence judgments taken after each trial) revealed that participants who experienced the sad mood induction showed increased conscious access to their statistical knowledge. These effects were not modulated by the time delay between the exposure/mood induction and the test phases. These results are discussed within the scope of the robustness principle and the influence of negative affect on processing style. PMID:23555797
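The transitional-probability cue described in this abstract is easy to make concrete. The sketch below assumes a stream built from four fixed triplets with no immediate repeats (an illustrative choice, not the study's exact stimulus protocol): within a triplet the next shape is fully predictable (TP = 1.0), while across triplet boundaries the TP drops to roughly 1/3, and that contrast is the only available segmentation signal.

```python
# Minimal sketch (assumed stream structure, not the study's stimuli):
# transitional probabilities P(next | current) computed from a shape
# stream built by concatenating four fixed triplets.
import random
from collections import Counter

random.seed(1)
triplets = [("A", "B", "C"), ("D", "E", "F"),
            ("G", "H", "I"), ("J", "K", "L")]
stream = []
prev = None
for _ in range(2000):
    t = random.choice([t for t in triplets if t is not prev])
    stream.extend(t)                      # append the whole triplet
    prev = t                              # no immediate repeats

bigrams = Counter(zip(stream, stream[1:]))
firsts = Counter(stream)

def tp(a, b):
    # empirical transitional probability P(b follows a)
    return bigrams[(a, b)] / firsts[a]

within = tp("A", "B")            # inside a triplet: deterministic
boundary = tp("C", "D")          # across a boundary: ~1/3
print(within, boundary)
```

With a constant inter-stimulus interval, as in the study, this TP dip at boundaries is all a learner has to go on.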
Modulation of spatial attention by goals, statistical learning, and monetary reward.
Jiang, Yuhong V; Sha, Li Z; Remington, Roger W
2015-10-01
This study documented the relative strength of task goals, visual statistical learning, and monetary reward in guiding spatial attention. Using a difficult T-among-L search task, we cued spatial attention to one visual quadrant by (i) instructing people to prioritize it (goal-driven attention), (ii) placing the target frequently there (location probability learning), or (iii) associating that quadrant with greater monetary gain (reward-based attention). Results showed that successful goal-driven attention exerted the strongest influence on search RT. Incidental location probability learning yielded a smaller though still robust effect. Incidental reward learning produced negligible guidance for spatial attention. The 95 % confidence intervals of the three effects were largely nonoverlapping. To understand these results, we simulated the role of location repetition priming in probability cuing and reward learning. Repetition priming underestimated the strength of location probability cuing, suggesting that probability cuing involved long-term statistical learning of how to shift attention. Repetition priming provided a reasonable account for the negligible effect of reward on spatial attention. We propose a multiple-systems view of spatial attention that includes task goals, search habit, and priming as primary drivers of top-down attention.
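The repetition-priming argument in this abstract can be illustrated with a back-of-the-envelope calculation (hypothetical probabilities, not the authors' simulation code): if the target falls in a "rich" quadrant half the time and in each of the other three quadrants one-sixth of the time, pure location priming only helps on trials where the target quadrant happens to repeat, and the expected repeat rate sum(p_i^2) is barely above the uniform baseline.

```python
# Illustrative probabilities only: rich quadrant gets the target 50%
# of the time, the other three quadrants 1/6 each. Repetition priming
# benefits only consecutive-trial quadrant repeats.
probs_cued = [0.5, 1/6, 1/6, 1/6]
probs_uniform = [0.25] * 4

def repeat_rate(ps):
    # chance that two consecutive independent trials share a quadrant
    return sum(p * p for p in ps)

print(repeat_rate(probs_uniform))  # 0.25
print(repeat_rate(probs_cued))     # 1/3
```

A rise from 25% to 33% repeat trials is a modest lever, which is one way to see why priming alone underestimates the observed probability-cuing benefit and why longer-term statistical learning is implicated.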
Brady, Timothy F; Oliva, Aude
2008-07-01
Recent work has shown that observers can parse streams of syllables, tones, or visual shapes and learn statistical regularities in them without conscious intent (e.g., learn that A is always followed by B). Here, we demonstrate that these statistical-learning mechanisms can operate at an abstract, conceptual level. In Experiments 1 and 2, observers incidentally learned which semantic categories of natural scenes covaried (e.g., kitchen scenes were always followed by forest scenes). In Experiments 3 and 4, category learning with images of scenes transferred to words that represented the categories. In each experiment, the category of the scenes was irrelevant to the task. Together, these results suggest that statistical-learning mechanisms can operate at a categorical level, enabling generalization of learned regularities using existing conceptual knowledge. Such mechanisms may guide learning in domains as disparate as the acquisition of causal knowledge and the development of cognitive maps from environmental exploration.
Learning what to expect (in visual perception)
Seriès, Peggy; Seitz, Aaron R.
2013-01-01
Expectations are known to greatly affect our experience of the world. A growing theory in computational neuroscience is that perception can be successfully described using Bayesian inference models and that the brain is “Bayes-optimal” under some constraints. In this context, expectations are particularly interesting, because they can be viewed as prior beliefs in the statistical inference process. A number of questions remain unsolved, however, for example: How fast do priors change over time? Are there limits in the complexity of the priors that can be learned? How do an individual’s priors compare to the true scene statistics? Can we unlearn priors that are thought to correspond to natural scene statistics? Where and what are the neural substrate of priors? Focusing on the perception of visual motion, we here review recent studies from our laboratories and others addressing these issues. We discuss how these data on motion perception fit within the broader literature on perceptual Bayesian priors, perceptual expectations, and statistical and perceptual learning and review the possible neural basis of priors. PMID:24187536
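The "expectations as Bayesian priors" idea reviewed here has a standard worked example. The sketch below is textbook Gaussian cue combination, not code from the reviewed studies: a slow-motion prior centered on zero speed combines with a noisy likelihood, and the posterior mean is pulled toward zero, more strongly when the sensory signal is noisier.

```python
# Worked toy example of a Gaussian "slow motion" prior (standard
# cue-combination math; parameter values are illustrative).
def posterior_mean(obs, sigma_like, prior_mean=0.0, sigma_prior=2.0):
    # precision-weighted average of prior and likelihood (both Gaussian)
    w_like = 1.0 / sigma_like**2
    w_prior = 1.0 / sigma_prior**2
    return (w_like * obs + w_prior * prior_mean) / (w_like + w_prior)

clear = posterior_mean(obs=10.0, sigma_like=1.0)   # low-noise stimulus
dim = posterior_mean(obs=10.0, sigma_like=4.0)     # high-noise stimulus
print(clear, dim)  # prints 8.0 2.0: both below 10, dim pulled further
```

This reproduces the classic finding that low-contrast (noisy) motion appears slower, and it is the kind of model against which questions about how fast priors change, and whether they can be unlearned, are posed.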
Learning semantic and visual similarity for endomicroscopy video retrieval.
Andre, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas
2012-06-01
Content-based image retrieval (CBIR) is a valuable computer vision technique which is increasingly being applied in the medical community for diagnosis support. However, traditional CBIR systems only deliver visual outputs, i.e., images having a similar appearance to the query, which is not directly interpretable by the physicians. Our objective is to provide a system for endomicroscopy video retrieval which delivers both visual and semantic outputs that are consistent with each other. In a previous study, we developed an adapted bag-of-visual-words method for endomicroscopy retrieval, called "Dense-Sift," that computes a visual signature for each video. In this paper, we present a novel approach to complement visual similarity learning with semantic knowledge extraction, in the field of in vivo endomicroscopy. We first leverage a semantic ground truth based on eight binary concepts, in order to transform these visual signatures into semantic signatures that reflect how much the presence of each semantic concept is expressed by the visual words describing the videos. Using cross-validation, we demonstrate that, in terms of semantic detection, our intuitive Fisher-based method transforming visual-word histograms into semantic estimations outperforms support vector machine (SVM) methods with statistical significance. In a second step, we propose to improve retrieval relevance by learning an adjusted similarity distance from a perceived similarity ground truth. As a result, our distance learning method statistically improves the correlation with the perceived similarity. We also demonstrate that, in terms of perceived similarity, the recall performance of the semantic signatures is close to that of visual signatures and significantly better than those of several state-of-the-art CBIR methods. The semantic signatures are thus able to communicate high-level medical knowledge while being consistent with the low-level visual signatures and much shorter than them. In our resulting retrieval system, we therefore use visual signatures for perceived-similarity learning and retrieval, and semantic signatures to output additional information, expressed in the endoscopist's own language, which provides a relevant semantic translation of the visual retrieval outputs.
Otsuka, Sachio; Saiki, Jun
2016-02-01
Prior studies have shown that visual statistical learning (VSL) enhances familiarity (a type of memory) of sequences. How do statistical regularities influence the processing of each triplet element and of inserted distractors that disrupt the regularity? Given that VSL increases attention to triplets and inhibits processing of unattended items, we predicted that VSL would promote memory for each triplet constituent and degrade memory for inserted stimuli. Across the first two experiments, we found that objects from structured sequences were more likely to be remembered than objects from random sequences, and that letters (Experiment 1) or objects (Experiment 2) inserted into structured sequences were less likely to be remembered than those inserted into random sequences. In the subsequent two experiments, we examined an alternative account for our results, whereby the difference in memory for inserted items between structured and random conditions is due to individuation of items within random sequences. Our findings replicated even when control letters (Experiment 3A) or objects (Experiment 3B) were presented before or after, rather than inserted into, random sequences. Our findings suggest that statistical learning enhances memory for each item in a regular set and impairs memory for items that disrupt the regularity. Copyright © 2015 Elsevier B.V. All rights reserved.
Enhanced visual statistical learning in adults with autism
Roser, Matthew E.; Aslin, Richard N.; McKenzie, Rebecca; Zahra, Daniel; Fiser, József
2014-01-01
Individuals with autism spectrum disorder (ASD) are often characterized as having social engagement and language deficiencies, but a sparing of visuo-spatial processing and short-term memory, with some evidence of supra-normal levels of performance in these domains. The present study expanded on this evidence by investigating the observational learning of visuospatial concepts from patterns of covariation across multiple exemplars. Child and adult participants with ASD, and age-matched control participants, viewed multi-shape arrays composed from a random combination of pairs of shapes that were each positioned in a fixed spatial arrangement. After this passive exposure phase, a post-test revealed that all participant groups could discriminate pairs of shapes with high covariation from randomly paired shapes with low covariation. Moreover, learning of these high-covariation shape pairs was superior in adults with ASD compared with age-matched controls, while performance in children with ASD did not differ from that of controls. These results extend previous observations of visuospatial enhancement in ASD into the domain of learning, and suggest that enhanced visual statistical learning may have arisen from a sustained bias to attend to local details in complex arrays of visual features. PMID:25151115
NASA Astrophysics Data System (ADS)
Deratzou, Susan
This research studies the process of high school chemistry students visualizing chemical structures and its role in learning chemical bonding and molecular structure. Little research exists with high school chemistry students, and more is necessary (Gabel & Sherwood, 1980; Seddon & Moore, 1986; Seddon, Tariq, & Dos Santos Veiga, 1984). Using visualization tests (Ekstrom, French, Harman, & Dermen, 1990a), a learning style inventory (Brown & Cooper, 1999), and observations through a case study design, this study found visual learners performed better, but needed more practice and training. Statistically, all five pre- and post-test visualization test comparisons were highly significant in the two-tailed t-test (p < .01). The research findings are: (1) Students who tested high in the Visual (Language and/or Numerical) and Tactile Learning Styles (and Social Learning) had an advantage. Students who learned the chemistry concepts more effectively were better at visualizing structures and using molecular models to enhance their knowledge. (2) Students showed improvement in learning after visualization practice. Training in visualization would improve students' visualization abilities and provide them with a way to think about these concepts. (3) Conceptualization of concepts indicated that visualizing ability was critical and that it could be acquired. Support for this finding was provided by pre- and post-Visualization Test data with a highly significant t-test. (4) Various molecular animation programs and websites were found to be effective. (5) Visualization and modeling of structures encompassed both two- and three-dimensional space. The Visualization Test findings suggested that the students performed better with basic rotation of structures as compared to two- and three-dimensional objects. (6) Data from observations suggest that teaching style was an important factor in student learning of molecular structure.
(7) Students did learn the chemistry concepts. Based on the Visualization Test results, which showed that most of the students performed better on the post-test, the visualization experience and the abstract nature of the content allowed them to transfer some of their chemical understanding and practice to non-chemical structures. Finally, implications for teaching of chemistry, students learning chemistry, curriculum, and research for the field of chemical education were discussed.
Learning the Association between a Context and a Target Location in Infancy
ERIC Educational Resources Information Center
Bertels, Julie; San Anton, Estibaliz; Gebuis, Titia; Destrebecqz, Arnaud
2017-01-01
Extracting the statistical regularities present in the environment is a central learning mechanism in infancy. For instance, infants are able to learn the associations between simultaneously or successively presented visual objects (Fiser & Aslin, 2002; Kirkham, Slemmer & Johnson, 2002). The present study extends these results by…
Visual statistical learning is not reliably modulated by selective attention to isolated events
Musz, Elizabeth; Weber, Matthew J.; Thompson-Schill, Sharon L.
2014-01-01
Recent studies of visual statistical learning (VSL) indicate that the visual system can automatically extract temporal and spatial relationships between objects. We report several attempts to replicate and extend earlier work (Turk-Browne et al., 2005) in which observers performed a cover task on one of two interleaved stimulus sets, resulting in learning of temporal relationships that occur in the attended stream, but not those present in the unattended stream. Across four experiments, we exposed observers to a similar or identical familiarization protocol, directing attention to one of two interleaved stimulus sets; afterward, we assessed VSL efficacy for both sets using either implicit response-time measures or explicit familiarity judgments. In line with prior work, we observe learning for the attended stimulus set. However, unlike previous reports, we also observe learning for the unattended stimulus set. When instructed to selectively attend to only one of the stimulus sets and ignore the other set, observers could extract temporal regularities for both sets. Our efforts to experimentally decrease this effect by changing the cover task (Experiment 1) or the complexity of the statistical regularities (Experiment 3) were unsuccessful. A fourth experiment using a different assessment of learning likewise failed to show an attentional effect. Simulations drawing random samples from our first three experiments (n=64) confirm that the distribution of attentional effects in our sample closely approximates the null. We offer several potential explanations for our failure to replicate earlier findings, and discuss how our results suggest limiting conditions on the relevance of attention to VSL. PMID:25172196
Facilitating role of 3D multimodal visualization and learning rehearsal in memory recall.
Do, Phuong T; Moreland, John R
2014-04-01
The present study investigated the influence of 3D multimodal visualization and learning rehearsal on memory recall. Participants (N = 175 college students ranging from 21 to 25 years) were assigned to different training conditions and rehearsal processes to learn a list of 14 terms associated with construction of a wood-frame house. They then completed a memory test determining their cognitive ability to free recall the definitions of the 14 studied terms immediately after training and rehearsal. The audiovisual modality training condition was associated with the highest accuracy, and the visual- and auditory-modality conditions with lower accuracy rates. The no-training condition indicated little learning acquisition. A statistically significant increase in performance accuracy for the audiovisual condition as a function of rehearsal suggested the relative importance of rehearsal strategies in 3D observational learning. Findings revealed the potential application of integrating virtual reality and cognitive sciences to enhance learning and teaching effectiveness.
3D Visualization of Machine Learning Algorithms with Astronomical Data
NASA Astrophysics Data System (ADS)
Kent, Brian R.
2016-01-01
We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
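The MST step of the pipeline described above can be sketched in Python. This is a minimal illustration only: the galaxy positions below are random stand-ins rather than a real catalog, and SciPy's graph routines are assumed in place of the authors' actual code.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
points = rng.random((50, 3))  # stand-in 3D galaxy positions (not real data)

# Build the complete pairwise-distance graph, then its minimum spanning tree
dist = squareform(pdist(points))
mst = minimum_spanning_tree(dist).toarray()  # upper-triangular edge weights

edges = np.argwhere(mst > 0)  # (i, j) index pairs joined by the tree
lengths = mst[mst > 0]        # edge lengths; unusually long edges separate clusters
print(len(edges), lengths.mean())
```

A common MST-based clustering step, which a pipeline like this could apply next, is to cut all edges longer than some threshold and treat the remaining connected components as clusters.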
[Is it possible to train Achatina fulica using visual stimulation?].
Baĭkova, I B; Zhukov, V V
2001-01-01
The conditioned behavior to visual stimuli was obtained in Achatina fulica mollusk on the basis of its negative phototaxis. Directional moving of snails toward black cards was accompanied by the negative unconditioned stimulation (electric current). Learning was expressed in a statistically significant decrease in locomotor activity of animals and decrease in the rate of preference of sections with black cards. Learning developed within two daily training sessions with 30 trials in each of them. Learning traces were observed as defensive behavior at least during a month after reinforcement elimination.
Toward User Interfaces and Data Visualization Criteria for Learning Design of Digital Textbooks
ERIC Educational Resources Information Center
Railean, Elena
2014-01-01
User interface and data visualisation criteria are central issues in digital textbook design. However, when applying mathematical modelling of the learning process to the analysis of possible solutions, it can be observed that results differ. The mathematical view of learning grounds cognition in statistics and probability theory, graph…
What You Learn is What You See: Using Eye Movements to Study Infant Cross-Situational Word Learning
Smith, Linda
2016-01-01
Recent studies show that both adults and young children possess powerful statistical learning capabilities to solve the word-to-world mapping problem. However, the underlying mechanisms that make statistical learning possible and powerful are not yet known. With the goal of providing new insights into this issue, the research reported in this paper used an eye tracker to record the moment-by-moment eye movement data of 14-month-old babies in statistical learning tasks. Various measures, such as looking duration and shift rate (the number of shifts in gaze from one visual object to the other), are applied to these fine-grained temporal data trial by trial, revealing different eye movement patterns between strong and weak statistical learners. Moreover, an information-theoretic measure is developed and applied to gaze data to quantify the degree of learning uncertainty trial by trial. Next, a simple associative statistical learning model is applied to eye movement data and these simulation results are compared with empirical results from young children, showing strong correlations between the two. This suggests that an associative learning mechanism with selective attention can provide a cognitively plausible model of cross-situational statistical learning. The work represents the first steps in using eye movement data to infer underlying real-time processes in statistical word learning. PMID:22213894
Mobile Visual Search Based on Histogram Matching and Zone Weight Learning
NASA Astrophysics Data System (ADS)
Zhu, Chuang; Tao, Li; Yang, Fan; Lu, Tao; Jia, Huizhu; Xie, Xiaodong
2018-01-01
In this paper, we propose a novel image retrieval algorithm for mobile visual search. At first, a short visual codebook is generated based on the descriptor database to represent the statistical information of the dataset. Then, an accurate local descriptor similarity score is computed by merging the tf-idf weighted histogram matching and the weighting strategy in compact descriptors for visual search (CDVS). At last, both the global descriptor matching score and the local descriptor similarity score are summed up to rerank the retrieval results according to the learned zone weights. The results show that the proposed approach outperforms the state-of-the-art image retrieval method in CDVS.
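As a toy illustration of tf-idf weighted histogram matching, the core of the local similarity score above, one can compare bag-of-visual-words histograms by cosine similarity after idf reweighting. The histograms below are invented, and the real CDVS weighting strategy is considerably more elaborate than this sketch.

```python
import numpy as np

# Invented bag-of-visual-words count histograms (one row per database image)
database = np.array([[5, 0, 2, 1],
                     [0, 3, 0, 4],
                     [2, 2, 2, 2]], dtype=float)
query = np.array([4, 0, 3, 0], dtype=float)

# idf: visual words appearing in fewer database images discriminate more
df = (database > 0).sum(axis=0)                   # document frequency per word
idf = np.log((1 + len(database)) / (1 + df)) + 1  # smoothed idf

def tfidf(hist):
    """L2-normalized idf-weighted histogram."""
    v = hist * idf
    n = np.linalg.norm(v)
    return v / n if n else v

# Cosine similarity between tf-idf weighted histograms; rank by score
scores = np.array([tfidf(h) @ tfidf(query) for h in database])
best = int(np.argmax(scores))  # index of the most similar database image
```

In a full system such local scores would then be combined with a global descriptor score and reranked, as the abstract describes.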
Detecting Visually Observable Disease Symptoms from Faces.
Wang, Kuan; Luo, Jiebo
2016-12-01
Recent years have witnessed an increasing interest in the application of machine learning to clinical informatics and healthcare systems. A significant amount of research has been done on healthcare systems based on supervised learning. In this study, we present a generalized solution to detect visually observable symptoms on faces using semi-supervised anomaly detection combined with machine vision algorithms. We rely on disease-related statistical facts to detect abnormalities and classify them into multiple categories, narrowing down the possible medical causes of the detected symptoms. Our method contrasts with most existing approaches, which are limited by the availability of labeled training data required for supervised learning, and therefore offers the major advantage of flagging any unusual and visually observable symptoms.
Al-Saud, Loulwa Mohammed Saad
2013-10-01
The aim of this study was to investigate the learning style preferences of a group of first-year dental students and their relation to gender and past academic performance. A total of 113 first-year dental students (forty-two female, seventy-one male) at King Saud University in Riyadh, Saudi Arabia, participated. The Visual, Aural, Read-write, and Kinesthetic (VARK) questionnaire was used to determine the students' preferred mode of learning. This sixteen-item questionnaire defines preference of learning based on the sensory modalities: visual, aural, reading/writing, and kinesthetic. More than half (59 percent) of the students were found to have multimodal learning preferences. The most common single learning preferences were aural (20 percent) followed by kinesthetic (15.2 percent). Gender differences were not statistically significant. However, a statistically significant difference was found in the mean values of GPA in relation to the students' learning style preferences (p=0.019). Students with a single learning style preference had a lower mean GPA than those with multiple (quad-modal) learning style preferences. For effective instruction, dental educators need to broaden their range of presentation styles to help create more positive and effective learning environments for all students.
Enhancing the Teaching and Learning of Mathematical Visual Images
ERIC Educational Resources Information Center
Quinnell, Lorna
2014-01-01
The importance of mathematical visual images is indicated by the introductory paragraph in the Statistics and Probability content strand of the Australian Curriculum, which draws attention to the importance of learners developing skills to analyse and draw inferences from data and "represent, summarise and interpret data and undertake…
Georgiades, Anna; Rijsdijk, Fruhling; Kane, Fergus; Rebollo-Mesa, Irene; Kalidindi, Sridevi; Schulze, Katja K; Stahl, Daniel; Walshe, Muriel; Sahakian, Barbara J; McDonald, Colm; Hall, Mei-Hua; Murray, Robin M; Kravariti, Eugenia
2016-06-01
Twin studies have lacked statistical power to apply advanced genetic modelling techniques to the search for cognitive endophenotypes for bipolar disorder. To quantify the shared genetic variability between bipolar disorder and cognitive measures. Structural equation modelling was performed on cognitive data collected from 331 twins/siblings of varying genetic relatedness, disease status and concordance for bipolar disorder. Using a parsimonious AE model, verbal episodic and spatial working memory showed statistically significant genetic correlations with bipolar disorder (rg = |0.23|-|0.27|), which lost statistical significance after covarying for affective symptoms. Using an ACE model, IQ and visual-spatial learning showed statistically significant genetic correlations with bipolar disorder (rg = |0.51|-|1.00|), which remained significant after covarying for affective symptoms. Verbal episodic and spatial working memory capture a modest fraction of the bipolar diathesis. IQ and visual-spatial learning may tap into genetic substrates of non-affective symptomatology in bipolar disorder. © The Royal College of Psychiatrists 2016.
Visual statistical learning is related to natural language ability in adults: An ERP study.
Daltrozzo, Jerome; Emerson, Samantha N; Deocampo, Joanne; Singh, Sonia; Freggens, Marjorie; Branum-Martin, Lee; Conway, Christopher M
2017-03-01
Statistical learning (SL) is believed to enable language acquisition by allowing individuals to learn regularities within linguistic input. However, neural evidence supporting a direct relationship between SL and language ability is scarce. We investigated whether there are associations between event-related potential (ERP) correlates of SL and language abilities while controlling for the general level of selective attention. Seventeen adults completed tests of visual SL, receptive vocabulary, grammatical ability, and sentence completion. Response times and ERPs showed that SL is related to receptive vocabulary and grammatical ability. ERPs indicated that the relationship between SL and grammatical ability was independent of attention while the association between SL and receptive vocabulary depended on attention. The implications of these dissociative relationships in terms of underlying mechanisms of SL and language are discussed. These results further elucidate the cognitive nature of the links between SL mechanisms and language abilities. Copyright © 2017 Elsevier Inc. All rights reserved.
Across Space and Time: Infants Learn from Backward and Forward Visual Statistics
ERIC Educational Resources Information Center
Tummeltshammer, Kristen; Amso, Dima; French, Robert M.; Kirkham, Natasha Z.
2017-01-01
This study investigates whether infants are sensitive to backward and forward transitional probabilities within temporal and spatial visual streams. Two groups of 8-month-old infants were familiarized with an artificial grammar of shapes, comprising backward and forward base pairs (i.e. two shapes linked by strong backward or forward transitional…
Postural-Sway Response in Learning-Disabled Children: Pilot Data.
ERIC Educational Resources Information Center
Polatajko, H. J.
1987-01-01
The postural-sway response of five learning disabled (LD) and five nondisabled children was evaluated using a force platform. From statistical analysis of the two groups, the LD children appeared to use visual input to compensate for postural problems and had significant difficulty controlling posture with eyes closed. (SK)
Visualization and Analytics Software Tools for Peregrine System
A listing of visualization and analytics software available on the Peregrine system, including R (a language and environment for statistical computing and graphics), FastX (remote display for OpenGL-based applications), and ParaView (an open-source data analysis and visualization application).
Visualizing and Understanding Probability and Statistics: Graphical Simulations Using Excel
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Gordon, Florence S.
2009-01-01
The authors describe a collection of dynamic interactive simulations for teaching and learning most of the important ideas and techniques of introductory statistics and probability. The modules cover such topics as randomness, simulations of probability experiments such as coin flipping, dice rolling and general binomial experiments, a simulation…
ERIC Educational Resources Information Center
Vaughn, Brandon K.; Wang, Pei-Yu
2009-01-01
The emergence of technology has led to numerous changes in mathematical and statistical teaching and learning which has improved the quality of instruction and teacher/student interactions. The teaching of statistics, for example, has shifted from mathematical calculations to higher level cognitive abilities such as reasoning, interpretation, and…
The Effect of Visual of a Courseware towards Pre-University Students' Learning in Literature
NASA Astrophysics Data System (ADS)
Masri, Mazyrah; Wan Ahmad, Wan Fatimah; Nordin, Shahrina Md.; Sulaiman, Suziah
This paper highlights the effect of the visuals in a multimedia courseware, Black Cat Courseware (BC-C), developed for learning literature at a pre-university level in University Teknologi PETRONAS (UTP). The contents of the courseware are based on a Black Cat story which is covered in an English course at the university. The objective of this paper is to evaluate the usability and effectiveness of BC-C. A total of sixty foundation students were involved in the study. A quasi-experimental design was employed, forming two groups: an experimental and a control group. The experimental group interacted with BC-C as part of the learning activities while the control group used conventional learning methods. The results indicate that the experimental group achieved a statistically significant improvement over the control group in understanding the Black Cat story. The results also show that visuals increase students' performance in literature learning at a pre-university level.
Faust, Kevin; Xie, Quin; Han, Dominick; Goyle, Kartikay; Volynskaya, Zoya; Djuric, Ugljesa; Diamandis, Phedias
2018-05-16
There is growing interest in utilizing artificial intelligence, and particularly deep learning, for computer vision in histopathology. While accumulating studies highlight expert-level performance of convolutional neural networks (CNNs) on focused classification tasks, most studies rely on probability distribution scores with empirically defined cutoff values based on post-hoc analysis. More generalizable tools that allow humans to visualize histology-based deep learning inferences and decision making are scarce. Here, we leverage t-distributed Stochastic Neighbor Embedding (t-SNE) to reduce dimensionality and depict how CNNs organize histomorphologic information. Unique to our workflow, we develop a quantitative and transparent approach to visualizing classification decisions prior to softmax compression. By discretizing the relationships between classes on the t-SNE plot, we show we can super-impose randomly sampled regions of test images and use their distribution to render statistically-driven classifications. Therefore, in addition to providing intuitive outputs for human review, this visual approach can carry out automated and objective multi-class classifications similar to more traditional and less-transparent categorical probability distribution scores. Importantly, this novel classification approach is driven by a priori statistically defined cutoffs. It therefore serves as a generalizable classification and anomaly detection tool less reliant on post-hoc tuning. Routine incorporation of this convenient approach for quantitative visualization and error reduction in histopathology aims to accelerate early adoption of CNNs into generalized real-world applications where unanticipated and previously untrained classes are often encountered.
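The dimensionality-reduction step at the heart of this workflow can be sketched as follows. The feature vectors here are random Gaussian stand-ins for penultimate-layer CNN activations, and scikit-learn's TSNE is assumed rather than the authors' implementation.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-ins for CNN feature vectors of sampled image regions, drawn from
# two well-separated hypothetical classes (40 samples each, 64-D)
features = np.vstack([rng.normal(0.0, 1.0, (40, 64)),
                      rng.normal(4.0, 1.0, (40, 64))])

# Embed the 64-D features into 2-D; regions of a test image can then be
# super-imposed on this plane and classified by where they fall
embedding = TSNE(n_components=2, perplexity=15,
                 random_state=0).fit_transform(features)
print(embedding.shape)
```

Discretizing this 2-D plane into class regions, and tallying where sampled test-image patches land, is the statistically driven classification idea the abstract describes.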
Invariant visual object recognition: a model, with lighting invariance.
Rolls, Edmund T; Stringer, Simon M
2006-01-01
How are invariant representations of objects formed in the visual cortex? We describe a neurophysiological and computational approach which focusses on a feature hierarchy model in which invariant representations can be built by self-organizing learning based on the statistics of the visual input. The model can use temporal continuity in an associative synaptic learning rule with a short term memory trace, and/or it can use spatial continuity in Continuous Transformation learning. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and in this paper we show also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in for example spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene.
Metacognitive Confidence Increases with, but Does Not Determine, Visual Perceptual Learning.
Zizlsperger, Leopold; Kümmel, Florian; Haarmeier, Thomas
2016-01-01
While perceptual learning increases objective sensitivity, the effects on the constant interaction of the process of perception and its metacognitive evaluation have been rarely investigated. Visual perception has been described as a process of probabilistic inference featuring metacognitive evaluations of choice certainty. For visual motion perception in healthy, naive human subjects here we show that perceptual sensitivity and confidence in it increased with training. The metacognitive sensitivity (estimated from certainty ratings by a bias-free signal detection theoretic approach), in contrast, did not. Concomitant 3Hz transcranial alternating current stimulation (tACS) was applied in compliance with previous findings on effective high-low cross-frequency coupling subserving signal detection. While perceptual accuracy and confidence in it improved with training, there were no statistically significant tACS effects. Neither metacognitive sensitivity in distinguishing between their own correct and incorrect stimulus classifications, nor decision confidence itself determined the subjects' visual perceptual learning. Improvements of objective performance and the metacognitive confidence in it were rather determined by the perceptual sensitivity at the outset of the experiment. Post-decision certainty in visual perceptual learning was neither independent of objective performance, nor requisite for changes in sensitivity, but rather covaried with objective performance. The exact functional role of metacognitive confidence in human visual perception has yet to be determined.
Learning to Look: Probabilistic Variation and Noise Guide Infants' Eye Movements
ERIC Educational Resources Information Center
Tummeltshammer, Kristen Swan; Kirkham, Natasha Z.
2013-01-01
Young infants have demonstrated a remarkable sensitivity to probabilistic relations among visual features (Fiser & Aslin, 2002; Kirkham et al., 2002). Previous research has raised important questions regarding the usefulness of statistical learning in an environment filled with variability and noise, such as an infant's natural world. In…
Decoding the future from past experience: learning shapes predictions in early visual cortex.
Luft, Caroline D B; Meeson, Alan; Welchman, Andrew E; Kourtzi, Zoe
2015-05-01
Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex. Copyright © 2015 the American Physiological Society.
How much to trust the senses: Likelihood learning
Sato, Yoshiyuki; Kording, Konrad P.
2014-01-01
Our brain often needs to estimate unknown variables from imperfect information. Our knowledge about the statistical distributions of quantities in our environment (called priors) and currently available information from sensory inputs (called likelihood) are the basis of all Bayesian models of perception and action. While we know that priors are learned, most studies of prior-likelihood integration simply assume that subjects know about the likelihood. However, as the quality of sensory inputs changes over time, we also need to learn about new likelihoods. Here, we show that human subjects readily learn the distribution of visual cues (likelihood function) in a way that can be predicted by models of statistically optimal learning. Using a likelihood that depended on color context, we found that a learned likelihood generalized to new priors. Thus, we conclude that subjects learn about likelihood. PMID:25398975
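The statistically optimal prior-likelihood integration the abstract refers to can be written down explicitly for the Gaussian case: the posterior mean is a precision-weighted average of the prior mean and the sensory cue. The numbers below are purely illustrative.

```python
def fuse(prior_mu, prior_sd, cue, cue_sd):
    """Bayes-optimal estimate for a Gaussian prior and Gaussian likelihood:
    a precision-weighted average, with a posterior narrower than either source."""
    wp = 1.0 / prior_sd ** 2   # prior precision
    wl = 1.0 / cue_sd ** 2     # likelihood precision (the quantity the
                               # study argues subjects must learn)
    mu = (wp * prior_mu + wl * cue) / (wp + wl)
    sd = (wp + wl) ** -0.5
    return mu, sd

mu, sd = fuse(prior_mu=0.0, prior_sd=2.0, cue=1.0, cue_sd=1.0)
# precisions are 0.25 and 1.0, so the estimate sits nearer the cue: mu = 0.8
```

A wrongly learned `cue_sd` shifts `mu` toward or away from the cue, which is how mis-estimated likelihoods would show up behaviorally.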
Eguchi, Akihiro; Mender, Bedeho M. W.; Evans, Benjamin D.; Humphreys, Glyn W.; Stringer, Simon M.
2015-01-01
Neurons in successive stages of the primate ventral visual pathway encode the spatial structure of visual objects. In this paper, we investigate through computer simulation how these cell firing properties may develop through unsupervised visually-guided learning. Individual neurons in the model are shown to exploit statistical regularity and temporal continuity of the visual inputs during training to learn firing properties that are similar to neurons in V4 and TEO. Neurons in V4 encode the conformation of boundary contour elements at a particular position within an object regardless of the location of the object on the retina, while neurons in TEO integrate information from multiple boundary contour elements. This representation goes beyond mere object recognition, in which neurons simply respond to the presence of a whole object, and provides an essential foundation from which the brain is subsequently able to recognize the whole object. PMID:26300766
Specificity and timescales of cortical adaptation as inferences about natural movie statistics.
Snow, Michoel; Coen-Cagli, Ruben; Schwartz, Odelia
2016-10-01
Adaptation is a phenomenological umbrella term under which a variety of temporal contextual effects are grouped. Previous models have shown that some aspects of visual adaptation reflect optimal processing of dynamic visual inputs, suggesting that adaptation should be tuned to the properties of natural visual inputs. However, the link between natural dynamic inputs and adaptation is poorly understood. Here, we extend a previously developed Bayesian modeling framework for spatial contextual effects to the temporal domain. The model learns temporal statistical regularities of natural movies and links these statistics to adaptation in primary visual cortex via divisive normalization, a ubiquitous neural computation. In particular, the model divisively normalizes the present visual input by the past visual inputs only to the degree that these are inferred to be statistically dependent. We show that this flexible form of normalization reproduces classical findings on how brief adaptation affects neuronal selectivity. Furthermore, prior knowledge acquired by the Bayesian model from natural movies can be modified by prolonged exposure to novel visual stimuli. We show that this updating can explain classical results on contrast adaptation. We also simulate the recent finding that adaptation maintains population homeostasis, namely, a balanced level of activity across a population of neurons with different orientation preferences. Consistent with previous disparate observations, our work further clarifies the influence of stimulus-specific and neuronal-specific normalization signals in adaptation. PMID:27699416
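The core computation described here, divisive normalization of the present input by the past, can be illustrated with a toy sketch. The `dependence` weight below stands in for the model's inferred statistical dependence between past and present; this is a minimal sketch of the model class, not the paper's fitted model:

```python
import numpy as np

def divisively_normalize(present, past, dependence, sigma=1.0):
    """Toy temporal divisive normalization.

    The present drive is suppressed by the mean of past drives, but only
    to the degree (dependence, in [0, 1]) that past and present inputs
    are inferred to be statistically dependent.
    """
    return present / (sigma + dependence * np.mean(past))

# After strong recent stimulation the normalized response drops (adaptation)...
print(divisively_normalize(4.0, [4.0, 4.0, 4.0], dependence=1.0))  # 0.8
# ...but an input inferred to be independent of the past is barely suppressed.
print(divisively_normalize(4.0, [4.0, 4.0, 4.0], dependence=0.0))  # 4.0
```

The flexibility the abstract emphasizes lives entirely in `dependence`: when the statistics of the input change, re-inferring that weight changes how strongly the past normalizes the present.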
NASA Astrophysics Data System (ADS)
Devetak, Iztok; Aleksij Glažar, Saša
2010-08-01
Submicrorepresentations (SMRs) are a powerful tool for identifying misconceptions of chemical concepts and for generating proper mental models of chemical phenomena in students' long-term memory during chemical education. The main purpose of the study was to determine which independent variables (gender, formal reasoning abilities, visualization abilities, and intrinsic motivation for learning chemistry) have the maximum influence on students' reading and drawing of SMRs. A total of 386 secondary school students (aged 16.3 years) participated in the study. The instruments used in the study were a test of Chemical Knowledge, the Test of Logical Thinking, two tests of visualization abilities (Patterns and Rotations), and a questionnaire on Intrinsic Motivation for Learning Science. The results show moderate, but statistically significant, correlations between students' intrinsic motivation, formal reasoning abilities, and chemical knowledge at the submicroscopic level based on reading and drawing SMRs. Visualization abilities are not statistically significantly correlated with students' success on items that comprise reading or drawing SMRs. It can also be concluded that there is a statistically significant difference between male and female students in solving problems that include reading or drawing SMRs. Based on these statistical results and a content analysis of the sample problems, several educational strategies can be implemented for students to develop adequate mental models of chemical concepts on all three levels of representation.
Perceptual context and individual differences in the language proficiency of preschool children.
Banai, Karen; Yifat, Rachel
2016-02-01
Although the contribution of perceptual processes to language skills during infancy is well recognized, the role of perception in linguistic processing beyond infancy is not well understood. In the experiments reported here, we asked whether manipulating the perceptual context in which stimuli are presented across trials influences how preschool children perform visual (shape-size identification; Experiment 1) and auditory (syllable identification; Experiment 2) tasks. Another goal was to determine whether the sensitivity to perceptual context can explain part of the variance in oral language skills in typically developing preschool children. Perceptual context was manipulated by changing the relative frequency with which target visual (Experiment 1) and auditory (Experiment 2) stimuli were presented in arrays of fixed size, and identification of the target stimuli was tested. Oral language skills were assessed using vocabulary, word definition, and phonological awareness tasks. Changes in perceptual context influenced the performance of the majority of children on both identification tasks. Sensitivity to perceptual context accounted for 7% to 15% of the variance in language scores. We suggest that context effects are an outcome of a statistical learning process. Therefore, the current findings demonstrate that statistical learning can facilitate both visual and auditory identification processes in preschool children. Furthermore, consistent with previous findings in infants and in older children and adults, individual differences in statistical learning were found to be associated with individual differences in language skills of preschool children. Copyright © 2015 Elsevier Inc. All rights reserved.
Set size manipulations reveal the boundary conditions of perceptual ensemble learning.
Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni
2017-11-01
Recent evidence suggests that observers can grasp patterns of feature variations in the environment with surprising efficiency. During visual search tasks where all distractors are randomly drawn from a certain distribution rather than all being homogeneous, observers are capable of learning highly complex statistical properties of distractor sets. After only a few trials (learning phase), the statistical properties of distributions - mean, variance and crucially, shape - can be learned, and these representations affect search during a subsequent test phase (Chetverikov, Campana, & Kristjánsson, 2016). To assess the limits of such distribution learning, we varied the information available to observers about the underlying distractor distributions by manipulating set size during the learning phase in two experiments. We found that robust distribution learning only occurred for large set sizes. We also used set size to assess whether the learning of distribution properties makes search more efficient. The results reveal how a certain minimum of information is required for learning to occur, thereby delineating the boundary conditions of learning of statistical variation in the environment. However, the benefits of distribution learning for search efficiency remain unclear. Copyright © 2017 Elsevier Ltd. All rights reserved.
Zhang, Yi; Chen, Lihan
2016-01-01
Recent studies of brain plasticity that pertain to time perception have shown that fast training of temporal discrimination in one modality, for example, the auditory modality, can improve performance of temporal discrimination in another modality, such as the visual modality. We here examined whether the perception of visual Ternus motion could be recalibrated through fast crossmodal statistical binding of temporal information and stimulus properties. We conducted two experiments, composed of three sessions each: pre-test, learning, and post-test. In both the pre-test and the post-test, participants classified the Ternus display as either “element motion” or “group motion.” For the training session in Experiment 1, we constructed two types of temporal structures, in which two consecutively presented sound beeps were dominantly (80%) flanked by one leading and one lagging visual Ternus frame (VAAV) or dominantly inserted between two Ternus visual frames (AVVA). Participants were required to respond which interval (auditory vs. visual) was longer. In Experiment 2, we presented only a single auditory–visual pair but with similar temporal configurations as in Experiment 1, and asked participants to perform an audio–visual temporal order judgment. The results of these two experiments support the view that statistical binding of temporal information and stimulus properties can quickly and selectively recalibrate the sensitivity of perceiving visual motion, according to the protocols of the specific bindings. PMID:27065910
NASA Astrophysics Data System (ADS)
Alenizi, Abdulaziz
The purpose of the study was to investigate how teachers in Kuwait utilize photographic aids in the classroom. Specifically, this study assessed learning outcomes among high school students in Kuwaiti schools whose teachers used photographic aids, compared with outcomes for teachers who were barred from using them. The research utilized a descriptive quantitative design, with 250-300 participants. Data were collected through a questionnaire and analyzed using several statistical methods, principally Spearman correlation analysis. The study revealed that visual media such as images and photographs made it easier for students to understand concepts in science subjects, specifically biology, physics, and chemistry. Visual media should be included in the curriculum to enhance students' comprehension, and the Kuwaiti government should therefore encourage the use of visual aids in schools. The research did not identify the specific skills students and teachers need in order to use visual aids effectively, and a gap remains between possessing such skills and applying them in school. Nonetheless, the benefits of visual aids in teaching are evident in the study. With the adoption of audio-visual methods of learning, students are given opportunities to develop their own ideas and opinions, building interpersonal skills while questioning the authenticity and relevance of the concepts at hand. The major merit of audio-visual platforms in classroom learning is that they lead students to break complex science concepts into finer components that can be more easily understood.
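The Spearman correlation analysis mentioned in this record is simply a Pearson correlation computed on ranks. A minimal sketch with illustrative data (no correction for tied ranks):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.

    Double argsort converts untied data to ranks 0..n-1; the centered
    ranks are then correlated directly.
    """
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# A monotonic (even if nonlinear) relationship yields rho = 1.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = x ** 3
print(spearman_rho(x, y))  # 1.0
```

Because only the ordering matters, the statistic suits questionnaire data of the kind described above, where responses are ordinal rather than interval-scaled.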
Bimodal emotion congruency is critical to preverbal infants' abstract rule learning.
Tsui, Angeline Sin Mei; Ma, Yuen Ki; Ho, Anna; Chow, Hiu Mei; Tseng, Chia-huei
2016-05-01
Extracting general rules from specific examples is important, as we must face the same challenge displayed in various formats. Previous studies have found that bimodal presentation of grammar-like rules (e.g. ABA) enhanced 5-month-olds' capacity to acquire a rule that infants failed to learn when the rule was presented with visual presentation of the shapes alone (circle-triangle-circle) or auditory presentation of the syllables (la-ba-la) alone. However, the mechanisms and constraints for this bimodal learning facilitation are still unknown. In this study, we used audio-visual relation congruency between bimodal stimulation to disentangle possible facilitation sources. We exposed 8- to 10-month-old infants to an AAB sequence consisting of visual faces with affective expressions and/or auditory voices conveying emotions. Our results showed that infants were able to distinguish the learned AAB rule from other novel rules under bimodal stimulation when the affects in audio and visual stimuli were congruently paired (Experiments 1A and 2A). Infants failed to acquire the same rule when audio-visual stimuli were incongruently matched (Experiment 2B) and when only the visual (Experiment 1B) or the audio (Experiment 1C) stimuli were presented. Our results highlight that bimodal facilitation in infant rule learning is not only dependent on better statistical probability and redundant sensory information, but also the relational congruency of audio-visual information. A video abstract of this article can be viewed at https://m.youtube.com/watch?v=KYTyjH1k9RQ. © 2015 John Wiley & Sons Ltd.
Milne, Alice E; Petkov, Christopher I; Wilson, Benjamin
2017-07-05
Language flexibly supports the human ability to communicate using different sensory modalities, such as writing and reading in the visual modality and speaking and listening in the auditory domain. Although it has been argued that nonhuman primate communication abilities are inherently multisensory, direct behavioural comparisons between human and nonhuman primates are scant. Artificial grammar learning (AGL) tasks and statistical learning experiments can be used to emulate ordering relationships between words in a sentence. However, previous comparative work using such paradigms has primarily investigated sequence learning within a single sensory modality. We used an AGL paradigm to evaluate how humans and macaque monkeys learn and respond to identically structured sequences of either auditory or visual stimuli. In the auditory and visual experiments, we found that both species were sensitive to the ordering relationships between elements in the sequences. Moreover, the humans and monkeys produced largely similar response patterns to the visual and auditory sequences, indicating that the sequences are processed in comparable ways across the sensory modalities. These results provide evidence that human sequence processing abilities stem from an evolutionarily conserved capacity that appears to operate comparably across the sensory modalities in both human and nonhuman primates. The findings set the stage for future neurobiological studies to investigate the multisensory nature of these sequencing operations in nonhuman primates and how they compare to related processes in humans. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.
Sadeghi, Zahra; Testolin, Alberto
2017-08-01
In humans, efficient recognition of written symbols is thought to rely on a hierarchical processing system, where simple features are progressively combined into more abstract, high-level representations. Here, we present a computational model of Persian character recognition based on deep belief networks, where increasingly more complex visual features emerge in a completely unsupervised manner by fitting a hierarchical generative model to the sensory data. Crucially, high-level internal representations emerging from unsupervised deep learning can be easily read out by a linear classifier, achieving state-of-the-art recognition accuracy. Furthermore, we tested the hypothesis that handwritten digits and letters share many common visual features: A generative model that captures the statistical structure of the letters distribution should therefore also support the recognition of written digits. To this aim, deep networks trained on Persian letters were used to build high-level representations of Persian digits, which were indeed read out with high accuracy. Our simulations show that complex visual features, such as those mediating the identification of Persian symbols, can emerge from unsupervised learning in multilayered neural networks and can support knowledge transfer across related domains.
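The general pipeline this record describes, unsupervised feature learning followed by a linear readout, can be sketched in a few lines. PCA stands in here for the deep belief network, and synthetic two-class data stands in for Persian characters; this is purely illustrative, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "characters": two classes of 20-dim inputs (stand-in for pixels).
X0 = rng.normal(loc=0.0, scale=1.0, size=(100, 20))
X1 = rng.normal(loc=2.0, scale=1.0, size=(100, 20))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Unsupervised stage: PCA stands in for the generative model's hierarchical
# feature learning. Labels are never used here.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
features = Xc @ Vt[:5].T  # project onto the top 5 components

# Supervised stage: a simple linear readout of the learned features.
onehot = np.eye(2)[y]
W, *_ = np.linalg.lstsq(features, onehot, rcond=None)
pred = (features @ W).argmax(axis=1)
accuracy = (pred == y).mean()
print(accuracy)  # near-perfect on this well-separated toy data
```

The point being illustrated is the abstract's: if the unsupervised stage captures the relevant structure, even a linear classifier on top of the learned features suffices for accurate recognition.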
ERIC Educational Resources Information Center
Salverda, Anne Pier
2016-01-01
Lieberman, Borovsky, Hatrak, and Mayberry (2015) used a modified version of the visual-world paradigm to examine the real-time processing of signs in American Sign Language. They examined the activation of phonological and semantic competitors in native signers and late-learning signers and concluded that their results provide evidence that the…
Emberson, Lauren L.; Rubinstein, Dani
2016-01-01
The influence of statistical information on behavior (either through learning or adaptation) is quickly becoming foundational to many domains of cognitive psychology and cognitive neuroscience, from language comprehension to visual development. We investigate a central problem impacting these diverse fields: when encountering input with rich statistical information, are there any constraints on learning? This paper examines learning outcomes when adult learners are given statistical information across multiple levels of abstraction simultaneously: from abstract, semantic categories of everyday objects to individual viewpoints on these objects. After revealing statistical learning of abstract, semantic categories with scrambled individual exemplars (Exp. 1), participants viewed pictures where the categories as well as the individual objects predicted picture order (e.g., bird1—dog1, bird2—dog2). Our findings suggest that participants preferentially encode the relationships between the individual objects, even in the presence of statistical regularities linking semantic categories (Exps. 2 and 3). In a final experiment we investigate whether learners are biased towards learning object-level regularities or simply construct the most detailed model given the data (and therefore best able to predict the specifics of the upcoming stimulus) by investigating whether participants preferentially learn from the statistical regularities linking individual snapshots of objects or the relationship between the objects themselves (e.g., bird_picture1—dog_picture1, bird_picture2—dog_picture2). We find that participants fail to learn the relationships between individual snapshots, suggesting a bias towards object-level statistical regularities as opposed to merely constructing the most complete model of the input. This work moves beyond the previous existence proofs that statistical learning is possible at both very high and very low levels of abstraction (categories vs. individual objects) and suggests that, at least with the current categories and type of learner, there are biases to pick up on statistical regularities between individual objects even when robust statistical information is present at other levels of abstraction. These findings speak directly to emerging theories about how systems supporting statistical learning and prediction operate in our structure-rich environments. Moreover, the theoretical implications of the current work across multiple domains of study are already clear: statistical learning cannot be assumed to be unconstrained even if statistical learning has previously been established at a given level of abstraction when that information is presented in isolation. PMID:27139779
Humans make efficient use of natural image statistics when performing spatial interpolation.
D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S
2013-12-16
Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system exploits efficiently the statistical structure of natural images.
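The local-mean heuristic that human observers were compared against can be stated in a few lines. An illustrative sketch on a toy 3x3 patch, not the authors' code:

```python
import numpy as np

def local_mean_estimate(patch):
    """Estimate a missing central pixel as the mean of its neighbors,
    the simplest interpolation heuristic against which human (and
    optimal-observer) estimates can be compared."""
    patch = np.asarray(patch, dtype=float)
    center = (patch.shape[0] // 2, patch.shape[1] // 2)
    mask = np.ones(patch.shape, dtype=bool)
    mask[center] = False  # exclude the missing pixel itself
    return patch[mask].mean()

patch = [[10, 10, 10],
         [10,  0, 12],   # the center value is the "missing" pixel
         [12, 12, 12]]
print(local_mean_estimate(patch))  # 11.0
```

The study's finding is precisely that humans outperform this heuristic: their estimates match an optimal observer that also knows the local statistics of relative intensities, not just the neighborhood average.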
Correlation Between University Students' Kinematic Achievement and Learning Styles
NASA Astrophysics Data System (ADS)
Çirkinoǧlu, A. G.; Dem&ircidot, N.
2007-04-01
In the literature, research on kinematics has revealed that students have many difficulties connecting graphs with physics, and that the methods used in the classroom affect students' further learning. In this study, the correlation between university students' kinematics achievement and learning styles was investigated. For this purpose, a Kinematics Achievement Test and a Learning Style Inventory were administered to 573 students enrolled in General Physics 1 courses at Balikesir University in the fall semester of 2005-2006. The Kinematics Achievement Test, consisting of 12 multiple-choice and 6 open-ended questions, was developed by the researchers to assess students' understanding, interpretation, and drawing of graphs. The Learning Style Inventory, a 24-item instrument covering visual, auditory, and kinesthetic learning styles, was developed and used by Barsch. The data obtained in this study were analyzed with the necessary statistical procedures (t-tests, correlations, ANOVA, etc.) using the SPSS statistical program. Based on the research findings, tentative recommendations are made.
The company objects keep: Linking referents together during cross-situational word learning.
Zettersten, Martin; Wojcik, Erica; Benitez, Viridiana L; Saffran, Jenny
2018-04-01
Learning the meanings of words involves not only linking individual words to referents but also building a network of connections among entities in the world, concepts, and words. Previous studies reveal that infants and adults track the statistical co-occurrence of labels and objects across multiple ambiguous training instances to learn words. However, it is less clear whether, given distributional or attentional cues, learners also encode associations amongst the novel objects. We investigated the consequences of two types of cues that highlighted object-object links in a cross-situational word learning task: distributional structure - how frequently the referents of novel words occurred together - and visual context - whether the referents were seen on matching backgrounds. Across three experiments, we found that in addition to learning novel words, adults formed connections between frequently co-occurring objects. These findings indicate that learners exploit statistical regularities to form multiple types of associations during word learning.
Mathematical Representation Ability by Using Project Based Learning on the Topic of Statistics
NASA Astrophysics Data System (ADS)
Widakdo, W. A.
2017-09-01
Given the important role of mathematics in everyday life, mastery of the subject areas of mathematics is essential. Representation ability is one of the fundamental abilities used in mathematics to connect abstract ideas with logical thinking in order to understand mathematics. The researcher observed a lack of mathematical representation ability and sought an alternative solution through project-based learning. This research uses a literature study of books and journal articles to examine the importance of mathematical representation ability in mathematics learning and how project-based learning can increase this ability on the topic of statistics. The indicators for mathematical representation ability in this research are classified as visual representation (picture, diagram, graph, or table); symbolic representation (mathematical statements, mathematical notation, numerical/algebraic symbols); and verbal representation (written text). This article explains why project-based learning can influence students' mathematical representation, drawing on theories from cognitive psychology, and presents an example of project-based learning suitable for teaching statistics, a mathematics topic that is very useful for analyzing data.
Howard, James H.; Howard, Darlene V.; Dennis, Nancy A.; Kelly, Andrew J.
2008-01-01
Knowledge of sequential relationships enables future events to be anticipated and processed efficiently. Research with the serial reaction time task (SRTT) has shown that sequence learning often occurs implicitly without effort or awareness. Here we report four experiments that use a triplet-learning task (TLT) to investigate sequence learning in young and older adults. In the TLT people respond only to the last target event in a series of discrete, three-event sequences or triplets. Target predictability is manipulated by varying the triplet frequency (joint probability) and/or the statistical relationships (conditional probabilities) among events within the triplets. Results revealed that both groups learned, though older adults showed less learning of both joint and conditional probabilities. Young people used the statistical information in both cues, but older adults relied primarily on information in the second cue alone. We conclude that the TLT complements and extends the SRTT and other tasks by offering flexibility in the kinds of sequential statistical regularities that may be studied as well as by controlling event timing and eliminating motor response sequencing. PMID:18763897
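The two quantities manipulated in the TLT, triplet frequency (joint probability) and the predictability of the target given its cues (conditional probability), can be computed directly from a set of observed triplets. A minimal sketch with made-up event labels:

```python
from collections import Counter

def triplet_statistics(triplets):
    """Joint probability of each triplet, and the conditional probability
    of the final target given the two preceding cue events."""
    joint = Counter(triplets)
    pair = Counter(t[:2] for t in triplets)  # frequency of each cue pair
    total = len(triplets)
    return {t: {"joint": n / total,
                "conditional": n / pair[t[:2]]}
            for t, n in joint.items()}

# 8 of 20 trials are A-B-C, but given cues A-B the target C occurs 8/10 times.
trials = ([("A", "B", "C")] * 8 + [("A", "B", "D")] * 2
          + [("E", "F", "C")] * 10)
stats = triplet_statistics(trials)
print(stats[("A", "B", "C")])  # {'joint': 0.4, 'conditional': 0.8}
```

The example shows why the two probabilities dissociate: E-F-C has a lower joint probability per-trial-type structure yet a conditional probability of 1.0, which is exactly the kind of contrast the task uses to separate frequency learning from statistical (conditional) learning.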
Learning style-based teaching harvests a superior comprehension of respiratory physiology.
Anbarasi, M; Rajkumar, G; Krishnakumar, S; Rajendran, P; Venkatesan, R; Dinesh, T; Mohan, J; Venkidusamy, S
2015-09-01
Students entering medical college generally show vast diversity in their school education. It becomes the responsibility of teachers to motivate students and meet the needs of all diversities. One such measure is teaching students in their own preferred learning style. The present study aimed to incorporate a learning style-based teaching-learning program for medical students and to reveal its significance and utility. Learning styles of students were assessed online using the visual-auditory-kinesthetic (VAK) learning style self-assessment questionnaire. When respiratory physiology was taught, students were divided into three groups, namely, visual (n = 34), auditory (n = 44), and kinesthetic (n = 28), based on their learning style. A fourth group (the traditional group; n = 40) was formed by choosing students randomly from the above three groups. The visual, auditory, and kinesthetic groups were taught following the appropriate teaching-learning strategies. The traditional group was taught via the routine didactic lecture method. The effectiveness of this intervention was evaluated by a pretest and two posttests, posttest 1 immediately after the intervention and posttest 2 after a month. In posttest 1, one-way ANOVA showed a statistically significant difference (P = 0.005). Post hoc analysis showed significance between the kinesthetic group and the traditional group (P = 0.002). One-way ANOVA showed a significant difference in posttest 2 scores (P < 0.0001). Post hoc analysis showed significance between each of the three learning style-based groups and the traditional group [visual vs. traditional (P = 0.002), auditory vs. traditional (P = 0.03), and kinesthetic vs. traditional (P = 0.001)]. This study emphasizes that teaching methods tailored to students' style of learning definitely improve their understanding, performance, and retrieval of the subject. Copyright © 2015 The American Physiological Society.
Learning styles of first-year medical students attending Erciyes University in Kayseri, Turkey.
Baykan, Zeynep; Naçar, Melis
2007-06-01
Educational researchers postulate that every individual has a different learning style. The aim of this descriptive study was to determine the learning styles of first-year medical students using the Turkish version of the visual, auditory, read-write, kinesthetic (VARK) questionnaire. This study was performed at the Department of Medical Education of Erciyes University in February 2006. The Turkish version of the VARK questionnaire was administered to first-year medical students to determine their preferred mode of learning. According to the VARK questionnaire, students were divided into five groups (visual learners, read-write learners, auditory learners, kinesthetic learners, and multimodal learners). The unimodality preference was 36.1% and multimodality was 63.9%. Among the students who participated in the study (155 students), 23.3% were kinesthetic, 7.7% were auditory, 3.2% were visual, and 1.9% were read-write learners. Some students preferred multiple modes: bimodal (30.3%), trimodal (20.7%), and quadmodal (12.9%). The learning styles did not differ between male and female students, and no statistically significant difference was determined between the first-semester grade average points and learning styles. Knowing that our students have different preferred learning modes will help the medical instructors in our faculty develop appropriate learning approaches and explore opportunities so that they will be able to make the educational experience more productive.
Dynamic functional connectivity shapes individual differences in associative learning.
Fatima, Zainab; Kovacevic, Natasha; Misic, Bratislav; McIntosh, Anthony Randal
2016-11-01
Current neuroscientific research has shown that the brain reconfigures its functional interactions at multiple timescales. Here, we sought to link transient changes in functional brain networks to individual differences in behavioral and cognitive performance by using an active learning paradigm. Participants learned associations between pairs of unrelated visual stimuli by using feedback. Interindividual behavioral variability was quantified with a learning rate measure. By using a multivariate statistical framework (partial least squares), we identified patterns of network organization across multiple temporal scales (within a trial, millisecond; across a learning session, minute) and linked these to the rate of change in behavioral performance (fast and slow). Results indicated that posterior network connectivity was present early in the trial for fast, and later in the trial for slow performers. In contrast, connectivity in an associative memory network (frontal, striatal, and medial temporal regions) occurred later in the trial for fast, and earlier for slow performers. Time-dependent changes in the posterior network were correlated with visual/spatial scores obtained from independent neuropsychological assessments, with fast learners performing better on visual/spatial subtests. No relationship was found between functional connectivity dynamics in the memory network and visual/spatial test scores indicative of cognitive skill. By using a comprehensive set of measures (behavioral, cognitive, and neurophysiological), we report that individual variations in learning-related performance change are supported by differences in cognitive ability and time-sensitive connectivity in functional neural networks. Hum Brain Mapp 37:3911-3928, 2016. © 2016 Wiley Periodicals, Inc.
AstroML: "better, faster, cheaper" towards state-of-the-art data mining and machine learning
NASA Astrophysics Data System (ADS)
Ivezic, Zeljko; Connolly, Andrew J.; Vanderplas, Jacob
2015-01-01
We present AstroML, a Python module for machine learning and data mining built on numpy, scipy, scikit-learn, matplotlib, and astropy, and distributed under an open license. AstroML contains a growing library of statistical and machine learning routines for analyzing astronomical data in Python, loaders for several open astronomical datasets (such as SDSS and other recent major surveys), and a large suite of examples of analyzing and visualizing astronomical datasets. AstroML is especially suitable for introducing undergraduate students to numerical research projects and for graduate students to rapidly undertake cutting-edge research. The long-term goal of astroML is to provide a community repository for fast Python implementations of common tools and routines used for statistical data analysis in astronomy and astrophysics (see http://www.astroml.org).
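As a flavor of the statistical routines such a library collects, here is a numpy-only implementation of the Freedman-Diaconis histogram bin-width rule, a standard tool for visualizing astronomical distributions; the mock magnitude data are invented, and nothing below uses astroML's actual API:

```python
import numpy as np

def freedman_bin_width(data):
    """Freedman-Diaconis rule: bin width = 2 * IQR * n^(-1/3)."""
    data = np.asarray(data)
    q25, q75 = np.percentile(data, [25, 75])
    return 2.0 * (q75 - q25) * data.size ** (-1.0 / 3.0)

rng = np.random.default_rng(0)
mags = rng.normal(18.0, 0.5, size=8000)  # mock stellar magnitudes (illustrative)
width = freedman_bin_width(mags)
n_bins = int(np.ceil((mags.max() - mags.min()) / width))
print(n_bins)  # number of histogram bins suggested for these data
```

Rules like this adapt binning to the sample size and spread, which matters when comparing survey datasets of very different sizes.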
Monroy, Claire D; Gerson, Sarah A; Hunnius, Sabine
2018-05-01
Humans are sensitive to the statistical regularities in action sequences carried out by others. In the present eyetracking study, we investigated whether this sensitivity can support the prediction of upcoming actions when observing unfamiliar action sequences. First, in two between-subjects conditions, we examined whether observers would be more sensitive to statistical regularities in sequences performed by a human agent versus self-propelled 'ghost' events. Second, we investigated whether regularities are learned better when they are associated with contingent effects. Both implicit and explicit measures of learning were compared between the agent and ghost conditions. Implicit learning was measured via predictive eye movements to upcoming actions or events, and explicit learning was measured via both uninstructed reproduction of the action sequences and verbal reports of the regularities. The findings revealed that participants, regardless of condition, readily learned the regularities and made correct predictive eye movements to upcoming events during online observation. However, different patterns of explicit-learning outcomes emerged following observation: participants were most likely to re-create the sequence regularities and to verbally report them when they had observed an actor create a contingent effect. These results suggest that the shift from implicit predictions to explicit knowledge of what has been learned is facilitated when observers perceive another agent's actions and when these actions cause effects. These findings are discussed with respect to the potential role of the motor system in modulating how statistical regularities are learned and used to modify behavior.
NASA Astrophysics Data System (ADS)
Wilder, Anna
The purpose of this study was to investigate the effects of a visualization-centered curriculum, Hemoglobin: A Case of Double Identity, on conceptual understanding and representational competence in high school biology. Sixty-nine students enrolled in three sections of freshman biology taught by the same teacher participated in this study. Online Chemscape Chime computer-based molecular visualizations were incorporated into the 10-week curriculum to introduce students to fundamental structure and function relationships. Measures used in this study included a Hemoglobin Structure and Function Test, Mental Imagery Questionnaire, Exam Difficulty Survey, the Student Assessment of Learning Gains, the Group Assessment of Logical Thinking, the Attitude Toward Science in School Assessment, audiotapes of student interviews, students' artifacts, weekly unit activity surveys, informal researcher observations and a teacher's weekly questionnaire. The Hemoglobin Structure and Function Test, consisting of Parts A and B, was administered as a pre and posttest. Part A used exclusively verbal test items to measure conceptual understanding, while Part B used visual-verbal test items to measure conceptual understanding and representational competence. Results of the Hemoglobin Structure and Function pre and posttest revealed statistically significant gains in conceptual understanding and representational competence, suggesting the visualization-centered curriculum implemented in this study was effective in supporting positive learning outcomes. The large positive correlation between posttest results on Part A, comprised of all-verbal test items, and Part B, using visual-verbal test items, suggests this curriculum supported students' mutual development of conceptual understanding and representational competence. 
Evidence based on student interviews, Student Assessment of Learning Gains ratings and weekly activity surveys indicated positive attitudes toward the use of Chemscape Chime software and the computer-based molecular visualization activities as learning tools. Evidence from these same sources also indicated that students felt computer-based molecular visualization activities in conjunction with other classroom activities supported their learning. Implications for instructional design are discussed.
Learning to associate auditory and visual stimuli: behavioral and neural mechanisms.
Altieri, Nicholas; Stevenson, Ryan A; Wallace, Mark T; Wenger, Michael J
2015-05-01
The ability to effectively combine sensory inputs across modalities is vital for acquiring a unified percept of events. For example, watching a hammer hit a nail while simultaneously identifying the sound as originating from the event requires the ability to identify spatio-temporal congruencies and statistical regularities. In this study, we applied a reaction time and hazard function measure known as capacity (e.g., Townsend & Ashby, Cognitive Theory, 200-239, 1978) to quantify the extent to which observers learn paired associations between simple auditory and visual patterns in a model-theoretic manner. As expected, results showed that learning was associated with an increase in accuracy and, more significantly, an increase in capacity. The aim of this study was to associate capacity measures of multisensory learning with neural-based measures, namely mean global field power (GFP). We observed a co-variation between an increase in capacity and a decrease in GFP amplitude as learning occurred. This suggests that capacity constitutes a reliable behavioral index of efficient energy expenditure in the neural domain.
Verbal and visual memory in patients with early Parkinson's disease: effect of levodopa.
Singh, Sumit; Behari, Madhuri
2006-03-01
The effect of initiation of levodopa therapy on memory function in patients with Parkinson's disease remains poorly understood. To evaluate the effect of initiation of levodopa therapy on memory in patients with early Parkinson's disease. Prospective case-control study. Seventeen patients with early Parkinson's disease were evaluated for verbal memory using Rey's auditory verbal learning test, and for visual memory using Benton's visual retention test and the Form sequence learning test. UPDRS scores, Hoehn and Yahr staging, Schwab and England activities-of-daily-living scores, Hamilton's depression rating scale, and MMSE were also evaluated. Six controls were evaluated according to a similar study protocol. Levodopa was then prescribed to the cases, and the same tests were repeated on all subjects after 12 weeks. The mean age of the patients was 59.8 (± 12.9) yr, with a mean disease duration of 3.26 (± 2.06) yr. The mean UPDRS score of patients was 36.52 (± 15.84). Controls were of a similar age and sex distribution. A statistically significant improvement in scores on the UPDRS, Hamilton's depression scale, and Schwab and England scale, and a statistically significant deterioration in visual memory scores, was observed in patients with PD after starting levodopa, as compared to their baseline scores. There was no correlation between the degree of deterioration and the dose of levodopa. Initiation of levodopa therapy in patients with early and stable Parkinson's disease is associated with deterioration in visual memory function, with relative preservation of verbal memory.
Efficient Multi-Concept Visual Classifier Adaptation in Changing Environments
2016-09-01
yet to be discussed in existing supervised multi-concept visual perception systems used in robotics applications.1,5–7 Annotation of images is... Autonomous robot navigation in highly populated pedestrian zones. J Field Robotics. 2015;32(4):565–589. 3. Milella A, Reina G, Underwood J. A self-learning framework for statistical ground classification using RADAR and monocular vision. J Field Robotics. 2015;32(1):20–41. 4. Manjanna S, Dudek G
Tromans, James Matthew; Harris, Mitchell; Stringer, Simon Maitland
2011-01-01
Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.
Nakanishi, Rine; Sankaran, Sethuraman; Grady, Leo; Malpeso, Jenifer; Yousfi, Razik; Osawa, Kazuhiro; Ceponiene, Indre; Nazarat, Negin; Rahmani, Sina; Kissel, Kendall; Jayawardena, Eranthi; Dailing, Christopher; Zarins, Christopher; Koo, Bon-Kwon; Min, James K; Taylor, Charles A; Budoff, Matthew J
2018-03-23
Our goal was to evaluate the efficacy of a fully automated method for assessing the image quality (IQ) of coronary computed tomography angiography (CCTA). The machine learning method was trained using 75 CCTA studies by mapping features (noise, contrast, misregistration scores, and un-interpretability index) to an IQ score based on manual ground truth data. The automated method was validated on a set of 50 CCTA studies and subsequently tested on a new set of 172 CCTA studies against visual IQ scores on a 5-point Likert scale. The area under the curve in the validation set was 0.96. In the 172 CCTA studies, our method yielded a Cohen's kappa statistic for the agreement between automated and visual IQ assessment of 0.67 (p < 0.01). Among studies graded visually as good to excellent (n = 163), fair (n = 6), and poor (n = 3), 155, 5, and 2 patients, respectively, received an automated IQ score > 50%. Fully automated assessment of the IQ of CCTA data sets by machine learning was reproducible and provided similar results compared with visual analysis within the limits of inter-operator variability. • The proposed method enables automated and reproducible image quality assessment. • Machine learning and visual assessments yielded comparable estimates of image quality. • Automated assessment potentially allows for more standardised image quality. • Image quality assessment enables standardization of clinical trial results across different datasets.
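Cohen's kappa, the agreement statistic reported above, is simply observed agreement corrected for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A minimal sketch with invented ratings (not the study's data):

```python
import numpy as np

def cohens_kappa(r1, r2, labels):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)  # observed agreement
    # Chance agreement: product of each rater's marginal label frequencies
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in labels)
    return (po - pe) / (1.0 - pe)

# Invented binary grades from two "raters" (e.g., automated vs. visual)
auto_grades = [1, 1, 1, 0, 0, 0]
visual_grades = [1, 1, 0, 0, 0, 1]
kappa = cohens_kappa(auto_grades, visual_grades, labels=[0, 1])
print(round(kappa, 3))  # → 0.333
```

A kappa of 0.67, as in the study, conventionally indicates substantial agreement beyond chance.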
Investigating implicit statistical learning mechanisms through contextual cueing.
Goujon, Annabelle; Didierjean, André; Thorpe, Simon
2015-09-01
Since its inception, the contextual cueing (CC) paradigm has generated considerable interest in various fields of cognitive sciences because it constitutes an elegant approach to understanding how statistical learning (SL) mechanisms can detect contextual regularities during a visual search. In this article we review and discuss five aspects of CC: (i) the implicit nature of learning, (ii) the mechanisms involved in CC, (iii) the mediating factors affecting CC, (iv) the generalization of CC phenomena, and (v) the dissociation between implicit and explicit CC phenomena. The findings suggest that implicit SL is an inherent component of ongoing processing which operates through clustering, associative, and reinforcement processes at various levels of sensory-motor processing, and might result from simple spike-timing-dependent plasticity. Copyright © 2015 Elsevier Ltd. All rights reserved.
Complete scanpaths analysis toolbox.
Augustyniak, Piotr; Mikrut, Zbigniew
2006-01-01
This paper presents a complete open software environment for the control, data processing, and assessment of visual experiments. Visual experiments are widely used in research on human perception physiology, and the results are applicable to various visual information-based man-machine interfaces, human-emulated automatic visual systems, and scanpath-based learning of perceptual habits. The toolbox is designed for the Matlab platform and supports an infra-red reflection-based eyetracker in calibration and scanpath analysis modes. Toolbox procedures are organized in three layers: the lower one communicates with the eyetracker output file, the middle one detects scanpath events on a physiological background, and the upper one consists of experiment schedule scripts, statistics, and summaries. Several examples of visual experiments carried out with the presented toolbox complete the paper.
Structure Learning in Bayesian Sensorimotor Integration
Genewein, Tim; Hez, Eduard; Razzaghpanah, Zeynab; Braun, Daniel A.
2015-01-01
Previous studies have shown that sensorimotor processing can often be described by Bayesian learning, in particular the integration of prior and feedback information depending on its degree of reliability. Here we test the hypothesis that the integration process itself can be tuned to the statistical structure of the environment. We exposed human participants to a reaching task in a three-dimensional virtual reality environment where we could displace the visual feedback of their hand position in a two-dimensional plane. When introducing statistical structure between the two dimensions of the displacement, we found that over the course of several days participants adapted their feedback integration process in order to exploit this structure for performance improvement. In control experiments we found that this adaptation process critically depended on performance feedback and could not be induced by verbal instructions. Our results suggest that structural learning is an important meta-learning component of Bayesian sensorimotor integration. PMID:26305797
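The reliability-dependent integration described here is commonly formalized as precision-weighted fusion of a Gaussian prior with Gaussian feedback: each source is weighted by its inverse variance. A minimal sketch with invented numbers:

```python
def integrate_gaussian(mu_prior, var_prior, x_obs, var_obs):
    """Precision-weighted integration of a Gaussian prior with a Gaussian observation."""
    precision = 1.0 / var_prior + 1.0 / var_obs
    mu_post = (mu_prior / var_prior + x_obs / var_obs) / precision
    return mu_post, 1.0 / precision  # posterior mean and variance

# Invented numbers: vague prior at 0 (variance 4), reliable feedback at 2 (variance 1)
mu, var = integrate_gaussian(0.0, 4.0, 2.0, 1.0)
print(round(mu, 3), round(var, 3))  # → 1.6 0.8
```

The posterior is pulled toward the more reliable source, and its variance is smaller than either input's; the study's point is that the weighting itself can adapt to environmental structure.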
Implicit sequence learning in deaf children with cochlear implants.
Conway, Christopher M; Pisoni, David B; Anaya, Esperanza M; Karpicke, Jennifer; Henning, Shirley C
2011-01-01
Deaf children with cochlear implants (CIs) represent an intriguing opportunity to study neurocognitive plasticity and reorganization when sound is introduced following a period of auditory deprivation early in development. Although it is common to consider deafness as affecting hearing alone, it may be the case that auditory deprivation leads to more global changes in neurocognitive function. In this paper, we investigate implicit sequence learning abilities in deaf children with CIs using a novel task that measured learning through improvement to immediate serial recall for statistically consistent visual sequences. The results demonstrated two key findings. First, the deaf children with CIs showed disturbances in their visual sequence learning abilities relative to the typically developing normal-hearing children. Second, sequence learning was significantly correlated with a standardized measure of language outcome in the CI children. These findings suggest that a period of auditory deprivation has secondary effects related to general sequencing deficits, and that disturbances in sequence learning may at least partially explain why some deaf children still struggle with language following cochlear implantation. © 2010 Blackwell Publishing Ltd.
Wieland, Patience S; Willis, Jana; Peters, Michelle L; O'Toole, Robin S
2018-07-01
The purpose of this experimental research study was to explore how modality and learning style preferences impact non-prescribing, first-year Licensed Vocational Nurse (LVN) students' recall of vocabulary. Independent t-test results indicated a statistically significant mean difference in short-term recall of pharmacological and psychiatric terms, with learners receiving visual text instruction recalling significantly more vocabulary than learners receiving audio text instruction. A correlation was not found between learning preferences and vocabulary recall. Copyright © 2018 Elsevier Ltd. All rights reserved.
Supramodal processing optimizes visual perceptual learning and plasticity.
Zilber, Nicolas; Ciuciu, Philippe; Gramfort, Alexandre; Azizi, Leila; van Wassenhove, Virginie
2014-06-01
Multisensory interactions are ubiquitous in cortex and it has been suggested that sensory cortices may be supramodal i.e. capable of functional selectivity irrespective of the sensory modality of inputs (Pascual-Leone and Hamilton, 2001; Renier et al., 2013; Ricciardi and Pietrini, 2011; Voss and Zatorre, 2012). Here, we asked whether learning to discriminate visual coherence could benefit from supramodal processing. To this end, three groups of participants were briefly trained to discriminate which of a red or green intermixed population of random-dot-kinematograms (RDKs) was most coherent in a visual display while being recorded with magnetoencephalography (MEG). During training, participants heard no sound (V), congruent acoustic textures (AV) or auditory noise (AVn); importantly, congruent acoustic textures shared the temporal statistics - i.e. coherence - of visual RDKs. After training, the AV group significantly outperformed participants trained in V and AVn although they were not aware of their progress. In pre- and post-training blocks, all participants were tested without sound and with the same set of RDKs. When contrasting MEG data collected in these experimental blocks, selective differences were observed in the dynamic pattern and the cortical loci responsive to visual RDKs. First and common to all three groups, vlPFC showed selectivity to the learned coherence levels whereas selectivity in visual motion area hMT+ was only seen for the AV group. Second and solely for the AV group, activity in multisensory cortices (mSTS, pSTS) correlated with post-training performances; additionally, the latencies of these effects suggested feedback from vlPFC to hMT+ possibly mediated by temporal cortices in AV and AVn groups. 
Altogether, we interpret our results in the context of the Reverse Hierarchy Theory of learning (Ahissar and Hochstein, 2004) in which supramodal processing optimizes visual perceptual learning by capitalizing on sensory-invariant representations - here, global coherence levels across sensory modalities. Copyright © 2014 Elsevier Inc. All rights reserved.
Webber, C J
2001-05-01
This article shows analytically that single-cell learning rules that give rise to oriented and localized receptive fields, when their synaptic weights are randomly and independently initialized according to a plausible assumption of zero prior information, will generate visual codes that are invariant under two-dimensional translations, rotations, and scale magnifications, provided that the statistics of their training images are sufficiently invariant under these transformations. Such codes span different image locations, orientations, and size scales with equal economy. Thus, single-cell rules could account for the spatial scaling property of the cortical simple-cell code. This prediction is tested computationally by training with natural scenes; it is demonstrated that a single-cell learning rule can give rise to simple-cell receptive fields spanning the full range of orientations, image locations, and spatial frequencies (except at the extreme high and low frequencies at which the scale invariance of the statistics of digitally sampled images must ultimately break down, because of the image boundary and the finite pixel resolution). Thus, no constraint on completeness, or any other coupling between cells, is necessary to induce the visual code to span wide ranges of locations, orientations, and size scales. This prediction is made using the theory of spontaneous symmetry breaking, which we have previously shown can also explain the data-driven self-organization of a wide variety of transformation invariances in neurons' responses, such as the translation invariance of complex cell response.
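One classic member of the family of single-cell learning rules discussed here is Oja's rule, which drives a randomly initialized weight vector toward a unit-norm principal component of the input statistics. A toy sketch, with two-dimensional inputs standing in for image patches and all parameters invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5000, 2
# Toy inputs: the first dimension carries far more variance than the second
x = rng.normal(size=(n, d)) * np.array([3.0, 0.5])

w = rng.normal(size=d)  # random initial synaptic weights (zero prior information)
eta = 0.005             # small learning rate for stability
for xi in x:
    y = w @ xi                     # the single cell's linear response
    w += eta * y * (xi - y * w)    # Oja's rule: Hebbian growth with implicit normalization

# w converges toward the unit-norm axis of highest input variance
print(round(float(np.linalg.norm(w)), 2), abs(w[0]) > abs(w[1]))
```

The decay term -eta * y**2 * w keeps the weights bounded without any explicit constraint across cells, in the spirit of the article's single-cell argument.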
Thomas, Cyril; Didierjean, André; Maquestiaux, François; Goujon, Annabelle
2018-04-12
Since the seminal study by Chun and Jiang (Cognitive Psychology, 36, 28-71, 1998), a large body of research based on the contextual-cueing paradigm has shown that the cognitive system is capable of extracting statistical contingencies from visual environments. Most of these studies have focused on how individuals learn regularities found within an intratrial temporal window: A context predicts the target position within a given trial. However, Ono, Jiang, and Kawahara (Journal of Experimental Psychology, 31, 703-712, 2005) provided evidence of an intertrial implicit-learning effect when a distractor configuration in preceding trials N - 1 predicted the target location in trials N. The aim of the present study was to gain further insight into this effect by examining whether it occurs when predictive relationships are impeded by interfering task-relevant noise (Experiments 2 and 3) or by a long delay (Experiments 1, 4, and 5). Our results replicated the intertrial contextual-cueing effect, which occurred in the condition of temporally close contingencies. However, there was no evidence of integration across long-range spatiotemporal contingencies, suggesting a temporal limitation of statistical learning.
Beyond game effectiveness. Part II, a qualitative study of multi-role experiential learning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willis, Matthew; Tucker, Eilish Marie; Raybourn, Elaine Marie
The present paper is the second in a series published at I/ITSEC that seeks to explain the efficacy of multirole experiential learning employed to create engaging game-based training methods transitioned to the U.S. Army, U.S. Army Special Forces, Civil Affairs, and Psychological Operations teams. The first publication (I/ITSEC 2009) summarized findings from a quantitative study that investigated experiential learning in the multi-player, PC-based game module transitioned to PEO-STRI, DARWARS Ambush! NK (non-kinetic). The 2009 publication reported that participants of multi-role (Player and Reflective Observer/Evaluator) game-based training reported statistically significant learning and engagement. Additionally when the means of the two groups (Player and Reflective Observer/Evaluator) were compared, they were not statistically significantly different from each other. That is to say that both playing as well as observing/evaluating were engaging learning modalities. The Observer/Evaluator role was designed to provide an opportunity for real-time reflection and meta-cognitive learning during game play. Results indicated that this role was an engaging way to learn about communication, that participants learned something about cultural awareness, and that the skills they learned were helpful in problem solving and decision-making. The present paper seeks to continue to understand what and how users of non-kinetic game-based missions learn by revisiting the 2009 quantitative study with further investigation such as stochastic player performance analysis using latent semantic analyses and graph visualizations to correlate against human coder ratings and pre- and post-test self-analysis. The results are applicable to First-Person game-based learning systems designed to enhance trainee intercultural communication, interpersonal skills, and adaptive thinking.
In the full paper, we discuss results obtained from data collected from 78 research participants of diverse backgrounds who trained by engaging in tasks directly, as well as observing and evaluating peer performance in real-time. The goal is two-fold. One is to quantify and visualize detailed player performance data coming from game play transcription to give further understanding to the results in the 2009 I/ITSEC paper. The second is to develop a set of technologies from this quantification and visualization approach into a generalized application tool to be used to aid in future games development of player/learner models and game adaptation algorithms.
Chen, Chi-Hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen
2017-08-01
Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories based on the commonalities across training stimuli. Experiment 2 replicated the first experiment and further examined whether speakers of Mandarin, a language in which final syllables of object names are more predictive of category membership than English, were able to learn words and form object categories when trained with the same type of structures. The results indicate that both groups of learners successfully extracted multiple levels of co-occurrence and used them to learn words and object categories simultaneously. However, marked individual differences in performance were also found, suggesting possible interference and competition in processing the two concurrent streams of regularities. Copyright © 2016 Cognitive Science Society, Inc.
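Cross-situational learning of this kind is often modeled with a word-object co-occurrence count matrix: any single trial is ambiguous, but counts accumulated across trials single out the correct mappings. A minimal sketch with made-up pseudowords and object labels:

```python
import numpy as np

words = ["bosa", "gasser", "manu"]
objects = ["O1", "O2", "O3"]
# Each trial presents two words and two objects; referents are ambiguous within a trial
trials = [({"bosa", "gasser"}, {"O1", "O2"}),
          ({"bosa", "manu"},   {"O1", "O3"}),
          ({"gasser", "manu"}, {"O2", "O3"})]

counts = np.zeros((len(words), len(objects)))
for ws, obs in trials:
    for w in ws:
        for o in obs:
            counts[words.index(w), objects.index(o)] += 1

# Accumulated co-occurrence disambiguates each word's referent
mapping = {w: objects[int(np.argmax(counts[i]))] for i, w in enumerate(words)}
print(mapping)  # → {'bosa': 'O1', 'gasser': 'O2', 'manu': 'O3'}
```

The same count matrix can support category formation if columns share features, which is the "multiple statistics" point of the experiments above.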
The Music of Mathematics: Toward a New Problem Typology
ERIC Educational Resources Information Center
Quarfoot, David
2015-01-01
Halmos (1980) once described problems and their solutions as "the heart of mathematics". Following this line of thinking, one might naturally ask: "What, then, is the heart of problems?" In this work, I attempt to answer this question using techniques from statistics, information visualization, and machine learning. I begin the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potter, Kristin C; Brunhart-Lupo, Nicholas J; Bush, Brian W
We have developed a framework for the exploration, design, and planning of energy systems that combines interactive visualization with machine-learning based approximations of simulations through a general purpose dataflow API. Our system provides a visual interface allowing users to explore an ensemble of energy simulations representing a subset of the complex input parameter space, and spawn new simulations to 'fill in' input regions corresponding to new energy system scenarios. Unfortunately, many energy simulations are far too slow to provide interactive responses. To support interactive feedback, we are developing reduced-form models via machine learning techniques, which provide statistically sound estimates of the full simulations at a fraction of the computational cost and which are used as proxies for the full-form models. Fast computation and an agile dataflow enhance the engagement with energy simulations, and allow researchers to better allocate computational resources to capture informative relationships within the system and provide a low-cost method for validating and quality-checking large-scale modeling efforts.
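The reduced-form-model idea can be sketched in a few lines: sample the expensive simulation sparsely, fit a cheap statistical approximation, and query the surrogate interactively. The quadratic 'simulation' below is a stand-in for illustration, not the paper's model:

```python
import numpy as np

# Stand-in for a slow energy simulation (illustrative only)
def slow_simulation(x):
    return 0.5 * x**2 - x + 2.0

# Sample the input space sparsely, as if each run were expensive
x_train = np.linspace(0.0, 4.0, 9)
y_train = slow_simulation(x_train)

# Reduced-form surrogate: least-squares polynomial fit, queried at interactive speed
coeffs = np.polyfit(x_train, y_train, deg=2)
surrogate = np.poly1d(coeffs)

x_new = 2.5
print(round(surrogate(x_new), 3), round(slow_simulation(x_new), 3))  # → 2.625 2.625
```

In practice the surrogate (a regression or other learned model) is validated against held-out full-form runs, and new full simulations are spawned where the surrogate is least trustworthy.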
Task specificity of attention training: the case of probability cuing
Jiang, Yuhong V.; Swallow, Khena M.; Won, Bo-Yeong; Cistera, Julia D.; Rosenbaum, Gail M.
2014-01-01
Statistical regularities in our environment enhance perception and modulate the allocation of spatial attention. Surprisingly little is known about how learning-induced changes in spatial attention transfer across tasks. In this study, we investigated whether a spatial attentional bias learned in one task transfers to another. Most of the experiments began with a training phase in which a search target was more likely to be located in one quadrant of the screen than in the other quadrants. An attentional bias toward the high-probability quadrant developed during training (probability cuing). In a subsequent, testing phase, the target's location distribution became random. In addition, the training and testing phases were based on different tasks. Probability cuing did not transfer between visual search and a foraging-like task. However, it did transfer between various types of visual search tasks that differed in stimuli and difficulty. These data suggest that different visual search tasks share a common and transferrable learned attentional bias. However, this bias is not shared by high-level, decision-making tasks such as foraging. PMID:25113853
NASA Astrophysics Data System (ADS)
Nagai, Yukie; Hosoda, Koh; Morita, Akio; Asada, Minoru
This study argues how human infants acquire the ability of joint attention through interactions with their caregivers, from the viewpoint of cognitive developmental robotics. In this paper, a mechanism by which a robot acquires sensorimotor coordination for joint attention through bootstrap learning is described. Bootstrap learning is a process by which a learner acquires higher capabilities through interactions with its environment based on embedded lower capabilities, even if the learner receives no external evaluation and the environment is not controlled. The proposed mechanism for bootstrap learning of joint attention consists of the robot's embedded mechanisms: visual attention and learning with self-evaluation. The former finds and attends to a salient object in the robot's field of view, and the latter evaluates the success of visual attention (not joint attention) and then learns the sensorimotor coordination. Since the object which the robot looks at based on visual attention does not always correspond to the object which the caregiver is looking at in an environment including multiple objects, the robot may encounter incorrect learning situations for joint attention as well as correct ones. However, the robot is expected to statistically discard the learning data from the incorrect situations as outliers, because of their weaker correlation between sensor input and motor output compared with the correct ones, and consequently to acquire appropriate sensorimotor coordination for joint attention even if the caregiver does not provide any task evaluation to the robot. The experimental results show the validity of the proposed mechanism. It is suggested that the proposed mechanism could explain the developmental mechanism of infants' joint attention, because the learning process of the robot's joint attention can be regarded as equivalent to the developmental process in infants.
Stochastic Resonance in Signal Detection and Human Perception
2006-07-05
learning scheme performing a stochastic gradient ascent on the SNR to determine the optimal noise level based on the samples from the process. Rather than... produce some SR effect in threshold neurons, and a new statistically robust learning law was proposed to find the optimal noise level. [McDonnell... Ultimately, we know that it is the brain that responds to a visual stimulus, causing neurons to fire. Conceivably, if we understood the effect of the noise PDF
Adding statistical regularity results in a global slowdown in visual search.
Vaskevich, Anna; Luria, Roy
2018-05-01
Current statistical learning theories predict that embedding implicit regularities within a task should further improve online performance, beyond general practice. We challenged this assumption by contrasting performance in a visual search task containing either a consistent-mapping (regularity) condition, a random-mapping condition, or both conditions, mixed. Surprisingly, performance in a random visual search, without any regularity, was better than performance in a mixed design search that contained a beneficial regularity. This result was replicated using different stimuli and different regularities, suggesting that mixing consistent and random conditions leads to an overall slowing down of performance. Relying on the predictive-processing framework, we suggest that this global detrimental effect depends on the validity of the regularity: when its predictive value is low, as it is in the case of a mixed design, reliance on all prior information is reduced, resulting in a general slowdown. Our results suggest that our cognitive system does not maximize speed, but rather continues to gather and implement statistical information at the expense of a possible slowdown in performance. Copyright © 2018 Elsevier B.V. All rights reserved.
Cumulative sum analysis score and phacoemulsification competency learning curve.
Vedana, Gustavo; Cardoso, Filipe G; Marcon, Alexandre S; Araújo, Licio E K; Zanon, Matheus; Birriel, Daniella C; Watte, Guilherme; Jun, Albert S
2017-01-01
To use the cumulative sum analysis score (CUSUM) to objectively construct the learning curve of phacoemulsification competency. Three second-year residents and an experienced consultant were monitored for a series of 70 phacoemulsification cases each, and their series were analysed by CUSUM with regard to posterior capsule rupture (PCR) and best-corrected visual acuity. The acceptable rate for PCR was <5% (lower limit h) and the unacceptable rate was >10% (upper limit h). The acceptable rate for best-corrected visual acuity worse than 20/40 was <10% (lower limit h) and the unacceptable rate was >20% (upper limit h). The area between the lower limit h and the upper limit h is called the decision interval. There was no statistically significant difference in mean age, sex, or cataract grades between groups. The first trainee achieved PCR CUSUM competency at his 22nd case. His best-corrected visual acuity CUSUM entered the decision interval at his third case and stayed there until the end, never reaching competency. The second trainee achieved PCR CUSUM competency at his 39th case. He reached best-corrected visual acuity CUSUM competency at his 22nd case. The third trainee achieved PCR CUSUM competency at his 41st case. He reached best-corrected visual acuity CUSUM competency at his 14th case. The learning curve of competency in phacoemulsification can be constructed by CUSUM; on average, it took 38 cases for each trainee to achieve competency.
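A minimal sketch of one standard Bernoulli CUSUM formulation for a binary surgical outcome. The increments and decision limits used in the study follow from the chosen acceptable (p0) and unacceptable (p1) rates; the parameter values and case sequence below are illustrative, not the study's data.

```python
import math

def cusum_scores(outcomes, p0=0.05, p1=0.10):
    """Bernoulli CUSUM for a binary failure indicator (1 = PCR occurred).

    Uses log-likelihood-ratio increments: each success subtracts a small
    amount, each failure adds a larger one, so the score drifts downward
    while performance is at the acceptable rate p0 and upward when it
    approaches the unacceptable rate p1.
    """
    s_fail = math.log(p1 / p0)               # > 0: penalty for a failure
    s_ok = math.log((1 - p1) / (1 - p0))     # < 0: credit for a success
    scores, s = [], 0.0
    for x in outcomes:
        s += s_fail if x else s_ok
        scores.append(s)
    return scores

# 40 cases with a single posterior capsule rupture at case 10
trace = cusum_scores([1 if i == 10 else 0 for i in range(40)])
# a sustained downward trend (crossing a pre-set lower limit h)
# is read as evidence of competency
```

Plotting such a trace against the lower and upper limits h yields the learning curve described in the abstract.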
NASA Astrophysics Data System (ADS)
Garrido, Marta Isabel; Teng, Chee Leong James; Taylor, Jeremy Alexander; Rowe, Elise Genevieve; Mattingley, Jason Brett
2016-06-01
The ability to learn about regularities in the environment and to make predictions about future events is fundamental for adaptive behaviour. We have previously shown that people can implicitly encode statistical regularities and detect violations therein, as reflected in neuronal responses to unpredictable events that carry a unique prediction error signature. In the real world, however, learning about regularities will often occur in the context of competing cognitive demands. Here we asked whether learning of statistical regularities is modulated by concurrent cognitive load. We compared electroencephalographic metrics associated with responses to pure-tone sounds with frequencies sampled from narrow or wide Gaussian distributions. We showed that outliers evoked a larger response than tones in the centre of the stimulus distribution (i.e., an effect of surprise) and that this difference was greater for physically identical outliers in the narrow than in the broad distribution. These results demonstrate an early neurophysiological marker of the brain's ability to implicitly encode complex statistical structure in the environment. Moreover, we manipulated concurrent cognitive load by having participants perform a visual working memory task while listening to these streams of sounds. We again observed greater prediction error responses in the narrower distribution under both low and high cognitive load. Furthermore, there was no reliable reduction in prediction error magnitude under high relative to low cognitive load. Our findings suggest that statistical learning is not a capacity-limited process, and that it proceeds automatically even when cognitive resources are taxed by concurrent demands.
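The narrow-versus-broad outlier effect has a simple information-theoretic reading: the same tone carries more surprise under a narrower Gaussian belief. A sketch with illustrative frequencies (not the study's stimuli):

```python
import math

def gaussian_surprisal(x, mu, sigma):
    """Shannon surprise -log p(x) under a Gaussian belief about tone frequency."""
    return 0.5 * math.log(2 * math.pi * sigma**2) + (x - mu)**2 / (2 * sigma**2)

mu = 500.0        # centre frequency in Hz (illustrative)
outlier = 800.0   # a physically identical outlier tone
narrow = gaussian_surprisal(outlier, mu, sigma=50.0)
wide = gaussian_surprisal(outlier, mu, sigma=150.0)
# the same tone is more surprising under the narrow distribution
```

This is the qualitative pattern the EEG prediction error responses tracked.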
Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet
Rolls, Edmund T.
2012-01-01
Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus. PMID:22723777
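The temporal-continuity idea can be written down compactly. In the trace rule, the postsynaptic term is an exponentially decaying trace of recent activity, ȳ(τ) = (1−η)·y(τ) + η·ȳ(τ−1), so weights are strengthened by inputs that arrive while the trace of a previous view of the same object is still active. A minimal sketch with illustrative parameter values:

```python
def trace_rule_update(w, x, y_trace_prev, y, eta=0.8, alpha=0.1):
    """One step of the trace learning rule: delta_w = alpha * y_trace * x,
    where the postsynaptic trace mixes current activity with the trace
    carried over from the previous (transformed) view of the object."""
    y_trace = (1 - eta) * y + eta * y_trace_prev
    w_new = [wi + alpha * y_trace * xi for wi, xi in zip(w, x)]
    return w_new, y_trace

# one update from zero weights, with no prior trace
w_new, y_trace = trace_rule_update([0.0, 0.0], [1.0, 2.0], y_trace_prev=0.0, y=1.0)
```

With η = 0 the rule reduces to plain Hebbian learning; the trace term is what binds successive transforms of one object to the same output neurons.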
Language experience changes subsequent learning
Onnis, Luca; Thiessen, Erik
2013-01-01
What are the effects of experience on subsequent learning? We explored the effects of language-specific word order knowledge on the acquisition of sequential conditional information. Korean and English adults were engaged in a sequence learning task involving three different sets of stimuli: auditory linguistic (nonsense syllables), visual non-linguistic (nonsense shapes), and auditory non-linguistic (pure tones). The forward and backward probabilities between adjacent elements generated two equally probable and orthogonal perceptual parses of the elements, such that any significant preference at test must be due to either general cognitive biases, or prior language-induced biases. We found that language modulated parsing preferences with the linguistic stimuli only. Intriguingly, these preferences are congruent with the dominant word order patterns of each language, as corroborated by corpus analyses, and are driven by probabilistic preferences. Furthermore, although the Korean individuals had received extensive formal explicit training in English and lived in an English-speaking environment, they exhibited statistical learning biases congruent with their native language. Our findings suggest that mechanisms of statistical sequential learning are implicated in language across the lifespan, and experience with language may affect cognitive processes and later learning. PMID:23200510
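The forward and backward probabilities referred to above are the transitional statistics between adjacent elements. A sketch of how both are estimated from a stream; the toy stream here is hypothetical, not the experiment's stimuli:

```python
from collections import Counter

def transitional_probs(seq):
    """Forward TP P(y|x) and backward TP P(x|y) for adjacent pairs in seq."""
    pairs = Counter(zip(seq, seq[1:]))
    first = Counter(seq[:-1])    # counts of elements in pair-initial position
    second = Counter(seq[1:])    # counts of elements in pair-final position
    fwd = {(x, y): c / first[x] for (x, y), c in pairs.items()}
    bwd = {(x, y): c / second[y] for (x, y), c in pairs.items()}
    return fwd, bwd

stream = list("abab") * 10  # toy stream in which 'b' always follows 'a'
fwd, bwd = transitional_probs(stream)
```

By balancing forward and backward probabilities across boundaries, a stream can be made equally parseable in two orthogonal ways, so any test preference must come from a bias rather than from the statistics themselves.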
Kasuga, Shoko; Kurata, Makiko; Liu, Meigen; Ushiba, Junichi
2015-05-01
The human motor learning system paradoxically interferes with motor performance when visual information is mirror-reversed (MR), because the normal movement error correction further aggravates the error. This error-increasing mechanism makes even a simple reaching task difficult to perform, but it can be overcome by altering the error correction rule during the trials. To isolate the factors that trigger learners to change the error correction rule, we manipulated the gain of visual angular errors while participants made arm-reaching movements with mirror-reversed visual feedback, and compared the timing of the rule alteration between groups with normal or reduced gain. Trial-by-trial changes in the visual angular error were tracked to explain the timing of the change in the error correction rule. Under both gain conditions, visual angular errors increased under the MR transformation and then suddenly decreased after three to five trials of increase. The increase leveled off at different amplitudes in the two groups, nearly proportional to the visual gain. The findings suggest that the alteration of the error correction rule does not depend on the amplitude of visual angular errors, and is possibly determined by the number of trials over which the errors increased or by the statistical properties of the environment. The current results encourage future intensive studies focusing on the exact rule-change mechanism. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
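The error-increasing mechanism can be captured in a few lines: under mirror reversal the seen error has the opposite sign to the motor error, so the normal correction rule amplifies it, while flipping the rule restores convergence. A sketch with made-up gains and learning rate, not the paper's model:

```python
def reach_errors(n_trials, gain=1.0, rule_sign=+1, lr=0.5):
    """Simulate angular error over reaches with mirror-reversed feedback.

    rule_sign=+1 is the normal error-correcting update (which the mirror
    turns into error amplification); rule_sign=-1 is the altered rule.
    """
    motor_err, errors = 10.0, []
    for _ in range(n_trials):
        seen_err = -gain * motor_err              # the mirror flips the sign
        motor_err -= rule_sign * lr * seen_err    # error-correcting update
        errors.append(abs(seen_err))
    return errors

normal = reach_errors(5)                  # errors grow trial by trial
flipped = reach_errors(5, rule_sign=-1)   # errors shrink after the rule change
```

Reducing `gain` scales the seen errors without changing which rule converges, mirroring the study's manipulation.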
Urbanowicz, Ryan J.; Granizo-Mackenzie, Ambrose; Moore, Jason H.
2014-01-01
Michigan-style learning classifier systems (M-LCSs) represent an adaptive and powerful class of evolutionary algorithms which distribute the learned solution over a sizable population of rules. However their application to complex real world data mining problems, such as genetic association studies, has been limited. Traditional knowledge discovery strategies for M-LCS rule populations involve sorting and manual rule inspection. While this approach may be sufficient for simpler problems, the confounding influence of noise and the need to discriminate between predictive and non-predictive attributes calls for additional strategies. Additionally, tests of significance must be adapted to M-LCS analyses in order to make them a viable option within fields that require such analyses to assess confidence. In this work we introduce an M-LCS analysis pipeline that combines uniquely applied visualizations with objective statistical evaluation for the identification of predictive attributes, and reliable rule generalizations in noisy single-step data mining problems. This work considers an alternative paradigm for knowledge discovery in M-LCSs, shifting the focus from individual rules to a global, population-wide perspective. We demonstrate the efficacy of this pipeline applied to the identification of epistasis (i.e., attribute interaction) and heterogeneity in noisy simulated genetic association data. PMID:25431544
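The abstract notes that tests of significance must be adapted to M-LCS analyses. A generic, model-agnostic way to do this is a permutation test on an attribute-importance score; the scoring function below is a stand-in, not the pipeline's actual statistic:

```python
import random

def permutation_p(values, stat, n_perm=999, seed=0):
    """Permutation p-value: compare the observed statistic against the
    distribution obtained by shuffling the values."""
    rng = random.Random(seed)
    observed = stat(values)
    vals = list(values)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(vals)
        if stat(vals) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction

# toy example: are the first five entries unusually enriched for 1s?
labels = [1] * 10 + [0] * 10
p = permutation_p(labels, stat=lambda v: sum(v[:5]))
```

Because the null distribution is built from the data itself, the same recipe applies to any population-wide rule statistic without distributional assumptions.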
Goschy, Harriet; Bakos, Sarolta; Müller, Hermann J; Zehetleitner, Michael
2014-01-01
Targets in a visual search task are detected faster if they appear in a probable target region as compared to a less probable target region, an effect which has been termed "probability cueing." The present study investigated whether probability cueing cannot only speed up target detection, but also minimize distraction by distractors in probable distractor regions as compared to distractors in less probable distractor regions. To this end, three visual search experiments with a salient, but task-irrelevant, distractor ("additional singleton") were conducted. Experiment 1 demonstrated that observers can utilize uneven spatial distractor distributions to selectively reduce interference by distractors in frequent distractor regions as compared to distractors in rare distractor regions. Experiments 2 and 3 showed that intertrial facilitation, i.e., distractor position repetitions, and statistical learning (independent of distractor position repetitions) both contribute to the probability cueing effect for distractor locations. Taken together, the present results demonstrate that probability cueing of distractor locations has the potential to serve as a strong attentional cue for the shielding of likely distractor locations.
Visual feature-tolerance in the reading network.
Rauschecker, Andreas M; Bowen, Reno F; Perry, Lee M; Kevan, Alison M; Dougherty, Robert F; Wandell, Brian A
2011-09-08
A century of neurology and neuroscience shows that seeing words depends on ventral occipital-temporal (VOT) circuitry. Typically, reading is learned using high-contrast line-contour words. We explored whether a specific VOT region, the visual word form area (VWFA), learns to see only these words or recognizes words independent of the specific shape-defining visual features. Word forms were created using atypical features (motion-dots, luminance-dots) whose statistical properties control word-visibility. We measured fMRI responses as word form visibility varied, and we used TMS to interfere with neural processing in specific cortical circuits, while subjects performed a lexical decision task. For all features, VWFA responses increased with word-visibility and correlated with performance. TMS applied to motion-specialized area hMT+ disrupted reading performance for motion-dots, but not line-contours or luminance-dots. A quantitative model describes feature-convergence in the VWFA and relates VWFA responses to behavioral performance. These findings suggest how visual feature-tolerance in the reading network arises through signal convergence from feature-specialized cortical areas. Copyright © 2011 Elsevier Inc. All rights reserved.
Hagen, Brad; Awosoga, Olu; Kellett, Peter; Dei, Samuel Ofori
2013-09-01
Undergraduate nursing students must often take a course in statistics, yet there is scant research to inform teaching pedagogy. The objectives of this study were to assess nursing students' overall attitudes towards statistics courses - including (among other things) overall fear and anxiety, preferred learning and teaching styles, and the perceived utility and benefit of taking a statistics course - before and after taking a mandatory course in applied statistics. The authors used a pre-experimental research design (a one-group pre-test/post-test design), administering a survey to nursing students at the beginning and end of the course. The study was conducted at a university in Western Canada that offers an undergraduate Bachelor of Nursing degree. Participants included 104 nursing students, in the third year of a four-year nursing program, taking a course in statistics. Although students reported only moderate anxiety towards statistics, student anxiety about statistics had dropped by approximately 40% by the end of the course. Students also reported a considerable and positive change in their attitudes towards learning in groups by the end of the course, a potential reflection of the team-based learning that was used. Students identified preferred learning and teaching approaches, including the use of real-life examples, visual teaching aids, clear explanations, timely feedback, and a well-paced course. Students also identified preferred instructor characteristics, such as patience, approachability, in-depth knowledge of statistics, and a sense of humor. Unfortunately, students indicated only moderate agreement with the idea that statistics would be useful and relevant to their careers, even by the end of the course.
Our findings validate anecdotal reports on statistics teaching pedagogy, although more research is clearly needed, particularly on how to increase students' perceptions of the benefit and utility of statistics courses for their nursing careers. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
Propose but verify: Fast mapping meets cross-situational word learning
Trueswell, John C.; Medina, Tamara Nicol; Hafri, Alon; Gleitman, Lila R.
2012-01-01
We report three eyetracking experiments that examine the learning procedure used by adults as they pair novel words and visually presented referents over a sequence of referentially ambiguous trials. Successful learning under such conditions has been argued to be the product of a learning procedure in which participants provisionally pair each novel word with several possible referents and use a statistical-associative learning mechanism to gradually converge on a single mapping across learning instances. We argue here that successful learning in this setting is instead the product of a one-trial procedure in which a single hypothesized word-referent pairing is retained across learning instances, abandoned only if the subsequent instance fails to confirm the pairing – more a ‘fast mapping’ procedure than a gradual statistical one. We provide experimental evidence for this Propose-but-Verify learning procedure via three experiments in which adult participants attempted to learn the meanings of nonce words cross-situationally under varying degrees of referential uncertainty. The findings, using both explicit (referent selection) and implicit (eye movement) measures, show that even in these artificial learning contexts, which are far simpler than those encountered by a language learner in a natural environment, participants do not retain multiple meaning hypotheses across learning instances. As we discuss, these findings challenge ‘gradualist’ accounts of word learning and are consistent with the known rapid course of vocabulary learning in a first language. PMID:23142693
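The Propose-but-Verify procedure is easy to state in code: keep a single word-referent guess, verify it against each new scene, and re-propose only on failure. The trial structure below is a hypothetical toy, not the experiments' stimuli:

```python
import random

def propose_but_verify(trials, seed=0):
    """Single-hypothesis cross-situational learner.

    trials: list of (word, referents_in_scene) pairs. The learner retains
    its one guess only if the guessed referent reappears with the word;
    otherwise it proposes a new referent at random from the current scene.
    """
    rng = random.Random(seed)
    hypothesis = {}
    for word, referents in trials:
        guess = hypothesis.get(word)
        if guess is None or guess not in referents:       # verification failed
            hypothesis[word] = rng.choice(sorted(referents))  # propose anew
    return hypothesis

# once the guess lands on the consistent referent, it is never abandoned
confirmed = propose_but_verify([("dax", {"ball", "cup"}),
                                ("dax", {"ball"}),
                                ("dax", {"ball", "hat"})])
```

Unlike a statistical-associative learner, this learner stores no co-occurrence counts at all: at any moment it carries exactly one hypothesis per word.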
Fisher, Aaron; Anderson, G Brooke; Peng, Roger; Leek, Jeff
2014-01-01
Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%-49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%-76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/.
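Part of the intuition gap is that significance depends on sample size as much as on the visual strength of the relationship. A back-of-envelope helper using the large-sample normal approximation (t_crit ≈ 1.96) rather than the exact t quantile:

```python
import math

def r_critical(n, t_crit=1.96):
    """Smallest |Pearson r| significant at p < .05 (two-sided) for sample
    size n, from the t statistic t = r * sqrt((n-2) / (1-r^2)) solved for r,
    with the large-sample approximation t_crit ~ 1.96."""
    return t_crit / math.sqrt(n - 2 + t_crit**2)

# with 100 points, a barely visible correlation is already "significant",
# while 10 points require a strikingly strong one
r100 = r_critical(100)
r10 = r_critical(10)
```

This is consistent with the finding that observers misjudge small effects: a scatterplot of 100 points with r ≈ 0.2 looks like noise yet clears the p < 0.05 bar.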
Pitch enhancement facilitates word learning across visual contexts
Filippi, Piera; Gingras, Bruno; Fitch, W. Tecumseh
2014-01-01
This study investigates word-learning using a new experimental paradigm that integrates three processes: (a) extracting a word out of a continuous sound sequence, (b) inferring its referential meanings in context, (c) mapping the segmented word onto its broader intended referent, such as other objects of the same semantic category, and to novel utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. Here, we combine these strands of investigation into a single experimental approach, in which participants viewed a photograph belonging to one of three semantic categories while hearing a complex, five-word utterance containing a target word. Six between-subjects conditions were tested with 20 adult participants each. In condition 1, the only cue to word-meaning mapping was the co-occurrence of word and referents. This statistical cue was present in all conditions. In condition 2, the target word was sounded at a higher pitch. In condition 3, random words were sounded at a higher pitch, creating an inconsistent cue. In condition 4, the duration of the target word was lengthened. In conditions 5 and 6, an extraneous acoustic cue and a visual cue were associated with the target word, respectively. Performance in this word-learning task was significantly higher than that observed with simple co-occurrence only when pitch prominence consistently marked the target word. We discuss implications for the pragmatic value of pitch marking as well as the relevance of our findings to language acquisition and language evolution. PMID:25566144
Artist-Driven Initiatives for Art Education: What We Can Learn from Street Art
ERIC Educational Resources Information Center
Daichendt, G. James
2013-01-01
The economic state of California is representative of the larger financial health of the United States. The budget cuts and the faltering status of art education in public schools has contrasted much of the rhetoric and statistics for art education and employment in the visual arts. Yet, contemporaneously, California has also witnessed the largest…
ERIC Educational Resources Information Center
Videtich, Patricia E.; Neal, William J.
2012-01-01
Using sieving and sample "unknowns" for instructional grain-size analysis and interpretation of sands in undergraduate sedimentology courses has advantages over other techniques. Students (1) learn to calculate and use statistics; (2) visually observe differences in the grain-size fractions, thereby developing a sense of specific size…
Statistically-Driven Visualizations of Student Interactions with a French Online Course Video
ERIC Educational Resources Information Center
Youngs, Bonnie L.; Prakash, Akhil; Nugent, Rebecca
2018-01-01
Logged tracking data for online courses are generally not available to instructors, students, and course designers and developers, and even if these data were available, most content-oriented instructors do not have the skill set to analyze them. Learning analytics, mined from logged course data and usually presented in the form of learning…
Learning Predictive Statistics: Strategies and Brain Mechanisms.
Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew E; Kourtzi, Zoe
2017-08-30
When immersed in a new environment, we are challenged to decipher initially incomprehensible streams of sensory information. However, quite rapidly, the brain finds structure and meaning in these incoming signals, helping us to predict and prepare ourselves for future actions. This skill relies on extracting the statistics of event streams in the environment that contain regularities of variable complexity from simple repetitive patterns to complex probabilistic combinations. Here, we test the brain mechanisms that mediate our ability to adapt to the environment's statistics and predict upcoming events. By combining behavioral training and multisession fMRI in human participants (male and female), we track the corticostriatal mechanisms that mediate learning of temporal sequences as they change in structure complexity. We show that learning of predictive structures relates to individual decision strategy; that is, selecting the most probable outcome in a given context (maximizing) versus matching the exact sequence statistics. These strategies engage distinct human brain regions: maximizing engages dorsolateral prefrontal, cingulate, sensory-motor regions, and basal ganglia (dorsal caudate, putamen), whereas matching engages occipitotemporal regions (including the hippocampus) and basal ganglia (ventral caudate). Our findings provide evidence for distinct corticostriatal mechanisms that facilitate our ability to extract behaviorally relevant statistics to make predictions. SIGNIFICANCE STATEMENT Making predictions about future events relies on interpreting streams of information that may initially appear incomprehensible. Past work has studied how humans identify repetitive patterns and associative pairings. However, the natural environment contains regularities that vary in complexity from simple repetition to complex probabilistic combinations. 
Here, we combine behavior and multisession fMRI to track the brain mechanisms that mediate our ability to adapt to changes in the environment's statistics. We provide evidence for an alternate route for learning complex temporal statistics: extracting the most probable outcome in a given context is implemented by interactions between executive and motor corticostriatal mechanisms compared with visual corticostriatal circuits (including hippocampal cortex) that support learning of the exact temporal statistics. Copyright © 2017 Wang et al.
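The behavioral difference between the two decision strategies is easy to quantify for a binary outcome: always choosing the more probable event beats reproducing the outcome frequencies. A sketch of the expected hit rates (illustrative probability, not the study's sequence statistics):

```python
def expected_accuracy(p, strategy):
    """Expected hit rate when one outcome occurs with probability p (p >= 0.5).

    'maximize': always predict the more probable outcome   -> accuracy p
    'match':    predict each outcome as often as it occurs -> p^2 + (1-p)^2
    """
    if strategy == "maximize":
        return p
    if strategy == "match":
        return p * p + (1 - p) * (1 - p)
    raise ValueError(strategy)

acc_max = expected_accuracy(0.8, "maximize")   # 0.80
acc_match = expected_accuracy(0.8, "match")    # 0.68
```

Despite this accuracy cost, many participants match rather than maximize, which is why the two strategies can be dissociated behaviorally and, in this study, neurally.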
Gonzales, Lucia K; Glaser, Dale; Howland, Lois; Clark, Mary Jo; Hutchins, Susie; Macauley, Karen; Close, Jacqueline F; Leveque, Noelle Lipkin; Failla, Kim Reina; Brooks, Raelene; Ward, Jillian
2017-01-01
A number of studies across different disciplines have investigated students' learning styles. Differences are known to exist between graduate and baccalaureate nursing students. However, few studies have investigated the learning styles of students in graduate entry nursing programs. The study objective was to describe graduate entry nursing students' learning styles. A descriptive design was used for this study. The Index of Learning Styles (ILS) was administered to 202 graduate entry nursing student volunteers at a southwestern university. Descriptive statistics, tests of association, reliability, and validity were performed. Graduate nursing students and faculty participated in data collection, analysis, and dissemination of the results. Predominant learning styles were: sensing - 82.7%, visual - 78.7%, sequential - 65.8%, and active - 59.9%. Inter-item reliabilities for the postulated subscales were: sensing/intuitive (α=0.70), visual/verbal (α=0.694), sequential/global (α=0.599), and active/reflective (α=0.572). Confirmatory factor analysis results for validity were: χ²(896)=1110.25, p<0.001, CFI=0.779, TLI=0.766, WRMR=1.14, and RMSEA=0.034. The predominant learning styles described students as being concrete thinkers oriented toward facts (sensing); preferring pictures, diagrams, flow charts, and demonstrations (visual); being linear thinkers (sequential); and enjoying working in groups and trying things out (active). The predominant learning styles suggest educators teach concepts through simulation, discussion, and application of knowledge. Multiple studies, including this one, have provided similar psychometric results. Similar reliability and validity results for the ILS have been noted in previous studies and therefore provide sufficient evidence to use the ILS with graduate entry nursing students. This study provided faculty with numerous opportunities for actively engaging students in data collection, analysis, and dissemination of results.
Copyright © 2016 Elsevier Ltd. All rights reserved.
Learning Rotation-Invariant Local Binary Descriptor.
Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie
2017-08-01
In this paper, we propose a rotation-invariant local binary descriptor (RI-LBD) learning method for visual recognition. Compared with hand-crafted local binary descriptors, such as local binary pattern and its variants, which require strong prior knowledge, local binary feature learning methods are more efficient and data-adaptive. Unlike existing learning-based local binary descriptors, such as compact binary face descriptor and simultaneous local binary feature learning and encoding, which are susceptible to rotations, our RI-LBD first categorizes each local patch into a rotational binary pattern (RBP), and then jointly learns the orientation for each pattern and the projection matrix to obtain RI-LBDs. As all the rotation variants of a patch belong to the same RBP, they are rotated into the same orientation and projected into the same binary descriptor. Then, we construct a codebook by a clustering method on the learned binary codes, and obtain a histogram feature for each image as the final representation. In order to exploit higher order statistical information, we extend our RI-LBD to the triple rotation-invariant co-occurrence local binary descriptor (TRICo-LBD) learning method, which learns a triple co-occurrence binary code for each local patch. Extensive experimental results on four different visual recognition tasks, including image patch matching, texture classification, face recognition, and scene classification, show that our RI-LBD and TRICo-LBD outperform most existing local descriptors.
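The rotation-invariance idea at the core of RI-LBD can be illustrated with the classic hand-crafted rotation-invariant LBP, which maps each 8-bit pattern to the minimum over its circular bit rotations; this is a simple stand-in for the learned RBP orientation assignment, and the function names are illustrative, not from the paper:

```python
import numpy as np

def ri_code(pattern: int, bits: int = 8) -> int:
    """Map an LBP code to its rotation-invariant form: the minimum
    value over all circular rotations of its bit string."""
    mask = (1 << bits) - 1
    return min(((pattern >> i) | (pattern << (bits - i))) & mask
               for i in range(bits))

def lbp_ri_histogram(img: np.ndarray) -> np.ndarray:
    """Normalized histogram of rotation-invariant LBP codes over the
    interior pixels of a grayscale image."""
    h, w = img.shape
    # 8 neighbours listed in circular order, so rotating the image
    # only circularly shifts the bits of each code
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = np.zeros(256)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y, x]
            code = 0
            for b, (dy, dx) in enumerate(offsets):
                code |= int(img[y + dy, x + dx] >= c) << b
            hist[ri_code(code)] += 1
    return hist / hist.sum()

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (16, 16))
h0 = lbp_ri_histogram(img)
h90 = lbp_ri_histogram(np.rot90(img))  # same histogram: codes only rotate
```

The histogram-of-codes step mirrors the codebook/histogram representation the paper builds on top of its learned binary codes.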
Innovative intelligent technology of distance learning for visually impaired people
NASA Astrophysics Data System (ADS)
Samigulina, Galina; Shayakhmetova, Assem; Nuysuppov, Adlet
2017-12-01
The aim of the study is to develop innovative intelligent technology and information systems of distance education for people with impaired vision (PIV). To solve this problem, a comprehensive approach has been proposed that combines artificial intelligence methods with statistical analysis. Creating an accessible learning environment and identifying the intellectual, physiological, and psychophysiological characteristics of perception and awareness of information by this category of people is based on a cognitive approach. On the basis of fuzzy logic, an individually oriented learning path for PIV is constructed with the aim of providing high-quality engineering education with modern equipment in joint-use laboratories.
Long-term adaptation to change in implicit contextual learning.
Zellin, Martina; von Mühlenen, Adrian; Müller, Hermann J; Conci, Markus
2014-08-01
The visual world consists of spatial regularities that are acquired through experience in order to guide attentional orienting. For instance, in visual search, detection of a target is faster when a layout of nontarget items is encountered repeatedly, suggesting that learned contextual associations can guide attention (contextual cuing). However, scene layouts sometimes change, requiring observers to adapt previous memory representations. Here, we investigated the long-term dynamics of contextual adaptation after a permanent change of the target location. We observed fast and reliable learning of initial context-target associations after just three repetitions. However, adaptation of acquired contextual representations to relocated targets was slow and effortful, requiring 3 days of training with overall 80 repetitions. A final test 1 week later revealed equivalent effects of contextual cuing for both target locations, and these were comparable to the effects observed on day 1. That is, observers learned both initial target locations and relocated targets, given extensive training combined with extended periods of consolidation. Thus, while implicit contextual learning efficiently extracts statistical regularities of our environment at first, it is rather insensitive to change in the longer term, especially when subtle changes in context-target associations need to be acquired.
What can we learn from learning models about sensitivity to letter-order in visual word recognition?
Lerner, Itamar; Armstrong, Blair C.; Frost, Ram
2014-01-01
Recent research on the effects of letter transposition in Indo-European languages has shown that readers are surprisingly tolerant of these manipulations in a range of tasks. This evidence has motivated the development of new computational models of reading that regard flexibility in positional coding to be a core and universal principle of the reading process. Here we argue that such an approach neither captures cross-linguistic differences in transposed-letter effects nor explains them. To address this issue, we investigated how a simple domain-general connectionist architecture performs in tasks such as letter transposition and letter substitution when it had learned to process words in the context of different linguistic environments. The results show that in spite of the neurobiological noise involved in registering letter position in all languages, flexibility and inflexibility in coding letter order is also shaped by the statistical orthographic properties of words in a language, such as the relative prevalence of anagrams. Our learning model also generated novel predictions for targeted empirical research, demonstrating a clear advantage of learning models for studying visual word recognition. PMID:25431521
Scaling up to address data science challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, Joanne R.
2017-04-27
Statistics and Data Science provide a variety of perspectives and technical approaches for exploring and understanding Big Data. Partnerships between scientists from different fields such as statistics, machine learning, computer science, and applied mathematics can lead to innovative approaches for addressing problems involving increasingly large amounts of data in a rigorous and effective manner that takes advantage of advances in computing. Here, this article will explore various challenges in Data Science and will highlight statistical approaches that can facilitate analysis of large-scale data including sampling and data reduction methods, techniques for effective analysis and visualization of large-scale simulations, and algorithms and procedures for efficient processing.
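One of the sampling-based data reduction methods the article points to can be sketched with reservoir sampling, which draws a uniform random sample from a stream too large to hold in memory (illustrative code, not from the article):

```python
import random

def reservoir_sample(stream, k: int, seed: int = 0) -> list:
    """Uniformly sample k items from a stream of unknown length
    (Vitter's Algorithm R) using O(k) memory."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            # item i is kept with probability k / (i + 1)
            j = rng.randrange(i + 1)
            if j < k:
                reservoir[j] = item
    return reservoir

sample = reservoir_sample(range(10**6), 100)
```

Each stream element ends up in the sample with equal probability k/N, without ever knowing N in advance.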
Language experience changes subsequent learning.
Onnis, Luca; Thiessen, Erik
2013-02-01
What are the effects of experience on subsequent learning? We explored the effects of language-specific word order knowledge on the acquisition of sequential conditional information. Korean and English adults were engaged in a sequence learning task involving three different sets of stimuli: auditory linguistic (nonsense syllables), visual non-linguistic (nonsense shapes), and auditory non-linguistic (pure tones). The forward and backward probabilities between adjacent elements generated two equally probable and orthogonal perceptual parses of the elements, such that any significant preference at test must be due to either general cognitive biases, or prior language-induced biases. We found that language modulated parsing preferences with the linguistic stimuli only. Intriguingly, these preferences are congruent with the dominant word order patterns of each language, as corroborated by corpus analyses, and are driven by probabilistic preferences. Furthermore, although the Korean individuals had received extensive formal explicit training in English and lived in an English-speaking environment, they exhibited statistical learning biases congruent with their native language. Our findings suggest that mechanisms of statistical sequential learning are implicated in language across the lifespan, and experience with language may affect cognitive processes and later learning. Copyright © 2012 Elsevier B.V. All rights reserved.
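The forward and backward probabilities between adjacent elements mentioned above can be computed directly from bigram counts. A minimal sketch (the toy syllable stream is hypothetical):

```python
from collections import Counter

def transition_probs(sequence):
    """Forward TP(y|x) = count(xy) / count(x as first element);
    backward TP(x|y) = count(xy) / count(y as second element)."""
    pairs = Counter(zip(sequence, sequence[1:]))
    first = Counter(x for x, _ in pairs.elements())
    second = Counter(y for _, y in pairs.elements())
    forward = {(x, y): n / first[x] for (x, y), n in pairs.items()}
    backward = {(x, y): n / second[y] for (x, y), n in pairs.items()}
    return forward, backward

seq = list("ABCABCABDABD")  # hypothetical stream of nonsense syllables
fwd, bwd = transition_probs(seq)
```

In this stream, A is always followed by B (forward TP = 1.0), while B is followed by C or D equally often (forward TP = 0.5 each), which is the kind of orthogonal parse structure the experiment manipulates.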
The challenges of studying visual expertise in medical image diagnosis.
Gegenfurtner, Andreas; Kok, Ellen; van Geel, Koos; de Bruin, Anique; Jarodzka, Halszka; Szulewski, Adam; van Merriënboer, Jeroen Jg
2017-01-01
Visual expertise is the superior visual skill shown when executing domain-specific visual tasks. Understanding visual expertise is important in order to understand how the interpretation of medical images may be best learned and taught. In the context of this article, we focus on the visual skill of medical image diagnosis and, more specifically, on the methodological set-ups routinely used in visual expertise research. We offer a critique of commonly used methods and propose three challenges for future research to open up new avenues for studying characteristics of visual expertise in medical image diagnosis. The first challenge addresses theory development. Novel prospects in modelling visual expertise can emerge when we reflect on cognitive and socio-cultural epistemologies in visual expertise research, when we engage in statistical validations of existing theoretical assumptions and when we include social and socio-cultural processes in expertise development. The second challenge addresses the recording and analysis of longitudinal data. If we assume that the development of expertise is a long-term phenomenon, then it follows that future research can engage in advanced statistical modelling of longitudinal expertise data that extends the routine use of cross-sectional material through, for example, animations and dynamic visualisations of developmental data. The third challenge addresses the combination of methods. Alternatives to current practices can integrate qualitative and quantitative approaches in mixed-method designs, embrace relevant yet underused data sources and understand the need for multidisciplinary research teams. Embracing alternative epistemological and methodological approaches for studying visual expertise can lead to a more balanced and robust future for understanding superior visual skills in medical image diagnosis as well as other medical fields. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
Liu, Jinping; Tang, Zhaohui; Xu, Pengfei; Liu, Wenzhong; Zhang, Jin; Zhu, Jianyong
2016-06-29
The topic of online product quality inspection (OPQI) with smart visual sensors is attracting increasing interest in both the academic and industrial communities on account of the natural connection between the visual appearance of products and their underlying qualities. Visual images captured from granulated products (GPs), e.g., cereal products or fabric textiles, are comprised of a large number of independent particles or stochastically stacking locally homogeneous fragments, whose analysis and understanding remains challenging. A method of image statistical modeling-based OPQI for GP quality grading and monitoring by a Weibull distribution (WD) model with a semi-supervised learning classifier is presented. WD-model parameters (WD-MPs) of GP images' spatial structures, obtained with omnidirectional Gaussian derivative filtering (OGDF), which were demonstrated theoretically to obey a specific WD model of integral form, were extracted as the visual features. Then, a co-training-style semi-supervised classifier algorithm, named COSC-Boosting, was exploited for semi-supervised GP quality grading, integrating two independent classifiers with complementary natures in the face of scarce labeled samples. The effectiveness of the proposed OPQI method was verified in automated rice quality grading, where it showed superior performance compared with commonly used methods, laying a foundation for the quality control of GPs on assembly lines.
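The Weibull feature extraction step can be caricatured as fitting a two-parameter Weibull to edge-magnitude statistics. In this sketch, plain image gradients stand in for the paper's omnidirectional Gaussian derivative filtering, and random noise stands in for a real product image:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
img = rng.random((64, 64))          # stand-in for a granulated-product image

# Gradient magnitudes as a crude proxy for OGDF filter responses
gy, gx = np.gradient(img)
mag = np.hypot(gx, gy).ravel()
mag = mag[mag > 0]

# Two-parameter Weibull fit (location fixed at 0); the fitted
# (shape, scale) pair would serve as the visual feature vector
shape, loc, scale = weibull_min.fit(mag, floc=0)
```

The fitted shape and scale parameters summarize the spatial-structure statistics of the image and would then feed the semi-supervised classifier.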
NASA Astrophysics Data System (ADS)
Chotimah, Siti; Bernard, M.; Wulandari, S. M.
2018-01-01
The main problem motivating this research was the lack of mathematical reasoning ability and mathematical disposition among high school students in Cimahi, West Java. The lack of mathematical reasoning ability was caused by the learning process: teachers did not train students to solve problems requiring reasoning, and students depended on each other and on the teacher's patient guidance. In addition, students' basic abilities also affected their mathematics skills. A learning process using a contextual approach aided by VBA (Visual Basic Application for Excel) learning media had a positive influence on students' mathematical disposition, as students were directly involved in the learning process. The population of the study was all high school students in Cimahi; the samples were students of classes XIA and XIB at SMA Negeri 4 Cimahi. Both test and non-test instruments were used. The test instrument was a description test of mathematical reasoning ability; the non-test instrument was an attitude-scale questionnaire about students' mathematical dispositions. These instruments were used to obtain data about the mathematical reasoning and disposition of students learning with the contextual approach supported by VBA (Visual Basic Application for Excel) and of students receiving conventional learning. The data processed in this study were the post-test scores from the experimental class group and the control class group, processed using SPSS 22 and Microsoft Excel and analyzed with a t-test statistic.
The study concluded that the achievement and improvement of reasoning ability and mathematical disposition were better for students who learned with the contextual approach supported by VBA (Visual Basic Application for Excel) learning media than for students who received conventional learning.
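The post-test comparison described above amounts to an independent-samples t-test on the two groups' scores. A minimal sketch with hypothetical data (the means, SDs, and group sizes are invented for illustration; the study itself used SPSS):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical post-test scores for the two class groups
experimental = rng.normal(loc=75, scale=10, size=35)
control = rng.normal(loc=68, scale=10, size=35)

# Welch's t-test (does not assume equal variances)
t, p = stats.ttest_ind(experimental, control, equal_var=False)
```

A small p-value would support the conclusion that the contextual-approach group outperformed the conventional group on the post-test.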
Calypso: a user-friendly web-server for mining and visualizing microbiome-environment interactions.
Zakrzewski, Martha; Proietti, Carla; Ellis, Jonathan J; Hasan, Shihab; Brion, Marie-Jo; Berger, Bernard; Krause, Lutz
2017-03-01
Calypso is an easy-to-use online software suite that allows non-expert users to mine, interpret and compare taxonomic information from metagenomic or 16S rDNA datasets. Calypso has a focus on multivariate statistical approaches that can identify complex environment-microbiome associations. The software enables quantitative visualizations, statistical testing, multivariate analysis, supervised learning, factor analysis, multivariable regression, network analysis and diversity estimates. Comprehensive help pages, tutorials and videos are provided via a wiki page. The web-interface is accessible via http://cgenome.net/calypso/ . The software is programmed in Java, PERL and R and the source code is available from Zenodo ( https://zenodo.org/record/50931 ). The software is freely available for non-commercial users. l.krause@uq.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Honeybees in a virtual reality environment learn unique combinations of colour and shape.
Rusch, Claire; Roth, Eatai; Vinauger, Clément; Riffell, Jeffrey A
2017-10-01
Honeybees are well-known models for the study of visual learning and memory. Whereas most of our knowledge of learned responses comes from experiments using free-flying bees, a tethered preparation would allow fine-scale control of the visual stimuli as well as accurate characterization of the learned responses. Unfortunately, conditioning procedures using visual stimuli in tethered bees have been limited in their efficacy. In this study, using a novel virtual reality environment and a differential training protocol in tethered walking bees, we show that the majority of honeybees learn visual stimuli, and need only six paired training trials to learn the stimulus. We found that bees readily learn visual stimuli that differ in both shape and colour. However, bees learn certain components over others (colour versus shape), and visual stimuli are learned in a non-additive manner with the interaction of specific colour and shape combinations being crucial for learned responses. To better understand which components of the visual stimuli the bees learned, the shape-colour association of the stimuli was reversed either during or after training. Results showed that maintaining the visual stimuli in training and testing phases was necessary to elicit visual learning, suggesting that bees learn multiple components of the visual stimuli. Together, our results demonstrate a protocol for visual learning in restrained bees that provides a powerful tool for understanding how components of a visual stimulus elicit learned responses as well as elucidating how visual information is processed in the honeybee brain. © 2017. Published by The Company of Biologists Ltd.
A unified framework for image retrieval using keyword and visual features.
Jing, Feng; Li, Mingling; Zhang, Hong-Jiang; Zhang, Bo
2005-07-01
In this paper, a unified image retrieval framework based on both keyword annotations and visual features is proposed. In this framework, a set of statistical models are built based on visual features of a small set of manually labeled images to represent semantic concepts and used to propagate keywords to other unlabeled images. These models are updated periodically when more images implicitly labeled by users become available through relevance feedback. In this sense, the keyword models serve the function of accumulation and memorization of knowledge learned from user-provided relevance feedback. Furthermore, two sets of effective and efficient similarity measures and relevance feedback schemes are proposed for query by keyword scenario and query by image example scenario, respectively. Keyword models are combined with visual features in these schemes. In particular, a new, entropy-based active learning strategy is introduced to improve the efficiency of relevance feedback for query by keyword. Furthermore, a new algorithm is proposed to estimate the keyword features of the search concept for query by image example. It is shown to be more appropriate than two existing relevance feedback algorithms. Experimental results demonstrate the effectiveness of the proposed framework.
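Entropy-based active learning, as used for relevance feedback above, selects the unlabeled items whose predicted class distributions have maximal Shannon entropy, i.e. those the current model is least certain about. A minimal sketch (function name illustrative, not from the paper):

```python
import numpy as np

def entropy_select(proba: np.ndarray, n: int) -> np.ndarray:
    """Return indices of the n rows (items) whose predicted class
    distribution has the highest Shannon entropy."""
    p = np.clip(proba, 1e-12, 1.0)
    h = -(p * np.log(p)).sum(axis=1)
    return np.argsort(h)[::-1][:n]

# Predicted class probabilities for three unlabeled images
proba = np.array([[0.9, 0.1],
                  [0.5, 0.5],
                  [0.6, 0.4]])
picked = entropy_select(proba, 1)  # the 50/50 item is most informative
```

Labeling the most uncertain items first tends to improve the keyword models with fewer rounds of user feedback, which is the efficiency gain the paper reports.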
Relationship between the learning style preferences of medical students and academic achievement
Almigbal, Turky H.
2015-01-01
Objectives: To investigate the relationship between the learning style preferences of Saudi medical students and their academic achievements. Methods: A cross-sectional study was conducted among 600 medical students at King Saud University in Riyadh, Kingdom of Saudi Arabia from October 2012 to July 2013. The Visual, Aural, Read/Write, and Kinesthetic (VARK) questionnaire was used to categorize learning style preferences. Descriptive and analytical statistics were used to identify the learning style preferences of medical students and their relationship to academic achievement, gender, marital status, residency, different teaching curricula, and study resources (for example, teachers' PowerPoint slides, textbooks, and journals). Results: The results indicated that 261 students (43%) preferred to learn using all VARK modalities. There was a significant difference in learning style preferences between genders (p=0.028). The relationship between learning style preferences and students in different teaching curricula was also statistically significant (p=0.047). However, learning style preferences were not related to a student's academic achievements, marital status, residency, or study resources (for example, teachers' PowerPoint slides, textbooks, and journals). Also, after adjusting for other study variables, learning style preferences were not related to GPA. Conclusion: Our findings can be used to improve the quality of teaching in Saudi Arabia; students would benefit if teachers understood the factors that can be related to students' learning styles. PMID:25737179
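The reported association between learning-style preference and gender is the kind of result a chi-square test of association yields. A sketch with a hypothetical 2×4 contingency table (the counts are invented, not the study's data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = gender, columns = preferred VARK grouping
table = np.array([[80, 40, 30, 50],
                  [60, 55, 35, 50]])

chi2, p, dof, expected = chi2_contingency(table)
```

With 2 rows and 4 columns the test has (2-1)×(4-1) = 3 degrees of freedom; a p-value below 0.05, as in the study, would indicate that preference distributions differ between genders.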
Nonlinear projection methods for visualizing Barcode data and application on two data sets.
Olteanu, Madalina; Nicolas, Violaine; Schaeffer, Brigitte; Denys, Christiane; Missoup, Alain-Didier; Kennis, Jan; Larédo, Catherine
2013-11-01
Developing tools for visualizing DNA sequences is an important issue in the Barcoding context. Visualizing Barcode data can be framed as a purely statistical problem: unsupervised learning. Clustering methods combined with projection methods have two closely linked objectives, visualizing and finding structure in the data. Multidimensional scaling (MDS) and self-organizing maps (SOM) are unsupervised statistical tools for data visualization. Both algorithms map data onto a lower-dimensional manifold: MDS looks for a projection that best preserves pairwise distances, while SOM preserves the topology of the data. Both algorithms were initially developed for Euclidean data, and the conditions necessary for their proper application are not satisfied by Barcode data. We developed a workflow consisting of four steps: collapse data into distinct sequences; compute a dissimilarity matrix; run a modified version of SOM for dissimilarity matrices to structure the data and reduce dimensionality; and project the results using MDS. This methodology was applied to Astraptes fulgerator and to Hylomyscus, an African rodent with debated taxonomy. We obtained very good results for both data sets, and the results were robust against unbalanced species. All the species in Astraptes were well displayed in very distinct groups in the various visualizations, except for LOHAMP and FABOV, which were mixed up. For Hylomyscus, our findings were consistent with known species, confirmed the existence of four unnamed taxa, and suggested the existence of potentially new species. © 2013 John Wiley & Sons Ltd.
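The final projection step of such a workflow, running MDS on a precomputed dissimilarity matrix, can be sketched with scikit-learn (random Euclidean points stand in here for sequence dissimilarities):

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
points = rng.normal(size=(30, 5))

# Pairwise dissimilarity matrix; for Barcode data this would be a
# sequence-based dissimilarity rather than Euclidean distance
diss = np.linalg.norm(points[:, None] - points[None, :], axis=-1)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(diss)  # 2-D coordinates for plotting
```

Passing `dissimilarity="precomputed"` is what lets MDS accept an arbitrary dissimilarity matrix instead of raw feature vectors, which is the property that makes it usable for non-Euclidean Barcode data.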
Institute for Brain and Neural Systems
2009-10-06
to deal with computational complexity when analyzing large amounts of information in visual scenes. It seems natural that in addition to exploring...algorithms using methods from statistical pattern recognition and machine learning. Over the last fifteen years, significant advances had been made in...recognition, robustness to noise and ability to cope with significant variations in lighting conditions. Identifying an occluded target adds another layer of
Correspondences between What Infants See and Know about Causal and Self-Propelled Motion
ERIC Educational Resources Information Center
Cicchino, Jessica B.; Aslin, Richard N.; Rakison, David H.
2011-01-01
The associative learning account of how infants identify human motion rests on the assumption that this knowledge is derived from statistical regularities seen in the world. Yet, no catalog exists of what visual input infants receive of human motion, and of causal and self-propelled motion in particular. In this manuscript, we demonstrate that the…
Vasconcellos, Luiz Felipe; Pereira, João Santos; Adachi, Marcelo; Greca, Denise; Cruz, Manuela; Malak, Ana Lara; Charchat-Fichman, Helenice; Spitz, Mariana
2017-01-01
Few studies have evaluated magnetic resonance imaging (MRI) visual scales in Parkinson's disease-Mild Cognitive Impairment (PD-MCI). We selected 79 PD patients and 92 controls (CO) to perform neurologic and neuropsychological evaluation. Brain MRI was performed to evaluate the following scales: Global Cortical Atrophy (GCA), Fazekas, and medial temporal atrophy (MTA). The analysis revealed that both PD groups (amnestic and nonamnestic) showed worse performance on several tests when compared to CO. Memory, executive function, and attention impairment were more severe in amnestic PD-MCI group. Overall analysis of frequency of MRI visual scales by MCI subtype did not reveal any statistically significant result. Statistically significant inverse correlation was observed between GCA scale and Mini-Mental Status Examination (MMSE), Montreal Cognitive Assessment (MoCA), semantic verbal fluency, Stroop test, figure memory test, trail making test (TMT) B, and Rey Auditory Verbal Learning Test (RAVLT). The MTA scale correlated with Stroop test and Fazekas scale with figure memory test, digit span, and Stroop test according to the subgroup evaluated. Visual scales by MRI in MCI should be evaluated by cognitive domain and might be more useful in more severely impaired MCI or dementia patients.
Brain networks for confidence weighting and hierarchical inference during probabilistic learning.
Meyniel, Florent; Dehaene, Stanislas
2017-05-09
Learning is difficult when the world fluctuates randomly and ceaselessly. Classical learning algorithms, such as the delta rule with constant learning rate, are not optimal. Mathematically, the optimal learning rule requires weighting prior knowledge and incoming evidence according to their respective reliabilities. This "confidence weighting" implies the maintenance of an accurate estimate of the reliability of what has been learned. Here, using fMRI and an ideal-observer analysis, we demonstrate that the brain's learning algorithm relies on confidence weighting. While in the fMRI scanner, human adults attempted to learn the transition probabilities underlying an auditory or visual sequence, and reported their confidence in those estimates. They knew that these transition probabilities could change simultaneously at unpredicted moments, and therefore that the learning problem was inherently hierarchical. Subjective confidence reports tightly followed the predictions derived from the ideal observer. In particular, subjects managed to attach distinct levels of confidence to each learned transition probability, as required by Bayes-optimal inference. Distinct brain areas tracked the likelihood of new observations given current predictions, and the confidence in those predictions. Both signals were combined in the right inferior frontal gyrus, where they operated in agreement with the confidence-weighting model. This brain region also presented signatures of a hierarchical process that disentangles distinct sources of uncertainty. Together, our results provide evidence that the sense of confidence is an essential ingredient of probabilistic learning in the human brain, and that the right inferior frontal gyrus hosts a confidence-based statistical learning algorithm for auditory and visual sequences.
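The confidence-weighting idea can be caricatured with a leaky Beta-Bernoulli estimator of a single transition probability: exponentially discounted counts allow for sudden changes, and the inverse variance of the posterior serves as a confidence readout. This is a deliberately simplified stand-in for the paper's hierarchical ideal observer, with an invented class name:

```python
class TransitionLearner:
    """Leaky Beta-Bernoulli estimate of one transition probability,
    e.g. p(next stimulus is B | current stimulus is A).

    Discounted counts let the estimate track sudden environmental
    changes; the Beta posterior's inverse variance acts as confidence.
    """

    def __init__(self, leak: float = 0.98):
        self.leak = leak
        self.a = 1.0  # pseudo-count: transition observed
        self.b = 1.0  # pseudo-count: transition not observed

    def update(self, observed: bool) -> None:
        self.a = self.leak * self.a + (1.0 if observed else 0.0)
        self.b = self.leak * self.b + (0.0 if observed else 1.0)

    @property
    def estimate(self) -> float:
        return self.a / (self.a + self.b)

    @property
    def confidence(self) -> float:
        # inverse variance of Beta(a, b)
        n = self.a + self.b
        return n * n * (n + 1.0) / (self.a * self.b)

learner = TransitionLearner()
for _ in range(8):  # a stream in which A is followed by B 80% of the time
    for observed in [True, True, True, True, False]:
        learner.update(observed)
```

After the stream above, the estimate sits near 0.8 while the confidence reflects how much (discounted) evidence supports it, mirroring the distinct probability and confidence signals the fMRI analysis tracks.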
Data-driven advice for applying machine learning to bioinformatics problems
Olson, Randal S.; La Cava, William; Mustahsan, Zairah; Varik, Akshay; Moore, Jason H.
2017-01-01
As the bioinformatics field grows, it must keep pace not only with new data but with new algorithms. Here we contribute a thorough analysis of 13 state-of-the-art, commonly used machine learning algorithms on a set of 165 publicly available classification problems in order to provide data-driven algorithm recommendations to current researchers. We present a number of statistical and visual comparisons of algorithm performance and quantify the effect of model selection and algorithm tuning for each algorithm and dataset. The analysis culminates in the recommendation of five algorithms with hyperparameters that maximize classifier performance across the tested problems, as well as general guidelines for applying machine learning to supervised classification problems. PMID:29218881
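The kind of cross-dataset comparison performed in the paper can be miniaturized with scikit-learn: score a few classifiers on one public dataset under cross-validation (a toy version of the full 13-algorithm, 165-problem analysis):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

models = {
    "logreg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "rf": RandomForestClassifier(n_estimators=200, random_state=0),
    "gb": GradientBoostingClassifier(random_state=0),
}

# Mean 5-fold cross-validated accuracy per algorithm
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
```

Repeating this over many datasets and hyperparameter settings, then ranking algorithms by average performance, is essentially the data-driven recommendation procedure the paper describes at scale.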
Rapid acquisition but slow extinction of an attentional bias in space.
Jiang, Yuhong V; Swallow, Khena M; Rosenbaum, Gail M; Herzig, Chelsey
2013-02-01
Substantial research has focused on the allocation of spatial attention based on goals or perceptual salience. In everyday life, however, people also direct attention using their previous experience. Here we investigate the pace at which people incidentally learn to prioritize specific locations. Participants searched for a T among Ls in a visual search task. Unbeknownst to them, the target was more often located in one region of the screen than in other regions. An attentional bias toward the rich region developed over dozens of trials. However, the bias did not rapidly readjust to new contexts. It persisted for at least a week and for hundreds of trials after the target's position became evenly distributed. The persistence of the bias did not reflect a long window over which visual statistics were calculated. Long-term persistence differentiates incidentally learned attentional biases from the more flexible goal-driven attention. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Rapid learning of visual ensembles.
Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni
2017-02-01
We recently demonstrated that observers are capable of encoding not only summary statistics, such as mean and variance of stimulus ensembles, but also the shape of the ensembles. Here, for the first time, we show the learning dynamics of this process, investigate the possible priors for the distribution shape, and demonstrate that observers are able to learn more complex distributions, such as bimodal ones. We used speeding and slowing of response times between trials (intertrial priming) in visual search for an oddly oriented line to assess internal models of distractor distributions. Experiment 1 demonstrates that two repetitions are sufficient for enabling learning of the shape of uniform distractor distributions. In Experiment 2, we compared Gaussian and uniform distractor distributions, finding that following only two repetitions Gaussian distributions are represented differently than uniform ones. Experiment 3 further showed that when distractor distributions are bimodal (with a 30° distance between two uniform intervals), observers initially treat them as uniform, and only with further repetitions do they begin to treat the distributions as bimodal. In sum, observers do not have strong initial priors for distribution shapes and quickly learn simple ones but have the ability to adjust their representations to more complex feature distributions as information accumulates with further repetitions of the same distractor distribution.
Zhuang, Chengxu; Wang, Yulong; Yamins, Daniel; Hu, Xiaolin
2017-01-01
Visual information in the visual cortex is processed in a hierarchical manner. Recent studies show that higher visual areas, such as V2, V3, and V4, respond more vigorously to images with naturalistic higher-order statistics than to images lacking them. This property is a functional signature of higher areas, as it is much weaker or even absent in the primary visual cortex (V1). However, the mechanism underlying this signature remains elusive. We studied this problem using computational models. In several typical hierarchical visual models including the AlexNet, VggNet, and SHMAX, this signature was found to be prominent in higher layers but much weaker in lower layers. By changing both the model structure and experimental settings, we found that the signature strongly correlated with sparse firing of units in higher layers but not with any other factors, including model structure, training algorithm (supervised or unsupervised), receptive field size, and property of training stimuli. The results suggest an important role of sparse neuronal activity underlying this special feature of higher visual areas.
Liu, Jinping; Tang, Zhaohui; Xu, Pengfei; Liu, Wenzhong; Zhang, Jin; Zhu, Jianyong
2016-01-01
The topic of online product quality inspection (OPQI) with smart visual sensors is attracting increasing interest in both the academic and industrial communities on account of the natural connection between the visual appearance of products and their underlying qualities. Visual images captured from granulated products (GPs), e.g., cereal products and fabric textiles, consist of a large number of independent particles or stochastically stacked, locally homogeneous fragments, whose analysis and understanding remains challenging. We present a method of image-statistical-modeling-based OPQI for GP quality grading and monitoring using a Weibull distribution (WD) model with a semi-supervised learning classifier. WD-model parameters (WD-MPs) of GP images’ spatial structures, obtained with omnidirectional Gaussian derivative filtering (OGDF) and shown theoretically to obey a specific WD model of integral form, were extracted as the visual features. Then, a co-training-style semi-supervised classifier algorithm, named COSC-Boosting, was exploited for semi-supervised GP quality grading, integrating two independent classifiers of a complementary nature to cope with scarce labeled samples. The effectiveness of the proposed OPQI method was verified in the field of automated rice quality grading, where it outperformed commonly used methods, laying a foundation for the quality control of GPs on assembly lines. PMID:27367703
Natural image sequences constrain dynamic receptive fields and imply a sparse code.
Häusler, Chris; Susemihl, Alex; Nawrot, Martin P
2013-11-06
In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multi-variate benchmark dataset this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
Reavis, Eric A; Frank, Sebastian M; Tse, Peter U
2018-04-12
Visual search is often slow and difficult for complex stimuli such as feature conjunctions. Search efficiency, however, can improve with training. Search for stimuli that can be identified by the spatial configuration of two elements (e.g., the relative position of two colored shapes) improves dramatically within a few hundred trials of practice. Several recent imaging studies have identified neural correlates of this learning, but it remains unclear what stimulus properties participants learn to use to search efficiently. Influential models, such as reverse hierarchy theory, propose two major possibilities: learning to use information contained in low-level image statistics (e.g., single features at particular retinotopic locations) or in high-level characteristics (e.g., feature conjunctions) of the task-relevant stimuli. In a series of experiments, we tested these two hypotheses, which make different predictions about the effect of various stimulus manipulations after training. We find relatively small effects of manipulating low-level properties of the stimuli (e.g., changing their retinotopic location) and some conjunctive properties (e.g., color-position), whereas the effects of manipulating other conjunctive properties (e.g., color-shape) are larger. Overall, the findings suggest conjunction learning involving such stimuli might be an emergent phenomenon that reflects multiple different learning processes, each of which capitalizes on different types of information contained in the stimuli. We also show that both targets and distractors are learned, and that reversing learned target and distractor identities impairs performance. This suggests that participants do not merely learn to discriminate target and distractor stimuli, they also learn stimulus identity mappings that contribute to performance improvements.
The visual system’s internal model of the world
Lee, Tai Sing
2015-01-01
The Bayesian paradigm has provided a useful conceptual theory for understanding perceptual computation in the brain. While the detailed neural mechanisms of Bayesian inference are not fully understood, recent computational and neurophysiological studies have illuminated the underlying computational principles and representational architecture. The fundamental insights are that the visual system is organized as a modular hierarchy to encode an internal model of the world, and that perception is realized by statistical inference based on this internal model. In this paper, I will discuss and analyze the varieties of representational schemes of these internal models and how they might be used to perform learning and inference. I will argue for a unified theoretical framework for relating the internal models to the observed neural phenomena and mechanisms in the visual cortex. PMID:26566294
Caudate nucleus reactivity predicts perceptual learning rate for visual feature conjunctions.
Reavis, Eric A; Frank, Sebastian M; Tse, Peter U
2015-04-15
Useful information in the visual environment is often contained in specific conjunctions of visual features (e.g., color and shape). The ability to quickly and accurately process such conjunctions can be learned. However, the neural mechanisms responsible for such learning remain largely unknown. It has been suggested that some forms of visual learning might involve the dopaminergic neuromodulatory system (Roelfsema et al., 2010; Seitz and Watanabe, 2005), but this hypothesis has not yet been directly tested. Here we test the hypothesis that learning visual feature conjunctions involves the dopaminergic system, using functional neuroimaging, genetic assays, and behavioral testing techniques. We use a correlative approach to evaluate potential associations between individual differences in visual feature conjunction learning rate and individual differences in dopaminergic function as indexed by neuroimaging and genetic markers. We find a significant correlation between activity in the caudate nucleus (a component of the dopaminergic system connected to visual areas of the brain) and visual feature conjunction learning rate. Specifically, individuals who showed a larger difference in activity between positive and negative feedback on an unrelated cognitive task, indicative of a more reactive dopaminergic system, learned visual feature conjunctions more quickly than those who showed a smaller activity difference. This finding supports the hypothesis that the dopaminergic system is involved in visual learning, and suggests that visual feature conjunction learning could be closely related to associative learning. However, no significant, reliable correlations were found between feature conjunction learning and genotype or dopaminergic activity in any other regions of interest. Copyright © 2015 Elsevier Inc. All rights reserved.
A study of the relationship between learning styles and cognitive abilities in engineering students
NASA Astrophysics Data System (ADS)
Hames, E.; Baker, M.
2015-03-01
Learning preferences have been indirectly linked to student success in engineering programmes, without a significant body of research to connect learning preferences with cognitive abilities. A better understanding of the relationship between learning styles and cognitive abilities will allow educators to optimise the classroom experience for students. The goal of this study was to determine whether relationships exist between student learning styles, as determined by the Felder-Soloman Inventory of Learning Styles (FSILS), and their cognitive performance. Three tests were used to assess students' cognitive abilities: a matrix reasoning task, a Tower of London task, and a mental rotation task. Statistical t-tests and correlation coefficients were used to quantify the results. Results indicated that the global-sequential, active-reflective, and visual-verbal FSILS learning styles scales are related to performance on cognitive tasks. Most of these relationships were found in response times, not accuracy. Differences in task performance between gender groups (male and female) were more notable than differences between learning styles groups.
Molloy, Carly S; Wilson-Ching, Michelle; Doyle, Lex W; Anderson, Vicki A; Anderson, Peter J
2014-04-01
Contemporary data on visual memory and learning in survivors born extremely preterm (EP; <28 weeks gestation) or with extremely low birth weight (ELBW; <1,000 g) are lacking. Geographically determined cohort study of 298 consecutive EP/ELBW survivors born in 1991 and 1992, and 262 randomly selected normal-birth-weight controls. Visual learning and memory data were available for 221 (74.2%) EP/ELBW subjects and 159 (60.7%) controls. EP/ELBW adolescents exhibited significantly poorer performance across visual memory and learning variables compared with controls. Visual learning and delayed visual memory were particularly problematic and remained so after controlling for visual-motor integration and visual perception and excluding adolescents with neurosensory disability, and/or IQ <70. Male EP/ELBW adolescents or those treated with corticosteroids had poorer outcomes. EP/ELBW adolescents have poorer visual memory and learning outcomes compared with controls, which cannot be entirely explained by poor visual perceptual or visual constructional skills or intellectual impairment.
The role of visual representation in physics learning: dynamic versus static visualization
NASA Astrophysics Data System (ADS)
Suyatna, Agus; Anggraini, Dian; Agustina, Dina; Widyastuti, Dini
2017-11-01
This study aims to examine the role of visual representation in physics learning and to compare learning outcomes obtained with dynamic and static visualization media. The study used a quasi-experimental Pretest-Posttest Control Group Design. The sample comprised students from six classes at a State Senior High School in Lampung Province. The experimental classes were taught with dynamic visualization media and the control classes with static visualization media. Both groups were given pre-tests and post-tests with the same instruments. Data were analyzed with N-gain analysis, a normality test, a homogeneity test, and a mean-difference test. The results showed a significant increase in mean learning outcomes (N-gain, p < 0.05) in both the experimental and control classes. The average learning outcomes of students using dynamic visualization media were significantly higher than those of students taught with static visualization media, a difference that reflects the distinct support each form of visual representation provides. Dynamic visual media are better suited to explaining material involving movement or processes, whereas static visual media are appropriate for non-moving physical phenomena that require long-term observation.
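The N-gain analysis mentioned above is conventionally Hake's normalized gain; a minimal sketch (the cut-offs are the standard conventions, and the example scores are made up, not taken from this study):

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain <g>: achieved gain as a fraction of possible gain."""
    if pre >= max_score:
        raise ValueError("pre-test already at ceiling")
    return (post - pre) / (max_score - pre)

def gain_category(g):
    # Conventional cut-offs: high >= 0.7, medium 0.3-0.7, low < 0.3
    return "high" if g >= 0.7 else "medium" if g >= 0.3 else "low"

# A student improving from 40 to 70 out of 100 realizes half the possible gain.
g = normalized_gain(40, 70)  # -> 0.5, "medium"
```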
Associative visual learning by tethered bees in a controlled visual environment.
Buatois, Alexis; Pichot, Cécile; Schultheiss, Patrick; Sandoz, Jean-Christophe; Lazzari, Claudio R; Chittka, Lars; Avarguès-Weber, Aurore; Giurfa, Martin
2017-10-10
Free-flying honeybees exhibit remarkable cognitive capacities but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed at establishing a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video projected in front of them. Freely flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of them (CS+) was paired with sucrose and the other with quinine solution (CS-). Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful as bees exhibited robust discrimination and preferred the CS+ to the CS- after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS- also affects learning as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.
Stevenson, Ryan A; Toulmin, Jennifer K; Youm, Ariana; Besney, Richard M A; Schulz, Samantha E; Barense, Morgan D; Ferber, Susanne
2017-10-30
Recent empirical evidence suggests that autistic individuals perceive the world differently than their typically-developed peers. One theoretical account, the predictive coding hypothesis, posits that autistic individuals show a decreased reliance on previous perceptual experiences, which may relate to autism symptomatology. We tested this through a well-characterized, audiovisual statistical-learning paradigm in which typically-developed participants were first adapted to consistent temporal relationships between audiovisual stimulus pairs (audio-leading, synchronous, visual-leading) and then performed a simultaneity judgement task with audiovisual stimulus pairs varying in temporal offset from auditory-leading to visual-leading. Following exposure to the visual-leading adaptation phase, participants' perception of synchrony was biased towards visual-leading presentations, reflecting the statistical regularities of their previously experienced environment. Importantly, the strength of adaptation was significantly related to the level of autistic traits that the participant exhibited, measured by the Autism Quotient (AQ). This was specific to the Attention to Detail subscale of the AQ that assesses the perceptual propensity to focus on fine-grain aspects of sensory input at the expense of more integrative perceptions. More severe Attention to Detail was related to weaker adaptation. These results support the predictive coding framework, and suggest that changes in sensory perception commonly reported in autism may contribute to autistic symptomatology.
Immersive Theater - a Proven Way to Enhance Learning Retention
NASA Astrophysics Data System (ADS)
Reiff, P. H.; Zimmerman, L.; Spillane, S.; Sumners, C.
2014-12-01
The portable immersive theater has gone from our first demonstration at fall AGU 2003 to a product offered by multiple companies in various versions to literally millions of users per year. As part of our NASA funded outreach program, we conducted a test of learning in a portable Discovery Dome as contrasted with learning the same materials (visuals and sound track) on a computer screen. We tested 200 middle school students (primarily underserved minorities). Paired t-tests and an independent t-test were used to compare the amount of learning that students achieved. Interest questionnaires were administered to participants in formal (public school) settings and focus groups were conducted in informal (museum camp and educational festival) settings. Overall results from the informal and formal educational setting indicated that there was a statistically significant increase in test scores after viewing We Choose Space. There was a statistically significant increase in test scores for students who viewed We Choose Space in the portable Discovery Dome (9.75) as well as with the computer (8.88). However, long-term retention of the material tested on the questionnaire indicated that for students who watched We Choose Space in the portable Discovery Dome, there was a statistically significant long-term increase in test scores (10.47), whereas, six weeks after learning on the computer, the improvements over the initial baseline (3.49) were far less and were not statistically significant. The test score improvement six weeks after learning in the dome was essentially the same as the post test immediately after watching the show, demonstrating virtually no loss of gained information in the six week interval. In the formal educational setting, approximately 34% of the respondents indicated that they wanted to learn more about becoming a scientist, while 35% expressed an interest in a career in space science. In the informal setting, 26% indicated that they were interested in pursuing a career in space science.
NASA Astrophysics Data System (ADS)
Gaber, Mohamed Medhat; Zaslavsky, Arkady; Krishnaswamy, Shonali
Data mining is concerned with the process of computationally extracting hidden knowledge structures represented in models and patterns from large data repositories. It is an interdisciplinary field of study that has its roots in databases, statistics, machine learning, and data visualization. Data mining has emerged as a direct outcome of the data explosion that resulted from the success in database and data warehousing technologies over the past two decades (Fayyad, 1997; Fayyad, 1998; Kantardzic, 2003).
Decomposing experience-driven attention: Opposite attentional effects of previously predictive cues.
Lin, Zhicheng; Lu, Zhong-Lin; He, Sheng
2016-10-01
A central function of the brain is to track the dynamic statistical regularities in the environment - such as what predicts what over time. How does this statistical learning process alter sensory and attentional processes? Drawing upon animal conditioning and predictive coding, we developed a learning procedure that revealed two distinct components through which prior learning-experience controls attention. During learning, a visual search task was used in which the target randomly appeared at one of several locations but always inside an encloser of a particular color - the learned color served to direct attention to the target location. During test, the color no longer predicted the target location. When the same search task was used in the subsequent test, we found that the learned color continued to attract attention despite the behavior being counterproductive for the task and despite the presence of a completely predictive cue. However, when tested with a flanker task that had minimal location uncertainty - the target was at the fixation surrounded by a distractor - participants were better at ignoring distractors in the learned color than other colors. Evidently, previously predictive cues capture attention in the same search task but can be better suppressed in a flanker task. These results demonstrate opposing components - capture and inhibition - in experience-driven attention, with their manifestations crucially dependent on task context. We conclude that associative learning enhances context-sensitive top-down modulation while it reduces bottom-up sensory drive and facilitates suppression, supporting a learning-based predictive coding account.
Pattern Adaptation and Normalization Reweighting.
Westrick, Zachary M; Heeger, David J; Landy, Michael S
2016-09-21
Adaptation to an oriented stimulus changes both the gain and preferred orientation of neural responses in V1. Neurons tuned near the adapted orientation are suppressed, and their preferred orientations shift away from the adapter. We propose a model in which weights of divisive normalization are dynamically adjusted to homeostatically maintain response products between pairs of neurons. We demonstrate that this adjustment can be performed by a very simple learning rule. Simulations of this model closely match existing data from visual adaptation experiments. We consider several alternative models, including variants based on homeostatic maintenance of response correlations or covariance, as well as feedforward gain-control models with multiple layers, and we demonstrate that homeostatic maintenance of response products provides the best account of the physiological data. Adaptation is a phenomenon throughout the nervous system in which neural tuning properties change in response to changes in environmental statistics. We developed a model of adaptation that combines normalization (in which a neuron's gain is reduced by the summed responses of its neighbors) and Hebbian learning (in which synaptic strength, in this case divisive normalization, is increased by correlated firing). The model is shown to account for several properties of adaptation in primary visual cortex in response to changes in the statistics of contour orientation. Copyright © 2016 the authors.
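The proposed learning rule, adjusting divisive-normalization weights so that pairwise response products return to a homeostatic set point, can be sketched for two model neurons (a toy re-implementation under assumed parameters, not the authors' code; the damped fixed-point solver and learning rate are implementation choices):

```python
def responses(drive, W, sigma=1.0, iters=300):
    """Damped fixed-point iteration for r_i = d_i / (sigma + sum_j W[i][j] r_j)."""
    r = list(drive)
    for _ in range(iters):
        new = [d / (sigma + sum(w * x for w, x in zip(row, r)))
               for d, row in zip(drive, W)]
        r = [0.5 * a + 0.5 * b for a, b in zip(r, new)]
    return r

def products(r):
    """All pairwise response products r_i * r_j."""
    return [[a * b for b in r] for a in r]

# Set point: response products under uniform drive with initial weights.
W = [[0.5, 0.5], [0.5, 0.5]]
target = products(responses([1.0, 1.0], W))

# Adapter: one input is over-represented, as with prolonged exposure
# to a particular orientation.
adapter = [2.0, 1.0]

def prod_error(W):
    p = products(responses(adapter, W))
    return sum(abs(p[i][j] - target[i][j]) for i in range(2) for j in range(2))

err_before = prod_error(W)
eta = 0.05
for _ in range(500):
    r = responses(adapter, W)
    p = products(r)
    for i in range(2):
        for j in range(2):
            # Hebbian homeostatic rule: strengthen normalization where
            # products exceed the set point, weaken where they fall short.
            W[i][j] = max(0.0, W[i][j] + eta * (p[i][j] - target[i][j]))
err_after = prod_error(W)
```

After adaptation, the over-driven neuron's response is suppressed back toward baseline, the qualitative behavior the abstract describes for neurons tuned near the adapter.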
Grossberg, Stephen; Palma, Jesse; Versace, Massimiliano
2015-01-01
Freely behaving organisms need to rapidly calibrate their perceptual, cognitive, and motor decisions based on continuously changing environmental conditions. These plastic changes include sharpening or broadening of cognitive and motor attention and learning to match the behavioral demands that are imposed by changing environmental statistics. This article proposes that a shared circuit design for such flexible decision-making is used in specific cognitive and motor circuits, and that both types of circuits use acetylcholine to modulate choice selectivity. Such task-sensitive control is proposed to control thalamocortical choice of the critical features that are cognitively attended and that are incorporated through learning into prototypes of visual recognition categories. A cholinergically-modulated process of vigilance control determines if a recognition category and its attended features are abstract (low vigilance) or concrete (high vigilance). Homologous neural mechanisms of cholinergic modulation are proposed to focus attention and learn a multimodal map within the deeper layers of superior colliculus. This map enables visual, auditory, and planned movement commands to compete for attention, leading to selection of a winning position that controls where the next saccadic eye movement will go. Such map learning may be viewed as a kind of attentive motor category learning. The article hereby explicates a link between attention, learning, and cholinergic modulation during decision making within both cognitive and motor systems. Homologs between the mammalian superior colliculus and the avian optic tectum lead to predictions about how multimodal map learning may occur in the mammalian and avian brain and how such learning may be modulated by acetylcholine.
Sczesny-Kaiser, Matthias; Beckhaus, Katharina; Dinse, Hubert R; Schwenkreis, Peter; Tegenthoff, Martin; Höffken, Oliver
2016-01-01
Studies on noninvasive motor cortex stimulation and motor learning demonstrated cortical excitability as a marker for a learning effect. Transcranial direct current stimulation (tDCS) is a non-invasive tool to modulate cortical excitability. It is as yet unknown how tDCS-induced excitability changes and perceptual learning in visual cortex correlate. Our study aimed to examine the influence of tDCS on visual perceptual learning in healthy humans. Additionally, we measured excitability in primary visual cortex (V1). We hypothesized that anodal tDCS would improve and cathodal tDCS would have minor or no effects on visual learning. Anodal, cathodal or sham tDCS were applied over V1 in a randomized, double-blinded design over four consecutive days (n = 30). During 20 min of tDCS, subjects had to learn a visual orientation-discrimination task (ODT). Excitability parameters were measured by analyzing paired-stimulation behavior of visual-evoked potentials (ps-VEP) and by measuring phosphene thresholds (PTs) before and after the stimulation period of 4 days. Compared with sham-tDCS, anodal tDCS led to an improvement of visual discrimination learning (p < 0.003). We found reduced PTs and increased ps-VEP ratios indicating increased cortical excitability after anodal tDCS (PT: p = 0.002, ps-VEP: p = 0.003). Correlation analysis within the anodal tDCS group revealed no significant correlation between PTs and learning effect. For cathodal tDCS, no significant effects on learning or on excitability could be seen. Our results showed that anodal tDCS over V1 resulted in improved visual perceptual learning and increased cortical excitability. tDCS is a promising tool to alter V1 excitability and, hence, perceptual visual learning.
Visual discrimination transfer and modulation by biogenic amines in honeybees.
Vieira, Amanda Rodrigues; Salles, Nayara; Borges, Marco; Mota, Theo
2018-05-10
For more than a century, visual learning and memory have been studied in the honeybee Apis mellifera using operant appetitive conditioning. Although honeybees show impressive visual learning capacities in this well-established protocol, operant training of free-flying animals cannot be combined with invasive protocols for studying the neurobiological basis of visual learning. In view of this, different attempts have been made to develop new classical conditioning protocols for studying visual learning in harnessed honeybees, though learning performance remains considerably poorer than that for free-flying animals. Here, we investigated the ability of honeybees to use visual information acquired during classical conditioning in a new operant context. We performed differential visual conditioning of the proboscis extension reflex (PER) followed by visual orientation tests in a Y-maze. Classical conditioning and Y-maze retention tests were performed using the same pair of perceptually isoluminant chromatic stimuli, to avoid the influence of phototaxis during free-flying orientation. Visual discrimination transfer was clearly observed, with pre-trained honeybees significantly orienting their flights towards the former positive conditioned stimulus (CS+), thus showing that visual memories acquired by honeybees are resistant to context changes between conditioning and the retention test. We combined this visual discrimination approach with selective pharmacological injections to evaluate the effect of dopamine and octopamine in appetitive visual learning. Both octopaminergic and dopaminergic antagonists impaired visual discrimination performance, suggesting that both these biogenic amines modulate appetitive visual learning in honeybees. Our study brings new insight into cognitive and neurobiological mechanisms underlying visual learning in honeybees. © 2018. Published by The Company of Biologists Ltd.
Visual and Verbal Learning Deficits in Veterans with Alcohol and Substance Use Disorders
Bell, Morris D.; Vissicchio, Nicholas A.; Weinstein, Andrea J.
2015-01-01
Background: This study examined visual and verbal learning in the early phase of recovery for 48 Veterans with alcohol use (AUD) and substance use disorders (SUD, primarily cocaine and opiate abusers). Previous studies have demonstrated visual and verbal learning deficits in AUD; however, little is known about the differences between AUD and SUD on these domains. Since the DSM-5 specifically identifies problems with learning in AUD and not in SUD, and problems with visual and verbal learning have been more prevalent in the literature for AUD than SUD, we predicted that people with AUD would be more impaired on measures of visual and verbal learning than people with SUD. Methods: Participants were enrolled in a comprehensive rehabilitation program and were assessed within the first 5 weeks of abstinence. Verbal learning was measured using the Hopkins Verbal Learning Test (HVLT) and visual learning was assessed using the Brief Visuospatial Memory Test (BVMT). Results: There was a significantly greater decline in verbal learning on the HVLT across the three learning trials for AUD participants but not for SUD participants (F=4.653, df=48, p=.036). Visual learning was less impaired than verbal learning across learning trials for both diagnostic groups (F=0.197, df=48, p=.674); there was no significant difference between groups on visual learning (F=0.401, df=14, p=.538). Discussion: Older Veterans in the early phase of recovery from AUD may have difficulty learning new verbal information. Deficits in verbal learning may reduce the effectiveness of verbally-based interventions such as psycho-education. PMID:26684868
ERIC Educational Resources Information Center
Siu, Kin Wai Michael; Lam, Mei Seung
2012-01-01
Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most current CAL facilities are not designed for visually impaired users. People with visual impairment also do not normally have access to…
Effects of Computer-Based Visual Representation on Mathematics Learning and Cognitive Load
ERIC Educational Resources Information Center
Yung, Hsin I.; Paas, Fred
2015-01-01
Visual representation has been recognized as a powerful learning tool in many learning domains. Based on the assumption that visual representations can support deeper understanding, we examined the effects of visual representations on learning performance and cognitive load in the domain of mathematics. An experimental condition with visual…
Threat captures attention but does not affect learning of contextual regularities.
Yamaguchi, Motonori; Harwood, Sarah L
2017-04-01
Some of the stimulus features that guide visual attention are abstract properties of objects such as potential threat to one's survival, whereas others are complex configurations such as visual contexts that are learned through past experiences. The present study investigated the two functions that guide visual attention, threat detection and learning of contextual regularities, in visual search. Search arrays contained images of threat and non-threat objects, and their locations were fixed on some trials but random on other trials. Although they were irrelevant to the visual search task, threat objects facilitated attention capture and impaired attention disengagement. Search time improved for fixed configurations more than for random configurations, reflecting learning of visual contexts. Nevertheless, threat detection had little influence on learning of the contextual regularities. The results suggest that factors guiding visual attention are different from factors that influence learning to guide visual attention.
Colouring the Gaps in Learning Design: Aesthetics and the Visual in Learning
ERIC Educational Resources Information Center
Carroll, Fiona; Kop, Rita
2016-01-01
The visual is a dominant mode of information retrieval and understanding; however, the focus on the visual dimension of Technology Enhanced Learning (TEL) is still quite weak in relation to its predominant focus on usability. To accommodate the future needs of the visual learner, designers of e-learning environments should advance the current…
Visual and Verbal Learning in a Genetic Metabolic Disorder
ERIC Educational Resources Information Center
Spilkin, Amy M.; Ballantyne, Angela O.; Trauner, Doris A.
2009-01-01
Visual and verbal learning in a genetic metabolic disorder (cystinosis) were examined in the following three studies. The goal of Study I was to provide a normative database and establish the reliability and validity of a new test of visual learning and memory (Visual Learning and Memory Test; VLMT) that was modeled after a widely used test of…
François, Clément; Cunillera, Toni; Garcia, Enara; Laine, Matti; Rodriguez-Fornells, Antoni
2017-04-01
Learning a new language requires the identification of word units from continuous speech (the speech segmentation problem) and mapping them onto conceptual representations (the word-to-world mapping problem). Recent behavioral studies have revealed that the statistical properties found within and across modalities can serve as cues for both processes. However, segmentation and mapping have been largely studied separately, and thus it remains unclear whether both processes can be accomplished at the same time and whether they share common neurophysiological features. To address this question, we recorded the EEG of 20 adult participants during both an audio-alone speech segmentation task and an audiovisual word-to-picture association task. The participants were tested both for the implicit detection of online mismatches (structural auditory and visual semantic violations) and for the explicit recognition of words and word-to-picture associations. The ERP results from the learning phase revealed a delayed learning-related fronto-central negativity (FN400) in the audiovisual condition compared to the audio-alone condition. Interestingly, while online structural auditory violations elicited clear MMN/N200 components in the audio-alone condition, visual-semantic violations induced meaning-related N400 modulations in the audiovisual condition. The present results support the idea that speech segmentation and meaning mapping can take place in parallel and act in synergy to enhance novel word learning. Copyright © 2016 Elsevier Ltd. All rights reserved.
SOCR: Statistics Online Computational Resource
Dinov, Ivo D.
2011-01-01
The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning. PMID:21451741
A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability and statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and its parameters are estimated from the statistics of the best individuals by a fast learning rule. The fast learning rule enhances the efficiency of the algorithm, and an elitism strategy maintains convergent performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on higher-dimensional problems, where FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of a PMSM and compared with classical PID and GA.
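The core loop the abstract describes — fit a Gaussian to the elite individuals, smooth its parameters with a fast learning rule, and carry the best-so-far solution forward — can be sketched in a few lines. This is an illustrative sketch, not the authors' FEGEDA implementation; the smoothing rate `alpha`, the population settings, the starting model, and the sphere benchmark are all assumptions chosen for demonstration.

```python
import numpy as np

def gaussian_eda(objective, dim, pop_size=50, elite_frac=0.2,
                 n_gen=100, alpha=0.7, seed=0):
    """Elitist Gaussian EDA sketch: sample from a Gaussian model,
    refit it to the elite individuals with a smoothed ("fast")
    learning rule, and re-inject the best-so-far solution."""
    rng = np.random.default_rng(seed)
    mu = np.full(dim, 3.0)            # assumed starting model, away from optimum
    sigma = np.full(dim, 2.0)
    n_elite = max(2, int(pop_size * elite_frac))
    best_x, best_f = None, float("inf")
    for _ in range(n_gen):
        pop = rng.normal(mu, sigma, size=(pop_size, dim))
        if best_x is not None:        # elitism: keep the best-so-far in the pool
            pop[0] = best_x
        fitness = np.array([objective(x) for x in pop])
        order = np.argsort(fitness)
        elites = pop[order[:n_elite]]
        if fitness[order[0]] < best_f:
            best_x, best_f = pop[order[0]].copy(), float(fitness[order[0]])
        # smoothed ("fast learning") parameter update from elite statistics
        mu = (1 - alpha) * mu + alpha * elites.mean(axis=0)
        sigma = (1 - alpha) * sigma + alpha * elites.std(axis=0)
    return best_x, best_f

# Usage: minimize the 5-D sphere benchmark
x, f = gaussian_eda(lambda v: float(np.sum(v ** 2)), dim=5)
```

Because each generation refits the whole distribution to the elites rather than mutating individuals, the Gaussian contracts around the promising region — the property that distinguishes an EDA from mutation-based evolutionary search.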
Visual Perceptual Learning and Models.
Dosher, Barbara; Lu, Zhong-Lin
2017-09-15
Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.
Creating visual explanations improves learning.
Bobek, Eliza; Tversky, Barbara
2016-01-01
Many topics in science are notoriously difficult for students to learn. Mechanisms and processes outside student experience present particular challenges. While instruction typically involves visualizations, students usually explain in words. Because visual explanations can show parts and processes of complex systems directly, creating them should have benefits beyond creating verbal explanations. We compared learning from creating visual or verbal explanations for two STEM domains, a mechanical system (bicycle pump) and a chemical system (bonding). Both kinds of explanations were analyzed for content, and learning was assessed by a post-test. For the mechanical system, creating a visual explanation increased understanding, particularly for participants of low spatial ability. For the chemical system, creating both visual and verbal explanations improved learning without new teaching. Creating a visual explanation was superior and benefitted participants of both high and low spatial ability. Visual explanations often included crucial yet invisible features. The greater effectiveness of visual explanations appears attributable to the checks they provide for completeness and coherence as well as to their roles as platforms for inference. The benefits should generalize to other domains like the social sciences, history, and archeology where important information can be visualized. Together, the findings provide support for the use of learner-generated visual explanations as a powerful learning tool.
Brain composition and olfactory learning in honey bees
Gronenberg, Wulfila; Couvillon, Margaret J.
2015-01-01
Correlations between brain or brain component size and behavioral measures are frequently studied by comparing different animal species, which sometimes introduces variables that complicate interpretation in terms of brain function. Here, we have analyzed the brain composition of honey bees (Apis mellifera) that have been individually tested in an olfactory learning paradigm. We found that the total brain size correlated with the bees’ learning performance. Among different brain components, only the mushroom body, a structure known to be involved in learning and memory, showed a positive correlation with learning performance. In contrast, visual neuropils were relatively smaller in bees that performed better in the olfactory learning task, suggesting modality-specific behavioral specialization of individual bees. This idea is also supported by inter-individual differences in brain composition. Some slight yet statistically significant differences in the brain composition of European and Africanized honey bees are reported. Larger bees had larger brains, and by comparing brains of different sizes, we report isometric correlations for all brain components except for a small structure, the central body. PMID:20060918
The influence of visual ability on learning and memory performance in 13 strains of mice.
Brown, Richard E; Wong, Aimée A
2007-03-01
We calculated visual ability in 13 strains of mice (129SI/Sv1mJ, A/J, AKR/J, BALB/cByJ, C3H/HeJ, C57BL/6J, CAST/EiJ, DBA/2J, FVB/NJ, MOLF/EiJ, SJL/J, SM/J, and SPRET/EiJ) on visual detection, pattern discrimination, and visual acuity and tested these and other mice of the same strains in a behavioral test battery that evaluated visuo-spatial learning and memory, conditioned odor preference, and motor learning. Strain differences in visual acuity accounted for a significant proportion of the variance between strains in measures of learning and memory in the Morris water maze. Strain differences in motor learning performance were not influenced by visual ability. Conditioned odor preference was enhanced in mice with visual defects. These results indicate that visual ability must be accounted for when testing for strain differences in learning and memory in mice because differences in performance in many tasks may be due to visual deficits rather than differences in higher order cognitive functions. These results have significant implications for the search for the neural and genetic basis of learning and memory in mice.
Perceptual learning in children with visual impairment improves near visual acuity.
Huurneman, Bianca; Boonstra, F Nienke; Cox, Ralf F A; van Rens, Ger; Cillessen, Antonius H N
2013-09-17
This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. Participants were 45 children with visual impairment and 29 children with normal vision. Children with visual impairment were divided into three groups: a magnifier group (n = 12), a crowded perceptual learning group (n = 18), and an uncrowded perceptual learning group (n = 15). Children with normal vision were also divided into three groups, but were measured only at baseline. Dependent variables were single near visual acuity (NVA), crowded NVA, LH line 50% crowding NVA, number of trials, accuracy, performance time, and the numbers of small and large errors. Children with visual impairment trained for six weeks, twice per week, for 30 minutes per session (12 training sessions). After training, children showed significant improvement of NVA in addition to specific improvements on the training task. The crowded perceptual learning group showed the largest acuity improvements (1.7 logMAR lines on the crowded chart, P < 0.001). Only the children in the crowded perceptual learning group showed improvements on all NVA charts. Children with visual impairment benefit from perceptual training. While task-specific improvements were observed in all training groups, transfer to crowded NVA was largest in the crowded perceptual learning group. To our knowledge, this is the first study to provide evidence for the improvement of NVA by perceptual learning in children with visual impairment. (http://www.trialregister.nl number, NTR2537.)
Integrating visual learning within a model-based ATR system
NASA Astrophysics Data System (ADS)
Carlotto, Mark; Nebrich, Mark
2017-05-01
Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.
Visual Learning in Application of Integration
NASA Astrophysics Data System (ADS)
Bt Shafie, Afza; Barnachea Janier, Josefina; Bt Wan Ahmad, Wan Fatimah
Innovative use of technology can improve the way Mathematics is taught. It can enhance students' learning of concepts through visualization. Visualization in Mathematics refers to the use of texts, pictures, graphs and animations to hold the attention of learners so that they can learn the concepts. This paper describes the use of a developed multimedia courseware as an effective tool for visual learning of mathematics. The focus is on the application of integration, a topic in Engineering Mathematics 2. The course is offered to foundation students at Universiti Teknologi PETRONAS. A questionnaire was distributed to obtain feedback on the visual representation and on students' attitudes towards using visual representation as a learning tool. The questionnaire consists of 3 sections: courseware design (Part A), courseware usability (Part B) and attitudes towards using the courseware (Part C). The results showed that the use of visual representation benefited students in learning the topic.
Randomized Controlled Study of a Remote Flipped Classroom Neuro-otology Curriculum.
Carrick, Frederick Robert; Abdulrahman, Mahera; Hankir, Ahmed; Zayaruzny, Maksim; Najem, Kinda; Lungchukiet, Palita; Edwards, Roger A
2017-01-01
Medical Education can be delivered in the traditional classroom or via novel technology including an online classroom. To test the hypothesis that learning in an online classroom would result in similar outcomes to learning in the traditional classroom when using a flipped classroom pedagogy. Randomized controlled trial. A total of 274 subjects enrolled in a Neuro-otology training program for non-Neuro-otologists of 25 h held over a 3-day period. Subjects were randomized into a "control" group attending a traditional classroom and a "trial" group of equal numbers participating in an online synchronous Internet streaming classroom using the Adobe Connect e-learning platform. Pre- and post-multiple choice examinations of VOR, Movement, Head Turns, Head Tremor, Neurodegeneration, Inferior Olivary Complex, Collateral Projections, Eye Movement Training, Visual Saccades, Head Saccades, Visual Impairment, Walking Speed, Neuroprotection, Autophagy, Hyperkinetic Movement, Eye and Head Stability, Oscillatory Head Movements, Gaze Stability, Leaky Neural Integrator, Cervical Dystonia, INC and Head Tilts, Visual Pursuits, Optokinetic Stimulation, and Vestibular Rehabilitation. All candidates took a pretest examination of the subject material. The two 9-h and one 8-h sessions over three consecutive days were given live in the classroom and synchronously in the online classroom using the Adobe Connect e-learning platform. Subjects randomized to the online classroom attended the lectures in a location of their choice and viewed the sessions live on the Internet. A posttest examination was given to all candidates after completion of the course. Two sample unpaired t tests with equal variances were calculated for all pretests and posttests for all groups including gender differences.
All 274 subjects demonstrated statistically significant learning by comparison of their pre- and posttest scores. There were no statistically significant differences in test scores between the two groups of 137 subjects each (0.8%, 95% CI 85.45917-86.67952; P = 0.9195). The 101 males in the traditional classroom arm scored statistically significantly lower than the 72 females (0.8%, 95% CI 84.65716-86.53096; P = 0.0377), but not in the online arm (0.8%, 95% CI 85.46172-87.23135; P = 0.2176), with a moderate effect size (Cohen's d = -0.407). The use of a synchronous online classroom in neuro-otology clinical training demonstrated outcomes similar to the traditional classroom. The online classroom is a low-cost and effective complement to medical specialty training in Neuro-otology. The significant difference in outcomes between males and females who attended the traditional classroom suggests that women may do better than men in this learning environment, although the effect size is moderate. Clinicaltrials.gov, identifier NCT03079349.
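The comparisons above rest on the classic two-sample unpaired t test with equal variances, i.e., a pooled-variance t statistic. A minimal sketch of that computation (a hypothetical helper, not the study's analysis code):

```python
import math

def pooled_t(x, y):
    """Two-sample unpaired t test with equal variances:
    pool the sample variances, then scale the mean difference."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    sx2 = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variance of x
    sy2 = sum((v - my) ** 2 for v in y) / (ny - 1)   # sample variance of y
    sp2 = ((nx - 1) * sx2 + (ny - 1) * sy2) / (nx + ny - 2)  # pooled variance
    t = (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))
    return t, nx + ny - 2   # t statistic and degrees of freedom

# Usage with toy data (not the study's scores)
t, df = pooled_t([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
```

The returned t is then compared against the t distribution with nx + ny − 2 degrees of freedom to obtain the P value; in practice a library routine such as SciPy's `ttest_ind` does both steps.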
Karvelis, Povilas; Seitz, Aaron R; Lawrie, Stephen M; Seriès, Peggy
2018-05-14
Recent theories propose that schizophrenia/schizotypy and autism spectrum disorder (ASD) are related to impairments in Bayesian inference, that is, in how the brain integrates sensory information (likelihoods) with prior knowledge. However, existing accounts fail to clarify (i) how the proposed theories differ in their accounts of ASD vs. schizophrenia and (ii) whether the impairments result from weaker priors or enhanced likelihoods. Here, we directly address these issues by characterizing how 91 healthy participants, scored for autistic and schizotypal traits, implicitly learned and combined priors with sensory information. This was accomplished through a visual statistical learning paradigm designed to quantitatively assess variations in individuals' likelihoods and priors. The acquisition of the priors was found to be intact along both trait spectra. However, autistic traits were associated with more veridical perception and a weaker influence of expectations. Bayesian modeling revealed that this was due, not to weaker prior expectations, but to more precise sensory representations. © 2018, Karvelis et al.
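For Gaussian priors and likelihoods, the prior-likelihood combination this study quantifies reduces to a precision-weighted average. The sketch below (illustrative numbers, not the study's fitted parameters) shows why more precise sensory representations yield "more veridical perception and a weaker influence of expectations": raising likelihood precision pulls the posterior toward the stimulus and away from the prior.

```python
def combine(prior_mean, prior_sd, like_mean, like_sd):
    """Posterior mean and sd for a Gaussian prior times a Gaussian
    likelihood: a precision-weighted average of the two means."""
    wp = 1.0 / prior_sd ** 2          # prior precision
    wl = 1.0 / like_sd ** 2           # likelihood (sensory) precision
    post_mean = (wp * prior_mean + wl * like_mean) / (wp + wl)
    post_sd = (wp + wl) ** -0.5
    return post_mean, post_sd

# Prior expects 0; the true stimulus is at 5 (hypothetical units).
# With an imprecise sensory signal the percept is biased toward the prior;
# with a precise one it sits nearly on the stimulus.
m_weak, _ = combine(0.0, 10.0, 5.0, like_sd=4.0)
m_precise, _ = combine(0.0, 10.0, 5.0, like_sd=1.0)
```

In this framing, "weaker priors" and "enhanced likelihoods" both shift the weighting toward the sensory estimate, which is exactly why the two hypotheses must be separated by modeling rather than by behavior alone.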
NASA Astrophysics Data System (ADS)
Benakli, Nadia; Kostadinov, Boyan; Satyanarayana, Ashwin; Singh, Satyanand
2017-04-01
The goal of this paper is to promote computational thinking among mathematics, engineering, science and technology students through hands-on computer experiments. These activities have the potential to empower students to learn, create and invent with technology, and they engage computational thinking through simulations, visualizations and data analysis. We present nine computer experiments, and suggest a few more, with applications to calculus, probability and data analysis. We use the free (open-source) statistical programming language R. Our goal is to give a taste of what R offers rather than to present a comprehensive tutorial on the R language. In our experience, these kinds of interactive computer activities can be easily integrated into a smart classroom. Furthermore, these activities tend to keep students motivated and actively engaged in the process of learning, problem solving and developing a better intuition for understanding complex mathematical concepts.
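As a flavor of the hands-on probability experiments the paper advocates (the paper works in R; this analogous sketch is in Python), a short Monte Carlo simulation lets students estimate a probability empirically before seeing the closed-form derivation. Here, the classic birthday-match probability for 23 people:

```python
import random

def estimate_birthday_match(n_people=23, trials=20_000, seed=1):
    """Monte Carlo estimate of P(at least two of n_people
    share a birthday), assuming 365 equally likely days."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        days = [random.randrange(365) for _ in range(n_people)]
        if len(set(days)) < n_people:   # a repeated day means a match
            hits += 1
    return hits / trials

p = estimate_birthday_match()
```

The estimate lands near the analytic value of about 0.507, and varying `n_people` or `trials` invites exactly the kind of simulate-then-reason loop the abstract describes.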
Visual Learning Alters the Spontaneous Activity of the Resting Human Brain: An fNIRS Study
Niu, Haijing; Li, Hao; Sun, Li; Su, Yongming; Huang, Jing; Song, Yan
2014-01-01
Resting-state functional connectivity (RSFC) has been widely used to investigate spontaneous brain activity that exhibits correlated fluctuations. RSFC has been found to be changed along the developmental course and after learning. Here, we investigated whether and how visual learning modified the resting oxygenated hemoglobin (HbO) functional brain connectivity by using functional near-infrared spectroscopy (fNIRS). We demonstrate that after five days of training on an orientation discrimination task constrained to the right visual field, resting HbO functional connectivity and directed mutual interaction between high-level visual cortex and frontal/central areas involved in the top-down control were significantly modified. Moreover, these changes, which correlated with the degree of perceptual learning, were not limited to the trained left visual cortex. We conclude that the resting oxygenated hemoglobin functional connectivity could be used as a predictor of visual learning, supporting the involvement of high-level visual cortex and the involvement of frontal/central cortex during visual perceptual learning. PMID:25243168
Perceptual learning increases the strength of the earliest signals in visual cortex.
Bao, Min; Yang, Lin; Rios, Cristina; He, Bin; Engel, Stephen A
2010-11-10
Training improves performance on most visual tasks. Such perceptual learning can modify how information is read out from, and represented in, later visual areas, but effects on early visual cortex are controversial. In particular, it remains unknown whether learning can reshape neural response properties in early visual areas independent from feedback arising in later cortical areas. Here, we tested whether learning can modify feedforward signals in early visual cortex as measured by the human electroencephalogram. Fourteen subjects were trained for >24 d to detect a diagonal grating pattern in one quadrant of the visual field. Training improved performance, reducing the contrast needed for reliable detection, and also reliably increased the amplitude of the earliest component of the visual evoked potential, the C1. Control orientations and locations showed smaller effects of training. Because the C1 arises rapidly and has a source in early visual cortex, our results suggest that learning can increase early visual area response through local receptive field changes without feedback from later areas.
Cognitive Strategies for Learning from Static and Dynamic Visuals.
ERIC Educational Resources Information Center
Lewalter, D.
2003-01-01
Studied the effects of including static or dynamic visuals in an expository text on a learning outcome and the use of learning strategies when working with these visuals. Results for 60 undergraduates for both types of illustration indicate different frequencies in the use of learning strategies relevant for the learning outcome. (SLD)
The Triangle Technique: a new evidence-based educational tool for pediatric medication calculations.
Sredl, Darlene
2006-01-01
Many nursing students verbalize an aversion to mathematical concepts and experience math anxiety whenever they confront a mathematical problem. Since nurses confront mathematical problems on a daily basis, they must learn to feel comfortable with their ability to perform these calculations correctly. The Triangle Technique, a new educational tool available to nurse educators, incorporates evidence-based concepts within a graphic model using visual, auditory, and kinesthetic learning styles to demonstrate pediatric medication calculations of normal therapeutic ranges. The theoretical framework for the technique is presented, as is a pilot study examining the efficacy of the educational tool. Statistically significant results obtained by Pearson's product-moment correlation indicate that students are better able to calculate accurate pediatric therapeutic dosage ranges after participation in the educational intervention of learning the Triangle Technique.
Is an attention-based associative account of adjacent and nonadjacent dependency learning valid?
Pacton, Sébastien; Sobaco, Amélie; Perruchet, Pierre
2015-05-01
Pacton and Perruchet (2008) reported that participants who were asked to process adjacent elements located within a sequence of digits learned adjacent dependencies but did not learn nonadjacent dependencies, and conversely, participants who were asked to process nonadjacent digits learned nonadjacent dependencies but did not learn adjacent dependencies. In the present study, we showed that when participants were simply asked to read aloud the same sequences of digits, a task demand that did not require the intentional processing of specific elements as in standard statistical learning tasks, only adjacent dependencies were learned. The very same pattern was observed when digits were replaced by syllables. These results show that the perfect symmetry found in Pacton and Perruchet was not due to the processing of digits being less sensitive to distance than the processing of syllables, tones, or visual shapes used in most statistical learning tasks. Moreover, the present results, complemented by a reanalysis of the data collected in Pacton and Perruchet (2008), demonstrate that participants are highly sensitive to violations involving the spacing between paired elements. Overall, these results are consistent with Pacton and Perruchet's single-process account of adjacent and nonadjacent dependencies, in which the joint attentional processing of two events is a necessary and sufficient condition for learning the relation between them, irrespective of their distance. However, this account should be extended to encompass the notion that the presence or absence of an intermediate event is an intrinsic component of the representation of an association. Copyright © 2015 Elsevier B.V. All rights reserved.
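The adjacent and nonadjacent dependencies at issue in this line of work are typically quantified as transitional probabilities. A minimal Python sketch makes the distance manipulation concrete; the toy syllable stream and the `gap` parameter are illustrative assumptions, not the authors' materials:

```python
from collections import Counter

def transitional_probability(stream, first, second, gap=0):
    """P(second | first) with `gap` intervening elements:
    gap=0 probes an adjacent dependency, gap=1 a nonadjacent one."""
    step = gap + 1
    pair_count = sum(
        1 for i in range(len(stream) - step)
        if stream[i] == first and stream[i + step] == second
    )
    first_count = Counter(stream[:len(stream) - step])[first]
    return pair_count / first_count if first_count else 0.0

# "ba" predicts "di" across one variable intervening element
stream = ["ba", "X", "di", "ba", "Y", "di", "ba", "Z", "di"]
print(transitional_probability(stream, "ba", "di", gap=1))  # 1.0
```

In a stream like this one, the nonadjacent statistic is perfect while the adjacent statistic for the same pair is zero, which is exactly the contrast such experiments exploit.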
Visual Aversive Learning Compromises Sensory Discrimination.
Shalev, Lee; Paz, Rony; Avidan, Galia
2018-03-14
Aversive learning is thought to modulate perceptual thresholds, which can lead to overgeneralization. However, it remains undetermined whether this modulation is domain specific or a general effect. Moreover, despite the unique role of the visual modality in human perception, it is unclear whether this aspect of aversive learning exists in this modality. The current study was designed to examine the effect of visual aversive outcomes on the perception of basic visual and auditory features. We tested the ability of healthy participants, both males and females, to discriminate between neutral stimuli, before and after visual learning. In each experiment, neutral stimuli were associated with aversive images in an experimental group and with neutral images in a control group. Participants demonstrated a deterioration in discrimination (higher discrimination thresholds) only after aversive learning. This deterioration was measured for both auditory (tone frequency) and visual (orientation and contrast) features. The effect was replicated in five different experiments and lasted for at least 24 h. fMRI neural responses and pupil size were also measured during learning. We showed an increase in neural activations in the anterior cingulate cortex, insula, and amygdala during aversive compared with neutral learning. Interestingly, the early visual cortex showed increased brain activity during aversive compared with neutral context trials, with identical visual information. Our findings imply the existence of a central multimodal mechanism, which modulates early perceptual properties, following exposure to negative situations. Such a mechanism could contribute to abnormal responses that underlie anxiety states, even in new and safe environments. SIGNIFICANCE STATEMENT Using a visual aversive-learning paradigm, we found deteriorated discrimination abilities for visual and auditory stimuli that were associated with visual aversive stimuli. 
We showed increased neural activations in the anterior cingulate cortex, insula, and amygdala during aversive learning, compared with neutral learning. Importantly, similar findings were also evident in the early visual cortex during trials with aversive/neutral context, but with identical visual information. The demonstration of this phenomenon in the visual modality is important, as it provides support to the notion that aversive learning can influence perception via a central mechanism, independent of input modality. Given the dominance of the visual system in human perception, our findings hold relevance to daily life, as well as imply a potential etiology for anxiety disorders. Copyright © 2018 the authors 0270-6474/18/382766-14$15.00/0.
PeakVizor: Visual Analytics of Peaks in Video Clickstreams from Massive Open Online Courses.
Chen, Qing; Chen, Yuanzhe; Liu, Dongyu; Shi, Conglei; Wu, Yingcai; Qu, Huamin
2016-10-01
Massive open online courses (MOOCs) aim to facilitate open-access and massive-participation education. These courses have attracted millions of learners recently. At present, most MOOC platforms record the web log data of learner interactions with course videos. Such large amounts of multivariate data pose a new challenge in terms of analyzing online learning behaviors. Previous studies have mainly focused on the aggregate behaviors of learners from a summative view; however, few attempts have been made to conduct a detailed analysis of such behaviors. To determine complex learning patterns in MOOC video interactions, this paper introduces a comprehensive visualization system called PeakVizor. This system enables course instructors and education experts to analyze the "peaks" or the video segments that generate numerous clickstreams. The system features three views at different levels: the overview with glyphs to display valuable statistics regarding the peaks detected; the flow view to present spatio-temporal information regarding the peaks; and the correlation view to show the correlation between different learner groups and the peaks. Case studies and interviews conducted with domain experts have demonstrated the usefulness and effectiveness of PeakVizor, and new findings about learning behaviors in MOOC platforms have been reported.
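PeakVizor's analysis begins by detecting "peaks" in the per-segment click counts of a video. The paper does not specify its detection algorithm, so the following is a deliberately simplified local-maximum detector with an assumed height threshold, for illustration only:

```python
def find_peaks(counts, min_height):
    """Indices of local maxima in a clickstream count series that reach
    min_height and are strictly greater than both neighbors."""
    peaks = []
    for i in range(1, len(counts) - 1):
        if counts[i] >= min_height and counts[i - 1] < counts[i] > counts[i + 1]:
            peaks.append(i)
    return peaks

clicks = [2, 3, 9, 4, 2, 8, 12, 7, 3]
print(find_peaks(clicks, min_height=8))  # [2, 6]
```

Each detected index would correspond to a video segment that generated unusually many interactions, the unit that PeakVizor's glyph, flow, and correlation views then characterize.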
Desantis, Andrea; Haggard, Patrick
2016-01-01
To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063
Evolution and Vaccination of Influenza Virus.
Lam, Ham Ching; Bi, Xuan; Sreevatsan, Srinand; Boley, Daniel
2017-08-01
In this study, we present an application paradigm in which an unsupervised machine learning approach is applied to high-dimensional influenza genetic sequences to investigate whether vaccination is a driving force in the evolution of influenza virus. We first used a visualization approach to visualize the evolutionary paths of vaccine-controlled and non-vaccine-controlled influenza viruses in a low-dimensional space. We then quantified the differences between their evolutionary trajectories by computing within- and between-scatter matrices, providing statistical confidence to support the visualization results. We used the influenza surface hemagglutinin (HA) gene for this study, as the HA gene is the major target of the immune system. The visualization is achieved without using any clustering methods or prior information about the influenza sequences. Our results clearly showed that the evolutionary trajectories of vaccine-controlled and non-vaccine-controlled influenza viruses differ, and that vaccination as a driving force of evolution cannot be completely ruled out.
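The within- and between-scatter matrices used to quantify the trajectory differences are standard linear-algebra constructions. A minimal NumPy sketch follows; the toy data are illustrative and this is not the authors' pipeline:

```python
import numpy as np

def scatter_matrices(X, labels):
    """Within-class (Sw) and between-class (Sb) scatter matrices for
    row-vector samples X grouped by class labels."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)          # spread inside class c
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)        # class mean vs. grand mean
    return Sw, Sb

Sw, Sb = scatter_matrices([[0, 0], [2, 0], [4, 0], [6, 0]], [0, 0, 1, 1])
print(Sw[0, 0], Sb[0, 0])  # 4.0 16.0
```

A large between-scatter relative to within-scatter, here along the first coordinate, is what lets the authors claim the two groups of trajectories are statistically separated.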
Amano, Kaoru; Shibata, Kazuhisa; Kawato, Mitsuo; Sasaki, Yuka; Watanabe, Takeo
2016-01-01
Associative learning is an essential brain process where the contingency of different items increases after training. Associative learning has been found to occur in many brain regions [1-4]. However, there is no clear evidence that associative learning of visual features occurs in early visual areas, although a number of studies have indicated that learning of a single visual feature (perceptual learning) involves early visual areas [5-8]. Here, via decoded functional magnetic resonance imaging (fMRI) neurofeedback, termed “DecNef” [9], we tested whether associative learning of color and orientation can be created in early visual areas. During three days' training, DecNef induced fMRI signal patterns that corresponded to a specific target color (red) mostly in early visual areas while a vertical achromatic grating was physically presented to participants. As a result, participants came to perceive “red” significantly more frequently than “green” in an achromatic vertical grating. This effect was also observed 3 to 5 months after the training. These results suggest that long-term associative learning of two different visual features, such as color and orientation, was created most likely in early visual areas. This newly extended technique that induces associative learning is called “A(ssociative)-DecNef” and may be used as an important tool for understanding and modifying brain functions, since associations are fundamental and ubiquitous functions in the brain. PMID:27374335
Towards a Web-Enabled Geovisualization and Analytics Platform for the Energy and Water Nexus
NASA Astrophysics Data System (ADS)
Sanyal, J.; Chandola, V.; Sorokine, A.; Allen, M.; Berres, A.; Pang, H.; Karthik, R.; Nugent, P.; McManamay, R.; Stewart, R.; Bhaduri, B. L.
2017-12-01
Interactive data analytics are playing an increasingly vital role in generating new, critical insights into the complex dynamics of the energy/water nexus (EWN) and its interactions with climate variability and change. Integrating impacts, adaptation, and vulnerability (IAV) science with emerging, and increasingly critical, data science capabilities offers promising potential to meet the needs of the EWN community. To enable the exploration of pertinent research questions, a web-based geospatial visualization platform is being built that integrates a data analysis toolbox with advanced data fusion and data visualization capabilities to create a knowledge discovery framework for the EWN. When fully built out, the system will offer several geospatial visualization capabilities, including statistical visual analytics, clustering, principal-component analysis, and dynamic time warping; it will support uncertainty visualization and the exploration of data provenance, as well as machine learning discoveries, to render diverse types of geospatial data and facilitate interactive analysis. Key components in the system architecture include NASA's WebWorldWind, the Globus toolkit, PostgreSQL, and other custom-built software modules.
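Dynamic time warping, one of the capabilities listed above, can be sketched with the classic dynamic program; this is a textbook implementation for illustration, not the platform's code:

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between
    two numeric sequences, using absolute difference as local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = minimal cumulative cost aligning a[:i] with b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0
```

Because the warp can stretch one series against the other, two time series with the same shape but different pacing (e.g. seasonal water-demand curves from different regions) compare as similar, which is the point of offering DTW alongside clustering.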
Tiger salamanders' (Ambystoma tigrinum) response learning and usage of visual cues.
Kundey, Shannon M A; Millar, Roberto; McPherson, Justin; Gonzalez, Maya; Fitz, Aleyna; Allen, Chadbourne
2016-05-01
We explored tiger salamanders' (Ambystoma tigrinum) learning to execute a response within a maze as proximal visual cue conditions varied. In Experiment 1, salamanders learned to turn consistently in a T-maze for reinforcement before the maze was rotated. All learned the initial task and executed the trained turn during test, suggesting that they learned to demonstrate the reinforced response during training and continued to perform it during test. In a second experiment utilizing a similar procedure, two visual cues were placed consistently at the maze junction. Salamanders were reinforced for turning towards one cue. Cue placement was reversed during test. All learned the initial task, but executed the trained turn rather than turning towards the visual cue during test, evidencing response learning. In Experiment 3, we investigated whether a compound visual cue could control salamanders' behaviour when it was the only cue predictive of reinforcement in a cross-maze by varying start position and cue placement. All learned to turn in the direction indicated by the compound visual cue, indicating that visual cues can come to control their behaviour. Following training, testing revealed that salamanders attended to foreground stimuli over background features. Overall, these results suggest that salamanders learn to execute responses rather than to use visual cues, but can use visual cues if required. Our success with this paradigm offers the potential in future studies to explore salamanders' cognition further, as well as to shed light on how features of the tiger salamander's life history (e.g. hibernation and metamorphosis) impact cognition.
Xu, Yang; D'Lauro, Christopher; Pyles, John A.; Kass, Robert E.; Tarr, Michael J.
2013-01-01
Humans are remarkably proficient at categorizing visually-similar objects. To better understand the cortical basis of this categorization process, we used magnetoencephalography (MEG) to record neural activity while participants learned–with feedback–to discriminate two highly-similar, novel visual categories. We hypothesized that although prefrontal regions would mediate early category learning, this role would diminish with increasing category familiarity and that regions within the ventral visual pathway would come to play a more prominent role in encoding category-relevant information as learning progressed. Early in learning we observed some degree of categorical discriminability and predictability in both prefrontal cortex and the ventral visual pathway. Predictability improved significantly above chance in the ventral visual pathway over the course of learning with the left inferior temporal and fusiform gyri showing the greatest improvement in predictability between 150 and 250 ms (M200) during category learning. In contrast, there was no comparable increase in discriminability in prefrontal cortex with the only significant post-learning effect being a decrease in predictability in the inferior frontal gyrus between 250 and 350 ms (M300). Thus, the ventral visual pathway appears to encode learned visual categories over the long term. At the same time these results add to our understanding of the cortical origins of previously reported signature temporal components associated with perceptual learning. PMID:24146656
Aminergic neuromodulation of associative visual learning in harnessed honey bees.
Mancini, Nino; Giurfa, Martin; Sandoz, Jean-Christophe; Avarguès-Weber, Aurore
2018-05-21
The honey bee Apis mellifera is a major insect model for studying visual cognition. Free-flying honey bees learn to associate different visual cues with a sucrose reward and may deploy sophisticated cognitive strategies to this end. Yet, the neural bases of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but training them to respond appetitively to visual stimuli paired with sucrose reward is difficult. Here we succeeded in coupling visual conditioning in harnessed bees with pharmacological analyses on the role of octopamine (OA), dopamine (DA) and serotonin (5-HT) in visual learning. We also studied if and how these biogenic amines modulate sucrose responsiveness and phototaxis behaviour as intact reward and visual perception are essential prerequisites for appetitive visual learning. Our results suggest that both octopaminergic and dopaminergic signaling mediate either the appetitive sucrose signaling or the association between color and sucrose reward in the bee brain. Enhancing and inhibiting serotonergic signaling both compromised learning performances, probably via an impairment of visual perception. We thus provide a first analysis of the role of aminergic signaling in visual learning and retention in the honey bee and discuss further research trends necessary to understand the neural bases of visual cognition in this insect. Copyright © 2018 Elsevier Inc. All rights reserved.
Advances and limitations of visual conditioning protocols in harnessed bees.
Avarguès-Weber, Aurore; Mota, Theo
2016-10-01
Bees are excellent invertebrate models for studying visual learning and memory mechanisms, because of their sophisticated visual system and impressive cognitive capacities associated with a relatively simple brain. Visual learning in free-flying bees has been traditionally studied using an operant conditioning paradigm. This well-established protocol, however, can hardly be combined with invasive procedures for studying the neurobiological basis of visual learning. Different efforts have been made to develop protocols in which harnessed honey bees could associate visual cues with reinforcement, though learning performances remain poorer than those obtained with free-flying animals. Especially in the last decade, the intention of improving visual learning performances of harnessed bees led many authors to adopt distinct visual conditioning protocols, altering parameters like harnessing method, nature and duration of visual stimulation, number of trials, inter-trial intervals, among others. As a result, the literature provides data hardly comparable and sometimes contradictory. In the present review, we provide an extensive analysis of the literature available on visual conditioning of harnessed bees, with special emphasis on the comparison of diverse conditioning parameters adopted by different authors. Together with this comparative overview, we discuss how these diverse conditioning parameters could modulate visual learning performances of harnessed bees. Copyright © 2016 Elsevier Ltd. All rights reserved.
Comparing Visual and Statistical Analysis of Multiple Baseline Design Graphs.
Wolfe, Katie; Dickenson, Tammiee S; Miller, Bridget; McGrath, Kathleen V
2018-04-01
A growing number of statistical analyses are being developed for single-case research. One important factor in evaluating these methods is the extent to which each corresponds to visual analysis. Few studies have compared statistical and visual analysis, and information about more recently developed statistics is scarce. Therefore, our purpose was to evaluate the agreement between visual analysis and four statistical analyses: improvement rate difference (IRD); Tau-U; Hedges, Pustejovsky, Shadish (HPS) effect size; and between-case standardized mean difference (BC-SMD). Results indicate that IRD and BC-SMD had the strongest overall agreement with visual analysis. Although Tau-U had strong agreement with visual analysis on raw values, it had poorer agreement when those values were dichotomized to represent the presence or absence of a functional relation. Overall, visual analysis appeared to be more conservative than statistical analysis, but further research is needed to evaluate the nature of these disagreements.
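Of the four statistics compared above, improvement rate difference is the simplest to illustrate. The sketch below is a simplified nonoverlap version that assumes higher values indicate improvement; the published IRD instead removes the fewest data points needed to eliminate overlap, and this shortcut matches it only in clean cases like the example:

```python
def improvement_rate_difference(baseline, treatment):
    """Simplified IRD sketch: treatment points that exceed every
    baseline point count as improved; baseline points that reach the
    treatment range count as improved (assumes higher = better)."""
    b_max = max(baseline)
    t_min = min(treatment)
    ir_treatment = sum(x > b_max for x in treatment) / len(treatment)
    ir_baseline = sum(x >= t_min for x in baseline) / len(baseline)
    return ir_treatment - ir_baseline

# Fully separated phases give the maximum IRD of 1.0
print(improvement_rate_difference([2, 3, 2, 4], [6, 7, 8, 9]))  # 1.0
```

An IRD near 1.0 corresponds to the complete phase separation that visual analysts also read as a clear functional relation, which is why the statistic's agreement with visual analysis is worth testing.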
Statistical and Variational Methods for Problems in Visual Control
2009-03-02
Conditions for the Effectiveness of Multiple Visual Representations in Enhancing STEM Learning
ERIC Educational Resources Information Center
Rau, Martina A.
2017-01-01
Visual representations play a critical role in enhancing science, technology, engineering, and mathematics (STEM) learning. Educational psychology research shows that adding visual representations to text can enhance students' learning of content knowledge, compared to text-only. But should students learn with a single type of visual…
Embodied attention and word learning by toddlers
Yu, Chen; Smith, Linda B.
2013-01-01
Many theories of early word learning begin with the uncertainty inherent to learning a word from its co-occurrence with a visual scene. However, the relevant visual scene for infant word learning is neither from the adult theorist’s view nor the mature partner’s view, but is rather from the learner’s personal view. Here we show that when 18-month old infants interacted with objects in play with their parents, they created moments in which a single object was visually dominant. If parents named the object during these moments of bottom-up selectivity, later forced-choice tests showed that infants learned the name, but did not when naming occurred during a less visually selective moment. The momentary visual input for parents and toddlers was captured via head cameras placed low on each participant’s forehead as parents played with and named objects for their infant. Frame-by-frame analyses of the head camera images at and around naming moments were conducted to determine the visual properties at input that were associated with learning. The analyses indicated that learning occurred when bottom-up visual information was clean and uncluttered. The sensory-motor behaviors of infants and parents were also analyzed to determine how their actions on the objects may have created these optimal visual moments for learning. The results are discussed with respect to early word learning, embodied attention, and the social role of parents in early word learning. PMID:22878116
An object-based visual attention model for robotic applications.
Yu, Yuanlong; Mann, George K I; Gosine, Raymond G
2010-10-01
By extending the integrated competition hypothesis, this paper presents an object-based visual attention model that selects one object of interest using low-dimensional features, so that visual perception starts with a fast attentional selection procedure. The proposed attention model involves seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between top-down and bottom-up ways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically when one object is attended. A dual-coding object representation consisting of local and global codings is proposed. Intensity, color, and orientation features are used to build the local coding, and a contour feature is employed to constitute the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. Mediation between automatic bottom-up competition and conscious top-down biasing is then performed to yield a location-based saliency map. By combining location-based saliency within each proto-object, the proto-object-based saliency is evaluated. The most salient proto-object is selected for attention and is finally put into the perceptual completion processing module to yield a complete object region. This model has been applied to distinct robotic tasks: detection of task-specific stationary and moving objects. Experimental results under different conditions are shown to validate this model.
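The model's final selection step, combining location-based saliency within each proto-object and attending the winner, can be sketched as follows. The label-map representation of proto-objects and the use of mean pooling are assumptions for illustration; the paper does not commit to this exact combination rule:

```python
import numpy as np

def most_salient_proto_object(saliency_map, proto_labels):
    """Given a pixel-wise saliency map and an integer label map that
    assigns each pixel to a proto-object (0 = background), return the
    label whose mean saliency is highest."""
    saliency_map = np.asarray(saliency_map, dtype=float)
    proto_labels = np.asarray(proto_labels)
    best_label, best_score = None, -np.inf
    for label in np.unique(proto_labels):
        if label == 0:          # skip background pixels
            continue
        score = saliency_map[proto_labels == label].mean()
        if score > best_score:
            best_label, best_score = label, score
    return best_label

sal = [[0.1, 0.9], [0.2, 0.8]]     # location-based saliency map
labels = [[1, 2], [1, 2]]          # two proto-objects
print(most_salient_proto_object(sal, labels))  # 2
```

Pooling saliency per proto-object rather than per pixel is what makes the selection object-based: a compact object of moderately high saliency can beat a single very bright stray pixel.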
Illusory Reversal of Causality between Touch and Vision has No Effect on Prism Adaptation Rate.
Tanaka, Hirokazu; Homma, Kazuhiro; Imamizu, Hiroshi
2012-01-01
Learning, according to the Oxford Dictionary, is "to gain knowledge or skill by studying, from experience, from being taught, etc." In order to learn from experience, the central nervous system has to decide what action leads to what consequence, and temporal perception plays a critical role in determining the causality between actions and consequences. In motor adaptation, causality between action and consequence is implicitly assumed, so that a subject adapts to a new environment based on the consequence caused by her action. Adaptation to visual displacement induced by prisms is a prime example; the visual error signal associated with the motor output contributes to the recovery of accurate reaching, and delayed feedback of visual error can decrease the adaptation rate. The subjective feeling of the temporal order of action and consequence, however, can be modified or even reversed when the sense of simultaneity is manipulated with artificially delayed feedback. Our previous study (Tanaka et al., 2011; Exp. Brain Res.) demonstrated that the rate of prism adaptation was unaffected when the subjective delay of visual feedback was shortened. This study asked whether subjects could adapt to prism displacement, and whether the rate of prism adaptation was affected, when the subjective temporal order was illusorily reversed. Adapting to an additional 100 ms delay and its sudden removal caused a positive shift of the point of simultaneity in a temporal order judgment experiment, indicating an illusory reversal of action and consequence. We found that, even in this case, the subjects were able to adapt to prism displacement with a learning rate that was statistically indistinguishable from that without temporal adaptation. This result provides further evidence for the dissociation between conscious temporal perception and motor adaptation.
NASA Astrophysics Data System (ADS)
Baukal, Charles Edward, Jr.
A literature search revealed very little information on how to teach working engineers, which became the motivation for this research. Effective training is important for many reasons such as preventing accidents, maximizing fuel efficiency, minimizing pollution emissions, and reducing equipment downtime. The conceptual framework for this study included the development of a new instructional design framework called the Multimedia Cone of Abstraction (MCoA), developed by combining Dale's Cone of Experience and Mayer's Cognitive Theory of Multimedia Learning. An anonymous survey of 118 engineers from a single Midwestern manufacturer was conducted to determine their demographics, learning strategy preferences, verbal-visual cognitive styles, and multimedia preferences. The learning strategy preference profile and verbal-visual cognitive styles of the sample were statistically significantly different from those of the general population: the working engineers included more Problem Solvers and were much more visually oriented. To study multimedia preferences, five of the seven levels in the MCoA were used. Eight types of multimedia were compared in four categories (types in parentheses): text (text and narration), static graphics (drawing and photograph), non-interactive dynamic graphics (animation and video), and interactive dynamic graphics (simulated virtual reality and real virtual reality). The first phase of the study examined multimedia preferences within a category. Participants compared multimedia types in pairs on dual screens using relative preference, rating, and ranking. Surprisingly, the more abstract multimedia (text, drawing, animation, and simulated virtual reality) were preferred in every category to the more concrete multimedia (narration, photograph, video, and real virtual reality), despite the fact that most participants had relatively little prior subject knowledge.
However, the more abstract graphics were only slightly preferred to the more concrete graphics. In the second phase, the more preferred multimedia types in each category from the first phase were compared against each other using relative preference, rating, and ranking and overall rating and ranking. Drawing was the most preferred multimedia type overall, although only slightly more than animation and simulated virtual reality. Text was a distant fourth. These results suggest that instructional content for continuing engineering education should include problem solving and should be highly visual.
NASA Astrophysics Data System (ADS)
Godbole, Saurabh
Traditionally, textual tools have been utilized to teach basic programming languages and paradigms. Research has shown that students tend to be visual learners. Using flowcharts, students can quickly understand the logic of their programs and visualize the flow of commands in the algorithm. Moreover, applying programming to physical systems through the use of a microcontroller to facilitate this type of learning can spark an interest in students to advance their programming knowledge to create novel applications. This study examined if freshmen college students' attitudes towards programming changed after completing a graphical programming lesson. Various attributes about students' attitudes were examined including confidence, interest, stereotypes, and their belief in the usefulness of acquiring programming skills. The study found that there were no statistically significant differences in attitudes either immediately following the session or after a period of four weeks.
Understanding How to Build Long-Lived Learning Collaborators
2016-03-16
discrimination in learning, and dynamic encoding strategies to improve visual encoding for learning via analogical generalization. We showed that spatial concepts can be learned via analogical generalization, using a 20,000-sketch corpus to examine the tradeoffs involved in visual representation and analogical generalization.
ERIC Educational Resources Information Center
Laakso, Mikko-Jussi; Myller, Niko; Korhonen, Ari
2009-01-01
In this paper, two emerging learning and teaching methods have been studied: collaboration in concert with algorithm visualization. When visualizations have been employed in collaborative learning, collaboration introduces new challenges for the visualization tools. In addition, new theories are needed to guide the development and research of the…
Geometry of the perceptual space
NASA Astrophysics Data System (ADS)
Assadi, Amir H.; Palmer, Stephen; Eghbalnia, Hamid; Carew, John
1999-09-01
The concept of space and geometry varies across subjects. Following Poincaré, we consider the construction of the perceptual space as a continuum equipped with a notion of magnitude. The study of the relationships of objects in the perceptual space gives rise to what we may call perceptual geometry. Computational modeling of objects and investigation of their deeper perceptual geometrical properties (beyond qualitative arguments) require a mathematical representation of the perceptual space. Within the realm of such a mathematical/computational representation, visual perception can be studied as in the well-understood logic-based geometry. This, however, does not mean that one could reduce all problems of visual perception to their geometric counterparts. Rather, visual perception, as reported by a human observer, has a subjective factor that could be analytically quantified only through statistical reasoning and in the course of repetitive experiments. Thus, the desire to experimentally verify the statements in perceptual geometry leads to an additional probabilistic structure imposed on the perceptual space, whose amplitudes are measured through intervention by human observers. We propose a model for the perceptual space and the case of perception of textured surfaces as a starting point for object recognition. To rigorously present these ideas and propose computational simulations for testing the theory, we present the model of the perceptual geometry of surfaces through an amplification of the theory of Riemannian foliations in differential topology, augmented by statistical learning theory. When we refer to the perceptual geometry of a human observer, the theory takes into account the Bayesian formulation of the prior state of the knowledge of the observer and Hebbian learning. We use a Parallel Distributed Connectionist paradigm for computational modeling and experimental verification of our theory.
Neural correlates of context-dependent feature conjunction learning in visual search tasks.
Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U
2016-06-01
Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.
Dissociation of visual associative and motor learning in Drosophila at the flight simulator.
Wang, Shunpeng; Li, Yan; Feng, Chunhua; Guo, Aike
2003-08-29
Ever since operant conditioning was first studied experimentally, the relationship between associative learning and possible motor learning has been controversial. Although motor learning and its underlying neural substrates have been extensively studied in mammals, it is still poorly understood in invertebrates. The visual discriminative avoidance paradigm of Drosophila at the flight simulator has been widely used to study the flies' visual associative learning and related functions, but it has not been used to study the motor learning process. In this study, a newly designed data analysis was employed to examine the sole behavioural variable recorded at the flight simulator: yaw torque. The analysis explored the torque distributions of both wild-type and mutant flies during conditioning, with the following results: (1) Wild-type Canton-S flies showed motor learning during conditioning, as evidenced by modifications of their behavioural mode. (2) Repetition of training improved the motor learning performance of wild-type Canton-S flies. (3) Although mutant dunce(1) flies were defective in visual associative learning, they showed essentially normal motor learning performance in terms of yaw torque distribution during conditioning. Finally, we tentatively propose that both visual associative learning and motor learning are involved in the visual operant conditioning of Drosophila at the flight simulator, that the two learning forms can be dissociated, and that they might have different neural bases.
Robots Learn to Recognize Individuals from Imitative Encounters with People and Avatars
NASA Astrophysics Data System (ADS)
Boucenna, Sofiane; Cohen, David; Meltzoff, Andrew N.; Gaussier, Philippe; Chetouani, Mohamed
2016-02-01
Prior to language, human infants are prolific imitators. Developmental science grounds infant imitation in the neural coding of actions, and highlights the use of imitation for learning from and about people. Here, we used computational modeling and a robot implementation to explore the functional value of action imitation. We report 3 experiments using a mutual imitation task between robots, adults, typically developing children, and children with Autism Spectrum Disorder. We show that a particular learning architecture - specifically one combining artificial neural nets for (i) extraction of visual features, (ii) the robot’s motor internal state, (iii) posture recognition, and (iv) novelty detection - is able to learn from an interactive experience involving mutual imitation. This mutual imitation experience allowed the robot to recognize the interactive agent in a subsequent encounter. These experiments using robots as tools for modeling human cognitive development, based on developmental theory, confirm the promise of developmental robotics. Additionally, findings illustrate how person recognition may emerge through imitative experience, intercorporeal mapping, and statistical learning.
Lee, Ying Li; Chien, Tsai Feng; Kuo, Ming Chuan; Chang, Polun
2014-01-01
This study aims to understand the relationship between participating nurses' motivation, achievement, and satisfaction before and after they learned to program in Excel Visual Basic for Applications (Excel VBA). We held a workshop to train nurses in developing simple Excel VBA information systems to support their clinical or administrative practices. Before and after the workshop, the participants were evaluated on their knowledge of Excel VBA, and a questionnaire was given to survey their learning motivation and satisfaction. The statistical software packages Winsteps and SPSS were used for data analysis. Results show that the participants were more knowledgeable about VBA as well as more motivated to learn VBA after the workshop. Participants were highly satisfied with the overall arrangement of the workshop and the instructors, but did not have enough confidence to promote the application of Excel VBA themselves. In addition, we were unable to predict the participants' achievement from their demographic characteristics or pre-test motivation level.
Studying Visual Displays: How to Instructionally Support Learning
ERIC Educational Resources Information Center
Renkl, Alexander; Scheiter, Katharina
2017-01-01
Visual displays are very frequently used in learning materials. Although visual displays have great potential to foster learning, they also pose substantial demands on learners so that the actual learning outcomes are often disappointing. In this article, we pursue three main goals. First, we identify the main difficulties that learners have when…
Tsao, Wei-Shan; Hsieh, Hsi-Pao; Chuang, Yi-Ting; Sheu, Min-Muh
2017-05-01
Students with cognitive impairment are at increased risk of suffering from visual impairment due to refractive errors and ocular disease, which can adversely influence learning and daily activities. The purpose of this study was to evaluate the ocular and visual status among students at the special education school in Hualien. All students at the National Hualien Special Education School were evaluated. Full eye examinations were conducted by a skilled ophthalmologist. The students' medical records and disability types were reviewed. A total of 241 students, aged 7-18 years, were examined. Visual acuity could be assessed in 138 students. A total of 169/477 (35.4%) eyes were found to suffer from refractive errors, including 20 eyes with high myopia (≤-6.0 D) and 16 eyes with moderate hypermetropia (+3.0 D to +5.0 D). A total of 84/241 (34.8%) students needed spectacles to correct their vision, thus improving their daily activities and learning process, but only 15/241 (6.2%) students were wearing suitable corrective spectacles. A total of 55/241 students (22.8%) had ocular disorders, which influenced their visual function. The multiple disability group had a statistically significant higher prevalence of ocular disorders (32.9%) than the simple intellectual disability group (19.6%). Students with cognitive impairment in eastern Taiwan have a high risk of visual impairment due to refractive errors and ocular disorders. Importantly, many students have unrecognized correctable refractive errors. Regular ophthalmic examination should be administered to address this issue and prevent further disability in this already handicapped group. Copyright © 2016. Published by Elsevier B.V.
Self-paced model learning for robust visual tracking
NASA Astrophysics Data System (ADS)
Huang, Wenhui; Gu, Jason; Ma, Xin; Li, Yibin
2017-01-01
In visual tracking, learning a robust and efficient appearance model is a challenging task. Model learning determines both the strategy and the frequency of model updating, which involves many details that can affect the tracking results. Self-paced learning (SPL) has recently attracted considerable interest in the fields of machine learning and computer vision. SPL is inspired by the learning principle underlying the cognitive process of humans, which generally proceeds from easier samples to more complex aspects of a task. We propose a tracking method that integrates the learning paradigm of SPL into visual tracking, so that reliable samples can be automatically selected for model learning. In contrast to many existing model learning strategies in visual tracking, we identify the missing link between sample selection and model learning, which are combined into a single objective function in our approach. Sample weights and model parameters can be learned by minimizing this single objective function. Additionally, to solve for the real-valued learning weights of samples, an error-tolerant self-paced function that considers the characteristics of visual tracking is proposed. We demonstrate the robustness and efficiency of our tracker on a recent tracking benchmark data set with 50 video sequences.
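The single-objective scheme this abstract describes, in which sample weights and model parameters are learned by minimizing one joint objective, follows the standard self-paced learning formulation. A minimal sketch for a linear model (generic SPL with binary weights, not the tracker itself; the regression setting and growth schedule are assumptions):

```python
import numpy as np

def self_paced_fit(X, y, n_rounds=5, lam=1.0, growth=1.5):
    """Alternate between (1) selecting 'easy' samples whose loss falls
    below the age parameter lam and (2) refitting the model on them.
    lam grows each round, so harder samples are admitted gradually."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_rounds):
        loss = (X @ w - y) ** 2             # per-sample squared error
        v = (loss < lam).astype(float)      # binary self-paced weights
        if v.sum() == 0:                    # nothing selected yet: take all
            v[:] = 1.0
        # weighted least squares on the currently selected samples
        Xv = X * v[:, None]
        w = np.linalg.lstsq(Xv.T @ X + 1e-8 * np.eye(d),
                            Xv.T @ y, rcond=None)[0]
        lam *= growth                       # "age" the learner
    return w
```

Because hard (e.g., outlier) samples stay above lam in early rounds, the model is shaped first by reliable samples, which is the property the tracker exploits.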
Jieyi Li; Arandjelovic, Ognjen
2017-07-01
Computer science, and machine learning in particular, are increasingly lauded for their potential to aid medical practice. However, the highly technical nature of state-of-the-art techniques can be a major obstacle to their usability by health care professionals and thus to their adoption and actual practical benefit. In this paper we describe a software tool which focuses on the visualization of predictions made by a recently developed method that leverages large-scale electronic records for making diagnostic predictions. Guided by risk predictions, our tool allows the user to explore different diagnostic trajectories interactively, or to display cumulative long-term prognostics, in an intuitive and easily interpretable manner.
An Interactive Approach to Learning and Teaching in Visual Arts Education
ERIC Educational Resources Information Center
Tomljenovic, Zlata
2015-01-01
The present research focuses on modernising the approach to learning and teaching the visual arts in teaching practice, as well as examining the performance of an interactive approach to learning and teaching in visual arts classes with the use of a combination of general and specific (visual arts) teaching methods. The study uses quantitative…
Differential learning and memory performance in OEF/OIF veterans for verbal and visual material.
Sozda, Christopher N; Muir, James J; Springer, Utaka S; Partovi, Diana; Cole, Michael A
2014-05-01
Memory complaints are particularly salient among veterans who experience combat-related mild traumatic brain injuries and/or trauma exposure, and represent a primary barrier to successful societal reintegration and everyday functioning. Anecdotally, within clinical practice, verbal learning and memory performance frequently appears reduced relative to visual learning and memory scores. We sought to empirically investigate the robustness of a verbal versus visual learning and memory discrepancy and to explore potential mechanisms for a verbal/visual performance split. Participants consisted of 103 veterans with a reported history of mild traumatic brain injuries returning home from U.S. military Operations Enduring Freedom and Iraqi Freedom, referred for outpatient neuropsychological evaluation. Findings indicate that visual learning and memory abilities were largely intact while verbal learning and memory performance was significantly reduced in comparison, residing at approximately 1.1 SD below the mean for verbal learning and approximately 1.4 SD below the mean for verbal memory. This difference was not observed in verbal versus visual fluency performance, nor was it associated with estimated premorbid verbal abilities or traumatic brain injury history. In our sample, symptoms of depression, but not posttraumatic stress disorder, were significantly associated with reduced composite verbal learning and memory performance. Verbal learning and memory performance may benefit from targeted treatment of depressive symptomatology. Also, because visual learning and memory functions may remain intact, these might be emphasized when applying neurocognitive rehabilitation interventions to compensate for observed verbal learning and memory difficulties.
Age-related declines of stability in visual perceptual learning.
Chang, Li-Hung; Shibata, Kazuhisa; Andersen, George J; Sasaki, Yuka; Watanabe, Takeo
2014-12-15
One of the biggest questions in learning is how a system can resolve the plasticity and stability dilemma. Specifically, the learning system needs to have not only a high capability of learning new items (plasticity) but also a high stability to retain important items or processing in the system by preventing unimportant or irrelevant information from being learned. This dilemma should hold true for visual perceptual learning (VPL), which is defined as a long-term increase in performance on a visual task as a result of visual experience. Although it is well known that aging influences learning, the effect of aging on the stability and plasticity of the visual system is unclear. To address the question, we asked older and younger adults to perform a task while a task-irrelevant feature was merely exposed. We found that older individuals learned the task-irrelevant features that younger individuals did not learn, both the features that were sufficiently strong for younger individuals to suppress and the features that were too weak for younger individuals to learn. At the same time, there was no plasticity reduction in older individuals within the task tested. These results suggest that the older visual system is less stable to unimportant information than the younger visual system. A learning problem with older individuals may be due to a decrease in stability rather than a decrease in plasticity, at least in VPL. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Elvi, M.; Nurjanah
2017-02-01
This research addresses the issue that visual thinking ability, a basic ability students must have when learning geometry, is often lacking. The purpose of this research is to investigate and elucidate: 1) the enhancement of the visual thinking ability of students who received learning assisted by a GeoGebra tutorial; 2) the differences in visual thinking ability gains between students who received GeoGebra-assisted learning and students who received regular instruction, across levels of prior mathematical ability (KAM: high, medium, and low). The research population was grade VII students at a junior high school in Bandung. The instruments used to collect data in this study consisted of a test and an observation sheet. The data obtained were analyzed using tests of mean differences, namely the t-test and one-way and two-way ANOVA. The results showed that: 1) the attainment and enhancement of visual thinking ability were better for students who received GeoGebra tutorial-assisted learning than for students who received regular learning; 2) there were differences in visual thinking gains between students who received the GeoGebra tutorial-assisted learning model and those who received regular learning across the KAM levels (high, medium, and low).
Sensitivity to structure in action sequences: An infant event-related potential study.
Monroy, Claire D; Gerson, Sarah A; Domínguez-Martínez, Estefanía; Kaduk, Katharina; Hunnius, Sabine; Reid, Vincent
2017-05-06
Infants are sensitive to structure and patterns within continuous streams of sensory input. This sensitivity relies on statistical learning, the ability to detect predictable regularities in spatial and temporal sequences. Recent evidence has shown that infants can detect statistical regularities in action sequences they observe, but little is known about the neural processes that give rise to this ability. In the current experiment, we combined electroencephalography (EEG) with eye-tracking to identify electrophysiological markers that indicate whether 8-11-month-old infants detect violations of learned regularities in action sequences, and to relate these markers to behavioral measures of anticipation during learning. In a learning phase, infants observed an actor performing a sequence featuring two deterministic pairs embedded within an otherwise random sequence. Thus, the first action of each pair was predictive of what would occur next. One of the pairs caused an action-effect, whereas the second did not. In a subsequent test phase, infants observed another sequence that included deviant pairs, violating the previously observed action pairs. Event-related potential (ERP) responses were analyzed and compared between the deviant and the original action pairs. Findings reveal that infants demonstrated a greater Negative central (Nc) ERP response to the deviant actions for the pair that caused the action-effect, which was consistent with their visual anticipations during the learning phase. Findings are discussed in terms of the neural and behavioral processes underlying perception and learning of structured action sequences. Copyright © 2017 Elsevier Ltd. All rights reserved.
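The regularity the infants observed, the first action of a deterministic pair predicting the next, amounts to an elevated transition probability within the sequence. A minimal sketch of how such probabilities are estimated from a token stream (illustrative only; the tokens are assumptions, not the study's stimuli):

```python
from collections import Counter

def transition_probabilities(sequence):
    """Estimate P(next | current) from an observed token sequence."""
    pair_counts = Counter(zip(sequence, sequence[1:]))   # adjacent pairs
    first_counts = Counter(sequence[:-1])                # pair-initial tokens
    return {(a, b): c / first_counts[a]
            for (a, b), c in pair_counts.items()}
```

A deterministic pair such as the study's action pairs shows up as a transition probability of 1.0, against a lower baseline for the random filler transitions.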
HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.
Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye
2017-02-09
In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it provides an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to new training images and new object classes. Our experimental results demonstrate that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.
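The visual-tree step described above, assigning visually similar classes with similar learning complexities to the same group, can be approximated by clustering class-mean feature vectors. A small sketch under that assumption (generic agglomerative clustering, not the paper's actual procedure; the function and its interface are hypothetical):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def build_visual_tree(features, labels, n_groups):
    """Group classes whose mean deep-feature vectors are similar.
    features: (n_samples, d) array of deep CNN features (assumed given)
    labels:   (n_samples,) integer class ids
    Returns a dict mapping class id -> group id."""
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    Z = linkage(means, method="ward")      # agglomerative clustering of classes
    groups = fcluster(Z, t=n_groups, criterion="maxclust")
    return dict(zip(classes.tolist(), groups.tolist()))
```

Each resulting group then defines one node-level task, so the group-specific classifiers only need to separate classes that are actually confusable.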
ERIC Educational Resources Information Center
Hendrickson, Homer
1988-01-01
Spelling problems arise due to problems with form discrimination and inadequate visualization. A child's sequence of visual development involves learning motor control and coordination, with vision directing and monitoring the movements; learning visual comparison of size, shape, directionality, and solidity; developing visual memory or recall;…
iSee: Teaching Visual Learning in an Organic Virtual Learning Environment
ERIC Educational Resources Information Center
Han, Hsiao-Cheng
2017-01-01
This paper presents a three-year participatory action research project focusing on the graduate level course entitled Visual Learning in 3D Animated Virtual Worlds. The purpose of this research was to understand "How the virtual world processes of observing and creating can best help students learn visual theories". The first cycle of…
[Associative Learning between Orientation and Color in Early Visual Areas].
Amano, Kaoru; Shibata, Kazuhisa; Kawato, Mitsuo; Sasaki, Yuka; Watanabe, Takeo
2017-08-01
Associative learning is an essential neural phenomenon in which the contingency between different items increases after training. Although associative learning has been found to occur in many brain regions, there is no clear evidence that associative learning of visual features occurs in early visual areas. Here, we developed associative decoded functional magnetic resonance imaging (fMRI) neurofeedback (A-DecNef) to determine whether associative learning of color and orientation can be induced in early visual areas. During three days of training, A-DecNef induced fMRI signal patterns that corresponded to a specific target color (red), mostly in early visual areas, while a vertical achromatic grating was simultaneously, physically presented to participants. Consequently, participants perceived "red" significantly more frequently than "green" in an achromatic vertical grating. This effect was also observed 3 to 5 months after training. These results suggest that long-term associative learning of two different visual features, such as color and orientation, was most likely induced in early visual areas. This newly extended technique for inducing associative learning may serve as an important tool for understanding and modifying brain function, since associations are fundamental and ubiquitous with respect to brain function.
King, Andy J; Jensen, Jakob D; Davis, LaShara A; Carcioppolo, Nick
2014-01-01
There is a paucity of research on the visual images used in health communication messages and campaign materials. Even though many studies suggest further investigation of these visual messages and their features, few studies provide specific constructs or assessment tools for evaluating the characteristics of visual messages in health communication contexts. The authors conducted 2 studies to validate a measure of perceived visual informativeness (PVI), a message construct assessing visual messages presenting statistical or indexical information. In Study 1, a 7-item scale was created that demonstrated good internal reliability (α = .91), as well as convergent and divergent validity with related message constructs such as perceived message quality, perceived informativeness, and perceived attractiveness. PVI also converged with a preference for visual learning but was unrelated to a person's actual vision ability. In addition, PVI exhibited concurrent validity with a number of important constructs including perceived message effectiveness, decisional satisfaction, and three key public health theory behavior predictors: perceived benefits, perceived barriers, and self-efficacy. Study 2 provided more evidence that PVI is an internally reliable measure and demonstrates that PVI is a modifiable message feature that can be tested in future experimental work. PVI provides an initial step to assist in the evaluation and testing of visual messages in campaign and intervention materials promoting informed decision making and behavior change.
Perceptual learning in visual search: fast, enduring, but non-specific.
Sireteanu, R; Rettenbach, R
1995-07-01
Visual search has been suggested as a tool for isolating visual primitives. Elementary "features" were proposed to involve parallel search, while serial search is necessary for items without a "feature" status, or, in some cases, for conjunctions of "features". In this study, we investigated the role of practice in visual search tasks. We found that, under some circumstances, initially serial tasks can become parallel after a few hundred trials. Learning in visual search is far less specific than learning of visual discriminations and hyperacuity, suggesting that it takes place at another level in the central visual pathway, involving different neural circuits.
Bernard, Jean-Baptiste; Arunkumar, Amit; Chung, Susana T L
2012-08-01
In a previous study, Chung, Legge, and Cheung (2004) showed that training using repeated presentation of trigrams (sequences of three random letters) resulted in an increase in the size of the visual span (number of letters recognized in a glance) and reading speed in the normal periphery. In this study, we asked whether we could optimize the benefit of trigram training on reading speed by using trigrams more specific to the reading task (i.e., trigrams frequently used in the English language) and presenting them according to their frequencies of occurrence in normal English usage and observers' performance. Averaged across seven observers, our training paradigm (4 days of training) increased the size of the visual span by 6.44 bits, with an accompanying 63.6% increase in the maximum reading speed, compared with the values before training. However, these benefits were not statistically different from those of Chung, Legge, and Cheung (2004) using a random-trigram training paradigm. Our findings confirm the possibility of increasing the size of the visual span and reading speed in the normal periphery with perceptual learning, and suggest that the benefits of training on letter recognition and maximum reading speed may not be linked to the types of letter strings presented during training. Copyright © 2012 Elsevier Ltd. All rights reserved.
Motor-visual neurons and action recognition in social interactions.
de la Rosa, Stephan; Bülthoff, Heinrich H
2014-04-01
Cook et al. suggest that motor-visual neurons originate from associative learning. This suggestion has interesting implications for the processing of socially relevant visual information in social interactions. Here, we discuss two aspects of the associative learning account that seem to have particular relevance for visual recognition of social information in social interactions - namely, context-specific and contingency-based learning.
Implicit visual learning and the expression of learning.
Haider, Hilde; Eberhardt, Katharina; Kunde, Alexander; Rose, Michael
2013-03-01
Although the existence of implicit motor learning is now widely accepted, the findings concerning perceptual implicit learning are ambiguous. Some researchers have observed perceptual learning whereas other authors have not. The review of the literature provides different reasons to explain this ambiguous picture, such as differences in the underlying learning processes, selective attention, or differences in the difficulty of expressing this knowledge. In three experiments, we investigated implicit visual learning within the original serial reaction time task. We used different response devices (keyboard vs. mouse) in order to manipulate selective attention towards response dimensions. Results showed that visual and motor sequence learning differed in terms of RT-benefits, but not in terms of the amount of knowledge assessed after training. Furthermore, visual sequence learning was modulated by selective attention. However, the findings of all three experiments suggest that selective attention did not alter implicit but rather explicit learning processes. Copyright © 2012 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Obrecht, Dean H.
This report contrasts the results of a rigidly specified, pattern-oriented approach to learning Spanish with an approach that emphasizes the origination of sentences by the learner in direct response to stimuli. Pretesting and posttesting statistics are presented and conclusions are discussed. The experimental method, which required the student to…
Plastic Bags and Environmental Pollution
ERIC Educational Resources Information Center
Sang, Anita Ng Heung
2010-01-01
The "Hong Kong Visual Arts Curriculum Guide," covering Primary 1 to Secondary 3 grades (Curriculum Development Committee, 2003), points to three domains of learning in visual arts: (1) visual arts knowledge; (2) visual arts appreciation and criticism; and (3) visual arts making. The "Guide" suggests learning should develop…
Differential Effects of Music and Video Gaming During Breaks on Auditory and Visual Learning.
Liu, Shuyan; Kuschpel, Maxim S; Schad, Daniel J; Heinz, Andreas; Rapp, Michael A
2015-11-01
The interruption of learning processes by breaks filled with diverse activities is common in everyday life. This study investigated the effects of active computer gaming and passive relaxation (rest and music) breaks on auditory versus visual memory performance. Young adults were exposed to breaks involving (a) open eyes resting, (b) listening to music, and (c) playing a video game, immediately after memorizing auditory versus visual stimuli. To assess learning performance, words were recalled directly after the break (an 8:30 minute delay) and were recalled and recognized again after 7 days. Based on linear mixed-effects modeling, it was found that playing the Angry Birds video game during a short learning break impaired long-term retrieval in auditory learning but enhanced long-term retrieval in visual learning compared with the music and rest conditions. These differential effects of video games on visual versus auditory learning suggest specific interference of common break activities on learning.
Time course influences transfer of visual perceptual learning across spatial location.
Larcombe, S J; Kennard, C; Bridge, H
2017-06-01
Visual perceptual learning describes the improvement of visual perception with repeated practice. Previous research has established that the learning effects of perceptual training may be transferable to untrained stimulus attributes such as spatial location under certain circumstances. However, the mechanisms involved in transfer have not yet been fully elucidated. Here, we investigated the effect of altering training time course on the transferability of learning effects. Participants were trained on a motion direction discrimination task or a sinusoidal grating orientation discrimination task in a single visual hemifield. The 4000 training trials were either condensed into one day, or spread evenly across five training days. When participants were trained over a five-day period, there was transfer of learning to both the untrained visual hemifield and the untrained task. In contrast, when the same amount of training was condensed into a single day, participants did not show any transfer of learning. Thus, learning time course may influence the transferability of perceptual learning effects. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Aleman, Cheryl; And Others
1990-01-01
Compares auditory/visual practice to visual/motor practice in spelling with seven elementary school learning-disabled students enrolled in a resource room setting. Finds that the auditory/visual practice was superior to the visual/motor practice on the weekly spelling performance for all seven students. (MG)
Magnetic stimulation of visual cortex impairs perceptual learning.
Baldassarre, Antonello; Capotosto, Paolo; Committeri, Giorgia; Corbetta, Maurizio
2016-12-01
The ability to learn and process visual stimuli more efficiently is important for survival. Previous neuroimaging studies have shown that perceptual learning on a shape identification task differentially modulates activity in both frontal-parietal cortical regions and visual cortex (Sigman et al., 2005; Lewis et al., 2009). Specifically, fronto-parietal regions (i.e., the intraparietal sulcus, pIPS) became less activated for trained as compared to untrained stimuli, while visual regions (i.e., V2d/V3 and LO) exhibited higher activation for familiar shapes. Here, after intensive training, we applied transcranial magnetic stimulation over the visual occipital and parietal regions previously shown to be modulated, to investigate their causal role in learning the shape identification task. We report that interference with V2d/V3 and LO increased reaction times to learned stimuli as compared to pIPS and a sham control condition. Moreover, the impairments observed after stimulation over the two visual regions were positively correlated. These results strongly support a causal role of the visual network in the control of perceptual learning. Copyright © 2016 Elsevier Inc. All rights reserved.
Visual Associative Learning in Restrained Honey Bees with Intact Antennae
Dobrin, Scott E.; Fahrbach, Susan E.
2012-01-01
A restrained honey bee can be trained to extend its proboscis in response to the pairing of an odor with a sucrose reward, a form of olfactory associative learning referred to as the proboscis extension response (PER). Although the ability of flying honey bees to respond to visual cues is well-established, associative visual learning in restrained honey bees has been challenging to demonstrate. Those few groups that have documented vision-based PER have reported that removing the antennae prior to training is a prerequisite for learning. Here we report, for a simple visual learning task, the first successful performance by restrained honey bees with intact antennae. Honey bee foragers were trained on a differential visual association task by pairing the presentation of a blue light with a sucrose reward and leaving the presentation of a green light unrewarded. A negative correlation was found between age of foragers and their performance in the visual PER task. Using the adaptations to the traditional PER task outlined here, future studies can exploit pharmacological and physiological techniques to explore the neural circuit basis of visual learning in the honey bee. PMID:22701575
Badami, Rokhsareh; Mahmoudi, Sahar; Baluch, Bahman
2016-12-01
This study aimed to identify, for the first time, the influence of sports vision exercises on the fundamental motor skills and cognitive skills of 7- to 10-year-old developmental dyslexic Persian children. A pretest-posttest quasi-experimental study was conducted. The statistical population of this study was 7- to 10-year-old dyslexic children referred to two learning-disorder centres in the city of Isfahan. Twenty-two of these children were selected from the statistical population using available and purposive sampling and were randomly assigned to two groups, experimental and control. The former (experimental group) participated in sports vision exercise courses for 12 weeks (three 1-hr sessions per week), while the latter (control group) continued their routine daily activities during this period. Before the beginning and at the end of the exercise programme, Gardner's test of visual perception (revised) and Dehkhoda's reading skills test were administered to both groups. The results showed that the sports vision exercises increased motor skills, visual perceptual skills and reading skills in developmental dyslexic children. Based on these results it was concluded that sports vision exercises can be used to develop fundamental and cognitive skills in developmental dyslexic children.
Statistical learning as an individual ability: Theoretical perspectives and empirical evidence
Siegelman, Noam; Frost, Ram
2015-01-01
Although the power of statistical learning (SL) in explaining a wide range of linguistic functions is gaining increasing support, relatively little research has focused on this theoretical construct from the perspective of individual differences. However, to be able to reliably link individual differences in a given ability such as language learning to individual differences in SL, three critical theoretical questions should be posed: Is SL a componential or unified ability? Is it nested within other general cognitive abilities? Is it a stable capacity of an individual? Following an initial mapping sentence outlining the possible dimensions of SL, we employed a battery of SL tasks in the visual and auditory modalities, using verbal and non-verbal stimuli, with adjacent and non-adjacent contingencies. SL tasks were administered along with general cognitive tasks in a within-subject design at two time points to explore our theoretical questions. We found that SL, as measured by some tasks, is a stable and reliable capacity of an individual. Moreover, we found SL to be independent of general cognitive abilities such as intelligence or working memory. However, SL is not a unified capacity, so that individual sensitivity to conditional probabilities is not uniform across modalities and stimuli. PMID:25821343
Cross-Sensory Transfer of Reference Frames in Spatial Memory
ERIC Educational Resources Information Center
Kelly, Jonathan W.; Avraamides, Marios N.
2011-01-01
Two experiments investigated whether visual cues influence spatial reference frame selection for locations learned through touch. Participants experienced visual cues emphasizing specific environmental axes and later learned objects through touch. Visual cues were manipulated and haptic learning conditions were held constant. Imagined perspective…
Learning style, judgements of learning, and learning of verbal and visual information.
Knoll, Abby R; Otani, Hajime; Skeel, Reid L; Van Horn, K Roger
2017-08-01
The concept of learning style is immensely popular despite the lack of evidence showing that learning style influences performance. This study tested the hypothesis that the popularity of learning style is maintained because it is associated with subjective aspects of learning, such as judgements of learning (JOLs). Preference for verbal and visual information was assessed using the revised Verbalizer-Visualizer Questionnaire (VVQ). Then, participants studied a list of word pairs and a list of picture pairs, making JOLs (immediate, delayed, and global) while studying each list. Learning was tested by cued recall. The results showed that higher VVQ verbalizer scores were associated with higher immediate JOLs for words, and higher VVQ visualizer scores were associated with higher immediate JOLs for pictures. There was no association between VVQ scores and recall or JOL accuracy. As predicted, learning style was associated with subjective aspects of learning but not objective aspects of learning. © 2016 The British Psychological Society.
ERIC Educational Resources Information Center
Soemer, Alexander; Schwan, Stephan
2016-01-01
In a series of experiments, we tested a recently proposed hypothesis stating that the degree of alignment between the form of a mental representation resulting from learning with a particular visualization format and the specific requirements of a learning task determines learning performance (task-appropriateness). Groups of participants were…
Eye movements reveal epistemic curiosity in human observers.
Baranes, Adrien; Oudeyer, Pierre-Yves; Gottlieb, Jacqueline
2015-12-01
Saccadic (rapid) eye movements are a primary means by which humans and non-human primates sample visual information. However, while saccadic decisions are intensively investigated in instrumental contexts where saccades guide subsequent actions, it is largely unknown how they may be influenced by curiosity - the intrinsic desire to learn. While saccades are sensitive to visual novelty and visual surprise, no study has examined their relation to epistemic curiosity - interest in symbolic, semantic information. To investigate this question, we tracked the eye movements of human observers while they read trivia questions and, after a brief delay, were visually given the answer. We show that higher curiosity was associated with earlier anticipatory orienting of gaze toward the answer location without changes in other metrics of saccades or fixations, and that these influences were distinct from those produced by variations in confidence and surprise. Across subjects, the enhancement of anticipatory gaze was correlated with measures of trait curiosity from personality questionnaires. Finally, a machine learning algorithm could predict curiosity in a cross-subject manner, relying primarily on statistical features of the gaze position before the answer onset and independently of covariations in confidence or surprise, suggesting potential practical applications for educational technologies, recommender systems and research in cognitive sciences. With this article, we provide full access to the annotated database allowing readers to reproduce the results. Epistemic curiosity produces specific effects on oculomotor anticipation that can be used to read out curiosity states. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Development of Visualization of Learning Outcomes Using Curriculum Mapping
ERIC Educational Resources Information Center
Ikuta, Takashi; Gotoh, Yasushi
2012-01-01
Niigata University has started to develop the Niigata University Bachelor Assessment System (NBAS). The objective is to have groups of teachers belonging to educational programs discuss whether visualized learning outcomes are comprehensible. Discussions based on teachers' subjective judgments showed in general that visualized learning outcomes…
Teaching for Different Learning Styles.
ERIC Educational Resources Information Center
Cropper, Carolyn
1994-01-01
This study examined learning styles in 137 high ability fourth-grade students. All students were administered two learning styles inventories. Characteristics of students with the following learning styles are summarized: auditory language, visual language, auditory numerical, visual numerical, tactile concrete, individual learning, group…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Songhua; Tourassi, Georgia
2012-01-01
The majority of clinical content-based image retrieval (CBIR) studies disregard human perception subjectivity, aiming to duplicate the consensus expert assessment of the visual similarity on example cases. The purpose of our study is twofold: (i) discern better the extent of human perception subjectivity when assessing the visual similarity of two images with similar semantic content, and (ii) explore the feasibility of personalized predictive modeling of visual similarity. We conducted a human observer study in which five observers of various expertise were shown ninety-nine triplets of mammographic masses with similar BI-RADS descriptors and were asked to select the two masses with the highest visual relevance. Pairwise agreement ranged between poor and fair among the five observers, as assessed by the kappa statistic. The observers' self-consistency rate was remarkably low, based on repeated questions where either the orientation or the presentation order of a mass was changed. Various machine learning algorithms were explored to determine whether they can predict each observer's personalized selection using textural features. Many algorithms performed with accuracy that exceeded each observer's self-consistency rate, as determined using a cross-validation scheme. This accuracy was statistically significantly higher than would be expected by chance alone (two-tailed p-value ranged between 0.001 and 0.01 for all five personalized models). The study confirmed that human perception subjectivity should be taken into account when developing CBIR-based medical applications.
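The pairwise agreement measure used in the study above is the kappa statistic, which corrects raw agreement between two raters for the agreement expected by chance. A minimal sketch of Cohen's kappa follows; the label sequences are illustrative, not the study's data.

```python
# Cohen's kappa: chance-corrected agreement between two raters.
# kappa = (p_observed - p_expected) / (1 - p_expected)
from collections import Counter

def cohens_kappa(a, b):
    """Kappa for two raters' label sequences of equal length."""
    assert len(a) == len(b)
    n = len(a)
    # Observed proportion of items on which the raters agree
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from each rater's marginal label frequencies
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical selections by two observers across six triplets
rater1 = ["mass1", "mass2", "mass1", "mass2", "mass1", "mass1"]
rater2 = ["mass1", "mass1", "mass1", "mass2", "mass2", "mass1"]
print(round(cohens_kappa(rater1, rater2), 2))  # → 0.25, "fair" at best
```

Values near 0 indicate chance-level agreement and values near 1 near-perfect agreement, which is how "poor to fair" pairwise agreement is read off the statistic.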
Kalia, Amy A.; Legge, Gordon E.; Giudice, Nicholas A.
2009-01-01
Previous studies suggest that humans rely on geometric visual information (hallway structure) rather than non-geometric visual information (e.g., doors, signs and lighting) for acquiring cognitive maps of novel indoor layouts. This study asked whether visual impairment and age affect reliance on non-geometric visual information for layout learning. We tested three groups of participants—younger (< 50 years) normally sighted, older (50–70 years) normally sighted, and low vision (people with heterogeneous forms of visual impairment ranging in age from 18–67). Participants learned target locations in building layouts using four presentation modes: a desktop virtual environment (VE) displaying only geometric cues (Sparse VE), a VE displaying both geometric and non-geometric cues (Photorealistic VE), a Map, and a Real building. Layout knowledge was assessed by map drawing and by asking participants to walk to specified targets in the real space. Results indicate that low-vision and older normally-sighted participants relied on additional non-geometric information to accurately learn layouts. In conclusion, visual impairment and age may result in reduced perceptual and/or memory processing that makes it difficult to learn layouts without non-geometric visual information. PMID:19189732
Alais, David; Cass, John
2010-06-23
An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes.
Functionally, the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.
Parkington, Karisa B; Clements, Rebecca J; Landry, Oriane; Chouinard, Philippe A
2015-10-01
We examined how performance on an associative learning task changes in a sample of undergraduate students as a function of their autism-spectrum quotient (AQ) score. The participants, without any prior knowledge of the Japanese language, learned to associate hiragana characters with button responses. In the novel condition, 50 participants learned visual-motor associations without any prior exposure to the stimuli's visual attributes. In the familiar condition, a different set of 50 participants completed a session in which they first became familiar with the stimuli's visual appearance prior to completing the visual-motor association learning task. Participants with higher AQ scores had a clear advantage in the novel condition; the amount of training required to reach the learning criterion correlated negatively with AQ. In contrast, participants with lower AQ scores had a clear advantage in the familiar condition; the amount of training required to reach the learning criterion correlated positively with AQ. An examination of how each of the AQ subscales correlated with these learning patterns revealed that abilities in visual discrimination-which is known to depend on the visual ventral-stream system-may have afforded an advantage in the novel condition for the participants with the higher AQ scores, whereas abilities in attention switching-which are known to require mechanisms in the prefrontal cortex-may have afforded an advantage in the familiar condition for the participants with the lower AQ scores.
Learning invariance from natural images inspired by observations in the primary visual cortex.
Teichmann, Michael; Wiltschut, Jan; Hamker, Fred
2012-05-01
The human visual system has the remarkable ability to recognize objects largely invariant of their position, rotation, and scale. A good interpretation of neurobiological findings involves a computational model that simulates signal processing of the visual cortex. In part, this is likely achieved step by step from early to late areas of visual perception. While several algorithms have been proposed for learning feature detectors, only a few studies cover the issue of biologically plausible learning of such invariance. In this study, a set of Hebbian learning rules based on calcium dynamics and homeostatic regulations of single neurons is proposed. Their performance is verified within a simple model of the primary visual cortex to learn so-called complex cells, based on a sequence of static images. As a result, the learned complex-cell responses are largely invariant to phase and position.
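The combination of a Hebbian term with homeostatic regulation described in the abstract above can be sketched as follows. This is a minimal illustration, not the authors' calcium-based rule: it pairs a plain Hebbian update with a running activity trace that pushes each neuron toward a hypothetical target firing rate, plus weight normalization to keep the rule stable.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_step(W, x, trace, target_rate=0.1, lr=0.01, tau=0.99):
    """One Hebbian update with homeostatic regulation (illustrative).

    W     : (n_out, n_in) weight matrix
    x     : (n_in,) input vector, e.g. a patch from a static image
    trace : (n_out,) running average of each neuron's activity
    """
    y = np.maximum(W @ x, 0.0)               # rectified responses
    trace = tau * trace + (1 - tau) * y      # slow homeostatic activity trace
    gain = target_rate - trace               # boost quiet neurons, damp busy ones
    W = W + lr * np.outer(y + gain, x)       # Hebbian term plus homeostatic term
    # Normalization keeps weights bounded (a stand-in for intrinsic regulation)
    W = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
    return W, trace

W = rng.normal(size=(4, 16))                 # 4 model neurons, 16 inputs
trace = np.zeros(4)
for _ in range(100):                         # stream of random "image patches"
    W, trace = hebbian_step(W, rng.normal(size=16), trace)
print(W.shape)                               # weights stay finite and unit-norm
```

The homeostatic trace and the normalization are the two stabilizing ingredients; without them, a pure Hebbian rule diverges.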
ERIC Educational Resources Information Center
Rossetto, Marietta; Chiera-Macchia, Antonella
2011-01-01
This study investigated the use of comics (Cary, 2004) in a guided writing experience in secondary school Italian language learning. The main focus of the peer group interaction task included the exploration of visual sequencing and visual integration (Bailey, O'Grady-Jones, & McGown, 1995) using image and text to create a comic strip narrative in…
Visual Complexity in Orthographic Learning: Modeling Learning across Writing System Variations
ERIC Educational Resources Information Center
Chang, Li-Yun; Plaut, David C.; Perfetti, Charles A.
2016-01-01
The visual complexity of orthographies varies across writing systems. Prior research has shown that complexity strongly influences the initial stage of reading development: the perceptual learning of grapheme forms. This study presents a computational simulation that examines the degree to which visual complexity leads to grapheme learning…
English Orthographic Learning in Chinese-L1 Young EFL Beginners
ERIC Educational Resources Information Center
Cheng, Yu-Lin
2017-01-01
English orthographic learning, among Chinese-L1 children who were beginning to learn English as a foreign language, was documented when: (1) "only" visual memory was at their disposal, (2) visual memory and either "some" letter-sound knowledge or "some" semantic information was available, and (3) visual memory,…
jAMVLE, a New Integrated Molecular Visualization Learning Environment
ERIC Educational Resources Information Center
Bottomley, Steven; Chandler, David; Morgan, Eleanor; Helmerhorst, Erik
2006-01-01
A new computer-based molecular visualization tool has been developed for teaching, and learning, molecular structure. This java-based jmol Amalgamated Molecular Visualization Learning Environment (jAMVLE) is platform-independent, integrated, and interactive. It has an overall graphical user interface that is intuitive and easy to use. The…
Drawing Connections across Conceptually Related Visual Representations in Science
ERIC Educational Resources Information Center
Hansen, Janice
2013-01-01
This dissertation explored beliefs about learning from multiple related visual representations in science, and compared beliefs to learning outcomes. Three research questions were explored: 1) What beliefs do pre-service teachers, non-educators and children have about learning from visual representations? 2) What format of presenting those…
DiCarlo, James J.; Zecchina, Riccardo; Zoccolan, Davide
2013-01-01
The anterior inferotemporal cortex (IT) is the highest stage along the hierarchy of visual areas that, in primates, processes visual objects. Although several lines of evidence suggest that IT primarily represents visual shape information, some recent studies have argued that neuronal ensembles in IT code the semantic membership of visual objects (i.e., represent conceptual classes such as animate and inanimate objects). In this study, we investigated to what extent semantic, rather than purely visual, information is represented in IT by performing a multivariate analysis of IT responses to a set of visual objects. By relying on a variety of machine-learning approaches (including a cutting-edge clustering algorithm that has been recently developed in the domain of statistical physics), we found that, in most instances, IT representation of visual objects is accounted for by their similarity at the level of shape or, more surprisingly, low-level visual properties. Only in a few cases did we observe IT representations of semantic classes that were not explainable by the visual similarity of their members. Overall, these findings reassert the primary function of IT as a conveyor of explicit visual shape information, and reveal that low-level visual properties are represented in IT to a greater extent than previously appreciated. In addition, our work demonstrates how combining a variety of state-of-the-art multivariate approaches, and carefully estimating the contribution of shape similarity to the representation of object categories, can substantially advance our understanding of neuronal coding of visual objects in cortex. PMID:23950700
Decoding and disrupting left midfusiform gyrus activity during word reading
Hirshorn, Elizabeth A; Li, Yuanning; Ward, Michael J; Richardson, R Mark; Fiez, Julie A; Ghuman, Avniel Singh
2016-07-19
The nature of the visual representation for words has been fiercely debated for over 150 y. We used direct brain stimulation, pre- and postsurgical behavioral measures, and intracranial electroencephalography to provide support for, and elaborate upon, the visual word form hypothesis. This hypothesis states that activity in the left midfusiform gyrus (lmFG) reflects visually organized information about words and word parts. In patients with electrodes placed directly in their lmFG, we found that disrupting lmFG activity through stimulation, and later surgical resection in one of the patients, led to impaired perception of whole words and letters. Furthermore, using machine-learning methods to analyze the electrophysiological data from these electrodes, we found that information contained in early lmFG activity was consistent with an orthographic similarity space. Finally, the lmFG contributed to at least two distinguishable stages of word processing, an early stage that reflects gist-level visual representation sensitive to orthographic statistics, and a later stage that reflects more precise representation sufficient for the individuation of orthographic word forms. These results provide strong support for the visual word form hypothesis and demonstrate that across time the lmFG is involved in multiple stages of orthographic representation. PMID:27325763
A Classification of Remote Sensing Image Based on Improved Compound Kernels of Svm
NASA Astrophysics Data System (ADS)
Zhao, Jianing; Gao, Wanlin; Liu, Zili; Mou, Guifen; Lu, Lin; Yu, Lina
SVM, which is grounded in statistical learning theory, achieves high classification accuracy on remote sensing (RS) images even with a small number of training samples, making it well suited to RS classification. The traditional RS classification method combines visual interpretation with computer classification. The SVM-based method improves classification accuracy considerably while saving much of the labor and time otherwise spent interpreting images and collecting training samples. Kernel functions play an important part in the SVM algorithm; the method proposed here uses an improved compound kernel function and therefore achieves higher classification accuracy on RS images. Moreover, the compound kernel improves the generalization and learning ability of the kernel.
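The abstract does not give the exact form of the improved compound kernel, but compound kernels are commonly built as a convex combination of a local kernel (RBF) and a global kernel (polynomial). A minimal sketch of that idea, with illustrative weight and hyperparameters (`w`, `gamma`, `degree` are assumptions, not the paper's values):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def compound_kernel(X, Y, w=0.6, gamma=0.5, degree=2):
    """Weighted sum of an RBF kernel (local fit) and a polynomial
    kernel (global generalization); a sum of valid kernels is valid."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    rbf = np.exp(-gamma * sq)
    poly = (X @ Y.T + 1.0) ** degree
    return w * rbf + (1 - w) * poly

# Stand-in for a small RS training set: few samples, few band features
X, y = make_classification(n_samples=60, n_features=4, random_state=0)
clf = SVC(kernel=compound_kernel).fit(X, y)  # SVC accepts a callable kernel
print(clf.score(X, y))
```

A different weighting or kernel pair can be dropped into `compound_kernel` without touching the SVC call, which is what makes kernel engineering of this kind cheap to experiment with.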
Problem representation and mathematical problem solving of students of varying math ability.
Krawec, Jennifer L
2014-01-01
The purpose of this study was to examine differences in math problem solving among students with learning disabilities (LD, n = 25), low-achieving students (LA, n = 30), and average-achieving students (AA, n = 29). The primary interest was to analyze the processes students use to translate and integrate problem information while solving problems. Paraphrasing, visual representation, and problem-solving accuracy were measured in eighth grade students using a researcher-modified version of the Mathematical Processing Instrument. Results indicated that both students with LD and LA students struggled with processing but that students with LD were significantly weaker than their LA peers in paraphrasing relevant information. Paraphrasing and visual representation accuracy each accounted for a statistically significant amount of variance in problem-solving accuracy. Finally, the effect of visual representation of relevant information on problem-solving accuracy was dependent on ability; specifically, for students with LD, generating accurate visual representations was more strongly related to problem-solving accuracy than for AA students. Implications for instruction for students with and without LD are discussed.
Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E
2014-01-01
In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT). PMID:25400566
Visualizing Big Data Outliers through Distributed Aggregation.
Wilkinson, Leland
2017-08-29
Visualizing outliers in massive datasets requires statistical pre-processing in order to reduce the scale of the problem to a size amenable to rendering systems like D3, Plotly or analytic systems like R or SAS. This paper presents a new algorithm, called hdoutliers, for detecting multidimensional outliers. It is unique for a) dealing with a mixture of categorical and continuous variables, b) dealing with big-p (many columns of data), c) dealing with big-n (many rows of data), d) dealing with outliers that mask other outliers, and e) dealing consistently with unidimensional and multidimensional datasets. Unlike ad hoc methods found in many machine learning papers, hdoutliers is based on a distributional model that allows outliers to be tagged with a probability. This critical feature reduces the likelihood of false discoveries.
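The key claim in the abstract is that hdoutliers rests on a distributional model rather than an ad hoc threshold, so each point can be tagged with a probability. The full algorithm is not reproduced in the abstract; the sketch below shows only that distributional core under a common assumption: nearest-neighbor distances among non-outlying points are approximately exponential, so a point's upper-tail probability under the fitted exponential flags it as an outlier.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X = np.vstack([X, [[8.0, 8.0, 8.0]]])  # one planted outlier

# Pairwise squared distances; ignore self-distances on the diagonal
d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
np.fill_diagonal(d2, np.inf)
nn = np.sqrt(d2.min(axis=1))           # nearest-neighbor distance per point

rate = 1.0 / nn.mean()                 # MLE of the exponential rate
p_tail = np.exp(-rate * nn)            # P(gap >= observed) under the model
outliers = np.where(p_tail < 0.001)[0]
print(outliers)
```

The probability tag is what distinguishes this style of detector from a fixed distance cutoff: the threshold (here 0.001) is a false-discovery level, not a data-dependent magic number.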
Software tool for data mining and its applications
NASA Astrophysics Data System (ADS)
Yang, Jie; Ye, Chenzhou; Chen, Nianyi
2002-03-01
A software tool for data mining is introduced, which integrates pattern recognition (PCA, Fisher discriminant analysis, clustering, hyperenvelop, regression), artificial intelligence (knowledge representation, decision trees), statistical learning (rough sets, support vector machines), and computational intelligence (neural networks, genetic algorithms, fuzzy systems). It consists of nine function models: pattern recognition, decision trees, association rules, fuzzy rules, neural networks, genetic algorithms, Hyper Envelop, support vector machines, and visualization. The principles and knowledge representation of several of these function models are described. The tool is implemented in Visual C++ under Windows 2000. Nonmonotony in data mining is dealt with by concept hierarchy and layered mining. The tool has been satisfactorily applied to the prediction of regularities in the formation of ternary intermetallic compounds in alloy systems and to the diagnosis of brain glioma.
No evidence for visual context-dependency of olfactory learning in Drosophila
NASA Astrophysics Data System (ADS)
Yarali, Ayse; Mayerle, Moritz; Nawroth, Christian; Gerber, Bertram
2008-08-01
How is behaviour organised across sensory modalities? Specifically, we ask concerning the fruit fly Drosophila melanogaster how visual context affects olfactory learning and recall and whether information about visual context is getting integrated into olfactory memory. We find that changing visual context between training and test does not deteriorate olfactory memory scores, suggesting that these olfactory memories can drive behaviour despite a mismatch of visual context between training and test. Rather, both the establishment and the recall of olfactory memory are generally facilitated by light. In a follow-up experiment, we find no evidence for learning about combinations of odours and visual context as predictors for reinforcement even after explicit training in a so-called biconditional discrimination task. Thus, a ‘true’ interaction between visual and olfactory modalities is not evident; instead, light seems to influence olfactory learning and recall unspecifically, for example by altering motor activity, alertness or olfactory acuity.
Four types of ensemble coding in data visualizations.
Szafir, Danielle Albers; Haroz, Steve; Gleicher, Michael; Franconeri, Steven
2016-01-01
Ensemble coding supports rapid extraction of visual statistics about distributed visual information. Researchers typically study this ability with the goal of drawing conclusions about how such coding extracts information from natural scenes. Here we argue that a second domain can serve as another strong inspiration for understanding ensemble coding: graphs, maps, and other visual presentations of data. Data visualizations allow observers to leverage their ability to perform visual ensemble statistics on distributions of spatial or featural visual information to estimate actual statistics on data. We survey the types of visual statistical tasks that occur within data visualizations across everyday examples, such as scatterplots, and more specialized images, such as weather maps or depictions of patterns in text. We divide these tasks into four categories: identification of sets of values, summarization across those values, segmentation of collections, and estimation of structure. We point to unanswered questions for each category and give examples of such cross-pollination in the current literature. Increased collaboration between the data visualization and perceptual psychology research communities can inspire new solutions to challenges in visualization while simultaneously exposing unsolved problems in perception research.
Visual Place Learning in Drosophila melanogaster
Ofstad, Tyler A.; Zuker, Charles S.; Reiser, Michael B.
2011-01-01
The ability of insects to learn and navigate to specific locations in the environment has fascinated naturalists for decades. While the impressive navigation abilities of ants, bees, wasps, and other insects clearly demonstrate that insects are capable of visual place learning, little is known about the underlying neural circuits that mediate these behaviors. Drosophila melanogaster is a powerful model organism for dissecting the neural circuitry underlying complex behaviors, from sensory perception to learning and memory. Flies can identify and remember visual features such as size, color, and contour orientation. However, the extent to which they use vision to recall specific locations remains unclear. Here we describe a visual place-learning platform and demonstrate that Drosophila are capable of forming and retaining visual place memories to guide selective navigation. By targeted genetic silencing of small subsets of cells in the Drosophila brain we show that neurons in the ellipsoid body, but not in the mushroom bodies, are necessary for visual place learning. Together, these studies reveal distinct neuroanatomical substrates for spatial versus non-spatial learning, and substantiate Drosophila as a powerful model for the study of spatial memories. PMID:21654803
The Flynn effect and memory function.
Baxendale, Sallie
2010-08-01
The Flynn effect refers to the steady increase in IQ that appears to date back at least to the inception of modern-day IQ tests. This study examined the possible Flynn effects on clinical memory tests involving the learning and recall of verbal and nonverbal material. Comparisons of the age-related norms on the list learning and design learning tasks from the Adult Memory and Information Processing Battery (AMIPB), published in 1985, and its successor, the BIRT (Brain Injury Rehabilitation Trust) Memory and Information Processing Battery (BMIPB) published in 2007, indicate that there is a significant Flynn effect on tests of memory function. This effect appears to be material specific with statistically significant improvements in all scores on tests involving the learning and recall of visual material in every age range evident over a 22-year period. Verbal memory abilities appear to be relatively stable with no significant differences between the scores in the majority of age ranges. The ramifications for the clinical interpretation of these tests are discussed.
Efficient Learning of Continuous-Time Hidden Markov Models for Disease Progression
Liu, Yu-Ying; Li, Shuang; Li, Fuxin; Song, Le; Rehg, James M.
2016-01-01
The Continuous-Time Hidden Markov Model (CT-HMM) is an attractive approach to modeling disease progression due to its ability to describe noisy observations arriving irregularly in time. However, the lack of an efficient parameter learning algorithm for CT-HMM restricts its use to very small models or requires unrealistic constraints on the state transitions. In this paper, we present the first complete characterization of efficient EM-based learning methods for CT-HMM models. We demonstrate that the learning problem consists of two challenges: the estimation of posterior state probabilities and the computation of end-state conditioned statistics. We solve the first challenge by reformulating the estimation problem in terms of an equivalent discrete time-inhomogeneous hidden Markov model. The second challenge is addressed by adapting three approaches from the continuous time Markov chain literature to the CT-HMM domain. We demonstrate the use of CT-HMMs with more than 100 states to visualize and predict disease progression using a glaucoma dataset and an Alzheimer’s disease dataset. PMID:27019571
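The reformulation step the abstract describes, turning a continuous-time chain into an equivalent time-inhomogeneous discrete chain, can be sketched briefly: given a rate matrix Q and irregular visit gaps dt, the discrete transition matrix for each gap is the matrix exponential expm(Q·dt). The three-state Q below (mimicking mild, moderate, and an absorbing severe disease state) is illustrative, not from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative rate matrix: rows sum to zero, off-diagonals are
# instantaneous transition rates; state 2 is absorbing ("severe").
Q = np.array([[-0.3,  0.2, 0.1],
              [ 0.0, -0.5, 0.5],
              [ 0.0,  0.0, 0.0]])

for dt in (0.5, 2.0, 7.5):  # irregular intervals between patient visits
    P = expm(Q * dt)        # valid stochastic transition matrix for this gap
    print(dt, P.round(3))
```

Each interval length yields its own transition matrix, which is exactly what makes the resulting discrete HMM time-inhomogeneous, and why standard EM machinery can then be applied per interval.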
Optimism as a Prior Belief about the Probability of Future Reward
Kalra, Aditi; Seriès, Peggy
2014-01-01
Optimists hold positive a priori beliefs about the future. In Bayesian statistical theory, a priori beliefs can be overcome by experience. However, optimistic beliefs can at times appear surprisingly resistant to evidence, suggesting that optimism might also influence how new information is selected and learned. Here, we use a novel Pavlovian conditioning task, embedded in a normative framework, to directly assess how trait optimism, as classically measured using self-report questionnaires, influences choices between visual targets as learning about their association with reward progresses. We find that trait optimism relates to an a priori belief about the likelihood of rewards, but not losses, in our task. Critically, this positive belief behaves like a probabilistic prior, i.e. its influence reduces with increasing experience. Contrary to findings in the literature related to unrealistic optimism and self-beliefs, it does not appear to influence the iterative learning process directly. PMID:24853098
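The central finding, that the optimistic belief "behaves like a probabilistic prior, i.e. its influence reduces with increasing experience," is the standard conjugate-updating property. A minimal sketch with a Beta prior on reward probability (the optimistic prior Beta(8, 2), mean 0.8, and the true rate 0.3 are illustrative choices, not the paper's fitted values):

```python
import numpy as np

rng = np.random.default_rng(1)
true_p = 0.3          # actual reward probability
a, b = 8.0, 2.0       # optimistic Beta prior: mean a/(a+b) = 0.8

means = []
for t in range(200):
    r = rng.random() < true_p   # Bernoulli reward outcome
    a, b = a + r, b + (1 - r)   # conjugate Beta-Bernoulli update
    means.append(a / (a + b))   # posterior mean belief after trial t
print(means[0], means[-1])
```

Early beliefs sit near the optimistic prior mean, but as outcomes accumulate the prior's pseudo-counts are swamped and the posterior mean converges toward the true rate, which is the signature behavior the study tested for.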
ERIC Educational Resources Information Center
Johnson, A. M.; Ozogul, G.; Reisslein, M.
2015-01-01
An experiment examined the effects of visual signalling to relevant information in multiple external representations and the visual presence of an animated pedagogical agent (APA). Students learned electric circuit analysis using a computer-based learning environment that included Cartesian graphs, equations and electric circuit diagrams. The…
ERIC Educational Resources Information Center
Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady
2015-01-01
The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material…
Identification of Quality Visual-Based Learning Material for Technology Education
ERIC Educational Resources Information Center
Katsioloudis, Petros
2010-01-01
It is widely known that the use of visual technology enhances learning by providing a better understanding of the topic as well as motivating students. If all visual-based learning materials (tables, figures, photos, etc.) were equally effective in facilitating student achievement of all kinds of educational objectives, there would virtually be no…
ERIC Educational Resources Information Center
Subrahmaniyan, Neeraja; Krishnaswamy, Swetha; Chowriappa, Ashirwad; Srimathveeravalli, Govindarajan; Bisantz, Ann; Shriber, Linda; Kesavadas, Thenkurussi
2012-01-01
Research has shown that children with learning disabilities exhibit considerable challenges with visual motor integration. While there are specialized Occupational Therapy interventions aimed at visual motor integration, computer games and virtual toys have now become increasingly popular, forming an integral part of children's learning and play.…
Investigation of Error Patterns in Geographical Databases
NASA Technical Reports Server (NTRS)
Dryer, David; Jacobs, Derya A.; Karayaz, Gamze; Gronbech, Chris; Jones, Denise R. (Technical Monitor)
2002-01-01
The objective of the research conducted in this project is to develop a methodology to investigate the accuracy of Airport Safety Modeling Data (ASMD) using statistical, visualization, and Artificial Neural Network (ANN) techniques. Such a methodology can contribute to answering the following research questions: Over a representative sampling of ASMD databases, can statistical error analysis techniques be accurately learned and replicated by ANN modeling techniques? This representative ASMD sample should include numerous airports and a variety of terrain characterizations. Is it possible to identify and automate the recognition of patterns of error related to geographical features? Do such patterns of error relate to specific geographical features, such as elevation or terrain slope? Is it possible to combine the errors in small regions into an error prediction for a larger region? What are the data density reduction implications of this work? ASMD may be used as the source of terrain data for a synthetic visual system to be used in the cockpit of aircraft when visual reference to ground features is not possible during conditions of marginal weather or reduced visibility. In this research, United States Geologic Survey (USGS) digital elevation model (DEM) data has been selected as the benchmark. Artificial Neural Networks (ANNs) have been used and tested as alternate methods in place of the statistical methods in similar problems. They often perform better in pattern recognition, prediction, classification, and categorization problems. Many studies show that when the data is complex and noisy, the accuracy of ANN models is generally higher than that of comparable traditional methods.
Exploring the Engagement Effects of Visual Programming Language for Data Structure Courses
ERIC Educational Resources Information Center
Chang, Chih-Kai; Yang, Ya-Fei; Tsai, Yu-Tzu
2017-01-01
Previous research indicates that understanding the state of learning motivation enables researchers to deeply understand students' learning processes. Studies have shown that visual programming languages use graphical code, enabling learners to learn effectively, improve learning effectiveness, increase learning fun, and offering various other…
NASA Astrophysics Data System (ADS)
Oktaviyanthi, Rina; Herman, Tatang
2016-10-01
In this paper, the effects of two different modes of delivery are compared: self-paced video learning and conventional learning methods in mathematics. The research design is classified as a quasi-experiment. The participants were 80 first-year college students divided into two groups. One group, the experimental class, received the self-paced video learning method, while the other, the control group, was taught by the conventional learning method. Pre- and posttests were employed to measure the students' achievement, while questionnaires and interviews were applied to support the pre- and posttest data. Statistical analysis, including an independent samples t-test, showed differences (p < 0.05) in posttest scores between the experimental and control groups, meaning that the use of self-paced video contributed to students' achievement and attitudes. In addition, according to the students' answers, there are five positive gains in using self-paced video to learn Calculus: it suits both auditory and visual student characteristics, is useful for learning Calculus, helps students be more engaged and attentive in learning, makes the concepts of Calculus visible, and is an interesting medium that motivates students to learn independently.
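The study's headline analysis, an independent samples t-test on posttest scores of two groups of 40, can be reproduced in miniature. The scores below are simulated; the group means and spread are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
video = rng.normal(78, 10, 40)          # self-paced video group (simulated)
conventional = rng.normal(70, 10, 40)   # conventional group (simulated)

# Two-sided independent samples t-test, equal variances assumed
t, p = ttest_ind(video, conventional)
print(t, p)
```

With 40 per group and a between-group difference of this size, the test typically reaches p < 0.05, which mirrors the significance pattern the abstract reports.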
Perceptual learning in a non-human primate model of artificial vision
Killian, Nathaniel J.; Vurro, Milena; Keith, Sarah B.; Kyada, Margee J.; Pezaris, John S.
2016-01-01
Visual perceptual grouping, the process of forming global percepts from discrete elements, is experience-dependent. Here we show that the learning time course in an animal model of artificial vision is predicted primarily from the density of visual elements. Three naïve adult non-human primates were tasked with recognizing the letters of the Roman alphabet presented at variable size and visualized through patterns of discrete visual elements, specifically, simulated phosphenes mimicking a thalamic visual prosthesis. The animals viewed a spatially static letter using a gaze-contingent pattern and then chose, by gaze fixation, between a matching letter and a non-matching distractor. Months of learning were required for the animals to recognize letters using simulated phosphene vision. Learning rates increased in proportion to the mean density of the phosphenes in each pattern. Furthermore, skill acquisition transferred from trained to untrained patterns, not depending on the precise retinal layout of the simulated phosphenes. Taken together, the findings suggest that learning of perceptual grouping in a gaze-contingent visual prosthesis can be described simply by the density of visual activation. PMID:27874058
The 50s cliff: a decline in perceptuo-motor learning, not a deficit in visual motion perception.
Ren, Jie; Huang, Shaochen; Zhang, Jiancheng; Zhu, Qin; Wilson, Andrew D; Snapp-Childs, Winona; Bingham, Geoffrey P
2015-01-01
Previously, we measured perceptuo-motor learning rates across the lifespan and found a sudden drop in learning rates between ages 50 and 60, called the "50s cliff." The task was a unimanual visual rhythmic coordination task in which participants used a joystick to oscillate one dot in a display in coordination with another dot oscillated by a computer. Participants learned to produce a coordination with a 90° relative phase relation between the dots. Learning rates for participants over 60 were half those of younger participants. Given existing evidence for visual motion perception deficits in people over 60 and the role of visual motion perception in the coordination task, it remained unclear whether the 50s cliff reflected onset of this deficit or a genuine decline in perceptuo-motor learning. The current work addressed this question. Two groups of 12 participants in each of four age ranges (20s, 50s, 60s, 70s) learned to perform a bimanual coordination of 90° relative phase. One group trained with only haptic information and the other group with both haptic and visual information about relative phase. Both groups were tested in both information conditions at baseline and post-test. If the 50s cliff was caused by an age dependent deficit in visual motion perception, then older participants in the visual group should have exhibited less learning than those in the haptic group, which should not exhibit the 50s cliff, and older participants in both groups should have performed less well when tested with visual information. Neither of these expectations was confirmed by the results, so we concluded that the 50s cliff reflects a genuine decline in perceptuo-motor learning with aging, not the onset of a deficit in visual motion perception.
Feature maps driven no-reference image quality prediction of authentically distorted images
NASA Astrophysics Data System (ADS)
Ghadiyaram, Deepti; Bovik, Alan C.
2015-03-01
Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.
Pietrzak, Robert H; Scott, James Cobb; Harel, Brian T; Lim, Yen Ying; Snyder, Peter J; Maruff, Paul
2012-11-01
Alprazolam is a benzodiazepine that, when administered acutely, results in impairments in several aspects of cognition, including attention, learning, and memory. However, the profile (i.e., component processes) that underlie alprazolam-related decrements in visual paired associate learning has not been fully explored. In this double-blind, placebo-controlled, randomized cross-over study of healthy older adults, we used a novel, "process-based" computerized measure of visual paired associate learning to examine the effect of a single, acute 1-mg dose of alprazolam on component processes of visual paired associate learning and memory. Acute alprazolam challenge was associated with a large magnitude reduction in visual paired associate learning and memory performance (d = 1.05). Process-based analyses revealed significant increases in distractor, exploratory, between-search, and within-search error types. Analyses of percentages of each error type suggested that, relative to placebo, alprazolam challenge resulted in a decrease in the percentage of exploratory errors and an increase in the percentage of distractor errors, both of which reflect memory processes. Results of this study suggest that acute alprazolam challenge decreases visual paired associate learning and memory performance by reducing the strength of the association between pattern and location, which may reflect a general breakdown in memory consolidation, with less evidence of reductions in executive processes (e.g., working memory) that facilitate visual paired associate learning and memory. Copyright © 2012 John Wiley & Sons, Ltd.
Audiovisual Association Learning in the Absence of Primary Visual Cortex.
Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J; de Gelder, Beatrice
2015-01-01
Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit audiovisual association learning task with two different colors, red and purple (the latter known to minimally activate the extra-geniculate pathway). Interestingly, the patient learned the association between an auditory cue and a visual stimulus only when the unseen visual stimulus was red, but not when it was purple. The current study presents the first evidence showing the possibility of audiovisual association learning in humans with lesioned striate cortex. Furthermore, in line with animal studies, it supports an important role for the SC in audiovisual associative learning.
Information Visualization Techniques for Effective Cross-Discipline Communication
NASA Astrophysics Data System (ADS)
Fisher, Ward
2013-04-01
Collaboration between research groups in different fields is a common occurrence, but it can often be frustrating due to the absence of a common vocabulary. This lack of a shared context can make expressing important concepts and discussing results difficult. This problem may be further exacerbated when communicating to an audience of laypeople. Without a clear frame of reference, simple concepts are often rendered difficult-to-understand at best, and unintelligible at worst. An easy way to alleviate this confusion is with the use of clear, well-designed visualizations to illustrate an idea, process or conclusion. There exist a number of well-described machine-learning and statistical techniques which can be used to illuminate the information present within complex high-dimensional datasets. Once the information has been separated from the data, clear communication becomes a matter of selecting an appropriate visualization. Ideally, the visualization is information-rich but data-scarce. Anything from a simple bar chart, to a line chart with confidence intervals, to an animated set of 3D point-clouds can be used to render a complex idea as an easily understood image. Several case studies will be presented in this work. In the first study, we will examine how a complex statistical analysis was applied to a high-dimensional dataset, and how the results were succinctly communicated to an audience of microbiologists and chemical engineers. Next, we will examine a technique used to illustrate the concept of the singular value decomposition, as used in the field of computer vision, to a lay audience of undergraduate students from mixed majors. We will then examine a case where a simple animated line plot was used to communicate an approach to signal decomposition, and will finish with a discussion of the tools available to create these visualizations.
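The SVD case study is a natural place to make the abstract's point concrete. The talk's exact demo is not described, but a common way to illustrate the SVD to a lay audience in a computer-vision context is a rank-k reconstruction of an image: keep only the top k singular components and watch the error shrink as k grows.

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((32, 32))              # stand-in for a grayscale image
U, s, Vt = np.linalg.svd(img, full_matrices=False)

errors = []
for k in (1, 4, 16, 32):
    # Rank-k approximation from the top k singular triplets
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    errors.append(np.linalg.norm(img - approx))
print(errors)
```

A sequence of such reconstructions, rendered as images rather than error numbers, is exactly the kind of information-rich, data-scarce visualization the abstract advocates: the audience sees the picture sharpen as components are added, with no linear algebra required.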
Techniques for Programming Visual Demonstrations.
ERIC Educational Resources Information Center
Gropper, George L.
Visual demonstrations may be used as part of programs to deliver both content objectives and process objectives. Research has shown that learning of concepts is easier, more accurate, and more broadly applied when it is accompanied by visual examples. The visual examples supporting content learning should emphasize both discrimination and…
The Effects of Realism in Learning with Dynamic Visualizations
ERIC Educational Resources Information Center
Scheiter, Katharina; Gerjets, Peter; Huk, Thomas; Imhof, Birgit; Kammerer, Yvonne
2009-01-01
Two experiments are reported that investigated the relative effectiveness of a realistic dynamic visualization as opposed to a schematic visualization for learning about cell replication (mitosis). In Experiment 1, 37 university students watched either realistic or schematic visualizations. Students' subjective task demands ratings as well as…
Using Visual Imagery in the Classroom.
ERIC Educational Resources Information Center
Grabow, Beverly
1981-01-01
The use of visual imagery, visualization, and guided and unguided fantasy has potential as a teaching tool for use with learning disabled children. Visualization utilized in a gamelike atmosphere can help the student learn new concepts, can positively affect social behaviors, and can help with emotional control. (SB)
Physics Learning Styles in Higher Education
NASA Astrophysics Data System (ADS)
Loos, Rebecca; Ward, James
2012-03-01
Students in Physics learn in a variety of ways depending on backgrounds and interests. This study proposes to evaluate how students in Physics learn using Howard Gardner's Theory of Multiple Intelligences. Physics utilizes numbers, conceptualization of models, observations and visualization skills, and the ability to understand and reflect on specific information. The main objective is to evaluate how Physics students learn specifically using spatial, visual and sequential approaches. This will be assessed by conducting a learning style survey provided by North Carolina State University (NCSU). The survey is completed online by the student, after which the results are sent to NCSU. Students will print out the completed survey analysis for further evaluation. The NCSU results categorize students within five of ten learning styles. After the evaluation of Howard Gardner's Theory of Multiple Intelligences and the NCSU definitions of the ten learning styles, the NCSU sensing and visual learning styles will be defined as Gardner's spatial and visual learning styles. NCSU's sequential learning style will be looked at separately. With the survey results, it can be determined if Physics students fall within the hypothesized learning styles.
Rahman, Md Mahmudur; Bhattacharya, Prabir; Desai, Bipin C
2007-01-01
A content-based image retrieval (CBIR) framework for a diverse collection of medical images of different imaging modalities, anatomic regions with different orientations and biological systems is proposed. Organization of images in such a database (DB) is well defined with predefined semantic categories; hence, it can be useful for category-specific searching. The proposed framework consists of machine learning methods for image prefiltering, similarity matching using statistical distance measures, and a relevance feedback (RF) scheme. To narrow down the semantic gap and increase the retrieval efficiency, we investigate both supervised and unsupervised learning techniques to associate low-level global image features (e.g., color, texture, and edge) in the projected PCA-based eigenspace with their high-level semantic and visual categories. Specifically, we explore the use of a probabilistic multiclass support vector machine (SVM) and fuzzy c-means (FCM) clustering for categorization and prefiltering of images to reduce the search space. A category-specific statistical similarity matching is proposed at a finer level on the prefiltered images. To better incorporate perceptual subjectivity, an RF mechanism is also added to update the query parameters dynamically and adjust the proposed matching functions. Experiments are based on a ground-truth DB consisting of 5000 diverse medical images of 20 predefined categories. Analysis of results based on cross-validation (CV) accuracy and precision-recall for image categorization and retrieval is reported. It demonstrates the improvement, effectiveness, and efficiency achieved by the proposed framework.
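The prefilter-then-match pipeline described in this abstract can be sketched in a few lines. This is not the authors' code: the feature dimensions, toy data, and function names below are illustrative assumptions, and the real framework additionally uses FCM clustering and relevance feedback, which are omitted here.

```python
# Sketch: project global image features into a PCA eigenspace, use a
# probabilistic multiclass SVM to predict the semantic category, then run
# finer-level distance matching only within that category.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.metrics.pairwise import euclidean_distances

rng = np.random.default_rng(0)
# Toy "global image features" (stand-ins for color/texture/edge vectors)
X = rng.normal(size=(200, 64)) + np.repeat(np.arange(4), 50)[:, None]
y = np.repeat(np.arange(4), 50)            # 4 predefined semantic categories

pca = PCA(n_components=10).fit(X)          # PCA-based eigenspace
Z = pca.transform(X)
svm = SVC(probability=True).fit(Z, y)      # probabilistic multiclass SVM

def retrieve(query_feat, k=5):
    zq = pca.transform(query_feat[None, :])
    category = svm.predict(zq)[0]          # prefilter: predicted category
    idx = np.flatnonzero(y == category)    # reduced search space
    d = euclidean_distances(zq, Z[idx])[0] # finer-level similarity matching
    return idx[np.argsort(d)[:k]]

hits = retrieve(X[0])
```

Using the query image itself as input, the query should rank first among its own category's images, which illustrates why prefiltering shrinks the search space without changing the top matches.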
Machine Detection of Enhanced Electromechanical Energy Conversion in PbZr 0.2Ti 0.8O 3 Thin Films
Agar, Joshua C.; Cao, Ye; Naul, Brett; ...
2018-05-28
Many energy conversion, sensing, and microelectronic applications based on ferroic materials are determined by the domain structure evolution under applied stimuli. New hyperspectral, multidimensional spectroscopic techniques now probe dynamic responses at relevant length and time scales to provide an understanding of how these nanoscale domain structures impact macroscopic properties. Such approaches, however, remain limited in use because of the difficulties that exist in extracting and visualizing scientific insights from these complex datasets. Using multidimensional band-excitation scanning probe spectroscopy and adapting tools from both computer vision and machine learning, an automated workflow is developed to featurize, detect, and classify signatures of ferroelectric/ferroelastic switching processes in complex ferroelectric domain structures. This approach enables the identification and nanoscale visualization of varied modes of response and a pathway to statistically meaningful quantification of the differences between those modes. Lastly, among other things, the importance of domain geometry is spatially visualized for enhancing nanoscale electromechanical energy conversion.
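The featurize-detect-classify workflow the abstract describes can be illustrated with a toy version: extract simple statistical features from each pixel's spectrum, cluster them, and map the cluster labels back onto the spatial grid. This is a sketch of the general pattern, not the authors' workflow; the cube dimensions, features, and use of k-means are all illustrative assumptions.

```python
# Toy sketch: featurize per-pixel spectra from a hyperspectral cube, cluster
# them into "response modes", and visualize the modes as a spatial map.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy hyperspectral cube: 16x16 spatial grid, 50-point spectrum per pixel
cube = rng.normal(size=(16, 16, 50))
cube[:8] += np.linspace(0, 2, 50)          # inject one "mode" in the top half

# Featurize each spectrum: mean level, spread, and a crude slope proxy
flat = cube.reshape(-1, 50)
feats = np.column_stack([flat.mean(1), flat.std(1), flat[:, -1] - flat[:, 0]])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
mode_map = labels.reshape(16, 16)          # spatial map of response modes
```

In the toy data the two halves of the grid fall into different clusters, which is the kind of spatially resolved mode separation the paper's workflow aims to produce from real band-excitation data.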
VisualUrText: A Text Analytics Tool for Unstructured Textual Data
NASA Astrophysics Data System (ADS)
Zainol, Zuraini; Jaymes, Mohd T. H.; Nohuddin, Puteri N. E.
2018-05-01
The growing amount of unstructured text on the Internet is tremendous. Text repositories come from Web 2.0, business intelligence and social networking applications. It is also believed that 80-90% of future data growth will be in the form of unstructured text databases that may potentially contain interesting patterns and trends. Text Mining is a well-known technique for discovering interesting patterns and trends, i.e. non-trivial knowledge, from massive unstructured text data. Text Mining covers multidisciplinary fields involving information retrieval (IR), text analysis, natural language processing (NLP), data mining, machine learning, statistics and computational linguistics. This paper discusses the development of a text analytics tool that is proficient in extracting, processing and analyzing unstructured text data and visualizing the cleaned text data in multiple forms such as a Document Term Matrix (DTM), Frequency Graph, Network Analysis Graph, Word Cloud and Dendrogram. This tool, VisualUrText, is developed to assist students and researchers in extracting interesting patterns and trends in document analyses.
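Two of the outputs named above, the Document-Term Matrix and the term frequencies behind a frequency graph or word cloud, can be built in a few lines of standard-library Python. This is a minimal illustration of those data structures, not VisualUrText's implementation; the documents and the whitespace tokenizer are toy assumptions.

```python
# Sketch: a Document-Term Matrix (rows = documents, columns = vocabulary
# terms) plus corpus-wide term frequencies.
from collections import Counter

docs = [
    "text mining finds patterns in unstructured text",
    "machine learning and statistics support text mining",
]

tokens = [d.split() for d in docs]                 # naive whitespace tokenizer
vocab = sorted({w for t in tokens for w in t})     # term index (columns)
dtm = [[Counter(t)[term] for term in vocab] for t in tokens]

# Corpus-wide term frequencies: the input to a frequency graph or word cloud
freqs = Counter(w for t in tokens for w in t)
```

A real tool would add cleaning steps (stop-word removal, stemming) before building the matrix, but the resulting structures are the same.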
Multitask visual learning using genetic programming.
Jaśkowski, Wojciech; Krawiec, Krzysztof; Wieloch, Bartosz
2008-01-01
We propose a multitask learning method of visual concepts within the genetic programming (GP) framework. Each GP individual is composed of several trees that process visual primitives derived from input images. Two trees solve two different visual tasks and are allowed to share knowledge with each other by commonly calling the remaining GP trees (subfunctions) included in the same individual. The performance of a particular tree is measured by its ability to reproduce the shapes contained in the training images. We apply this method to visual learning tasks of recognizing simple shapes and compare it to a reference method. The experimental verification demonstrates that such multitask learning often leads to performance improvements in one or both solved tasks, without extra computational effort.
Students using visual thinking to learn science in a Web-based environment
NASA Astrophysics Data System (ADS)
Plough, Jean Margaret
United States students' science test scores are low, especially in problem solving, and traditional science instruction could be improved. Consequently, visual thinking, constructing science structures, and problem solving in a web-based environment may be valuable strategies for improving science learning. This ethnographic study examined the science learning of fifteen fourth grade students in an after school computer club involving diverse students at an inner city school. The investigation was done from the perspective of the students, and it described the processes of visual thinking, web page construction, and problem solving in a web-based environment. The study utilized informal group interviews, field notes, Visual Learning Logs, and student web pages, and incorporated a Standards-Based Rubric which evaluated students' performance on eight science and technology standards. The Visual Learning Logs were drawings done on the computer to represent science concepts related to the Food Chain. Students used the internet to search for information on a plant or animal of their choice. Next, students used this internet information, with the information from their Visual Learning Logs, to make web pages on their plant or animal. Later, students linked their web pages to form Science Structures. Finally, students linked their Science Structures with the structures of other students, and used these linked structures as models for solving problems. Further, during informal group interviews, students answered questions about visual thinking, problem solving, and science concepts. The results of this study showed clearly that (1) making visual representations helped students understand science knowledge, (2) making links between web pages helped students construct Science Knowledge Structures, and (3) students themselves said that visual thinking helped them learn science. 
In addition, this study found that when using Visual Learning Logs, the main overall ideas of the science concepts were usually represented accurately. Further, looking for information on the internet may cause new problems in learning. Likewise, being absent, starting late, and/or dropping out all may negatively influence students' proficiency on the standards. Finally, the way Science Structures are constructed and linked may provide insights into the way individual students think and process information.
Discovering and visualizing indirect associations between biomedical concepts
Tsuruoka, Yoshimasa; Miwa, Makoto; Hamamoto, Kaisei; Tsujii, Jun'ichi; Ananiadou, Sophia
2011-01-01
Motivation: Discovering useful associations between biomedical concepts has been one of the main goals in biomedical text-mining, and understanding their biomedical contexts is crucial in the discovery process. Hence, we need a text-mining system that helps users explore various types of (possibly hidden) associations in an easy and comprehensible manner. Results: This article describes FACTA+, a real-time text-mining system for finding and visualizing indirect associations between biomedical concepts from MEDLINE abstracts. The system can be used as a text search engine like PubMed with additional features to help users discover and visualize indirect associations between important biomedical concepts such as genes, diseases and chemical compounds. FACTA+ inherits all functionality from its predecessor, FACTA, and extends it by incorporating three new features: (i) detecting biomolecular events in text using a machine learning model, (ii) discovering hidden associations using co-occurrence statistics between concepts, and (iii) visualizing associations to improve the interpretability of the output. To the best of our knowledge, FACTA+ is the first real-time web application that offers the functionality of finding concepts involving biomolecular events and visualizing indirect associations of concepts with both their categories and importance. Availability: FACTA+ is available as a web application at http://refine1-nactem.mc.man.ac.uk/facta/, and its visualizer is available at http://refine1-nactem.mc.man.ac.uk/facta-visualizer/. Contact: tsuruoka@jaist.ac.jp PMID:21685059
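The "hidden association" idea behind FACTA+ can be sketched with co-occurrence counts: two concepts that never appear together may still share many intermediate concepts. The toy document sets, concept names, and the minimum-overlap scoring rule below are illustrative assumptions, not FACTA+'s actual statistics.

```python
# Sketch: direct vs indirect association between concepts, measured by
# document co-occurrence through intermediate concepts.

# concept -> set of document ids mentioning it (toy stand-ins for MEDLINE)
docs_of = {
    "gene_X":    {1, 2, 3},
    "pathway_P": {2, 3, 4, 5},
    "disease_D": {4, 5, 6},
}

def direct_cooccurrence(a, b):
    """Number of documents mentioning both concepts."""
    return len(docs_of[a] & docs_of[b])

def indirect_score(a, b):
    """Shared co-occurrence through every intermediate concept."""
    return sum(min(direct_cooccurrence(a, m), direct_cooccurrence(m, b))
               for m in docs_of if m not in (a, b))

direct = direct_cooccurrence("gene_X", "disease_D")   # never co-mentioned
indirect = indirect_score("gene_X", "disease_D")      # linked via pathway_P
```

Here the gene and the disease have zero direct co-occurrence but a positive indirect score through the shared pathway, which is exactly the kind of association a visualizer would surface.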
Interactive Learning System "VisMis" for Scientific Visualization Course
ERIC Educational Resources Information Center
Zhu, Xiaoming; Sun, Bo; Luo, Yanlin
2018-01-01
Now visualization courses have been taught at universities around the world. Keeping students motivated and actively engaged in this course can be a challenging task. In this paper we introduce our developed interactive learning system called VisMis (Visualization and Multi-modal Interaction System) for postgraduate scientific visualization course…
ERIC Educational Resources Information Center
Imhof, Birgit; Scheiter, Katharina; Edelmann, Jorg; Gerjets, Peter
2012-01-01
Two studies investigated the effectiveness of dynamic and static visualizations for a perceptual learning task (locomotion pattern classification). In Study 1, seventy-five students viewed either dynamic, static-sequential, or static-simultaneous visualizations. For tasks of intermediate difficulty, dynamic visualizations led to better…
Learning about Locomotion Patterns from Visualizations: Effects of Presentation Format and Realism
ERIC Educational Resources Information Center
Imhof, Birgit; Scheiter, Katharina; Gerjets, Peter
2011-01-01
The rapid development of computer graphics technology has made possible an easy integration of dynamic visualizations into computer-based learning environments. This study examines the relative effectiveness of dynamic visualizations, compared either to sequentially or simultaneously presented static visualizations. Moreover, the degree of realism…
Drawing Connections Across Conceptually Related Visual Representations in Science
NASA Astrophysics Data System (ADS)
Hansen, Janice
This dissertation explored beliefs about learning from multiple related visual representations in science, and compared beliefs to learning outcomes. Three research questions were explored: 1) What beliefs do pre-service teachers, non-educators and children have about learning from visual representations? 2) What format of presenting those representations is most effective for learning? And 3) Can children's ability to process conceptually related science diagrams be enhanced with added support? Three groups of participants, 89 pre-service teachers, 211 adult non-educators, and 385 middle school children, were surveyed about whether they felt related visual representations presented serially or simultaneously would lead to better learning outcomes. Two experiments, one with adults and one with child participants, explored the validity of these beliefs. Pre-service teachers did not endorse either serial or simultaneous related visual representations for their own learning. They were, however, significantly more likely to indicate that children would learn better from serially presented diagrams. In direct contrast to the educators, middle school students believed they would learn better from related visual representations presented simultaneously. Experimental data indicated that the beliefs adult non-educators held about their own learning needs matched learning outcomes. These participants endorsed simultaneous presentation of related diagrams for their own learning, and comparing learning from related diagrams presented simultaneously with learning from the same diagrams presented serially indicated that those in the simultaneous condition were able to create more complex mental models. A second experiment compared children's learning from related diagrams across four randomly-assigned conditions: serial, simultaneous, simultaneous with signaling, and simultaneous with structure mapping support.
Providing middle school students with simultaneous related diagrams with support for structure mapping led to a lessened reliance on surface features, and a better understanding of the science concepts presented. These findings suggest that presenting diagrams serially in an effort to reduce cognitive load may not be preferable for learning if making connections across representations, and by extension across science concepts, is desired. Instead, providing simultaneous diagrams with structure mapping support may result in greater attention to the salient relationships between related visual representations as well as between the representations and the science concepts they depict.
The Statistics of Visual Representation
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.
2002-01-01
The experience of retinex image processing has prompted us to reconsider fundamental aspects of imaging and image processing. Foremost is the idea that a good visual representation requires a non-linear transformation of the recorded (approximately linear) image data. Further, this transformation appears to converge on a specific distribution. Here we investigate the connection between numerical and visual phenomena. Specifically the questions explored are: (1) Is there a well-defined consistent statistical character associated with good visual representations? (2) Does there exist an ideal visual image? And (3) what are its statistical properties?
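The non-linear transformation of approximately linear image data that the abstract refers to can be illustrated with a single-scale retinex operation: the log of the image minus the log of its local Gaussian-weighted average. This is a generic sketch of that transform under toy data, not the authors' retinex processing chain; the image and sigma are illustrative.

```python
# Sketch: single-scale retinex as a non-linear (log) transform of linear
# image data, comparing each pixel to its local surround.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
img = rng.uniform(0.01, 1.0, size=(32, 32))     # toy linear image data

def single_scale_retinex(image, sigma=4.0):
    # log(image) - log(local average): compresses slowly varying
    # illumination while preserving local contrast (reflectance detail)
    return np.log(image) - np.log(gaussian_filter(image, sigma))

out = single_scale_retinex(img)
```

Questions like the abstract's "is there a consistent statistical character?" can then be asked of the histogram of `out` rather than of the raw linear data.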
Selective transfer of visual working memory training on Chinese character learning.
Opitz, Bertram; Schneiders, Julia A; Krick, Christoph M; Mecklinger, Axel
2014-01-01
Previous research has shown a systematic relationship between phonological working memory capacity and second language proficiency for alphabetic languages. However, little is known about the impact of working memory processes on second language learning in a non-alphabetic language such as Mandarin Chinese. Due to the greater complexity of the Chinese writing system we expect that visual working memory rather than phonological working memory exerts a unique influence on learning Chinese characters. This issue was explored in the present experiment by comparing visual working memory training with an active (auditory working memory training) control condition and a passive, no training control condition. Training induced modulations in language-related brain networks were additionally examined using functional magnetic resonance imaging in a pretest-training-posttest design. As revealed by pre- to posttest comparisons and analyses of individual differences in working memory training gains, visual working memory training led to positive transfer effects on visual Chinese vocabulary learning compared to both control conditions. In addition, we found sustained activation after visual working memory training in the (predominantly visual) left infero-temporal cortex that was associated with behavioral transfer. In the control conditions, activation either increased (active control condition) or decreased (passive control condition) without reliable behavioral transfer effects. This suggests that visual working memory training leads to more efficient processing and more refined responses in brain regions involved in visual processing. Furthermore, visual working memory training boosted additional activation in the precuneus, presumably reflecting mental image generation of the learned characters. 
We, therefore, suggest that the conjoint activity of the mid-fusiform gyrus and the precuneus after visual working memory training reflects an interaction of working memory and imagery processes with complex visual stimuli that fosters the coherent synthesis of a percept from a complex visual input in service of enhanced Chinese character learning. © 2013 Published by Elsevier Ltd.
ERIC Educational Resources Information Center
Buditjahjanto, I. G. P. Asto; Nurlaela, Luthfiyah; Ekohariadi; Riduwan, Mochamad
2017-01-01
Programming technique is one of the subjects at Vocational High Schools in Indonesia. This subject contains theory and application of programming utilizing Visual Programming. Students experience some difficulties learning from textual materials. Therefore, it is necessary to develop media as a tool to transfer learning materials. The objectives of this…
Smith, Mary Lou; Bigel, Marla; Miller, Laurie A
2011-02-01
The mesial temporal lobes are important for learning arbitrary associations. It has previously been demonstrated that left mesial temporal structures are involved in learning word pairs, but it is not yet known whether comparable lesions in the right temporal lobe impair visually mediated associative learning. Patients who had undergone left (n=16) or right (n=18) temporal lobectomy for relief of intractable epilepsy and healthy controls (n=13) were administered two paired-associate learning tasks assessing their learning and memory of pairs of abstract designs or pairs of symbols in unique locations. Both patient groups had deficits in learning the designs, but only the right temporal group was impaired in recognition. For the symbol location task, differences were not found in learning, but again a recognition deficit was found for the right temporal group. The findings implicate the mesial temporal structures in relational learning. They support a material-specific effect for recognition but not for learning and recall of arbitrary visual and visual-spatial associative information. Copyright © 2010 Elsevier Inc. All rights reserved.
Nawroth, Christian; Prentice, Pamela M; McElligott, Alan G
2017-01-01
Variation in common personality traits, such as boldness or exploration, is often associated with risk-reward trade-offs and behavioural flexibility. To date, only a few studies have examined the effects of consistent behavioural traits on both learning and cognition. We investigated whether certain personality traits ('exploration' and 'sociability') of individuals were related to cognitive performance, learning flexibility and learning style in a social ungulate species, the goat (Capra hircus). We also investigated whether a preference for feature cues rather than impaired learning abilities can explain performance variation in a visual discrimination task. We found that personality scores were consistent across time and context. Less explorative goats performed better in a non-associative cognitive task, in which subjects had to follow the trajectory of a hidden object (i.e. testing their ability for object permanence). We also found that less sociable subjects performed better compared to more sociable goats in a visual discrimination task. Good visual learning performance was associated with a preference for feature cues, indicating personality-dependent learning strategies in goats. Our results suggest that personality traits predict the outcome in visual discrimination and non-associative cognitive tasks in goats and that impaired performance in a visual discrimination task does not necessarily imply impaired learning capacities, but rather can be explained by a varying preference for feature cues. Copyright © 2016 Elsevier B.V. All rights reserved.
Changes in Visual Object Recognition Precede the Shape Bias in Early Noun Learning
Yee, Meagan; Jones, Susan S.; Smith, Linda B.
2012-01-01
Two of the most formidable skills that characterize human beings are language and our prowess in visual object recognition. They may also be developmentally intertwined. Two experiments, a large sample cross-sectional study and a smaller sample 6-month longitudinal study of 18- to 24-month-olds, tested a hypothesized developmental link between changes in visual object representation and noun learning. Previous findings in visual object recognition indicate that children’s ability to recognize common basic level categories from sparse structural shape representations of object shape emerges between the ages of 18 and 24 months, is related to noun vocabulary size, and is lacking in children with language delay. Other research shows in artificial noun learning tasks that during this same developmental period, young children systematically generalize object names by shape, that this shape bias predicts future noun learning, and is lacking in children with language delay. The two experiments examine the developmental relation between visual object recognition and the shape bias for the first time. The results show that developmental changes in visual object recognition systematically precede the emergence of the shape bias. The results suggest a developmental pathway in which early changes in visual object recognition that are themselves linked to category learning enable the discovery of higher-order regularities in category structure and thus the shape bias in novel noun learning tasks. The proposed developmental pathway has implications for understanding the role of specific experience in the development of both visual object recognition and the shape bias in early noun learning. PMID:23227015
Diagnosing alternative conceptions of Fermi energy among undergraduate students
NASA Astrophysics Data System (ADS)
Sharma, Sapna; Ahluwalia, Pardeep Kumar
2012-07-01
Physics education researchers have scientifically established the fact that the understanding of new concepts and interpretation of incoming information are strongly influenced by the preexisting knowledge and beliefs of students, called epistemological beliefs. This can lead to a gap between what students actually learn and what the teacher expects them to learn. In a classroom, as a teacher, it is desirable that one tries to bridge this gap at least on the key concepts of a particular field which is being taught. One such key concept which crops up in statistical physics/solid-state physics courses, and around which the behaviour of materials is described, is Fermi energy (εF). In this paper, we present the results which emerged about misconceptions on Fermi energy in the process of administering a diagnostic tool called the Statistical Physics Concept Survey developed by the authors. It deals with eight themes of basic importance in learning undergraduate solid-state physics and statistical physics. The question items of the tool were put through well-established sequential processes: definition of themes, Delphi study, interview with students, drafting questions, administration, validity and reliability of the tool. The tool was administered to a group of undergraduate students and postgraduate students, in a pre-test and post-test design. In this paper, we have taken one of the themes i.e. Fermi energy of the diagnostic tool for our analysis and discussion. Students’ responses and reasoning comments given during interview were analysed. This analysis helped us to identify prevailing misconceptions/learning gaps among students on this topic. How spreadsheets can be effectively used to remove the identified misconceptions and help appreciate the finer nuances while visualizing the behaviour of the system around Fermi energy, normally sidestepped both by the teachers and learners, is also presented in this paper.
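The behaviour around the Fermi energy that the abstract suggests visualizing with spreadsheets can equally be sketched in code via the Fermi-Dirac occupation f(E) = 1 / (exp((E - E_F)/kT) + 1). The particular values of E_F and kT below are illustrative, not from the paper.

```python
# Sketch: Fermi-Dirac occupation probability around the Fermi energy.
import math

def fermi_dirac(E, E_F, kT):
    """Occupation probability of a state at energy E (all values in eV)."""
    return 1.0 / (math.exp((E - E_F) / kT) + 1.0)

E_F = 5.0   # eV, illustrative Fermi energy
kT = 0.025  # eV, roughly room temperature

at_fermi = fermi_dirac(E_F, E_F, kT)         # exactly 0.5 at E = E_F
above = fermi_dirac(E_F + 0.5, E_F, kT)      # nearly 0 well above E_F
below = fermi_dirac(E_F - 0.5, E_F, kT)      # nearly 1 well below E_F
```

Sweeping E at progressively smaller kT shows the occupation sharpening toward a step function at E_F, the behaviour students often misconceive; this is the same exercise the paper proposes doing in a spreadsheet.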
Li, Li; MaBouDi, HaDi; Egertová, Michaela; Elphick, Maurice R.
2017-01-01
Synaptic plasticity is considered to be a basis for learning and memory. However, the relationship between synaptic arrangements and individual differences in learning and memory is poorly understood. Here, we explored how the density of microglomeruli (synaptic complexes) within specific regions of the bumblebee (Bombus terrestris) brain relates to both visual learning and inter-individual differences in learning and memory performance on a visual discrimination task. Using whole-brain immunolabelling, we measured the density of microglomeruli in the collar region (visual association areas) of the mushroom bodies of the bumblebee brain. We found that bumblebees which made fewer errors during training in a visual discrimination task had higher microglomerular density. Similarly, bumblebees that had better retention of the learned colour-reward associations two days after training had higher microglomerular density. Further experiments indicated experience-dependent changes in neural circuitry: learning a colour-reward contingency with 10 colours (but not two colours) does result, and exposure to many different colours may result, in changes to microglomerular density in the collar region of the mushroom bodies. These results reveal the varying roles that visual experience, visual learning and foraging activity have on neural structure. Although our study does not provide a causal link between microglomerular density and performance, the observed positive correlations provide new insights for future studies into how neural structure may relate to inter-individual differences in learning and memory. PMID:28978727
Li, Li; MaBouDi, HaDi; Egertová, Michaela; Elphick, Maurice R; Chittka, Lars; Perry, Clint J
2017-10-11
Synaptic plasticity is considered to be a basis for learning and memory. However, the relationship between synaptic arrangements and individual differences in learning and memory is poorly understood. Here, we explored how the density of microglomeruli (synaptic complexes) within specific regions of the bumblebee ( Bombus terrestris ) brain relates to both visual learning and inter-individual differences in learning and memory performance on a visual discrimination task. Using whole-brain immunolabelling, we measured the density of microglomeruli in the collar region (visual association areas) of the mushroom bodies of the bumblebee brain. We found that bumblebees which made fewer errors during training in a visual discrimination task had higher microglomerular density. Similarly, bumblebees that had better retention of the learned colour-reward associations two days after training had higher microglomerular density. Further experiments indicated experience-dependent changes in neural circuitry: learning a colour-reward contingency with 10 colours (but not two colours) does result, and exposure to many different colours may result, in changes to microglomerular density in the collar region of the mushroom bodies. These results reveal the varying roles that visual experience, visual learning and foraging activity have on neural structure. Although our study does not provide a causal link between microglomerular density and performance, the observed positive correlations provide new insights for future studies into how neural structure may relate to inter-individual differences in learning and memory. © 2017 The Authors.
Learning to See the Infinite: Measuring Visual Literacy Skills in a 1st-Year Seminar Course
ERIC Educational Resources Information Center
Palmer, Michael S.; Matthews, Tatiana
2015-01-01
Visual literacy was a stated learning objective for the fall 2009 iteration of a first-year seminar course. To help students develop visual literacy skills, they received formal instruction throughout the semester and completed a series of carefully designed learning activities. The effects of these interventions were measured using a one-group…
Learning Program for Enhancing Visual Literacy for Non-Design Students Using a CMS to Share Outcomes
ERIC Educational Resources Information Center
Ariga, Taeko; Watanabe, Takashi; Otani, Toshio; Masuzawa, Toshimitsu
2016-01-01
This study proposes a basic learning program for enhancing visual literacy using an original Web content management system (Web CMS) to share students' outcomes in class as a blog post. It seeks to reinforce students' understanding and awareness of the design of visual content. The learning program described in this research focuses on to address…
ERIC Educational Resources Information Center
Warmington, Meesha; Hulme, Charles
2012-01-01
This study examines the concurrent relationships between phoneme awareness, visual-verbal paired-associate learning, rapid automatized naming (RAN), and reading skills in 7- to 11-year-old children. Path analyses showed that visual-verbal paired-associate learning and RAN, but not phoneme awareness, were unique predictors of word recognition,…
de Rivera, Christina; Boutet, Isabelle; Zicker, Steven C; Milgram, Norton W
2005-03-01
Tasks requiring visual discrimination are commonly used in assessment of canine cognitive function. However, little is known about canine visual processing, and virtually nothing is known about the effects of age on canine visual function. This study describes a novel behavioural method developed to assess one aspect of canine visual function, namely contrast sensitivity. Four age groups (young, middle aged, old, and senior) were studied. We also included a group of middle aged to old animals that had been maintained for at least 4 years on a specially formulated food containing a broad spectrum of antioxidants and mitochondrial cofactors. Performance of this group was compared with a group in the same age range maintained on a control diet. In the first phase, all animals were trained to discriminate between two high contrast shapes. In the second phase, contrast was progressively reduced by increasing the luminance of the shapes. Performance decreased as a function of age, but the differences did not achieve statistical significance, possibly because of a small sample size in the young group. All age groups were able to acquire the initial discrimination, although the two older age groups showed slower learning. Errors increased with decreasing contrast with the maximal number of errors for the 1% contrast shape. Also, all animals on the antioxidant diet learned the task and had significantly fewer errors at the high contrast compared with the animals on the control diet. The initial results suggest that contrast sensitivity deteriorates with age in the canine while form perception is largely unaffected by age.
Fan, Jianping; Gao, Yuli; Luo, Hangzai
2008-03-01
In this paper, we have developed a new scheme for achieving multilevel annotations of large-scale images automatically. To achieve more sufficient representation of various visual properties of the images, both the global visual features and the local visual features are extracted for image content representation. To tackle the problem of huge intraconcept visual diversity, multiple types of kernels are integrated to characterize the diverse visual similarity relationships between the images more precisely, and a multiple kernel learning algorithm is developed for SVM image classifier training. To address the problem of huge interconcept visual similarity, a novel multitask learning algorithm is developed to learn the correlated classifiers for the sibling image concepts under the same parent concept and enhance their discrimination and adaptation power significantly. To tackle the problem of huge intraconcept visual diversity for the image concepts at the higher levels of the concept ontology, a novel hierarchical boosting algorithm is developed to learn their ensemble classifiers hierarchically. In order to assist users on selecting more effective hypotheses for image classifier training, we have developed a novel hyperbolic framework for large-scale image visualization and interactive hypotheses assessment. Our experiments on large-scale image collections have also obtained very positive results.
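The multiple kernel learning ingredient mentioned above can be illustrated, very loosely, with a toy version. The sketch below is not the paper's algorithm: it combines three hypothetical base kernels (one linear, two RBF bandwidths) with weights set by a kernel-target alignment heuristic, then fits a kernel ridge classifier on the combined kernel. All data, bandwidths, and the weighting scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-class data with a non-linear (XOR-like) class structure
X = rng.normal(size=(120, 2))
y = np.sign(X[:, 0] * X[:, 1])

def rbf(A, B, gamma):
    # Gaussian (RBF) kernel matrix between row sets A and B
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

# Base kernels: one linear plus two RBF bandwidths
kernels = [X @ X.T, rbf(X, X, 0.5), rbf(X, X, 5.0)]

# Weight each kernel by its alignment with the label kernel y y^T
# (the constant ||y y^T|| factor is dropped since it rescales all weights equally)
yy = np.outer(y, y)
mu = np.array([(K * yy).sum() / np.linalg.norm(K) for K in kernels])
mu = np.clip(mu, 0, None)
mu /= mu.sum()

# Kernel ridge classifier on the combined kernel
K = sum(m * Km for m, Km in zip(mu, kernels))
alpha = np.linalg.solve(K + 0.1 * np.eye(len(y)), y)
acc = (np.sign(K @ alpha) == y).mean()
print(round(acc, 2))
```

Real MKL algorithms learn the kernel weights jointly with the classifier objective; the alignment shortcut here only conveys the idea that different similarity measures can be blended before training a single kernel machine.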
Feldmann-Wüstefeld, Tobias; Uengoer, Metin; Schubö, Anna
2015-11-01
Besides visual salience and observers' current intention, prior learning experience may influence deployment of visual attention. Associative learning models postulate that observers pay more attention to stimuli previously experienced as reliable predictors of specific outcomes. To investigate the impact of learning experience on deployment of attention, we combined an associative learning task with a visual search task and measured event-related potentials of the EEG as neural markers of attention deployment. In the learning task, participants categorized stimuli varying in color/shape with only one dimension being predictive of category membership. In the search task, participants searched a shape target while disregarding irrelevant color distractors. Behavioral results showed that color distractors impaired performance to a greater degree when color rather than shape was predictive in the learning task. Neurophysiological results show that the amplified distraction was due to differential attention deployment (N2pc). Experiment 2 showed that when color was predictive for learning, color distractors captured more attention in the search task (ND component) and more suppression of color distractor was required (PD component). The present results thus demonstrate that priority in visual attention is biased toward predictive stimuli, which allows learning experience to shape selection. We also show that learning experience can overrule strong top-down control (blocked tasks, Experiment 3) and that learning experience has a longer-term effect on attention deployment (tasks on two successive days, Experiment 4). © 2015 Society for Psychophysiological Research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Pinto, Marcos; Conklin, Heather M.; Li Chenghong
Purpose: The primary objective of this study was to determine whether children with localized ependymoma experience a decline in verbal or visual-auditory learning after conformal radiation therapy (CRT). The secondary objective was to investigate the impact of age and select clinical factors on learning before and after treatment. Methods and Materials: Learning in a sample of 71 patients with localized ependymoma was assessed with the California Verbal Learning Test (CVLT-C) and the Visual-Auditory Learning Test (VAL). Learning measures were administered before CRT, at 6 months, and then yearly for a total of 5 years. Results: There was no significant decline on measures of verbal or visual-auditory learning after CRT; however, younger age, more surgeries, and cerebrospinal fluid shunting did predict lower scores at baseline. There were significant longitudinal effects (improved learning scores after treatment) among older children on the CVLT-C and children that did not receive pre-CRT chemotherapy on the VAL. Conclusion: There was no evidence of global decline in learning after CRT in children with localized ependymoma. Several important implications from the findings include the following: (1) identification of and differentiation among variables with transient vs. long-term effects on learning, (2) demonstration that children treated with chemotherapy before CRT had greater risk of adverse visual-auditory learning performance, and (3) establishment of baseline and serial assessment as critical in ascertaining necessary sensitivity and specificity for the detection of modest effects.
The cerebellum and visual perceptual learning: evidence from a motion extrapolation task.
Deluca, Cristina; Golzar, Ashkan; Santandrea, Elisa; Lo Gerfo, Emanuele; Eštočinová, Jana; Moretto, Giuseppe; Fiaschi, Antonio; Panzeri, Marta; Mariotti, Caterina; Tinazzi, Michele; Chelazzi, Leonardo
2014-09-01
Visual perceptual learning is widely assumed to reflect plastic changes occurring along the cerebro-cortical visual pathways, including at the earliest stages of processing, though increasing evidence indicates that higher-level brain areas are also involved. Here we addressed the possibility that the cerebellum plays an important role in visual perceptual learning. Within the realm of motor control, the cerebellum supports learning of new skills and recalibration of motor commands when movement execution is consistently perturbed (adaptation). Growing evidence indicates that the cerebellum is also involved in cognition and mediates forms of cognitive learning. Therefore, the obvious question arises whether the cerebellum might play a similar role in learning and adaptation within the perceptual domain. We explored a possible deficit in visual perceptual learning (and adaptation) in patients with cerebellar damage using variants of a novel motion-extrapolation psychophysical paradigm. Compared to their age- and gender-matched controls, patients with focal damage to the posterior (but not the anterior) cerebellum showed strongly diminished learning, in terms of both rate and amount of improvement over time. Consistent with a double-dissociation pattern, patients with focal damage to the anterior cerebellum instead showed more severe clinical motor deficits, indicative of a distinct role of the anterior cerebellum in the motor domain. The collected evidence demonstrates that a pure form of slow-incremental visual perceptual learning is crucially dependent on the intact cerebellum, bearing out the notion that the human cerebellum acts as a learning device for motor, cognitive and perceptual functions. We interpret the deficit in terms of an inability to fine-tune predictive models of the incoming flow of visual perceptual input over time.
Moreover, our results suggest a strong dissociation between the role of different portions of the cerebellum in motor versus non-motor functions, with only the posterior lobe being responsible for learning in the perceptual domain. Copyright © 2014. Published by Elsevier Ltd.
Visual research in clinical education.
Bezemer, Jeff
2017-01-01
The aim of this paper is to explore what might be gained from collecting and analysing visual data, such as photographs, scans, drawings, video and screen recordings, in clinical educational research. Its focus is on visual research that looks at teaching and learning 'as it naturally occurs' in the work place, in simulation centres and other sites, and also involves the collection and analysis of visual learning materials circulating in these sites. With the ubiquity of digital recording devices, video data and visual learning materials are now relatively cheap to collect. Compared to other domains of education research visual materials are not widely used in clinical education research. The paper sets out to identify and reflect on the possibilities for visual research using examples from an ethnographic study on surgical and inter-professional learning in the operating theatres of a London hospital. The paper shows how visual research enables recognition, analysis and critical evaluation of (1) the hidden curriculum, such as the meanings implied by embodied, visible actions of clinicians; (2) the ways in which clinical teachers design multimodal learning environments using a range of modes of communication available to them, combining, for instance, gesture and speech; (3) the informal assessment of clinical skills, and the intricate relation between trainee performance and supervisor feedback; (4) the potentialities and limitations of different visual learning materials, such as textbooks and videos, for representing medical knowledge. The paper concludes with theoretical and methodological reflections on what can be made visible, and therefore available for analysis, explanation and evaluation if visual materials are used for clinical education research, and what remains unaccounted for if written language remains the dominant mode in the research cycle. Opportunities for quantitative analysis and ethical implications are also discussed. 
© 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
Marschark, Marc; Pelz, Jeff B.; Convertino, Carol; Sapere, Patricia; Arndt, Mary Ellen; Seewagen, Rosemarie
2006-01-01
This study examined visual information processing and learning in classrooms including both deaf and hearing students. Of particular interest were the effects on deaf students’ learning of live (three-dimensional) versus video-recorded (two-dimensional) sign language interpreting and the visual attention strategies of more and less experienced deaf signers exposed to simultaneous, multiple sources of visual information. Results from three experiments consistently indicated no differences in learning between three-dimensional and two-dimensional presentations among hearing or deaf students. Analyses of students’ allocation of visual attention and the influence of various demographic and experimental variables suggested considerable flexibility in deaf students’ receptive communication skills. Nevertheless, the findings also revealed a robust advantage in learning in favor of hearing students. PMID:16628250
Efficacy of Simulation-Based Learning of Electronics Using Visualization and Manipulation
ERIC Educational Resources Information Center
Chen, Yu-Lung; Hong, Yu-Ru; Sung, Yao-Ting; Chang, Kuo-En
2011-01-01
Software for simulation-based learning of electronics was implemented to help learners understand complex and abstract concepts through observing external representations and exploring concept models. The software comprises modules for visualization and simulative manipulation. Differences in learning performance of using the learning software…
Analysing the physics learning environment of visually impaired students in high schools
NASA Astrophysics Data System (ADS)
Toenders, Frank G. C.; de Putter-Smits, Lesley G. A.; Sanders, Wendy T. M.; den Brok, Perry
2017-07-01
Although visually impaired students attend regular high school, their enrolment in advanced science classes is dramatically low. In our research we evaluated the physics learning environment of a blind high school student in a regular Dutch high school. For visually impaired students to grasp physics concepts, time and additional materials to support the learning process are key. Time for teachers to develop teaching methods for such students is scarce. Suggestions for changes to the learning environment and to the materials used are given.
NASA Astrophysics Data System (ADS)
Moodley, Sadha
The purpose of this study was to determine whether the use of dynamic computer-based visualizations of the classical model of particle behavior helps to improve student understanding, performance, and interest in science when used by teachers as visual presentations to complement their traditional methods of teaching. The software, Virtual Molecular Dynamics Laboratory (VMDL), was developed at the Center for Polymer Studies at Boston University through funding from the National Science Foundation. The design of the study included five pairs of classes in four different schools in New England from the inner city and from advantaged suburbs. The study employed a treatment-control group design for testing the impact of several VMDL simulations on student learning in several content areas from traditional chemistry and physical science courses. The study employed a mixed qualitative and quantitative design. The quantitative part involved administering the Group Assessment of Logical Thinking (GALT) as well as post-tests that were topic specific. An Analysis of Covariance (ANCOVA) was conducted on the test scores with the GALT scores serving as a covariate. Results of the ANCOVA showed that students' understanding and performance were better in classes where teachers used the computer-based dynamic visualizations to complement their traditional teaching. GALT scores were significantly different among schools but very similar within schools. They were significant in adjusting post-test scores for pre-treatment differences for only two of the schools. The treatment groups outscored the control groups in all five comparisons. The mean differences reached statistical significance at the p < .01 level in only four of the comparisons. The qualitative part of the study involved classroom observations and student interviews. 
Analysis of classroom observations revealed a shift in classroom dynamics to more learner-centeredness with greater engagement by students, especially in classes that tended to have little student participation without the simulations. Analysis of the student interviews indicated that the dynamic visualizations made learning more enjoyable, helped with remembering, and enhanced students' abilities to make connections between the nanoscopic and macroscopic science.
Feature and Region Selection for Visual Learning.
Zhao, Ji; Wang, Liantao; Cabral, Ricardo; De la Torre, Fernando
2016-03-01
Visual learning problems, such as object classification and action recognition, are typically approached using extensions of the popular bag-of-words (BoW) model. Despite its great success, it is unclear what visual features the BoW model is learning. Which regions in the image or video are used to discriminate among classes? Which are the most discriminative visual words? Answering these questions is fundamental for understanding existing BoW models and inspiring better models for visual recognition. To answer these questions, this paper presents a method for feature selection and region selection in the visual BoW model. This allows for an intermediate visualization of the features and regions that are important for visual learning. The main idea is to assign latent weights to the features or regions, and jointly optimize these latent variables with the parameters of a classifier (e.g., support vector machine). There are four main benefits of our approach: 1) our approach accommodates non-linear additive kernels, such as the popular χ² and intersection kernels; 2) our approach is able to handle both regions in images and spatio-temporal regions in videos in a unified way; 3) the feature selection problem is convex, and both problems can be solved using a scalable reduced gradient method; and 4) we point out strong connections with multiple kernel learning and multiple instance learning approaches. Experimental results in the PASCAL VOC 2007, MSR Action Dataset II and YouTube illustrate the benefits of our approach.
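The core idea of jointly optimizing latent selection weights with a classifier can be caricatured in a few lines. The sketch below is a simplified stand-in, not the authors' convex reduced-gradient method: it alternates gradient steps on a hinge loss between a linear classifier and simplex-constrained per-feature weights, on synthetic data where only two of ten features carry label information.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 10 features; only features 0 and 1 are informative
X = rng.normal(size=(200, 10))
y = np.sign(X[:, 0] + X[:, 1])

w = np.zeros(10)          # classifier weights
beta = np.ones(10) / 10   # latent per-feature selection weights

for _ in range(500):
    # Samples violating the margin under the current weighted features
    margin = y * ((beta * X) @ w)
    active = margin < 1
    # Hinge-loss gradient shared by both updates: f(x) = sum_j beta_j w_j x_j
    g = -(y[active, None] * X[active]).sum(axis=0)
    w -= 0.01 * (g * beta + 0.1 * w)   # classifier step (with L2 shrinkage)
    beta -= 0.01 * (g * w)             # selection-weight step
    beta = np.clip(beta, 0, None)
    beta /= beta.sum()                 # keep beta non-negative, summing to 1

# The informative features should typically end up with the largest weights
print(np.round(beta, 2))
```

After training, most of the selection mass usually concentrates on features 0 and 1, which is exactly the kind of intermediate visualization of "what matters" that the paper is after, here in a deliberately miniature form.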
Anderson, Afrouz A; Parsa, Kian; Geiger, Sydney; Zaragoza, Rachel; Kermanian, Riley; Miguel, Helga; Dashtestani, Hadis; Chowdhry, Fatima A; Smith, Elizabeth; Aram, Siamak; Gandjbakhche, Amir H
2018-01-01
Existing literature outlines the quality and location of activation in the prefrontal cortex (PFC) during working memory (WM) tasks. However, the effects of individual differences on the underlying neural process of WM tasks are still unclear. In this functional near infrared spectroscopy study, we administered a visual and auditory n-back task to examine activation in the PFC while considering the influences of task performance and preferred learning strategy (VARK score). While controlling for age, results indicated that high performance (HP) subjects (accuracy > 90%) showed task-dependent lower activation compared to normal performance (NP) subjects in the PFC region. Specifically, HP groups showed lower activation in the left dorsolateral PFC (DLPFC) region during performance of the auditory task, whereas during the visual task they showed lower activation in the right DLPFC. After accounting for learning style, we found a correlation between visual and aural VARK scores and level of activation in the PFC. Subjects with higher visual VARK scores displayed lower activation during the auditory task in the left DLPFC, while those with higher visual scores exhibited higher activation during the visual task in bilateral DLPFC. During performance of the auditory task, HP subjects had higher visual VARK scores compared to NP subjects, indicating an effect of learning style on task performance and activation. The results of this study show that learning style and task performance can influence PFC activation, with applications toward neurological implications of learning style and populations with deficits in auditory or visual processing.
Incidental Auditory Category Learning
Gabay, Yafit; Dick, Frederic K.; Zevin, Jason D.; Holt, Lori L.
2015-01-01
Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in one of four possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from one of four distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. PMID:26010588
Attentional Bias in Human Category Learning: The Case of Deep Learning.
Hanson, Catherine; Caglar, Leyla Roskan; Hanson, Stephen José
2018-01-01
Category learning performance is influenced by both the nature of the category's structure and the way category features are processed during learning. Shepard (1964, 1987) showed that stimuli can have structures with features that are statistically uncorrelated (separable) or statistically correlated (integral) within categories. Humans find it much easier to learn categories having separable features, especially when attention to only a subset of relevant features is required, and harder to learn categories having integral features, which require consideration of all of the available features and integration of all the relevant category features satisfying the category rule (Garner, 1974). In contrast to humans, a single hidden layer backpropagation (BP) neural network has been shown to learn both separable and integral categories equally easily, independent of the category rule (Kruschke, 1993). This "failure" to replicate human category performance appeared to be strong evidence that connectionist networks were incapable of modeling human attentional bias. We tested the presumed limitations of attentional bias in networks in two ways: (1) by having networks learn categories with exemplars that have high feature complexity in contrast to the low dimensional stimuli previously used, and (2) by investigating whether a Deep Learning (DL) network, which has demonstrated humanlike performance in many different kinds of tasks (language translation, autonomous driving, etc.), would display human-like attentional bias during category learning. We were able to show a number of interesting results. First, we replicated the failure of BP to differentially process integral and separable category structures when low dimensional stimuli are used (Garner, 1974; Kruschke, 1993). Second, we show that using the same low dimensional stimuli, Deep Learning (DL), unlike BP but similar to humans, learns separable category structures more quickly than integral category structures. 
Third, we show that even BP can exhibit human-like learning differences between integral and separable category structures when high-dimensional stimuli (face exemplars) are used. We conclude, after visualizing the hidden unit representations, that DL appears to extend initial learning due to feature development, thereby reducing destructive feature competition by incrementally refining feature detectors throughout later layers until a tipping point (in terms of error) is reached, resulting in rapid asymptotic learning.
Motor learning and working memory in children born preterm: a systematic review.
Jongbloed-Pereboom, Marjolein; Janssen, Anjo J W M; Steenbergen, Bert; Nijhuis-van der Sanden, Maria W G
2012-04-01
Children born preterm have a higher risk for developing motor, cognitive, and behavioral problems. Motor problems can occur in combination with working memory problems, and working memory is important for explicit learning of motor skills. The relation between motor learning and working memory has never been reviewed. The goal of this review was to provide an overview of motor learning, visual working memory and the role of working memory on motor learning in preterm children. A systematic review conducted in four databases identified 38 relevant articles, which were evaluated for methodological quality. Only 4 of 38 articles discussed motor learning in preterm children. Thirty-four studies reported on visual working memory; preterm birth affected performance on visual working memory tests. Information regarding motor learning and the role of working memory on the different components of motor learning was not available. Future research should address this issue. Insight in the relation between motor learning and visual working memory may contribute to the development of evidence based intervention programs for children born preterm. Copyright © 2012 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Oh, Yunjin; Lee, Soon Min
2016-01-01
This study explored whether learning-related anxiety would negatively affect intention to persist with e-learning among students with visual impairment, and examined the roles of three online interactions in the relationship between learning-related anxiety and intention to persist with e-learning. For this study, a convenience sample of…
Effect of Computer-Assisted Learning on Students' Dental Anatomy Waxing Performance.
Kwon, So Ran; Hernández, Marcela; Blanchette, Derek R; Lam, Matthew T; Gratton, David G; Aquilino, Steven A
2015-09-01
The aim of this study was to evaluate the impact of computer-assisted learning on first-year dental students' waxing abilities and self-evaluation skills. Additionally, this study sought to determine how well digital evaluation software performed compared to faculty grading with respect to students' technical scores on a practical competency examination. First-year students at one U.S. dental school were assigned to one of three groups: control (n=40), E4D Compare (n=20), and Sirona prepCheck (n=19). Students in the control group were taught by traditional teaching methodologies, and the technology-assisted groups received both traditional training and supplementary feedback from the corresponding digital system. Five outcomes were measured: visual assessment score, self-evaluation score, and digital assessment scores at 0.25 mm, 0.30 mm, and 0.35 mm tolerance. The scores from visual assessment and self-evaluation were examined for differences among groups using the Kruskal-Wallis test. Correlation between the visual assessment and digital scores was measured using Pearson and Spearman rank correlation coefficients. At completion of the course, students were asked to complete a survey on the use of these digital technologies. All 79 students in the first-year class participated in the study, for a 100% response rate. The results showed that the visual assessment and self-evaluation scores did not differ among groups (p>0.05). Overall correlations between visual and digital assessment scores were modest though statistically significant (5% level of significance). Analysis of survey responses completed by students in the technology groups showed that profiles for the two groups were similar and not favorable towards digital technology. The study concluded that technology-assisted training did not affect these students' waxing performance or self-evaluation skills and that visual scores given by faculty and digital assessment scores correlated moderately.
Semantic Indexing of Multimedia Content Using Visual, Audio, and Text Cues
NASA Astrophysics Data System (ADS)
Adams, W. H.; Iyengar, Giridharan; Lin, Ching-Yung; Naphade, Milind Ramesh; Neti, Chalapathy; Nock, Harriet J.; Smith, John R.
2003-12-01
We present a learning-based approach to the semantic indexing of multimedia content using cues derived from audio, visual, and text features. We approach the problem by developing a set of statistical models for a predefined lexicon. Novel concepts are then mapped in terms of the concepts in the lexicon. To achieve robust detection of concepts, we exploit features from multiple modalities, namely, audio, video, and text. Concept representations are modeled using Gaussian mixture models (GMM), hidden Markov models (HMM), and support vector machines (SVM). Models such as Bayesian networks and SVMs are used in a late-fusion approach to model concepts that are not explicitly modeled in terms of features. Our experiments indicate promise in the proposed classification and fusion methodologies: our proposed fusion scheme achieves more than 10% relative improvement over the best unimodal concept detector.
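The late-fusion step described above can be illustrated with a toy stand-in: instead of Bayesian networks or SVMs over real concept detectors, the sketch below fits a logistic-regression fusion layer over synthetic unimodal detector scores (audio, visual, text). All scores, noise levels, and the choice of fusion model are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic unimodal concept scores for 300 clips; in this toy setup the
# visual detector is most reliable for the concept and text is noisiest
truth = rng.integers(0, 2, size=300)
audio = truth + rng.normal(0, 1.0, 300)
visual = truth + rng.normal(0, 0.5, 300)
text = truth + rng.normal(0, 2.0, 300)
S = np.column_stack([audio, visual, text])

# Late fusion: learn a logistic regression over the unimodal scores
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(S @ w + b)))          # fused concept probability
    w -= 0.5 * (S.T @ (p - truth)) / len(truth)  # gradient step on log loss
    b -= 0.5 * (p - truth).mean()

fused_acc = ((1 / (1 + np.exp(-(S @ w + b))) > 0.5) == truth).mean()
visual_only_acc = ((visual > 0.5) == truth).mean()
print(round(fused_acc, 2), round(visual_only_acc, 2))
```

Because the fusion layer can down-weight the noisy text scores and pool complementary evidence from audio, the fused detector typically matches or beats the best single modality, which is the qualitative effect the abstract reports for its (more sophisticated) fusion models.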
Technologies That Capitalize on Study Skills with Learning Style Strengths
ERIC Educational Resources Information Center
Howell, Dusti D.
2008-01-01
This article addresses the tools available in the rapidly changing digital learning environment and offers a variety of approaches for how they can assist students with visual, auditory, or kinesthetic learning strengths. Teachers can use visual, auditory, and kinesthetic assessment tests to identify learning preferences and then recommend…
Using Visualization to Motivate Student Participation in Collaborative Online Learning Environments
ERIC Educational Resources Information Center
Jin, Sung-Hee
2017-01-01
Online participation in collaborative online learning environments is instrumental in motivating students to learn and promoting their learning satisfaction, but there has been little research on the technical supports for motivating students' online participation. The purpose of this study was to develop a visualization tool to motivate learners…
ERIC Educational Resources Information Center
Kim, Yong-Jin; Chang, Nam-Kee
2001-01-01
Investigates the changes of neuronal response according to a four time repetition of audio-visual learning. Obtains EEG data from the prefrontal (Fp1, Fp2) lobe from 20 subjects at the 8th grade level. Concludes that the habituation of neuronal response shows up in repetitive audio-visual learning and brain hemisphericity can be changed by…
From a Gloss to a Learning Tool: Does Visual Aids Enhance Better Sentence Comprehension?
ERIC Educational Resources Information Center
Sato, Takeshi; Suzuki, Akio
2012-01-01
The aim of this study is to optimize CALL environments as a learning tool rather than a gloss, focusing on the learning of polysemous words which refer to spatial relationship between objects. A lot of research has already been conducted to examine the efficacy of visual glosses while reading L2 texts and has reported that visual glosses can be…
Image Statistics and the Representation of Material Properties in the Visual Cortex
Baumgartner, Elisabeth; Gegenfurtner, Karl R.
2016-01-01
We explored perceived material properties (roughness, texturedness, and hardness) with a novel approach that compares perception, image statistics and brain activation, as measured with fMRI. We initially asked participants to rate 84 material images with respect to the above mentioned properties, and then scanned 15 of the participants with fMRI while they viewed the material images. The images were analyzed with a set of image statistics capturing their spatial frequency and texture properties. Linear classifiers were then applied to the image statistics as well as the voxel patterns of visually responsive voxels and early visual areas to discriminate between images with high and low perceptual ratings. Roughness and texturedness could be classified above chance level based on image statistics. Roughness and texturedness could also be classified based on the brain activation patterns in visual cortex, whereas hardness could not. Importantly, the agreement in classification based on image statistics and brain activation was also above chance level. Our results show that information about visual material properties is to a large degree contained in low-level image statistics, and that these image statistics are also partially reflected in brain activity patterns induced by the perception of material images. PMID:27582714
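The pipeline of the study (image statistics fed to a linear classifier) can be sketched with a toy version: compute two simple statistics, contrast and high-spatial-frequency energy, from synthetic "rough" and "smooth" patches and separate them with a linear projection. The patch generator and both statistics are invented for illustration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def texture(rough):
    """Invented 32x32 patch: rough patches keep high-spatial-frequency energy;
    smooth patches are low-pass filtered by circular convolution."""
    img = rng.normal(0, 1, (32, 32))
    if not rough:
        k = np.ones((4, 4)) / 16
        img = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, (32, 32))))
    return img

def stats(img):
    """Two toy image statistics: contrast and high-frequency power."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    r = np.hypot(*np.meshgrid(np.arange(32) - 16, np.arange(32) - 16))
    return np.array([img.std(), f[r > 8].mean()])

X = np.array([stats(texture(i % 2 == 1)) for i in range(100)])
y = np.arange(100) % 2  # 0 = smooth, 1 = rough

# Linear classifier: project onto the difference of class means and threshold.
w = X[y == 1].mean(0) - X[y == 0].mean(0)
proj = X @ w
pred = (proj > proj.mean()).astype(int)
acc = (pred == y).mean()
```

In this toy case the two statistics separate the classes almost perfectly, mirroring the finding that low-level image statistics carry much of the information about perceived material properties.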
English Orthographic Learning in Chinese-L1 Young EFL Beginners.
Cheng, Yu-Lin
2017-12-01
English orthographic learning, among Chinese-L1 children who were beginning to learn English as a foreign language, was documented when: (1) only visual memory was at their disposal, (2) visual memory and either some letter-sound knowledge or some semantic information was available, and (3) visual memory, some letter-sound knowledge and some semantic information were all available. When only visual memory was available, orthographic learning (measured via an orthographic choice test) was meagre. Orthographic learning improved significantly when either semantic information or letter-sound knowledge supplemented visual memory, with letter-sound knowledge producing the larger gain. Although the results suggest that letter-sound knowledge plays a more important role than semantic information, letter-sound knowledge alone does not suffice to achieve perfect orthographic learning, as orthographic learning was greatest when letter-sound knowledge and semantic information were both available. The present findings are congruent with the view that the orthography of a foreign language drives its orthographic learning more than L1 orthographic learning experience does, thus extending Share's (Cognition 55:151-218, 1995) self-teaching hypothesis to include non-alphabetic-L1 children's orthographic learning of an alphabetic foreign language. The minimal development observed in the experiment-I control group indicates that very little letter-sound knowledge develops in the absence of dedicated letter-sound training. Given the important role of letter-sound knowledge in English orthographic learning, dedicated letter-sound instruction is highly recommended.
NASA Astrophysics Data System (ADS)
Kilb, D. L.; Fundis, A. T.; Risien, C. M.
2012-12-01
The focus of the Education and Public Engagement (EPE) component of the NSF's Ocean Observatories Initiative (OOI) is to provide a new layer of cyber-interactivity for undergraduate educators to bring near real-time data from the global ocean into learning environments. To accomplish this, we are designing six online services: 1) visualization tools, 2) a lesson builder, 3) a concept map builder, 4) educational web services (middleware), 5) collaboration tools and 6) an educational resource database. Here, we report on our Fall 2012 release, which includes the first four of these services. 1) Interactive visualization tools allow users to select data of interest, display the data in various views (e.g., maps, time-series and scatter plots) and obtain statistical measures such as the mean, the standard deviation and a regression line fit to the selected data. Specific visualization tools include a tool to compare different months of data, a time-series explorer tool to investigate the temporal evolution of selected data parameters (e.g., sea water temperature or salinity), a glider profile tool that displays ocean glider tracks and associated transects, and a data comparison tool that allows users to view the data either in a scatter-plot view comparing one parameter with another, or in a time-series view. 2) Our interactive lesson builder tool allows users to develop a library of online lesson units that are collaboratively editable and sharable, and it provides starter templates grounded in learning theory. 3) Our interactive concept map tool allows the user to build and use concept maps, a graphical interface for mapping the connections between concepts and ideas. This tool also provides semantic-based recommendations and allows for embedding of associated resources such as movies, images and blogs. 4) Educational web services (middleware) will provide an educational resource database API.
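The statistical measures the visualization tools compute (mean, standard deviation, least-squares regression line) can be sketched over a made-up time series; the temperature values below are invented for illustration.

```python
import numpy as np

# Invented glider time series: sea-water temperature (deg C) vs. time (days).
days = np.arange(10.0)
temp = np.array([14.2, 14.5, 14.1, 14.8, 15.0, 15.3, 15.1, 15.6, 15.8, 16.0])

mean, std = temp.mean(), temp.std(ddof=1)
slope, intercept = np.polyfit(days, temp, 1)   # least-squares regression line
```

These three quantities are exactly what a time-series explorer would overlay on a plot of the selected data.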
Learning Science Using AR Book: A Preliminary Study on Visual Needs of Deaf Learners
NASA Astrophysics Data System (ADS)
Megat Mohd. Zainuddin, Norziha; Badioze Zaman, Halimah; Ahmad, Azlina
Augmented Reality (AR) is a technology that is projected to have a more significant role in teaching and learning, particularly in visualising abstract concepts in the learning process. AR is a visually oriented technology and is therefore well suited to deaf learners, who are generally classified as visual learners. Realising the importance of a visual learning style for deaf learners in learning Science, this paper reports on a preliminary study, part of ongoing research, on problems faced by deaf learners in learning the topic of Microorganisms. Being visual learners, they have problems with current text books, which are more text-based than graphic-based. In this preliminary study, a qualitative approach using the ethnographic observational technique was used to allow interaction with the three deaf learners who participated throughout this study (they were also involved actively in the design and development of the AR Book). Interviews with their teacher and doctor were also conducted to identify their learning and medical problems, respectively. Preliminary findings have confirmed the need to design and develop a special Augmented Reality Book called AR-Science for Deaf Learners (AR-SiD).
NASA Astrophysics Data System (ADS)
Jiménez del Toro, Oscar; Atzori, Manfredo; Otálora, Sebastian; Andersson, Mats; Eurén, Kristian; Hedlund, Martin; Rönnquist, Peter; Müller, Henning
2017-03-01
The Gleason grading system was developed for assessing prostate histopathology slides. It is correlated to the outcome and incidence of relapse in prostate cancer. Although this grading is part of a standard protocol performed by pathologists, visual inspection of whole slide images (WSIs) has an inherent subjectivity when evaluated by different pathologists. Computer aided pathology has been proposed to generate an objective and reproducible assessment that can help pathologists in their evaluation of new tissue samples. Deep convolutional neural networks are a promising approach for the automatic classification of histopathology images and can hierarchically learn subtle visual features from the data. However, a large number of manual annotations from pathologists are commonly required to obtain sufficient statistical generalization when training new models that can evaluate the daily generated large amounts of pathology data. A fully automatic approach that detects prostatectomy WSIs with high-grade Gleason score is proposed. We evaluate the performance of various deep learning architectures training them with patches extracted from automatically generated regions-of-interest rather than from manually segmented ones. Relevant parameters for training the deep learning model such as size and number of patches as well as the inclusion or not of data augmentation are compared between the tested deep learning architectures. 235 prostate tissue WSIs with their pathology report from the publicly available TCGA data set were used. An accuracy of 78% was obtained in a balanced set of 46 unseen test images with different Gleason grades in a 2-class decision: high vs. low Gleason grade. Grades 7-8, which represent the boundary decision of the proposed task, were particularly well classified. The method is scalable to larger data sets with straightforward re-training of the model to include data from multiple sources, scanners and acquisition techniques. 
Automatically generated heatmaps for the WSIs could be useful for improving the selection of patches when training networks on big data sets and for guiding the visual inspection of these images.
Giesbrecht, Barry; Sy, Jocelyn L.; Guerin, Scott A.
2012-01-01
Environmental context learned without awareness can facilitate visual processing of goal-relevant information. According to one view, the benefit of implicitly learned context relies on the neural systems involved in spatial attention and hippocampus-mediated memory. While this view has received empirical support, it contradicts traditional models of hippocampal function. The purpose of the present work was to clarify the influence of spatial context on visual search performance and on brain structures involved in memory and attention. Event-related functional magnetic resonance imaging revealed that activity in the hippocampus as well as in visual and parietal cortex was modulated by learned visual context even though participants’ subjective reports and performance on a post-experiment recognition task indicated no explicit knowledge of the learned context. Moreover, the magnitude of the initial selective hippocampus response predicted the magnitude of the behavioral benefit due to context observed at the end of the experiment. The results suggest that implicit contextual learning is mediated by attention and memory and that these systems interact to support search of our environment. PMID:23099047
Dynamic functional brain networks involved in simple visual discrimination learning.
Fidalgo, Camino; Conejo, Nélida María; González-Pardo, Héctor; Arias, Jorge Luis
2014-10-01
Visual discrimination tasks have been widely used to evaluate many types of learning and memory processes. However, little is known about the brain regions involved at different stages of visual discrimination learning. We used cytochrome c oxidase histochemistry to evaluate changes in regional brain oxidative metabolism during visual discrimination learning in a water-T maze at different time points during training. As compared with control groups, the results of the present study reveal the gradual activation of cortical (prefrontal and temporal cortices) and subcortical brain regions (including the striatum and the hippocampus) associated with the mastery of a simple visual discrimination task. On the other hand, the brain regions involved and their functional interactions changed progressively over days of training. Regions associated with novelty, emotion, visuo-spatial orientation and motor aspects of the behavioral task seem to be relevant during the earlier phase of training, whereas a brain network comprising the prefrontal cortex was found along the whole learning process. This study highlights the relevance of functional interactions among brain regions to investigate learning and memory processes. Copyright © 2014 Elsevier Inc. All rights reserved.
Cross-modal interaction between visual and olfactory learning in Apis cerana.
Zhang, Li-Zhen; Zhang, Shao-Wu; Wang, Zi-Long; Yan, Wei-Yu; Zeng, Zhi-Jiang
2014-10-01
The power of the small honeybee brain carrying out behavioral and cognitive tasks has been shown repeatedly to be highly impressive. The present study investigates, for the first time, the cross-modal interaction between visual and olfactory learning in Apis cerana. To explore the role and molecular mechanisms of cross-modal learning in A. cerana, the honeybees were trained and tested in a modified Y-maze with seven visual and five olfactory stimuli, where a robust visual threshold for black/white gratings (period of 2.8°-3.8°) and a relative olfactory threshold (concentration of 50-25%) were obtained. Meanwhile, the expression levels of five genes (AcCREB, Acdop1, Acdop2, Acdop3, Actyr1) related to learning and memory were analyzed under different training conditions by real-time RT-PCR. The experimental results indicate that A. cerana can exhibit cross-modal interactions between visual and olfactory learning by reducing the threshold level of the conditioning stimuli, and that these genes may play important roles in the learning process of honeybees.
Communicating Science Concepts to Individuals with Visual Impairments Using Short Learning Modules
ERIC Educational Resources Information Center
Stender, Anthony S.; Newell, Ryan; Villarreal, Eduardo; Swearer, Dayne F.; Bianco, Elisabeth; Ringe, Emilie
2016-01-01
Of the 6.7 million individuals in the United States who are visually impaired, 63% are unemployed, and 59% have not attained an education beyond a high school diploma. Providing a basic science education to children and adults with visual disabilities can be challenging because most scientific learning relies on visual demonstrations. Creating…
Application of Frameworks in the Analysis and (Re)design of Interactive Visual Learning Tools
ERIC Educational Resources Information Center
Liang, Hai-Ning; Sedig, Kamran
2009-01-01
Interactive visual learning tools (IVLTs) are software environments that encode and display information visually and allow learners to interact with the visual information. This article examines the application and utility of frameworks in the analysis and design of IVLTs at the micro level. Frameworks play an important role in any design. They…
Suemitsu, Atsuo; Dang, Jianwu; Ito, Takayuki; Tiede, Mark
2015-10-01
Articulatory information can support learning or remediating pronunciation of a second language (L2). This paper describes an electromagnetic articulometer-based visual-feedback approach using an articulatory target presented in real-time to facilitate L2 pronunciation learning. This approach trains learners to adjust articulatory positions to match targets for an L2 vowel estimated from productions of vowels that overlap in both L1 and L2. Training of Japanese learners for the American English vowel /æ/ that included visual training improved its pronunciation regardless of whether audio training was also included. Articulatory visual feedback is shown to be an effective method for facilitating L2 pronunciation learning.
ERIC Educational Resources Information Center
Mokiwa, S. A.; Phasha, T. N.
2012-01-01
For students with visual impairments, Information and Communication Technology (ICT) has become an important means through which they can learn and access learning materials at various levels of education. However, their learning experiences in using such form of technologies have been rarely documented, thus suggests society's lack of…
ERIC Educational Resources Information Center
Bell, Justine C.
2014-01-01
To test the claim that digital learning tools enhance the acquisition of visual literacy in this generation of biology students, a learning intervention was carried out with 33 students enrolled in an introductory college biology course. This study compared learning outcomes following two types of learning tools: a traditional drawing activity, or…
ERIC Educational Resources Information Center
Ningsih; Soetjipto, Budi Eko; Sumarmi
2017-01-01
The purposes of this study were: (1) to analyze the increase in students' learning activity and learning outcomes. Observed student activities included visual, verbal, listening, writing and mental activities; (2) to analyze the improvement of student learning outcomes using the "Round Table" and "Rally Coach" Model of…
Mobility scooter driving ability in visually impaired individuals.
Cordes, Christina; Heutink, Joost; Brookhuis, Karel A; Brouwer, Wiebo H; Melis-Dankers, Bart J M
2018-06-01
To investigate how well visually impaired individuals can learn to use mobility scooters and which parts of the driving task deserve special attention. A mobility scooter driving skill test was developed to compare driving skills (e.g. reverse driving, turning) between 48 visually impaired participants (very low visual acuity = 14, low visual acuity = 10, peripheral field defects = 11, multiple visual impairments = 13) and 37 normal-sighted controls without any prior experience with mobility scooters. Performance on this test was rated on a three-point scale. Furthermore, the number of extra repetitions needed on the different elements was noted. Results showed that visually impaired participants were able to gain sufficient driving skills to use mobility scooters. Participants with visual field defects combined with low visual acuity showed the most problems learning different skills and needed more training. Reverse driving and stopping seemed to be most difficult. The present findings suggest that visually impaired individuals are able to learn to drive mobility scooters. Mobility scooter allocators should be aware that these individuals might need more training on certain elements of the driving task. Implications for rehabilitation: Visual impairments do not necessarily lead to an inability to acquire mobility scooter driving skills. Individuals with peripheral field defects (especially in combination with reduced visual acuity) need more driving ability training compared to normal-sighted people, especially to accomplish reversing. Individual assessment of visually impaired people is recommended, since participants in this study showed a wide variation in ability to learn to drive a mobility scooter.
Visual Hybrid Development Learning System (VHDLS) framework for children with autism.
Banire, Bilikis; Jomhari, Nazean; Ahmad, Rodina
2015-10-01
The effect of education on children with autism serves as a relative cure for their deficits. As a result of this, they require special techniques to gain their attention and interest in learning as compared to typical children. Several studies have shown that these children are visual learners. In this study, we proposed a Visual Hybrid Development Learning System (VHDLS) framework that is based on an instructional design model, multimedia cognitive learning theory, and learning style in order to guide software developers in developing learning systems for children with autism. The results from this study showed that the attention of children with autism increased more with the proposed VHDLS framework.
Telgen, Sebastian; Parvin, Darius; Diedrichsen, Jörn
2014-10-08
Motor learning tasks are often classified into adaptation tasks, which involve the recalibration of an existing control policy (the mapping that determines both feedforward and feedback commands), and skill-learning tasks, requiring the acquisition of new control policies. We show here that this distinction also applies to two different visuomotor transformations during reaching in humans: Mirror-reversal (left-right reversal over a mid-sagittal axis) of visual feedback versus rotation of visual feedback around the movement origin. During mirror-reversal learning, correct movement initiation (feedforward commands) and online corrections (feedback responses) were only generated at longer latencies. The earliest responses were directed into a nonmirrored direction, even after two training sessions. In contrast, for visual rotation learning, no dependency of directional error on reaction time emerged, and fast feedback responses to visual displacements of the cursor were immediately adapted. These results suggest that the motor system acquires a new control policy for mirror reversal, which initially requires extra processing time, while it recalibrates an existing control policy for visual rotations, exploiting established fast computational processes. Importantly, memory for visual rotation decayed between sessions, whereas memory for mirror reversals showed offline gains, leading to better performance at the beginning of the second session than at the end of the first. With shifts in the time-accuracy tradeoff and offline gains, mirror-reversal learning shares common features with other skill-learning tasks. We suggest that different neuronal mechanisms underlie the recalibration of an existing versus the acquisition of a new control policy, and that offline gains between sessions are a characteristic of the latter. Copyright © 2014 the authors.
Contextual cueing: implicit learning and memory of visual context guides spatial attention.
Chun, M M; Jiang, Y
1998-06-01
Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, and targets appeared within consistent locations in these arrays. Targets appearing in learned configurations were detected more quickly. This newly discovered form of search facilitation is termed contextual cueing. Contextual cueing is driven by incidentally learned associations between spatial configurations (context) and target locations. This benefit was obtained despite chance performance for recognizing the configurations, suggesting that the memory for context was implicit. The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.
Tureli, Derya; Altas, Hilal; Cengic, Ismet; Ekinci, Gazanfer; Baltacioglu, Feyyaz
2015-10-01
The aim of the study was to ascertain the learning curves of radiology residents when first introduced to an anatomic structure in magnetic resonance images (MRI) to which they had not previously been exposed. The iliolumbar ligament is a good marker for testing the learning curves of radiology residents because the ligament is not part of routine lumbar MRI reporting and has high variability in detection. Four radiologists, three residents without previous training and one mentor, studied standard axial T1- and T2-weighted images of routine lumbar MRI examinations. Radiologists had to identify the iliolumbar ligament while blinded to each other's findings. Interobserver agreement analyses, namely Cohen and Fleiss κ statistics, were performed for groups of 20 cases to evaluate the self-learning curve of the radiology residents. Mean κ values of resident-mentor pairs were 0.431, 0.608, 0.604, 0.826, and 0.963 in the analysis of successive groups (P < .001). The results indicate that the concordance between the experienced and inexperienced radiologists started weak (κ < 0.5) and gradually became very acceptable (κ > 0.8). Therefore, a junior radiology resident can obtain enough experience to identify a rather ambiguous anatomic structure in routine MRI after a brief instruction of a few minutes by a mentor and self-study of approximately 80 cases. Implementing this methodology will help radiology educators obtain more concrete ideas on the optimal time and effort required for supported self-directed visual learning processes in resident education. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
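The interobserver analysis in this record rests on Cohen's κ computed over successive groups of 20 cases. A minimal sketch of that design, with a simulated learning curve: the case calls and the per-group error counts below are invented, not the study's data.

```python
import numpy as np

def cohen_kappa(a, b):
    """Cohen's kappa for two raters' binary present/absent calls."""
    a, b = np.asarray(a), np.asarray(b)
    po = (a == b).mean()                        # observed agreement
    pe = ((a == 1).mean() * (b == 1).mean() +   # agreement expected by chance
          (a == 0).mean() * (b == 0).mean())
    return (po - pe) / (1 - pe)

rng = np.random.default_rng(2)
# Hypothetical mentor calls over 100 cases (balanced by construction).
mentor = np.arange(100) % 2
resident = mentor.copy()
# Simulated learning curve: earlier groups of 20 contain more disagreements.
errors_per_group = [8, 6, 4, 2, 0]
for g, k in enumerate(errors_per_group):
    idx = rng.choice(np.arange(g * 20, (g + 1) * 20), size=k, replace=False)
    resident[idx] = 1 - resident[idx]

kappas = [cohen_kappa(resident[g * 20:(g + 1) * 20],
                      mentor[g * 20:(g + 1) * 20]) for g in range(5)]
```

As in the study, κ rises group by group as the resident's calls converge on the mentor's.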
Learning Enhances Sensory and Multiple Non-sensory Representations in Primary Visual Cortex
Poort, Jasper; Khan, Adil G.; Pachitariu, Marius; Nemri, Abdellatif; Orsolic, Ivana; Krupic, Julija; Bauza, Marius; Sahani, Maneesh; Keller, Georg B.; Mrsic-Flogel, Thomas D.; Hofer, Sonja B.
2015-01-01
We determined how learning modifies neural representations in primary visual cortex (V1) during acquisition of a visually guided behavioral task. We imaged the activity of the same layer 2/3 neuronal populations as mice learned to discriminate two visual patterns while running through a virtual corridor, where one pattern was rewarded. Improvements in behavioral performance were closely associated with increasingly distinguishable population-level representations of task-relevant stimuli, as a result of stabilization of existing and recruitment of new neurons selective for these stimuli. These effects correlated with the appearance of multiple task-dependent signals during learning: those that increased neuronal selectivity across the population when expert animals engaged in the task, and those reflecting anticipation or behavioral choices specifically in neuronal subsets preferring the rewarded stimulus. Therefore, learning engages diverse mechanisms that modify sensory and non-sensory representations in V1 to adjust its processing to task requirements and the behavioral relevance of visual stimuli. PMID:26051421
Hempel, Dorothea; Sinnathurai, Sivajini; Haunhorst, Stephanie; Seibel, Armin; Michels, Guido; Heringer, Frank; Recker, Florian; Breitkreutz, Raoul
2016-08-01
Theoretical knowledge, visual perception, and sensorimotor skills are key elements in ultrasound education. Classroom-based presentations are used routinely to teach theoretical knowledge, whereas visual perception and sensorimotor skills typically require hands-on training (HT). We aimed to compare the effects of classroom-based lectures and case-based e-learning on the hands-on performance of trainees during an emergency ultrasound course. This is a randomized, controlled, parallel-group study. Sixty-two medical students were randomized into two groups [group 1 (G1) and group 2 (G2)]. G1 (n=29) received precourse e-learning based on 14 short screencasts (each 5 min), an on-site discussion (60 min), and a standardized HT session on the day of the course. G2 (n=31) received classroom-based presentations on the day of the course before an identical HT session. Both groups completed a multiple-choice (MC) pretest (test A), a practical postcourse test (objective structured clinical exam), and MC tests directly after the HT (test B) and 1 day after the course (test C). The Mann-Whitney U-test was used for statistical analysis. G1 performed markedly better in test A (median 84.2; 25th/75th percentiles: 68.5/92.2) than G2 (65.8; 53.8/80.4), who had not participated in case-based e-learning (P=0.0009). No differences were found in the objective structured clinical exam, test B, or test C. E-learning based exclusively on clinical cases is an effective method of education in preparation for HT sessions and can reduce attendance time in ultrasound courses.
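The group comparison in this record uses the Mann-Whitney U-test, which reduces two samples to rank sums. A minimal rank-sum sketch (no tie correction, no p-value); the score lists are invented for illustration:

```python
import numpy as np

def mann_whitney_u(x, y):
    """Mann-Whitney U statistics for two untied samples via rank sums."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    all_vals = np.concatenate([x, y])
    ranks = np.empty_like(all_vals)
    ranks[all_vals.argsort()] = np.arange(1, all_vals.size + 1)
    r1 = ranks[:x.size].sum()
    u1 = r1 - x.size * (x.size + 1) / 2
    return u1, x.size * y.size - u1   # U for each group

# Invented pretest scores: e-learning group (g1) vs. lecture group (g2).
g1 = [84, 78, 92, 70, 88, 81]
g2 = [66, 72, 60, 75, 58, 69]
u1, u2 = mann_whitney_u(g1, g2)
```

The smaller of the two U values is compared against a critical value (or converted to a p-value); in practice one would use a library routine that also handles ties.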
NASA Astrophysics Data System (ADS)
McEver, Jimmie; Davis, Paul K.; Bigelow, James H.
2000-06-01
We have developed and used families of multiresolution and multiple-perspective models (MRM and MRMPM), both in our substantive analytic work for the Department of Defense and to learn more about how such models can be designed and implemented. This paper is a brief case history of our experience with a particular family of models addressing the use of precision fires in interdicting and halting an invading army. Our models were implemented as closed-form analytic solutions, in spreadsheets, and in the more sophisticated Analytica™ environment. We also drew on an entity-level simulation for data. The paper reviews the importance of certain key attributes of development environments (visual modeling, interactive languages, friendly use of array mathematics, facilities for experimental design and configuration control, statistical analysis tools, graphical visualization tools, interactive post-processing, and relational database tools). These can go a long way towards facilitating MRMPM work, but many of these attributes are not yet widely available (or available at all) in commercial model-development tools--especially for use with personal computers. We conclude with some lessons learned from our experience.
Visual Perceptual Echo Reflects Learning of Regularities in Rapid Luminance Sequences.
Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K
2017-08-30
A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence, and examined the effects on the perceptual echo, finding that echo amplitude linearly increased with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information, for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning.
The perceptual echo is a long-lasting reverberation between a rapidly changing visual input and evoked neural activity, apparent in cross-correlations between occipital EEG and stimulus sequences, peaking in the alpha (∼10 Hz) range. We indeed found that perceptual echo is enhanced by repeatedly presenting the same visual sequence, indicating that the human visual system can rapidly and automatically learn regularities embedded within fast-changing dynamic sequences. These results point to a previously undiscovered regularity learning mechanism, operating at a rate defined by the alpha frequency. Copyright © 2017 the authors.
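At its core, the analysis the abstract describes is a cross-correlation between the random luminance sequence and the occipital EEG. A hedged sketch with simulated data follows; the sampling rate, echo lag, and gains are assumptions for illustration, not values from the study, and the "EEG" is simply the stimulus plus a delayed, attenuated copy standing in for a reverberation:

```python
# Simulated illustration of the cross-correlation behind the
# "perceptual echo": correlate a random luminance sequence with a
# signal containing a delayed, attenuated copy of it, and find the lag
# at which the correlation peaks. All parameters are assumed values.
import numpy as np

rng = np.random.default_rng(0)
fs = 160                                    # assumed sampling rate (Hz)
luminance = rng.standard_normal(fs * 6)     # 6 s random luminance sequence

lag = int(0.1 * fs)                         # a ~100 ms "echo" lag
eeg = luminance.copy()
eeg[lag:] += 0.5 * luminance[:-lag]         # delayed, attenuated copy
eeg += 0.2 * rng.standard_normal(eeg.size)  # measurement noise

# Cross-correlate stimulus with "EEG" at positive lags 0..0.5 s.
lags = np.arange(int(0.5 * fs))
xcorr = np.array([np.corrcoef(luminance[:-l or None], eeg[l:])[0, 1]
                  for l in lags])
print("peak nonzero lag (s):", lags[xcorr[1:].argmax() + 1] / fs)
```

The recovered peak at the planted lag illustrates the logic only; the real echo is periodic (∼10 Hz) rather than a single delayed copy.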
Effects of Multimodal Information on Learning Performance and Judgment of Learning
ERIC Educational Resources Information Center
Chen, Gongxiang; Fu, Xiaolan
2003-01-01
Two experiments were conducted to investigate the effects of multimodal information on learning performance and judgment of learning (JOL). Experiment 1 examined the effects of representation type (word-only versus word-plus-picture) and presentation channel (visual-only versus visual-plus-auditory) on recall and immediate-JOL in fixed-rate…
The Role of Visual Learning in Improving Students' High-Order Thinking Skills
ERIC Educational Resources Information Center
Raiyn, Jamal
2016-01-01
Various concepts have been introduced to improve students' analytical thinking skills based on problem based learning (PBL). This paper introduces a new concept to increase student's analytical thinking skills based on a visual learning strategy. Such a strategy has three fundamental components: a teacher, a student, and a learning process. The…
Promoting Positive Emotion in Multimedia Learning Using Visual Illustrations
ERIC Educational Resources Information Center
Park, Sanghoon; Lim, Jung
2007-01-01
The purpose of this paper was to explore the concept of interest, one of the critical positive emotions in learning contexts and to investigate the effects of different types of visual illustrations on learning interest, achievement, and motivation in multimedia learning. The concept of interest was explored in light of positive emotion; an…
Pre-Service Visual Art Teachers' Perceptions of Assessment in Online Learning
ERIC Educational Resources Information Center
Allen, Jeanne Maree; Wright, Suzie; Innes, Maureen
2014-01-01
This paper reports on a study conducted into how one cohort of Master of Teaching pre-service visual art teachers perceived their learning in a fully online learning environment. Located in an Australian urban university, this qualitative study provided insights into a number of areas associated with higher education online learning, including…
Gesture helps learners learn, but not merely by guiding their visual attention.
Wakefield, Elizabeth; Novack, Miriam A; Congdon, Eliza L; Franconeri, Steven; Goldin-Meadow, Susan
2018-04-16
Teaching a new concept through gestures-hand movements that accompany speech-facilitates learning above-and-beyond instruction through speech alone (e.g., Singer & Goldin-Meadow, ). However, the mechanisms underlying this phenomenon are still under investigation. Here, we use eye tracking to explore one often proposed mechanism-gesture's ability to direct visual attention. Behaviorally, we replicate previous findings: Children perform significantly better on a posttest after learning through Speech+Gesture instruction than through Speech Alone instruction. Using eye tracking measures, we show that children who watch a math lesson with gesture do allocate their visual attention differently from children who watch a math lesson without gesture-they look more to the problem being explained, less to the instructor, and are more likely to synchronize their visual attention with information presented in the instructor's speech (i.e., follow along with speech) than children who watch the no-gesture lesson. The striking finding is that, even though these looking patterns positively predict learning outcomes, the patterns do not mediate the effects of training condition (Speech Alone vs. Speech+Gesture) on posttest success. We find instead a complex relation between gesture and visual attention in which gesture moderates the impact of visual looking patterns on learning-following along with speech predicts learning for children in the Speech+Gesture condition, but not for children in the Speech Alone condition. Gesture's beneficial effects on learning thus come not merely from its ability to guide visual attention, but also from its ability to synchronize with speech and affect what learners glean from that speech. © 2018 John Wiley & Sons Ltd.
Teodorescu, Kinneret; Bouchigny, Sylvain; Korman, Maria
2013-08-01
In this study, we explored the time course of haptic stiffness discrimination learning and how it was affected by two experimental factors, the addition of visual information and/or knowledge of results (KR) during training. Stiffness perception may integrate both haptic and visual modalities. However, in many tasks, the visual field is typically occluded, forcing stiffness perception to depend exclusively on haptic information. No studies to date have addressed the time course of haptic stiffness perceptual learning. Using a virtual environment (VE) haptic interface and a two-alternative forced-choice discrimination task, the haptic stiffness discrimination ability of 48 participants was tested across 2 days. Each day included two haptic test blocks separated by a training block. Additional visual information and/or KR were manipulated between participants during training blocks. Practice repetitions alone induced significant improvement in haptic stiffness discrimination. Between days, accuracy improved slightly, but decision time performance deteriorated. The addition of visual information and/or KR had only temporary effects on decision time, without affecting the time course of haptic discrimination learning. Learning in haptic stiffness discrimination appears to evolve through at least two distinctive phases: a single training session resulted in both immediate and latent learning. This learning was not affected by the training manipulations inspected. Training skills in VE in spaced sessions can be beneficial for tasks in which haptic perception is critical, such as surgery procedures, when the visual field is occluded. However, training protocols for such tasks should account for the low impact of multisensory information and KR.
Learning styles of medical students - implications in education.
Buşan, Alina-Mihaela
2014-01-01
The term "learning style" refers to the fact that each person has a different way of accumulating knowledge. While some prefer listening to learn better, others need to write or they only need to read the text or see a picture to later remember. According to Fleming and Mills the learning styles can be classified in Visual, Auditory and Kinesthetic. There is no evidence that teaching according to the learning style can help a person, yet this cannot be ignored. In this study, a number of 230 medical students were questioned in order to determine their learning style. We determined that 73% of the students prefer one learning style, 22% prefer to learn using equally two learning style, while the rest prefer three learning styles. According to this study the distribution of the learning styles is as following: 33% visual, 26% auditory, 14% kinesthetic, 12% visual and auditory styles equally, 6% visual and kinesthetic, 4% auditory and kinesthetic and 5% all three styles. 32 % of the students that participated at this study are from UMF Craiova, 32% from UMF Carol Davila, 11% University of Medicine T Popa, Iasi, 9% UMF Cluj Iulius Hatieganu. The way medical students learn is different from the general population. This is why it is important when teaching to considerate how the students learn in order to facilitate the learning.
NASA Astrophysics Data System (ADS)
Tamer, A. J. J.; Anbar, A. D.; Elkins-Tanton, L. T.; Klug Boonstra, S.; Mead, C.; Swann, J. L.; Hunsley, D.
2017-12-01
Advances in scientific visualization and public access to data have transformed science outreach and communication, but have yet to realize their potential impacts in the realm of education. Computer-based learning is a clear bridge between visualization and education, but creating high-quality learning experiences that leverage existing visualizations requires close partnerships among scientists, technologists, and educators. The Infiniscope project is working to foster such partnerships in order to produce exploration-driven learning experiences around NASA SMD data and images, leveraging the principles of ETX (Education Through eXploration). The visualizations inspire curiosity, while the learning design promotes improved reasoning skills and increases understanding of space science concepts. Infiniscope includes both a web portal to host these digital learning experiences, as well as a teaching network of educators using and modifying these experiences. Our initial efforts to enable student discovery through active exploration of the concepts associated with Small Worlds, Kepler's Laws, and Exoplanets led us to develop our own visualizations at Arizona State University. Other projects focused on Astrobiology and Mars geology led us to incorporate an immersive Virtual Field Trip platform into the Infiniscope portal in support of virtual exploration of scientifically significant locations. Looking to apply ETX design practices with other visualizations, our team at Arizona State partnered with the Jet Propulsion Lab to integrate the web-based version of NASA Eyes on the Eclipse within Smart Sparrow's digital learning platform in a proof-of-concept focused on the 2017 Eclipse. This goes a step beyond the standard features of "Eyes" by wrapping guided exploration, focused on a specific learning goal, into a standards-aligned lesson built around the visualization, as well as its distribution through Infiniscope and its digital teaching network.
Experience from this development effort has laid the groundwork to explore future integrations with JPL and other NASA partners.
Impact of distributed virtual reality on engineering knowledge retention and student engagement
NASA Astrophysics Data System (ADS)
Sulbaran, Tulio Alberto
Engineering Education is facing many problems, one of which is poor knowledge retention among engineering students. This problem affects the Architecture, Engineering, and Construction (A/E/C) industry, because students are unprepared for many necessary job skills. This problem of poor knowledge retention is caused by many factors, one of which is the mismatch between student learning preferences and the media used to teach engineering. The purpose of this research is to assess the impact of Distributed Virtual Reality (DVR) as an engineering teaching tool. The implementation of DVR addresses the issue of poor knowledge retention by impacting the mismatch between learning and teaching style in the visual versus verbal spectrum. Using as a point of departure three knowledge domain areas (Learning and Instruction, Distributed Virtual Reality and Crane Selection as Part of Crane Lift Planning), a DVR engineering teaching tool is developed, deployed and assessed in engineering classrooms. The statistical analysis of the data indicates that: (1) most engineering students are visual learners; (2) most students would like more classes using DVR; (3) engineering students find DVR more engaging than traditional learning methods; (4) most students find the responsiveness of the DVR environments to be either good or very good; (5) all students are able to interact with DVR and most of the students found it easy or very easy to navigate (without previous formal training in how to use DVR); (6) students' knowledge regarding the subject (crane selection) is higher after the experiment; and, (7) students using different instructional media do not demonstrate statistical difference in knowledge retained after the experiment.
This inter-disciplinary research offers opportunities for direct and immediate application in education, research, and industry, due to the fact that the instructional module developed (on crane selection as part of construction crane lift planning) can be used to convey knowledge to engineers beyond the classrooms. This instructional module can also be used as a workbench to assess parameters on engineering education such as time on task, assessment media, and long-term retention among others.
NASA Astrophysics Data System (ADS)
Tungsombatsanti, A.; Ponkham, K.; Somtoa, T.
2018-01-01
This research aimed: 1) to evaluate the efficiency of the process and the efficiency of the results (E1/E2) of an innovative instructional lesson plan based on the STEM Education method for secondary students at the 10th grade level in physics class, against the 70/70 standard criterion; 2) to study the critical thinking skills of secondary students at the 11th grade level, assessed against a criterion of 80 percent; 3) to compare students' learning achievements between pretest and posttest after being taught with STEM Education; and 4) to evaluate students' satisfaction after STEM Education teaching, using means on a 5-point Likert scale. The participants were 40 students from grade 11 at Borabu School, Borabu District, Mahasarakham Province, in semester 2 of the 2016 academic year. The tools used in this study consisted of: 1) a STEM Education lesson plan on force and the laws of motion for grade 11 students (one scheme, 15 hours in total); 2) an essay-type test of critical thinking skills (30 items); 3) an achievement test on light and visual equipment (30 four-option multiple-choice items); and 4) a 16-item satisfaction questionnaire on a 5-point rating scale. The statistics used in data analysis were percentage, mean, standard deviation, and the dependent t-test. The results showed that: 1) the efficiency of the STEM lesson plan was 71.51/75, higher than the 70/70 standard criterion; 2) 26 students achieved critical thinking scores above the 80-percent criterion; 3) students' learning achievements differed significantly between pretest and posttest at the .05 level; and 4) students' satisfaction with learning through the STEM Education plan was at a good level (x̄ = 4.33, S.D. = 0.64).
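The pretest-posttest comparison above uses the dependent (paired) t-test. A small sketch with hypothetical scores showing how that statistic is formed from the per-student score differences:

```python
# Hedged sketch (made-up scores, not the study's data): the
# dependent-samples t statistic used above to compare pretest and
# posttest achievement for the same students.
import math

def paired_t(pre, post):
    """t statistic for paired samples, based on differences post - pre."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)                  # df = n - 1

pre  = [52, 48, 60, 55, 45, 58, 50, 62]   # hypothetical pretest scores
post = [68, 63, 75, 70, 61, 72, 66, 78]   # hypothetical posttest scores
print(paired_t(pre, post))
```

The resulting t is compared against the critical value for n - 1 degrees of freedom at the chosen alpha (here, .05) to decide significance.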
Effects of kinesthetic and cutaneous stimulation during the learning of a viscous force field.
Rosati, Giulio; Oscari, Fabio; Pacchierotti, Claudio; Prattichizzo, Domenico
2014-01-01
Haptic stimulation can help humans learn perceptual motor skills, but the precise way in which it influences the learning process has not yet been clarified. This study investigates the role of the kinesthetic and cutaneous components of haptic feedback during the learning of a viscous curl field, taking also into account the influence of visual feedback. We present the results of an experiment in which 17 subjects were asked to make reaching movements while grasping a joystick and wearing a pair of cutaneous devices. Each device was able to provide cutaneous contact forces through a moving platform. The subjects received visual feedback about the joystick's position. During the experiment, the system delivered a perturbation through (1) full haptic stimulation, (2) kinesthetic stimulation alone, (3) cutaneous stimulation alone, (4) altered visual feedback, or (5) altered visual feedback plus cutaneous stimulation. Conditions 1, 2, and 3 were also tested with the cancellation of the visual feedback of position error. Results indicate that kinesthetic stimuli played a primary role during motor adaptation to the viscous field, which is a fundamental premise of motor learning and rehabilitation. On the other hand, cutaneous stimulation alone appeared not to bring significant direct or adaptation effects, although it helped in reducing direct effects when used in addition to kinesthetic stimulation. The experimental conditions with visual cancellation of position error showed slower adaptation rates, indicating that visual feedback actively contributes to the formation of internal models. However, modest learning effects were detected when the visual information was used to render the viscous field.
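A viscous curl field of the kind named above is conventionally written as F = Bv, with an antisymmetric viscosity matrix B, so the perturbing force is always perpendicular to the hand's velocity and does no net work. A sketch of that relation; the gain value is an assumption for illustration, not taken from the study:

```python
# Sketch of a viscous curl field: force F = B·v with an antisymmetric
# viscosity matrix, pushing perpendicular to the movement direction.
# The gain b is an assumed illustrative value.
import numpy as np

b = 15.0                                   # assumed gain (N·s/m)
B = np.array([[0.0, b],
              [-b, 0.0]])                  # curl (antisymmetric) viscosity

v = np.array([0.0, 0.3])                   # hand velocity: 0.3 m/s forward
F = B @ v                                  # resulting perturbing force
print(F)                                   # force is orthogonal to v
```

Because F·v = 0 by construction, the field deflects the trajectory laterally, which is what makes adaptation to it a useful probe of motor learning.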
Anodal tDCS to V1 blocks visual perceptual learning consolidation.
Peters, Megan A K; Thompson, Benjamin; Merabet, Lotfi B; Wu, Allan D; Shams, Ladan
2013-06-01
This study examined the effects of visual cortex transcranial direct current stimulation (tDCS) on visual processing and learning. Participants performed a contrast detection task on two consecutive days. Each session consisted of a baseline measurement followed by measurements made during active or sham stimulation. On the first day, one group received anodal stimulation to primary visual cortex (V1), while another received cathodal stimulation. Stimulation polarity was reversed for these groups on the second day. The third (control) group of subjects received sham stimulation on both days. No improvements or decrements in contrast sensitivity relative to the same-day baseline were observed during real tDCS, nor was any within-session learning trend observed. However, task performance improved significantly from Day 1 to Day 2 for the participants who received cathodal tDCS on Day 1 and for the sham group. No such improvement was found for the participants who received anodal stimulation on Day 1, indicating that anodal tDCS blocked overnight consolidation of visual learning, perhaps through engagement of inhibitory homeostatic plasticity mechanisms or alteration of the signal-to-noise ratio within stimulated cortex. These results show that applying tDCS to the visual cortex can modify consolidation of visual learning. Copyright © 2013 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Omede, Andrew A.
2015-01-01
This paper focused on the challenges in educating the visually impaired and modalities for ensuring quality assurance in tertiary institutions of learning in Nigeria. It examined the global challenges in the higher educational system and made it clear that the visually impaired are those with visual problems be it partial, low vision, or total…
Treatment of amblyopia in the adult: insights from a new rodent model of visual perceptual learning.
Bonaccorsi, Joyce; Berardi, Nicoletta; Sale, Alessandro
2014-01-01
Amblyopia is the most common form of impairment of visual function affecting one eye, with a prevalence of about 1-5% of the total world population. Amblyopia usually derives from conditions of early functional imbalance between the two eyes, owing to anisometropia, strabismus, or congenital cataract, and results in a pronounced reduction of visual acuity and severe deficits in contrast sensitivity and stereopsis. It is widely accepted that, due to a lack of sufficient plasticity in the adult brain, amblyopia becomes untreatable after the closure of the critical period in the primary visual cortex. However, recent results obtained both in animal models and in clinical trials have challenged this view, unmasking a previously unsuspected potential for promoting recovery even in adulthood. In this context, non invasive procedures based on visual perceptual learning, i.e., the improvement in visual performance on a variety of simple visual tasks following practice, emerge as particularly promising to rescue discrimination abilities in adult amblyopic subjects. This review will survey recent work regarding the impact of visual perceptual learning on amblyopia, with a special focus on a new experimental model of perceptual learning in the amblyopic rat.
Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.
Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min
2013-12-01
Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be semantically annotated sufficiently, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidature search results to support rapid interaction for active learning while minimizing watching videos, and visualizing aggregated information of the search results. We demonstrate the system for searching spatiotemporal attributes from sports video to identify key instances of the team and player performance.
Learning Science Through Visualization
NASA Technical Reports Server (NTRS)
Chaudhury, S. Raj
2005-01-01
In the context of an introductory physical science course for non-science majors, I have been trying to understand how scientific visualizations of natural phenomena can constructively impact student learning. I have also necessarily been concerned with the instructional and assessment approaches that need to be considered when focusing on learning science through visually rich information sources. The overall project can be broken down into three distinct segments : (i) comparing students' abilities to demonstrate proportional reasoning competency on visual and verbal tasks (ii) decoding and deconstructing visualizations of an object falling under gravity (iii) the role of directed instruction to elicit alternate, valid scientific visualizations of the structure of the solar system. Evidence of student learning was collected in multiple forms for this project - quantitative analysis of student performance on written, graded assessments (tests and quizzes); qualitative analysis of videos of student 'think aloud' sessions. The results indicate that there are significant barriers for non-science majors to succeed in mastering the content of science courses, but with informed approaches to instruction and assessment, these barriers can be overcome.
Online incidental statistical learning of audiovisual word sequences in adults: a registered report.
Kuppuraj, Sengottuvel; Duta, Mihaela; Thompson, Paul; Bishop, Dorothy
2018-02-01
Statistical learning has been proposed as a key mechanism in language learning. Our main goal was to examine whether adults are capable of simultaneously extracting statistical dependencies in a task where stimuli include a range of structures amenable to statistical learning within a single paradigm. We devised an online statistical learning task using real word auditory-picture sequences that vary in two dimensions: (i) predictability and (ii) adjacency of dependent elements. This task was followed by an offline recall task to probe learning of each sequence type. We registered three hypotheses with specific predictions. First, adults would extract regular patterns from continuous stream (effect of grammaticality). Second, within grammatical conditions, they would show differential speeding up for each condition as a factor of statistical complexity of the condition and exposure. Third, our novel approach to measure online statistical learning would be reliable in showing individual differences in statistical learning ability. Further, we explored the relation between statistical learning and a measure of verbal short-term memory (STM). Forty-two participants were tested and retested after an interval of at least 3 days on our novel statistical learning task. We analysed the reaction time data using a novel regression discontinuity approach. Consistent with prediction, participants showed a grammaticality effect, agreeing with the predicted order of difficulty for learning different statistical structures. Furthermore, a learning index from the task showed acceptable test-retest reliability (r = 0.67). However, STM did not correlate with statistical learning. We discuss the findings noting the benefits of online measures in tracking the learning process.
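The test-retest reliability reported above (r = 0.67) is a Pearson correlation between each participant's learning index in the two sessions. An illustrative sketch with fabricated indices, not the study's data:

```python
# Illustrative sketch (fabricated learning indices): test-retest
# reliability as the Pearson correlation between each participant's
# index in session 1 and session 2.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

session1 = [0.12, 0.35, 0.08, 0.41, 0.22, 0.30]  # hypothetical indices
session2 = [0.15, 0.30, 0.11, 0.38, 0.18, 0.33]
print(pearson_r(session1, session2))
```

Higher r means participants keep their rank order across sessions, which is what makes the index usable for individual-differences work.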
Kim, Youngwoo; Hong, Byung Woo; Kim, Seung Ja; Kim, Jong Hyo
2014-07-01
A major challenge when distinguishing glandular tissues on mammograms, especially for area-based estimations, lies in determining a boundary on a hazy transition zone from adipose to glandular tissues. This stems from the nature of mammography, which is a projection of superimposed tissues consisting of different structures. In this paper, the authors present a novel segmentation scheme which incorporates the learned prior knowledge of experts into a level set framework for fully automated mammographic density estimations. The authors modeled the learned knowledge as a population-based tissue probability map (PTPM) that was designed to capture the classification of experts' visual systems. The PTPM was constructed using an image database of a selected population consisting of 297 cases. Three mammography experts extracted regions of dense and fatty tissues on digital mammograms from an independent subset, which were used to create a tissue probability map for each ROI based on its local statistics. This tissue class probability was taken as a prior in the Bayesian formulation and was incorporated into a level set framework as an additional term to control the evolution, which followed an energy surface designed to reflect experts' knowledge as well as the regional statistics inside and outside of the evolving contour. A subset of 100 digital mammograms, which was not used in constructing the PTPM, was used to validate the performance. The energy was minimized when the initial contour reached the boundary of the dense and fatty tissues, as defined by experts. The correlation coefficient between mammographic density measurements made by experts and measurements by the proposed method was 0.93, while that with the conventional level set was 0.47. The proposed method showed a marked improvement over the conventional level set method in terms of accuracy and reliability.
This result suggests that the proposed method successfully incorporated the learned knowledge of the experts' visual systems and has the potential to be used as an automated, quantitative tool for estimating mammographic breast density.
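The core Bayesian step described above, combining a population prior with local image statistics, can be illustrated per pixel: posterior ∝ prior × likelihood. The sketch below is a simplified stand-in for the paper's level set formulation, using Gaussian intensity likelihoods; all parameter values (class means, standard deviations) are invented for illustration:

```python
import numpy as np

def tissue_posterior(intensity, prior_dense, mu_dense=0.7, sd_dense=0.1,
                     mu_fatty=0.3, sd_fatty=0.1):
    """Posterior probability that each pixel is dense tissue, combining a
    population-based prior map with Gaussian intensity likelihoods."""
    def gauss(x, mu, sd):
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    like_dense = gauss(intensity, mu_dense, sd_dense)
    like_fatty = gauss(intensity, mu_fatty, sd_fatty)
    num = prior_dense * like_dense
    return num / (num + (1 - prior_dense) * like_fatty)

# Toy 2x2 "mammogram" with an uninformative (0.5) prior everywhere.
img = np.array([[0.72, 0.31], [0.55, 0.28]])
prior = np.full_like(img, 0.5)
post = tissue_posterior(img, prior)
```

With an uninformative prior the likelihoods decide; a PTPM-style informative prior shifts ambiguous pixels toward the population's expectation, which is how expert knowledge can steer a contour through a hazy transition zone.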
ERIC Educational Resources Information Center
Gross, M. Melissa; Wright, Mary C.; Anderson, Olivia S.
2017-01-01
Research on the benefits of visual learning has relied primarily on lecture-based pedagogy, but the potential benefits of combining active learning strategies with visual and verbal materials on learning anatomy have not yet been explored. In this study, the differential effects of text-based and image-based active learning exercises on examination…
Implementation of ICARE learning model using visualization animation on biotechnology course
NASA Astrophysics Data System (ADS)
Hidayat, Habibi
2017-12-01
ICARE is a learning model that directly engages students as active participants in the learning process, here supported by animated visualization media. ICARE comprises five key elements of the learning experience, for children and adults alike: introduction, connection, application, reflection, and extension. The ICARE system ensures that participants have the opportunity to apply what they have learned, so that the material delivered by the lecturer can be understood and retained by students over the long term. This learning model was deemed capable of improving learning outcomes and interest in learning in the Biotechnology course when combined with animated visualization. The model motivated students to participate in the learning process, and learning outcomes increased relative to before. Applying the ICARE learning model with animated visualization in the Biotechnology course improved student results, with the average score rising from 70.98 (75%) on the midterm test to 71.57 (68.63%) on the final test. Student interest in learning also increased across cycles, as reflected in student activity scores: the first cycle averaged 33.5 (adequate category), while the second and third cycles each averaged 36.5 (good category).
Visual learning with reduced adaptation is eccentricity-specific.
Harris, Hila; Sagi, Dov
2018-01-12
Visual learning is known to be specific to the trained target location, showing little transfer to untrained locations. Recently, learning was shown to transfer across equal-eccentricity retinal locations when sensory adaptation due to repetitive stimulation was minimized. It was suggested that learning transfers to previously untrained locations when the learned representation is location invariant, with sensory adaptation introducing location-dependent representations, thus preventing transfer. Spatial invariance may also fail when the trained and tested locations are at different distances from the center of gaze (different retinal eccentricities), due to differences in the corresponding low-level cortical representations (e.g., allocated cortical area decreases with eccentricity). Thus, if learning improves performance by better classifying target-dependent early visual representations, generalization is predicted to fail when locations of different retinal eccentricities are trained and tested in the absence of sensory adaptation. Here, using the texture discrimination task, we show specificity of learning across different retinal eccentricities (4-8°) using reduced-adaptation training. The existence of generalization across equal-eccentricity locations but not across different eccentricities demonstrates that learning accesses visual representations preceding location-independent representations, with specificity of learning explained by inhomogeneous sensory representation.
Learned face-voice pairings facilitate visual search.
Zweig, L Jacob; Suzuki, Satoru; Grabowecky, Marcia
2015-04-01
Voices provide a rich source of information that is important for identifying individuals and for social interaction. During search for a face in a crowd, voices often accompany visual information, and they facilitate localization of the sought-after individual. However, it is unclear whether this facilitation occurs primarily because the voice cues the location of the face or because it also increases the salience of the associated face. Here we demonstrate that a voice that provides no location information nonetheless facilitates visual search for an associated face. We trained novel face-voice associations and verified learning using a two-alternative forced choice task in which participants had to correctly match a presented voice to the associated face. Following training, participants searched for a previously learned target face among other faces while hearing one of the following sounds (localized at the center of the display): a congruent learned voice, an incongruent but familiar voice, an unlearned and unfamiliar voice, or a time-reversed voice. Only the congruent learned voice speeded visual search for the associated face. This result suggests that voices facilitate the visual detection of associated faces, potentially by increasing their visual salience, and that the underlying crossmodal associations can be established through brief training.
The primate amygdala represents the positive and negative value of visual stimuli during learning
Paton, Joseph J.; Belova, Marina A.; Morrison, Sara E.; Salzman, C. Daniel
2008-01-01
Visual stimuli can acquire positive or negative value through their association with rewards and punishments, a process called reinforcement learning. Although we now know a great deal about how the brain analyses visual information, we know little about how visual representations become linked with values. To study this process, we turned to the amygdala, a brain structure implicated in reinforcement learning [1–5]. We recorded the activity of individual amygdala neurons in monkeys while abstract images acquired either positive or negative value through conditioning. After monkeys had learned the initial associations, we reversed image value assignments. We examined neural responses in relation to these reversals in order to estimate the relative contribution to neural activity of the sensory properties of images and their conditioned values. Here we show that changes in the values of images modulate neural activity, and that this modulation occurs rapidly enough to account for, and correlates with, monkeys’ learning. Furthermore, distinct populations of neurons encode the positive and negative values of visual stimuli. Behavioural and physiological responses to visual stimuli may therefore be based in part on the plastic representation of value provided by the amygdala. PMID:16482160
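The rapid tracking of value reversals described above is often modeled with a simple prediction-error (Rescorla-Wagner style) update, in which a stimulus's value estimate moves toward each outcome in proportion to the surprise. This is a generic sketch of that class of model, not the paper's analysis; the learning rate, outcomes, and reversal point are invented:

```python
def update_value(v, outcome, lr=0.3):
    """One delta-rule update: move the value estimate toward the outcome
    by a fraction (lr) of the prediction error."""
    return v + lr * (outcome - v)

# An image starts rewarded (+1); after trial 10 its value is reversed (-1).
v, history = 0.0, []
for trial in range(20):
    outcome = 1.0 if trial < 10 else -1.0
    v = update_value(v, outcome)
    history.append(v)
```

After ten rewarded trials the estimate sits near +1; ten trials after the reversal it sits near -1, mirroring the fast post-reversal modulation of amygdala activity the study reports.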
Perceptual learning and adult cortical plasticity.
Gilbert, Charles D; Li, Wu; Piech, Valentin
2009-06-15
The visual cortex retains the capacity for experience-dependent changes, or plasticity, of cortical function and cortical circuitry throughout life. These changes constitute the mechanism of perceptual learning in normal visual experience and in recovery of function after CNS damage. Such plasticity can be seen at multiple stages in the visual pathway, including primary visual cortex. The functional changes associated with perceptual learning involve both long-term modification of cortical circuits during the course of learning and short-term dynamics in the functional properties of cortical neurons. These dynamics are subject to top-down influences of attention, expectation, and perceptual task. As a consequence, each cortical area is an adaptive processor, altering its function in accordance with immediate perceptual demands.
Korkmaz, Selcuk; Zararsiz, Gokmen; Goksuluk, Dincer
2015-01-01
Virtual screening is an important step in the early phase of the drug discovery process. Since there are thousands of compounds, this step should be both fast and effective in order to distinguish drug-like and nondrug-like molecules. Statistical machine learning methods are widely used in drug discovery studies for classification purposes. Here, we aim to develop a new tool that can classify molecules as drug-like or nondrug-like based on various machine learning methods, including discriminant, tree-based, kernel-based, ensemble, and other algorithms. To construct this tool, the performances of twenty-three different machine learning algorithms were first compared on ten different measures; the ten best-performing algorithms were then selected based on principal component and hierarchical cluster analysis results. Besides classification, this application can also create heat maps and dendrograms for visual inspection of the molecules through hierarchical cluster analysis. Moreover, users can connect to the PubChem database to download molecular information and to create two-dimensional structures of compounds. This application is freely available through www.biosoft.hacettepe.edu.tr/MLViS/. PMID:25928885
Local statistics of retinal optic flow for self-motion through natural sceneries.
Calow, Dirk; Lappe, Markus
2007-12-01
Image analysis in the visual system is well adapted to the statistics of natural scenes. Investigations of natural image statistics have so far mainly focused on static features. The present study is dedicated to the measurement and the analysis of the statistics of optic flow generated on the retina during locomotion through natural environments. Natural locomotion includes bouncing and swaying of the head and eye movement reflexes that stabilize gaze onto interesting objects in the scene while walking. We investigate the dependencies of the local statistics of optic flow on the depth structure of the natural environment and on the ego-motion parameters. To measure these dependencies we estimate the mutual information between correlated data sets. We analyze the results with respect to the variation of the dependencies over the visual field, since the visual motions in the optic flow vary depending on visual field position. We find that retinal flow direction and retinal speed show only minor statistical interdependencies. Retinal speed is statistically tightly connected to the depth structure of the scene. Retinal flow direction is statistically mostly driven by the relation between the direction of gaze and the direction of ego-motion. These dependencies differ at different visual field positions such that certain areas of the visual field provide more information about ego-motion and other areas provide more information about depth. The statistical properties of natural optic flow may be used to tune the performance of artificial vision systems based on human imitating behavior, and may be useful for analyzing properties of natural vision systems.
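The mutual-information analysis described above, quantifying how tightly retinal speed is coupled to scene depth, can be estimated from samples by discretizing both variables and applying I(X;Y) = Σ p(x,y) log₂[p(x,y) / (p(x)p(y))]. A minimal histogram-based sketch (the bin count, noise levels, and synthetic data are illustrative, not the study's flow database):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in histogram estimate of mutual information (bits) between
    two samples of continuous variables."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y, shape (1, bins)
    nz = pxy > 0                          # avoid log(0) on empty cells
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
depth = rng.normal(0, 1, 5000)
speed = depth + rng.normal(0, 0.5, 5000)   # tightly coupled, as in the study
noise = rng.normal(0, 1, 5000)             # statistically independent variable
```

The coupled pair yields an MI far above the (slightly positively biased) estimate for independent variables, the same contrast the study uses to show that retinal speed carries depth information while flow direction largely does not.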
Learning from Balance Sheet Visualization
ERIC Educational Resources Information Center
Tanlamai, Uthai; Soongswang, Oranuj
2011-01-01
This exploratory study examines alternative visuals and their effect on the level of learning of balance sheet users. Executive and regular classes of graduate students majoring in information technology in business were asked to evaluate the extent of acceptance and enhanced capability of these alternative visuals toward their learning…
The Effects of Verbal Elaboration and Visual Elaboration on Student Learning.
ERIC Educational Resources Information Center
Chanlin, Lih-Juan
1997-01-01
This study examined: (1) the effectiveness of integrating verbal elaboration (metaphors) and different visual presentation strategies (still and animated graphics) in learning biotechnology concepts; (2) whether the use of verbal elaboration with different visual presentation strategies facilitates cognitive processes; and (3) how students employ…
Saterbak, Ann; Moturu, Anoosha; Volz, Tracy
2018-03-01
Rice University's bioengineering department incorporates written, oral, and visual communication instruction into its undergraduate curriculum to aid student learning and to prepare students to communicate their knowledge and discoveries precisely and persuasively. In a tissue culture lab course, we used a self- and peer-review tool called Calibrated Peer Review™ (CPR) to diagnose student learning gaps in visual communication skills on a poster assignment. We then designed an active learning intervention that required students to practice the visual communication skills that needed improvement and used CPR to measure the changes. After the intervention, we observed that students performed significantly better in their ability to develop high quality graphs and tables that represent experimental data. Based on these outcomes, we conclude that guided task practice, collaborative learning, and calibrated peer review can be used to improve engineering students' visual communication skills.
De Weerd, Peter; Reithler, Joel; van de Ven, Vincent; Been, Marin; Jacobs, Christianne; Sack, Alexander T
2012-02-08
Practice-induced improvements in skilled performance reflect "offline" consolidation processes extending beyond daily training sessions. According to visual learning theories, an early, fast learning phase driven by high-level areas is followed by a late, asymptotic learning phase driven by low-level, retinotopic areas when higher resolution is required. Thus, low-level areas would not contribute to learning and offline consolidation until late learning. Recent studies have challenged this notion, demonstrating modified responses to trained stimuli in primary visual cortex (V1) and offline activity after very limited training. However, the behavioral relevance of modified V1 activity for offline consolidation of visual skill memory in V1 after early training sessions remains unclear. Here, we used neuronavigated transcranial magnetic stimulation (TMS) directed to a trained retinotopic V1 location to test for behaviorally relevant consolidation in human low-level visual cortex. Applying TMS to the trained V1 location within 45 min of the first or second training session strongly interfered with learning, as measured by impaired performance the next day. The interference was conditional on task context and occurred only when training in the location targeted by TMS was followed by training in a second location before TMS. In this condition, high-level areas may become coupled to the second location and uncoupled from the previously trained low-level representation, thereby rendering consolidation vulnerable to interference. Our data show that, during the earliest phases of skill learning in the lowest-level visual areas, a behaviorally relevant form of consolidation exists whose robustness is controlled by high-level, contextual factors.
Learning by E-Learning for Visually Impaired Students: Opportunities or Again Marginalisation?
ERIC Educational Resources Information Center
Kharade, Kalpana; Peese, Hema
2012-01-01
In recent years, e-learning has become a valuable tool for an increasing number of visually impaired (VI) learners. The benefits of this technology include: (1) remote learning for VI students; (2) the possibility for teachers living far from schools or universities to provide remote instructional assistance to VI students; and (3) continuing…
Tracing Trajectories of Audio-Visual Learning in the Infant Brain
ERIC Educational Resources Information Center
Kersey, Alyssa J.; Emberson, Lauren L.
2017-01-01
Although infants begin learning about their environment before they are born, little is known about how the infant brain changes during learning. Here, we take the initial steps in documenting how the neural responses in the brain change as infants learn to associate audio and visual stimuli. Using functional near-infrared spectroscopy (fNIRS) to…
ERIC Educational Resources Information Center
McGrady, Harold J.; Olson, Don A.
To describe and compare the psychosensory functioning of normal children and children with specific learning disabilities, 62 learning disabled and 68 normal children were studied. Each child was given a battery of thirteen subtests on an automated psychosensory system representing various combinations of auditory and visual intra- and…
ERIC Educational Resources Information Center
Patron, Emelie; Wikman, Susanne; Edfors, Inger; Johansson-Cederblad, Brita; Linder, Cedric
2017-01-01
Visual representations are essential for communication and meaning-making in chemistry, and thus the representational practices play a vital role in the teaching and learning of chemistry. One powerful contemporary model of classroom learning, the variation theory of learning, posits that the way an object of learning gets handled is another vital…
ERIC Educational Resources Information Center
Bacon, Donald R.; Hartley, Steven W.
2015-01-01
Many educators and researchers have suggested that some students learn more effectively with visual stimuli (e.g., pictures, graphs), whereas others learn more effectively with verbal information (e.g., text) (Felder & Brent, 2005). In two studies, the present research seeks to improve popular self-reported (indirect) learning style measures…
Mobile Learning and the Visual Web, Oh My! Nutrition Education in the 21st Century
ERIC Educational Resources Information Center
Schuster, Ellen
2012-01-01
Technology is rapidly changing how our program participants learn in school and for their personal improvement. Extension educators who deliver nutrition program will want to be aware of the technology trends that are driving these changes. Blended learning, mobile learning, the visual Web, and the gamification of health are approaches to consider…
Chang, Li-Hung; Yotsumoto, Yuko; Salat, David H; Andersen, George J; Watanabe, Takeo; Sasaki, Yuka
2015-01-01
Although normal aging is known to reduce cortical structures globally, the effects of aging on local structures and functions of early visual cortex are less understood. Here, using standard retinotopic mapping and magnetic resonance imaging morphologic analyses, we investigated whether aging affects the areal size of retinotopically localized early visual cortex, and whether those morphologic measures are associated with individual performance in visual perceptual learning. First, a significant age-associated reduction was found in the areal size of V1, V2, and V3. Second, individual ability in visual perceptual learning was significantly correlated with the areal size of V3 in older adults. These results demonstrate that aging changes local structures of the early visual cortex, and the degree of change may be associated with individual visual plasticity. Copyright © 2015 Elsevier Inc. All rights reserved.
Visualizing statistical significance of disease clusters using cartograms.
Kronenfeld, Barry J; Wong, David W S
2017-05-15
Health officials and epidemiological researchers often use maps of disease rates to identify potential disease clusters. Because these maps exaggerate the prominence of low-density districts and hide potential clusters in urban (high-density) areas, many researchers have used density-equalizing maps (cartograms) as a basis for epidemiological mapping. However, there are no existing guidelines for visual assessment of statistical uncertainty. To address this shortcoming, we develop techniques for visual determination of the statistical significance of clusters spanning one or more districts on a cartogram. We developed the techniques within a geovisual analytics framework that does not rely on automated significance testing, and can therefore facilitate visual analysis to detect clusters that automated techniques might miss. On a cartogram of the at-risk population, the statistical significance of a disease cluster can be determined from the rate, area, and shape of the cluster under standard hypothesis testing scenarios. We develop formulae to determine, for a given rate, the area required for statistical significance of a priori and a posteriori designated regions under certain test assumptions. Uniquely, our approach enables dynamic inference of aggregate regions formed by combining individual districts. The method is implemented in interactive tools that provide choropleth mapping, automated legend construction and dynamic search tools to facilitate cluster detection and assessment of the validity of tested assumptions. A case study of leukemia incidence analysis in California demonstrates the ability to visually distinguish between statistically significant and insignificant regions. The proposed geovisual analytics approach enables intuitive visual assessment of statistical significance of arbitrarily defined regions on a cartogram.
Our research prompts a broader discussion of the role of geovisual exploratory analyses in disease mapping and the appropriate framework for visually assessing the statistical significance of spatial clusters.
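Under the standard null model behind such tests, the case count in a candidate region follows a Poisson distribution whose mean is proportional to the region's at-risk population (its area on a density-equalizing cartogram). A hedged sketch of the a priori version of the test, using only the standard library; the case counts, population, and rate below are invented:

```python
import math

def poisson_sf(k, mu):
    """P(X >= k) for X ~ Poisson(mu): the probability of observing
    k or more cases by chance under the null rate."""
    cdf = sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

def cluster_p_value(observed_cases, population, overall_rate):
    """A priori test: is the region's case count elevated above what the
    map-wide rate predicts for its at-risk population?"""
    expected = population * overall_rate
    return poisson_sf(observed_cases, expected)

# Illustrative: 30 observed cases where the map-wide rate predicts 15.
p = cluster_p_value(observed_cases=30, population=150_000, overall_rate=1e-4)
```

Inverting this relationship, fixing the rate and solving for the population (cartogram area) at which p crosses a significance threshold, is the idea behind the paper's area-required-for-significance formulae.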
Chinese children's early knowledge about writing.
Zhang, Lan; Yin, Li; Treiman, Rebecca
2017-09-01
Much research on literacy development has focused on learners of alphabetic writing systems. Researchers have hypothesized that children learn about the formal characteristics of writing before they learn about the relations between units of writing and units of speech. We tested this hypothesis by examining young Chinese children's understanding of writing. Mandarin-speaking 2- to 5-year-olds completed a graphic task, which tapped their knowledge about the formal characteristics of writing, and a phonological task, which tapped their knowledge about the correspondence between Chinese characters and syllables. The 3- to 5-year-olds performed better on the graphic task than the phonological task, indicating that learning how writing appears visually begins earlier than learning that writing corresponds to linguistic units, even in a writing system in which written units correspond to syllables. Statement of contribution What is already known on this subject? Learning about writing's visual form, how it looks, is an important part of emergent literacy. Knowledge of how writing symbolizes linguistic units may emerge later. What does this study add? We test the hypothesis that Chinese children learn about writing's visual form earlier than its symbolic nature. Chinese 3- to 5- year-olds know more about visual features than character-syllable links. Results show learning of the visual appearance of a notation system is developmentally precocious. © 2016 The British Psychological Society.
Imprinting modulates processing of visual information in the visual wulst of chicks.
Maekawa, Fumihiko; Komine, Okiru; Sato, Katsushige; Kanamatsu, Tomoyuki; Uchimura, Motoaki; Tanaka, Kohichi; Ohki-Hamazaki, Hiroko
2006-11-14
Imprinting behavior is one form of learning and memory in precocial birds. With the aim of elucidating the neural basis for visual imprinting, we focused on visual information processing. A lesion in the visual wulst, which is similar functionally to the mammalian visual cortex, caused anterograde amnesia in visual imprinting behavior. Since the color of an object was one of the important cues for imprinting, we investigated color information processing in the visual wulst. Intrinsic optical signals from the visual wulst were detected in the early posthatch period and the peak regions of responses to red, green, and blue were spatially organized from the caudal to the nasal regions in dark-reared chicks. This spatial representation of color recognition showed plastic changes, and the response pattern along the antero-posterior axis of the visual wulst altered according to the color the chick was imprinted to. These results indicate that the thalamofugal pathway is critical for learning the imprinting stimulus and that the visual wulst shows learning-related plasticity and may relay processed visual information to indicate the color of the imprint stimulus to the memory storage region, e.g., the intermediate medial mesopallium.
Phonological Concept Learning.
Moreton, Elliott; Pater, Joe; Pertsova, Katya
2017-01-01
Linguistic and non-linguistic pattern learning have been studied separately, but we argue for a comparative approach. Analogous inductive problems arise in phonological and visual pattern learning. Evidence from three experiments shows that human learners can solve them in analogous ways, and that human performance in both cases can be captured by the same models. We test GMECCS (Gradual Maximum Entropy with a Conjunctive Constraint Schema), an implementation of the Configural Cue Model (Gluck & Bower) in a Maximum Entropy phonotactic-learning framework (Goldwater & Johnson; Hayes & Wilson) with a single free parameter, against the alternative hypothesis that learners seek featurally simple algebraic rules ("rule-seeking"). We study the full typology of patterns introduced by Shepard, Hovland, and Jenkins ("SHJ"), instantiated as both phonotactic patterns and visual analogs, using unsupervised training. Unlike SHJ, Experiments 1 and 2 found that both phonotactic and visual patterns that depended on fewer features could be more difficult than those that depended on more features, as predicted by GMECCS but not by rule-seeking. GMECCS also correctly predicted performance differences between stimulus subclasses within each pattern. A third experiment tried supervised training (which can facilitate rule-seeking in visual learning) to elicit simple rule-seeking phonotactic learning, but cue-based behavior persisted. We conclude that similar cue-based cognitive processes are available for phonological and visual concept learning, and hence that studying either kind of learning can lead to significant insights about the other. Copyright © 2015 Cognitive Science Society, Inc.
Differentiating Visual from Response Sequencing during Long-term Skill Learning.
Lynch, Brighid; Beukema, Patrick; Verstynen, Timothy
2017-01-01
The dual-system model of sequence learning posits that during early learning there is an advantage for encoding sequences in sensory frames; however, it remains unclear whether this advantage extends to long-term consolidation. Using the serial RT task, we set out to distinguish the dynamics of learning sequential orders of visual cues from learning sequential responses. On each day, most participants learned a new mapping between a set of symbolic cues and responses made with one of four fingers, after which they were exposed to trial blocks of either randomly ordered cues or deterministic ordered cues (12-item sequence). Participants were randomly assigned to one of four groups (n = 15 per group): Visual sequences (same sequence of visual cues across training days), Response sequences (same order of key presses across training days), Combined (same serial order of cues and responses on all training days), and a Control group (a novel sequence each training day). Across 5 days of training, sequence-specific measures of response speed and accuracy improved faster in the Visual group than any of the other three groups, despite no group differences in explicit awareness of the sequence. The two groups that were exposed to the same visual sequence across days showed a marginal improvement in response binding that was not found in the other groups. These results indicate that there is an advantage, in terms of rate of consolidation across multiple days of training, for learning sequences of actions in a sensory representational space, rather than as motoric representations.
ERIC Educational Resources Information Center
Stauffer, Linda K.
2010-01-01
Given the visual-gestural nature of ASL, it is reasonable to assume that visualization abilities may be one predictor of aptitude for learning ASL. This study tested a hypothesis that visualization abilities are a foundational aptitude for learning a signed language and that measurements of these skills will increase as students progress from…
ERIC Educational Resources Information Center
Mather, Susan M.; Clark, M. Diane
2012-01-01
One of the ongoing challenges teachers of students who are deaf or hard of hearing face is managing the visual split attention implicit in multimedia learning. When a teacher presents various types of visual information at the same time, visual learners have no choice but to divide their attention among those materials and the teacher and…
Basic Visual Processes and Learning Disability.
ERIC Educational Resources Information Center
Leisman, Gerald
Representatives of a variety of disciplines concerned with either clinical or research problems in vision and learning disabilities present reviews and reports of relevant research and clinical approaches. Contributions are organized into four broad sections: basic processes, specific disorders, diagnosis of visually based problems in learning,…
Exploring Visuospatial Thinking in Chemistry Learning
ERIC Educational Resources Information Center
Wu, Hsin-Kai; Shah, Priti
2004-01-01
In this article, we examine the role of visuospatial cognition in chemistry learning. We review three related kinds of literature: correlational studies of spatial abilities and chemistry learning, students' conceptual errors and difficulties understanding visual representations, and visualization tools that have been designed to help overcome…
The effect of haptic guidance and visual feedback on learning a complex tennis task.
Marchal-Crespo, Laura; van Raai, Mark; Rauter, Georg; Wolf, Peter; Riener, Robert
2013-11-01
While haptic guidance can improve ongoing performance of a motor task, several studies have found that it ultimately impairs motor learning. However, some recent studies suggest that the haptic demonstration of optimal timing, rather than movement magnitude, enhances learning in subjects trained with haptic guidance. Timing of an action plays a crucial role in the proper accomplishment of many motor skills, such as hitting a moving object (a discrete timing task) or learning a velocity profile (a time-critical tracking task). The aim of the present study is to evaluate which feedback condition, visual feedback or haptic guidance, optimizes learning of the discrete and continuous elements of a timing task. The experiment consisted of performing a fast tennis forehand stroke in a virtual environment. A tendon-based parallel robot connected to the end of a racket was used to apply haptic guidance during training. In two different experiments, we evaluated which feedback condition was more adequate for learning: (1) a time-dependent discrete task, learning to start a tennis stroke, and (2) a tracking task, learning to follow a velocity profile. The effect that task difficulty and the subject's initial skill level have on the selection of the optimal training condition was further evaluated. Results showed that the training condition that maximizes learning of the discrete time-dependent motor task depends on the subject's initial skill level. Haptic guidance was especially suitable for less-skilled subjects and for especially difficult discrete tasks, while visual feedback seemed to benefit more-skilled subjects. Additionally, haptic guidance seemed to promote learning in the time-critical tracking task, while visual feedback tended to deteriorate performance independently of task difficulty and subjects' initial skill level.
Haptic guidance outperformed visual feedback, although additional studies are needed to further analyze the effect of other types of feedback visualization on motor learning of time-critical tasks.
Dynamics of EEG functional connectivity during statistical learning.
Tóth, Brigitta; Janacsek, Karolina; Takács, Ádám; Kóbor, Andrea; Zavecz, Zsófia; Nemeth, Dezso
2017-10-01
Statistical learning is a fundamental mechanism of the brain, which extracts and represents regularities of our environment. Statistical learning is crucial in predictive processing, and in the acquisition of perceptual, motor, cognitive, and social skills. Although previous studies have revealed competitive neurocognitive processes underlying statistical learning, the neural communication of the related brain regions (functional connectivity, FC) has not yet been investigated. The present study aimed to fill this gap by investigating FC networks that promote statistical learning in humans. Young adults (N=28) performed a statistical learning task while 128-channel EEG was acquired. The task involved probabilistic sequences, which made it possible to measure incidental/implicit learning of conditional probabilities. Phase synchronization in seven frequency bands was used to quantify FC between cortical regions during the first, second, and third periods of the learning task. Here we show that statistical learning is negatively correlated with FC of the anterior brain regions in slow (theta) and fast (beta) oscillations. These negative correlations increased as the learning progressed. Our findings provide evidence that dynamic antagonist brain networks serve as a hallmark of statistical learning. Copyright © 2017 Elsevier Inc. All rights reserved.
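Phase synchronization between two channels is commonly quantified with the phase-locking value (PLV); the sketch below is a minimal illustration on synthetic signals, not the authors' 128-channel pipeline (the 6 Hz and 23 Hz oscillations and the `phase_locking_value` helper are assumptions for demonstration):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV between two narrow-band signals, in [0, 1].
    1 means a perfectly constant phase difference (high synchrony)."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

t = np.linspace(0, 1, 500, endpoint=False)
a = np.sin(2 * np.pi * 6 * t)        # 6 Hz (theta-band) oscillation
b = np.sin(2 * np.pi * 6 * t + 0.5)  # same frequency, fixed phase lag: locked
c = np.sin(2 * np.pi * 23 * t)       # unrelated beta-band oscillation: unlocked
```

In a real FC analysis this statistic would be computed per frequency band and per channel pair, after band-pass filtering.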
Perceptual learning during action video game playing.
Green, C Shawn; Li, Renjie; Bavelier, Daphne
2010-04-01
Action video games have been shown to enhance behavioral performance on a wide variety of perceptual tasks, from those that require effective allocation of attentional resources across the visual scene, to those that demand the successful identification of fleetingly presented stimuli. Importantly, these effects have not only been shown in expert action video game players, but a causative link has been established between action video game play and enhanced processing through training studies. Although an account based solely on attention fails to capture the variety of enhancements observed after action game playing, a number of models of perceptual learning are consistent with the observed results, with behavioral modeling favoring the hypothesis that avid video game players are better able to form templates for, or extract the relevant statistics of, the task at hand. This may suggest that the neural site of learning is in areas where information is integrated and actions are selected; yet changes in low-level sensory areas cannot be ruled out. Copyright © 2009 Cognitive Science Society, Inc.
Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics
Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter
2010-01-01
Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the Gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by Gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the Gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
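The generative step of a Gaussian scale mixture can be sketched in a few lines. This is an illustrative simulation (Rayleigh mixer, two filters, made-up covariance), not the authors' inference procedure; it shows how a shared mixer produces the heavy-tailed marginals typical of filter responses to natural images:

```python
import numpy as np

rng = np.random.default_rng(0)

def gsm_sample(n, dim, rho=0.8):
    """Draw n samples from a Gaussian scale mixture: x = v * g,
    with g ~ N(0, C) and one positive mixer v shared by all filters."""
    cov = np.full((dim, dim), rho) + (1 - rho) * np.eye(dim)
    g = rng.multivariate_normal(np.zeros(dim), cov, size=n)
    v = rng.rayleigh(scale=1.0, size=(n, 1))  # shared mixer per sample
    return v * g

x = gsm_sample(50_000, dim=2)

def excess_kurtosis(a):
    a = a - a.mean()
    return (a**4).mean() / (a**2).mean() ** 2 - 3

# GSM marginals are heavy-tailed (positive excess kurtosis), unlike a Gaussian
k = excess_kurtosis(x[:, 0])
```

For a Rayleigh mixer the excess kurtosis is 3 in expectation, which is the kind of sparse, heavy-tailed statistic divisive normalization is meant to explain.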
Visual Cues, Verbal Cues and Child Development
ERIC Educational Resources Information Center
Valentini, Nadia
2004-01-01
In this article, the author discusses two strategies--visual cues (modeling) and verbal cues (short, accurate phrases) which are related to teaching motor skills in maximizing learning in physical education classes. Both visual and verbal cues are strong influences in facilitating and promoting day-to-day learning. Both strategies reinforce…
Learning from Chemical Visualizations: Comparing Generation and Selection
ERIC Educational Resources Information Center
Zhang, Zhihui Helen; Linn, Marcia C.
2013-01-01
Dynamic visualizations can make unseen phenomena such as chemical reactions visible but students need guidance to benefit from them. This study explores the value of generating drawings versus selecting among alternatives to guide students to learn chemical reactions from a dynamic visualization of hydrogen combustion as part of an online inquiry…
Contextual Cueing: Implicit Learning and Memory of Visual Context Guides Spatial Attention.
ERIC Educational Resources Information Center
Chun, Marvin M.; Jiang, Yuhong
1998-01-01
Six experiments involving a total of 112 college students demonstrate that a robust memory for visual context exists to guide spatial attention. Results show how implicit learning and memory of visual context can guide spatial attention toward task-relevant aspects of a scene. (SLD)
Construction of a VISUAL (VIdeo-SUpported Active Learning) Resource.
ERIC Educational Resources Information Center
Nicolson, Roderick I.; And Others
1994-01-01
Discussion of interactive video for educational purposes focuses on the development of a video-supported active learning (VISUAL) resource on voice disorders that used digitized video and an Apple Macintosh computer. User evaluations are reported, and potential applications for VISUAL resources are suggested. (Contains five references.) (LRW)
Attentional Modulation in Visual Cortex Is Modified during Perceptual Learning
ERIC Educational Resources Information Center
Bartolucci, Marco; Smith, Andrew T.
2011-01-01
Practicing a visual task commonly results in improved performance. Often the improvement does not transfer well to a new retinal location, suggesting that it is mediated by changes occurring in early visual cortex, and indeed neuroimaging and neurophysiological studies both demonstrate that perceptual learning is associated with altered activity…
Local image statistics: maximum-entropy constructions and perceptual salience
Victor, Jonathan D.; Conte, Mary M.
2012-01-01
The space of visual signals is high-dimensional and natural visual images have a highly complex statistical structure. While many studies suggest that only a limited number of image statistics are used for perceptual judgments, a full understanding of visual function requires analysis not only of the impact of individual image statistics, but also, how they interact. In natural images, these statistical elements (luminance distributions, correlations of low and high order, edges, occlusions, etc.) are intermixed, and their effects are difficult to disentangle. Thus, there is a need for construction of stimuli in which one or more statistical elements are introduced in a controlled fashion, so that their individual and joint contributions can be analyzed. With this as motivation, we present algorithms to construct synthetic images in which local image statistics—including luminance distributions, pair-wise correlations, and higher-order correlations—are explicitly specified and all other statistics are determined implicitly by maximum-entropy. We then apply this approach to measure the sensitivity of the human visual system to local image statistics and to sample their interactions. PMID:22751397
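As a toy version of introducing one local statistic in a controlled fashion, the sketch below builds a binary texture whose only specified statistic is the horizontal nearest-neighbour correlation; within each row, the resulting Markov chain is the maximum-entropy distribution given that single pairwise constraint. The construction and parameter values are illustrative, not the authors' algorithms:

```python
import numpy as np

rng = np.random.default_rng(1)

def texture_with_horizontal_correlation(h, w, p_same=0.8):
    """Binary texture in which each pixel repeats its left neighbour with
    probability p_same, giving row-wise nearest-neighbour correlation
    2*p_same - 1 while leaving all other statistics maximally random."""
    img = np.zeros((h, w), dtype=int)
    img[:, 0] = rng.integers(0, 2, size=h)
    for j in range(1, w):
        same = rng.random(h) < p_same
        img[:, j] = np.where(same, img[:, j - 1], 1 - img[:, j - 1])
    return img

img = texture_with_horizontal_correlation(256, 256, p_same=0.8)
# measured correlation between horizontally adjacent pixels (expected ~0.6)
corr = np.corrcoef(img[:, :-1].ravel(), img[:, 1:].ravel())[0, 1]
```

Specifying several interacting statistics at once, as the paper does, requires the full maximum-entropy machinery rather than this single-constraint shortcut.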
Context generalization in Drosophila visual learning requires the mushroom bodies
NASA Astrophysics Data System (ADS)
Liu, Li; Wolf, Reinhard; Ernst, Roman; Heisenberg, Martin
1999-08-01
The world is permanently changing. Laboratory experiments on learning and memory normally minimize this feature of reality, keeping all conditions except the conditioned and unconditioned stimuli as constant as possible. In the real world, however, animals need to extract from the universe of sensory signals the actual predictors of salient events by separating them from non-predictive stimuli (context). In principle, this can be achieved if only those sensory inputs that resemble the reinforcer in their temporal structure are taken as predictors. Here we study visual learning in the fly Drosophila melanogaster, using a flight simulator, and show that memory retrieval is, indeed, partially context-independent. Moreover, we show that the mushroom bodies, which are required for olfactory but not visual or tactile learning, effectively support context generalization. In visual learning in Drosophila, it appears that a facilitating effect of context cues for memory retrieval is the default state, whereas making recall context-independent requires additional processing.
[A technological device for optimizing the time taken for blind people to learn Braille].
Hernández, Cesar; Pedraza, Luis F; López, Danilo
2011-10-01
This project was aimed at designing and putting into practice an electronic prototype for reducing the initial time taken by visually handicapped people, especially children, to learn Braille. The project was mainly based on a prototype digital electronic device that identifies material written by a user in Braille and translates it through a voice synthesis system, producing synthesized words so as to determine whether the person's Braille writing has been correct. A global system for mobile communications (GSM) module was also incorporated into the device, allowing it to send text messages, an innovation in the field of articles for aiding visually handicapped people. The project's main result was an easily accessed and understandable prototype device that improved visually handicapped people's initial learning of Braille. The time taken for visually handicapped people to learn Braille became significantly reduced, whilst their interest increased, as did their concentration time regarding such learning.
Index of learning styles in a u.s. School of pharmacy.
Teevan, Colleen J; Li, Michael; Schlesselman, Lauren S
2011-04-01
The goal of this study was to assess for a predominance of learning styles among pharmacy students at an accredited U.S. school of pharmacy. Following approval by the Institutional Review Board, the Index of Learning Styles© was administered to 210 pharmacy students. The survey provides results within 4 domains: perception, input, processing, and understanding. Analyses were conducted to determine trends in student learning styles. Within the four domains, 84% of students showed a preference toward sensory perception, 66% toward visual input, and 74% toward sequential understanding. Students showed no significant preference for active or reflective processing. Preferences were of moderate strength for the sensing, visual, and sequential learning styles. Students showed preferences for sensing, visual, and sequential learning styles, with gender playing a role in learning style preferences. Faculty should be aware that, despite some preferences, a mix of learning styles exists. To address the preferences found, instructors should teach in a logical progression while adding visual aids. To account for the other learning styles found, instructors can offer other approaches and provide supplemental activities for those who would benefit from them. Further research is necessary to compare these learning styles to the teaching styles of pharmacy preceptors and faculty at schools of pharmacy.
Takemoto, Atsushi; Miwa, Miki; Koba, Reiko; Yamaguchi, Chieko; Suzuki, Hiromi; Nakamura, Katsuki
2015-04-01
Detailed information about the characteristics of learning behavior in marmosets is useful for future marmoset research. We trained 42 marmosets in visual discrimination and reversal learning. All marmosets could learn visual discrimination, and all but one could complete reversal learning, though some marmosets failed to touch the visual stimuli and were screened out. In 87% of measurements, the final percentage of correct responses was over 95%. We quantified performance with two measures: onset trial and dynamic interval. Onset trial represents the number of trials that elapsed before the marmoset started to learn. Dynamic interval represents the number of trials from the start before reaching the final percentage of correct responses. Both measures decreased drastically as a result of the formation of discrimination learning sets. In reversal learning, both measures worsened, but the effect on onset trial was far greater. The effects of age and sex were not significant, at least for the adolescent and young adult marmosets used here. Unexpectedly, experimental circumstance (in the colony or isolator) had only a subtle effect on performance. However, we found that marmosets from different families exhibited different learning process characteristics, suggesting some family effect on learning. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
Perceptual statistical learning over one week in child speech production.
Richtsmeier, Peter T; Goffman, Lisa
2017-07-01
What cognitive mechanisms account for the trajectory of speech sound development, in particular, gradually increasing accuracy during childhood? An intriguing potential contributor is statistical learning, a type of learning that has been studied frequently in infant perception but less often in child speech production. To assess the relevance of statistical learning to developing speech accuracy, we carried out a statistical learning experiment with four- and five-year-olds in which statistical learning was examined over one week. Children were familiarized with and tested on word-medial consonant sequences in novel words. There was only modest evidence for statistical learning, primarily in the first few productions of the first session. This initial learning effect nevertheless aligns with previous statistical learning research. Furthermore, the overall learning effect was similar to an estimate of weekly accuracy growth based on normative studies. The results implicate other important factors in speech sound development, particularly learning via production. Copyright © 2017 Elsevier Inc. All rights reserved.
Cong, Lin-Juan; Wang, Ru-Jie; Yu, Cong; Zhang, Jun-Yun
2016-01-01
Visual perceptual learning is known to be specific to the trained retinal location, feature, and task. However, location and feature specificity can be eliminated by double-training or TPE (training-plus-exposure) protocols, in which observers receive additional exposure to the transfer location or feature dimension via an irrelevant task besides the primary learning task. Here we tested whether these new training protocols could even make learning transfer across different tasks involving discrimination of basic visual features (e.g., orientation and contrast). Observers practiced a near-threshold orientation (or contrast) discrimination task. Following a TPE training protocol, they also received exposure to the transfer task via performing suprathreshold contrast (or orientation) discrimination in alternating blocks of trials in the same sessions. The results showed no evidence for significant learning transfer to the untrained near-threshold contrast (or orientation) discrimination task after discounting the pretest effects and the suprathreshold practice effects. These results thus do not support a hypothetical task-independent component in perceptual learning of basic visual features. They also set the boundary of the new training protocols in their capability to enable learning transfer.
Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi
2017-10-01
Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23 Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
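Equalizing perceived magnitudes across modalities via Stevens' power law amounts to matching psi = k * I^a across two exponents. A minimal sketch (the exponents and constants below are illustrative, not those used in the study):

```python
def stevens_magnitude(intensity, exponent, k=1.0):
    """Stevens' power law: perceived magnitude psi = k * intensity**exponent."""
    return k * intensity ** exponent

def match_intensity(i_a, exp_a, exp_b, k_a=1.0, k_b=1.0):
    """Physical intensity in modality B whose perceived magnitude equals that
    of intensity i_a in modality A (solve k_b * I**exp_b = psi_a for I)."""
    psi = stevens_magnitude(i_a, exp_a, k_a)
    return (psi / k_b) ** (1.0 / exp_b)

# Illustrative exponents only: 0.5 for a visual size cue, 0.67 for loudness.
i_b = match_intensity(4.0, exp_a=0.5, exp_b=0.67)
```

The matched intensities are generally unequal physically; only the perceived magnitudes are equated, which is the point of the equalization.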
Visual acuity and visual skills in Malaysian children with learning disabilities
Muzaliha, Mohd-Nor; Nurhamiza, Buang; Hussein, Adil; Norabibas, Abdul-Rani; Mohd-Hisham-Basrun, Jaafar; Sarimah, Abdullah; Leo, Seo-Wei; Shatriah, Ismail
2012-01-01
Background: There is limited data in the literature concerning the visual status and skills in children with learning disabilities, particularly within the Asian population. This study aimed to determine visual acuity and visual skills in children with learning disabilities in primary schools within the suburban Kota Bharu district in Malaysia. Methods: We examined 1010 children with learning disabilities aged between 8–12 years from 40 primary schools in the Kota Bharu district, Malaysia from January 2009 to March 2010. These children were identified based on their performance in a screening test known as the Early Intervention Class for Reading and Writing Screening Test conducted by the Ministry of Education, Malaysia. Complete ocular examinations and visual skills assessment included near point of convergence, amplitude of accommodation, accommodative facility, convergence break and recovery, divergence break and recovery, and developmental eye movement tests for all subjects. Results: A total of 4.8% of students had visual acuity worse than 6/12 (20/40), 14.0% had convergence insufficiency, 28.3% displayed poor accommodative amplitude, and 26.0% showed signs of accommodative infacility. A total of 12.1% of the students had poor convergence break, 45.7% displayed poor convergence recovery, 37.4% showed poor divergence break, and 66.3% were noted to have poor divergence recovery. The mean horizontal developmental eye movement was significantly prolonged. Conclusion: Although their visual acuity was satisfactory, nearly 30% of the children displayed accommodation problems including convergence insufficiency, poor accommodation, and accommodative infacility. Convergence and divergence recovery are the most affected visual skills in children with learning disabilities in Malaysia. PMID:23055674
The climate visualizer: Sense-making through scientific visualization
NASA Astrophysics Data System (ADS)
Gordin, Douglas N.; Polman, Joseph L.; Pea, Roy D.
1994-12-01
This paper describes the design of a learning environment, called the Climate Visualizer, intended to facilitate scientific sense-making in high school classrooms by providing students the ability to craft, inspect, and annotate scientific visualizations. The theoretical background for our design presents a view of learning as acquiring and critiquing cultural practices and stresses the need for students to appropriate the social and material aspects of practice when learning an area. This is followed by a description of the design of the Climate Visualizer, including detailed accounts of its provision of spatial and temporal context and the quantitative and visual representations it employs. A broader context is then explored by describing its integration into the high school science classroom. This discussion explores how visualizations can promote the creation of scientific theories, especially in conjunction with the Collaboratory Notebook, an embedded environment for creating and critiquing scientific theories and visualizations. Finally, we discuss the design trade-offs we have made in light of our theoretical orientation, and our hopes for further progress.
A dictionary learning approach for Poisson image deblurring.
Ma, Liyan; Moisan, Lionel; Yu, Jian; Zeng, Tieyong
2013-07-01
The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, sparse representations of images have recently been shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, the pixel-based total variation regularization term and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that in terms of visual quality, peak signal-to-noise ratio value and the method noise, the proposed algorithm outperforms state-of-the-art methods.
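As a point of reference for the Poisson data-fidelity term, the classical Richardson-Lucy multiplicative update, which uses that term alone without the paper's dictionary or total-variation priors, can be sketched in 1-D (the signal, PSF, and iteration count below are illustrative):

```python
import numpy as np

def richardson_lucy_1d(observed, psf, n_iter=50):
    """Richardson-Lucy deconvolution: the classical multiplicative update
    for a pure Poisson likelihood (no dictionary or TV prior)."""
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()
    est = observed.astype(float) + 1e-6  # strictly positive initialisation
    for _ in range(n_iter):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        est *= np.convolve(ratio, psf[::-1], mode="same")
    return est

truth = np.zeros(64)
truth[20], truth[40] = 5.0, 3.0
psf = np.array([0.25, 0.5, 0.25])
observed = np.convolve(truth, psf, mode="same")  # noiseless blur for brevity
est = richardson_lucy_1d(observed, psf)
```

The paper's contribution is to regularize exactly this kind of Poisson-likelihood recovery with learned patch sparsity and total variation, which matters once real noise is present.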
NASA Astrophysics Data System (ADS)
Haviz, M.
2018-04-01
The purpose of this article is to describe the design and development of an interactive CD on spermatogenesis. This is a research and development study. The development procedure comprised making an outline of the media program, making a flowchart, making a storyboard, gathering materials, programming, and finishing. The quantitative data obtained were analyzed by descriptive statistics. Qualitative data obtained were analyzed with Miles and Huberman techniques. The instrument used is a validation sheet. The result of the CD design with the Macromedia Flash MX program shows that 17 slides were generated. This prototype obtained a valid value after a self-review technique with many revisions, especially on sound and programming. This finding suggests that process-oriented spermatogenesis can be audio-visualized into a more comprehensive form of learning media. However, this interactive CD product needs further testing to determine consistency and resistance to revisions.
2007-06-01
information flow involved in network attacks. This kind of information can be invaluable in learning how best to set up and defend computer networks...administrators, and those interested in learning about securing networks a way to conceptualize this complex system of computing. NTAV3D will provide a three...teaching with visual and other components can make learning more effective" (Baxley et al, 2006). A hyperbox (Alpern and Carter, 1991) is
Seeing the Errors You Feel Enhances Locomotor Performance but Not Learning.
Roemmich, Ryan T; Long, Andrew W; Bastian, Amy J
2016-10-24
In human motor learning, it is thought that the more information we have about our errors, the faster we learn. Here, we show that additional error information can lead to improved motor performance without any concomitant improvement in learning. We studied split-belt treadmill walking that drives people to learn a new gait pattern using sensory prediction errors detected by proprioceptive feedback. When we also provided visual error feedback, participants acquired the new walking pattern far more rapidly and showed accelerated restoration of the normal walking pattern during washout. However, when the visual error feedback was removed during either learning or washout, errors reappeared with performance immediately returning to the level expected based on proprioceptive learning alone. These findings support a model with two mechanisms: a dual-rate adaptation process that learns invariantly from sensory prediction error detected by proprioception and a visual-feedback-dependent process that monitors learning and corrects residual errors but shows no learning itself. We show that our voluntary correction model accurately predicted behavior in multiple situations where visual feedback was used to change acquisition of new walking patterns while the underlying learning was unaffected. The computational and behavioral framework proposed here suggests that parallel learning and error correction systems allow us to rapidly satisfy task demands without necessarily committing to learning, as the relative permanence of learning may be inappropriate or inefficient when facing environments that are liable to change. Copyright © 2016 Elsevier Ltd. All rights reserved.
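The dual-rate component of the proposed model follows the familiar two-state formulation: a fast process and a slow process that both learn from the same error but retain it at different rates. A minimal simulation with illustrative parameters (in the spirit of such models, not the authors' fitted values), omitting the visual-feedback correction process:

```python
import numpy as np

def dual_rate(perturbation, a_f=0.59, b_f=0.2, a_s=0.992, b_s=0.02):
    """Two-state adaptation model. Each trial: error = perturbation minus the
    combined fast + slow state; both states learn from the error (b_f > b_s)
    but the fast state forgets quickly (a_f < a_s)."""
    x_f = x_s = 0.0
    errors = []
    for p in perturbation:
        e = p - (x_f + x_s)
        errors.append(e)
        x_f = a_f * x_f + b_f * e
        x_s = a_s * x_s + b_s * e
    return np.array(errors)

e = dual_rate(np.ones(200))  # constant unit perturbation
# errors shrink across trials but plateau at a nonzero residual,
# because retention factors below 1 leak some of the learned state
```

In the authors' full account, a separate visual-feedback process would cancel that residual error on-line without changing the underlying states.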
Long-Term Memory Performance in Adult ADHD.
Skodzik, Timo; Holling, Heinz; Pedersen, Anya
2017-02-01
Memory problems are a frequently reported symptom in adult ADHD, and it is well-documented that adults with ADHD perform poorly on long-term memory tests. However, the cause of this effect is still controversial. The present meta-analysis examined underlying mechanisms that may lead to long-term memory impairments in adult ADHD. We performed separate meta-analyses of measures of memory acquisition and long-term memory using both verbal and visual memory tests. In addition, the influence of potential moderator variables was examined. Adults with ADHD performed significantly worse than controls on verbal but not on visual long-term memory and memory acquisition subtests. The long-term memory deficit was strongly statistically related to the memory acquisition deficit. In contrast, no retrieval problems were observable. Our results suggest that memory deficits in adult ADHD reflect a learning deficit induced at the stage of encoding. Implications for clinical and research settings are presented.
Patch-Based Generative Shape Model and MDL Model Selection for Statistical Analysis of Archipelagos
NASA Astrophysics Data System (ADS)
Ganz, Melanie; Nielsen, Mads; Brandt, Sami
We propose a statistical generative shape model for archipelago-like structures. These kinds of structures occur, for instance, in medical images, where our intention is to model the appearance and shapes of calcifications in x-ray radiographs. The generative model is constructed by (1) learning a patch-based dictionary for possible shapes, (2) building up a time-homogeneous Markov model to model the neighbourhood correlations between the patches, and (3) automatic selection of the model complexity by the minimum description length principle. The generative shape model is proposed as a probability distribution of a binary image where the model is intended to facilitate sequential simulation. Our results show that a relatively simple model is able to generate structures visually similar to calcifications. Furthermore, we used the shape model as a shape prior in the statistical segmentation of calcifications, where the area overlap with the ground truth shapes improved significantly compared to the case where the prior was not used.
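The patch-dictionary plus time-homogeneous Markov chain idea can be sketched with a toy binary dictionary and a hand-picked transition matrix; all values below are illustrative stand-ins, not quantities learned from data as in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2x2 binary patch dictionary (background, blob interior, edge)
patches = np.array([
    [[0, 0], [0, 0]],
    [[1, 1], [1, 1]],
    [[0, 1], [0, 1]],
])
# Time-homogeneous transition matrix between patch indices (rows sum to 1)
T = np.array([
    [0.8, 0.1, 0.1],
    [0.2, 0.7, 0.1],
    [0.5, 0.4, 0.1],
])

def sample_strip(n_patches, start=0):
    """Sequentially simulate a horizontal strip of patches by running the
    Markov chain over dictionary indices, then concatenating the patches."""
    idx = [start]
    for _ in range(n_patches - 1):
        idx.append(rng.choice(len(patches), p=T[idx[-1]]))
    return np.hstack([patches[i] for i in idx])

strip = sample_strip(10)
```

The paper additionally selects the dictionary size and model complexity by minimum description length, which this sketch fixes by hand.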
Visuospatial Cognition in Electronic Learning
ERIC Educational Resources Information Center
Shah, Priti; Freedman, Eric G.
2003-01-01
Static, animated, and interactive visualizations are frequently used in electronic learning environments. In this article, we provide a brief review of research on visuospatial cognition relevant to designing e-learning tools that use these displays. In the first section, we discuss the possible cognitive benefits of the visualizations used…
Perceptual adaptation in the use of night vision goggles
NASA Technical Reports Server (NTRS)
Durgin, Frank H.; Proffitt, Dennis R.
1992-01-01
The image intensification (I²) systems studied for this report were the biocular AN/PVS-7 (NVG) and the binocular AN/AVS-6 (ANVIS). Both are quite impressive for purposes of revealing the structure of the environment in a fairly straightforward way in extremely low-light conditions. But these systems represent an unusual viewing medium. The perceptual information available through I² systems is different in a variety of ways from the typical input of everyday vision, and extensive training and practice are required for optimal use. Using this sort of system involves a kind of perceptual skill learning, but it may also involve visual adaptations that are not simply an extension of normal vision. For example, the visual noise evident in the goggles in very low-light conditions results in unusual statistical properties in visual input. Because we had recently discovered a strong and enduring aftereffect of perceived texture density which seemed to be sensitive to precisely the sorts of statistical distortions introduced by I² systems, it occurred to us that visual noise of this sort might be a strongly adapting stimulus for texture density and produce an aftereffect that extended into normal vision once the goggles were removed. We have not found any experimental evidence that I² systems produce texture density aftereffects. The nature of the texture density aftereffect is briefly explained, followed by an accounting of our studies of I² systems and our most recent work on the texture density aftereffect. A test for spatial frequency adaptation after exposure to NVGs is also reported, as is a study of perceived depth from motion (motion parallax) while wearing the biocular goggles. We conclude with a summary of our findings.
Sensori-motor experience leads to changes in visual processing in the developing brain.
James, Karin Harman
2010-03-01
Since Broca's studies on language processing, cortical functional specialization has been considered to be integral to efficient neural processing. A fundamental question in cognitive neuroscience concerns the type of learning that is required for functional specialization to develop. To address this issue with respect to the development of neural specialization for letters, we used functional magnetic resonance imaging (fMRI) to compare brain activation patterns in pre-school children before and after different letter-learning conditions: a sensori-motor group practised printing letters during the learning phase, while the control group practised visual recognition. Results demonstrated an overall left-hemisphere bias for processing letters in these pre-literate participants, but, more interestingly, showed enhanced blood oxygen-level-dependent activation in the visual association cortex during letter perception only after sensori-motor (printing) learning. It is concluded that sensori-motor experience augments processing in the visual system of pre-school children. The change of activation in these neural circuits provides important evidence that 'learning-by-doing' can lay the foundation for, and potentially strengthen, the neural systems used for visual letter recognition.
ERIC Educational Resources Information Center
Baxter, Mark G.; Browning, Philip G. F.; Mitchell, Anna S.
2008-01-01
Surgical disconnection of the frontal cortex and inferotemporal cortex severely impairs many aspects of visual learning and memory, including learning of new object-in-place scene memory problems, a monkey model of episodic memory. As part of a study of specialization within prefrontal cortex in visual learning and memory, we tested monkeys with…
STATISTICAL ESTIMATION AND VISUALIZATION OF GROUND-WATER CONTAMINATION DATA
This work presents methods of visualizing and animating statistical estimates of ground water and/or soil contamination over a region from observations of the contaminant for that region. The primary statistical methods used to produce the regional estimates are nonparametric re...
Redmond, Catherine; Davies, Carmel; Cornally, Deirdre; Adam, Ewa; Daly, Orla; Fegan, Marianne; O'Toole, Margaret
2018-01-01
Both nationally and internationally, concerns have been expressed over the adequacy of preparation of undergraduate nurses for the clinical skill of wound care. This project describes the educational evaluation of a series of Reusable Learning Objects (RLOs) as a blended learning approach to facilitate undergraduate nursing students' learning of wound care for competence development. Constructivism Learning Theory and the Cognitive Theory of Multimedia Learning informed the design of the RLOs, promoting active learner approaches. Clinically based case studies and visual data from two large university teaching hospitals provided the authentic learning materials required. Interactive exercises and formative feedback were incorporated into the educational resource. Student-perceived learning gains in knowledge, ability, and attitudes were measured using a quantitative pre- and post-test Wound Care Competency Outcomes Questionnaire. The RLO CETL Questionnaire was used to identify perceived learning enablers. Statistical and deductive thematic analyses inform the findings. Students (n=192) reported that their ability to meet the competency outcomes for wound care had increased significantly after engaging with the RLOs. Students rated the RLOs highly across all categories of perceived usefulness, impact, access and integration. These findings provide evidence that the use of RLOs for both knowledge-based and performance-based learning is effective. RLOs designed around clinically real case scenarios reflect the true complexities of wound care and offer innovative interventions in nursing curricula. Copyright © 2017 Elsevier Ltd. All rights reserved.
Joint Prior Learning for Visual Sensor Network Noisy Image Super-Resolution
Yue, Bo; Wang, Shuang; Liang, Xuefeng; Jiao, Licheng; Xu, Caijin
2016-01-01
The visual sensor network (VSN), a new type of wireless sensor network composed of low-cost wireless camera nodes, is being applied to numerous complex visual analyses in wild environments, such as visual surveillance and object recognition. However, the captured images/videos are often low-resolution and noisy, and such visual data cannot be directly used for advanced visual analysis. In this paper, we propose a joint-prior image super-resolution (JPISR) method using the expectation maximization (EM) algorithm to improve VSN image quality. Unlike conventional methods that only focus on upscaling images, JPISR alternately solves upscaling mapping and denoising in the E-step and M-step. To meet the requirement of the M-step, we introduce a novel non-local group-sparsity image filtering method to learn the explicit prior, and induce the geometric duality between images to learn the implicit prior. The EM algorithm inherently combines the explicit and implicit priors by joint learning. Moreover, JPISR does not rely on large external datasets for training, which is much more practical in a VSN. Extensive experiments show that JPISR outperforms five state-of-the-art methods in terms of PSNR, SSIM, and visual perception. PMID:26927114
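The alternating E-step/M-step structure described above can be caricatured in a few lines of NumPy. This is a toy sketch under simplifying assumptions (a block-average degradation model and a diffusion smoothness prior standing in for the learned explicit and implicit priors), not the authors' JPISR implementation:

```python
import numpy as np

def downsample(img, factor=2):
    """Block-average downsampling: the assumed degradation model."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def jpisr_like(low_res, factor=2, em_iters=10, lam=0.2):
    """Toy EM-style alternation: data consistency, then prior-based denoising."""
    est = np.kron(low_res, np.ones((factor, factor)))  # naive initial upscale
    for _ in range(em_iters):
        # "E-step" stand-in: push the estimate toward consistency with the
        # observed low-resolution image (back-projection of the residual).
        residual = low_res - downsample(est, factor)
        est = est + np.kron(residual, np.ones((factor, factor)))
        # "M-step" stand-in: local-averaging diffusion as a smoothness prior
        # (np.roll gives periodic boundaries, which is fine for a toy).
        nb = (np.roll(est, 1, 0) + np.roll(est, -1, 0) +
              np.roll(est, 1, 1) + np.roll(est, -1, 1)) / 4.0
        est = (1 - lam) * est + lam * nb
    return est
```

Each iteration first enforces consistency with the observed low-resolution image, then regularizes the estimate with the prior; that is the same division of labor the abstract assigns to the E-step and M-step.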
TRAPR: R Package for Statistical Analysis and Visualization of RNA-Seq Data.
Lim, Jae Hyun; Lee, Soo Youn; Kim, Ju Han
2017-03-01
High-throughput transcriptome sequencing, also known as RNA sequencing (RNA-Seq), is a standard technology for measuring gene expression with unprecedented accuracy. Numerous Bioconductor packages have been developed for the statistical analysis of RNA-Seq data. However, these tools focus on specific aspects of the data analysis pipeline, and are difficult to appropriately integrate with one another due to their disparate data structures and processing methods. They also lack visualization methods to confirm the integrity of the data and the process. In this paper, we propose an R-based RNA-Seq analysis pipeline called TRAPR, an integrated tool that facilitates the statistical analysis and visualization of RNA-Seq expression data. TRAPR provides various functions for data management, the filtering of low-quality data, normalization, transformation, statistical analysis, data visualization, and result visualization that allow researchers to build customized analysis pipelines.
Handwriting generates variable visual output to facilitate symbol learning.
Li, Julia X; James, Karin H
2016-03-01
Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing 2 hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then change neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: 3 involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and 3 involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception of both variable and similar forms. Comparisons across the 6 conditions (N = 72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change through brain-body-environment interactions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Self-Taught Low-Rank Coding for Visual Learning.
Li, Sheng; Li, Kang; Fu, Yun
2018-03-01
The lack of labeled data presents a common challenge in many computer vision and machine learning tasks. Semisupervised learning and transfer learning methods have been developed to tackle this challenge by utilizing auxiliary samples from the same domain or from a different domain, respectively. Self-taught learning, which is a special type of transfer learning, has fewer restrictions on the choice of auxiliary data. It has shown promising performance in visual learning. However, existing self-taught learning methods usually ignore the structure information in data. In this paper, we focus on building a self-taught coding framework, which can effectively utilize the rich low-level pattern information abstracted from the auxiliary domain, in order to characterize the high-level structural information in the target domain. By leveraging a high quality dictionary learned across auxiliary and target domains, the proposed approach learns expressive codings for the samples in the target domain. Since many types of visual data have been proven to contain subspace structures, a low-rank constraint is introduced into the coding objective to better characterize the structure of the given target set. The proposed representation learning framework is called self-taught low-rank (S-Low) coding, which can be formulated as a nonconvex rank-minimization and dictionary learning problem. We devise an efficient majorization-minimization augmented Lagrange multiplier algorithm to solve it. Based on the proposed S-Low coding mechanism, both unsupervised and supervised visual learning algorithms are derived. Extensive experiments on five benchmark data sets demonstrate the effectiveness of our approach.
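The low-rank machinery at the heart of objectives like S-Low coding is the nuclear-norm proximal step, which ALM-style solvers such as the one described above apply repeatedly. The following is a minimal illustrative sketch of that building block, not the S-Low algorithm itself:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm.

    Shrinking singular values by tau (and clipping at zero) is the standard
    inner step of augmented-Lagrange-multiplier solvers for
    rank-minimization problems like the one formulated for S-Low coding.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Applied to a noisy matrix whose signal is low-rank, a suitable threshold zeroes out the noise-dominated singular values and returns an approximately low-rank reconstruction; an ALM solver wraps this step in a loop with dual-variable updates.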
Learning the Language of Statistics: Challenges and Teaching Approaches
ERIC Educational Resources Information Center
Dunn, Peter K.; Carey, Michael D.; Richardson, Alice M.; McDonald, Christine
2016-01-01
Learning statistics requires learning the language of statistics. Statistics draws upon words from general English, mathematical English, discipline-specific English and words used primarily in statistics. This leads to many linguistic challenges in teaching statistics and the way in which the language is used in statistics creates an extra layer…
Visual-Verbal Redundancy Effects on Television News Learning.
ERIC Educational Resources Information Center
Reese, Stephen D.
1984-01-01
Discusses methodology and results of a study focusing on improving television's informing process by examining effects of combining visual and captioned information with a reporter's script. Results indicate redundant pictures and words enhance learning, while adding redundant print information either had no effect or detracted from learning. (MBR)
Molecular and Cellular Biology Animations: Development and Impact on Student Learning
ERIC Educational Resources Information Center
McClean, Phillip; Johnson, Christina; Rogers, Roxanne; Daniels, Lisa; Reber, John; Slator, Brian M.; Terpstra, Jeff; White, Alan
2005-01-01
Educators often struggle when teaching cellular and molecular processes because typically they have only two-dimensional tools to teach something that plays out in four dimensions. Learning research has demonstrated that visualizing processes in three dimensions aids learning, and animations are effective visualization tools for novice learners…
The Contribution of Visualization to Learning Computer Architecture
ERIC Educational Resources Information Center
Yehezkel, Cecile; Ben-Ari, Mordechai; Dreyfus, Tommy
2007-01-01
This paper describes a visualization environment and associated learning activities designed to improve learning of computer architecture. The environment, EasyCPU, displays a model of the components of a computer and the dynamic processes involved in program execution. We present the results of a research program that analysed the contribution of…
Virtual Manipulatives: Tools for Teaching Mathematics to Students with Learning Disabilities
ERIC Educational Resources Information Center
Shin, Mikyung; Bryant, Diane P.; Bryant, Brian R.; McKenna, John W.; Hou, Fangjuan; Ok, Min Wook
2017-01-01
Many students with learning disabilities demonstrate difficulty in developing a conceptual understanding of mathematical topics. Researchers recommend using visual models to support student learning of the concepts and skills necessary to complete abstract and symbolic mathematical problems. Virtual manipulatives (i.e., interactive visual models)…
Visual Modelling of Learning Processes
ERIC Educational Resources Information Center
Copperman, Elana; Beeri, Catriel; Ben-Zvi, Nava
2007-01-01
This paper introduces various visual models for the analysis and description of learning processes. The models analyse learning on two levels: the dynamic level (as a process over time) and the functional level. Two types of model for dynamic modelling are proposed: the session trace, which documents a specific learner in a particular learning…
Visual Productions and Student Learning.
ERIC Educational Resources Information Center
Bazeli, Marilyn
When students become actively involved in technology productions they develop learning skills, communication skills, and visual analysis skills, all of which are applied to real-life learning within the classroom curriculum. Students participate in all stages of the production projects, which proves to be motivating for the students and allows the…
Using a Self-Administered Visual Basic Software Tool To Teach Psychological Concepts.
ERIC Educational Resources Information Center
Strang, Harold R.; Sullivan, Amie K.; Schoeny, Zahrl G.
2002-01-01
Introduces LearningLinks, a Visual Basic software tool that allows teachers to create individualized learning modules that use constructivist and behavioral learning principles. Describes field testing of undergraduates at the University of Virginia that tested a module designed to improve understanding of the psychological concepts of…
ERIC Educational Resources Information Center
Hew, Soon-Hin; Ohki, Mitsuru
2004-01-01
This study examines the effectiveness of imagery and electronic visual feedback in facilitating students' acquisition of Japanese pronunciation skills. The independent variables, animated graphic annotation (AGA) and immediate visual feedback (IVF) were integrated into a Japanese computer-assisted language learning (JCALL) program focused on the…
Learning-Related Vision Problems: How Visual Processing Affects Reading Efficiency
ERIC Educational Resources Information Center
Solan, Harold A.
2004-01-01
Research during the past decade lends support to the notion that visual as well as phonological deficits are significantly correlated with reading and learning disorders. However, from the variety of visual anomalies discussed, it soon becomes evident that vision, itself, is not a unitary disorder. In this review, the multifaceted nature of…
ERIC Educational Resources Information Center
Tillmanns, Tanja; Holland, Charlotte; Filho, Alfredo Salomão
2017-01-01
This paper presents the design criteria for Visual Cues--visual stimuli that are used in combination with other pedagogical processes and tools in Disruptive Learning interventions in sustainability education--to disrupt learners' existing frames of mind and help re-orient learners' mind-sets towards sustainability. The theory of Disruptive…
ERIC Educational Resources Information Center
Dib, Hazar; Adamo-Villani, Nicoletta; Garver, Stephen
2014-01-01
Many benefits have been claimed for visualizations, a general assumption being that learning is facilitated. However, several researchers argue that little is known about the cognitive value of graphical representations, be they schematic visualizations, such as diagrams or more realistic, such as virtual reality. The study reported in the paper…
Images in Language: Metaphors and Metamorphoses. Visual Learning. Volume 1
ERIC Educational Resources Information Center
Benedek, Andras, Ed.; Nyiri, Kristof, Ed.
2011-01-01
Learning and teaching are faced with radically new challenges in today's rapidly changing world and its deeply transformed communicational environment. We are living in an era of images. Contemporary visual technology--film, video, interactive digital media--is promoting but also demanding a new approach to education: the age of visual learning…
The Nature of Experience Determines Object Representations in the Visual System
ERIC Educational Resources Information Center
Wong, Yetta K.; Folstein, Jonathan R.; Gauthier, Isabel
2012-01-01
Visual perceptual learning (PL) and perceptual expertise (PE) traditionally lead to different training effects and recruit different brain areas, but reasons for these differences are largely unknown. Here, we tested how the learning history influences visual object representations. Two groups were trained with tasks typically used in PL or PE…
Auditory-Visual Speech Integration by Adults with and without Language-Learning Disabilities
ERIC Educational Resources Information Center
Norrix, Linda W.; Plante, Elena; Vance, Rebecca
2006-01-01
Auditory and auditory-visual (AV) speech perception skills were examined in adults with and without language-learning disabilities (LLD). The AV stimuli consisted of congruent consonant-vowel syllables (auditory and visual syllables matched in terms of syllable being produced) and incongruent McGurk syllables (auditory syllable differed from…
Behind Mathematical Learning Disabilities: What about Visual Perception and Motor Skills?
ERIC Educational Resources Information Center
Pieters, Stefanie; Desoete, Annemie; Roeyers, Herbert; Vanderswalmen, Ruth; Van Waelvelde, Hilde
2012-01-01
In a sample of 39 children with mathematical learning disabilities (MLD) and 106 typically developing controls belonging to three control groups of three different ages, we found that visual perception, motor skills and visual-motor integration explained a substantial proportion of the variance in either number fact retrieval or procedural…
Perceptual and academic patterns of learning-disabled/gifted students.
Waldron, K A; Saphire, D G
1992-04-01
This research explored the ways gifted children with learning disabilities perceive and recall auditory and visual input and apply this information to reading, mathematics, and spelling. Twenty-four learning-disabled/gifted children and a matched control group of normally achieving gifted students were tested for oral reading, word recognition and analysis, listening comprehension, and spelling. In mathematics, they were tested for numeration, mental and written computation, word problems, and numerical reasoning. To explore perception and memory skills, students were administered formal tests of visual and auditory memory as well as auditory discrimination of sounds. Their responses to reading and to mathematical computations were further considered for evidence of problems in visual discrimination, visual sequencing, and visual spatial areas. Analyses indicated that these learning-disabled/gifted students were significantly weaker than controls in their decoding skills, in spelling, and in most areas of mathematics. They were also significantly weaker in auditory discrimination and memory, and in visual discrimination, sequencing, and spatial abilities. Conclusions are that these underlying perceptual and memory deficits may be related to students' academic problems.
Born, Jannis; Galeazzi, Juan M; Stringer, Simon M
2017-01-01
A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centred neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centred visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space.
With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning in VisNet.
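The core claim, that a memory-trace-free Hebbian rule plus spatial overlap between successive images suffices for continuous transformation learning, can be illustrated with a toy network. This sketch is not VisNet; the winner-take-all step, the weight normalisation, and the two hand-coded "retinal" inputs are simplifying assumptions:

```python
import numpy as np

def hebbian_ct_step(w, pre, lr=0.5):
    """One purely Hebbian update with winner-take-all competition.

    Nothing in the rule refers to earlier activity: only the current
    pre- and post-synaptic firing enters the weight change.
    """
    post = w @ pre
    k = int(np.argmax(post))              # competition selects one output neuron
    w[k] = w[k] + lr * post[k] * pre      # Hebb: strengthen co-active synapses
    w[k] = w[k] / np.linalg.norm(w[k])    # normalisation keeps weights bounded
    return w, k

# Two successive "retinal" images of the same hand-object configuration;
# the shift leaves their active inputs overlapping (indices 2-3).
rng = np.random.default_rng(0)
w = np.ones((5, 12)) + 0.01 * rng.random((5, 12))   # near-uniform initial weights
w = w / np.linalg.norm(w, axis=1, keepdims=True)
image = np.zeros(12); image[0:4] = 1.0              # configuration at one position
shifted = np.zeros(12); shifted[2:6] = 1.0          # overlapping shifted view
for x in (image, shifted, image, shifted):
    w, winner = hebbian_ct_step(w, x)
```

Because the two views overlap, the output neuron that wins on the first image keeps winning on the shifted one, so both retinal positions become bound to the same neuron, which is the CT mechanism the abstract describes.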
Feature reliability determines specificity and transfer of perceptual learning in orientation search.
Yashar, Amit; Denison, Rachel N
2017-12-01
Training can modify the visual system to produce a substantial improvement on perceptual tasks and therefore has applications for treating visual deficits. Visual perceptual learning (VPL) is often specific to the trained feature, which gives insight into processes underlying brain plasticity, but limits VPL's effectiveness in rehabilitation. Under what circumstances VPL transfers to untrained stimuli is poorly understood. Here we report a qualitatively new phenomenon: intrinsic variation in the representation of features determines the transfer of VPL. Orientations around cardinal are represented more reliably than orientations around oblique in V1, which has been linked to behavioral consequences such as visual search asymmetries. We studied VPL for visual search of near-cardinal or oblique targets among distractors of the other orientation while controlling for other display and task attributes, including task precision, task difficulty, and stimulus exposure. Learning was the same in all training conditions; however, transfer depended on the orientation of the target, with full transfer of learning from near-cardinal to oblique targets but not the reverse. To evaluate the idea that representational reliability was the key difference between the orientations in determining VPL transfer, we created a model that combined orientation-dependent reliability, improvement of reliability with learning, and an optimal search strategy. Modeling suggested that not only search asymmetries but also the asymmetric transfer of VPL depended on preexisting differences between the reliability of near-cardinal and oblique representations. Transfer asymmetries in model behavior also depended on having different learning rates for targets and distractors, such that greater learning for low-reliability distractors facilitated transfer. 
These findings suggest that training on sensory features with intrinsically low reliability may maximize the generalizability of learning in complex visual environments.
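The reliability account sketched above can be illustrated with a simple max-rule search observer in which each orientation's encoding noise plays the role of its (un)reliability. This is a hypothetical stand-in for intuition, not the authors' model:

```python
import numpy as np

def search_accuracy(target_sd, distractor_sd, n_items=8, trials=20000, seed=0):
    """Max-rule observer: report the display item with the largest evidence.

    A smaller sd means a more reliable internal representation of that
    orientation; the target carries signal strength 1.0, distractors 0.0.
    """
    rng = np.random.default_rng(seed)
    target = 1.0 + target_sd * rng.standard_normal(trials)
    distractors = distractor_sd * rng.standard_normal((trials, n_items - 1))
    return float(np.mean(target > distractors.max(axis=1)))
```

Assigning near-cardinal stimuli a smaller sd than oblique ones reproduces a search asymmetry; if learning is modeled as a reduction in sd, then how much room there is to improve, and hence how learning transfers, depends on the pre-training reliabilities, in the spirit of the asymmetric transfer reported above.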
Interactions between attention, context and learning in primary visual cortex.
Gilbert, C; Ito, M; Kapadia, M; Westheimer, G
2000-01-01
Attention in early visual processing engages the higher order, context dependent properties of neurons. Even at the earliest stages of visual cortical processing neurons play a role in intermediate level vision - contour integration and surface segmentation. The contextual influences mediating this process may be derived from long range connections within primary visual cortex (V1). These influences are subject to perceptual learning, and are strongly modulated by visuospatial attention, which is itself a learning dependent process. The attentional influences may involve interactions between feedback and horizontal connections in V1. V1 is therefore a dynamic and active processor, subject to top-down influences.