Soulé, Jonathan; Penke, Zsuzsa; Kanhema, Tambudzai; Alme, Maria Nordheim; Laroche, Serge; Bramham, Clive R.
2008-01-01
Long-term recognition memory requires protein synthesis, but little is known about the coordinate regulation of specific genes. Here, we examined expression of the plasticity-associated immediate early genes (Arc, Zif268, and Narp) in the dentate gyrus following long-term object-place recognition learning in rats. RT-PCR analysis from dentate gyrus tissue collected shortly after training did not reveal learning-specific changes in Arc mRNA expression. In situ hybridization and immunohistochemistry were therefore used to assess possible sparse effects on gene expression. Learning about objects increased the density of granule cells expressing Arc, and to a lesser extent Narp, specifically in the dorsal blade of the dentate gyrus, while Zif268 expression was elevated across both blades. Thus, object-place recognition triggers rapid, blade-specific upregulation of plasticity-associated immediate early genes. Furthermore, Western blot analysis of dentate gyrus homogenates demonstrated concomitant upregulation of three postsynaptic density proteins (Arc, PSD-95, and α-CaMKII) with key roles in long-term synaptic plasticity and long-term memory. PMID:19190776
Rats Fed a Diet Rich in Fats and Sugars Are Impaired in the Use of Spatial Geometry.
Tran, Dominic M D; Westbrook, R Frederick
2015-12-01
A diet rich in fats and sugars is associated with cognitive deficits in people, and rodent models have shown that such a diet produces deficits on tasks assessing spatial learning and memory. Spatial navigation is guided by two distinct types of information: geometrical, such as distance and direction, and featural, such as luminance and pattern. To clarify the nature of diet-induced spatial impairments, we provided rats with standard chow supplemented with sugar water and a range of energy-rich foods eaten by people, and then we assessed their place- and object-recognition memory. Rats exposed to this diet performed comparably with control rats fed only chow on object recognition but worse on place recognition. This impairment on the place-recognition task was present after only a few days on the diet and persisted across tests. Critically, this spatial impairment was specific to the processing of distance and direction. © The Author(s) 2015.
Grossberg, Stephen
2015-09-24
This article provides an overview of neural models of synaptic learning and memory whose expression in adaptive behavior depends critically on the circuits and systems in which the synapses are embedded. It reviews Adaptive Resonance Theory, or ART, models that use excitatory matching and match-based learning to achieve fast category learning and whose learned memories are dynamically stabilized by top-down expectations, attentional focusing, and memory search. ART clarifies mechanistic relationships between consciousness, learning, expectation, attention, resonance, and synchrony. ART models are embedded in ARTSCAN architectures that unify processes of invariant object category learning, recognition, spatial and object attention, predictive remapping, and eye movement search, and that clarify how conscious object vision and recognition may fail during perceptual crowding and parietal neglect. The generality of learned categories depends upon a vigilance process that is regulated by acetylcholine via the nucleus basalis. Vigilance can get stuck at too high or too low values, thereby causing learning problems in autism and medial temporal amnesia. Similar synaptic learning laws support qualitatively different behaviors: Invariant object category learning in the inferotemporal cortex; learning of grid cells and place cells in the entorhinal and hippocampal cortices during spatial navigation; and learning of time cells in the entorhinal-hippocampal system during adaptively timed conditioning, including trace conditioning. Spatial and temporal processes through the medial and lateral entorhinal-hippocampal system seem to be carried out with homologous circuit designs. Variations of a shared laminar neocortical circuit design have modeled 3D vision, speech perception, and cognitive working memory and learning. A complementary kind of inhibitory matching and mismatch learning controls movement. This article is part of a Special Issue entitled SI: Brain and Memory. Copyright © 2014 Elsevier B.V. All rights reserved.
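A minimal ART1-style sketch (binary inputs, fast learning) can make the vigilance mechanism described above concrete. This is an illustrative toy in Python, not Grossberg's full model; the function and parameter names are our own.

```python
import numpy as np

def art1_categorize(inputs, rho=0.7, alpha=0.001):
    """Minimal ART1-style fast category learning on binary input vectors.

    rho is the vigilance parameter in [0, 1]: higher values force finer,
    more specific categories; lower values yield broader ones.
    Returns the learned prototypes and a category index for each input.
    """
    categories = []     # binary prototype (top-down expectation) per category
    assignments = []
    for I in inputs:
        I = np.asarray(I, dtype=float)
        # Choice function: rank committed categories by bottom-up match.
        scores = [np.sum(np.minimum(I, w)) / (alpha + np.sum(w)) for w in categories]
        chosen = None
        for j in np.argsort(scores)[::-1]:
            w = categories[j]
            # Vigilance (match) test: does the expectation fit the input well enough?
            if np.sum(np.minimum(I, w)) / max(I.sum(), 1.0) >= rho:
                categories[j] = np.minimum(I, w)   # fast learning: intersect with prototype
                chosen = j
                break
        if chosen is None:                         # no resonance: recruit a new category
            categories.append(I.copy())
            chosen = len(categories) - 1
        assignments.append(chosen)
    return categories, assignments

# Higher vigilance (rho closer to 1) produces many narrow categories; lower
# vigilance produces fewer, more general ones -- the trade-off the abstract
# links to acetylcholine-regulated vigilance.
patterns = np.random.binomial(1, 0.5, size=(20, 16))
prototypes, labels = art1_categorize(patterns, rho=0.8)
print(len(prototypes), "categories for 20 patterns")
```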
Tran, Dominic M D; Westbrook, R Frederick
2017-03-01
A high-fat high-sugar (HFHS) diet is associated with cognitive deficits in people and produces spatial learning and memory deficits in rodents. Notably, such diets rapidly impair place-, but not object-recognition memory in rats within one week of exposure. Three experiments examined whether this impairment was reversed by removal of the diet, or prevented by pre-diet training. Experiment 1 showed that rats switched from HFHS to chow recovered from the place-recognition impairment that they displayed while on HFHS. Experiment 2 showed that control rats ("Untrained") that were exposed to an empty testing arena while on chow were impaired in place-recognition when switched to HFHS and tested for the first time. However, rats tested ("Trained") on the place and object task while on chow were protected from the diet-induced deficit and maintained good place-recognition when switched to HFHS. Experiment 3 examined the conditions of this protection effect by training rats in a square arena while on chow, and testing them in a rectangular arena while on HFHS. We have previously demonstrated that chow rats, but not HFHS rats, show geometry-based reorientation on a rectangular arena place-recognition task (Tran & Westbrook, 2015). Experiment 3 assessed whether rats switched to the HFHS diet after training on the place and object tasks in a square arena would show geometry-based reorientation in a rectangular arena. The protective benefit of training was replicated in the square arena, but both Untrained and Trained HFHS rats failed to show geometry-based reorientation in the rectangular arena. These findings are discussed in relation to the specificity of the training effect, the role of the hippocampus in diet-induced deficits, and their implications for dietary effects on cognition in people. Copyright © 2016 Elsevier Ltd. All rights reserved.
2016-11-01
Introduction Fragile X syndrome is the leading cause of intellectual disability resulting from a single gene mutation...performance, which measures motor learning and coordination. Treatment with metformin did not significantly affect performance in the rotarod task (Fig 5...marble burying, novel object recognition, object place memory, and reversal learning in the water Y maze (data not shown). It has been previously
Belcher, Annabelle M; Harrington, Rebecca A; Malkova, Ludise; Mishkin, Mortimer
2006-01-01
Earlier studies found that recognition memory for object-place associations was impaired in patients with relatively selective hippocampal damage (Vargha-Khadem et al., Science 1997; 277:376-380), but was unaffected after selective hippocampal lesions in monkeys (Malkova and Mishkin, J Neurosci 2003; 23:1956-1965). A potentially important methodological difference between the two studies is that the patients were required to remember a set of 20 object-place associations for several minutes, whereas the monkeys had to remember only two such associations at a time, and only for a few seconds. To approximate more closely the task given to the patients, we trained monkeys on several successive sets of 10 object-place pairs each, with each set requiring learning across days. Despite the increased associative memory demands, monkeys given hippocampal lesions were unimpaired relative to their unoperated controls, suggesting that differences other than set size and memory duration underlie the different outcomes in the human and animal studies. (c) 2005 Wiley-Liss, Inc.
The relationships between trait anxiety, place recognition memory, and learning strategy.
Hawley, Wayne R; Grissom, Elin M; Dohanich, Gary P
2011-01-20
Rodents learn to navigate mazes using various strategies that are governed by specific regions of the brain. The type of strategy used when learning to navigate a spatial environment is moderated by a number of factors including emotional states. Heightened anxiety states, induced by exposure to stressors or administration of anxiogenic agents, have been found to bias male rats toward the use of a striatum-based stimulus-response strategy rather than a hippocampus-based place strategy. However, no study has yet examined the relationship between natural anxiety levels, or trait anxiety, and the type of learning strategy used by rats on a dual-solution task. In the current experiment, levels of inherent anxiety were measured in an open field and compared to performance on two separate cognitive tasks, a Y-maze task that assessed place recognition memory, and a visible platform water maze task that assessed learning strategy. Results indicated that place recognition memory on the Y-maze correlated with the use of place learning strategy on the water maze. Furthermore, lower levels of trait anxiety correlated positively with better place recognition memory and with the preferred use of place learning strategy. Therefore, competency in place memory and bias in place strategy are linked to the levels of inherent anxiety in male rats. Copyright © 2010 Elsevier B.V. All rights reserved.
Early handling effect on female rat spatial and non-spatial learning and memory.
Plescia, Fulvio; Marino, Rosa A M; Navarra, Michele; Gambino, Giuditta; Brancato, Anna; Sardo, Pierangelo; Cannizzaro, Carla
2014-03-01
This study aims to provide insight into the effects of early handling procedures on learning and memory performance in adult female rats. Early handling procedures were started on postnatal day 2 and continued until day 21, and consisted of daily 15-min separations of the dams from their litters. Assessment of declarative memory was carried out in the novel-object recognition task; spatial learning, reference memory, and working memory were evaluated in the Morris water maze (MWM). Our results indicate that early handling induced an enhancement in: (1) declarative memory, in the object recognition task, at both 1 h and 24 h intervals; (2) reference memory in the probe test and working memory and behavioral flexibility in the "single-trial and four-trial place learning paradigm" of the MWM. Short-term separation, by increasing maternal care, causes a dampening of the HPA axis response in the pups. A modulated activation of the stress response may help to protect brain structures involved in cognitive function. In conclusion, this study shows the long-term effects of a brief maternal separation in enhancing object recognition, spatial reference, and working memory in female rats, underscoring the impact of early environmental experiences and the consequent maternal care on behavioral adaptive mechanisms in adulthood. Copyright © 2013 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Boisselier, Lise; Ferry, Barbara; Gervais, Rémi
2017-01-01
The hippocampal formation has been extensively described as a key component for object recognition in conjunction with place and context. The present study aimed at describing neural mechanisms in the hippocampal formation that support olfactory-tactile (OT) object discrimination in a task where space and context were not taken into account. The…
Mechanisms of object recognition: what we have learned from pigeons
Soto, Fabian A.; Wasserman, Edward A.
2014-01-01
Behavioral studies of object recognition in pigeons have been conducted for 50 years, yielding a large body of data. Recent work has been directed toward synthesizing this evidence and understanding the visual, associative, and cognitive mechanisms that are involved. The outcome is that pigeons are likely to be the non-primate species for which the computational mechanisms of object recognition are best understood. Here, we review this research and suggest that a core set of mechanisms for object recognition might be present in all vertebrates, including pigeons and people, making pigeons an excellent candidate model to study the neural mechanisms of object recognition. Behavioral and computational evidence suggests that error-driven learning participates in object category learning by pigeons and people, and recent neuroscientific research suggests that the basal ganglia, which are homologous in these species, may implement error-driven learning of stimulus-response associations. Furthermore, learning of abstract category representations can be observed in pigeons and other vertebrates. Finally, there is evidence that feedforward visual processing, a central mechanism in models of object recognition in the primate ventral stream, plays a role in object recognition by pigeons. We also highlight differences between pigeons and people in object recognition abilities, and propose candidate adaptive specializations which may explain them, such as holistic face processing and rule-based category learning in primates. From a modern comparative perspective, such specializations are to be expected regardless of the model species under study. The fact that we have a good idea of which aspects of object recognition differ in people and pigeons should be seen as an advantage over other animal models. From this perspective, we suggest that there is much to learn about human object recognition from studying the “simple” brains of pigeons. PMID:25352784
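The error-driven learning invoked here can be illustrated with a minimal delta-rule (Rescorla-Wagner-style) sketch of stimulus-response association learning. This is a generic, hypothetical illustration, not the specific model the authors evaluate; the data and names are invented for the example.

```python
import numpy as np

def delta_rule_train(features, labels, lr=0.1, epochs=50):
    """Error-driven learning of stimulus-response weights.

    features: (n_trials, n_features) binary or real stimulus codes
    labels:   (n_trials,) 0/1 category responses
    Weights change in proportion to the prediction error, the core property
    shared by Rescorla-Wagner learning and basal-ganglia-style reinforcement
    of stimulus-response associations.
    """
    w = np.zeros(features.shape[1])
    for _ in range(epochs):
        for x, y in zip(features, labels):
            pred = 1.0 / (1.0 + np.exp(-w @ x))   # current response strength
            w += lr * (y - pred) * x              # error-driven update
    return w

# Hypothetical toy data: 100 stimuli with 8 binary features, category determined by feature 0.
X = np.random.binomial(1, 0.5, size=(100, 8))
y = X[:, 0].astype(float)
weights = delta_rule_train(X, y)
print("learned weights:", np.round(weights, 2))
```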
Kesby, James P; Markou, Athina; Semenova, Svetlana
2015-01-01
Methamphetamine abuse is common among individuals infected by human immunodeficiency virus (HIV). Neurocognitive outcomes tend to be worse in methamphetamine users with HIV. However, it is unclear whether discrete cognitive domains are susceptible to impairment after combined HIV infection and methamphetamine abuse. The expression of HIV/gp120 protein induces neuropathology in mice similar to HIV-induced pathology in humans. We investigated the separate and combined effects of methamphetamine exposure and gp120 expression on cognitive function in transgenic (gp120-tg) and control mice. The mice underwent an escalating methamphetamine binge regimen and were tested in novel object/location recognition, object-in-place recognition, and Barnes maze tests. gp120 expression disrupted performance in the object-in-place test (i.e. similar time spent with all objects, regardless of location), indicating deficits in associative recognition memory. gp120 expression also altered reversal learning in the Barnes maze, suggesting impairments in executive function. Methamphetamine exposure impaired spatial strategy in the Barnes maze, indicating deficits in spatial learning. Methamphetamine-exposed gp120-tg mice had the lowest spatial strategy scores in the final acquisition trials in the Barnes maze, suggesting greater deficits in spatial learning than all of the other groups. Although HIV infection involves interactions between multiple proteins and processes, in addition to gp120, our findings in gp120-tg mice suggest that humans with the dual insult of HIV infection and methamphetamine abuse may exhibit a broader spectrum of cognitive deficits than those with either factor alone. Depending on the cognitive domain, the combination of both insults may exacerbate deficits in cognitive performance compared with each individual insult. Copyright © 2014 Elsevier B.V. and ECNP. All rights reserved.
Lateral entorhinal cortex is necessary for associative but not nonassociative recognition memory
Wilson, David IG; Watanabe, Sakurako; Milner, Helen; Ainge, James A
2013-01-01
The lateral entorhinal cortex (LEC) provides one of the two major input pathways to the hippocampus and has been suggested to process the nonspatial contextual details of episodic memory. Combined with spatial information from the medial entorhinal cortex it is hypothesised that this contextual information is used to form an integrated spatially selective, context-specific response in the hippocampus that underlies episodic memory. Recently, we reported that the LEC is required for recognition of objects that have been experienced in a specific context (Wilson et al. (2013) Hippocampus 23:352-366). Here, we sought to extend this work to assess the role of the LEC in recognition of all associative combinations of objects, places and contexts within an episode. Unlike controls, rats with excitotoxic lesions of the LEC showed no evidence of recognizing familiar combinations of object in place, place in context, or object in place and context. However, LEC lesioned rats showed normal recognition of objects and places independently from each other (nonassociative recognition). Together with our previous findings, these data suggest that the LEC is critical for associative recognition memory and may bind together information relating to objects, places, and contexts needed for episodic memory formation. PMID:23836525
Learned Non-Rigid Object Motion is a View-Invariant Cue to Recognizing Novel Objects
Chuang, Lewis L.; Vuong, Quoc C.; Bülthoff, Heinrich H.
2012-01-01
There is evidence that observers use learned object motion to recognize objects. For instance, studies have shown that reversing the learned direction in which a rigid object rotated in depth impaired recognition accuracy. This motion reversal can be achieved by playing animation sequences of moving objects in reverse frame order. In the current study, we used this sequence-reversal manipulation to investigate whether observers encode the motion of dynamic objects in visual memory, and whether such dynamic representations are encoded in a way that is dependent on the viewing conditions. Participants first learned dynamic novel objects, presented as animation sequences. Following learning, they were then tested on their ability to recognize these learned objects when their animation sequence was shown in the same sequence order as during learning or in the reverse sequence order. In Experiment 1, we found that non-rigid motion contributed to recognition performance; that is, sequence-reversal decreased sensitivity across different tasks. In subsequent experiments, we tested the recognition of non-rigidly deforming (Experiment 2) and rigidly rotating (Experiment 3) objects across novel viewpoints. Recognition performance was affected by viewpoint changes for both experiments. Learned non-rigid motion continued to contribute to recognition performance and this benefit was the same across all viewpoint changes. By comparison, learned rigid motion did not contribute to recognition performance. These results suggest that non-rigid motion provides a source of information for recognizing dynamic objects, which is not affected by changes to viewpoint. PMID:22661939
Rapid effects of dorsal hippocampal G-protein coupled estrogen receptor on learning in female mice.
Lymer, Jennifer; Robinson, Alana; Winters, Boyer D; Choleris, Elena
2017-03-01
Through rapid mechanisms of action, estrogens affect learning and memory processes. It has been shown that 17β-estradiol and an Estrogen Receptor (ER) α agonist enhance performance in social recognition, object recognition, and object placement tasks when administered systemically or infused in the dorsal hippocampus. In contrast, systemic and dorsal hippocampal ERβ activation only promote spatial learning. In addition, 17β-estradiol, ERα, and G-protein coupled estrogen receptor (GPER) agonists increase dendritic spine density in the CA1 hippocampus. Recently, we have shown that selective systemic activation of the GPER also rapidly facilitated social recognition, object recognition, and object placement learning in female mice. Whether activation of the GPER specifically in the dorsal hippocampus can also rapidly improve learning and memory prior to acquisition is unknown. Here, we investigated the rapid effects of infusion of the GPER agonist G-1 (doses: 50 nM, 100 nM, 200 nM) in the dorsal hippocampus on social recognition, object recognition, and object placement learning tasks in the home cage. These paradigms were completed within 40 min, which is within the range of rapid estrogenic effects. Dorsal hippocampal administration of G-1 improved social (doses: 50 nM, 200 nM G-1) and object (dose: 200 nM G-1) recognition with no effect on object placement. Additionally, when spatial cues were minimized by testing in a Y-apparatus, G-1 administration promoted social (doses: 100 nM, 200 nM G-1) and object (doses: 50 nM, 100 nM, 200 nM G-1) recognition. Therefore, like ERα, the GPER in the hippocampus appears to be sufficient for the rapid facilitation of social and object recognition in female mice, but not for the rapid facilitation of object placement learning. Thus, the GPER in the dorsal hippocampus is involved in estrogenic mediation of learning and memory, and these effects likely occur through rapid signalling mechanisms. Copyright © 2016 Elsevier Ltd. All rights reserved.
Changes in Visual Object Recognition Precede the Shape Bias in Early Noun Learning
Yee, Meagan; Jones, Susan S.; Smith, Linda B.
2012-01-01
Two of the most formidable skills that characterize human beings are language and our prowess in visual object recognition. They may also be developmentally intertwined. Two experiments, a large sample cross-sectional study and a smaller sample 6-month longitudinal study of 18- to 24-month-olds, tested a hypothesized developmental link between changes in visual object representation and noun learning. Previous findings in visual object recognition indicate that children’s ability to recognize common basic level categories from sparse structural shape representations of object shape emerges between the ages of 18 and 24 months, is related to noun vocabulary size, and is lacking in children with language delay. Other research shows in artificial noun learning tasks that during this same developmental period, young children systematically generalize object names by shape, that this shape bias predicts future noun learning, and is lacking in children with language delay. The two experiments examine the developmental relation between visual object recognition and the shape bias for the first time. The results show that developmental changes in visual object recognition systematically precede the emergence of the shape bias. The results suggest a developmental pathway in which early changes in visual object recognition that are themselves linked to category learning enable the discovery of higher-order regularities in category structure and thus the shape bias in novel noun learning tasks. The proposed developmental pathway has implications for understanding the role of specific experience in the development of both visual object recognition and the shape bias in early noun learning. PMID:23227015
Mitchell, Anna S.; Baxter, Mark G.; Gaffan, David
2008-01-01
Monkeys with aspiration lesions of the magnocellular division of the mediodorsal thalamus (MDmc) are impaired in object-in-place scene learning, object recognition and stimulus-reward association. These data have been interpreted to mean that projections from MDmc to prefrontal cortex are required to sustain normal prefrontal function in a variety of task settings. In the present study, we investigated the extent to which bilateral neurotoxic lesions of the MDmc impair a pre-operatively learnt strategy implementation task that is impaired by a crossed lesion technique that disconnects the frontal cortex in one hemisphere from the contralateral inferotemporal cortex. Postoperative memory impairments were also examined using the object-in-place scene memory task. Monkeys learnt both strategy implementation and scene memory tasks separately to a stable level pre-operatively. Bilateral neurotoxic lesions of the MDmc, produced by 10 × 1 μl injections of a mixture of ibotenate and N-methyl-D-aspartate did not affect performance in the strategy implementation task. However, new learning of object-in-place scene memory was substantially impaired. These results provide new evidence about the role of the magnocellular mediodorsal thalamic nucleus in memory processing, indicating that interconnections with the prefrontal cortex are essential during new learning but are not required when implementing a preoperatively acquired strategy task. Thus not all functions of the prefrontal cortex require MDmc input. Instead the involvement of MDmc in prefrontal function may be limited to situations in which new learning must occur. PMID:17978029
Cross, Laura; Brown, Malcolm W; Aggleton, John P; Warburton, E Clea
2012-12-21
In humans recognition memory deficits, a typical feature of diencephalic amnesia, have been tentatively linked to mediodorsal thalamic nucleus (MD) damage. Animal studies have occasionally investigated the role of the MD in single-item recognition, but have not systematically analyzed its involvement in other recognition memory processes. In Experiment 1 rats with bilateral excitotoxic lesions in the MD or the medial prefrontal cortex (mPFC) were tested in tasks that assessed single-item recognition (novel object preference), associative recognition memory (object-in-place), and recency discrimination (recency memory task). Experiment 2 examined the functional importance of the interactions between the MD and mPFC using disconnection techniques. Unilateral excitotoxic lesions were placed in both the MD and the mPFC in either the same (MD + mPFC Ipsi) or opposite hemispheres (MD + mPFC Contra group). Bilateral lesions in the MD or mPFC impaired object-in-place and recency memory tasks, but had no effect on novel object preference. In Experiment 2 the MD + mPFC Contra group was significantly impaired in the object-in-place and recency memory tasks compared with the MD + mPFC Ipsi group, but novel object preference was intact. Thus, connections between the MD and mPFC are critical for recognition memory when the discriminations involve associative or recency information. However, the rodent MD is not necessary for single-item recognition memory.
The Significance of the Learner Profile in Recognition of Prior Learning
ERIC Educational Resources Information Center
Snyman, Marici; van den Berg, Geesje
2018-01-01
Recognition of prior learning (RPL) is based on the principle that valuable learning, worthy of recognition, takes place outside formal education. In the context of higher education, legislation provides an enabling framework for the implementation of RPL. However, RPL will only gain its rightful position if it can ensure the RPL candidates'…
A Neuromorphic Architecture for Object Recognition and Motion Anticipation Using Burst-STDP
Balduzzi, David; Tononi, Giulio
2012-01-01
In this work we investigate the possibilities offered by a minimal framework of artificial spiking neurons to be deployed in silico. Here we introduce a hierarchical network architecture of spiking neurons which learns to recognize moving objects in a visual environment and determine the correct motor output for each object. These tasks are learned through both supervised and unsupervised spike timing dependent plasticity (STDP). STDP is responsible for the strengthening (or weakening) of synapses in relation to pre- and post-synaptic spike times and has been described as a Hebbian paradigm taking place both in vitro and in vivo. We utilize a variation of STDP learning, called burst-STDP, which is based on the notion that, since spikes are expensive in terms of energy consumption, then strong bursting activity carries more information than single (sparse) spikes. Furthermore, this learning algorithm takes advantage of homeostatic renormalization, which has been hypothesized to promote memory consolidation during NREM sleep. Using this learning rule, we design a spiking neural network architecture capable of object recognition, motion detection, attention towards important objects, and motor control outputs. We demonstrate the abilities of our design in a simple environment with distractor objects, multiple objects moving concurrently, and in the presence of noise. Most importantly, we show how this neural network is capable of performing these tasks using a simple leaky-integrate-and-fire (LIF) neuron model with binary synapses, making it fully compatible with state-of-the-art digital neuromorphic hardware designs. As such, the building blocks and learning rules presented in this paper appear promising for scalable fully neuromorphic systems to be implemented in hardware chips. PMID:22615855
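A minimal sketch of a leaky integrate-and-fire neuron with pair-based, trace-driven STDP is shown below. It is an illustrative simplification and does not reproduce the paper's burst-STDP rule, binary synapses, or homeostatic renormalization; parameter values are arbitrary.

```python
import numpy as np

def simulate_lif_stdp(pre_spikes, dt=1.0, tau_m=20.0, v_th=1.0, v_reset=0.0,
                      a_plus=0.01, a_minus=0.012, tau_stdp=20.0):
    """One LIF neuron driven by presynaptic spike trains, with pair-based
    STDP on the input weights.

    pre_spikes: (n_steps, n_inputs) binary spike matrix.
    Returns final weights and the postsynaptic spike train.
    """
    n_steps, n_inputs = pre_spikes.shape
    w = np.full(n_inputs, 0.5)
    v = v_reset
    pre_trace = np.zeros(n_inputs)   # decaying trace of presynaptic spikes
    post_trace = 0.0                 # decaying trace of postsynaptic spikes
    post_spikes = np.zeros(n_steps)

    for t in range(n_steps):
        pre = pre_spikes[t].astype(float)
        v += dt * (-v / tau_m) + w @ pre            # leaky integration of input
        pre_trace += -dt * pre_trace / tau_stdp + pre
        post_trace += -dt * post_trace / tau_stdp
        # LTD: presynaptic spike arriving after a recent postsynaptic spike.
        w -= a_minus * post_trace * pre
        if v >= v_th:                               # postsynaptic spike
            v = v_reset
            post_spikes[t] = 1
            post_trace += 1.0
            # LTP: postsynaptic spike following recent presynaptic spikes.
            w += a_plus * pre_trace
        np.clip(w, 0.0, 1.0, out=w)
    return w, post_spikes

# Hypothetical input: 50 Poisson-like inputs at ~20 Hz with dt = 1 ms.
rng = np.random.default_rng(0)
spikes = rng.random((1000, 50)) < 0.02
w_final, out = simulate_lif_stdp(spikes)
print("output spikes:", int(out.sum()))
```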
Implicit and Explicit Contributions to Object Recognition: Evidence from Rapid Perceptual Learning
Hassler, Uwe; Friese, Uwe; Gruber, Thomas
2012-01-01
The present study investigated implicit and explicit recognition processes of rapidly perceptually learned objects by means of steady-state visual evoked potentials (SSVEP). Participants were initially exposed to object pictures within an incidental learning task (living/non-living categorization). Subsequently, degraded versions of some of these learned pictures were presented together with degraded versions of unlearned pictures and participants had to judge, whether they recognized an object or not. During this test phase, stimuli were presented at 15 Hz eliciting an SSVEP at the same frequency. Source localizations of SSVEP effects revealed for implicit and explicit processes overlapping activations in orbito-frontal and temporal regions. Correlates of explicit object recognition were additionally found in the superior parietal lobe. These findings are discussed to reflect facilitation of object-specific processing areas within the temporal lobe by an orbito-frontal top-down signal as proposed by bi-directional accounts of object recognition. PMID:23056558
Tian, Moqian; Grill-Spector, Kalanit
2015-01-01
Recognizing objects is difficult because it requires both linking views of an object that can be different and distinguishing objects with similar appearance. Interestingly, people can learn to recognize objects across views in an unsupervised way, without feedback, just from the natural viewing statistics. However, there is intense debate regarding what information during unsupervised learning is used to link among object views. Specifically, researchers argue whether temporal proximity, motion, or spatiotemporal continuity among object views during unsupervised learning is beneficial. Here, we untangled the role of each of these factors in unsupervised learning of novel three-dimensional (3-D) objects. We found that after unsupervised training with 24 object views spanning a 180° view space, participants showed significant improvement in their ability to recognize 3-D objects across rotation. Surprisingly, there was no advantage to unsupervised learning with spatiotemporal continuity or motion information than training with temporal proximity. However, we discovered that when participants were trained with just a third of the views spanning the same view space, unsupervised learning via spatiotemporal continuity yielded significantly better recognition performance on novel views than learning via temporal proximity. These results suggest that while it is possible to obtain view-invariant recognition just from observing many views of an object presented in temporal proximity, spatiotemporal information enhances performance by producing representations with broader view tuning than learning via temporal association. Our findings have important implications for theories of object recognition and for the development of computational algorithms that learn from examples. PMID:26024454
Examining object recognition and object-in-place memory in plateau zokors, Eospalax baileyi.
Hegab, Ibrahim M; Tan, Yuchen; Wang, Chan; Yao, Baohui; Wang, Haifang; Ji, Weihong; Su, Junhu
2018-01-01
Recognition memory is important for the survival and fitness of subterranean rodents due to the barren underground conditions that require avoiding the burden of higher energy costs or possible conflict with conspecifics. Our study aims to examine the object and object/place recognition memories in plateau zokors (Eospalax baileyi) and test whether their underground life exerts sex-specific differences in memory functions using Novel Object Recognition (NOR) and Object-in-Place (OiP) paradigms. Animals were tested in the NOR with short (10min) and long-term (24h) inter-trial intervals (ITI) and in the OiP for a 30-min ITI between the familiarization and testing sessions. Plateau zokors showed a strong preference for novel objects manifested by a longer exploration time for the novel object after 10min ITI but failed to remember the familiar object when tested after 24h, suggesting a lack of long-term memory. In the OiP test, zokors effectively formed an association between the objects and the place where they were formerly encountered, resulting in a higher duration of exploration to the switched objects. However, both sexes showed equivalent results in exploration time during the NOR and OiP tests, which eliminates the possibility of discovering sex-specific variations in memory performance. Taken together, our study illustrates robust novelty preference and an effective short-term recognition memory without marked sex-specific differences, which might elucidate the dynamics of recognition memory formation and retrieval in plateau zokors. Copyright © 2017 Elsevier B.V. All rights reserved.
Prut, L; Prenosil, G; Willadt, S; Vogt, K; Fritschy, J-M; Crestani, F
2010-07-01
The memory for location of objects, which binds information about objects to discrete positions or spatial contexts of occurrence, is a form of episodic memory particularly sensitive to hippocampal damage. Its early decline is symptomatic of dementia in the elderly. Substances that selectively reduce alpha5-GABA(A) receptor function are currently being developed as potential cognition enhancers for Alzheimer's and other dementias, consistent with genetic studies implicating these receptors, which are highly expressed in the hippocampus, in learning performance. Here we explored the consequences of reduced GABA(A) alpha5-subunit contents, as occurring in alpha5(H105R) knock-in mice, on the memory for location of objects. This required the behavioral characterization of alpha5(H105R) and wild-type animals in various tasks examining learning and memory retrieval strategies for objects, locations, contexts, and their combinations. In mutants, decreased amounts of alpha5-subunits and retained long-term potentiation in the hippocampus were confirmed. They exhibited hyperactivity with conserved circadian rhythm in familiar actimeters, and normal exploration and emotional reactivity in novel places, allocentric spatial guidance, and motor pattern learning acquisition, inhibition, and flexibility in T- and eight-arm mazes. Processing of object, position, and context memories and object-guided response learning were spared. Genotype differences in object-in-place memory retrieval and in encoding and response learning strategies for object-location combinations manifested as a bias favoring object-based recognition and guidance strategies over spatial processing of objects in the mutants. These findings identify in alpha5(H105R) mice a behavioral-cognitive phenotype affecting basal locomotion and the memory for location of objects, indicative of hippocampal dysfunction resulting from moderately decreased alpha5-subunit contents.
Barker, Gareth R I; Warburton, Elizabeth Clea
2018-03-28
Recognition memory for single items requires the perirhinal cortex (PRH), whereas recognition of an item and its associated location requires a functional interaction among the PRH, hippocampus (HPC), and medial prefrontal cortex (mPFC). Although the precise mechanisms through which these interactions are effected are unknown, the nucleus reuniens (NRe) has bidirectional connections with each of these regions and thus may play a role in recognition memory. Here we investigated, in male rats, whether specific manipulations of NRe function affected performance of recognition memory for single items, object location, or object-in-place associations. Permanent lesions in the NRe significantly impaired long-term, but not short-term, object-in-place associative recognition memory, whereas single item recognition memory and object location memory were unaffected. Temporary inactivation of the NRe during distinct phases of the object-in-place task revealed its importance in both the encoding and retrieval stages of long-term associative recognition memory. Infusions of specific receptor antagonists showed that encoding was dependent on muscarinic and nicotinic cholinergic neurotransmission, whereas NMDA receptor neurotransmission was not required. Finally, we found that long-term object-in-place memory required protein synthesis within the NRe. These data reveal a specific role for the NRe in long-term associative recognition memory through its interactions with the HPC and mPFC, but not the PRH. The delay-dependent involvement of the NRe suggests that it is not a simple relay station between brain regions, but, rather, during high mnemonic demand, facilitates interactions between the mPFC and HPC, a process that requires both cholinergic neurotransmission and protein synthesis. SIGNIFICANCE STATEMENT Recognizing an object and its associated location, which is fundamental to our everyday memory, requires specific hippocampal-cortical interactions, potentially facilitated by the nucleus reuniens (NRe) of the thalamus. However, the role of the NRe itself in associative recognition memory is unknown. Here, we reveal the crucial role of the NRe in encoding and retrieval of long-term object-in-place memory, but not in remembrance of an individual object or individual location, and this involvement is cholinergic receptor- and protein synthesis-dependent. This is the first demonstration that the NRe is a key node within an associative recognition memory network and is not just a simple relay for information within the network. Rather, we argue, the NRe actively modulates information processing during long-term associative memory formation. Copyright © 2018 the authors.
Ding, Fang; Zheng, Limin; Liu, Min; Chen, Rongfa; Leung, L Stan; Luo, Tao
2016-08-01
Exposure to volatile anesthetics has been reported to cause temporary or sustained impairments in learning and memory in pre-clinical studies. The selective antagonists of the histamine H3 receptors (H3R) are considered to be a promising group of novel therapeutic agents for the treatment of cognitive disorders. The aim of this study was to evaluate the effect of the H3R antagonist ciproxifan on isoflurane-induced deficits in an object recognition task. Adult C57BL/6J mice were exposed to isoflurane (1.3%) or vehicle gas for 2 h. The object recognition tests were carried out at 24 h or 7 days after exposure to anesthesia to exploit the tendency of mice to prefer exploring novel objects in an environment when a familiar object is also present. During the training phase, two identical objects were placed in two defined sites of the chamber. During the test phase, performed 1 or 24 h after the training phase, one of the objects was replaced by a new object with a different shape. The time spent exploring each object was recorded. A robust deficit in object recognition memory occurred 1 day after exposure to isoflurane anesthesia. Isoflurane-treated mice spent significantly less time exploring a novel object at 1 h but not at 24 h after the training phase. The deficit in short-term memory was reversed by the administration of ciproxifan 30 min before behavioral training. Isoflurane exposure induces reversible deficits in object recognition memory. Ciproxifan appears to be a potential therapeutic agent for improving post-anesthesia cognitive memory performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santhanam, A; Min, Y; Beron, P
Purpose: Patient safety hazards such as a wrong patient/site getting treated can lead to catastrophic results. The purpose of this project is to automatically detect potential patient safety hazards during the radiotherapy setup and alert the therapist before the treatment is initiated. Methods: We employed a set of co-located and co-registered 3D cameras placed inside the treatment room. Each camera provided a point cloud of fraxels (fragment pixels with 3D depth information). Each of the cameras was calibrated using a custom-built calibration target to provide 3D information with less than 2 mm error in the 500 mm neighborhood around the isocenter. To identify potential patient safety hazards, the treatment room components and the patient's body needed to be identified and tracked in real time. We used graph-cut based feature recognition with principal component analysis (PCA) based feature-to-object correlation to segment the objects in real time. Changes in each object's position were tracked using the CamShift algorithm. The 3D object information was then stored for each classified object (e.g., gantry, couch). A deep learning framework was then used to analyze all the classified objects in both 2D and 3D and to fine-tune a convolutional network for object recognition. The number of network layers was optimized to identify the tracked objects with >95% accuracy. Results: Our systematic analyses showed that the system was able to effectively recognize wrong patient setups and wrong patient accessories. The combined usage of 2D camera information (color + depth) enabled a topology-preserving approach to verifying patient safety hazards automatically, even in scenarios where the depth information is only partially available. Conclusion: By utilizing the 3D cameras inside the treatment room and deep learning based image classification, potential patient safety hazards can be effectively avoided.
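The CamShift tracking step mentioned above is available as a standard OpenCV routine. The following is a minimal, illustrative sketch of color-histogram-based CamShift tracking; it assumes an ordinary color camera and a hypothetical initial bounding box rather than the authors' calibrated 3D camera setup and graph-cut segmentation.

```python
import cv2
import numpy as np

# Hypothetical initial bounding box (x, y, w, h) around the object to track,
# e.g. obtained from a prior segmentation step.
track_window = (200, 150, 80, 80)

cap = cv2.VideoCapture(0)                     # any video source
ok, frame = cap.read()
x, y, w, h = track_window
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
# Hue histogram of the object region serves as the tracking model.
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # CamShift adapts the window size and orientation as the object moves.
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    pts = cv2.boxPoints(rot_rect).astype(np.int32)
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:          # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```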
It's all connected: Pathways in visual object recognition and early noun learning.
Smith, Linda B
2013-11-01
A developmental pathway may be defined as the route, or chain of events, through which a new structure or function forms. For many human behaviors, including object name learning and visual object recognition, these pathways are often complex and multicausal and include unexpected dependencies. This article presents three principles of development that suggest the value of a developmental psychology that explicitly seeks to trace these pathways and uses empirical evidence on developmental dependencies among motor development, action on objects, visual object recognition, and object name learning in 12- to 24-month-old infants to make the case. The article concludes with a consideration of the theoretical implications of this approach. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Online Feature Transformation Learning for Cross-Domain Object Category Recognition.
Zhang, Xuesong; Zhuang, Yan; Wang, Wei; Pedrycz, Witold
2017-06-09
In this paper, we introduce a new research problem termed online feature transformation learning in the context of multiclass object category recognition. The learning of a feature transformation is viewed as learning a global similarity metric function in an online manner. We first consider the problem of online learning a feature transformation matrix expressed in the original feature space and propose an online passive aggressive feature transformation algorithm. Then these original features are mapped to kernel space and an online single kernel feature transformation (OSKFT) algorithm is developed to learn a nonlinear feature transformation. Based on the OSKFT and the existing Hedge algorithm, a novel online multiple kernel feature transformation algorithm is also proposed, which can further improve the performance of online feature transformation learning in large-scale applications. The classifier is trained with the k-nearest-neighbor algorithm together with the learned similarity metric function. Finally, we experimentally examined the effect of setting different parameter values in the proposed algorithms and evaluated the model performance on several multiclass object recognition data sets. The experimental results demonstrate the validity and good performance of our methods on cross-domain and multiclass object recognition applications.
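A minimal sketch of an online passive-aggressive update for a bilinear similarity metric is given below. It follows a generic OASIS-style triplet formulation rather than the exact algorithms described above; the function names, toy data, and hyperparameters are illustrative assumptions.

```python
import numpy as np

def pa_similarity_update(W, p, p_pos, p_neg, C=0.1):
    """One online passive-aggressive step for a bilinear similarity
    S_W(a, b) = a^T W b, trained on a triplet where p should be more
    similar to p_pos (same class) than to p_neg (different class)."""
    loss = max(0.0, 1.0 - p @ W @ p_pos + p @ W @ p_neg)
    if loss > 0.0:
        V = np.outer(p, p_pos - p_neg)                  # gradient of the hinge term
        tau = min(C, loss / (np.sum(V * V) + 1e-12))    # aggressiveness-capped step size
        W += tau * V                                    # passive-aggressive update
    return W

# Toy usage with hypothetical 64-dimensional features from two classes.
rng = np.random.default_rng(1)
d = 64
W = np.eye(d)
for _ in range(500):
    c = rng.integers(2)
    center = np.ones(d) if c == 1 else -np.ones(d)
    p, p_pos = center + rng.normal(size=(2, d))         # two samples of the same class
    p_neg = -center + rng.normal(size=d)                # one sample of the other class
    W = pa_similarity_update(W, p, p_pos, p_neg)
```

The learned matrix W would then serve as the similarity function inside a k-nearest-neighbor classifier, as in the paper.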
Toward a unified model of face and object recognition in the human visual system
Wallis, Guy
2013-01-01
Our understanding of the mechanisms and neural substrates underlying visual recognition has made considerable progress over the past 30 years. During this period, accumulating evidence has led many scientists to conclude that objects and faces are recognised in fundamentally distinct ways, and in fundamentally distinct cortical areas. In the psychological literature, in particular, this dissociation has led to a palpable disconnect between theories of how we process and represent the two classes of object. This paper follows a trend in part of the recognition literature to try to reconcile what we know about these two forms of recognition by considering the effects of learning. Taking a widely accepted, self-organizing model of object recognition, this paper explains how such a system is affected by repeated exposure to specific stimulus classes. In so doing, it explains how many aspects of recognition generally regarded as unusual to faces (holistic processing, configural processing, sensitivity to inversion, the other-race effect, the prototype effect, etc.) are emergent properties of category-specific learning within such a system. Overall, the paper describes how a single model of recognition learning can and does produce the seemingly very different types of representation associated with faces and objects. PMID:23966963
Anthropomorphic robot for recognition of objects
NASA Astrophysics Data System (ADS)
Ginzburg, Vera M.
1999-08-01
Heated debates took place a few decades ago between the proponents of digital and analog methods in information technology, and resulted in the unequivocal triumph of the former. However, some serious technological problems confronting world civilization on the threshold of the new millennium, such as Y2K and computer network vulnerability, probably spring from this one-sided approach. The dire consequences of problems of this kind can be alleviated by learning from nature.
Qiao, Yanhua; Wang, Xingyue; Ma, Lian; Li, Shengguang; Liang, Jing
2017-10-01
Deficits in behavioral flexibility and recognition memory are commonly observed in mental illnesses and neurodegenerative diseases. Abnormalities of the striatum have been implicated in the pathology of these diseases. However, the exact roles of heterogeneous striatal structures in these cognitive functions are still unknown. In the present study, we investigated the effects of suppressing neuronal activity in the dorsomedial striatum (DMStr) and nucleus accumbens core (NAcC) on reversal learning and novelty recognition in mice. In addition, locomotor activity, anxiety-like behavior, and social interaction were analyzed. Neuronal inactivation was performed by expressing lentivirus-mediated tetanus toxin (TeNT) in the target regions. The results showed that reversal learning was facilitated by neuronal inactivation in the DMStr but not the NAcC, which was attributable to accelerated extinction of the acquired strategy rather than to impaired memory retention. Furthermore, mice with NAcC inactivation spent more time exploring a novel object than a familiar one, comparable to control mice. In contrast, mice with DMStr inactivation exhibited no preference for a novel environment during the novel object or place recognition tests. The DMStr mice also exhibited decreased anxiety levels. No phenotypic effect was observed in locomotion or social interaction in mice with either DMStr or NAcC inactivation. Altogether, these findings suggest that the DMStr, but not the ventral area of the striatum, plays a crucial role in learning and memory by coordinating spatial exploration as well as mediating information updating. Copyright © 2017 Elsevier Inc. All rights reserved.
HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.
Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye
2017-02-09
In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to accomplish the coarse-to-fine tasks for hierarchical visual recognition more effectively. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.
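The coarse-to-fine multi-task idea can be sketched as a shared backbone with separate group-level and class-level heads trained jointly. The PyTorch snippet below is an illustrative simplification, not the HD-MTL architecture, visual-tree learning, or regularization terms themselves; the layer sizes and labels are invented.

```python
import torch
import torch.nn as nn

class CoarseToFineNet(nn.Module):
    """Shared backbone feeding one head that predicts a coarse visual group
    and another head that predicts the fine-grained object class."""
    def __init__(self, n_groups, n_classes, feat_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(64 * 16, feat_dim), nn.ReLU(),
        )
        self.group_head = nn.Linear(feat_dim, n_groups)   # coarse task
        self.class_head = nn.Linear(feat_dim, n_classes)  # fine task

    def forward(self, x):
        feats = self.backbone(x)
        return self.group_head(feats), self.class_head(feats)

model = CoarseToFineNet(n_groups=10, n_classes=1000)
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
ce = nn.CrossEntropyLoss()

# Hypothetical batch: images carrying both a coarse group label and a fine class label.
images = torch.randn(8, 3, 64, 64)
group_labels = torch.randint(0, 10, (8,))
class_labels = torch.randint(0, 1000, (8,))

group_logits, class_logits = model(images)
loss = ce(group_logits, group_labels) + ce(class_logits, class_labels)  # joint multi-task loss
opt.zero_grad()
loss.backward()
opt.step()
```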
Speckle-learning-based object recognition through scattering media.
Ando, Takamasa; Horisaki, Ryoichi; Tanida, Jun
2015-12-28
We experimentally demonstrated object recognition through scattering media based on direct machine learning of a number of speckle intensity images. In the experiments, speckle intensity images of amplitude or phase objects on a spatial light modulator between scattering plates were captured by a camera. We used the support vector machine for binary classification of the captured speckle intensity images of face and non-face data. The experimental results showed that speckles are sufficient for machine learning.
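The classification step described here is essentially a binary SVM over raw speckle intensity images. The following scikit-learn sketch uses synthetic stand-in data, since the optical setup cannot be reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for captured speckle intensity images (32x32 pixels),
# labeled 1 for "face" objects and 0 for "non-face" objects.
rng = np.random.default_rng(0)
n_per_class, n_pixels = 200, 32 * 32
face_speckles = rng.gamma(2.0, 1.0, size=(n_per_class, n_pixels)) + 0.3
nonface_speckles = rng.gamma(2.0, 1.0, size=(n_per_class, n_pixels))
X = np.vstack([face_speckles, nonface_speckles])
y = np.concatenate([np.ones(n_per_class), np.zeros(n_per_class)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
# Flattened speckle intensities go straight into a linear support vector machine.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```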
Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.
2015-01-01
To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT (“face patches”) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
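The "simple weighted sum of IT firing rates" linking hypothesis amounts to a linear readout trained on population responses. The sketch below illustrates such a readout on synthetic firing-rate data; the recordings, images, and behavioral tests themselves are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: mean firing rates (100 ms window) of a neural population
# for two object categories, with weak per-neuron category tuning plus noise.
rng = np.random.default_rng(0)
n_trials, n_neurons = 400, 500
tuning = rng.normal(0, 0.2, size=n_neurons)        # per-neuron category preference
labels = rng.integers(0, 2, size=n_trials)
rates = rng.poisson(5.0, size=(n_trials, n_neurons)) + np.outer(2 * labels - 1, tuning)

# The linking hypothesis: behavior = thresholded weighted sum of firing rates.
readout = LogisticRegression(max_iter=2000)
acc = cross_val_score(readout, rates, labels, cv=5).mean()
print("cross-validated readout accuracy:", round(acc, 3))
```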
Boehm, Stephan G; Smith, Ciaran; Muench, Niklas; Noble, Kirsty; Atherton, Catherine
2017-08-31
Repetition priming increases the accuracy and speed of responses to repeatedly processed stimuli. Repetition priming can result from two complementary sources: rapid response learning and facilitation within perceptual and conceptual networks. In conceptual classification tasks, rapid response learning dominates priming of object recognition, but it does not dominate priming of person recognition. This suggests that the relative engagement of network facilitation and rapid response learning depends on the stimulus domain. Here, we addressed the importance of the stimulus domain for rapid response learning by investigating priming in another domain, brands. In three experiments, participants made conceptual decisions about brand logos. Strong priming was present, but it was not dominated by rapid response learning. These findings add further support to the role of the stimulus domain in determining the relative contributions of network facilitation and rapid response learning, and they indicate that brand priming is more similar to person recognition priming than to object recognition priming, perhaps because priming of both brands and persons requires individuation.
NASA Astrophysics Data System (ADS)
Yan, Fengxia; Udupa, Jayaram K.; Tong, Yubing; Xu, Guoping; Odhner, Dewey; Torigian, Drew A.
2018-03-01
The recently developed body-wide Automatic Anatomy Recognition (AAR) methodology depends on fuzzy modeling of individual objects, hierarchically arranging objects, constructing an anatomy ensemble of these models, and a dichotomous object recognition-delineation process. The parent-to-offspring spatial relationship in the object hierarchy is crucial in the AAR method. We have found this relationship to be quite complex, and as such any improvement in capturing this relationship information in the anatomy model will improve the process of recognition itself. Currently, the method encodes this relationship based on the layout of the geometric centers of the objects. Motivated by the concept of virtual landmarks (VLs), this paper presents a new one-shot AAR recognition method that utilizes the VLs to learn object relationships by training a neural network to predict the pose and the VLs of an offspring object given the VLs of the parent object in the hierarchy. We set up two neural networks for each parent-offspring object pair in a body region, one for predicting the VLs and another for predicting the pose parameters. The VL-based learning/prediction method is evaluated on two object hierarchies involving 14 objects. We utilize 54 computed tomography (CT) image data sets of head and neck cancer patients and the associated object contours drawn by dosimetrists for routine radiation therapy treatment planning. The VL neural network method is found to yield more accurate object localization than the currently used simple AAR method.
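To make the parent-to-offspring prediction step more concrete, here is a small sketch that trains a neural network regressor to map parent virtual-landmark (VL) coordinates to offspring VLs, which is the flavor of learning the abstract describes. Everything here (sizes, synthetic data, the use of scikit-learn's MLPRegressor) is an assumption for illustration, not the AAR implementation.

```python
# Illustrative sketch of the parent-to-offspring prediction idea: train a small
# neural network to map parent virtual-landmark (VL) coordinates to offspring VLs.
# All sizes and data here are synthetic assumptions, not the AAR implementation.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

n_cases, n_vl = 54, 8                      # e.g. 54 training studies, 8 VLs per object
parent_vl = rng.random((n_cases, n_vl * 3))          # (x, y, z) per parent VL
true_map = rng.random((n_vl * 3, n_vl * 3))
offspring_vl = parent_vl @ true_map + 0.01 * rng.standard_normal((n_cases, n_vl * 3))

vl_net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=5000, random_state=1)
vl_net.fit(parent_vl, offspring_vl)        # one such network per parent-offspring pair

new_parent = rng.random((1, n_vl * 3))
predicted_offspring = vl_net.predict(new_parent)   # used to initialize recognition
print(predicted_offspring.shape)
```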
Wu, Lin; Wang, Yang; Pan, Shirui
2017-12-01
It is now well established that sparse representation models work effectively for many visual recognition tasks and have pushed forward the success of dictionary learning in this area. Recent studies on dictionary learning focus on learning discriminative atoms instead of purely reconstructive ones. However, the existence of intraclass diversity (data objects within the same category that exhibit large visual dissimilarities) and interclass similarity (data objects from distinct classes that share much visual similarity) makes it challenging to learn effective recognition models. A large number of labeled data objects would be required to learn models that can characterize these subtle differences, but labeled data are often limited, making it difficult to learn a monolithic dictionary that is discriminative enough. To address these limitations, we propose a weakly supervised dictionary learning method that automatically learns a discriminative dictionary by exploiting visual attribute correlations rather than label priors. In particular, the intrinsic attribute correlations are deployed as a critical cue to guide object categorization, and a set of sub-dictionaries is then jointly learned with respect to each category. The resulting dictionary is highly discriminative and leads to intraclass-diversity-aware sparse representations. Extensive experiments on image classification and object recognition demonstrate the effectiveness of our approach.
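The "set of sub-dictionaries, one per category" idea can be illustrated with a much simpler stand-in: learn one dictionary per class and classify by per-class reconstruction error (an SRC-style rule). The sketch below deliberately omits the paper's attribute-correlation guidance and joint learning; the data, dictionary sizes, and use of scikit-learn's MiniBatchDictionaryLearning are assumptions for illustration.

```python
# Hedged sketch: one sub-dictionary per category, with classification by
# per-class reconstruction error (SRC-style). This omits the paper's
# attribute-correlation guidance and joint learning; it only illustrates
# the "set of sub-dictionaries" idea on synthetic data.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(2)
n_feat, n_per_class, classes = 30, 60, 3
data = {c: rng.random((n_per_class, n_feat)) + c for c in range(classes)}  # toy classes

sub_dicts = {}
for c, X in data.items():
    learner = MiniBatchDictionaryLearning(n_components=10, alpha=0.5, random_state=0)
    learner.fit(X)
    sub_dicts[c] = learner                       # one sub-dictionary per category

def classify(x):
    errors = {}
    for c, learner in sub_dicts.items():
        code = learner.transform(x[None, :])     # sparse code under class-c dictionary
        recon = code @ learner.components_
        errors[c] = float(np.sum((x - recon) ** 2))
    return min(errors, key=errors.get)           # class with the smallest error

print(classify(rng.random(n_feat) + 2))          # expected to favour class 2
```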
Shi, Hai-Shui; Yin, Xi; Song, Li; Guo, Qing-Jun; Luo, Xiang-Heng
2012-02-01
Accumulating evidence has implicated neuropeptides in modulating recognition, learning and memory. However, to date, no study has investigated the effects of the neuropeptide Trefoil factor 3 (TFF3) on learning and memory. In the present study, we evaluated the acute effects of TFF3 administration (0.1 and 0.5 mg/kg, i.p.) on the acquisition and retention of object recognition memory in mice. We found that TFF3 administration significantly enhanced both short-term and long-term memory during the retention tests, conducted 90 min and 24 h after training, respectively. Remarkably, acute TFF3 administration transformed a learning event that would not normally result in long-term memory into one retained over the long term, and produced no effect on locomotor activity. In conclusion, the present results indicate an important role for TFF3 in improving object recognition memory and preserving it over a longer period, which suggests a potential therapeutic application for diseases with recognition and memory impairment. Copyright © 2011 Elsevier B.V. All rights reserved.
Gabor, Christopher; Lymer, Jennifer; Phan, Anna; Choleris, Elena
2015-10-01
Recently, oestrogen receptors (ERs) have been implicated in rapid learning processes. We have previously shown that 17β-estradiol, ERα and ERβ agonists can improve learning within 40 min of drug administration in mice. However, oestrogen action at the classical receptors may only in part explain these rapid learning effects. Chronic treatment of a G-protein coupled oestrogen receptor (GPER) agonist has been shown to affect learning and memory in ovariectomized rats, yet little is known about its rapid learning effects. Therefore we investigated whether the GPER agonist G-1 at 1 μg/kg, 6 μg/kg, 10 μg/kg, and 30 μg/kg could affect social recognition, object recognition, and object placement learning in ovariectomized CD1 mice within 40 min of drug administration. We also examined rapid effects of G-1 on CA1 hippocampal dendritic spine density and length within 40 min of drug administration, but in the absence of any learning tests. Results suggest a rapid enhancing effect of GPER activation on social recognition, object recognition and object placement learning. G-1 treatment also resulted in increased dendritic spine density in the stratum radiatum of the CA1 hippocampus. Hence GPER, along with the classical ERs, may mediate the rapid effects of oestrogen on learning and neuronal plasticity. To our knowledge, this is the first report of GPER effects occurring within a 40 min time frame. Copyright © 2015 Elsevier Inc. All rights reserved.
Zhu, Changlian; Gao, Jianfeng; Karlsson, Niklas; Li, Qian; Zhang, Yu; Huang, Zhiheng; Li, Hongfu; Kuhn, H Georg; Blomgren, Klas
2010-05-01
Isoflurane and related anesthetics are widely used to anesthetize children, ranging from premature babies to adolescents. Concerns have been raised about the safety of these anesthetics in pediatric patients, particularly regarding possible negative effects on cognition. The purpose of this study was to investigate the effects of repeated isoflurane exposure of juvenile and mature animals on cognition and neurogenesis. Postnatal day 14 (P14) rats and mice, as well as adult (P60) rats, were anesthetized with isoflurane for 35 mins daily for four successive days. Object recognition, place learning and reversal learning as well as cell death and cytogenesis were evaluated. Object recognition and reversal learning were significantly impaired in isoflurane-treated young rats and mice, whereas adult animals were unaffected, and these deficits became more pronounced as the animals grew older. The memory deficit was paralleled by a decrease in the hippocampal stem cell pool and persistently reduced neurogenesis, subsequently causing a reduction in the number of dentate gyrus granule cell neurons in isoflurane-treated rats. There were no signs of increased cell death of progenitors or neurons in the hippocampus. These findings show a previously unknown mechanism of neurotoxicity, causing cognitive deficits in a clearly age-dependent manner.
Effect of tDCS on task relevant and irrelevant perceptual learning of complex objects.
Van Meel, Chayenne; Daniels, Nicky; de Beeck, Hans Op; Baeck, Annelies
2016-01-01
During perceptual learning the visual representations in the brain are altered, but the causal role of these changes has not yet been fully characterized. We used transcranial direct current stimulation (tDCS) to investigate the role of higher visual regions in lateral occipital cortex (LO) in perceptual learning with complex objects. We also investigated whether object learning depends on the relevance of the objects for the learning task. Participants were trained on two tasks: object recognition using a backward masking paradigm and an orientation judgment task. In both tasks, an object with a red line on top of it was presented in each trial. The crucial difference between the two tasks was the relevance of the object: the object was relevant for the object recognition task but not for the orientation judgment task. During training, half of the participants received anodal tDCS targeted at LO. Afterwards, participants were tested on how well they recognized the trained objects, the irrelevant objects presented during the orientation judgment task, and a set of completely new objects. Participants stimulated with tDCS during training showed larger performance improvements than participants in the sham condition. No learning effect was found for the objects presented during the orientation judgment task. In conclusion, this study suggests a causal role of LO in relevant object learning, but given the rather low spatial resolution of tDCS, more research on the specificity of this effect is needed. Furthermore, mere exposure was not sufficient to train object recognition in our paradigm.
Bio-Inspired Neural Model for Learning Dynamic Models
NASA Technical Reports Server (NTRS)
Duong, Tuan; Duong, Vu; Suri, Ronald
2009-01-01
A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.
Leveraging Cognitive Context for Object Recognition
2014-06-01
Context is most often viewed as a static concept, learned from large image databases. We build upon this concept by exploring cognitive context, demonstrating how rich dynamic context provided by … context that people rely upon as they perceive the world. Context in ACT-R/E takes the form of associations between related concepts that are learned … and accuracy of object recognition.
Parts and Relations in Young Children's Shape-Based Object Recognition
ERIC Educational Resources Information Center
Augustine, Elaine; Smith, Linda B.; Jones, Susan S.
2011-01-01
The ability to recognize common objects from sparse information about geometric shape emerges during the same period in which children learn object names and object categories. Hummel and Biederman's (1992) theory of object recognition proposes that the geometric shapes of objects have two components--geometric volumes representing major object…
Combining heterogenous features for 3D hand-held object recognition
NASA Astrophysics Data System (ADS)
Lv, Xiong; Wang, Shuang; Li, Xiangyang; Jiang, Shuqiang
2014-10-01
Object recognition has wide applications in human-machine interaction and multimedia retrieval. However, due to visual polysemy and concept polymorphism, it remains a great challenge to obtain reliable recognition results from 2D images alone. With the emergence and easy availability of RGB-D equipment such as the Kinect, this challenge can be relieved because the depth channel brings additional information. A special and important case of object recognition is hand-held object recognition, as the hand is a direct and natural channel for both human-human and human-machine interaction. In this paper, we study 3D object recognition by combining heterogeneous features with different modalities and extraction techniques. Hand-crafted features preserve low-level information such as shape and color, but are weaker at representing high-level semantic information than automatically learned features, especially deep features. Deep features have shown great advantages in large-scale recognition but are not always as robust to rotation or scale variation as hand-crafted features. We therefore propose a method that combines hand-crafted point cloud features and deep learned features from the RGB and depth channels. First, hand-held object segmentation is performed using depth cues and human skeleton information. Second, the extracted heterogeneous 3D features are combined at different stages using linear concatenation and multiple kernel learning (MKL). A trained model is then used to recognize 3D hand-held objects. Experimental results validate the effectiveness and generalization ability of the proposed method.
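The two combination strategies mentioned above (linear concatenation versus kernel-level fusion) can be illustrated with a toy sketch. The feature matrices are synthetic, the kernel weights are fixed rather than learned, and the use of scikit-learn's SVC is an assumption; real MKL would optimize the kernel weights jointly with the classifier.

```python
# Simplified illustration of combining heterogeneous features: linear
# concatenation versus a fixed-weight kernel sum (a basic stand-in for MKL).
# Feature matrices are synthetic assumptions; real MKL would learn the weights.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 120
y = rng.integers(0, 2, n)
hand_crafted = rng.random((n, 20)) + 0.3 * y[:, None]   # e.g. shape/colour cues
deep_feature = rng.random((n, 50)) + 0.3 * y[:, None]   # e.g. CNN activations

def rbf(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Option 1: linear concatenation of the two modalities
concat = np.hstack([hand_crafted, deep_feature])

# Option 2: weighted sum of per-modality kernels (weights fixed here)
K = 0.5 * rbf(hand_crafted, hand_crafted, 0.1) + 0.5 * rbf(deep_feature, deep_feature, 0.05)

svm_concat = SVC().fit(concat[:80], y[:80])
svm_mkl = SVC(kernel="precomputed").fit(K[:80, :80], y[:80])
print(svm_concat.score(concat[80:], y[80:]),
      svm_mkl.score(K[80:, :80], y[80:]))
```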
Age-related impairments in active learning and strategic visual exploration.
Brandstatt, Kelly L; Voss, Joel L
2014-01-01
Old age could impair memory by disrupting learning strategies used by younger individuals. We tested this possibility by manipulating the ability to use visual-exploration strategies during learning. Subjects controlled visual exploration during active learning, thus permitting the use of strategies, whereas strategies were limited during passive learning via predetermined exploration patterns. Performance on tests of object recognition and object-location recall was matched for younger and older subjects for objects studied passively, when learning strategies were restricted. Active learning improved object recognition similarly for younger and older subjects. However, active learning improved object-location recall for younger subjects, but not older subjects. Exploration patterns were used to identify a learning strategy involving repeat viewing. Older subjects used this strategy less frequently and it provided less memory benefit compared to younger subjects. In previous experiments, we linked hippocampal-prefrontal co-activation to improvements in object-location recall from active learning and to the exploration strategy. Collectively, these findings suggest that age-related memory problems result partly from impaired strategies during learning, potentially due to reduced hippocampal-prefrontal co-engagement.
The roles of perceptual and conceptual information in face recognition.
Schwartz, Linoy; Yovel, Galit
2016-11-01
The representation of familiar objects is comprised of perceptual information about their visual properties as well as the conceptual knowledge that we have about them. What is the relative contribution of perceptual and conceptual information to object recognition? Here, we examined this question by designing a face familiarization protocol during which participants were either exposed to rich perceptual information (viewing each face in different angles and illuminations) or with conceptual information (associating each face with a different name). Both conditions were compared with single-view faces presented with no labels. Recognition was tested on new images of the same identities to assess whether learning generated a view-invariant representation. Results showed better recognition of novel images of the learned identities following association of a face with a name label, but no enhancement following exposure to multiple face views. Whereas these findings may be consistent with the role of category learning in object recognition, face recognition was better for labeled faces only when faces were associated with person-related labels (name, occupation), but not with person-unrelated labels (object names or symbols). These findings suggest that association of meaningful conceptual information with an image shifts its representation from an image-based percept to a view-invariant concept. They further indicate that the role of conceptual information should be considered to account for the superior recognition that we have for familiar faces and objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Holdstock, J S; Mayes, A R; Roberts, N; Cezayirli, E; Isaac, C L; O'Reilly, R C; Norman, K A
2002-01-01
The claim that recognition memory is spared relative to recall after focal hippocampal damage has been disputed in the literature. We examined this claim by investigating object and object-location recall and recognition memory in a patient, YR, who has adult-onset selective hippocampal damage. Our aim was to identify the conditions under which recognition was spared relative to recall in this patient. She showed unimpaired forced-choice object recognition but clearly impaired recall, even when her control subjects found the object recognition task to be numerically harder than the object recall task. However, on two other recognition tests, YR's performance was not relatively spared. First, she was clearly impaired at an equivalently difficult yes/no object recognition task, but only when targets and foils were very similar. Second, YR was clearly impaired at forced-choice recognition of object-location associations. This impairment was also unrelated to difficulty because this task was no more difficult than the forced-choice object recognition task for control subjects. The clear impairment of yes/no, but not of forced-choice, object recognition after focal hippocampal damage, when targets and foils are very similar, is predicted by the neural network-based Complementary Learning Systems model of recognition. This model postulates that recognition is mediated by hippocampally dependent recollection and cortically dependent familiarity; thus hippocampal damage should not impair item familiarity. The model postulates that familiarity is ineffective when very similar targets and foils are shown one at a time and subjects have to identify which items are old (yes/no recognition). In contrast, familiarity is effective in discriminating which of similar targets and foils, seen together, is old (forced-choice recognition). Independent evidence from the remember/know procedure also indicates that YR's familiarity is normal. The Complementary Learning Systems model can also accommodate the clear impairment of forced-choice object-location recognition memory if it incorporates the view that the most complete convergence of spatial and object information, represented in different cortical regions, occurs in the hippocampus.
Butler, Andrew J; James, Thomas W; James, Karin Harman
2011-11-01
Everyday experience affords us many opportunities to learn about objects through multiple senses using physical interaction. Previous work has shown that active motor learning of unisensory items enhances memory and leads to the involvement of motor systems during subsequent perception. However, the impact of active motor learning on subsequent perception and recognition of associations among multiple senses has not been investigated. Twenty participants were included in an fMRI study that explored the impact of active motor learning on subsequent processing of unisensory and multisensory stimuli. Participants were exposed to visuo-motor associations between novel objects and novel sounds either through self-generated actions on the objects or by observing an experimenter produce the actions. Immediately after exposure, accuracy, RT, and BOLD fMRI measures were collected with unisensory and multisensory stimuli in associative perception and recognition tasks. Response times during audiovisual associative and unisensory recognition were enhanced by active learning, as was accuracy during audiovisual associative recognition. The difference in motor cortex activation between old and new associations was greater for the active than the passive group. Furthermore, functional connectivity between visual and motor cortices was stronger after active learning than passive learning. Active learning also led to greater activation of the fusiform gyrus during subsequent unisensory visual perception. Finally, brain regions implicated in audiovisual integration (e.g., STS) showed greater multisensory gain after active learning than after passive learning. Overall, the results show that active motor learning modulates the processing of multisensory associations.
ERIC Educational Resources Information Center
Bukach, Cindy M.; Bub, Daniel N.; Masson, Michael E. J.; Lindsay, D. Stephen
2004-01-01
Studies of patients with category-specific agnosia (CSA) have given rise to multiple theories of object recognition, most of which assume the existence of a stable, abstract semantic memory system. We applied an episodic view of memory to questions raised by CSA in a series of studies examining normal observers' recall of newly learned attributes…
NASA Astrophysics Data System (ADS)
Graham, James; Ternovskiy, Igor V.
2013-06-01
We applied a two stage unsupervised hierarchical learning system to model complex dynamic surveillance and cyber space monitoring systems using a non-commercial version of the NeoAxis visualization software. The hierarchical scene learning and recognition approach is based on hierarchical expectation maximization, and was linked to a 3D graphics engine for validation of learning and classification results and understanding the human - autonomous system relationship. Scene recognition is performed by taking synthetically generated data and feeding it to a dynamic logic algorithm. The algorithm performs hierarchical recognition of the scene by first examining the features of the objects to determine which objects are present, and then determines the scene based on the objects present. This paper presents a framework within which low level data linked to higher-level visualization can provide support to a human operator and be evaluated in a detailed and systematic way.
Grossberg, Stephen; Markowitz, Jeffrey; Cao, Yongqiang
2011-12-01
Visual object recognition is an essential accomplishment of advanced brains. Object recognition needs to be tolerant, or invariant, with respect to changes in object position, size, and view. In monkeys and humans, a key area for recognition is the anterior inferotemporal cortex (ITa). Recent neurophysiological data show that ITa cells with high object selectivity often have low position tolerance. We propose a neural model whose cells learn to simulate this tradeoff, as well as ITa responses to image morphs, while explaining how invariant recognition properties may arise in stages due to processes across multiple cortical areas. These processes include the cortical magnification factor, multiple receptive field sizes, and top-down attentive matching and learning properties that may be tuned by task requirements to attend to either concrete or abstract visual features with different levels of vigilance. The model predicts that data from the tradeoff and image morph tasks emerge from different levels of vigilance in the animals performing them. This result illustrates how different vigilance requirements of a task may change the course of category learning, notably the critical features that are attended and incorporated into learned category prototypes. The model outlines a path for developing an animal model of how defective vigilance control can lead to symptoms of various mental disorders, such as autism and amnesia. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Babayan, Pavel; Smirnov, Sergey; Strotov, Valery
2017-10-01
This paper describes an aerial object recognition algorithm for on-board and stationary vision systems. The algorithm is intended to recognize objects of a specific kind using a set of reference objects defined by 3D models, and is based on building an outer-contour descriptor. It consists of two stages: learning and recognition. The learning stage explores the reference objects: using the 3D models, a database of training images is built by rendering each model from viewpoints evenly distributed on a sphere, with the viewpoint distribution following the geosphere principle. The gathered training images are used to calculate descriptors, which are then used in the recognition stage. The recognition stage estimates the similarity between the captured object and the reference objects by matching the observed image descriptor against the reference object descriptors. Experiments were performed using a set of aircraft models of different types (airplanes, helicopters, UAVs). The proposed orientation estimation algorithm showed good accuracy in all case studies, and real-time performance of the algorithm in an FPGA-based vision system was demonstrated.
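A rough sketch of this two-stage structure is given below: a learning stage that builds a database of per-view descriptors for each reference object, and a recognition stage that matches an observed descriptor against it by nearest neighbour. The descriptors here are random placeholders and the object names are invented; the actual method uses an outer-contour descriptor computed from rendered silhouettes.

```python
# Rough sketch of the two-stage idea: a learning stage that builds a database of
# view descriptors per reference object, and a recognition stage that matches an
# observed descriptor against it. Descriptors here are random placeholders, not
# the paper's outer-contour descriptor.
import numpy as np

rng = np.random.default_rng(4)
n_views, d = 162, 32                      # e.g. geosphere viewpoints, descriptor size

# Learning stage: descriptors for each rendered view of each reference 3D model
database = {name: rng.random((n_views, d)) for name in ("airplane", "helicopter", "uav")}

def recognize(observed_descriptor):
    best_name, best_dist = None, np.inf
    for name, descs in database.items():
        dist = np.linalg.norm(descs - observed_descriptor, axis=1).min()
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name, best_dist

query = database["uav"][10] + 0.01 * rng.standard_normal(d)   # a slightly noisy view
print(recognize(query))
```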
Learning to distinguish similar objects
NASA Astrophysics Data System (ADS)
Seibert, Michael; Waxman, Allen M.; Gove, Alan N.
1995-04-01
This paper describes how the similarities and differences among similar objects can be discovered during learning to facilitate recognition. The application domain is single views of flying model aircraft captured in silhouette by a CCD camera. The approach was motivated by human psychovisual and monkey neurophysiological data. The implementation uses neural net processing mechanisms to build a hierarchy that relates similar objects to superordinate classes, while simultaneously discovering the salient differences between objects within a class. Learning and recognition experiments both with and without the class similarity and difference learning show the effectiveness of the approach on this visual data. To test the approach, the hierarchical approach was compared to a non-hierarchical approach, and was found to improve the average percentage of correctly classified views from 77% to 84%.
Multi-objects recognition for distributed intelligent sensor networks
NASA Astrophysics Data System (ADS)
He, Haibo; Chen, Sheng; Cao, Yuan; Desai, Sachi; Hohil, Myron E.
2008-04-01
This paper proposes an innovative approach to multi-object recognition for homeland security and defense-oriented intelligent sensor networks. Unlike conventional information analysis, data mining in such networks is typically characterized by high information ambiguity/uncertainty, data redundancy, high dimensionality, and real-time constraints. Furthermore, since a typical military network normally includes multiple mobile sensor platforms, ground forces, fortified tanks, combat aircraft, and other resources, it is critical to develop intelligent data mining approaches that fuse different information sources to understand dynamic environments, support decision-making processes, and ultimately achieve mission goals. This paper addresses these issues with a focus on multi-object recognition. Instead of classifying a single object as in traditional image classification problems, the proposed method can automatically learn multiple objects simultaneously. Image segmentation techniques are used to identify the regions of interest in the field, which correspond to multiple objects such as soldiers or tanks. Since different objects come with different feature sizes, we propose a feature scaling method to represent each object in the same number of dimensions, achieved by linear/nonlinear scaling and sampling techniques. Finally, support vector machine (SVM) based learning algorithms are developed to learn and build the associations for different objects, and this knowledge is adaptively accumulated for object recognition in the testing stage. We test the effectiveness of the proposed method in different simulated military environments.
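The feature scaling step described above (mapping variable-length object features to a common dimensionality before SVM training) can be illustrated as below. The resampling function, target length, and synthetic data are assumptions for illustration; the paper's own linear/nonlinear scaling and sampling choices may differ.

```python
# Hedged sketch of the "feature scaling" step: objects (segmented regions) yield
# feature vectors of different lengths, which are resampled to a common dimension
# before SVM training. Data and lengths are synthetic assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)

def scale_features(vec, target_len=64):
    # Linear resampling of an arbitrary-length feature vector to target_len values
    old_x = np.linspace(0.0, 1.0, len(vec))
    new_x = np.linspace(0.0, 1.0, target_len)
    return np.interp(new_x, old_x, vec)

# Two object classes with variable-length raw features (e.g. region boundary samples)
raw = [rng.random(rng.integers(40, 200)) + label for label in (0, 1) for _ in range(50)]
y = np.array([0] * 50 + [1] * 50)
X = np.vstack([scale_features(v) for v in raw])

clf = SVC().fit(X[::2], y[::2])            # train on half, test on the other half
print("toy accuracy:", clf.score(X[1::2], y[1::2]))
```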
Perceptual Learning of Object Shape
Golcu, Doruk; Gilbert, Charles D.
2009-01-01
Recognition of objects is accomplished through the use of cues that depend on internal representations of familiar shapes. We used a paradigm of perceptual learning during visual search to explore what features human observers use to identify objects. Human subjects were trained to search for a target object embedded in an array of distractors, until their performance improved from near-chance levels to over 80% of trials in an object-specific manner. We determined the role of specific object components in the recognition of the object as a whole by measuring the transfer of learning from the trained object to other objects sharing components with it. Depending on the geometric relationship of the trained object with untrained objects, transfer to untrained objects was observed. Novel objects that shared a component with the trained object were identified at much higher levels than those that did not, and this could be used as an indicator of which features of the object were important for recognition. Training on an object also transferred to the components of the object when these components were embedded in an array of distractors of similar complexity. These results suggest that objects are not represented in a holistic manner during learning, but that their individual components are encoded. Transfer between objects was not complete, and occurred for more than one component, regardless of how well they distinguished the object from distractors. This suggests that a joint involvement of multiple components was necessary for full performance. PMID:19864574
Recognition vs Reverse Engineering in Boolean Concepts Learning
ERIC Educational Resources Information Center
Shafat, Gabriel; Levin, Ilya
2012-01-01
This paper deals with two types of logical problems--recognition problems and reverse engineering problems, and with the interrelations between these types of problems. The recognition problems are modeled in the form of a visual representation of various objects in a common pattern, with a composition of represented objects in the pattern.…
Recognition of strong earthquake-prone areas with a single learning class
NASA Astrophysics Data System (ADS)
Gvishiani, A. D.; Agayan, S. M.; Dzeboev, B. A.; Belov, I. O.
2017-05-01
This article presents a new recognition algorithm with learning, Barrier, designed for the recognition of earthquake-prone areas. In contrast to the Crust (Kora) algorithm used by the classical EPA approach, the Barrier algorithm learns from just one "pure" high-seismicity class. The new algorithm operates in the space of absolute values of the geological-geophysical parameters of the objects. The algorithm is applied to the recognition of areas prone to earthquakes with M ≥ 6.0 in the Caucasus region. Comparative analysis of the Crust and Barrier algorithms shows that their results are productively consistent.
Learning and Forgetting New Names and Objects in MCI and AD
ERIC Educational Resources Information Center
Gronholm-Nyman, Petra; Rinne, Juha O.; Laine, Matti
2010-01-01
We studied how subjects with mild cognitive impairment (MCI), early Alzheimer's disease (AD) and age-matched controls learned and maintained the names of unfamiliar objects that were trained with or without semantic support (object definitions). Naming performance, phonological cueing, incidental learning of the definitions and recognition of the…
Rolls, Edmund T; Mills, W Patrick C
2018-05-01
When objects transform into different views, some properties are maintained, such as whether the edges are convex or concave, and these non-accidental properties are likely to be important in view-invariant object recognition. The metric properties, such as the degree of curvature, may change with different views, and are less likely to be useful in object recognition. It is shown that in a model of invariant visual object recognition in the ventral visual stream, VisNet, non-accidental properties are encoded much more than metric properties by neurons. Moreover, it is shown how, with temporal trace rule training in VisNet, non-accidental properties of objects become encoded by neurons, and how metric properties are treated invariantly. We also show how VisNet can generalize between different objects if they have the same non-accidental property, because the metric properties are likely to overlap. VisNet is a 4-layer unsupervised model of visual object recognition trained by competitive learning that utilizes a temporal trace learning rule to implement the learning of invariance using views that occur close together in time. A second crucial property of this model of object recognition is whether, when neurons in the level corresponding to the inferior temporal visual cortex respond selectively to objects, neurons in the intermediate layers can respond to combinations of features that may be parts of two or more objects. In an investigation using the four sides of a square presented in every possible combination, it was shown that even though different layer 4 neurons are tuned to encode each feature or feature combination orthogonally, neurons in the intermediate layers can respond to features or feature combinations present in several objects. This property is an important part of the way in which high capacity can be achieved in the four-layer ventral visual cortical pathway. These findings concerning non-accidental properties and the use of neurons in intermediate layers of the hierarchy help to emphasise fundamental underlying principles of the computations that may be implemented in the ventral cortical visual stream used in object recognition. Copyright © 2018 Elsevier Inc. All rights reserved.
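To make the temporal trace rule concrete, here is a minimal numpy sketch of trace-rule learning of the kind VisNet uses: the postsynaptic term is a running trace of recent activity, so successive views of the same object presented close together in time strengthen onto the same output neurons. The learning rate, trace decay, network size, and input statistics are assumed values for illustration, not VisNet's actual parameters or architecture.

```python
# Minimal numpy sketch of a temporal trace learning rule of the kind VisNet uses:
# the postsynaptic term is a running trace of recent activity, so views of the same
# object seen close together in time strengthen onto the same output neuron.
# Constants and input statistics are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(6)
n_in, n_out, eta, trace_decay = 100, 10, 0.05, 0.8

W = rng.random((n_out, n_in)) * 0.01
trace = np.zeros(n_out)

def present_sequence(views):
    """Present successive views (transforms) of one object and apply the trace rule."""
    global trace
    for x in views:
        y = W @ x                                  # simple linear activation
        trace = trace_decay * trace + (1 - trace_decay) * y
        W += eta * np.outer(trace, x)              # trace rule: dW = eta * ybar * x
        W /= np.linalg.norm(W, axis=1, keepdims=True)   # keep weights bounded

object_views = [rng.random(n_in) for _ in range(5)]    # e.g. five nearby views
present_sequence(object_views)
print(W.shape)
```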
The Neural Regions Sustaining Episodic Encoding and Recognition of Objects
ERIC Educational Resources Information Center
Hofer, Alex; Siedentopf, Christian M.; Ischebeck, Anja; Rettenbacher, Maria A.; Widschwendter, Christian G.; Verius, Michael; Golaszewski, Stefan M.; Koppelstaetter, Florian; Felber, Stephan; Wolfgang Fleischhacker, W.
2007-01-01
In this functional MRI experiment, encoding of objects was associated with activation in left ventrolateral prefrontal/insular and right dorsolateral prefrontal and fusiform regions as well as in the left putamen. By contrast, correct recognition of previously learned objects (R judgments) produced activation in left superior frontal, bilateral…
Soh, Harold; Demiris, Yiannis
2014-01-01
Human beings not only possess the remarkable ability to distinguish objects through tactile feedback but are further able to improve upon recognition competence through experience. In this work, we explore tactile-based object recognition with learners capable of incremental learning. Using the sparse online infinite Echo-State Gaussian process (OIESGP), we propose and compare two novel discriminative and generative tactile learners that produce probability distributions over objects during object grasping/palpation. To enable iterative improvement, our online methods incorporate training samples as they become available. We also describe incremental unsupervised learning mechanisms, based on novelty scores and extreme value theory, when teacher labels are not available. We present experimental results for both supervised and unsupervised learning tasks using the iCub humanoid, with tactile sensors on its five-fingered anthropomorphic hand, and 10 different object classes. Our classifiers perform comparably to state-of-the-art methods (C4.5 and SVM classifiers) and findings indicate that tactile signals are highly relevant for making accurate object classifications. We also show that accurate "early" classifications are possible using only 20-30 percent of the grasp sequence. For unsupervised learning, our methods generate high quality clusterings relative to the widely-used sequential k-means and self-organising map (SOM), and we present analyses into the differences between the approaches.
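As a simplified stand-in for the incremental, novelty-aware setting described above, the sketch below implements an online nearest-class-mean learner that updates with each new tactile sample and flags novel objects when the best match is too far away. This is explicitly not the OIESGP; the threshold, feature dimensionality, and object names are assumptions for illustration only.

```python
# Simplified stand-in for the incremental setting described above: an online
# nearest-class-mean learner that updates with each new tactile sample and flags
# novel objects when the best match is too far away. This is not the OIESGP;
# thresholds and features are assumptions for illustration only.
import numpy as np

class IncrementalRecognizer:
    def __init__(self, novelty_threshold=2.0):
        self.means, self.counts = {}, {}
        self.threshold = novelty_threshold

    def update(self, label, x):
        if label not in self.means:
            self.means[label], self.counts[label] = np.array(x, float), 1
        else:
            self.counts[label] += 1
            self.means[label] += (x - self.means[label]) / self.counts[label]

    def predict(self, x):
        if not self.means:
            return "novel", np.inf
        label, dist = min(((l, np.linalg.norm(x - m)) for l, m in self.means.items()),
                          key=lambda t: t[1])
        return ("novel", dist) if dist > self.threshold else (label, dist)

rng = np.random.default_rng(7)
rec = IncrementalRecognizer()
for _ in range(20):
    rec.update("sponge", rng.normal(0.0, 0.1, 8))   # tactile feature vector (assumed)
print(rec.predict(rng.normal(0.0, 0.1, 8)))         # familiar object
print(rec.predict(rng.normal(5.0, 0.1, 8)))         # flagged as novel
```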
Multi-channel feature dictionaries for RGB-D object recognition
NASA Astrophysics Data System (ADS)
Lan, Xiaodong; Li, Qiming; Chong, Mina; Song, Jian; Li, Jun
2018-04-01
Hierarchical matching pursuit (HMP) is a popular feature learning method for RGB-D object recognition. However, the feature representation in HMP, with only one dictionary for the RGB channels, does not capture sufficient visual information. In this paper, we propose a feature learning method based on multi-channel feature dictionaries for RGB-D object recognition. Feature extraction in the proposed method consists of two layers, and the K-SVD algorithm is used to learn the dictionaries for sparse coding in both layers. In the first layer, features are obtained by max pooling over the sparse codes of the pixels in a cell, and the features of the cells in a patch are concatenated to generate a joint patch feature. The first-layer joint patch features are then used to learn the dictionary and sparse codes of the second layer. Finally, spatial pyramid pooling can be applied to the joint patch features of either layer to generate the final object features. Experimental results show that our method, with first- or second-layer features, obtains comparable or better performance than several published state-of-the-art methods.
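The first-layer pooling and concatenation step can be sketched as follows. The sparse codes here are generated randomly for illustration (in the actual method they come from a K-SVD dictionary), and all sizes are assumptions.

```python
# Illustrative numpy sketch of the first-layer pooling step described above:
# sparse codes of pixels are max-pooled within each cell, and cell features are
# concatenated into a patch feature. Sizes and the sparse codes themselves are
# synthetic assumptions (real HMP obtains them from a K-SVD dictionary).
import numpy as np

rng = np.random.default_rng(8)
cells_per_patch, pixels_per_cell, code_dim = 4, 16, 75

# Sparse codes for every pixel in one patch: (cells, pixels, code_dim)
pixel_codes = rng.random((cells_per_patch, pixels_per_cell, code_dim))
pixel_codes *= rng.random(pixel_codes.shape) < 0.1     # make the codes sparse

cell_features = np.abs(pixel_codes).max(axis=1)        # max pooling within each cell
patch_feature = cell_features.reshape(-1)              # concatenate cell features
print(patch_feature.shape)                             # (cells_per_patch * code_dim,)
```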
Case-Based Learning in Athletic Training
ERIC Educational Resources Information Center
Berry, David C.
2013-01-01
The National Athletic Trainers' Association (NATA) Executive Committee for Education has emphasized the need for proper recognition and management of orthopaedic and general medical conditions through their support of numerous learning objectives and the clinical integrated proficiencies. These learning objectives and integrated clinical…
Chao, Owen Y; Huston, Joseph P; Li, Jay-Shake; Wang, An-Li; de Souza Silva, Maria A
2016-05-01
The prefrontal cortex directly projects to the lateral entorhinal cortex (LEC), an important substrate for engaging item-associated information and relaying the information to the hippocampus. Here we ask to what extent the communication between the prefrontal cortex and LEC is critically involved in the processing of episodic-like memory. We applied a disconnection procedure to test whether the interaction between the medial prefrontal cortex (mPFC) and LEC is essential for the expression of recognition memory. It was found that male rats that received unilateral NMDA lesions of the mPFC and LEC in the same hemisphere, exhibited intact episodic-like (what-where-when) and object-recognition memories. When these lesions were placed in the opposite hemispheres (disconnection), episodic-like and associative memories for object identity, location and context were impaired. However, the disconnection did not impair the components of episodic memory, namely memory for novel object (what), object place (where) and temporal order (when), per se. Thus, the present findings suggest that the mPFC and LEC are a critical part of a neural circuit that underlies episodic-like and associative object-recognition memory. © 2015 Wiley Periodicals, Inc.
Lactobacillus helveticus-fermented milk improves learning and memory in mice.
Ohsawa, Kazuhito; Uchida, Naoto; Ohki, Kohji; Nakamura, Yasunori; Yokogoshi, Hidehiko
2015-07-01
The aim was to investigate the effects of Calpis sour milk whey, a Lactobacillus helveticus-fermented milk product, on learning and memory. We evaluated improvement in scopolamine-induced memory impairment using the spontaneous alternation behaviour test, a measure of short-term memory. We also evaluated learning and working memory in mice using the novel object recognition test, which does not involve primary reinforcement (food or electric shocks). A total of 195 male ddY mice were used in the spontaneous alternation behaviour test and 60 in the novel object recognition test. Forced oral administration of Calpis sour milk whey powder (200 and 2000 mg/kg) significantly improved scopolamine-induced cognitive impairments (P < 0.05 and P < 0.01, respectively) and object recognition memory (2000 mg/kg; P < 0.05). These results suggest that Calpis sour milk whey may be useful for preventing neurodegenerative disorders, such as Alzheimer's disease, and for enhancing learning and memory in healthy human subjects; however, human clinical studies are necessary.
A rat in the sewer: How mental imagery interacts with object recognition.
Karimpur, Harun; Hamburger, Kai
2018-01-01
The role of mental imagery has been puzzling researchers for more than two millennia. Both positive and negative effects of mental imagery on information processing have been discussed. The aim of this work was to examine how mental imagery affects object recognition and associative learning. Based on different perceptual and cognitive accounts we tested our imagery-induced interaction hypothesis in a series of two experiments. According to that, mental imagery could lead to (1) a superior performance in object recognition and associative learning if these objects are imagery-congruent (semantically) and to (2) an inferior performance if these objects are imagery-incongruent. In the first experiment, we used a static environment and tested associative learning. In the second experiment, subjects encoded object information in a dynamic environment by means of a virtual sewer system. Our results demonstrate that subjects who received a role adoption task (by means of guided mental imagery) performed better when imagery-congruent objects were used and worse when imagery-incongruent objects were used. We finally discuss our findings also with respect to alternative accounts and plead for a multi-methodological approach for future research in order to solve this issue.
ERIC Educational Resources Information Center
Fazl, Arash; Grossberg, Stephen; Mingolla, Ennio
2009-01-01
How does the brain learn to recognize an object from multiple viewpoints while scanning a scene with eye movements? How does the brain avoid the problem of erroneously classifying parts of different objects together? How are attention and eye movements intelligently coordinated to facilitate object learning? A neural model provides a unified…
Track Everything: Limiting Prior Knowledge in Online Multi-Object Recognition.
Wong, Sebastien C; Stamatescu, Victor; Gatt, Adam; Kearney, David; Lee, Ivan; McDonnell, Mark D
2017-10-01
This paper addresses the problem of online tracking and classification of multiple objects in an image sequence. Our proposed solution is to first track all objects in the scene without relying on object-specific prior knowledge, which in other systems can take the form of hand-crafted features or user-based track initialization. We then classify the tracked objects with a fast-learning image classifier based on a shallow convolutional neural network architecture, and demonstrate that object recognition improves when this is combined with object state information from the tracking algorithm. We argue that by transferring the use of prior knowledge from the detection and tracking stages to the classification stage, we can design a robust, general-purpose object recognition system with the ability to detect and track a variety of object types. We describe our biologically inspired implementation, which adaptively learns the shape and motion of tracked objects, and apply it to the Neovision2 Tower benchmark data set, which contains multiple object types. An experimental evaluation demonstrates that our approach is competitive with state-of-the-art video object recognition systems that do make use of object-specific prior knowledge in detection and tracking, while providing additional practical advantages by virtue of its generality.
ERIC Educational Resources Information Center
Heermann, Barry
Sinclair Community College's (SCC's) Experience Based Education (EBE) program offers an alternative approach to learning which operates outside the time, format, and place constraints imposed by traditional, classroom-based education. After introductory material defining EBE and tracing the increased recognition of adult, lifelong learning…
ERIC Educational Resources Information Center
Wood, Justin N.; Wood, Samantha M. W.
2018-01-01
How do newborns learn to recognize objects? According to temporal learning models in computational neuroscience, the brain constructs object representations by extracting smoothly changing features from the environment. To date, however, it is unknown whether newborns depend on smoothly changing features to build invariant object representations.…
Bello-Medina, Paola C; Sánchez-Carrasco, Livia; González-Ornelas, Nadia R; Jeffery, Kathryn J; Ramírez-Amaya, Víctor
2013-08-01
Here we tested whether the well-known superiority of spaced training over massed training is equally evident in both object identity and object location recognition memory. We trained animals with objects placed in a variable or in a fixed location to produce a location-independent object identity memory or a location-dependent object representation. The training consisted of 5 trials that occurred either on one day (Massed) or over the course of 5 consecutive days (Spaced). The memory test was done in independent groups of animals either 24h or 7 days after the last training trial. In each test the animals were exposed to either a novel object, when trained with the objects in variable locations, or to a familiar object in a novel location, when trained with objects in fixed locations. The difference in time spent exploring the changed versus the familiar objects was used as a measure of recognition memory. For the object-identity-trained animals, spaced training produced clear evidence of recognition memory after both 24h and 7 days, but massed-training animals showed it only after 24h. In contrast, for the object-location-trained animals, recognition memory was evident after both retention intervals and with both training procedures. When objects were placed in variable locations for the two types of training and the test was done with a brand-new location, only the spaced-training animals showed recognition at 24h, but surprisingly, after 7 days, animals trained using both procedures were able to recognize the change, suggesting a post-training consolidation process. We suggest that the two training procedures trigger different neural mechanisms that may differ in the two segregated streams that process object information and that may consolidate differently. Copyright © 2013 Elsevier B.V. All rights reserved.
Beilharz, Jessica E; Maniam, Jayanthi; Morris, Margaret J
2014-03-01
High-energy diets have been shown to impair cognition; however, the rapidity of these effects and the dietary component(s) responsible are currently unclear. We conducted two experiments in rats to examine the effects of short-term exposure to a diet rich in sugar and fat or rich in sugar on object (perirhinal-dependent) and place (hippocampal-dependent) recognition memory, and the role of inflammatory mediators in these responses. In Experiment 1, rats fed a cafeteria style diet containing chow supplemented with lard, cakes, biscuits, and a 10% sucrose solution performed worse on the place, but not the object recognition task, than chow fed control rats when tested after 5, 11, and 20 days. In Experiment 2, rats fed the cafeteria style diet either with or without sucrose and rats fed chow supplemented with sucrose also performed worse on the place, but not the object recognition task when tested after 5, 11, and 20 days. Rats fed the cafeteria diets consumed five times more energy than control rats and exhibited increased plasma leptin, insulin and triglyceride concentrations; these were not affected in the sucrose-only rats. Rats exposed to sucrose exhibited both increased hippocampal inflammation (TNF-α and IL-1β mRNA) and oxidative stress, as indicated by an upregulation of NRF1 mRNA compared to control rats. In contrast, these markers were not significantly elevated in rats that received the cafeteria diet without added sucrose. Hippocampal BDNF and neuritin mRNA were similar across all groups. These results show that relatively short exposures to diets rich in both fat and sugar, or rich in sugar alone, impair hippocampal-dependent place recognition memory before weight differences emerge, and suggest a role for oxidative stress and neuroinflammation in this impairment. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
Social cues at encoding affect memory in 4-month-old infants.
Kopp, Franziska; Lindenberger, Ulman
2012-01-01
Available evidence suggests that infants use adults' social cues for learning by the second half of the first year of life. However, little is known about the short-term or long-term effects of joint attention interactions on learning and memory in younger infants. In the present study, 4-month-old infants were familiarized with visually presented objects in either of two conditions that differed in the degree of joint attention (high vs. low). Brain activity in response to familiar and novel objects was assessed immediately after the familiarization phase (immediate recognition), and following a 1-week delay (delayed recognition). The latency of the Nc component differentiated between recognition of old versus new objects. Pb amplitude and latency were affected by joint attention in delayed recognition. Moreover, the frequency of infant gaze to the experimenter during familiarization differed between the two experimental groups and modulated the Pb response. Results show that joint attention affects the mechanisms of long-term retention in 4-month-old infants. We conclude that joint attention helps children at this young age to recognize the relevance of learned items.
NetVLAD: CNN Architecture for Weakly Supervised Place Recognition.
Arandjelovic, Relja; Gronat, Petr; Torii, Akihiko; Pajdla, Tomas; Sivic, Josef
2018-06-01
We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the "Vector of Locally Aggregated Descriptors" image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks.
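For orientation, the following is a minimal numpy sketch of the classic hard-assignment VLAD aggregation that the NetVLAD layer generalizes: local descriptors are assigned to cluster centres, per-centre residuals are summed, and the result is normalized into a single image vector. Descriptors and centres here are random placeholders; NetVLAD replaces the hard assignment with a trainable soft assignment inside the CNN and learns the centres by backpropagation.

```python
# Minimal numpy sketch of the (hard-assignment) VLAD aggregation that NetVLAD
# generalizes: local descriptors are assigned to cluster centres and their
# residuals are summed per centre, then normalized. Descriptors and centres here
# are random placeholders; NetVLAD replaces the hard assignment with a trainable
# soft assignment inside the CNN.
import numpy as np

rng = np.random.default_rng(9)
n_desc, d, k = 500, 128, 8
descriptors = rng.standard_normal((n_desc, d))          # e.g. local CNN features
centres = rng.standard_normal((k, d))                   # visual words

assign = np.argmin(((descriptors[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
vlad = np.zeros((k, d))
for j in range(k):
    members = descriptors[assign == j]
    if len(members):
        vlad[j] = (members - centres[j]).sum(axis=0)    # sum of residuals

vlad = vlad.reshape(-1)
vlad /= np.linalg.norm(vlad) + 1e-12                    # L2-normalized image vector
print(vlad.shape)                                       # (k * d,)
```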
The influence of personality on neural mechanisms of observational fear and reward learning
Hooker, Christine I.; Verosky, Sara C.; Miyakawa, Asako; Knight, Robert T.; D’Esposito, Mark
2012-01-01
Fear and reward learning can occur through direct experience or observation. Both channels can enhance survival or create maladaptive behavior. We used fMRI to isolate neural mechanisms of observational fear and reward learning and investigate whether neural response varied according to individual differences in neuroticism and extraversion. Participants learned object-emotion associations by observing a woman respond with fearful (or neutral) and happy (or neutral) facial expressions to novel objects. The amygdala-hippocampal complex was active when learning the object-fear association, and the hippocampus was active when learning the object-happy association. After learning, objects were presented alone; amygdala activity was greater for the fear (vs. neutral) and happy (vs. neutral) associated object. Importantly, greater amygdala-hippocampal activity during fear (vs. neutral) learning predicted better recognition of learned objects on a subsequent memory test. Furthermore, personality modulated neural mechanisms of learning. Neuroticism positively correlated with neural activity in the amygdala and hippocampus during fear (vs. neutral) learning. Low extraversion/high introversion was related to faster behavioral predictions of the fearful and neutral expressions during fear learning. In addition, low extraversion/high introversion was related to greater amygdala activity during happy (vs. neutral) learning, happy (vs. neutral) object recognition, and faster reaction times for predicting happy and neutral expressions during reward learning. These findings suggest that neuroticism is associated with an increased sensitivity in the neural mechanism for fear learning which leads to enhanced encoding of fear associations, and that low extraversion/high introversion is related to enhanced conditionability for both fear and reward learning. PMID:18573512
Farr, Susan A; Erickson, Michelle A; Niehoff, Michael L; Banks, William A; Morley, John E
2014-01-01
Alzheimer's disease (AD) is a progressive neurodegenerative disease. Currently, there are no therapies to stop or reverse the symptoms of AD. We have developed an antisense oligonucleotide (OL-1) against the amyloid-β protein precursor (AβPP) that can decrease AβPP expression and amyloid-β protein (Aβ) production. This antisense rapidly crosses the blood-brain barrier, reverses learning and memory impairments, reduces oxidative stress, and restores brain-to-blood efflux of Aβ in SAMP8 mice. Here, we examined the effects of this AβPP antisense in the Tg2576 mouse model of AD. We administered the OL-1 antisense into the lateral ventricle 3 times at 2-week intervals. Seventy-two hours after the third injection, we tested learning and memory in T-maze foot shock avoidance. In the second study, we injected the mice with OL-1 antisense 3 times at 2-week intervals via the tail vein. Seventy-two hours later, we tested learning and memory in the T-maze, novel object recognition, and the elevated plus maze. At the end of behavioral testing, brain tissue was collected. OL-1 antisense administered centrally improved acquisition and retention of T-maze foot shock avoidance. OL-1 antisense administered via tail vein improved learning and memory in both T-maze foot shock avoidance and novel object-place recognition. In the elevated plus maze, the mice that received OL-1 antisense spent less time in the open arms and had fewer entries into the open arms, indicating reduced disinhibition. Biochemical analyses revealed a significant reduction of the AβPP signal and a reduction in measures of neuroinflammation. The current findings support the therapeutic potential of OL-1 AβPP antisense.
Higher-Order Neural Networks Applied to 2D and 3D Object Recognition
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly; Reid, Max B.
1994-01-01
A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.
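To make the built-in invariance concrete, the hedged sketch below computes, for every triple of "on" pixels, the interior angles of the triangle they form, which are unchanged by translation, scaling, and in-plane rotation, and accumulates them into a signature histogram; sharing weights across triples with the same signature is the essence of an invariant third-order network. Function names and the angle binning are illustrative assumptions, and the O(n³) enumeration of pixel triples mirrors the memory limitation noted in the abstract.

```python
# Sketch of angle-based invariant features for a third-order HONN (illustrative).
import itertools
import numpy as np

def triangle_angle_signature(p1, p2, p3, n_bins=18):
    """Bin the sorted interior angles of the triangle (p1, p2, p3)."""
    pts = np.array([p1, p2, p3], dtype=float)
    angles = []
    for i in range(3):
        a, b, c = pts[i], pts[(i + 1) % 3], pts[(i + 2) % 3]
        v1, v2 = b - a, c - a
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
    return tuple(np.digitize(sorted(angles), np.linspace(0, 180, n_bins)))

def honn_feature_vector(binary_image):
    """Histogram over angle signatures of all 'on'-pixel triples (O(n^3) in pixels)."""
    on_pixels = list(zip(*np.nonzero(binary_image)))
    hist = {}
    for tri in itertools.combinations(on_pixels, 3):
        sig = triangle_angle_signature(*tri)
        hist[sig] = hist.get(sig, 0) + 1
    # identical (up to sampling noise) for scaled, translated, and rotated versions
    return hist
```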
Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.
Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao
2017-06-21
In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation-constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly and non-linearly mapped into a feature space in which a structured kernel dictionary is established, with each sub-dictionary lying in the span of the mapped signals from the corresponding class. In the discriminative term, motivated by an analysis of the classification mechanism, a correlation constraint is proposed in kernel form: it constrains the correlations between different discriminative codes so that the coefficient vectors are transformed into a feature space in which features are highly correlated within a class and nearly independent between classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature need not be known; only its inner products, with the kernel matrix embedded, are required, which makes it suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene, and synthetic aperture radar (SAR) vehicle target recognition.
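The paper's method is a kernelized, correlation-constrained KSVD; as a much simpler, plainly linear stand-in, the sketch below illustrates only the underlying idea of per-class dictionaries and classification by reconstruction error, using scikit-learn. All names and parameter values are illustrative assumptions, not the authors' algorithm.

```python
# Simplified per-class dictionary learning + reconstruction-error classification.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def train_class_dictionaries(X, y, n_atoms=32):
    """Learn one dictionary per class from training signals X (n_samples, n_features)."""
    dicts = {}
    for label in np.unique(y):
        dl = MiniBatchDictionaryLearning(n_components=n_atoms,
                                         transform_algorithm='omp',
                                         transform_n_nonzero_coefs=5)
        dicts[label] = dl.fit(X[y == label])
    return dicts

def classify(x, dicts):
    """Assign x to the class whose dictionary reconstructs it with the least error."""
    errors = {}
    for label, dl in dicts.items():
        code = dl.transform(x.reshape(1, -1))      # sparse code via OMP
        recon = code @ dl.components_              # back to signal space
        errors[label] = np.linalg.norm(x - recon.ravel())
    return min(errors, key=errors.get)
```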
A biologically inspired neural network model to transformation invariant object recognition
NASA Astrophysics Data System (ADS)
Iftekharuddin, Khan M.; Li, Yaqin; Siddiqui, Faraz
2007-09-01
Transformation-invariant image recognition has been an active research area due to its widespread applications in a variety of fields such as military operations, robotics, medical practice, geographic scene analysis, and many others. The primary goal of this research is the detection of objects in the presence of image transformations such as changes in resolution, rotation, translation, scale, and occlusion. We investigate a biologically inspired neural network (NN) model for such transformation-invariant object recognition. In a classical training-testing setup for an NN, performance depends largely on the range of transformations or orientations covered in training. An even more serious dilemma is that there may not be enough training data available for successful learning, or even no training data at all. To alleviate this problem, a biologically inspired reinforcement learning (RL) approach is proposed. In this paper, the RL approach is explored for object recognition under different types of transformations such as changes in scale, size, resolution, and rotation. The RL is implemented in an adaptive critic design (ACD) framework, which approximates neuro-dynamic programming using an action network and a critic network. Two ACD algorithms, Heuristic Dynamic Programming (HDP) and Dual Heuristic Dynamic Programming (DHP), are investigated for transformation-invariant object recognition. The two learning algorithms are evaluated statistically using simulated transformations of images as well as a large-scale UMIST face database with pose variations. For the face-database authentication case, 90° out-of-plane rotations of faces from 20 different subjects in the UMIST database are used. Our simulations show promising results for both designs for transformation-invariant object recognition and face authentication. Comparing the two algorithms, DHP outperforms HDP in learning capability, as DHP generally takes fewer steps to perform a successful recognition task. Further, the residual critic error in DHP is generally smaller than that of HDP, and DHP achieves a 100% success rate more frequently than HDP for individual objects/subjects. On the other hand, HDP is more robust than DHP in terms of success rate across the database when applied in a stochastic and uncertain environment, and the computational time required by DHP is greater.
Learning Distance Functions for Exemplar-Based Object Recognition
2007-08-08
Dissertation by Andrea Lynn Frome, B.S. (Mary Washington College). [Extracted text contains only title-page and acknowledgment fragments plus residue of a figure plotting mean performance against the number of training examples per class; no abstract text is recoverable.]
Learning Distance Functions for Exemplar-Based Object Recognition
2007-01-01
Dissertation submitted by Andrea Lynn Frome, B.S. (Mary Washington College, 1996). [Extracted text contains only title-page and acknowledgment fragments plus residue of a figure plotting mean performance against the number of training examples per class; no abstract text is recoverable.]
Object recognition in images via a factor graph model
NASA Astrophysics Data System (ADS)
He, Yong; Wang, Long; Wu, Zhaolin; Zhang, Haisu
2018-04-01
Object recognition in images suffers from a huge search space and uncertain object profiles. Recently, Bag-of-Words methods have been used to address these problems, in particular the two-dimensional CRF (Conditional Random Field) model. In this paper we propose a method based on a general and flexible factor graph model, which can capture long-range correlations among Bag-of-Words features by constructing a network learning framework, in contrast to the lattice structure of a CRF. Furthermore, we derive a parameter learning algorithm for the factor graph model based on gradient descent and the Loopy Sum-Product algorithm. Experimental results on the Graz 02 dataset show that the recognition performance of our method, in precision and recall, is better than a state-of-the-art method and the original CRF model, demonstrating the effectiveness of the proposed approach.
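For readers unfamiliar with the Loopy Sum-Product algorithm mentioned above, the toy sketch below runs sum-product message passing on a three-node cycle with pairwise potentials and reads off approximate marginals. The graph, potentials, and iteration count are arbitrary choices for illustration, not the paper's model.

```python
# Minimal loopy sum-product (belief propagation) on a pairwise MRF with one cycle.
import numpy as np

nodes = [0, 1, 2]
edges = [(0, 1), (1, 2), (2, 0)]                  # a 3-cycle, so BP is "loopy"
n_states = 2
unary = {i: np.array([0.6, 0.4]) for i in nodes}  # phi_i(x_i)
unary[0] = np.array([0.9, 0.1])                   # strong evidence at node 0
pair = np.array([[0.8, 0.2],                      # psi_ij(x_i, x_j): smoothness prior
                 [0.2, 0.8]])

# initialize all directed messages to uniform
msgs = {(i, j): np.ones(n_states) / n_states
        for (a, b) in edges for (i, j) in [(a, b), (b, a)]}

for _ in range(20):                               # iterate toward (approximate) convergence
    new = {}
    for (i, j) in msgs:
        incoming = np.ones(n_states)
        for (k, l) in msgs:                       # product of messages into i, except from j
            if l == i and k != j:
                incoming = incoming * msgs[(k, l)]
        m = pair.T @ (unary[i] * incoming)        # marginalize over x_i
        new[(i, j)] = m / m.sum()
    msgs = new

beliefs = {}
for i in nodes:
    b = unary[i].copy()
    for (k, l) in msgs:
        if l == i:
            b = b * msgs[(k, l)]
    beliefs[i] = b / b.sum()
print(beliefs)                                    # approximate marginals per node
```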
Study on cognitive impairment in diabetic rats by different behavioral experiments
NASA Astrophysics Data System (ADS)
Yu-bin, Ji; Zeng-yi, Li; Guo-song, Xin; Chi, Wei; Hong-jian, Zhu
2017-12-01
The object recognition test and the Y-maze test are widely used techniques for evaluating learning and memory behavior. In the novel object recognition experiment, diabetic rats were slower than normal rats to discriminate between old and new objects, indicating that their learning and memory were impaired. In the Y-maze test, the retention-time ratio and the number of errors were much higher in diabetic rats than in the blank control group. Both methods can therefore reflect the cognitive impairment of diabetic rats.
The development of newborn object recognition in fast and slow visual worlds
Wood, Justin N.; Wood, Samantha M. W.
2016-01-01
Object recognition is central to perception and cognition. Yet relatively little is known about the environmental factors that cause invariant object recognition to emerge in the newborn brain. Is this ability a hardwired property of vision? Or does the development of invariant object recognition require experience with a particular kind of visual environment? Here, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) require visual experience with slowly changing objects to develop invariant object recognition abilities. When newborn chicks were raised with a slowly rotating virtual object, the chicks built invariant object representations that generalized across novel viewpoints and rotation speeds. In contrast, when newborn chicks were raised with a virtual object that rotated more quickly, the chicks built viewpoint-specific object representations that failed to generalize to novel viewpoints and rotation speeds. Moreover, there was a direct relationship between the speed of the object and the amount of invariance in the chick's object representation. Thus, visual experience with slowly changing objects plays a critical role in the development of invariant object recognition. These results indicate that invariant object recognition is not a hardwired property of vision, but is learned rapidly when newborns encounter a slowly changing visual world. PMID:27097925
NASA Astrophysics Data System (ADS)
LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey
2015-05-01
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
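As a minimal numeric illustration of the backpropagation mechanism described in this abstract, the sketch below trains a two-layer network on a toy XOR-like problem, propagating the error signal back through each layer to update its parameters. All sizes and hyperparameters are arbitrary choices for demonstration.

```python
# Bare-bones two-layer network trained with backpropagation (toy example).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)    # XOR-like target

W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)                     # hidden representation (layer 1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # output probability (layer 2)
    # backpropagation: push the output error back through each layer
    dout = (p - y) / len(X)
    dW2 = h.T @ dout;  db2 = dout.sum(axis=0)
    dh = dout @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh;    db1 = dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

print("training accuracy:", ((p > 0.5) == y).mean())
```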
Sequential Learning and Recognition of Comprehensive Behavioral Patterns Based on Flow of People
NASA Astrophysics Data System (ADS)
Gibo, Tatsuya; Aoki, Shigeki; Miyamoto, Takao; Iwata, Motoi; Shiozaki, Akira
Recently, surveillance cameras have been set up everywhere, for example, in streets and public places, in order to detect irregular situations. In the existing surveillance systems, as only a handful of surveillance agents watch a large number of images acquired from surveillance cameras, there is a possibility that they may miss important scenes such as accidents or abnormal incidents. Therefore, we propose a method for sequential learning and the recognition of comprehensive behavioral patterns in crowded places. First, we comprehensively extract a flow of people from input images by using optical flow. Second, we extract behavioral patterns on the basis of change-point detection of the flow of people. Finally, in order to recognize an observed behavioral pattern, we draw a comparison between the behavioral pattern and previous behavioral patterns in the database. We verify the effectiveness of our approach by placing a surveillance camera on a campus.
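A hedged sketch of the first stage described above (extracting a flow of people): dense optical flow is computed per frame with OpenCV's Farneback method and summarized into a motion histogram whose frame-to-frame differences could feed a change-point detector. The file name, the per-frame summary features, and the change score are assumptions for illustration, not the authors' pipeline.

```python
# Dense optical flow as a per-frame crowd-motion summary (illustrative sketch).
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.mp4")        # assumed input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
summaries = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # per-frame summary: mean motion magnitude plus a coarse direction histogram
    hist, _ = np.histogram(ang, bins=8, range=(0, 2 * np.pi), weights=mag)
    summaries.append(np.concatenate([[mag.mean()], hist / (hist.sum() + 1e-9)]))
    prev_gray = gray

summaries = np.array(summaries)
# a change point could be flagged where consecutive summaries differ sharply
change_score = np.linalg.norm(np.diff(summaries, axis=0), axis=1)
```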
Platz, T
1996-10-01
Somaesthetic, motor and cognitive functions were studied in a man with impaired tactile object-recognition (TOR) in his left hand due to a right parietal convexity meningioma that had been surgically removed. Primary motor and somatosensory functions were not impaired, and discriminative abilities for various tactile aspects and cognitive skills were preserved. Nevertheless, the patient could often not appreciate the object's nature or significance when it was placed in his left hand and was unable to name or to describe or demonstrate the use of these objects. Therefore, he can be regarded as an example of associative tactile agnosia. The view is taken and elaborated that defective modality-specific meaning representations account for associative tactile agnosia. These meaning representations are conceptualized as learned unimodal feature-entity relationships which are thought to be defective in tactile agnosia. In line with this hypothesis, tactile feature analysis and cross-modal matching of features were largely preserved in the investigated patient, while combining features to form entities was defective in the tactile domain. The alternative hypothesis of agnosia as a deficit of cross-modal association of features was not supported. The presumed distributed functional network responsible for TOR is thought to involve perception of features, object recognition and related tactile motor behaviour interactively. A deficit leading primarily to impaired combining of features to form entities can therefore be expected to result in additional minor impairment of related perceptual-motor processes. Unilaterality of the gnostic deficit can be explained by a lateralized organization of the functional network responsible for tactile recognition of objects.
Lawson, Rebecca
2014-02-01
The limits of generalization of our 3-D shape recognition system to identifying objects by touch were investigated by testing exploration at unusual locations and using untrained effectors. In Experiments 1 and 2, people found identification by hand of real objects, plastic 3-D models of objects, and raised line drawings placed in front of themselves no easier than when exploration was behind their back. Experiment 3 compared one-handed, two-handed, one-footed, and two-footed haptic object recognition of familiar objects. Recognition by foot was slower (13 vs. 7 s) and much less accurate (47 % vs. 9 % errors) than recognition by either one or both hands. Nevertheless, item difficulty was similar across hand and foot exploration, and there was a strong correlation between an individual's hand and foot performance. Furthermore, foot recognition was better with the largest 20 of the 80 items (32 % errors), suggesting that physical limitations hampered exploration by foot. Thus, object recognition by hand generalized efficiently across the spatial location of stimuli, while object recognition by foot seemed surprisingly good given that no prior training was provided. Active touch (haptics) thus efficiently extracts 3-D shape information and accesses stored representations of familiar objects from novel modes of input.
Decreased acetylcholine release delays the consolidation of object recognition memory.
De Jaeger, Xavier; Cammarota, Martín; Prado, Marco A M; Izquierdo, Iván; Prado, Vania F; Pereira, Grace S
2013-02-01
Acetylcholine (ACh) is important for different cognitive functions such as learning, memory and attention. The release of ACh depends on its vesicular loading by the vesicular acetylcholine transporter (VAChT). It has been demonstrated that VAChT expression can modulate object recognition memory. However, the role of VAChT expression on object recognition memory persistence still remains to be understood. To address this question we used distinct mouse lines with reduced expression of VAChT, as well as pharmacological manipulations of the cholinergic system. We showed that reduction of cholinergic tone impairs object recognition memory measured at 24h. Surprisingly, object recognition memory, measured at 4 days after training, was impaired by substantial, but not moderate, reduction in VAChT expression. Our results suggest that levels of acetylcholine release strongly modulate object recognition memory consolidation and appear to be of particular importance for memory persistence 4 days after training. Copyright © 2012 Elsevier B.V. All rights reserved.
Sticht, Martin A; Jacklin, Derek L; Mechoulam, Raphael; Parker, Linda A; Winters, Boyer D
2015-03-25
Cannabinoids disrupt learning and memory in human and nonhuman participants. Object recognition memory, which is particularly susceptible to the impairing effects of cannabinoids, relies critically on the perirhinal cortex (PRh); however, to date, the effects of cannabinoids within PRh have not been assessed. In the present study, we evaluated the effects of localized administration of the synthetic cannabinoid, HU210 (0.01, 1.0 μg/hemisphere), into PRh on spontaneous object recognition in Long-Evans rats. Animals received intra-PRh infusions of HU210 before the sample phase, and object recognition memory was assessed at various delays in a subsequent retention test. We found that presample intra-PRh HU210 dose dependently (1.0 μg but not 0.01 μg) interfered with spontaneous object recognition performance, exerting an apparently more pronounced effect when memory demands were increased. These novel findings show that cannabinoid agonists in PRh disrupt object recognition memory. Copyright © 2015 Wolters Kluwer Health, Inc. All rights reserved.
Qiao, Hong; Li, Yinlin; Li, Fengfu; Xi, Xuanyang; Wu, Wei
2016-10-01
Recently, many biologically inspired visual computational models have been proposed. The design of these models follows the related biological mechanisms and structures, and the models provide new solutions for visual recognition tasks. In this paper, based on recent biological evidence, we propose a framework to mimic the active and dynamic learning and recognition process of the primate visual cortex. In terms of principles, the main contribution is that the framework achieves unsupervised learning of episodic features (key components and their spatial relations) and semantic features (semantic descriptions of the key components), which support higher-level cognition of an object. In terms of performance, the advantages of the framework are as follows: 1) learning episodic features without supervision: for a class of objects without prior knowledge, the key components, their spatial relations, and their cover regions can be learned automatically through a deep neural network (DNN); 2) learning semantic features based on episodic features: within the cover regions of the key components, the semantic geometrical values of these components can be computed based on contour detection; 3) forming general knowledge of a class of objects: the general knowledge of a class, mainly comprising the key components, their spatial relations, and average semantic values, can be formed as a concise description of the class; and 4) achieving higher-level cognition and dynamic updating: for a test image, the model can produce a classification and subclass semantic descriptions, and test samples with high confidence are selected to dynamically update the whole model. Experiments are conducted on face images, and good performance is achieved in each layer of the DNN and in the semantic description learning process. Furthermore, the model can be generalized to recognition tasks for other objects, with learning ability.
A Neural-Dynamic Architecture for Concurrent Estimation of Object Pose and Identity
Lomp, Oliver; Faubel, Christian; Schöner, Gregor
2017-01-01
Handling objects or interacting with a human user about objects on a shared tabletop requires that objects be identified after learning from a small number of views and that object pose be estimated. We present a neurally inspired architecture that learns object instances by storing features extracted from a single view of each object. Input features are color and edge histograms from a localized area that is updated during processing. The system finds the best-matching view for the object in a novel input image while concurrently estimating the object’s pose, aligning the learned view with current input. The system is based on neural dynamics, computationally operating in real time, and can handle dynamic scenes directly off live video input. In a scenario with 30 everyday objects, the system achieves recognition rates of 87.2% from a single training view for each object, while also estimating pose quite precisely. We further demonstrate that the system can track moving objects, and that it can segment the visual array, selecting and recognizing one object while suppressing input from another known object in the immediate vicinity. Evaluation on the COIL-100 dataset, in which objects are depicted from different viewing angles, revealed recognition rates of 91.1% on the first 30 objects, each learned from four training views. PMID:28503145
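The following sketch illustrates, in very reduced form, the feature side of such an architecture: a localized image region is described by a color histogram plus an edge-orientation histogram, and a new view is matched against stored single-view templates by inner product. Bin counts, the color space, and the matching rule are illustrative assumptions, not the published neural-dynamic implementation.

```python
# Color + edge-orientation histogram descriptor and single-view template matching.
import cv2
import numpy as np

def region_descriptor(bgr_patch, color_bins=8, edge_bins=9):
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    color_hist = cv2.calcHist([hsv], [0], None, [color_bins], [0, 180]).ravel()
    gray = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)
    edge_hist, _ = np.histogram(ang, bins=edge_bins, range=(0, 360), weights=mag)
    d = np.concatenate([color_hist, edge_hist])
    return d / (np.linalg.norm(d) + 1e-9)

def best_match(query_patch, learned_views):
    """learned_views: dict of object name -> descriptor from a single training view."""
    q = region_descriptor(query_patch)
    return max(learned_views, key=lambda name: float(q @ learned_views[name]))
```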
View Combination: A Generalization Mechanism for Visual Recognition
ERIC Educational Resources Information Center
Friedman, Alinda; Waller, David; Thrash, Tyler; Greenauer, Nathan; Hodgson, Eric
2011-01-01
We examined whether view combination mechanisms shown to underlie object and scene recognition can integrate visual information across views that have little or no three-dimensional information at either the object or scene level. In three experiments, people learned four "views" of a two-dimensional visual array derived from a three-dimensional…
Incremental concept learning with few training examples and hierarchical classification
NASA Astrophysics Data System (ADS)
Bouma, Henri; Eendebak, Pieter T.; Schutte, Klamer; Azzopardi, George; Burghouts, Gertjan J.
2015-10-01
Object recognition and localization are important to automatically interpret video and allow better querying on its content. We propose a method for object localization that learns incrementally and addresses four key aspects. Firstly, we show that for certain applications, recognition is feasible with only a few training samples. Secondly, we show that novel objects can be added incrementally without retraining existing objects, which is important for fast interaction. Thirdly, we show that an unbalanced number of positive training samples leads to biased classifier scores that can be corrected by modifying weights. Fourthly, we show that the detector performance can deteriorate due to hard-negative mining for similar or closely related classes (e.g., for Barbie and dress, because the doll is wearing a dress). This can be solved by our hierarchical classification. We introduce a new dataset, which we call TOSO, and use it to demonstrate the effectiveness of the proposed method for the localization and recognition of multiple objects in images.
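One way to realize the incremental property claimed above is to give each object class its own one-vs-rest linear classifier, so that a novel class is added by training only its own model against a fixed pool of negatives while existing classes remain untouched. The scikit-learn sketch below shows this idea under those assumptions; it omits the paper's score-bias correction and hierarchical classification.

```python
# Incremental addition of object classes via independent one-vs-rest classifiers.
import numpy as np
from sklearn.svm import LinearSVC

class IncrementalRecognizer:
    def __init__(self):
        self.models = {}                           # class name -> LinearSVC

    def add_class(self, name, positives, negative_pool):
        """Train only the new class; previously learned classes are untouched."""
        X = np.vstack([positives, negative_pool])
        y = np.r_[np.ones(len(positives)), np.zeros(len(negative_pool))]
        self.models[name] = LinearSVC(C=1.0).fit(X, y)

    def predict(self, x):
        scores = {n: float(m.decision_function(x.reshape(1, -1))[0])
                  for n, m in self.models.items()}
        return max(scores, key=scores.get)
```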
Learning viewpoint invariant object representations using a temporal coherence principle.
Einhäuser, Wolfgang; Hipp, Jörg; Eggert, Julian; Körner, Edgar; König, Peter
2005-07-01
Invariant object recognition is arguably one of the major challenges for contemporary machine vision systems. In contrast, the mammalian visual system performs this task virtually effortlessly. How can we exploit our knowledge of the biological system to improve artificial systems? Our understanding of the mammalian early visual system has been augmented by the discovery that general coding principles could explain many aspects of neuronal response properties. How can such schemes be transferred to system-level performance? In the present study we train cells on a particular variant of the general principle of temporal coherence, the "stability" objective. These cells are trained on unlabeled real-world images without a teaching signal. We show that after training, the cells form a representation that is largely independent of the viewpoint from which the stimulus is viewed. This finding includes generalization to previously unseen viewpoints. The achieved representation is better suited for viewpoint-invariant object classification than the cells' input patterns. This ability to facilitate viewpoint-invariant classification is maintained even if training and classification take place in the presence of a distractor object that is also unlabeled. In summary, we show that unsupervised learning using a general coding principle facilitates the classification of real-world objects that are not segmented from the background and undergo complex, non-isomorphic transformations.
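A linear reading of the "stability" objective is slow feature analysis: find projections of an unlabeled, temporally ordered image sequence whose outputs change as little as possible over time, subject to a unit-variance (whitening) constraint. The sketch below implements only that linear version; the paper's trained cells are nonlinear, and the dimensionality choices here are arbitrary.

```python
# Minimal linear slow-feature (temporal coherence) analysis as a sketch of the objective.
import numpy as np

def slow_features(X, n_out=4):
    """X: (n_frames, n_dims) sequence of image descriptors in temporal order."""
    Xc = X - X.mean(axis=0)
    # whiten so that the unit-variance constraint becomes an identity covariance
    cov = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    keep = evals > 1e-8
    W_white = evecs[:, keep] / np.sqrt(evals[keep])
    Z = Xc @ W_white
    # slowness: minimize the variance of temporal differences of the outputs
    dZ = np.diff(Z, axis=0)
    dvals, dvecs = np.linalg.eigh(np.cov(dZ, rowvar=False))
    P = dvecs[:, :n_out]                          # directions with slowest variation
    return W_white @ P                            # maps centered input to slow outputs

# usage: W = slow_features(frames); slow_code = (frame - frames.mean(0)) @ W
```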
Automation of the novel object recognition task for use in adolescent rats
Silvers, Janelle M.; Harrod, Steven B.; Mactutus, Charles F.; Booze, Rosemarie M.
2010-01-01
The novel object recognition task is gaining popularity for its ability to test a complex behavior which relies on the integrity of memory and attention systems without placing undue stress upon the animal. While the task places few requirements upon the animal, it traditionally requires the experimenter to observe the test phase directly and record behavior. This approach can severely limit the number of subjects which can be tested in a reasonable period of time, as training and testing occur on the same day and span several hours. The current study was designed to test the feasibility of automation of this task for adolescent rats using standard activity chambers, with the goals of increased objectivity, flexibility, and throughput of subjects. PMID:17719091
Atoms of recognition in human and computer vision.
Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel
2016-03-08
Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.
Neural correlates of object-in-place learning in hippocampus and prefrontal cortex.
Kim, Jangjin; Delcasso, Sébastien; Lee, Inah
2011-11-23
Hippocampus and prefrontal cortex (PFC) process spatiotemporally discrete events while maintaining goal-directed task demands. Although some studies have reported that neural activities in the two regions are coordinated, such observations have rarely been reported in an object-place paired-associate (OPPA) task in which animals must learn an object-in-place rule. In this study, we recorded single units and local field potentials simultaneously from the CA1 subfield of the hippocampus and PFC as rats learned that Object A, but not Object B, was rewarded in Place 1, but not in Place 2 (vice versa for Object B). Both hippocampus and PFC are required for normal performance in this task. PFC neurons fired in association with the regularity of the occurrence of a certain type of event independent of space, whereas neuronal firing in CA1 was spatially localized for representing a discrete place. Importantly, the differential firing patterns were observed in tandem with common learning-related changes in both regions. Specifically, once OPPA learning occurred and rats used an object-in-place strategy, (1) both CA1 and PFC neurons exhibited spatially more similar and temporally more synchronized firing patterns, (2) spiking activities in both regions were more phase locked to theta rhythms, and (3) CA1-medial PFC coherence in theta oscillation was maximal before entering a critical place for decision making. The results demonstrate differential as well as common neural dynamics between hippocampus and PFC in acquiring the OPPA task and strongly suggest that both regions form a unified functional network for processing an episodic event.
Neural correlates of object-in-place learning in hippocampus and prefrontal cortex
Kim, Jangjin; Delcasso, Sébastien; Lee, Inah
2011-01-01
Hippocampus and prefrontal cortex (PFC) process spatiotemporally discrete events while maintaining goal-directed task demands. Although some studies have reported that neural activities in the two regions are coordinated, such observations have rarely been reported in an object-place paired-associate (OPPA) task in which animals must learn an object-in-place rule. In this study, we recorded single units and local field potentials simultaneously from the CA1 subfield of the hippocampus and PFC as rats learned that object A, but not object B, was rewarded in place 1, but not in place 2 (vice versa for object B). Both hippocampus and PFC are required for normal performance in this task. PFC neurons fired in association with the regularity of the occurrence of a certain type of event independent of space, whereas neuronal firing in CA1 was spatially localized for representing a discrete place. Importantly, the differential firing patterns were observed in tandem with common learning-related changes in both regions. Specifically, once OPPA learning occurred and rats used an object-in-place strategy, (i) both CA1 and PFC neurons exhibited spatially more similar and temporally more synchronized firing patterns, (ii) spiking activities in both regions were more phase-locked to theta rhythms, (iii) CA1-mPFC coherence in theta oscillation was maximal before entering a critical place for decision making. The results demonstrate differential as well as common neural dynamics between hippocampus and PFC in acquiring the OPPA task and strongly suggest that both regions form a unified functional network for processing an episodic event. PMID:22114269
Exploiting range imagery: techniques and applications
NASA Astrophysics Data System (ADS)
Armbruster, Walter
2009-07-01
Practically no applications exist for which automatic processing of 2D intensity imagery can equal human visual perception. This is not the case for range imagery. The paper gives examples of 3D laser radar applications, for which automatic data processing can exceed human visual cognition capabilities and describes basic processing techniques for attaining these results. The examples are drawn from the fields of helicopter obstacle avoidance, object detection in surveillance applications, object recognition at high range, multi-object-tracking, and object re-identification in range image sequences. Processing times and recognition performances are summarized. The techniques used exploit the bijective continuity of the imaging process as well as its independence of object reflectivity, emissivity and illumination. This allows precise formulations of the probability distributions involved in figure-ground segmentation, feature-based object classification and model based object recognition. The probabilistic approach guarantees optimal solutions for single images and enables Bayesian learning in range image sequences. Finally, due to recent results in 3D-surface completion, no prior model libraries are required for recognizing and re-identifying objects of quite general object categories, opening the way to unsupervised learning and fully autonomous cognitive systems.
The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex
Leibo, Joel Z.; Liao, Qianli; Anselmi, Fabio; Poggio, Tomaso
2015-01-01
Is visual cortex made up of general-purpose information processing machinery, or does it consist of a collection of specialized modules? If prior knowledge, acquired from learning a set of objects, is only transferable to new objects that share properties with the old, then the recognition system's optimal organization must be one containing specialized modules for different object classes. Our analysis starts from a premise we call the invariance hypothesis: that the computational goal of the ventral stream is to compute an invariant-to-transformations and discriminative signature for recognition. The key condition enabling approximate transfer of invariance without sacrificing discriminability turns out to be that the learned and novel objects transform similarly. This implies that the optimal recognition system must contain subsystems trained only with data from similarly-transforming objects and suggests a novel interpretation of domain-specific regions like the fusiform face area (FFA). Furthermore, we can define an index of transformation-compatibility, computable from videos, that can be combined with information about the statistics of natural vision to yield predictions for which object categories ought to have domain-specific regions in agreement with the available data. The result is a unifying account linking the large literature on view-based recognition with the wealth of experimental evidence concerning domain-specific regions. PMID:26496457
Purpura, Giulia; Cioni, Giovanni; Tinelli, Francesca
2018-07-01
Object recognition is a long and complex adaptive process and its full maturation requires combination of many different sensory experiences as well as cognitive abilities to manipulate previous experiences in order to develop new percepts and subsequently to learn from the environment. It is well recognized that the transfer of visual and haptic information facilitates object recognition in adults, but less is known about development of this ability. In this study, we explored the developmental course of object recognition capacity in children using unimodal visual information, unimodal haptic information, and visuo-haptic information transfer in children from 4 years to 10 years and 11 months of age. Participants were tested through a clinical protocol, involving visual exploration of black-and-white photographs of common objects, haptic exploration of real objects, and visuo-haptic transfer of these two types of information. Results show an age-dependent development of object recognition abilities for visual, haptic, and visuo-haptic modalities. A significant effect of time on development of unimodal and crossmodal recognition skills was found. Moreover, our data suggest that multisensory processes for common object recognition are active at 4 years of age. They facilitate recognition of common objects, and, although not fully mature, are significant in adaptive behavior from the first years of age. The study of typical development of visuo-haptic processes in childhood is a starting point for future studies regarding object recognition in impaired populations.
ERIC Educational Resources Information Center
Savalli, Giorgia; Bashir, Zafar I.; Warburton, E. Clea
2015-01-01
Object-in-place (OiP) memory is critical for remembering the location in which an object was last encountered and depends conjointly on the medial prefrontal cortex, perirhinal cortex, and hippocampus. Here we examined the role of dopamine D1/D5 receptor neurotransmission within these brain regions for OiP memory. Bilateral…
Yuan, Tao; Zheng, Xinqi; Hu, Xuan; Zhou, Wei; Wang, Wei
2014-01-01
Objective and effective image quality assessment (IQA) is directly related to the application of optical remote sensing images (ORSIs). In this study, a new IQA method based on standardizing the target object recognition rate (ORR) is presented as a measure of quality. First, several quality-degradation treatments are applied to high-resolution ORSIs to model images obtained under different imaging conditions; then, a machine learning algorithm is used in recognition experiments on a chosen target object to obtain ORRs; finally, a comparison with commonly used IQA indicators is performed to reveal their applicability and limitations. The results showed that the ORR of the original ORSI was up to 81.95%, whereas the ORR ratios of the quality-degraded images to the original images were 65.52%, 64.58%, 71.21%, and 73.11%. These data reflect the advantages and disadvantages of different images for object identification and information extraction more accurately than conventional digital image assessment indexes. By assessing image quality from the perspective of application effectiveness, using a machine learning algorithm to extract regional gray-scale features of typical objects for analysis, and quantifying ORSI quality according to the resulting differences, this method provides a new approach for objective ORSI assessment.
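The assessment loop described above can be summarized as: degrade an image set, run a fixed, pre-trained recognizer over the target objects, and report the ratio of the degraded recognition rate to the original rate as the quality score. The sketch below shows this loop; the recognizer and the blur-plus-noise degradation are placeholders for whatever the application actually uses.

```python
# Recognition-rate-ratio quality index (sketch with placeholder recognizer/degradation).
import numpy as np
import cv2

def recognition_rate(recognizer, images, labels):
    preds = [recognizer(img) for img in images]
    return float(np.mean([p == t for p, t in zip(preds, labels)]))

def orr_quality_index(recognizer, images, labels, degrade):
    base = recognition_rate(recognizer, images, labels)
    degraded = recognition_rate(recognizer, [degrade(img) for img in images], labels)
    return degraded / base if base > 0 else 0.0

# example degradation: blur plus additive noise, standing in for poorer imaging conditions
def degrade(img):
    blurred = cv2.GaussianBlur(img, (5, 5), 1.5)
    noisy = blurred + np.random.normal(0, 10, blurred.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```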
Dimension Reduction With Extreme Learning Machine.
Kasun, Liyanaarachchi Lekamalage Chamara; Yang, Yan; Huang, Guang-Bin; Zhang, Zhengyou
2016-08-01
Data may often contain noise or irrelevant information, which negatively affects the generalization capability of machine learning algorithms. The objective of dimension reduction algorithms, such as principal component analysis (PCA), non-negative matrix factorization (NMF), random projection (RP), and auto-encoders (AE), is to reduce the noise or irrelevant information in the data. The features of PCA (eigenvectors) and of linear AE are not able to represent data as parts (e.g., the nose in a face image). On the other hand, NMF and non-linear AE are hampered by slow learning speed, and RP represents only a subspace of the original data. This paper introduces a dimension reduction framework which, to some extent, represents data as parts, has fast learning speed, and learns the between-class scatter subspace. To this end, this paper investigates a linear and non-linear dimension reduction framework referred to as extreme learning machine AE (ELM-AE) and sparse ELM-AE (SELM-AE). In contrast to tied-weight AE, the hidden neurons in ELM-AE and SELM-AE need not be tuned, and their parameters (e.g., input weights in additive neurons) are initialized using orthogonal and sparse random weights, respectively. Experimental results on the USPS handwritten digit recognition data set, the CIFAR-10 object recognition data set, and the NORB object recognition data set show the efficacy of linear and non-linear ELM-AE and SELM-AE in terms of discriminative capability, sparsity, training time, and normalized mean square error.
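The ELM-AE construction can be sketched compactly: random, orthogonalized input weights that are never tuned, a nonlinear hidden layer, and output weights obtained in closed form by ridge regression so that the hidden layer reconstructs the input; the learned output weights then serve as a dimension-reducing projection. Sizes, the ridge parameter, and the assumption that the input dimension exceeds the hidden size are illustrative, not the paper's settings.

```python
# Compact ELM-AE sketch: untuned random hidden layer + closed-form output weights.
import numpy as np

def elm_ae(X, n_hidden=64, ridge=1e-3, seed=0):
    """X: (n_samples, n_features), assumed n_features >= n_hidden.
    Returns a projection matrix of shape (n_features, n_hidden)."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    A = rng.normal(size=(n_features, n_hidden))
    Q, _ = np.linalg.qr(A)                        # orthogonal random input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ Q + b)                        # hidden activations (never trained)
    # closed-form ridge regression: beta maps hidden activations back to the input
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ X)
    return beta.T                                 # use X @ beta.T as the reduced code

# usage: P = elm_ae(X_train); X_reduced = X_train @ P
```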
Higher-order neural network software for distortion invariant object recognition
NASA Technical Reports Server (NTRS)
Reid, Max B.; Spirkovska, Lilly
1991-01-01
The state-of-the-art in pattern recognition for such applications as automatic target recognition and industrial robotic vision relies on digital image processing. We present a higher-order neural network model and software that perform the complete feature extraction-pattern classification paradigm required for automatic pattern recognition. Using a third-order neural network, we demonstrate complete, 100 percent accurate invariance to distortions of scale, position, and in-plane rotation. In a higher-order neural network, feature extraction is built into the network, and does not have to be learned. Only the relatively simple classification step must be learned. This is key to achieving very rapid training. The training set is much smaller than with standard neural network software because the higher-order network only has to be shown one view of each object to be learned, not every possible view. The software and graphical user interface run on any Sun workstation. Results of the use of the neural software in autonomous robotic vision systems are presented. Such a system could have extensive application in robotic manufacturing.
Boersma, Gretha J; Treesukosol, Yada; Cordner, Zachary A; Kastelein, Anneke; Choi, Pique; Moran, Timothy H; Tamashiro, Kellie L
2016-02-01
Relapse rates are high amongst cases of anorexia nervosa (AN), suggesting that some alterations induced by AN may remain after weight restoration. To study the consequences of AN without confounds of environmental variability, a rodent model of activity-based anorexia (ABA) can be employed. We hypothesized that exposure to ABA during adolescence may have long-term consequences in taste function, cognition, and anxiety-like behavior after weight restoration. To test this hypothesis, we exposed adolescent female rats to ABA (1.5 h food access, combined with voluntary running wheel access) and compared their behavior to that of control rats after weight restoration was achieved. The rats were tested for learning/memory, anxiety, food preference, and taste in a set of behavioral tests performed during the light period. Our data show that ABA exposure leads to reduced performance during the novel object recognition task, a test for contextual learning, without altering performance in the novel place recognition task or the Barnes maze, both tasks that test spatial learning. Furthermore, we do not observe alterations in unconditioned lick responses to either sucrose or quinine (described by humans as "sweet" and "bitter," respectively). Nor do we find alterations in anxiety-like behavior during an elevated plus maze or an open field test. Finally, preference for a diet high in fat is not altered. Overall, our data suggest that ABA exposure during adolescence impairs contextual learning in adulthood without altering spatial learning, taste, anxiety, or fat preference. © 2015 Wiley Periodicals, Inc.
Real-time optical multiple object recognition and tracking system and method
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Liu, Hua Kuang (Inventor)
1987-01-01
The invention relates to an apparatus and associated methods for the optical recognition and tracking of multiple objects in real time. Multiple point spatial filters are employed that pre-define the objects to be recognized at run-time. The system takes the basic technology of a Vander Lugt filter and adds a hololens. The technique replaces time-, space-, and cost-intensive digital techniques. In place of multiple objects, the system can also recognize multiple orientations of a single object. This latter capability has potential for space applications where space and weight are at a premium.
2010 Presidential Address: Learning Religion and Religiously Learning amid Global Cultural Flows
ERIC Educational Resources Information Center
Hess, Mary E.
2011-01-01
Emerging social media that build on digital technologies are reshaping how we interact with each other. Religious education and identity formation within these new cultural flows demands recognition of the shifts in authority, authenticity, and agency that are taking place, as well as the challenges posed by "context collapse." Digital…
Braun, Moria D; Kisko, Theresa M; Vecchia, Débora Dalla; Andreatini, Roberto; Schwarting, Rainer K W; Wöhr, Markus
2018-05-23
The CACNA1C gene is strongly implicated in the etiology of multiple major neuropsychiatric disorders, such as bipolar disorder, major depression, and schizophrenia, with cognitive deficits being a common feature. It is unclear, however, by which mechanisms CACNA1C variants advance the risk of developing neuropsychiatric disorders. This study set out to investigate cognitive functioning in a newly developed genetic Cacna1c rat model. Specifically, spatial and reversal learning, as well as object recognition memory were assessed in heterozygous Cacna1c +/- rats and compared to wildtype Cacna1c +/+ littermate controls in both sexes. Our results show that both Cacna1c +/+ and Cacna1c +/- animals were able to learn the rewarded arm configuration of a radial maze over the course of seven days. Both groups also showed reversal learning patterns indicative of intact abilities. In females, genotype differences were evident in the initial spatial learning phase, with Cacna1c +/- females showing hypo-activity and fewer mixed errors. In males, a difference was found during probe trials for both learning phases, with Cacna1c +/- rats displaying better distinction between previously baited and non-baited arms; and regarding cognitive flexibility in favor of the Cacna1c +/+ animals. All experimental groups proved to be sensitive to reward magnitude and fully able to distinguish between novel and familiar objects in the novel object recognition task. Taken together, these results indicate that Cacna1c haploinsufficiency has a minor, but positive impact on (spatial) memory functions in rats. Copyright © 2018 Elsevier Inc. All rights reserved.
Recognition-induced forgetting is not due to category-based set size.
Maxcey, Ashleigh M
2016-01-01
What are the consequences of accessing a visual long-term memory representation? Previous work has shown that accessing a long-term memory representation via retrieval improves memory for the targeted item and hurts memory for related items, a phenomenon called retrieval-induced forgetting. Recently we found a similar forgetting phenomenon with recognition of visual objects. Recognition-induced forgetting occurs when practice recognizing an object during a two-alternative forced-choice task, from a group of objects learned at the same time, leads to worse memory for objects from that group that were not practiced. An alternative explanation of this effect is that category-based set size is inducing forgetting, not recognition practice as claimed by some researchers. This alternative explanation is possible because during recognition practice subjects make old-new judgments in a two-alternative forced-choice task, and are thus exposed to more objects from practiced categories, potentially inducing forgetting due to set-size. Herein I pitted the category-based set size hypothesis against the recognition-induced forgetting hypothesis. To this end, I parametrically manipulated the amount of practice objects received in the recognition-induced forgetting paradigm. If forgetting is due to category-based set size, then the magnitude of forgetting of related objects will increase as the number of practice trials increases. If forgetting is recognition induced, the set size of exemplars from any given category should not be predictive of memory for practiced objects. Consistent with this latter hypothesis, additional practice systematically improved memory for practiced objects, but did not systematically affect forgetting of related objects. These results firmly establish that recognition practice induces forgetting of related memories. Future directions and important real-world applications of using recognition to access our visual memories of previously encountered objects are discussed.
Spatial Object Recognition Enables Endogenous LTD that Curtails LTP in the Mouse Hippocampus
Goh, Jinzhong Jeremy
2013-01-01
Although synaptic plasticity is believed to comprise the cellular substrate for learning and memory, limited direct evidence exists that hippocampus-dependent learning actually triggers synaptic plasticity. It is likely, however, that long-term potentiation (LTP) works in concert with its counterpart, long-term depression (LTD) in the creation of spatial memory. It has been reported in rats that weak synaptic plasticity is facilitated into persistent plasticity if afferent stimulation is coupled with a novel spatial learning event. It is not known if this phenomenon also occurs in other species. We recorded from the hippocampal CA1 of freely behaving mice and observed that novel spatial learning triggers endogenous LTD. Specifically, we observed that LTD is enabled when test-pulse afferent stimulation is given during the learning of object constellations or during a spatial object recognition task. Intriguingly, LTP is significantly impaired by the same tasks, suggesting that LTD is the main cellular substrate for this type of learning. These data indicate that learning-facilitated plasticity is not exclusive to rats and that spatial learning leads to endogenous LTD in the hippocampus, suggesting an important role for this type of synaptic plasticity in the creation of hippocampus-dependent memory. PMID:22510536
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly; Reid, Max B.
1993-01-01
A higher-order neural network (HONN) can be designed to be invariant to changes in scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Consequently, fewer training passes and a smaller training set are required to learn to distinguish between objects. The size of the input field is limited, however, because of the memory required for the large number of interconnections in a fully connected HONN. By coarse coding the input image, the input field size can be increased to allow the larger input scenes required for practical object recognition problems. We describe a coarse coding technique and present simulation results illustrating its usefulness and its limitations. Our simulations show that a third-order neural network can be trained to distinguish between two objects in a 4096 x 4096 pixel input field independent of transformations in translation, in-plane rotation, and scale in less than ten passes through the training set. Furthermore, we empirically determine the limits of the coarse coding technique in the object recognition domain.
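The coarse-coding idea can be illustrated as follows: a large binary input field is re-expressed on several low-resolution grids at staggered offsets, so each "on" pixel activates one coarse cell per grid and its position is recoverable from the joint pattern across grids, while the number of units (and thus higher-order interconnections) stays small. The grid size and offsets in the sketch below are illustrative assumptions, not the values used in the simulations.

```python
# Coarse coding of a large binary input field onto staggered low-resolution grids.
import numpy as np

def coarse_code(on_pixels, field_size=4096, cell=16, n_grids=4):
    """on_pixels: iterable of (row, col). Returns one coarse binary map per grid."""
    offsets = [int(g * cell / n_grids) for g in range(n_grids)]   # staggered grids
    n_cells = field_size // cell + 1
    maps = np.zeros((n_grids, n_cells, n_cells), dtype=np.uint8)
    for r, c in on_pixels:
        for g, off in enumerate(offsets):
            maps[g, (r + off) // cell, (c + off) // cell] = 1
    # a pixel's position is recoverable (to within noise) from the joint pattern
    # of the coarse cells it activates across the offset grids
    return maps
```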
ERIC Educational Resources Information Center
Baxter, Mark G.; Browning, Philip G. F.; Mitchell, Anna S.
2008-01-01
Surgical disconnection of the frontal cortex and inferotemporal cortex severely impairs many aspects of visual learning and memory, including learning of new object-in-place scene memory problems, a monkey model of episodic memory. As part of a study of specialization within prefrontal cortex in visual learning and memory, we tested monkeys with…
Running Improves Pattern Separation during Novel Object Recognition.
Bolz, Leoni; Heigele, Stefanie; Bischofberger, Josef
2015-10-09
Running increases adult neurogenesis and improves pattern separation in various memory tasks including context fear conditioning or touch-screen-based spatial learning. However, it is unknown whether pattern separation is improved in spontaneous behavior, not emotionally biased by positive or negative reinforcement. Here we investigated the effect of voluntary running on pattern separation during novel object recognition in mice using relatively similar or substantially different objects. We show that running increases hippocampal neurogenesis but does not affect object recognition memory with a 1.5 h delay after the sample phase. By contrast, at a 24 h delay, running significantly improves recognition memory for similar objects, whereas highly different objects can be distinguished by both running and sedentary mice. These data show that physical exercise improves pattern separation, independent of negative or positive reinforcement. In sedentary mice there is a pronounced temporal gradient for remembering object details. In running mice, however, increased neurogenesis improves hippocampal coding and temporally preserves distinction of novel objects from familiar ones.
From E-Learning Space to E-Learning Place
ERIC Educational Resources Information Center
Wahlstedt, Ari; Pekkola, Samuli; Niemela, Marketta
2008-01-01
In this paper, it is argued that e-learning environments are currently more like "buildings", i.e., learning spaces, rather than "schools", i.e., places for learning. The concepts originated from architecture and urban design, where they are used both to distinguish static spaces from inhabited places, and more importantly, as design objectives.…
Multi-Touch Tabletop System Using Infrared Image Recognition for User Position Identification.
Suto, Shota; Watanabe, Toshiya; Shibusawa, Susumu; Kamada, Masaru
2018-05-14
A tabletop system can facilitate multi-user collaboration in a variety of settings, including small meetings, group work, and education and training exercises. The ability to identify the users touching the table and their positions can promote collaborative work among participants, so methods have been studied that involve attaching sensors to the table, chairs, or to the users themselves. An effective method of recognizing user actions without placing a burden on the user would be some type of visual process, so the development of a method that processes multi-touch gestures by visual means is desired. This paper describes the development of a multi-touch tabletop system using infrared image recognition for user position identification and presents the results of touch-gesture recognition experiments and a system-usability evaluation. Using an inexpensive FTIR touch panel and infrared light, this system picks up the touch areas and the shadow area of the user's hand by an infrared camera to establish an association between the hand and table touch points and estimate the position of the user touching the table. The multi-touch gestures prepared for this system include an operation to change the direction of an object to face the user and a copy operation in which two users generate duplicates of an object. The system-usability evaluation revealed that prior learning was easy and that system operations could be easily performed.
Individual recognition and learning of queen odors by worker honeybees
Breed, Michael D.
1981-01-01
A honeybee queen is usually attacked if she is placed among the workers of a colony other than her own. This rejection occurs even if environmental sources of odor, such as food and water, and the genetic origin of the workers are kept constant under laboratory conditions. The genetic similarity of queens determines how similar their recognition characteristics are; inbred sister queens were accepted in 35% of exchanges, outbred sister queens in 12%, and nonsister queens in 0%. Carbon dioxide narcosis results in worker honeybees accepting nonnestmate queens. A learning curve is presented, showing the time after narcosis required by workers to learn to recognize a new queen. In contrast, worker transfers result in only a small percentage of the workers being rejected. The difference between queens and workers may arise because queen and worker recognition cues have different sources. PMID:16593008
Automatic anatomy recognition on CT images with pathology
NASA Astrophysics Data System (ADS)
Huang, Lidong; Udupa, Jayaram K.; Tong, Yubing; Odhner, Dewey; Torigian, Drew A.
2016-03-01
Body-wide anatomy recognition on CT images with pathology becomes crucial for quantifying body-wide disease burden. This, however, is a challenging problem because different diseases produce different abnormalities in object shape and intensity patterns. We previously developed an automatic anatomy recognition (AAR) system [1] whose applicability was demonstrated on near-normal diagnostic CT images of 35 organs across different body regions. The aim of this paper is to investigate strategies for adapting the previous AAR system to diagnostic CT images of patients with various pathologies as a first step toward automated body-wide disease quantification. The AAR approach consists of three main steps - model building, object recognition, and object delineation. In this paper, within the broader AAR framework, we describe a new strategy for object recognition to handle abnormal images. In the model building stage, an optimal threshold interval is learned from near-normal training images for each object. This threshold is optimally tuned to the pathological manifestation of the object in the test image. Recognition is performed following a hierarchical representation of the objects. Experimental results for the abdominal body region, based on 50 near-normal images used for model building and 20 abnormal images used for object recognition, show that the new strategy achieves object localization accuracy within 2 voxels for the liver and spleen and within 3 voxels for the kidney.
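As a rough illustration of the learned-threshold idea described above, the following Python sketch (not the AAR system's actual code) derives a per-object intensity interval from training delineations; the percentile choices and the synthetic data are placeholder assumptions.

    import numpy as np

    # Loose sketch of the "learned threshold interval" idea described above: pool the intensities
    # inside the training (near-normal) object delineations and keep a central percentile band;
    # at test time the interval would then be tuned to the patient image.  The percentiles and
    # the synthetic data are placeholders, not the AAR system's actual values.
    def learn_threshold_interval(train_images, train_masks, lo_pct=2.5, hi_pct=97.5):
        values = np.concatenate([img[mask > 0] for img, mask in zip(train_images, train_masks)])
        return np.percentile(values, [lo_pct, hi_pct])

    rng = np.random.default_rng(0)
    imgs = [rng.normal(60.0, 15.0, (64, 64)) for _ in range(5)]      # fake CT slices
    masks = [(rng.random((64, 64)) > 0.7).astype(int) for _ in range(5)]
    lo, hi = learn_threshold_interval(imgs, masks)
    print(lo, hi)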
Foley, Nicholas C.; Grossberg, Stephen; Mingolla, Ennio
2015-01-01
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how “attentional shrouds” are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, through learning or momentary changes in volition, by the basal ganglia. A new explanation of crowding shows how the cortical magnification factor, among other variables, can cause multiple object surfaces to share a single surface-shroud resonance, thereby preventing recognition of the individual objects. PMID:22425615
What's she doing in the kitchen? Context helps when actions are hard to recognize.
Wurm, Moritz F; Schubotz, Ricarda I
2017-04-01
Specific spatial environments are often indicative of where certain actions may take place: In kitchens we prepare food, and in bathrooms we engage in personal hygiene, but not vice versa. In action recognition, contextual cues may constrain an observer's expectations toward actions that are more strongly associated with a particular context than others. Such cues should become particularly helpful when the action itself is difficult to recognize. However, to date only easily identifiable actions have been investigated, and the observed effects of context on recognition were interfering rather than facilitatory. To test whether context also facilitates action recognition, we measured recognition performance for hard-to-identify actions that took place in compatible, incompatible, and neutral contextual settings. Action information was degraded by pixelizing the area of the object manipulation while the room in which the action took place remained fully visible. We found significantly higher accuracy for actions that took place in compatible compared to incompatible and neutral settings, indicating facilitation. Additionally, action recognition was slower in incompatible settings than in compatible and neutral settings, indicating interference. Together, our findings demonstrate that contextual information is effectively exploited during action observation, in particular when visual information about the action itself is sparse. Differential effects on speed and accuracy suggest that contexts modulate action recognition at different levels of processing. Our findings emphasize the importance of contextual information in comprehensive, ecologically valid models of action recognition.
Paris, Jason J; Frye, Cheryl A
2008-01-01
Ovarian hormone elevations are associated with enhanced learning/memory. During behavioral estrus or pregnancy, progestins, such as progesterone (P4) and its metabolite 5α-pregnan-3α-ol-20-one (3α,5α-THP), are elevated due, in part, to corpora luteal and placental secretion. During ‘pseudopregnancy’, the induction of corpora luteal functioning results in a hormonal milieu analogous to pregnancy, which ceases after about 12 days, due to the lack of placental formation. Multiparity is also associated with enhanced learning/memory, perhaps due to prior steroid exposure during pregnancy. Given evidence that progestins and/or parity may influence cognition, we investigated how natural alterations in the progestin milieu influence cognitive performance. In Experiment 1, virgin rats (nulliparous) or rats with two prior pregnancies (multiparous) were assessed on the object placement and recognition tasks, when in high-estrogen/P4 (behavioral estrus) or low-estrogen/P4 (diestrus) phases of the estrous cycle. In Experiment 2, primiparous or multiparous rats were tested in the object placement and recognition tasks when not pregnant, pseudopregnant, or pregnant (between gestational days (GDs) 6 and 12). In Experiment 3, pregnant primiparous or multiparous rats were assessed daily in the object placement or recognition tasks. Females in natural states associated with higher endogenous progestins (behavioral estrus, pregnancy, multiparity) outperformed rats in low progestin states (diestrus, non-pregnancy, nulliparity) on the object placement and recognition tasks. In earlier pregnancy, multiparous, compared with primiparous, rats had a lower corticosterone, but higher estrogen levels, concomitant with better object placement performance. From GD 13 until post partum, primiparous rats had higher 3α,5α-THP levels and improved object placement performance compared with multiparous rats. PMID:18390689
ERIC Educational Resources Information Center
Singh, Madhu
2015-01-01
This book deals with the relevance of recognition, validation and accreditation (RVA) of non-formal and informal learning in education and training, the workplace and society. It examines RVA's strategic policy objectives and best practice features as well as the challenges faced and ways forward as reported by Member States. Special attention is…
Park, Seong-Wook; Park, Junyoung; Bong, Kyeongryeol; Shin, Dongjoo; Lee, Jinmook; Choi, Sungpill; Yoo, Hoi-Jun
2015-12-01
Deep learning algorithms are widely used for pattern recognition applications such as text recognition, object recognition, and action recognition because of their best-in-class accuracy compared to hand-crafted and shallow-learning-based algorithms. The long learning time caused by their complex structure, however, has so far limited their use to high-cost servers or many-core GPU platforms. On the other hand, demand for customized pattern recognition within personal devices will grow gradually as more deep learning applications are developed. This paper presents an SoC implementation that enables deep learning applications to run on low-cost platforms such as mobile or portable devices. Unlike conventional works that adopt a massively parallel architecture, this work adopts a task-flexible architecture and exploits multiple forms of parallelism to cover the complex functions of the convolutional deep belief network, a popular deep learning/inference algorithm. In this paper, we implement the most energy-efficient deep learning and inference processor for wearable systems. The implemented 2.5 mm × 4.0 mm deep learning/inference processor is fabricated using 65 nm 8-metal CMOS technology for a battery-powered platform with real-time deep inference and deep learning operation. It consumes 185 mW average power and 213.1 mW peak power at a 200 MHz operating frequency and 1.2 V supply voltage. It achieves 411.3 GOPS peak performance and 1.93 TOPS/W energy efficiency, which is 2.07× higher than the state of the art.
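The reported energy-efficiency figure can be checked directly from the quoted peak performance and peak power; a quick calculation using only the numbers in the abstract above:

    # Energy-efficiency check using only the figures quoted in the abstract above.
    peak_performance_gops = 411.3        # GOPS (giga-operations per second)
    peak_power_w = 213.1e-3              # 213.1 mW peak power at 200 MHz, 1.2 V

    efficiency_tops_per_w = (peak_performance_gops / 1000.0) / peak_power_w
    print(f"{efficiency_tops_per_w:.2f} TOPS/W")   # ~1.93 TOPS/W, matching the reported number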
CNN based approach for activity recognition using a wrist-worn accelerometer.
Panwar, Madhuri; Dyuthi, S Ram; Chandra Prakash, K; Biswas, Dwaipayan; Acharyya, Amit; Maharatna, Koushik; Gautam, Arvind; Naik, Ganesh R
2017-07-01
In recent years, significant advancements have taken place in human activity recognition using various machine learning approaches. However, conventional methods have been dominated by feature engineering, which involves the difficult process of optimal feature selection. This problem has been mitigated by using a novel methodology based on a deep learning framework, which automatically extracts useful features and reduces the computational cost. As a proof of concept, we have attempted to design a generalized model for recognition of three fundamental movements of the human forearm performed in daily life, where data is collected from four different subjects using a single wrist-worn accelerometer sensor. The proposed model is validated under different pre-processing and noisy data conditions and evaluated using three possible methods. The results show that our proposed methodology achieves an average recognition rate of 99.8%, as opposed to conventional methods based on K-means clustering, linear discriminant analysis and support vector machines.
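A minimal sketch of the kind of 1D convolutional network such a wrist-accelerometer pipeline might use (not the authors' architecture); the window length of 128 samples, three input axes, and three output classes are illustrative assumptions.

    import torch
    import torch.nn as nn

    # Minimal 1D CNN sketch for tri-axial accelerometer windows (not the authors' exact model):
    # window length 128, 3 input channels and 3 output classes are assumptions for illustration.
    class AccelCNN(nn.Module):
        def __init__(self, n_classes: int = 3, window: int = 128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            )
            self.classifier = nn.Linear(32 * (window // 4), n_classes)

        def forward(self, x):                     # x: (batch, 3, window)
            h = self.features(x)
            return self.classifier(h.flatten(1))

    model = AccelCNN()
    logits = model(torch.randn(8, 3, 128))        # dummy batch of 8 windows
    print(logits.shape)                           # torch.Size([8, 3])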
ERIC Educational Resources Information Center
Atencio, Matthew; Tan, Yuen Sze Michelle; Ho, Susanna; Ching, Chew Ting
2015-01-01
This paper details the potential contribution of outdoor education (OE) in Singaporean education given the recent raft of national curricular reforms aimed at fostering holistic and exploratory learning opportunities. In this context, we contend that increasing recognition of the value of OE, both internationally and locally, heralds specific…
Object recognition with hierarchical discriminant saliency networks.
Han, Sunhyoung; Vasconcelos, Nuno
2014-01-01
The benefits of integrating attention and object recognition are investigated. While attention is frequently modeled as a pre-processor for recognition, we investigate the hypothesis that attention is an intrinsic component of recognition and vice-versa. This hypothesis is tested with a recognition model, the hierarchical discriminant saliency network (HDSN), whose layers are top-down saliency detectors, tuned for a visual class according to the principles of discriminant saliency. As a model of neural computation, the HDSN has two possible implementations. In a biologically plausible implementation, all layers comply with the standard neurophysiological model of visual cortex, with sub-layers of simple and complex units that implement a combination of filtering, divisive normalization, pooling, and non-linearities. In a convolutional neural network implementation, all layers are convolutional and implement a combination of filtering, rectification, and pooling. The rectification is performed with a parametric extension of the now popular rectified linear units (ReLUs), whose parameters can be tuned for the detection of target object classes. This enables a number of functional enhancements over neural network models that lack a connection to saliency, including optimal feature denoising mechanisms for recognition, modulation of saliency responses by the discriminant power of the underlying features, and the ability to detect both feature presence and absence. In either implementation, each layer has a precise statistical interpretation, and all parameters are tuned by statistical learning. Each saliency detection layer learns more discriminant saliency templates than its predecessors and higher layers have larger pooling fields. This enables the HDSN to simultaneously achieve high selectivity to target object classes and invariance. The performance of the network in saliency and object recognition tasks is compared to those of models from the biological and computer vision literatures. This demonstrates benefits for all the functional enhancements of the HDSN, the class tuning inherent to discriminant saliency, and saliency layers based on templates of increasing target selectivity and invariance. Altogether, these experiments suggest that there are non-trivial benefits in integrating attention and recognition.
Rapid effects of estrogens on short-term memory: Possible mechanisms.
Paletta, Pietro; Sheppard, Paul A S; Matta, Richard; Ervin, Kelsy S J; Choleris, Elena
2018-06-01
Estrogens affect learning and memory through rapid and delayed mechanisms. Here we review studies on rapid effects on short-term memory. Estradiol rapidly improves social and object recognition memory, spatial memory, and social learning when administered systemically. The dorsal hippocampus mediates estrogen rapid facilitation of object, social and spatial short-term memory. The medial amygdala mediates rapid facilitation of social recognition. The three estrogen receptors, α (ERα), β (ERβ) and the G-protein coupled estrogen receptor (GPER) appear to play different roles depending on the task and brain region. Both ERα and GPER agonists rapidly facilitate short-term social and object recognition and spatial memory when administered systemically or into the dorsal hippocampus and facilitate social recognition in the medial amygdala. Conversely, only GPER can facilitate social learning after systemic treatment, and an ERβ agonist only rapidly improved short-term spatial memory when given systemically or into the hippocampus, but also facilitates social recognition in the medial amygdala. Investigations into the mechanisms behind estrogens' rapid effects on short-term memory showed an involvement of the extracellular signal-regulated kinase (ERK) and the phosphoinositide 3-kinase (PI3K) kinase pathways. Recent evidence also showed that estrogens interact with the neuropeptide oxytocin in rapidly facilitating social recognition. Estrogens can increase the production and/or release of oxytocin and other neurotransmitters, such as dopamine and acetylcholine. Therefore, it is possible that estrogens' rapid effects on short-term memory may occur through the regulation of various neurotransmitters, although more research is needed on these interactions as well as the mechanisms of estrogens' actions on short-term memory. Copyright © 2018 Elsevier Inc. All rights reserved.
Schiapparelli, L; Simón, A M; Del Río, J; Frechilla, D
2006-06-01
It has been suggested that antagonists at serotonin 5-HT1A receptors may exert a procognitive effect by facilitating glutamatergic neurotransmission. Here we further explored this issue by looking for the ability of a 5-HT1A antagonist to prevent the learning deficit induced by AMPA receptor blockade in two behavioural procedures in rats, and for concomitant molecular changes presumably involved in memory formation in the hippocampus. Pretraining administration of the competitive AMPA receptor antagonist, NBQX, produced a dose-related retention impairment in a passive avoidance task 24h later, and also impaired retention in a novel object recognition test when an intertrial interval of 3h was selected. Pretreatment with the selective 5-HT1A receptor antagonist, WAY-100635, prevented the learning deficit induced by NBQX in the two behavioural procedures. In biochemical studies performed on rat hippocampus after the retention tests, we found that learning increased the membrane levels of AMPA receptor GluR1 and GluR2/3 subunits, as well as the phosphorylated forms of GluR1, effects that were abolished by NBQX administration before the training session. Pretreatment with WAY-100635 counteracted the NBQX effects and restored the initial learning-specific increase in Ca2+/calmodulin-dependent protein kinase II (CaMKII) function and the later increase in GluR2/3 and phosphorylated GluR1 surface expression. Moreover, administration of WAY-100635 before object recognition training improved recognition memory 24h later and potentiated the learning-associated increase in AMPA receptor subunits. The results support the proposed utility of 5-HT1A antagonists in the treatment of cognitive disorders.
Label consistent K-SVD: learning a discriminative dictionary for recognition.
Jiang, Zhuolin; Lin, Zhe; Davis, Larry S
2013-11-01
A label consistent K-SVD (LC-KSVD) algorithm to learn a discriminative dictionary for sparse coding is presented. In addition to using class labels of training data, we also associate label information with each dictionary item (columns of the dictionary matrix) to enforce discriminability in sparse codes during the dictionary learning process. More specifically, we introduce a new label consistency constraint called "discriminative sparse-code error" and combine it with the reconstruction error and the classification error to form a unified objective function. The optimal solution is efficiently obtained using the K-SVD algorithm. Our algorithm learns a single overcomplete dictionary and an optimal linear classifier jointly. An incremental dictionary learning algorithm is also presented for situations with limited memory resources. The method yields dictionaries in which feature points with the same class labels have similar sparse codes. Experimental results demonstrate that our algorithm outperforms many recently proposed sparse-coding techniques for face, action, scene, and object category recognition under the same learning conditions.
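One common way to optimize such a unified objective is to stack the reconstruction, label-consistency, and classification targets and run a single dictionary-learning solver. The Python sketch below illustrates this stacking idea with scikit-learn's DictionaryLearning standing in for the K-SVD solver of the paper; the weights, matrix sizes, and data are placeholders, and the column renormalization performed in the paper is omitted.

    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    # Sketch of the "stacking" trick for an LC-KSVD-style unified objective: the reconstruction
    # targets (Y), the discriminative sparse-code targets (Q) and the class labels (H) are
    # stacked so that a single dictionary-learning solver handles all three terms.
    # DictionaryLearning stands in for the paper's K-SVD solver; alpha/beta, sizes and data are
    # placeholders, and the column renormalization done in the paper is omitted here.
    rng = np.random.default_rng(0)
    Y = rng.standard_normal((64, 200))            # signals: 64-dim features, 200 samples
    Q = rng.integers(0, 2, (100, 200)) * 1.0      # ideal "discriminative sparse code" targets
    H = np.eye(5)[rng.integers(0, 5, 200)].T      # one-hot labels for 5 classes
    alpha, beta = 4.0, 2.0

    Y_stacked = np.vstack([Y, np.sqrt(alpha) * Q, np.sqrt(beta) * H])     # (64+100+5, 200)
    solver = DictionaryLearning(n_components=100, transform_algorithm="omp",
                                transform_n_nonzero_coefs=10, max_iter=20, random_state=0)
    X = solver.fit_transform(Y_stacked.T).T       # sparse codes, (100, 200)
    D_stacked = solver.components_.T              # (169, 100)
    D, A, W = np.split(D_stacked, [64, 164])      # dictionary, consistency map, linear classifier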
Serrano-Gotarredona, Rafael; Oster, Matthias; Lichtsteiner, Patrick; Linares-Barranco, Alejandro; Paz-Vicente, Rafael; Gomez-Rodriguez, Francisco; Camunas-Mesa, Luis; Berner, Raphael; Rivas-Perez, Manuel; Delbruck, Tobi; Liu, Shih-Chii; Douglas, Rodney; Hafliger, Philipp; Jimenez-Moreno, Gabriel; Civit Ballcels, Anton; Serrano-Gotarredona, Teresa; Acosta-Jimenez, Antonio J; Linares-Barranco, Bernabé
2009-09-01
This paper describes CAVIAR, a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system. CAVIAR uses the asynchronous address-event representation (AER) communication framework and was developed in the context of a European Union-funded project. It has four custom mixed-signal AER chips, five custom digital AER interface components, 45k neurons (spiking cells), up to 5M synapses, performs 12G synaptic operations per second, and achieves millisecond object recognition and tracking latencies.
NASA Astrophysics Data System (ADS)
Syryamkim, V. I.; Kuznetsov, D. N.; Kuznetsova, A. S.
2018-05-01
Image recognition is an information process implemented by an information converter (an intelligent information channel, or recognition system) with an input and an output. The input of the system is fed with information about the characteristics of the objects being presented, and the output reports which classes (generalized images) the recognized objects are assigned to. When creating and operating an automated pattern recognition system, a number of problems must be solved; different authors formulate these tasks, and even the set of tasks itself, differently, since they depend to a certain extent on the specific mathematical model underlying a given recognition system. These tasks include formalizing the domain, forming a training sample, training the recognition system, and reducing the dimensionality of the feature space.
A Theory of How Columns in the Neocortex Enable Learning the Structure of the World
Hawkins, Jeff; Ahmad, Subutai; Cui, Yuwei
2017-01-01
Neocortical regions are organized into columns and layers. Connections between layers run mostly perpendicular to the surface, suggesting a columnar functional organization. Some layers have long-range excitatory lateral connections suggesting interactions between columns. Similar patterns of connectivity exist in all regions but their exact role remains a mystery. In this paper, we propose a network model composed of columns and layers that performs robust object learning and recognition. Each column integrates its changing input over time to learn complete predictive models of observed objects. Excitatory lateral connections across columns allow the network to more rapidly infer objects based on the partial knowledge of adjacent columns. Because columns integrate input over time and space, the network learns models of complex objects that extend well beyond the receptive field of individual cells. Our network model introduces a new feature to cortical columns. We propose that a representation of location relative to the object being sensed is calculated within the sub-granular layers of each column. The location signal is provided as an input to the network, where it is combined with sensory data. Our model contains two layers and one or more columns. Simulations show that, using Hebbian-like learning rules, small single-column networks can learn to recognize hundreds of objects, with each object containing tens of features. Multi-column networks recognize objects with significantly fewer movements of the sensory receptors. Given the ubiquity of columnar and laminar connectivity patterns throughout the neocortex, we propose that columns and regions have more powerful recognition and modeling capabilities than previously assumed. PMID:29118696
Valt, Christian; Klein, Christoph; Boehm, Stephan G
2015-08-01
Repetition priming is a prominent example of non-declarative memory, and it increases the accuracy and speed of responses to repeatedly processed stimuli. Major long-held memory theories posit that repetition priming results from facilitation within perceptual and conceptual networks for stimulus recognition and categorization. Stimuli can also be bound to particular responses, and it has recently been suggested that this rapid response learning, not network facilitation, provides a sound theory of priming of object recognition. Here, we addressed the relevance of network facilitation and rapid response learning for priming of person recognition with a view to advancing general theories of priming. In four experiments, participants performed conceptual decisions like occupation or nationality judgments for famous faces. The magnitude of rapid response learning varied across experiments, and rapid response learning co-occurred and interacted with facilitation in perceptual and conceptual networks. These findings indicate that rapid response learning and facilitation in perceptual and conceptual networks are complementary rather than competing theories of priming. Thus, future memory theories need to incorporate both rapid response learning and network facilitation as individual facets of priming. © 2014 The British Psychological Society.
ERIC Educational Resources Information Center
Davis, Tyler; Love, Bradley C.; Preston, Alison R.
2012-01-01
Category learning is a complex phenomenon that engages multiple cognitive processes, many of which occur simultaneously and unfold dynamically over time. For example, as people encounter objects in the world, they simultaneously engage processes to determine their fit with current knowledge structures, gather new information about the objects, and…
Gong, Xianmin; Xiao, Hongrui; Wang, Dahua
2016-11-01
False recognition results from the interplay of multiple cognitive processes, including verbatim memory, gist memory, phantom recollection, and response bias. In the current study, we modified the simplified Conjoint Recognition (CR) paradigm to investigate the way in which the valence of emotional stimuli affects the cognitive process and behavioral outcome of false recognition. In Study 1, we examined the applicability of the modification to the simplified CR paradigm and model. Twenty-six undergraduate students (13 females, aged 21.00 ± 2.30 years) learned and recognized both the large and small categories of photo objects. The applicability of the paradigm and model was confirmed by a fair goodness-of-fit of the model to the observational data and by their competence in detecting the memory differences between the large- and small-category conditions. In Study 2, we recruited another sample of 29 undergraduate students (14 females, aged 22.60 ± 2.74 years) to learn and recognize the categories of photo objects that were emotionally provocative. The results showed that negative valence increased false recognition, particularly the rate of false "remember" responses, by facilitating phantom recollection; positive valence did not significantly influence false recognition, though it enhanced gist processing. Copyright © 2016 Elsevier B.V. All rights reserved.
Bonardi, Charlotte; Pardon, Marie-Christine; Armstrong, Paul
2016-10-15
Performance was examined on three variants of the spontaneous object recognition (SOR) task, in 5-month-old APPswe/PS1dE9 mice and wild-type littermate controls. A deficit was observed in an object-in-place (OIP) task, in which mice are preexposed to four different objects in specific locations, and then at test two of the objects swap locations (Experiment 2). Typically more exploration is seen of the objects which have switched location, which is taken as evidence of a retrieval-generated priming mechanism. However, no significant transgenic deficit was found in a relative recency (RR) task (Experiment 1), in which mice are exposed to two different objects in two separate sample phases, and then tested with both objects. Typically more exploration of the first-presented object is observed, which is taken as evidence of a self-generated priming mechanism. Nor was there any impairment in the simplest variant, the spontaneous object recognition (SOR) task, in which mice are preexposed to one object and then tested with the familiar and a novel object. This was true regardless of whether the sample-test interval was 5 min (Experiment 1) or 24 h (Experiments 1 and 2). It is argued that SOR performance depends on retrieval-generated priming as well as self-generated priming, and our preliminary evidence suggests that the retrieval-generated priming process is especially impaired in these young transgenic animals. Copyright © 2016 Elsevier B.V. All rights reserved.
The impact of privacy protection filters on gender recognition
NASA Astrophysics Data System (ADS)
Ruchaud, Natacha; Antipov, Grigory; Korshunov, Pavel; Dugelay, Jean-Luc; Ebrahimi, Touradj; Berrani, Sid-Ahmed
2015-09-01
Deep learning-based algorithms have become increasingly efficient in recognition and detection tasks, especially when they are trained on large-scale datasets. Such recent success has led to speculation that deep learning methods are comparable to or even outperform the human visual system in its ability to detect and recognize objects and their features. In this paper, we focus on the specific task of gender recognition in images when they have been processed by privacy protection filters (e.g., blurring, masking, and pixelization) applied at different strengths. Assuming a privacy protection scenario, we compare the performance of state-of-the-art deep learning algorithms with a subjective evaluation obtained via crowdsourcing to understand how privacy protection filters affect both machine and human vision.
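For reference, the three privacy filters named above are easy to reproduce; a minimal Python sketch using Pillow, with the filter strengths, mask box, and synthetic input image as placeholder assumptions rather than the settings used in the paper.

    from PIL import Image, ImageFilter

    # Minimal sketch of the three privacy filters named above (blurring, pixelization, masking),
    # each controlled by a "strength"; the synthetic input image, strengths and mask box are
    # placeholders, not the settings used in the paper.
    def blur(img, strength=8):
        return img.filter(ImageFilter.GaussianBlur(radius=strength))

    def pixelize(img, strength=16):
        small = img.resize((max(1, img.width // strength), max(1, img.height // strength)),
                           Image.NEAREST)
        return small.resize(img.size, Image.NEAREST)

    def mask(img, box=(60, 40, 200, 220)):
        out = img.copy()
        out.paste((0, 0, 0), box)                 # opaque box over the region to protect
        return out

    face = Image.effect_noise((256, 256), 64).convert("RGB")   # stand-in for a face crop
    protected = [blur(face), pixelize(face), mask(face)]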
Tran, Dominic M D; Westbrook, R Frederick
2018-05-31
Exposure to a high-fat high-sugar (HFHS) diet rapidly impairs novel-place- but not novel-object-recognition memory in rats (Tran & Westbrook, 2015, 2017). Three experiments sought to investigate the generality of diet-induced cognitive deficits by examining whether there are conditions under which object-recognition memory is impaired. Experiments 1 and 3 tested the strength of short- and long-term object-memory trace, respectively, by varying the interval of time between object familiarization and subsequent novel object test. Experiment 2 tested the effect of increasing working memory load on object-recognition memory by interleaving additional object exposures between familiarization and test in an n-back style task. Experiments 1-3 failed to detect any differences in object recognition between HFHS and control rats. Experiment 4 controlled for object novelty by separately familiarizing both objects presented at test, which included one remote-familiar and one recent-familiar object. Under these conditions, when test objects differed in their relative recency, HFHS rats showed a weaker memory trace for the remote object compared to chow rats. This result suggests that the diet leaves intact recollection judgments, but impairs familiarity judgments. We speculate that the HFHS diet adversely affects "where" memories as well as the quality of "what" memories, and discuss these effects in relation to recollection and familiarity memory models, hippocampal-dependent functions, and episodic food memories. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Feedforward object-vision models only tolerate small image variations compared to human
Ghodrati, Masoud; Farzmahdi, Amirhossein; Rajaei, Karim; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi
2014-01-01
Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanisms have constantly been under intense investigation. Computational modeling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well in image categorization under more complex image variations. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that only under low-level image variations do the models perform similarly to humans in categorization tasks. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modeling. We show that this approach is not of significant help in solving the computational crux of object recognition (i.e., invariant object recognition) when the identity-preserving image variations become more complex. PMID:25100986
Object recognition based on Google's reverse image search and image similarity
NASA Astrophysics Data System (ADS)
Horváth, András.
2015-12-01
Image classification is one of the most challenging tasks in computer vision, and a general multiclass classifier could solve many different tasks in image processing. Classification is usually done by shallow learning for predefined objects, which is a difficult task and very different from human vision; human vision is based on continuous learning of object classes, and it takes years to learn a large taxonomy of objects that are neither disjunct nor independent. In this paper I present a system based on Google's image similarity algorithm and Google's image database, which can classify a large set of different objects in a human-like manner, identifying related classes and taxonomies.
Studying the Sky/Planets Can Drown You in Images: Machine Learning Solutions at JPL/Caltech
NASA Technical Reports Server (NTRS)
Fayyad, U. M.
1995-01-01
JPL is working to develop a domain-independent system capable of small-scale object recognition in large image databases for science analysis. Two applications discussed are the cataloging of three billion sky objects in the Sky Image Cataloging and Analysis Tool (SKICAT) and the detection of possibly one million small volcanoes visible in the Magellan synthetic aperture radar images of Venus (JPL Adaptive Recognition Tool, JARTool).
Single prolonged stress impairs social and object novelty recognition in rats.
Eagle, Andrew L; Fitzpatrick, Chris J; Perrine, Shane A
2013-11-01
Posttraumatic stress disorder (PTSD) results from exposure to a traumatic event and manifests as re-experiencing, arousal, avoidance, and negative cognition/mood symptoms. Avoidant symptoms, as well as the newly defined negative cognitions/mood, are a serious complication leading to diminished interest in once important or positive activities, such as social interaction; however, the basis of these symptoms remains poorly understood. PTSD patients also exhibit impaired object and social recognition, which may underlie the avoidance and symptoms of negative cognition, such as social estrangement or diminished interest in activities. Previous studies have demonstrated that single prolonged stress (SPS) models PTSD phenotypes, including impairments in learning and memory. Therefore, it was hypothesized that SPS would impair social and object recognition memory. Male Sprague Dawley rats were exposed to SPS then tested in the social choice test (SCT) or novel object recognition test (NOR). These tests measure recognition of novelty over familiarity, a natural preference of rodents. Results show that SPS impaired preference for both social and object novelty. In addition, SPS impairment in social recognition may be caused by impaired behavioral flexibility, or an inability to shift behavior during the SCT. These results demonstrate that traumatic stress can impair social and object recognition memory, which may underlie certain avoidant symptoms or negative cognition in PTSD and be related to impaired behavioral flexibility. Copyright © 2013 Elsevier B.V. All rights reserved.
Advanced miniature processing hardware for ATR applications
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Daud, Taher (Inventor); Thakoor, Anikumar (Inventor)
2003-01-01
A Hybrid Optoelectronic Neural Object Recognition System (HONORS) is disclosed, comprising two major building blocks: (1) an advanced grayscale optical correlator (OC) and (2) a massively parallel three-dimensional neural processor. The optical correlator, with its inherent advantages in parallel processing and shift invariance, is used for target-of-interest (TOI) detection and segmentation. The three-dimensional neural processor, with its robust neural learning capability, is used for target classification and identification. The hybrid optoelectronic neural object recognition system, with its powerful combination of optical processing and neural networks, enables real-time, large-frame, automatic target recognition (ATR).
Target recognition based on convolutional neural network
NASA Astrophysics Data System (ADS)
Wang, Liqiang; Wang, Xin; Xi, Fubiao; Dong, Jian
2017-11-01
An important part of object target recognition is feature extraction, which can be divided into manual (hand-crafted) feature extraction and automatic feature extraction. The traditional neural network is one of the automatic feature extraction methods, but its global connectivity creates a high risk of over-fitting. The deep learning algorithm used in this paper is a hierarchical automatic feature extraction method, trained with a layer-by-layer convolutional neural network (CNN), which extracts features from lower layers to higher layers. The resulting features are more discriminative, which benefits object target recognition.
Semantic and phonological schema influence spoken word learning and overnight consolidation.
Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H
2018-06-01
We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime awake. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.
Webly-Supervised Fine-Grained Visual Categorization via Deep Domain Adaptation.
Xu, Zhe; Huang, Shaoli; Zhang, Ya; Tao, Dacheng
2018-05-01
Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.
The effect of product characteristic familiarity on product recognition
NASA Astrophysics Data System (ADS)
Yang, Cheng; An, Fang; Chen, Chen; Zhu, Bin
2017-09-01
In order to explore the effect of product appearance characteristic familiarity on product recognition, both an EEG experiment and a questionnaire evaluation are used in this research. The objective feedback of users is obtained through the EEG experiment and their subjective opinions are collected through the questionnaires. The EEG experiment is combined with the classical learning-recognition paradigm, and the old/new effect in the recognition experiment is used as a metric of recognition. Experimental results show that differences in characteristic familiarity do have a significant effect on product recognition. The conclusion can be used in innovation design.
Ursino, Mauro; Magosso, Elisa; Cuppini, Cristiano
2009-02-01
Synchronization of neural activity in the gamma band is assumed to play a significant role not only in perceptual processing, but also in higher cognitive functions. Here, we propose a neural network of Wilson-Cowan oscillators to simulate recognition of abstract objects, each represented as a collection of four features. Features are ordered in topological maps of oscillators connected via excitatory lateral synapses, to implement a similarity principle. Experience with previous objects is stored in long-range synapses connecting the different topological maps, trained via timing-dependent Hebbian learning (previous knowledge principle). Finally, a downstream decision network detects the presence of a reliable object representation when all features are oscillating in synchrony. Simulations in which the network was presented with one to four simultaneous objects, some with missing and/or modified properties, suggest that the network can reconstruct objects and segment them from the other simultaneously present objects, even in cases of degraded information, noise, and moderate correlation among the inputs (one common feature). The balance between sensitivity and specificity depends on the strength of the Hebbian learning. Achieving a correct reconstruction in all cases, however, requires ad hoc selection of the oscillation frequency. The model represents an attempt to investigate the interactions among topological maps, autoassociative memory, and gamma-band synchronization, for recognition of abstract objects.
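A minimal Python sketch of a single Wilson-Cowan excitatory/inhibitory unit of the kind used as a building block in such oscillator networks (not the paper's parameterization); the coupling weights, external drive, and integration settings are generic illustrative choices.

    import numpy as np

    # Minimal single Wilson-Cowan excitatory/inhibitory unit of the kind used as a building
    # block in such oscillator networks; the weights, external drive P and Euler integration
    # settings are generic textbook-style choices, not the paper's parameters.
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def simulate(T=500.0, dt=0.1, P=1.25):
        E, I, trace = 0.1, 0.1, []
        for _ in range(int(T / dt)):
            dE = -E + sigmoid(12.0 * E - 10.0 * I + P)    # excitatory population
            dI = -I + sigmoid(10.0 * E - 2.0 * I)         # inhibitory population
            E, I = E + dt * dE, I + dt * dI
            trace.append(E)
        return np.array(trace)

    activity = simulate()
    print(activity[-10:])   # oscillatory or fixed-point activity, depending on the drive P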
Learning viewpoint invariant perceptual representations from cluttered images.
Spratling, Michael W
2005-05-01
In order to perform object recognition, it is necessary to form perceptual representations that are sufficiently specific to distinguish between objects, but that are also sufficiently flexible to generalize across changes in location, rotation, and scale. A standard method for learning perceptual representations that are invariant to viewpoint is to form temporal associations across image sequences showing object transformations. However, this method requires that individual stimuli be presented in isolation and is therefore unlikely to succeed in real-world applications where multiple objects can co-occur in the visual input. This paper proposes a simple modification to the learning method that can overcome this limitation and results in more robust learning of invariant representations.
da Silva de Vargas, Liane; Neves, Ben-Hur Souto das; Roehrs, Rafael; Izquierdo, Iván; Mello-Carpes, Pâmela
2017-06-30
Previously we showed the involvement of the hippocampal noradrenergic system in the consolidation and persistence of object recognition (OR) memory. Here we show that a single physical exercise session performed immediately after learning promotes OR memory persistence and increases norepinephrine levels in the hippocampus. Additionally, the effects of exercise on memory are prevented by an intra-hippocampal infusion of a beta-adrenergic antagonist. Taken together, these results suggest that the effects of exercise on memory may be related to noradrenergic mechanisms, and that acute physical exercise can be a non-pharmacological intervention to assist memory consolidation and persistence, with few or no side effects. Copyright © 2017 Elsevier B.V. All rights reserved.
Visual recognition and inference using dynamic overcomplete sparse learning.
Murray, Joseph F; Kreutz-Delgado, Kenneth
2007-09-01
We present a hierarchical architecture and learning algorithm for visual recognition and other visual inference tasks such as imagination, reconstruction of occluded images, and expectation-driven segmentation. Using properties of biological vision for guidance, we posit a stochastic generative world model and from it develop a simplified world model (SWM) based on a tractable variational approximation that is designed to enforce sparse coding. Recent developments in computational methods for learning overcomplete representations (Lewicki & Sejnowski, 2000; Teh, Welling, Osindero, & Hinton, 2003) suggest that overcompleteness can be useful for visual tasks, and we use an overcomplete dictionary learning algorithm (Kreutz-Delgado, et al., 2003) as a preprocessing stage to produce accurate, sparse codings of images. Inference is performed by constructing a dynamic multilayer network with feedforward, feedback, and lateral connections, which is trained to approximate the SWM. Learning is done with a variant of the back-propagation-through-time algorithm, which encourages convergence to desired states within a fixed number of iterations. Vision tasks require large networks, and to make learning efficient, we take advantage of the sparsity of each layer to update only a small subset of elements in a large weight matrix at each iteration. Experiments on a set of rotated objects demonstrate various types of visual inference and show that increasing the degree of overcompleteness improves recognition performance in difficult scenes with occluded objects in clutter.
Pedestrian recognition using automotive radar sensors
NASA Astrophysics Data System (ADS)
Bartsch, A.; Fitzek, F.; Rasshofer, R. H.
2012-09-01
The application of modern series production automotive radar sensors to pedestrian recognition is an important topic in research on future driver assistance systems. The aim of this paper is to understand the potential and limits of such sensors in pedestrian recognition. This knowledge could be used to develop next generation radar sensors with improved pedestrian recognition capabilities. A new raw radar data signal processing algorithm is proposed that allows deep insights into the object classification process. The impact of raw radar data properties can be directly observed in every layer of the classification system by avoiding machine learning and tracking. This gives information on the limiting factors of raw radar data in terms of classification decision making. To accomplish the very challenging distinction between pedestrians and static objects, five significant and stable object features from the spatial distribution and Doppler information are found. Experimental results with data from a 77 GHz automotive radar sensor show that over 95% of pedestrians can be classified correctly under optimal conditions, which is comparable to modern machine learning systems. The impact of the pedestrian's direction of movement, occlusion, antenna beam elevation angle, linear vehicle movement, and other factors is investigated and discussed. The results show that under real life conditions, radar only based pedestrian recognition is limited due to insufficient Doppler frequency and spatial resolution as well as antenna side lobe effects.
Rajaei, Karim; Khaligh-Razavi, Seyed-Mahdi; Ghodrati, Masoud; Ebrahimpour, Reza; Shiri Ahmad Abadi, Mohammad Ebrahim
2012-01-01
The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply the Adaptive Resonance Theory (ART) for extracting informative intermediate level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task.
Learning Weight Uncertainty with Stochastic Gradient MCMC for Shape Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Chunyuan; Stevens, Andrew J.; Chen, Changyou
2016-08-10
Learning the representation of shape cues in 2D & 3D objects for recognition is a fundamental task in computer vision. Deep neural networks (DNNs) have shown promising performance on this task. Due to the large variability of shapes, accurate recognition relies on good estimates of model uncertainty, ignored in traditional training of DNNs, typically learned via stochastic optimization. This paper leverages recent advances in stochastic gradient Markov Chain Monte Carlo (SG-MCMC) to learn weight uncertainty in DNNs. It yields principled Bayesian interpretations for the commonly used Dropout/DropConnect techniques and incorporates them into the SG-MCMC framework. Extensive experiments on 2D & 3D shape datasets and various DNN models demonstrate the superiority of the proposed approach over stochastic optimization. Our approach yields higher recognition accuracy when used in conjunction with Dropout and Batch-Normalization.
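The simplest member of the SG-MCMC family mentioned above is stochastic gradient Langevin dynamics (SGLD): a gradient step on the minibatch negative log-posterior plus Gaussian noise whose variance equals the step size. The Python sketch below is a generic illustration, not the paper's exact sampler; the toy Gaussian target and step size are assumptions.

    import numpy as np

    # Generic stochastic gradient Langevin dynamics (SGLD) update, the simplest SG-MCMC sampler:
    # a gradient step on the minibatch negative log-posterior plus Gaussian noise whose variance
    # equals the step size.  grad_fn, the toy target and the step size are placeholders.
    def sgld_step(theta, grad_fn, minibatch, step_size, rng):
        grad = grad_fn(theta, minibatch)                   # stochastic gradient of -log posterior
        noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
        return theta - 0.5 * step_size * grad + noise

    # Toy example: sampling a 2-D standard Gaussian, whose -log density has gradient theta.
    rng = np.random.default_rng(0)
    theta, samples = np.zeros(2), []
    for _ in range(5000):
        theta = sgld_step(theta, lambda th, mb: th, None, step_size=1e-2, rng=rng)
        samples.append(theta.copy())
    print(np.mean(samples, axis=0), np.var(samples, axis=0))   # ~0 mean, ~unit variance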
Learning and disrupting invariance in visual recognition with a temporal association rule
Isik, Leyla; Leibo, Joel Z.; Poggio, Tomaso
2012-01-01
Learning by temporal association rules such as Foldiak's trace rule is an attractive hypothesis that explains the development of invariance in visual recognition. Consistent with these rules, several recent experiments have shown that invariance can be broken at both the psychophysical and single cell levels. We show (1) that temporal association learning provides appropriate invariance in models of object recognition inspired by the visual cortex, (2) that we can replicate the “invariance disruption” experiments using these models with a temporal association learning rule to develop and maintain invariance, and (3) that despite dramatic single cell effects, a population of cells is very robust to these disruptions. We argue that these models account for the stability of perceptual invariance despite the underlying plasticity of the system, the variability of the visual world and expected noise in the biological mechanisms. PMID:22754523
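A minimal Python sketch of a Foldiak-style temporal trace rule of the kind the abstract refers to: the Hebbian update uses a low-pass-filtered trace of the postsynaptic activity rather than the instantaneous response. The learning rate, decay, and random input sequence are illustrative assumptions, not the models' settings.

    import numpy as np

    # Minimal Foldiak-style trace rule: the Hebbian update uses a low-pass-filtered "trace" of
    # the postsynaptic activity instead of the instantaneous response, so inputs seen in close
    # temporal succession (e.g. different views of one object) get wired to the same unit.
    # Learning rate, decay and the random input stream are illustrative choices.
    rng = np.random.default_rng(1)
    n_inputs, n_steps = 20, 200
    w = rng.random(n_inputs) * 0.01
    trace, eta, delta = 0.0, 0.05, 0.2

    for _ in range(n_steps):
        x = (rng.random(n_inputs) < 0.1).astype(float)   # stand-in for the current view's features
        y = float(w @ x)                                  # postsynaptic response
        trace = (1 - delta) * trace + delta * y           # decaying activity trace
        w += eta * trace * x                              # Hebbian update driven by the trace
        w /= np.linalg.norm(w) + 1e-12                    # keep the weight vector bounded

    print(np.round(w[:5], 3))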
Component Pin Recognition Using Algorithms Based on Machine Learning
NASA Astrophysics Data System (ADS)
Xiao, Yang; Hu, Hong; Liu, Ze; Xu, Jiangchang
2018-04-01
The purpose of machine vision for a plug-in machine is to improve the machine's stability and accuracy, and recognition of the component pin is an important part of the vision system. This paper focuses on component pin recognition using three different techniques. The first technique involves traditional image processing using the core algorithm for binary large object (BLOB) analysis. The second technique uses the histogram of oriented gradients (HOG) to experimentally compare the support vector machine (SVM) and the adaptive boosting (AdaBoost) learning meta-algorithm as classifiers. The third technique uses a deep learning method, the convolutional neural network (CNN), which identifies the pin by comparing a sample against its training data. The main purpose of the research presented in this paper is to increase the knowledge of learning methods used in the plug-in machine industry in order to achieve better results.
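Illustrative sketch of the second technique (HOG features fed to an SVM and to AdaBoost), assuming fixed-size grayscale pin/non-pin patches are already cropped; the parameter values are placeholders, not those of the paper.

    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import SVC
    from sklearn.ensemble import AdaBoostClassifier

    def hog_features(patches):
        """Compute HOG descriptors for a list of equal-sized grayscale patches."""
        return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                             cells_per_block=(2, 2)) for p in patches])

    def train_pin_classifiers(train_patches, train_labels):
        """Fit the two classifiers on the same HOG features so they can be compared."""
        X = hog_features(train_patches)
        svm = SVC(kernel="rbf", gamma="scale").fit(X, train_labels)
        ada = AdaBoostClassifier(n_estimators=200).fit(X, train_labels)
        return svm, ada   # evaluate both on a held-out set to compare them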
Learning in non-formal education: Is it "youthful" for youth in action?
NASA Astrophysics Data System (ADS)
Norqvist, Lars; Leffler, Eva
2017-04-01
This article offers insights into the practices of a non-formal education programme for youth provided by the European Union (EU). It takes a qualitative approach and is based on a case study of the European Voluntary Service (EVS). Data were collected during individual and focus group interviews with learners (the EVS volunteers), decision takers and trainers, with the aim of deriving an understanding of learning in non-formal education. The research questions concerned learning, the recognition of learning and perspectives of usefulness. The study also examined the Youthpass documentation tool as a key to understanding the recognition of learning and to determine whether the learning was useful for learners (the volunteers). The findings and analysis offer several interpretations of learning, and the recognition of learning, which take place in non-formal education. The findings also revealed that it is complicated to divide learning into formal and non-formal categories; instead, non-formal education is useful for individual learners when both formal and non-formal educational contexts are integrated. As a consequence, the division of formal and non-formal (and possibly even informal) learning creates a gap which works against the development of flexible and interconnected education with ubiquitous learning and mobility within and across formal and non-formal education. This development is not in the best interests of learners, especially when seeking useful learning and education for youth (what the authors term "youthful" for youth in action).
Kéri, Szabolcs
2014-05-01
Most of our learning activity takes place in a social context. I examined how social interactions influence associative learning in neurodegenerative diseases and atypical neurodevelopmental conditions primarily characterised by social cognitive and memory dysfunctions. Participants were individuals with high-functioning autism (HFA, n = 18), early-stage behavioural variant frontotemporal dementia (bvFTD, n = 16) and Alzheimer's disease (AD, n = 20). The leading symptoms in HFA and bvFTD were social and behavioural dysfunctions, whereas AD was characterised by memory deficits. Participants received three versions of a paired associates learning task. In the game with boxes test, objects were hidden in six candy boxes placed in different locations on the computer screen. In the game with faces, each box was labelled by a photo of a person. In the real-life version of the game, participants played with real persons. Individuals with HFA and bvFTD performed well in the computer games, but failed on the task including real persons. In contrast, in patients with early-stage AD, social interactions boosted paired associates learning up to the level of healthy control volunteers. Worse performance in the real life game was associated with less successful recognition of complex emotions and mental states in the Reading the Mind in the Eyes Test. Spatial span did not affect the results. When social cognition is impaired, but memory systems are less compromised (HFA and bvFTD), real-life interactions disrupt associative learning; when disease process impairs memory systems but social cognition is relatively intact (early-stage AD), social interactions have a beneficial effect on learning and memory. Copyright © 2014 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Foley, Nicholas C.; Grossberg, Stephen; Mingolla, Ennio
2012-01-01
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued…
Huang, Lijie; Song, Yiying; Li, Jingguang; Zhen, Zonglei; Yang, Zetian; Liu, Jia
2014-01-01
In functional magnetic resonance imaging studies, object selectivity is defined as a higher neural response to an object category than other object categories. Importantly, object selectivity is widely considered as a neural signature of a functionally-specialized area in processing its preferred object category in the human brain. However, the behavioral significance of the object selectivity remains unclear. In the present study, we used the individual differences approach to correlate participants' face selectivity in the face-selective regions with their behavioral performance in face recognition measured outside the scanner in a large sample of healthy adults. Face selectivity was defined as the z score of activation with the contrast of faces vs. non-face objects, and the face recognition ability was indexed as the normalized residual of the accuracy in recognizing previously-learned faces after regressing out that for non-face objects in an old/new memory task. We found that the participants with higher face selectivity in the fusiform face area (FFA) and the occipital face area (OFA), but not in the posterior part of the superior temporal sulcus (pSTS), possessed higher face recognition ability. Importantly, the association of face selectivity in the FFA and face recognition ability cannot be accounted for by FFA response to objects or behavioral performance in object recognition, suggesting that the association is domain-specific. Finally, the association is reliable, confirmed by the replication from another independent participant group. In sum, our finding provides empirical evidence on the validity of using object selectivity as a neural signature in defining object-selective regions in the human brain. PMID:25071513
NASA Technical Reports Server (NTRS)
Laughlin, Daniel
2008-01-01
Persistent Immersive Synthetic Environments (PISE) are not just connection points, they are meeting places. They are the new public squares, village centers, malt shops, malls and pubs all rolled into one. They come with a sense of "thereness" that engages the mind like a real place does. Learning starts with the code. The code defines "objects." The objects exist in computer space, known as the "grid." The objects and space combine to create a "place." A "world" is created. Before long, the grid and code become obscure, and the "world" maintains focus.
Lee, Inah; Kim, Jangjin
2010-08-01
Hippocampal-dependent tasks often involve specific associations among stimuli (including egocentric information), and such tasks are therefore prone to interference from irrelevant task strategies before a correct strategy is found. Using an object-place paired-associate task, we investigated changes in neural firing patterns in the hippocampus in association with a shift in strategy during learning. We used an object-place paired-associate task in which a pair of objects was presented in two different arms of a radial maze. Each object was associated with reward only in one of the arms, thus requiring the rats to consider both object identity and its location in the maze. Hippocampal neurons recorded in CA1 displayed a dynamic transition in their firing patterns during the acquisition of the task across days, and this corresponded to a shift in strategy manifested in behavioral data. Specifically, before the rats learned the task, they chose an object that maintained a particular egocentric relationship with their body (response strategy) irrespective of the object identity. However, as the animal acquired the task, it chose an object according to both its identity and the associated location in the maze (object-in-place strategy). We report that CA1 neurons in the hippocampus changed their prospective firing correlates according to the dominant strategy (i.e., response versus object-in-place strategy) employed at a given stage of learning. The results suggest that neural firing pattern in the hippocampus is heavily influenced by the task demand hypothesized by the animal and the firing pattern changes flexibly as the perceived task demand changes.
Graded effects in hierarchical figure-ground organization: reply to Peterson (1999).
Vecera, S P; O'Reilly, R C
2000-06-01
An important issue in vision research concerns the order of visual processing. S. P. Vecera and R. C. O'Reilly (1998) presented an interactive, hierarchical model that placed figure-ground segregation prior to object recognition. M. A. Peterson (1999) critiqued this model, arguing that because it used ambiguous stimulus displays, figure-ground processing did not precede object processing. In the current article, the authors respond to Peterson's (1999) interpretation of ambiguity in the model and her interpretation of what it means for figure-ground processing to come before object recognition. The authors argue that complete stimulus ambiguity is not critical to the model and that figure-ground precedes object recognition architecturally in the model. The arguments are supported with additional simulation results and an experiment, demonstrating that top-down inputs can influence figure-ground organization in displays that contain stimulus cues.
Symbolic Play Connects to Language through Visual Object Recognition
ERIC Educational Resources Information Center
Smith, Linda B.; Jones, Susan S.
2011-01-01
Object substitutions in play (e.g. using a box as a car) are strongly linked to language learning and their absence is a diagnostic marker of language delay. Classic accounts posit a symbolic function that underlies both words and object substitutions. Here we show that object substitutions depend on developmental changes in visual object…
Recognition & Response: Response to Intervention for PreK
ERIC Educational Resources Information Center
Buysse, Virginia; Peisner-Feinberg, Ellen
2010-01-01
Some young children show signs that they may not be learning in an expected manner, even during the prekindergarten (PreK) years. These children may exhibit learning challenges in areas such as developing language, counting objects, hearing differences in letter sounds, paying attention during story time, or learning how to write. Teachers,…
Extreme Trust Region Policy Optimization for Active Object Recognition.
Liu, Huaping; Wu, Yupei; Sun, Fuchun
2018-06-01
In this brief, we develop a deep reinforcement learning method to actively recognize objects by choosing a sequence of actions for an active camera that helps to discriminate between the objects. The method is realized using trust region policy optimization, in which the policy is implemented by an extreme learning machine and therefore leads to an efficient optimization algorithm. The experimental results on the publicly available data set show the advantages of the developed extreme trust region optimization method.
Lateral Entorhinal Cortex is Critical for Novel Object-Context Recognition
Wilson, David IG; Langston, Rosamund F; Schlesiger, Magdalene I; Wagner, Monica; Watanabe, Sakurako; Ainge, James A
2013-01-01
Episodic memory incorporates information about specific events or occasions including spatial locations and the contextual features of the environment in which the event took place. It has been modeled in rats using spontaneous exploration of novel configurations of objects, their locations, and the contexts in which they are presented. While we have a detailed understanding of how spatial location is processed in the brain, relatively little is known about where the nonspatial contextual components of episodic memory are processed. Initial experiments measured c-fos expression during an object-context recognition (OCR) task to examine which networks within the brain process contextual features of an event. Increased c-fos expression was found in the lateral entorhinal cortex (LEC; a major hippocampal afferent) during OCR relative to control conditions. In a subsequent experiment it was demonstrated that rats with lesions of LEC were unable to recognize object-context associations yet showed normal object recognition and normal context recognition. These data suggest that contextual features of the environment are integrated with object identity in LEC and demonstrate that recognition of such object-context associations requires the LEC. This is consistent with the suggestion that contextual features of an event are processed in LEC and that this information is combined with spatial information from medial entorhinal cortex to form episodic memory in the hippocampus. © 2013 Wiley Periodicals, Inc. PMID:23389958
2016-07-01
[Report record; only fragments of the abstract and table of contents survive. Keywords: reconstruction, video synchronization, multi-view tracking, action recognition, reasoning with uncertainty. Listed sections include human action recognition across multiple views and multi-view multi-object tracking with 3D cues.]
Evaluating the Visualization of What a Deep Neural Network Has Learned.
Samek, Wojciech; Binder, Alexander; Montavon, Gregoire; Lapuschkin, Sebastian; Muller, Klaus-Robert
Deep neural networks (DNNs) have demonstrated impressive performance in complex machine learning tasks such as image classification or speech recognition. However, due to their multilayer nonlinear structure, they are not transparent, i.e., it is hard to grasp what makes them arrive at a particular classification or recognition decision, given a new unseen data sample. Recently, several approaches have been proposed enabling one to understand and interpret the reasoning embodied in a DNN for a single test image. These methods quantify the "importance" of individual pixels with respect to the classification decision and allow a visualization in terms of a heatmap in pixel/input space. While the usefulness of heatmaps can be judged subjectively by a human, an objective quality measure is missing. In this paper, we present a general methodology based on region perturbation for evaluating ordered collections of pixels such as heatmaps. We compare heatmaps computed by three different methods on the SUN397, ILSVRC2012, and MIT Places data sets. Our main result is that the recently proposed layer-wise relevance propagation algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method. We provide theoretical arguments to explain this result and discuss its practical implications. Finally, we investigate the use of heatmaps for unsupervised assessment of the neural network performance.
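Illustrative sketch of the region-perturbation idea: remove image regions in order of decreasing heatmap relevance and track how quickly the classifier's score for the predicted class falls; a faster drop indicates a more faithful heatmap. The grayscale-image assumption, the model_score callable, the patch size and the random-value perturbation below are all assumptions for illustration, not the paper's exact protocol.

    import numpy as np

    def perturbation_curve(image, heatmap, model_score, n_steps=50, patch=9, rng=None):
        """Progressively perturb image regions in decreasing heatmap order.

        image, heatmap: 2D arrays of the same shape.
        model_score(image) -> scalar score for the originally predicted class.
        Returns the score after each perturbation step; the area between the
        original score and this curve (AOPC-style) summarizes heatmap quality.
        """
        rng = rng or np.random.default_rng(0)
        img = image.copy()
        h, w = heatmap.shape
        # Rank non-overlapping patch origins by summed relevance.
        ys, xs = np.meshgrid(np.arange(0, h, patch), np.arange(0, w, patch), indexing="ij")
        cells = [(y, x, heatmap[y:y+patch, x:x+patch].sum())
                 for y, x in zip(ys.ravel(), xs.ravel())]
        cells.sort(key=lambda c: -c[2])
        scores = []
        for y, x, _ in cells[:n_steps]:
            region = img[y:y+patch, x:x+patch]
            img[y:y+patch, x:x+patch] = rng.uniform(img.min(), img.max(), size=region.shape)
            scores.append(model_score(img))
        return np.array(scores)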
Salvetti, Beatrice; Morris, Richard G M; Wang, Szu-Han
2014-01-15
Many insignificant events in our daily life are forgotten quickly but can be remembered for longer when other memory-modulating events occur before or after them. This phenomenon has been investigated in animal models in a protocol in which weak memories persist longer if exploration in a novel context is introduced around the time of memory encoding. This study aims to understand whether other types of rewarding or novel tasks, such as rewarded learning in a T-maze and novel object recognition, can also be effective memory-modulating events. Rats were trained in a delayed matching-to-place task to encode and retrieve food locations in an event arena. Weak encoding with only one food pellet at the sample location induced memory encoding but forgetting over 24 h. When this same weak encoding was followed by a rewarded task in a T-maze, the memory persisted for 24 h. Moreover, the same persistence of memory over 24 h could be achieved by exploration in a novel box or by a rewarded T-maze task after a "non-rewarded" weak encoding. When the one-pellet weak encoding was followed by novel object exploration, the memory did not persist at 24 h. Together, the results confirm that place encoding is possible without explicit reward, and that rewarded learning in a separate task lacking novelty can be an effective memory-modulating event. The behavioral and neurobiological implications are discussed.
Constraints in distortion-invariant target recognition system simulation
NASA Astrophysics Data System (ADS)
Iftekharuddin, Khan M.; Razzaque, Md A.
2000-11-01
Automatic target recognition (ATR) is a mature but active research area. In an earlier paper, we proposed a novel ATR approach for recognition of targets varying in fine details, rotation, and translation using a Learning Vector Quantization (LVQ) Neural Network (NN). The proposed approach performed segmentation of multiple objects and the identification of the objects using LVQNN. In this current paper, we extend the previous approach for recognition of targets varying in rotation, translation, scale, and combination of all three distortions. We obtain the analytical results of the system level design to show that the approach performs well with some constraints. The first constraint determines the size of the input images and input filters. The second constraint shows the limits on amount of rotation, translation, and scale of input objects. We present the simulation verification of the constraints using DARPA's Moving and Stationary Target Recognition (MSTAR) images with different depression and pose angles. The simulation results using MSTAR images verify the analytical constraints of the system level design.
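Illustrative sketch of the kind of LVQ classifier referred to above (LVQ1): the nearest prototype is pulled toward a training sample when their labels agree and pushed away otherwise. Prototype initialization, the feature extraction from segmented targets, and the learning-rate schedule are left as assumptions.

    import numpy as np

    def lvq1_train(X, y, prototypes, proto_labels, eta=0.05, epochs=20):
        """Basic LVQ1 training loop on feature vectors X with labels y."""
        P = prototypes.astype(float).copy()
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                j = np.argmin(np.linalg.norm(P - xi, axis=1))   # best-matching prototype
                sign = 1.0 if proto_labels[j] == yi else -1.0
                P[j] += sign * eta * (xi - P[j])
        return P

    def lvq1_predict(X, prototypes, proto_labels):
        """Assign each sample the label of its nearest prototype."""
        proto_labels = np.asarray(proto_labels)
        dists = np.linalg.norm(prototypes[:, None, :] - X[None, :, :], axis=2)
        return proto_labels[np.argmin(dists, axis=0)]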
Circular blurred shape model for multiclass symbol recognition.
Escalera, Sergio; Fornés, Alicia; Pujol, Oriol; Lladós, Josep; Radeva, Petia
2011-04-01
In this paper, we propose a circular blurred shape model descriptor to deal with the problem of symbol detection and classification as a particular case of object recognition. The feature extraction is performed by capturing the spatial arrangement of significant object characteristics in a correlogram structure. The shape information from objects is shared among correlogram regions, where a prior blurring degree defines the level of distortion allowed in the symbol, making the descriptor tolerant to irregular deformations. Moreover, the descriptor is rotation invariant by definition. We validate the effectiveness of the proposed descriptor in both the multiclass symbol recognition and symbol detection domains. In order to perform the symbol detection, the descriptors are learned using a cascade of classifiers. In the case of multiclass categorization, the new feature space is learned using a set of binary classifiers which are embedded in an error-correcting output code design. The results over four symbol data sets show the significant improvements of the proposed descriptor compared to the state-of-the-art descriptors. In particular, the results are even more significant in those cases where the symbols suffer from elastic deformations.
NASA Astrophysics Data System (ADS)
Rishi, Rahul; Choudhary, Amit; Singh, Ravinder; Dhaka, Vijaypal Singh; Ahlawat, Savita; Rao, Mukta
2010-02-01
In this paper we propose a system for the classification of handwritten text. At a broad level, the system is composed of a preprocessing module, a supervised learning module and a recognition module. The preprocessing module digitizes the documents and extracts features (tangent values) for each character. The radial basis function network is used in the learning and recognition modules. The objective is to analyze and improve the performance of the Multi Layer Perceptron (MLP) using RBF transfer functions instead of the logarithmic sigmoid function. The results of 35 experiments indicate that the feed-forward MLP performs accurately and exhaustively with RBF. With the changed weight-update mechanism and the feature-based preprocessing module, the proposed system is competent and shows good recognition performance.
Visual object recognition for mobile tourist information systems
NASA Astrophysics Data System (ADS)
Paletta, Lucas; Fritz, Gerald; Seifert, Christin; Luley, Patrick; Almer, Alexander
2005-03-01
We describe a mobile vision system that is capable of automated object identification using images captured from a PDA or a camera phone. We present a solution for the enabling technology of outdoor vision-based object recognition that will extend state-of-the-art location and context aware services towards object-based awareness in urban environments. In the proposed application scenario, tourist pedestrians are equipped with GPS, W-LAN and a camera attached to a PDA or a camera phone. They are interested in whether their field of view contains tourist sights that would point to more detailed information. Multimedia-type data about related history, the architecture, or other related cultural context of historic or artistic relevance might be explored by a mobile user who is intending to learn within the urban environment. Learning from ambient cues is in this way achieved by pointing the device towards the urban sight, capturing an image, and consequently getting information about the object on site and within the focus of attention, i.e., the user's current field of view.
Roschlau, Corinna; Hauber, Wolfgang
2017-04-14
Growing evidence suggests that the catecholamine (CA) neurotransmitters dopamine and noradrenaline support hippocampus-mediated learning and memory. However, little is known to date about which forms of hippocampus-mediated spatial learning are modulated by CA signaling in the hippocampus. Therefore, in the current study we examined the effects of 6-hydroxydopamine-induced CA depletion in the dorsal hippocampus on two prominent forms of hippocampus-based spatial learning, that is learning of object-location associations (paired-associates learning) as well as learning and choosing actions based on a representation of the context (place learning). Results show that rats with CA depletion of the dorsal hippocampus were able to learn object-location associations in an automated touch screen paired-associates learning (PAL) task. One possibility to explain this negative result is that object-location learning as tested in the touchscreen PAL task seems to require relatively little hippocampal processing. Results further show that in rats with CA depletion of the dorsal hippocampus the use of a response strategy was facilitated in a T-maze spatial learning task. We suspect that impaired hippocampus CA signaling may attenuate hippocampus-based place learning and favor dorsolateral striatum-based response learning. Copyright © 2017 Elsevier B.V. All rights reserved.
Enhanced recognition memory following glycine transporter 1 deletion in forebrain neurons.
Singer, Philipp; Boison, Detlev; Möhler, Hanns; Feldon, Joram; Yee, Benjamin K
2007-10-01
Selective deletion of glycine transporter 1 (GlyT1) in forebrain neurons enhances N-methyl-D-aspartate receptor (NMDAR)-dependent neurotransmission and facilitates associative learning. These effects are attributable to increases in extracellular glycine availability in forebrain neurons due to reduced glycine re-uptake. Using a forebrain- and neuron-specific GlyT1-knockout mouse line (CamKIIalphaCre; GlyT1tm1.2fl/fI), the authors investigated whether this molecular intervention can affect recognition memory. In a spontaneous object recognition memory test, enhanced preference for a novel object was demonstrated in mutant mice relative to littermate control subjects at a retention interval of 2 hr, but not at 2 min. Furthermore, mutants were responsive to a switch in the relative spatial positions of objects, whereas control subjects were not. These potential procognitive effects were demonstrated against a lack of difference in contextual novelty detection: Mutant and control subjects showed equivalent preference for a novel over a familiar context. Results therefore extend the possible range of potential promnesic effects of specific forebrain neuronal GlyT1 deletion from associative learning to recognition memory and further support the possibility that mnemonic functions can be enhanced by reducing GlyT1 function. (PsycINFO Database Record (c) 2007 APA, all rights reserved).
Chen, Yibing; Ogata, Taiki; Ueyama, Tsuyoshi; Takada, Toshiyuki; Ota, Jun
2018-01-01
Machine vision is playing an increasingly important role in industrial applications, and the automated design of image recognition systems has been a subject of intense research. This study has proposed a system for automatically designing the field-of-view (FOV) of a camera, the illumination strength and the parameters in a recognition algorithm. We formulated the design problem as an optimisation problem and used an experiment based on a hierarchical algorithm to solve it. The evaluation experiments using translucent plastics objects showed that the use of the proposed system resulted in an effective solution with a wide FOV, recognition of all objects and 0.32 mm and 0.4° maximal positional and angular errors when all the RGB (red, green and blue) for illumination and R channel image for recognition were used. Though all the RGB illumination and grey scale images also provided recognition of all the objects, only a narrow FOV was selected. Moreover, full recognition was not achieved by using only G illumination and a grey-scale image. The results showed that the proposed method can automatically design the FOV, illumination and parameters in the recognition algorithm and that tuning all the RGB illumination is desirable even when single-channel or grey-scale images are used for recognition. PMID:29786665
a Fully Automated Pipeline for Classification Tasks with AN Application to Remote Sensing
NASA Astrophysics Data System (ADS)
Suzuki, K.; Claesen, M.; Takeda, H.; De Moor, B.
2016-06-01
Nowadays deep learning has been intensively in the spotlight owing to its great victories at major competitions, which has undeservedly pushed 'shallow' machine learning methods, the relatively simple and handy algorithms commonly used by industrial engineers, into the background in spite of their advantages, such as the small amount of time and data they require for training. We, taking a practical point of view, utilized shallow learning algorithms to construct a learning pipeline such that operators can utilize machine learning without any special knowledge, expensive computation environment, or a large amount of labelled data. The proposed pipeline automates the whole classification process, namely feature selection, feature weighting and the selection of the most suitable classifier with optimized hyperparameters. The configuration employs particle swarm optimization, one of the well-known metaheuristic algorithms, for the sake of generally fast and fine optimization; this enables us not only to optimize (hyper)parameters but also to determine the appropriate features and classifier for the problem, a choice that has conventionally been made a priori from domain knowledge or handled with naive algorithms such as grid search. Through experiments with the MNIST and CIFAR-10 datasets, common datasets in the computer vision field for character recognition and object recognition problems respectively, our automated learning approach provides high performance considering its simple setting (i.e. no specialized setting depending on the dataset), the small amount of training data, and the practical learning time. Moreover, compared to deep learning, the performance stays robust almost without any modification even on a remote sensing object recognition problem, which in turn indicates a high possibility that our approach contributes to general classification problems.
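Illustrative sketch of such an automated shallow pipeline using scikit-learn: feature selection, classifier choice and hyperparameters are searched jointly. Random search stands in for the particle swarm optimizer described in the abstract, and the candidate values (including the SelectKBest k values, which assume at least ~100 features) are placeholders.

    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import RandomizedSearchCV

    def build_auto_pipeline(X, y):
        """Jointly search feature selection, classifier choice and hyperparameters."""
        pipe = Pipeline([("scale", StandardScaler()),
                         ("select", SelectKBest(f_classif)),
                         ("clf", SVC())])
        search_space = [
            {"select__k": [20, 50, 100],
             "clf": [SVC()], "clf__C": np.logspace(-2, 2, 10), "clf__gamma": ["scale", "auto"]},
            {"select__k": [20, 50, 100],
             "clf": [RandomForestClassifier()], "clf__n_estimators": [100, 300, 500]},
        ]
        search = RandomizedSearchCV(pipe, search_space, n_iter=25, cv=3)
        return search.fit(X, y)   # search.best_estimator_ is the tuned pipeline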
Ball-scale based hierarchical multi-object recognition in 3D medical images
NASA Astrophysics Data System (ADS)
Bağci, Ulas; Udupa, Jayaram K.; Chen, Xinjian
2010-03-01
This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automatic way of constructing a multi-object shape model assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images. (c) A hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate the following: (1) Incorporating a large number of objects improves the recognition accuracy dramatically. (2) The recognition algorithm can be thought of as a hierarchical framework in which quick placement of the model assembly constitutes coarse recognition and delineation itself constitutes the finest recognition. (3) Scale yields useful information about the relationship between the model assembly and any given image such that the recognition results in a placement of the model close to the actual pose without doing any elaborate searches or optimization. (4) Effective object recognition can make delineation most accurate.
d'Isa, Raffaele; Brambilla, Riccardo; Fasano, Stefania
2014-01-01
Memory is a high-level brain function that enables organisms to adapt their behavioral responses to the environment, hence increasing their probability of survival. The Ras-ERK pathway is a key molecular intracellular signalling cascade for memory consolidation. In this chapter we will describe two main one-trial behavioral tests commonly used in the field of memory research in order to assess the role of Ras-ERK signalling in long-term memory: passive avoidance and object recognition. Passive avoidance (PA) is a fear-motivated instrumental learning task, designed by Jarvik and Essman in 1960, in which animals learn to refrain from emitting a behavioral response that has previously been associated with a punishment. We will describe here the detailed protocol and show some examples of how PA can reveal impairments or enhancements in memory consolidation following loss or gain of function genetic manipulations of the Ras-ERK pathway. The phenotypes of global mutants as Ras-GRF1 KO, GENA53, and ERK1 KO mice, as well as of conditional region-specific mutants (striatal K-CREB mice), will be illustrated as examples. Novel object recognition (NOR), developed by Ennaceur and Delacour in 1988, is instead a more recent and highly ecological test, which relies on the natural tendency of rodents to spontaneously approach and explore novel objects, representing hence a useful non-stressful tool for the study of memory in animals without the employment of punishments or starvation/water restriction regimens. Careful indications will be given on how to select the positions for the novel object, in order to counterbalance for individual side preferences among mice during the training. Finally, the methods for calculating two learning indexes will be described. In addition to the classical discrimination index (DI) that measures the ability of an animal to discriminate between two different objects which are presented at the same time, we will describe the formula of a new index that we present here for the first time, the recognition index (RI), which quantifies the ability of an animal to recognize a same object at different time points and that, by taking into account the basal individual preferences displayed during the training, can give a more accurate measure of an animal's actual recognition memory.
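For reference, the classical discrimination index mentioned above is typically computed as in the sketch below; this is the standard NOR-literature formula and may differ in detail from the chapter's exact definition. The chapter's new recognition index additionally corrects for baseline preferences measured during training; its exact formula is given in the chapter itself and is only noted in a comment here.

    def discrimination_index(t_novel, t_familiar):
        """Classical discrimination index for novel object recognition:
        the proportion of extra exploration devoted to the novel object,
        ranging from -1 (familiar preferred) to +1 (novel preferred)."""
        total = t_novel + t_familiar
        return (t_novel - t_familiar) / total if total > 0 else 0.0

    # The chapter's recognition index (RI) additionally takes into account each
    # animal's baseline object/side preference displayed during training; its
    # precise formula is defined in the chapter and is not reproduced here.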
Data-centric method for object observation through scattering media
NASA Astrophysics Data System (ADS)
Tanida, Jun; Horisaki, Ryoichi
2018-03-01
A data-centric method is introduced for object observation through scattering media. A large number of training pairs are used to characterize the relation between the object and the observation signals based on machine learning. Using the method, object information can be retrieved even from strongly disturbed signals. As potential applications, object recognition, imaging, and focusing through scattering media were demonstrated.
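Illustrative sketch of the data-centric idea, with a plain ridge regression standing in for whatever learner the authors actually use; the array shapes and the choice of regressor are assumptions.

    import numpy as np
    from sklearn.linear_model import Ridge

    def train_descatter_model(speckle_train, objects_train, alpha=1.0):
        """Learn a linear map from flattened speckle patterns to flattened object images.

        speckle_train: (n_samples, n_speckle_pixels)
        objects_train: (n_samples, n_object_pixels)
        """
        return Ridge(alpha=alpha).fit(speckle_train, objects_train)

    def reconstruct(model, speckle, object_shape):
        """Recover an object estimate from a single disturbed speckle measurement."""
        return model.predict(speckle.reshape(1, -1)).reshape(object_shape)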
Children's Reactions to a Children's News Program: Reception, Recognition and Learning.
ERIC Educational Resources Information Center
Ward, Sara Ann
The major objectives of this study were to determine the reception of "In the News" by children within the target audience's ages, to determine if children within the target audience recognize the news program as a program, to determine if children learn from "In the News," and to compare children's learning from hard news…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Win-Shwe, Tin-Tin, E-mail: tin.tin.win.shwe@nies.go.jp; Fujimaki, Hidekazu; Fujitani, Yuji
2012-08-01
Recently, our laboratory reported that exposure to nanoparticle-rich diesel exhaust (NRDE) for 3 months impaired hippocampus-dependent spatial learning ability and up-regulated the expressions of memory function-related genes in the hippocampus of female mice. However, whether NRDE affects the hippocampus-dependent non-spatial learning ability and the mechanism of NRDE-induced neurotoxicity was unknown. Female BALB/c mice were exposed to clean air, middle-dose NRDE (M-NRDE, 47 μg/m³), high-dose NRDE (H-NRDE, 129 μg/m³), or filtered H-NRDE (F-DE) for 3 months. We then investigated the effect of NRDE exposure on non-spatial learning ability and the expression of genes related to glutamate neurotransmission using a novel object recognition test and a real-time RT-PCR analysis, respectively. We also examined microglia marker Iba1 immunoreactivity in the hippocampus using immunohistochemical analyses. Mice exposed to H-NRDE or F-DE could not discriminate between familiar and novel objects. The control and M-NRDE-exposed groups showed a significantly increased discrimination index, compared to the H-NRDE-exposed group. Although no significant changes in the expression levels of the NMDA receptor subunits were observed, the expression of glutamate transporter EAAT4 was decreased and that of glutamic acid decarboxylase GAD65 was increased in the hippocampus of H-NRDE-exposed mice, compared with the expression levels in control mice. We also found that microglia activation was prominent in the hippocampal area of the H-NRDE-exposed mice, compared with the other groups. These results indicated that exposure to NRDE for 3 months impaired the novel object recognition ability. The present study suggests that genes related to glutamate metabolism may be involved in the NRDE-induced neurotoxicity observed in the present mouse model. Highlights: The effects of nanoparticle-induced neurotoxicity remain unclear. We investigated the effect of exposure to nanoparticles on learning behavior. We found that exposure to nanoparticles impaired novel object recognition ability.
3D Object Recognition: Symmetry and Virtual Views
1992-12-01
MIT Artificial Intelligence Laboratory and Center for Biological and Computational Learning, A.I. Memo No. 1409 / C.B.C.L. Paper No. 76, December 1992. The research was done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences, and at the Artificial Intelligence Laboratory.
Bokov, Plamen; Mahut, Bruno; Flaud, Patrice; Delclaux, Christophe
2016-03-01
Respiratory diseases in children are a common reason for physician visits. A diagnostic difficulty arises when parents hear wheezing that is no longer present during the medical consultation. Thus, an outpatient objective tool for recognition of wheezing is of clinical value. We developed a wheezing recognition algorithm from recorded respiratory sounds with a Smartphone placed near the mouth. A total of 186 recordings were obtained in a pediatric emergency department, mostly in toddlers (mean age 20 months). After exclusion of recordings with artefacts and those with a single clinical operator auscultation, 95 recordings with the agreement of two operators on auscultation diagnosis (27 with wheezing and 68 without) were subjected to a two phase algorithm (signal analysis and pattern classifier using machine learning algorithms) to classify records. The best performance (71.4% sensitivity and 88.9% specificity) was observed with a Support Vector Machine-based algorithm. We further tested the algorithm over a set of 39 recordings having a single operator and found a fair agreement (kappa=0.28, CI95% [0.12, 0.45]) between the algorithm and the operator. The main advantage of such an algorithm is its use in contact-free sound recording, thus valuable in the pediatric population. Copyright © 2016 Elsevier Ltd. All rights reserved.
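Illustrative sketch of the two-phase approach (signal analysis followed by a machine-learning classifier): the spectral features and frequency bands below are stand-ins for the paper's feature set, and a support vector machine is used as in the best-performing configuration reported above.

    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    def spectral_features(audio, fs):
        """Summary features of the recording's spectrogram: a few band energies
        plus spectral-centroid statistics (illustrative, not the paper's features)."""
        f, t, S = spectrogram(audio, fs=fs, nperseg=1024)
        band = lambda lo, hi: S[(f >= lo) & (f < hi)].mean()
        centroid = (f[:, None] * S).sum(axis=0) / (S.sum(axis=0) + 1e-12)
        return np.array([band(100, 400), band(400, 1000), band(1000, 2500),
                         centroid.mean(), centroid.std()])

    def train_wheeze_classifier(recordings, labels, fs):
        """Fit a standardized SVM on per-recording spectral features."""
        X = np.array([spectral_features(a, fs) for a in recordings])
        return make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)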
Haettig, Jakob; Stefanko, Daniel P.; Multani, Monica L.; Figueroa, Dario X.; McQuown, Susan C.; Wood, Marcelo A.
2011-01-01
Transcription of genes required for long-term memory not only involves transcription factors, but also enzymatic protein complexes that modify chromatin structure. Chromatin-modifying enzymes, such as the histone acetyltransferase (HAT) CREB (cyclic-AMP response element binding) binding protein (CBP), are pivotal for the transcriptional regulation required for long-term memory. Several studies have shown that CBP and histone acetylation are necessary for hippocampus-dependent long-term memory and hippocampal long-term potentiation (LTP). Importantly, every genetically modified Cbp mutant mouse exhibits long-term memory impairments in object recognition. However, the role of the hippocampus in object recognition is controversial. To better understand how chromatin-modifying enzymes modulate long-term memory for object recognition, we first examined the role of the hippocampus in retrieval of long-term memory for object recognition or object location. Muscimol inactivation of the dorsal hippocampus prior to retrieval had no effect on long-term memory for object recognition, but completely blocked long-term memory for object location. This was consistent with experiments showing that muscimol inactivation of the hippocampus had no effect on long-term memory for the object itself, supporting the idea that the hippocampus encodes spatial information about an object (such as location or context), whereas cortical areas (such as the perirhinal or insular cortex) encode information about the object itself. Using location-dependent object recognition tasks that engage the hippocampus, we demonstrate that CBP is essential for the modulation of long-term memory via HDAC inhibition. Together, these results indicate that HDAC inhibition modulates memory in the hippocampus via CBP and that different brain regions utilize different chromatin-modifying enzymes to regulate learning and memory. PMID:21224411
Critical object recognition in millimeter-wave images with robustness to rotation and scale.
Mohammadzade, Hoda; Ghojogh, Benyamin; Faezi, Sina; Shabany, Mahdi
2017-06-01
Locating critical objects is crucial in various security applications and industries. For example, in security applications, such as in airports, these objects might be hidden or covered under shields or secret sheaths. Millimeter-wave images can be utilized to discover and recognize the critical objects out of the hidden cases without any health risk due to their non-ionizing features. However, millimeter-wave images usually have waves in and around the detected objects, making object recognition difficult. Thus, regular image processing and classification methods cannot be used for these images and additional pre-processings and classification methods should be introduced. This paper proposes a novel pre-processing method for canceling rotation and scale using principal component analysis. In addition, a two-layer classification method is introduced and utilized for recognition. Moreover, a large dataset of millimeter-wave images is collected and created for experiments. Experimental results show that a typical classification method such as support vector machines can recognize 45.5% of a type of critical objects at 34.2% false alarm rate (FAR), which is a drastically poor recognition. The same method within the proposed recognition framework achieves 92.9% recognition rate at 0.43% FAR, which indicates a highly significant improvement. The significant contribution of this work is to introduce a new method for analyzing millimeter-wave images based on machine vision and learning approaches, which is not yet widely noted in the field of millimeter-wave image analysis.
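One plausible reading of PCA-based rotation and scale cancellation is sketched below for a binary object mask: the principal axes of the foreground pixel coordinates define a canonical rotation and the major-axis spread defines the scale. This is an assumption-based illustration, not the paper's exact pre-processing.

    import numpy as np

    def normalize_pose(binary_mask):
        """Rotate and scale an object mask into a canonical pose using PCA."""
        ys, xs = np.nonzero(binary_mask)
        pts = np.stack([xs, ys], axis=1).astype(float)
        pts -= pts.mean(axis=0)                        # remove translation
        cov = np.cov(pts, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
        order = np.argsort(eigvals)[::-1]
        axes = eigvecs[:, order]
        aligned = pts @ axes                           # rotate onto the principal axes
        aligned /= np.sqrt(eigvals[order[0]]) + 1e-12  # normalize by major-axis spread
        return aligned                                 # canonical point coordinates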
Implicit multisensory associations influence voice recognition.
von Kriegstein, Katharina; Giraud, Anne-Lise
2006-10-01
Natural objects provide partially redundant information to the brain through different sensory modalities. For example, voices and faces both give information about the speech content, age, and gender of a person. Thanks to this redundancy, multimodal recognition is fast, robust, and automatic. In unimodal perception, however, only part of the information about an object is available. Here, we addressed whether, even under conditions of unimodal sensory input, crossmodal neural circuits that have been shaped by previous associative learning become activated and underpin a performance benefit. We measured brain activity with functional magnetic resonance imaging before, while, and after participants learned to associate either sensory redundant stimuli, i.e. voices and faces, or arbitrary multimodal combinations, i.e. voices and written names, ring tones, and cell phones or brand names of these cell phones. After learning, participants were better at recognizing unimodal auditory voices that had been paired with faces than those paired with written names, and association of voices with faces resulted in an increased functional coupling between voice and face areas. No such effects were observed for ring tones that had been paired with cell phones or names. These findings demonstrate that brief exposure to ecologically valid and sensory redundant stimulus pairs, such as voices and faces, induces specific multisensory associations. Consistent with predictive coding theories, associative representations become thereafter available for unimodal perception and facilitate object recognition. These data suggest that for natural objects effective predictive signals can be generated across sensory systems and proceed by optimization of functional connectivity between specialized cortical sensory modules.
Deep Learning for Computer Vision: A Brief Review
Doulamis, Nikolaos; Doulamis, Anastasios; Protopapadakis, Eftychios
2018-01-01
Over the last years deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein. PMID:29487619
Leibo, Joel Z.; Liao, Qianli; Freiwald, Winrich A.; Anselmi, Fabio; Poggio, Tomaso
2017-01-01
The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and robust against identity-preserving transformations like depth-rotations [1, 2]. Current computational models of object recognition, including recent deep learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple-complex cells operations [3, 4, 5, 6]. Here we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules generate approximate invariance to identity-preserving transformations at the top level of the processing hierarchy. However, all past models tested failed to reproduce the most salient property of an intermediate representation of a three-level face-processing hierarchy in the brain: mirror-symmetric tuning to head orientation [7]. Here we demonstrate that one specific biologically-plausible Hebb-type learning rule generates mirror-symmetric tuning to bilaterally symmetric stimuli like faces at intermediate levels of the architecture and show why it does so. Thus the tuning properties of individual cells inside the visual stream appear to result from group properties of the stimuli they encode and to reflect the learning rules that sculpted the information-processing system within which they reside. PMID:27916522
ERIC Educational Resources Information Center
Raghuveer, V. R.; Tripathy, B. K.
2012-01-01
With the advancements in the WWW and ICT, the e-learning domain has developed very fast. Even many educational institutions these days have shifted their focus towards the e-learning and mobile learning environments. However, from the quality of learning point of view, which is measured in terms of "active learning" taking place, the…
Younger and Older Adults Weigh Multiple Cues in a Similar Manner to Generate Judgments of Learning
Hines, Jarrod C.; Hertzog, Christopher; Touron, Dayna R.
2015-01-01
One's memory for past test performance (MPT) is a key piece of information individuals use when deciding how to restudy material. We used a multi-trial recognition memory task to examine adult age differences in the influence of MPT (measured by actual Trial 1 memory accuracy and subjective confidence judgments, CJs) along with Trial 1 judgments of learning (JOLs), objective and participant-estimated recognition fluencies, and Trial 2 study time on Trial 2 JOLs. We found evidence of simultaneous and independent influences of multiple objective and subjective (i.e., metacognitive) cues on Trial 2 JOLs, and these relationships were highly similar for younger and older adults. Individual differences in Trial 1 recognition accuracy and CJs on Trial 2 JOLs indicate that individuals may vary in the degree to which they rely on each MPT cue when assessing subsequent memory confidence. Aging appears to spare the ability to access multiple cues when making JOLs. PMID:25827630
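The multiple-cue logic can be illustrated with a simple per-participant regression of Trial 2 JOLs on the candidate cues, with standardized coefficients playing the role of cue weights. This is a schematic stand-in, not the analysis reported in the paper.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def cue_weights(trial1_accuracy, trial1_cj, trial1_jol, fluency, study_time, trial2_jol):
        """Estimate how strongly each cue predicts Trial 2 JOLs for one participant.

        Each argument is a 1D array over items; the returned standardized
        coefficients index the relative weight given to each cue.
        """
        X = np.column_stack([trial1_accuracy, trial1_cj, trial1_jol, fluency, study_time])
        X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)     # z-score the cues
        y = (trial2_jol - trial2_jol.mean()) / (trial2_jol.std() + 1e-12)
        return LinearRegression().fit(X, y).coef_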
Orientation estimation of anatomical structures in medical images for object recognition
NASA Astrophysics Data System (ADS)
Bağci, Ulaş; Udupa, Jayaram K.; Chen, Xinjian
2011-03-01
Recognition of anatomical structures is an important step in model based medical image segmentation. It provides pose estimation of objects and information about "where" roughly the objects are in the image, and distinguishes them from other object-like entities. In [1], we presented a general method of model-based multi-object recognition to assist in segmentation (delineation) tasks. It exploits the pose relationship that can be encoded, via the concept of ball scale (b-scale), between the binary training objects and their associated grey images. The goal was to place the model, in a single shot, close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. Unlike position and scale parameters, we observe that orientation parameters require more attention when estimating the pose of the model, as even small differences in orientation parameters can lead to inappropriate recognition. Motivated by the non-Euclidean nature of the pose information, we propose in this paper the use of non-Euclidean metrics to estimate the orientation of anatomical structures for more accurate recognition and segmentation. We statistically analyze and evaluate the following metrics for orientation estimation: Euclidean, Log-Euclidean, Root-Euclidean, Procrustes Size-and-Shape, and mean Hermitian metrics. The results show that the mean Hermitian and Cholesky decomposition metrics provide more accurate orientation estimates than the other Euclidean and non-Euclidean metrics.
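Two of the metrics listed above are easy to state concretely for symmetric positive-definite matrices, as sketched below; which matrix representation of orientation the paper applies them to is described in its text, so treat this purely as a computational illustration.

    import numpy as np
    from scipy.linalg import logm, sqrtm

    def log_euclidean_distance(A, B):
        """Log-Euclidean distance between symmetric positive-definite matrices."""
        # np.real discards negligible imaginary parts from numerical error.
        return np.linalg.norm(np.real(logm(A)) - np.real(logm(B)), ord="fro")

    def root_euclidean_distance(A, B):
        """Root-Euclidean distance: Frobenius distance between matrix square roots."""
        return np.linalg.norm(np.real(sqrtm(A)) - np.real(sqrtm(B)), ord="fro")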
Tc1 mouse model of trisomy-21 dissociates properties of short- and long-term recognition memory.
Hall, Jessica H; Wiseman, Frances K; Fisher, Elizabeth M C; Tybulewicz, Victor L J; Harwood, John L; Good, Mark A
2016-04-01
The present study examined memory function in Tc1 mice, a transchromosomic model of Down syndrome (DS). Tc1 mice demonstrated an unusual delay-dependent deficit in recognition memory. More specifically, Tc1 mice showed intact immediate (30sec), impaired short-term (10-min) and intact long-term (24-h) memory for objects. A similar pattern was observed for olfactory stimuli, confirming the generality of the pattern across sensory modalities. The specificity of the behavioural deficits in Tc1 mice was confirmed using APP overexpressing mice that showed the opposite pattern of object memory deficits. In contrast to object memory, Tc1 mice showed no deficit in either immediate or long-term memory for object-in-place information. Similarly, Tc1 mice showed no deficit in short-term memory for object-location information. The latter result indicates that Tc1 mice were able to detect and react to spatial novelty at the same delay interval that was sensitive to an object novelty recognition impairment. These results demonstrate (1) that novelty detection per se and (2) the encoding of visuo-spatial information was not disrupted in adult Tc1 mice. The authors conclude that the task specific nature of the short-term recognition memory deficit suggests that the trisomy of genes on human chromosome 21 in Tc1 mice impacts on (perirhinal) cortical systems supporting short-term object and olfactory recognition memory. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Self-recognition in pigeons revisited.
Uchino, Emiko; Watanabe, Shigeru
2014-11-01
Recognition of a self-image in a mirror is investigated using the mark test during which a mark is placed onto a point on the body that is not directly visible, and the presence or absence of self-directed behaviors is evaluated for the mirror-observing subjects. Great apes, dolphins, possibly elephants, and magpies have all passed the mark test, that is, displayed self-directed behaviors, whereas monkeys, crows, and other animals have failed the test even though they were able to use a mirror to find a not-directly-visible object. Self-directed behavior and mirror use are prerequisites of a successful mark test, and the absence of these behaviors may lead to false negative results. Epstein, Lanza, and Skinner (1981) reported self-directed behavior of pigeons in front of a mirror after explicit training of self-directed pecking and of pecking an object with the aid of a mirror, but certain other researchers could not confirm the results. The aim of the present study was to conduct the mark test with two pigeons that had received extensive training of the prerequisite behaviors. Crucial points of the training were identical topography (pecking) and the same reinforcement (food) in the prerequisite behaviors as well as sufficient training of these behaviors. After training for the prerequisite behaviors, both pigeons spontaneously integrated the learned self-directed and mirror-use behavior and displayed self-directed behavior in a mark test. This indicates that pigeons display mirror self-recognition after training of suitable ontogenetic contingency. © Society for the Experimental Analysis of Behavior.
Peñaloza, Claudia; Mirman, Daniel; Tuomiranta, Leena; Benetello, Annalisa; Heikius, Ida-Maria; Järvinen, Sonja; Majos, Maria C; Cardona, Pedro; Juncadella, Montserrat; Laine, Matti; Martin, Nadine; Rodríguez-Fornells, Antoni
2016-06-01
Recent research suggests that some people with aphasia preserve some ability to learn novel words and to retain them in the long-term. However, this novel word learning ability has been studied only in the context of single word-picture pairings. We examined the ability of people with chronic aphasia to learn novel words using a paradigm that presents new word forms together with a limited set of different possible visual referents and requires the identification of the correct word-object associations on the basis of online feedback. We also studied the relationship between word learning ability and aphasia severity, word processing abilities, and verbal short-term memory (STM). We further examined the influence of gross lesion location on new word learning. The word learning task was first validated with a group of forty-five young adults. Fourteen participants with chronic aphasia were administered the task and underwent tests of immediate and long-term recognition memory at 1 week. Their performance was compared to that of a group of fourteen matched controls using growth curve analysis. The learning curve and recognition performance of the aphasia group was significantly below the matched control group, although above-chance recognition performance and case-by-case analyses indicated that some participants with aphasia had learned the correct word-referent mappings. Verbal STM but not word processing abilities predicted word learning ability after controlling for aphasia severity. Importantly, participants with lesions in the left frontal cortex performed significantly worse than participants with lesions that spared the left frontal region both during word learning and on the recognition tests. Our findings indicate that some people with aphasia can preserve the ability to learn a small novel lexicon in an ambiguous word-referent context. This learning and recognition memory ability was associated with verbal STM capacity, aphasia severity and the integrity of the left inferior frontal region. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Robust Deep-Learning-Based Detector for Real-Time Tomato Plant Diseases and Pests Recognition.
Fuentes, Alvaro; Yoon, Sook; Kim, Sang Cheol; Park, Dong Sun
2017-09-04
Plant diseases and pests are a major challenge in the agriculture sector. Accurate and fast detection of diseases and pests in plants could help to develop early treatment techniques while substantially reducing economic losses. Recent developments in Deep Neural Networks have allowed researchers to drastically improve the accuracy of object detection and recognition systems. In this paper, we present a deep-learning-based approach to detect diseases and pests in tomato plants using images captured in-place by camera devices with various resolutions. Our goal is to find the most suitable deep-learning architecture for our task. Therefore, we consider three main families of detectors: Faster Region-based Convolutional Neural Network (Faster R-CNN), Region-based Fully Convolutional Network (R-FCN), and Single Shot Multibox Detector (SSD), which for the purpose of this work are called "deep learning meta-architectures". We combine each of these meta-architectures with "deep feature extractors" such as VGG net and Residual Network (ResNet). We demonstrate the performance of deep meta-architectures and feature extractors, and additionally propose a method for local and global class annotation and data augmentation to increase the accuracy and reduce the number of false positives during training. We train and test our systems end-to-end on our large Tomato Diseases and Pests Dataset, which contains challenging images with diseases and pests, including several inter- and extra-class variations, such as infection status and location in the plant. Experimental results show that our proposed system can effectively recognize nine different types of diseases and pests, with the ability to deal with complex scenarios from a plant's surrounding area.
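The pairing of a detection meta-architecture with a deep feature extractor can be illustrated with off-the-shelf components. The sketch below is not the authors' pipeline; it is a minimal, hedged example that combines torchvision's Faster R-CNN with a ResNet-50 backbone, where the nine-class label set, image size, and dummy annotations are assumptions made only for illustration.

```python
# Minimal sketch (not the paper's implementation): a Faster R-CNN detector with a
# ResNet-50 FPN feature extractor, adapted to a hypothetical 9-class disease/pest task.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 9 + 1  # nine assumed disease/pest categories plus background

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# One training step on a dummy batch; a real pipeline would iterate over an
# annotated tomato-image dataset with bounding boxes for each class.
images = [torch.rand(3, 600, 800)]
targets = [{"boxes": torch.tensor([[100., 120., 300., 340.]]),
            "labels": torch.tensor([3])}]
model.train()
losses = model(images, targets)          # dict of classification/regression losses
total_loss = sum(losses.values())
total_loss.backward()
```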
Cao, Yongqiang; Grossberg, Stephen; Markowitz, Jeffrey
2011-12-01
All primates depend for their survival on being able to rapidly learn about and recognize objects. Objects may be visually detected at multiple positions, sizes, and viewpoints. How does the brain rapidly learn and recognize objects while scanning a scene with eye movements, without causing a combinatorial explosion in the number of cells that are needed? How does the brain avoid the problem of erroneously classifying parts of different objects together at the same or different positions in a visual scene? In monkeys and humans, a key area for such invariant object category learning and recognition is the inferotemporal cortex (IT). A neural model is proposed to explain how spatial and object attention coordinate the ability of IT to learn invariant category representations of objects that are seen at multiple positions, sizes, and viewpoints. The model clarifies how interactions within a hierarchy of processing stages in the visual brain accomplish this. These stages include the retina, lateral geniculate nucleus, and cortical areas V1, V2, V4, and IT in the brain's What cortical stream, as they interact with spatial attention processes within the parietal cortex of the Where cortical stream. The model builds upon the ARTSCAN model, which proposed how view-invariant object representations are generated. The positional ARTSCAN (pARTSCAN) model proposes how the following additional processes in the What cortical processing stream also enable position-invariant object representations to be learned: IT cells with persistent activity, and a combination of normalizing object category competition and a view-to-object learning law which together ensure that unambiguous views have a larger effect on object recognition than ambiguous views. The model explains how such invariant learning can be fooled when monkeys, or other primates, are presented with an object that is swapped with another object during eye movements to foveate the original object. The swapping procedure is predicted to prevent the reset of spatial attention, which would otherwise keep the representations of multiple objects from being combined by learning. Li and DiCarlo (2008) have presented neurophysiological data from monkeys showing how unsupervised natural experience in a target swapping experiment can rapidly alter object representations in IT. The model quantitatively simulates the swapping data by showing how the swapping procedure fools the spatial attention mechanism. More generally, the model provides a unifying framework, and testable predictions in both monkeys and humans, for understanding object learning data using neurophysiological methods in monkeys, and spatial attention, episodic learning, and memory retrieval data using functional imaging methods in humans. Copyright © 2011 Elsevier Ltd. All rights reserved.
Khasnobish, Anwesha; Pal, Monalisa; Sardar, Dwaipayan; Tibarewala, D N; Konar, Amit
2016-08-01
This work is a preliminary study towards developing an alternative communication channel for conveying shape information to aid in recognition of items when tactile perception is hindered. Tactile data, acquired during object exploration by a sensor-fitted robot arm, are processed to recognize four basic geometric shapes. For each shape classified from the tactile data, a representative pattern is generated using microcontroller-driven vibration motors that deliver vibrotactile stimulation conveying the shape information. These motors are attached to the subject's arm, and the subjects' verbal responses are recorded to assess how well the system conveys shape information through vibrotactile stimulation. Object shapes are classified from tactile data with an average accuracy of 95.21%. Across three successive sessions of shape recognition from vibrotactile patterns, accuracy derived from the subjects' verbal responses increased from 75% to 95%, indicating that users learn the vibrotactile stimuli over sessions, which in turn increases the system's efficacy. The tactile sensing module and the vibrotactile pattern-generating module are integrated into a complete system whose operation is analysed in real time. Thus, the work demonstrates a successful implementation of a complete artificial tactile sensing system for object-shape recognition through vibrotactile stimulation.
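A hedged sketch of the two software stages described here, classifying tactile feature vectors into four geometric shapes and then mapping the predicted shape to a vibration-motor pattern, is given below. The feature dimensionality, classifier choice, and motor patterns are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: shape classification from tactile features followed by
# a lookup of a vibrotactile pattern for the predicted shape. Data are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
SHAPES = ["circle", "square", "triangle", "rectangle"]   # assumed label set
X = rng.normal(size=(200, 16))                           # assumed 16-D tactile features
y = rng.integers(0, len(SHAPES), size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("classification accuracy (synthetic data):", clf.score(X_te, y_te))

# Hypothetical mapping from shape class to on/off states of four vibration motors.
VIBRO_PATTERNS = {
    "circle":    [1, 0, 0, 0],
    "square":    [1, 1, 0, 0],
    "triangle":  [1, 1, 1, 0],
    "rectangle": [1, 1, 1, 1],
}
predicted = SHAPES[clf.predict(X_te[:1])[0]]
print("drive motors with pattern:", VIBRO_PATTERNS[predicted])
```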
Constrained Metric Learning by Permutation Inducing Isometries.
Bosveld, Joel; Mahmood, Arif; Huynh, Du Q; Noakes, Lyle
2016-01-01
The choice of metric critically affects the performance of classification and clustering algorithms. Metric learning algorithms attempt to improve performance, by learning a more appropriate metric. Unfortunately, most of the current algorithms learn a distance function which is not invariant to rigid transformations of images. Therefore, the distances between two images and their rigidly transformed pair may differ, leading to inconsistent classification or clustering results. We propose to constrain the learned metric to be invariant to the geometry preserving transformations of images that induce permutations in the feature space. The constraint that these transformations are isometries of the metric ensures consistent results and improves accuracy. Our second contribution is a dimension reduction technique that is consistent with the isometry constraints. Our third contribution is the formulation of the isometry constrained logistic discriminant metric learning (IC-LDML) algorithm, by incorporating the isometry constraints within the objective function of the LDML algorithm. The proposed algorithm is compared with the existing techniques on the publicly available labeled faces in the wild, viewpoint-invariant pedestrian recognition, and Toy Cars data sets. The IC-LDML algorithm has outperformed existing techniques for the tasks of face recognition, person identification, and object classification by a significant margin.
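The core isometry idea, constraining a learned Mahalanobis metric so that feature-space permutations induced by geometry-preserving image transformations leave distances unchanged, can be sketched by averaging a metric over the permutation group. This is a hedged illustration of the constraint itself, not the IC-LDML optimization; the flip-induced permutation and the toy metric are assumptions.

```python
# Sketch: make a learned Mahalanobis metric invariant to a group of feature
# permutations by averaging it over the group. Group averaging guarantees
# P.T @ M @ P == M for every permutation P in the group, so those permutations
# become isometries of the learned metric.
import numpy as np

d = 6
rng = np.random.default_rng(1)
A = rng.normal(size=(d, d))
M_raw = A @ A.T                                    # some learned positive semidefinite metric

flip = np.eye(d)[::-1]                             # permutation induced by, e.g., a horizontal flip
group = [np.eye(d), flip]                          # the two-element group {identity, flip}

M = sum(P.T @ M_raw @ P for P in group) / len(group)

def dist(x, y, M):
    diff = x - y
    return float(diff @ M @ diff)

x, y = rng.normal(size=d), rng.normal(size=d)
print(dist(x, y, M), dist(flip @ x, flip @ y, M))  # identical: the flip is an isometry of M
```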
Towards an Artificial Space Object Taxonomy
NASA Astrophysics Data System (ADS)
Wilkins, M.; Schumacher, P.; Jah, M.; Pfeffer, A.
2013-09-01
Object recognition is the first step in positively identifying a resident space object (RSO), i.e. assigning an RSO to a category such as GPS satellite or space debris. Object identification is the process of deciding that two RSOs are in fact one and the same. Provided we have appropriately defined a satellite taxonomy that allows us to place a given RSO into a particular class of object without any ambiguity, one can assess the probability of assignment to a particular class by determining how well the object satisfies the unique criteria of belonging to that class. Ultimately, tree-based taxonomies delineate unique signatures by defining the minimum amount of information required to positively identify a RSO. Therefore, taxonomic trees can be used to depict hypotheses in a Bayesian object recognition and identification process. This work describes a new RSO taxonomy along with specific reasoning behind the choice of groupings. An alternative taxonomy was recently presented at the Sixth Conference on Space Debris in Darmstadt, Germany. [1] The best example of a taxonomy that enjoys almost universal scientific acceptance is the classical Linnaean biological taxonomy. A strength of Linnaean taxonomy is that it can be used to organize the different kinds of living organisms, simply and practically. Every species can be given a unique name. This uniqueness and stability are a result of the acceptance by biologists specializing in taxonomy, not merely of the binomial names themselves. Fundamentally, the taxonomy is governed by rules for the use of these names, and these are laid down in formal Nomenclature Codes. We seek to provide a similar formal nomenclature system for RSOs through a defined tree-based taxonomy structure. Each categorization, beginning with the most general or inclusive, at any level is called a taxon. Taxon names are defined by a type, which can be a specimen or a taxon of lower rank, and a diagnosis, a statement intended to supply characters that differentiate the taxon from others with which it is likely to be confused. Each taxon will have a set of uniquely distinguishing features that will allow one to place a given object into a specific group without any ambiguity. When a new object does not fall into a specific taxon that is already defined, the entire tree structure will need to be evaluated to determine if a new taxon should be created. Ultimately, an online learning process to facilitate tree growth would be desirable. One can assess the probability of assignment to a particular taxon by determining how well the object satisfies the unique criteria of belonging to that taxon. Therefore, we can use taxonomic trees in a Bayesian process to assign prior probabilities to each of our object recognition and identification hypotheses. We will show that this taxonomy is robust by demonstrating specific stressing classification examples. We will also demonstrate how to implement this taxonomy in Figaro, an open source probabilistic programming language.
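The Bayesian use of a taxonomic tree described here, treating each taxon as a hypothesis with a prior and updating it by how well an observed RSO satisfies the taxon's diagnostic criteria, can be sketched in a few lines. The taxa, priors, and likelihood tables below are invented for illustration; the paper's own implementation uses the Figaro probabilistic programming language rather than this Python stand-in.

```python
# Hedged sketch: posterior probability of taxon membership for a resident space
# object, given per-taxon likelihoods of the observed evidence. Taxa and numbers
# are hypothetical placeholders, not the taxonomy proposed in the paper.
PRIORS = {                       # prior probability of each leaf taxon
    "active_payload/GPS": 0.10,
    "active_payload/other": 0.30,
    "rocket_body": 0.20,
    "debris": 0.40,
}

LIKELIHOOD_TABLES = {            # P(feature is observed | taxon), invented values
    "active_payload/GPS":   {"semi_synchronous_orbit": 0.95, "stable_attitude": 0.9},
    "active_payload/other": {"semi_synchronous_orbit": 0.10, "stable_attitude": 0.8},
    "rocket_body":          {"semi_synchronous_orbit": 0.05, "stable_attitude": 0.3},
    "debris":               {"semi_synchronous_orbit": 0.05, "stable_attitude": 0.1},
}

def likelihood(taxon, evidence):
    """How well the observed features satisfy the taxon's diagnostic criteria."""
    p = 1.0
    for feature, observed in evidence.items():
        p_true = LIKELIHOOD_TABLES[taxon][feature]
        p *= p_true if observed else (1.0 - p_true)
    return p

evidence = {"semi_synchronous_orbit": True, "stable_attitude": True}
unnorm = {t: PRIORS[t] * likelihood(t, evidence) for t in PRIORS}
Z = sum(unnorm.values())
posterior = {t: p / Z for t, p in unnorm.items()}
print(posterior)   # most of the mass falls on the GPS-like taxon for this evidence
```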
ERIC Educational Resources Information Center
Yang, Mu; Lewis, Freeman C.; Sarvi, Michael S.; Foley, Gillian M.; Crawley, Jacqueline N.
2015-01-01
Chromosomal 16p11.2 deletion syndrome frequently presents with intellectual disabilities, speech delays, and autism. Here we investigated the Dolmetsch line of 16p11.2 heterozygous (+/-) mice on a range of cognitive tasks with different neuroanatomical substrates. Robust novel object recognition deficits were replicated in two cohorts of 16p11.2…
Music causes deterioration of source memory: evidence from normal ageing.
El Haj, Mohamad; Omigie, Diana; Clément, Sylvain
2014-01-01
Previous research has shown that music exposure can impair a wide variety of cognitive and behavioural performance. We investigated whether this is the case for source memory. Forty-one younger adults and 35 healthy elderly were required to retain the location in which pictures of coloured objects were displayed. On a subsequent recognition test they were required to decide whether the objects were displayed in the same location as before or not. Encoding took place (a) in silence, (b) while listening to street noise, or (c) while listening to Vivaldi's "Four Seasons". Recognition always took place during silence. A significant reduction in source memory was observed following music exposure, a reduction that was more pronounced for older adults than for younger adults. This pattern was significantly correlated with performance on an executive binding task. The exposure to music appeared to interfere with binding in working memory, worsening source recall.
Central administration of angiotensin IV rapidly enhances novel object recognition among mice.
Paris, Jason J; Eans, Shainnel O; Mizrachi, Elisa; Reilley, Kate J; Ganno, Michelle L; McLaughlin, Jay P
2013-07-01
Angiotensin IV (Val(1)-Tyr(2)-Ile(3)-His(4)-Pro(5)-Phe(6)) has demonstrated potential cognitive-enhancing effects. The present investigation assessed and characterized: (1) dose-dependency of angiotensin IV's cognitive enhancement in a C57BL/6J mouse model of novel object recognition, (2) the time-course for these effects, (3) the identity of residues in the hexapeptide important to these effects and (4) the necessity of actions at angiotensin IV receptors for procognitive activity. Assessment of C57BL/6J mice in a novel object recognition task demonstrated that prior administration of angiotensin IV (0.1, 1.0, or 10.0, but not 0.01 nmol, i.c.v.) significantly enhanced novel object recognition in a dose-dependent manner. These effects were time dependent, with improved novel object recognition observed when angiotensin IV (0.1 nmol, i.c.v.) was administered 10 or 20, but not 30 min prior to the onset of the novel object recognition testing. An alanine scan of the angiotensin IV peptide revealed that replacement of the Val(1), Ile(3), His(4), or Phe(6) residues with Ala attenuated peptide-induced improvements in novel object recognition, whereas Tyr(2) or Pro(5) replacement did not significantly affect performance. Administration of the angiotensin IV receptor antagonist, divalinal-Ang IV (20 nmol, i.c.v.), reduced (but did not abolish) novel object recognition; however, this antagonist completely blocked the procognitive effects of angiotensin IV (0.1 nmol, i.c.v.) in this task. Rotorod testing demonstrated no locomotor effects with any angiotensin IV or divalinal-Ang IV dose tested. These data demonstrate that angiotensin IV produces a rapid enhancement of associative learning and memory performance in a mouse model that was dependent on the angiotensin IV receptor. Copyright © 2013 Elsevier Ltd. All rights reserved.
Word-to-picture recognition is a function of motor components mappings at the stage of retrieval.
Brouillet, Denis; Brouillet, Thibaut; Milhau, Audrey; Heurley, Loïc; Vagnot, Caroline; Brunel, Lionel
2016-10-01
Embodied approaches to cognition argue that retrieval involves the re-enactment of both the sensory and the motor components of the to-be-remembered material. In this study, we investigated the effect of the motor action performed to produce the response in a recognition task when this action is compatible with the affordance of the objects to be recognised. In our experiment, participants were first asked to learn a list of words referring to graspable objects, and then told to make recognition judgements on pictures. The pictures represented objects whose graspable part pointed either to the same side as, or to the opposite side of, the "Yes" response key. Results show a robust effect of compatibility between object affordance and response hand. Moreover, this compatibility improves participants' discrimination ability, suggesting that motor components are a relevant cue for memory judgements at the stage of retrieval in a recognition task. More broadly, our data highlight that memory judgements are a function of motor component mappings at the stage of retrieval. © 2015 International Union of Psychological Science.
Improving Learning Outcomes: The iPad and Preschool Children with Disabilities
Chmiliar, Linda
2017-01-01
The digital age has reached early childhood, and the use of touch screens by young children is common place. Research on the use of touch screen tablets with young children is becoming more prevalent; however, less information is available on the use of touch screen tablets to support young children with disabilities. Touch screen tablets may offer possibilities to preschool children with disabilities to participate in learning in a digital way. The iPad provides easy interaction on the touch screen and access to a multitude of engaging early learning applications. This paper summarizes a pilot study with 8 young children with disabilities included in a preschool classroom, who were given iPads to use in class and at home for a period of 21 weeks. Systematic observations, classroom assessments, and teacher and parent interviews documented the improvements in learning outcomes for each child in many areas including, but not limited to: shape and color recognition, letter recognition, and tracing letters throughout six research cycles. PMID:28529493
Eye-tracking the time-course of novel word learning and lexical competition in adults and children.
Weighall, A R; Henderson, L M; Barr, D J; Cairney, S A; Gaskell, M G
2017-04-01
Lexical competition is a hallmark of proficient, automatic word recognition. Previous research suggests that there is a delay before a new spoken word becomes engaged in this process, with sleep playing an important role. However, data from one method - the visual world paradigm - consistently show competition without a delay. We trained 42 adults and 40 children (aged 7-8) on novel word-object pairings, and employed this paradigm to measure the time-course of lexical competition. Fixations to novel objects upon hearing existing words (e.g., looks to the novel object biscal upon hearing "click on the biscuit") were compared to fixations on untrained objects. Novel word-object pairings learned immediately before testing and those learned the previous day exhibited significant competition effects, with stronger competition for the previous day pairings for children but not adults. Crucially, this competition effect was significantly smaller for novel than existing competitors (e.g., looks to candy upon hearing "click on the candle"), suggesting that novel items may not compete for recognition like fully-fledged lexical items, even after 24h. Explicit memory (cued recall) was superior for words learned the day before testing, particularly for children; this effect (but not the lexical competition effects) correlated with sleep-spindle density. Together, the results suggest that different aspects of new word learning follow different time courses: visual world competition effects can emerge swiftly, but are qualitatively different from those observed with established words, and are less reliant upon sleep. Furthermore, the findings fit with the view that word learning earlier in development is boosted by sleep to a greater degree. Copyright © 2016. Published by Elsevier Inc.
Hierarchical Context Modeling for Video Event Recognition.
Wang, Xiaoyang; Ji, Qiang
2016-10-11
Current video event recognition research remains largely target-centered. For real-world surveillance videos, target-centered event recognition faces great challenges due to large intra-class target variation, limited image resolution, and poor detection and tracking results. To mitigate these challenges, we introduce a context-augmented video event recognition approach. Specifically, we explicitly capture different types of contexts from three levels including image level, semantic level, and prior level. At the image level, we introduce two types of contextual features including the appearance context features and interaction context features to capture the appearance of context objects and their interactions with the target objects. At the semantic level, we propose a deep model based on deep Boltzmann machine to learn event object representations and their interactions. At the prior level, we utilize two types of prior-level contexts including scene priming and dynamic cueing. Finally, we introduce a hierarchical context model that systematically integrates the contextual information at different levels. Through the hierarchical context model, contexts at different levels jointly contribute to the event recognition. We evaluate the hierarchical context model for event recognition on benchmark surveillance video datasets. Results show that incorporating contexts in each level can improve event recognition performance, and jointly integrating three levels of contexts through our hierarchical model achieves the best performance.
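The hierarchical integration of image-level, semantic-level, and prior-level context can be sketched as a simple log-linear fusion of per-level event scores. This is an illustrative stand-in, not the deep Boltzmann machine model used in the paper, and the event labels, scores, and weights are assumed.

```python
# Illustrative sketch: fuse per-level context scores for each candidate event label.
# The three levels follow the paper's decomposition (image, semantic, prior), but the
# scores, weights, and softmax fusion rule here are assumptions for demonstration.
import math

EVENTS = ["loading", "unloading", "carrying"]

image_scores    = {"loading": 1.2, "unloading": 0.4, "carrying": 0.1}  # appearance/interaction features
semantic_scores = {"loading": 0.8, "unloading": 0.9, "carrying": 0.2}  # object/interaction representation
prior_scores    = {"loading": 0.5, "unloading": 0.1, "carrying": 0.3}  # scene priming / dynamic cueing

weights = {"image": 1.0, "semantic": 0.7, "prior": 0.5}

def fused_posterior():
    logits = {
        e: (weights["image"] * image_scores[e]
            + weights["semantic"] * semantic_scores[e]
            + weights["prior"] * prior_scores[e])
        for e in EVENTS
    }
    Z = sum(math.exp(v) for v in logits.values())
    return {e: math.exp(v) / Z for e, v in logits.items()}

print(fused_posterior())
```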
Identifying images of handwritten digits using deep learning in H2O
NASA Astrophysics Data System (ADS)
Sadhasivam, Jayakumar; Charanya, R.; Kumar, S. Harish; Srinivasan, A.
2017-11-01
Automatic digit recognition is of wide interest today, and deep learning techniques make object recognition in image data possible. Recognizing digits has become a fundamental component of real-world applications. Because digits are written in many different styles, machine learning methods are needed to recognize and classify them. This study is based on a supervised learning vector quantization neural network, a type of artificial neural network. Images of digits are acquired, trained on, and tested: once the network is built, it is trained using the training dataset vectors, and testing is applied to digit images that are isolated from one another by segmenting the image and resizing each digit image accordingly for better accuracy.
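Since the title refers to H2O's deep learning module, a hedged sketch of training a digit classifier with H2O's Python API is given below. The file paths, the "label" column name, and the network configuration are placeholders rather than the authors' actual setup.

```python
# Sketch: train a feed-forward deep learning model on MNIST-style CSVs with H2O.
# Paths and the "label" column are assumed placeholders; tune hidden/epochs as needed.
import h2o
from h2o.estimators.deeplearning import H2ODeepLearningEstimator

h2o.init()                                        # starts or connects to a local H2O backend
train = h2o.import_file("mnist_train.csv")        # placeholder path: 784 pixel columns + "label"
test = h2o.import_file("mnist_test.csv")          # placeholder path
train["label"] = train["label"].asfactor()        # treat the target as categorical
test["label"] = test["label"].asfactor()

features = [c for c in train.columns if c != "label"]
model = H2ODeepLearningEstimator(hidden=[128, 128], epochs=10,
                                 activation="RectifierWithDropout")
model.train(x=features, y="label", training_frame=train, validation_frame=test)
print(model.model_performance(test))
```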
Lee, Choong‐Hee; Ryu, Jungwon; Lee, Sang‐Hun; Kim, Hakjin
2016-01-01
The hippocampus plays critical roles in both object-based event memory and spatial navigation, but it is largely unknown whether the left and right hippocampi play functionally equivalent roles in these cognitive domains. To examine the hemispheric symmetry of human hippocampal functions, we used an fMRI scanner to measure BOLD activity while subjects performed tasks requiring both object-based event memory and spatial navigation in a virtual environment. Specifically, the subjects were required to form object-place paired associate memory after visiting four buildings containing discrete objects in a virtual plus maze. The four buildings were visually identical, and the subjects used distal visual cues (i.e., scenes) to differentiate the buildings. During testing, the subjects were required to identify one of the buildings when cued with a previously associated object, and when shifted to a random place, the subject was expected to navigate to the previously chosen building. We observed that the BOLD activity foci changed from the left hippocampus to the right hippocampus as task demand changed from identifying a previously seen object (object-cueing period) to searching for its paired-associate place (object-cued place recognition period). Furthermore, the efficient retrieval of object-place paired associate memory (object-cued place recognition period) was correlated with the BOLD response of the left hippocampus, whereas the efficient retrieval of relatively pure spatial memory (spatial memory period) was correlated with the right hippocampal BOLD response. These findings suggest that the left and right hippocampi in humans might process qualitatively different information for remembering episodic events in space. © 2016 The Authors Hippocampus Published by Wiley Periodicals, Inc. PMID:27009679
Implicit Multisensory Associations Influence Voice Recognition
von Kriegstein, Katharina; Giraud, Anne-Lise
2006-01-01
Natural objects provide partially redundant information to the brain through different sensory modalities. For example, voices and faces both give information about the speech content, age, and gender of a person. Thanks to this redundancy, multimodal recognition is fast, robust, and automatic. In unimodal perception, however, only part of the information about an object is available. Here, we addressed whether, even under conditions of unimodal sensory input, crossmodal neural circuits that have been shaped by previous associative learning become activated and underpin a performance benefit. We measured brain activity with functional magnetic resonance imaging before, while, and after participants learned to associate either sensory redundant stimuli, i.e. voices and faces, or arbitrary multimodal combinations, i.e. voices and written names, ring tones, and cell phones or brand names of these cell phones. After learning, participants were better at recognizing unimodal auditory voices that had been paired with faces than those paired with written names, and association of voices with faces resulted in an increased functional coupling between voice and face areas. No such effects were observed for ring tones that had been paired with cell phones or names. These findings demonstrate that brief exposure to ecologically valid and sensory redundant stimulus pairs, such as voices and faces, induces specific multisensory associations. Consistent with predictive coding theories, associative representations become thereafter available for unimodal perception and facilitate object recognition. These data suggest that for natural objects effective predictive signals can be generated across sensory systems and proceed by optimization of functional connectivity between specialized cortical sensory modules. PMID:17002519
Transfer learning for visual categorization: a survey.
Shao, Ling; Zhu, Fan; Li, Xuelong
2015-05-01
Regular machine learning and data mining techniques study the training data for future inferences under the major assumption that the future data are within the same feature space or have the same distribution as the training data. However, due to the limited availability of human-labeled training data, training data that stay in the same feature space or have the same distribution as the future data cannot be guaranteed to be sufficient to avoid the over-fitting problem. In real-world applications, apart from data in the target domain, related data in a different domain can also be included to expand the availability of our prior knowledge about the target future data. Transfer learning addresses such cross-domain learning problems by extracting useful information from data in a related domain and transferring it for use in target tasks. In recent years, with transfer learning being applied to visual categorization, some typical problems, e.g., view divergence in action recognition tasks and concept drift in image classification tasks, can be efficiently solved. In this paper, we survey state-of-the-art transfer learning algorithms in visual categorization applications, such as object recognition, image classification, and human action recognition.
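One standard transfer-learning recipe for visual categorization, reusing features learned in a source domain and retraining only a small target-specific classifier, can be sketched as follows. This is a generic illustration rather than any one of the surveyed algorithms, and the class count and data batch are stand-ins.

```python
# Sketch of a common transfer-learning recipe: reuse a network pretrained on a
# source domain (ImageNet) and retrain only a new classification head for the target task.
import torch
import torch.nn as nn
import torchvision

NUM_TARGET_CLASSES = 10                      # assumed target-domain label count

model = torchvision.models.resnet18(weights="DEFAULT")
for p in model.parameters():                 # freeze the source-domain feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)   # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.rand(8, 3, 224, 224)          # stand-in for a target-domain batch
labels = torch.randint(0, NUM_TARGET_CLASSES, (8,))
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```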
Attempting to "Increase Intake from the Input": Attention and Word Learning in Children with Autism.
Tenenbaum, Elena J; Amso, Dima; Righi, Giulia; Sheinkopf, Stephen J
2017-06-01
Previous work has demonstrated that social attention is related to early language abilities. We explored whether we can facilitate word learning among children with autism by directing attention to areas of the scene that have been demonstrated as relevant for successful word learning. We tracked eye movements to faces and objects while children watched videos of a woman teaching them new words. Test trials measured participants' recognition of these novel word-object pairings. Results indicate that for children with autism and typically developing children, pointing to the speaker's mouth while labeling a novel object impaired performance, likely because it distracted participants from the target object. In contrast, for children with autism, holding the object close to the speaker's mouth improved performance.
Establishing guardrails in leadership.
Kerfoot, Karlene
2005-01-01
Shared leadership/governance offers the best environment for growth of the professional staff and for leaders and managers. Tisch (2004) writes that the power of the partnership begins with the recognition that no one can operate effectively in a vacuum and concludes with the premise that partnerships can redefine the traditional business relationships and transform them from adversarial to cooperative. This happens when the road is clearly delineated and guardrails are put in place as reminders of where the car should be on the road. With an atmosphere of learning and partnering to learn more, the leader's job becomes that of teacher, and mentor, and everyone is aligned around the journey to excellence with guardrails in place to monitor the journey.
ERIC Educational Resources Information Center
Andersen, Per N.; Egeland, Jens; Øie, Merete
2013-01-01
There are relatively few studies on learning and delayed memory with attention-deficit/hyperactivity disorder (ADHD). The objective of the present study was to examine acquisition, free delayed memory, and recognition skills in medication naive children and adolescents aged 8-16 years with ADHD combined subtype (36 participants) and inattentive…
Spoken word recognition by Latino children learning Spanish as their first language*
HURTADO, NEREYDA; MARCHMAN, VIRGINIA A.; FERNALD, ANNE
2010-01-01
Research on the development of efficiency in spoken language understanding has focused largely on middle-class children learning English. Here we extend this research to Spanish-learning children (n=49; M=2;0; range=1;3–3;1) living in the USA in Latino families from primarily low socioeconomic backgrounds. Children looked at pictures of familiar objects while listening to speech naming one of the objects. Analyses of eye movements revealed developmental increases in the efficiency of speech processing. Older children and children with larger vocabularies were more efficient at processing spoken language as it unfolds in real time, as previously documented with English learners. Children whose mothers had less education tended to be slower and less accurate than children of comparable age and vocabulary size whose mothers had more schooling, consistent with previous findings of slower rates of language learning in children from disadvantaged backgrounds. These results add to the cross-linguistic literature on the development of spoken word recognition and to the study of the impact of socioeconomic status (SES) factors on early language development. PMID:17542157
A Social and Self-Reflective Approach to MALL
ERIC Educational Resources Information Center
Ros i Sole, Cristina; Calic, Jelena; Neijmann, Daisy
2010-01-01
There is a growing recognition that learning is increasingly taking place on the move and located beyond educational environments, "in the gaps of daily life" (Sharples et al., 2007). And yet, language learners have mostly been perceived as being fixed in particular contexts, whether in the educational environment, abroad, or in their…
NASA Technical Reports Server (NTRS)
Yakimovsky, Y.
1974-01-01
An approach to simultaneous interpretation of objects in complex structures so as to maximize a combined utility function is presented. Results of the application of a computer software system to assign meaning to regions in a segmented image based on the principles described in this paper and on a special interactive sequential classification learning system, which is referenced, are demonstrated.
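The idea of interpreting all regions of a segmented image simultaneously so that a combined utility is maximized can be illustrated with a tiny brute-force search over joint labelings. The regions, labels, and utility values below are invented for illustration; the original system used a far more sophisticated interactive sequential classification learner.

```python
# Toy sketch: choose labels for segmented regions jointly, maximizing a combined
# utility = per-region evidence + pairwise compatibility between adjacent regions.
from itertools import product

REGIONS = ["r1", "r2", "r3"]
LABELS = ["sky", "road", "vehicle"]
ADJACENT = [("r1", "r2"), ("r2", "r3")]

unary = {                                   # hypothetical per-region label utilities
    "r1": {"sky": 2.0, "road": 0.1, "vehicle": 0.2},
    "r2": {"sky": 0.2, "road": 1.5, "vehicle": 0.8},
    "r3": {"sky": 0.1, "road": 0.6, "vehicle": 1.4},
}
pairwise = {                                # hypothetical compatibility utilities
    ("sky", "road"): 1.0, ("road", "vehicle"): 1.2, ("sky", "vehicle"): 0.2,
}

def compat(a, b):
    return pairwise.get((a, b), pairwise.get((b, a), 0.5 if a == b else 0.0))

def utility(assignment):
    u = sum(unary[r][assignment[r]] for r in REGIONS)
    u += sum(compat(assignment[a], assignment[b]) for a, b in ADJACENT)
    return u

best = max((dict(zip(REGIONS, combo)) for combo in product(LABELS, repeat=len(REGIONS))),
           key=utility)
print(best, utility(best))
```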
Chen, C L Philip; Liu, Zhulin
2018-01-01
The Broad Learning System (BLS), which aims to offer an alternative way of learning in deep structure, is proposed in this paper. Deep structures and their learning algorithms suffer from a time-consuming training process because of the large number of connecting parameters in filters and layers. Moreover, a complete retraining process is required if the structure is not sufficient to model the system. The BLS is established in the form of a flat network, where the original inputs are transferred and placed as "mapped features" in feature nodes and the structure is expanded in the wide sense through "enhancement nodes." Incremental learning algorithms are developed for fast remodeling in broad expansion without a retraining process when the network needs to be expanded. Two incremental learning algorithms are given, one for the increment of the feature nodes (or filters in deep structure) and one for the increment of the enhancement nodes. The designed model and algorithms are very versatile for selecting a model rapidly. In addition, another incremental learning algorithm is developed for the case in which a system that has already been modeled encounters new incoming inputs; specifically, the system can be remodeled incrementally without retraining from the beginning. A satisfactory model-reduction result using singular value decomposition is also presented to simplify the final structure. Compared with existing deep neural networks, experimental results on the Modified National Institute of Standards and Technology database and the NYU NORB object recognition dataset demonstrate the effectiveness of the proposed BLS.
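A minimal sketch of the flat BLS structure, random feature-mapping nodes, nonlinear enhancement nodes, and output weights solved in closed form by ridge-regularized least squares, is given below. The node counts, ridge parameter, and toy data are assumed, and the incremental-learning updates described in the paper are omitted.

```python
# Minimal Broad Learning System sketch: mapped feature nodes + enhancement nodes,
# with output weights computed by ridge-regularized least squares (no retraining loop).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                       # toy inputs
Y = np.eye(3)[rng.integers(0, 3, size=500)]          # one-hot toy targets

n_map, n_enh, lam = 40, 60, 1e-2                     # assumed node counts / ridge term

W_map = rng.normal(size=(X.shape[1], n_map))         # random mapping to feature nodes
Z = X @ W_map                                        # "mapped features"
W_enh = rng.normal(size=(n_map, n_enh))
H = np.tanh(Z @ W_enh)                               # nonlinear "enhancement nodes"

A = np.hstack([Z, H])                                # broad (flat) expansion
W_out = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)

pred = (A @ W_out).argmax(axis=1)
print("training accuracy:", (pred == Y.argmax(axis=1)).mean())
```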
[Influence of object material and inter-trial interval on novel object recognition test in mice].
Li, Sheng-jian; Huang, Zhu-yan; Ye, Yi-lu; Yu, Yue-ping; Zhang, Wei-ping; Wei, Er-qing; Zhang, Qi
2014-05-01
To investigate the efficacy of the novel object recognition (NOR) test in the assessment of learning and memory ability in ICR mice under different experimental conditions. One hundred and thirty male ICR mice were randomly divided into 10 groups: 4 groups for different inter-trial intervals (ITI: 10 min, 90 min, 4 h, 24 h), 4 groups for different object materials (wood-wood, plastic-plastic, plastic-wood, wood-plastic) and 2 groups for repeated testing (measured once a day or once every 3 days, three times in total in each group). The locomotor tracks in the open field were recorded. The amount of time spent exploring the novel and familiar objects, the discrimination ratio (DR) and the discrimination index (DI) were analyzed. Compared with the familiar object, the DR and DI for the novel object were both increased at ITIs of 10 min and 90 min (P<0.01). Exploration time, DR and DI were strongly influenced by the object material, whereas DR and DI remained stable when identical object materials were used. The NOR test could be repeated in the same batch of mice. The NOR test can therefore be used to assess learning and memory ability in mice at shorter ITIs and with identical materials, and it can be administered repeatedly.
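For reference, the two recognition measures reported here are commonly computed from the raw exploration times roughly as sketched below. The exact definitions used in this study are not given in the abstract, so these formulas are the conventional ones and should be treated as an assumption.

```python
# Conventional novel-object-recognition measures (assumed definitions):
#   discrimination ratio DR = t_novel / (t_novel + t_familiar)
#   discrimination index DI = (t_novel - t_familiar) / (t_novel + t_familiar)
def discrimination_ratio(t_novel, t_familiar):
    return t_novel / (t_novel + t_familiar)

def discrimination_index(t_novel, t_familiar):
    return (t_novel - t_familiar) / (t_novel + t_familiar)

print(discrimination_ratio(30.0, 20.0))   # 0.6 -> preference for the novel object
print(discrimination_index(30.0, 20.0))   # 0.2
```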
Yang, Kechun; Broussard, John I; Levine, Amber T; Jenson, Daniel; Arenkiel, Benjamin R; Dani, John A
2017-01-01
Physiological and behavioral evidence supports that dopamine (DA) receptor signaling influences hippocampal function. While several recent studies examined how DA influences CA1 plasticity and learning, there are fewer studies investigating the influence of DA signaling to the dentate gyrus. The dentate gyrus receives convergent cortical input through the perforant path fiber tracts and has been conceptualized to detect novelty in spatial memory tasks. To test whether DA-receptor activity influences novelty-detection, we used a novel object recognition (NOR) task where mice remember previously presented objects as an indication of learning. Although DA innervation arises from other sources and the main DA signaling may be from those sources, our molecular approaches verified that midbrain dopaminergic fibers also sparsely innervate the dentate gyrus. During the NOR task, wild-type mice spent significantly more time investigating novel objects rather than previously observed objects. Dentate granule cells in slices cut from those mice showed an increased AMPA/NMDA-receptor current ratio indicative of potentiated synaptic transmission. Post-training injection of a D1-like receptor antagonist not only effectively blocked the preference for the novel objects, but also prevented the increased AMPA/NMDA ratio. Consistent with that finding, neither NOR learning nor the increase in the AMPA/NMDA ratio were observed in DA-receptor KO mice under the same experimental conditions. The results indicate that DA-receptor signaling contributes to the successful completion of the NOR task and to the associated synaptic plasticity of the dentate gyrus that likely contributes to the learning. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Extra virgin olive oil improves learning and memory in SAMP8 mice.
Farr, Susan A; Price, Tulin O; Dominguez, Ligia J; Motisi, Antonio; Saiano, Filippo; Niehoff, Michael L; Morley, John E; Banks, William A; Ercal, Nuran; Barbagallo, Mario
2012-01-01
Polyphenols are potent antioxidants found in extra virgin olive oil (EVOO); antioxidants have been shown to reverse age- and disease-related learning and memory deficits. We examined the effects of EVOO on learning and memory in SAMP8 mice, an age-related learning/memory impairment model associated with increased amyloid-β protein and brain oxidative damage. We administered EVOO, coconut oil, or butter to 11 month old SAMP8 mice for 6 weeks. Mice were tested in T-maze foot shock avoidance and one-trial novel object recognition with a 24 h delay. Mice which received EVOO had improved acquisition in the T-maze and spent more time with the novel object in one-trial novel object recognition versus mice which received coconut oil or butter. Mice that received EVOO had improved T-maze retention compared to the mice that received butter. EVOO increased brain glutathione levels suggesting reduced oxidative stress as a possible mechanism. These effects plus increased glutathione reductase activity, superoxide dismutase activity, and decreased tissue levels of 4-hydroxynonenal and 3-nitrotyrosine were enhanced with enriched EVOO (3 × and 5 × polyphenols concentration). Our findings suggest that EVOO has beneficial effects on learning and memory deficits found in aging and diseases, such as those related to the overproduction of amyloid-β protein, by reversing oxidative damage in the brain, effects that are augmented with increasing concentrations of polyphenols in EVOO.
Leibo, Joel Z; Liao, Qianli; Anselmi, Fabio; Freiwald, Winrich A; Poggio, Tomaso
2017-01-09
The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and robust against identity-preserving transformations, like depth rotations [1, 2]. Current computational models of object recognition, including recent deep-learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple-complex cells operations [3-6]. Here, we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules generate approximate invariance to identity-preserving transformations at the top level of the processing hierarchy. However, all past models tested failed to reproduce the most salient property of an intermediate representation of a three-level face-processing hierarchy in the brain: mirror-symmetric tuning to head orientation [7]. Here, we demonstrate that one specific biologically plausible Hebb-type learning rule generates mirror-symmetric tuning to bilaterally symmetric stimuli, like faces, at intermediate levels of the architecture and show why it does so. Thus, the tuning properties of individual cells inside the visual stream appear to result from group properties of the stimuli they encode and to reflect the learning rules that sculpted the information-processing system within which they reside. Copyright © 2017 Elsevier Ltd. All rights reserved.
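The mechanism argued for here, that Hebbian plasticity driven by a bilaterally symmetric stimulus ensemble yields units whose tuning is (anti)symmetric under mirror reflection, can be sketched with a normalized Hebbian/Oja-style rule on a flip-augmented toy dataset. The stimuli and dimensions are synthetic; this illustrates the learning-rule property rather than the authors' model of the face-patch hierarchy.

```python
# Sketch (synthetic data): a normalized Hebbian/Oja-style update trained on a
# mirror-symmetric stimulus ensemble. Because the input covariance commutes with
# the flip operator, the learned weight vector becomes (anti)symmetric under the
# flip, i.e. it shows mirror-symmetric tuning.
import numpy as np

rng = np.random.default_rng(0)
d = 10
u = rng.normal(size=d)                               # a dominant stimulus direction
base = 3.0 * rng.normal(size=(2000, 1)) * u + rng.normal(size=(2000, d))
X = np.vstack([base, base[:, ::-1]])                 # augment with mirror-reflected copies
rng.shuffle(X)

w = rng.normal(size=d)
w /= np.linalg.norm(w)
eta = 1e-3
for _ in range(5):                                   # a few passes suffice for this toy case
    for x in X:
        w += eta * (w @ x) * x                       # Hebbian update
        w /= np.linalg.norm(w)                       # Oja-style normalization constraint

symmetry = abs(w @ w[::-1])                          # 1.0 = perfectly (anti)symmetric tuning
print("mirror-symmetry index:", round(float(symmetry), 3))
```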
ERIC Educational Resources Information Center
Austerweil, Joseph L.; Griffiths, Thomas L.; Palmer, Stephen E.
2017-01-01
How does the visual system recognize images of a novel object after a single observation despite possible variations in the viewpoint of that object relative to the observer? One possibility is comparing the image with a prototype for invariance over a relevant transformation set (e.g., translations and dilations). However, invariance over…
Villar, María Eugenia; Martinez, María Cecilia; Lopes da Cunha, Pamela; Ballarini, Fabricio; Viola, Haydee
2017-02-01
With the aim of analyzing if object recognition long-term memory (OR-LTM) formation is susceptible to retroactive interference (RI), we submitted rats to sequential sample sessions using the same arena but changing the identity of a pair of objects placed in it. Separate groups of animals were tested in the arena in order to evaluate the LTM for these objects. Our results suggest that OR-LTM formation was retroactively interfered within a critical time window by the exploration of a new, but not familiar, object. This RI acted on the consolidation of the object explored in the first sample session because its OR-STM measured 3h after training was not affected, whereas the OR-LTM measured at 24h was impaired. This sample session also impaired the expression of OR memory when it took place before the test. Moreover, local inactivation of the dorsal Hippocampus (Hp) or the medial Prefrontal Cortex (mPFC) previous to the exploration of the second pair of objects impaired their consolidation restoring the LTM for the objects explored in the first session. This data suggests that both brain regions are involved in the processing of OR-memory and also that if those regions are engaged in another process before finishing the first consolidation process its LTM will be impaired by RI. Copyright © 2016 Elsevier Inc. All rights reserved.
Flores-Balter, Gabriela; Cordova-Jadue, Héctor; Chiti-Morales, Alessandra; Lespay, Carolyne; Espina-Marchant, Pablo; Falcon, Romina; Grinspun, Noemi; Sanchez, Jessica; Bustamante, Diego; Morales, Paola; Herrera-Marschitz, Mario; Valdés, José L
2016-10-15
Perinatal asphyxia (PA) is associated with long-term neuronal damage and cognitive deficits in adulthood, such as learning and memory disabilities. After PA, specific brain regions are compromised, including neocortex, hippocampus, basal ganglia, and ascending neuromodulatory pathways, such as dopamine system, explaining some of the cognitive disabilities. We hypothesize that other neuromodulatory systems, such as histamine system from the tuberomammillary nucleus (TMN), which widely project to telencephalon, shown to be relevant for learning and memory, may be compromised by PA. We investigated here the effect of PA on (i) Density and neuronal activity of TMN neurons by double immunoreactivity for adenosine deaminase (ADA) and c-Fos, as marker for histaminergic neurons and neuronal activity respectively. (ii) Expression of the histamine-synthesizing enzyme, histidine decarboxylase (HDC) by western blot and (iii) thioperamide an H3 histamine receptor antagonist, on an object recognition memory task. Asphyxia-exposed rats showed a decrease of ADA density and c-Fos activity in TMN, and decrease of HDC expression in hypothalamus. Asphyxia-exposed rats also showed a low performance in object recognition memory compared to caesarean-delivered controls, which was reverted in a dose-dependent manner by the H3 antagonist thioperamide (5-10mg/kg, i.p.). The present results show that the histaminergic neuronal system of the TMN is involved in the long-term effects induced by PA, affecting learning and memory. Copyright © 2016 Elsevier B.V. All rights reserved.
Place preference and vocal learning rely on distinct reinforcers in songbirds.
Murdoch, Don; Chen, Ruidong; Goldberg, Jesse H
2018-04-30
In reinforcement learning (RL) agents are typically tasked with maximizing a single objective function such as reward. But it remains poorly understood how agents might pursue distinct objectives at once. In machines, multiobjective RL can be achieved by dividing a single agent into multiple sub-agents, each of which is shaped by agent-specific reinforcement, but it remains unknown if animals adopt this strategy. Here we use songbirds to test if navigation and singing, two behaviors with distinct objectives, can be differentially reinforced. We demonstrate that strobe flashes aversively condition place preference but not song syllables. Brief noise bursts aversively condition song syllables but positively reinforce place preference. Thus distinct behavior-generating systems, or agencies, within a single animal can be shaped by correspondingly distinct reinforcement signals. Our findings suggest that spatially segregated vocal circuits can solve a credit assignment problem associated with multiobjective learning.
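The machine-learning analogy drawn here, a single agent split into sub-agents that are each updated only by their own reinforcement signal, can be sketched with two independent bandit learners: a "place" agent shaped by strobe-like aversive feedback and a "song" agent shaped only by noise-burst feedback. The environment, action sets, and reward values are hypothetical.

```python
# Toy sketch of multi-objective RL via two sub-agents with separate reinforcers.
# Each sub-agent runs its own value update and never sees the other's feedback signal.
import random

random.seed(0)

class BanditAgent:
    def __init__(self, n_actions, lr=0.1, eps=0.1):
        self.q = [0.0] * n_actions
        self.lr, self.eps = lr, eps
    def act(self):
        if random.random() < self.eps:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda a: self.q[a])
    def update(self, a, r):
        self.q[a] += self.lr * (r - self.q[a])

place_agent = BanditAgent(2)   # 0 = left perch, 1 = right perch (assumed)
song_agent = BanditAgent(2)    # 0 = syllable variant A, 1 = variant B (assumed)

for _ in range(2000):
    perch = place_agent.act()
    syllable = song_agent.act()
    place_reward = -1.0 if perch == 0 else 0.0      # strobe only on the left (assumed)
    song_reward = -1.0 if syllable == 1 else 0.0    # noise burst only on variant B (assumed)
    place_agent.update(perch, place_reward)         # each agency gets only its own signal
    song_agent.update(syllable, song_reward)

print("place values:", place_agent.q, "song values:", song_agent.q)
```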
Automatic lip reading by using multimodal visual features
NASA Astrophysics Data System (ADS)
Takahashi, Shohei; Ohya, Jun
2013-12-01
Speech recognition has been studied for a long time, but it does not work well in noisy places such as cars or trains. In addition, people who are hearing-impaired or have difficulty hearing cannot benefit from speech recognition. To recognize speech automatically, visual information is also important: people understand speech not only from audio information, but also from visual information such as temporal changes in lip shape. A vision-based speech recognition method could work well in noisy places and could also be useful for people with hearing disabilities. In this paper, we propose an automatic lip-reading method for recognizing speech by using multimodal visual information, without using any audio information. First, the Active Shape Model (ASM) is used to track and detect the face and lips in a video sequence. Second, the shape, optical flow and spatial frequencies of the lip features are extracted from the lips detected by the ASM. Next, the extracted multimodal features are ordered chronologically, and a Support Vector Machine is used to learn and classify the spoken words. Experiments on classifying several words show promising results for the proposed method.
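A hedged sketch of the feature-extraction and classification stages (dense optical flow over a lip region, features stacked over time, SVM classification) follows. The ASM-based lip tracking step is replaced here by an assumed, already-cropped lip region, and the clips, labels, and summary features are synthetic placeholders rather than the authors' feature set.

```python
# Illustrative pipeline sketch: optical-flow features from a lip region + SVM classifier.
# The lip region is assumed to be pre-cropped (the paper tracks it with an ASM).
import numpy as np
import cv2
from sklearn.svm import SVC

def flow_features(frames):
    """Mean Farneback optical flow (dx, dy) between consecutive lip-region frames."""
    feats = []
    for prev, cur in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, cur, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        feats.extend([flow[..., 0].mean(), flow[..., 1].mean()])
    return np.array(feats)

# Synthetic stand-in data: 40 "utterances" of 9 grayscale lip frames each, 2 word classes.
rng = np.random.default_rng(0)
clips = rng.integers(0, 256, size=(40, 9, 32, 32), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)

X = np.stack([flow_features(list(clip)) for clip in clips])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy on synthetic data:", clf.score(X, labels))
```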
Chuk, Tim; Chan, Antoni B; Hsiao, Janet H
2017-12-01
The hidden Markov model (HMM)-based approach for eye movement analysis is able to reflect individual differences in both spatial and temporal aspects of eye movements. Here we used this approach to understand the relationship between eye movements during face learning and recognition, and its association with recognition performance. We discovered holistic (i.e., mainly looking at the face center) and analytic (i.e., specifically looking at the two eyes in addition to the face center) patterns during both learning and recognition. Although for both learning and recognition, participants who adopted analytic patterns had better recognition performance than those with holistic patterns, a significant positive correlation between the likelihood of participants' patterns being classified as analytic and their recognition performance was only observed during recognition. Significantly more participants adopted holistic patterns during learning than recognition. Interestingly, about 40% of the participants used different patterns between learning and recognition, and among them 90% switched their patterns from holistic at learning to analytic at recognition. In contrast to the scan path theory, which posits that eye movements during learning have to be recapitulated during recognition for the recognition to be successful, participants who used the same or different patterns during learning and recognition did not differ in recognition performance. The similarity between their learning and recognition eye movement patterns also did not correlate with their recognition performance. These findings suggested that perceptuomotor memory elicited by eye movement patterns during learning does not play an important role in recognition. In contrast, the retrieval of diagnostic information for recognition, such as the eyes for face recognition, is a better predictor for recognition performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
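The HMM-based approach to eye-movement analysis described here can be sketched with an off-the-shelf Gaussian HMM fitted to fixation coordinates, where hidden states act as data-driven regions of interest (e.g., eyes versus face centre). The number of states, the synthetic fixations, and the use of hmmlearn are assumptions for illustration, not the authors' toolbox.

```python
# Sketch: fit a Gaussian HMM to fixation sequences; hidden states act as data-driven
# regions of interest, and transition probabilities capture temporal scan patterns.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Synthetic fixations (x, y) drawn around two assumed ROIs: face centre and left eye.
centre = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(60, 2))
eye = rng.normal(loc=[-0.3, 0.4], scale=0.05, size=(40, 2))
fixations = np.vstack([centre, eye])
lengths = [10] * 10                     # ten trials of ten fixations each

model = GaussianHMM(n_components=2, covariance_type="full", random_state=0)
model.fit(fixations, lengths)

print("ROI means:\n", model.means_)
print("transition matrix:\n", np.round(model.transmat_, 2))
states = model.predict(fixations[:10])  # decode the first trial's fixation sequence
print("decoded states:", states)
```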
A Scientific Workflow Platform for Generic and Scalable Object Recognition on Medical Images
NASA Astrophysics Data System (ADS)
Möller, Manuel; Tuot, Christopher; Sintek, Michael
In the research project THESEUS MEDICO, we aim at a system that combines medical image information with semantic background knowledge from ontologies to give clinicians fully cross-modal access to biomedical image repositories. This requires joint efforts in more than one dimension: object detection processes have to be specified in which abstraction proceeds from low-level image features, through landmark detection that utilizes abstract domain knowledge, up to high-level object recognition. We propose a system based on a client-server extension of the scientific workflow platform Kepler that supports the collaboration of medical experts and computer scientists during development and parameter learning.
Dries, Daniel R; Dean, Diane M; Listenberger, Laura L; Novak, Walter R P; Franzen, Margaret A; Craig, Paul A
2017-01-02
A thorough understanding of the molecular biosciences requires the ability to visualize and manipulate molecules in order to interpret results or to generate hypotheses. While many instructors in biochemistry and molecular biology use visual representations, few indicate that they explicitly teach visual literacy. One reason is the need for a list of core content and competencies to guide a more deliberate instruction in visual literacy. We offer here the second stage in the development of one such resource for biomolecular three-dimensional visual literacy. We present this work with the goal of building a community for online resource development and use. In the first stage, overarching themes were identified and submitted to the biosciences community for comment: atomic geometry; alternate renderings; construction/annotation; het group recognition; molecular dynamics; molecular interactions; monomer recognition; symmetry/asymmetry recognition; structure-function relationships; structural model skepticism; and topology and connectivity. Herein, the overarching themes have been expanded to include a 12th theme (macromolecular assemblies), 27 learning goals, and more than 200 corresponding objectives, many of which cut across multiple overarching themes. The learning goals and objectives offered here provide educators with a framework on which to map the use of molecular visualization in their classrooms. In addition, the framework may also be used by biochemistry and molecular biology educators to identify gaps in coverage and drive the creation of new activities to improve visual literacy. This work represents the first attempt, to our knowledge, to catalog a comprehensive list of explicit learning goals and objectives in visual literacy. © 2016 by The International Union of Biochemistry and Molecular Biology, 45(1):69-75, 2017. © 2016 The Authors Biochemistry and Molecular Biology Education published by Wiley Periodicals, Inc. on behalf of International Union of Biochemistry and Molecular Biology.
ERIC Educational Resources Information Center
Herold, Katherine H.; Akhtar, Nameera
2008-01-01
Young children's ability to learn something new from a third-party interaction may be related to the ability to imagine themselves in the third-party interaction. This imaginative ability presupposes an understanding of self-other equivalence, which is manifested in an objective understanding of the self and an understanding of others' subjective…
Mesa-Gresa, Patricia; Pérez-Martinez, Asunción; Redolat, Rosa
2013-01-01
Environmental enrichment (EE) is an experimental paradigm in which rodents are housed in complex environments containing objects that provide stimulation, the effects of which are expected to improve the welfare of these subjects. EE has been shown to considerably improve learning and memory in rodents. However, knowledge about the effects of EE on social interaction is generally limited and rather controversial. Thus, our aim was to evaluate both novel object recognition and agonistic behavior in NMRI mice receiving EE, hypothesizing enhanced cognition and slightly enhanced agonistic interaction upon EE rearing. During a 4-week period half the mice (n = 16) were exposed to EE and the other half (n = 16) remained in a standard environment (SE). On PND 56-57, animals performed the object recognition test, in which recognition memory was measured using a discrimination index. The social interaction test consisted of an encounter between an experimental animal and a standard opponent. Results indicated that EE mice explored the new object for longer periods than SE animals (P < .05). During social encounters, EE mice devoted more time to sociability and agonistic behavior (P < .05) than their non-EE counterparts. In conclusion, EE has been shown to improve object recognition and increase agonistic behavior in adolescent/early adulthood mice. In the future we intend to extend this study on a longitudinal basis in order to assess in more depth the effect of EE and the consistency of the above-mentioned observations in NMRI mice. Copyright © 2013 Wiley Periodicals, Inc.
Knowledge-based object recognition for different morphological classes of plants
NASA Astrophysics Data System (ADS)
Brendel, Thorsten; Schwanke, Joerg; Jensch, Peter F.; Megnet, Roland
1995-01-01
Micropropagation of plants is done by cutting juvenile plants and placing the pieces into special container boxes with nutrient solution, where they can grow and be cut again several times. To produce large amounts of biomass, plant micropropagation must be carried out by a robotic system. In this paper we describe parts of the vision system that recognizes plants and their particular cutting points. This requires extracting elements of the plants and the relations between these elements (for example root, shoot, leaf). Different species vary in their morphological appearance, and variation is also inherent among plants of the same species. We therefore introduce several morphological classes of plants for which we expect the same recognition methods to apply. As a result of our work we present rules that help users create specific algorithms for object recognition of plant species.
Khellal, Atmane; Ma, Hongbin; Fei, Qing
2018-05-09
The success of Deep Learning models, notably convolutional neural networks (CNNs), makes them the favorable solution for object recognition systems in both visible and infrared domains. However, the lack of training data in the case of maritime ship research leads to poor performance due to the problem of overfitting. In addition, the back-propagation algorithm used to train CNNs is very slow and requires tuning many hyperparameters. To overcome these weaknesses, we introduce a new approach fully based on the Extreme Learning Machine (ELM) to learn useful CNN features and perform a fast and accurate classification, which is suitable for infrared-based recognition systems. The proposed approach combines an ELM-based learning algorithm to train the CNN for discriminative feature extraction and an ELM-based ensemble for classification. The experimental results on the VAIS dataset, which is the largest dataset of maritime ships, confirm that the proposed approach outperforms the state-of-the-art models in terms of generalization performance and training speed. For instance, the proposed model is up to 950 times faster than the traditional back-propagation based training of convolutional neural networks, primarily for low-level feature extraction.
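The ELM component can be illustrated with a short numpy sketch: hidden-layer weights are drawn at random and kept fixed, and only the output weights are solved in closed form with a pseudoinverse. This is a generic illustration on synthetic feature vectors, not the authors' CNN-ELM model, and the function names are hypothetical.

# Sketch of the Extreme Learning Machine idea: random hidden layer,
# closed-form output weights via the Moore-Penrose pseudoinverse.
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=200, n_classes=2):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, fixed input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden activations
    T = np.eye(n_classes)[y]                      # one-hot targets
    beta = np.linalg.pinv(H) @ T                  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Synthetic two-class data standing in for CNN features of ship images.
X = rng.normal(size=(300, 64))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
W, b, beta = elm_train(X, y)
print("training accuracy:", (elm_predict(X, W, b, beta) == y).mean())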
Joint object and action recognition via fusion of partially observable surveillance imagery data
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir; Chan, Alex L.
2017-05-01
Partially observable group activities (POGA) occurring in confined spaces are epitomized by their limited observability of the objects and actions involved. In many POGA scenarios, different objects are being used by human operators for the conduct of various operations. In this paper, we describe the ontology of such POGA in the context of In-Vehicle Group Activity (IVGA) recognition. Initially, we describe the virtue of ontology modeling in the context of IVGA and show how such an ontology and a priori knowledge about the classes of in-vehicle activities can be fused to infer human actions, which consequently leads to an understanding of human activity inside the confined space of a vehicle. We treat the "action-object" problem as a duality problem. We postulate a correlation between observed human actions and the object that is being utilized within those actions; conversely, if an object being handled is recognized, we may be able to expect a number of actions that are likely to be performed on that object. In this study, we use partially observable human postural sequences to recognize actions. Inspired by the learning capability of convolutional neural networks (CNNs), we present an architecture design using a new CNN model to learn "action-object" perception from surveillance videos. We apply a sequential Deep Hidden Markov Model (DHMM) as a post-processor to the CNN to decode realized observations into recognized actions and activities. To generate the imagery data set needed for training and testing these new methods, we use the IRIS virtual simulation software to generate high-fidelity, dynamic animated scenarios that depict in-vehicle group activities under different operational contexts. The results of our comparative investigation are discussed and presented in detail.
The use of global image characteristics for neural network pattern recognition
NASA Astrophysics Data System (ADS)
Kulyas, Maksim O.; Kulyas, Oleg L.; Loshkarev, Aleksey S.
2017-04-01
We consider a recognition system in which information is conveyed by images of symbols captured by a television camera. Coefficients of the two-dimensional Fourier transform, generated in a special way, serve as object descriptors. A single-layer neural network trained on reference images is used to solve the classification task. Fast learning of the neural network is applied, with the neuron coefficients computed in a single calculation.
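A hedged sketch of the general recipe, assuming numpy and synthetic symbol images: magnitudes of low-frequency two-dimensional Fourier coefficients serve as translation-invariant global descriptors, and a single-layer softmax classifier is trained on them. The paper's specific coefficient construction and training scheme are not reproduced.

# Sketch: global 2-D Fourier descriptors feeding a single-layer classifier.
import numpy as np

rng = np.random.default_rng(0)

def fourier_descriptor(img, k=4):
    """Magnitudes of the k x k lowest-frequency 2-D Fourier coefficients."""
    F = np.fft.fft2(img)
    return np.abs(F[:k, :k]).ravel()

def make_symbol(label, size=16):
    img = np.zeros((size, size))
    if label == 0:
        img[size // 2, :] = 1.0          # horizontal bar
    else:
        img[:, size // 2] = 1.0          # vertical bar
    return img + rng.normal(scale=0.05, size=img.shape)

X = np.array([fourier_descriptor(make_symbol(i % 2)) for i in range(200)])
y = np.arange(200) % 2

# Single-layer softmax classifier trained by plain gradient descent.
W = np.zeros((X.shape[1], 2))
for _ in range(500):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = X.T @ (p - np.eye(2)[y]) / len(y)
    W -= 0.1 * grad
print("training accuracy:", (np.argmax(X @ W, axis=1) == y).mean())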
ERIC Educational Resources Information Center
Werenicz, Aline; Christoff, Raissa R.; Blank, Martina; Jobim, Paulo F. C.; Pedroso, Thiago R.; Reolon, Gustavo K.; Schroder, Nadja; Roesler, Rafael
2012-01-01
Here we show that administration of the phosphodiesterase type 4 (PDE4) inhibitor rolipram into the basolateral complex of the amygdala (BLA) at a specific time interval after training enhances memory consolidation and induces memory persistence for novel object recognition (NOR) in rats. Intra-BLA infusion of rolipram immediately, 1.5 h, or 6 h…
Yang, Mu; Lewis, Freeman C; Sarvi, Michael S; Foley, Gillian M; Crawley, Jacqueline N
2015-12-01
Chromosomal 16p11.2 deletion syndrome frequently presents with intellectual disabilities, speech delays, and autism. Here we investigated the Dolmetsch line of 16p11.2 heterozygous (+/-) mice on a range of cognitive tasks with different neuroanatomical substrates. Robust novel object recognition deficits were replicated in two cohorts of 16p11.2+/- mice, confirming previous findings. A similarly robust deficit in object location memory was discovered in +/-, indicating impaired spatial novelty recognition. Generalizability of novelty recognition deficits in +/- mice extended to preference for social novelty. Robust learning deficits and cognitive inflexibility were detected using Bussey-Saksida touchscreen operant chambers. During acquisition of pairwise visual discrimination, +/- mice required significantly more training trials to reach criterion than wild-type littermates (+/+), and made more errors and correction errors than +/+. In the reversal phase, all +/+ reached criterion, whereas most +/- failed to reach criterion by the 30-d cutoff. Contextual and cued fear conditioning were normal in +/-. These cognitive phenotypes may be relevant to some aspects of cognitive impairments in humans with 16p11.2 deletion, and support the use of 16p11.2+/- mice as a model system for discovering treatments for cognitive impairments in 16p11.2 deletion syndrome. © 2015 Yang et al.; Published by Cold Spring Harbor Laboratory Press.
Model Based Usability Heuristics for Constructivist E-Learning
ERIC Educational Resources Information Center
Katre, Dinesh S.
2007-01-01
Many e-learning applications and games have been studied to identify the common interaction models of constructivist learning, namely: 1. Move the object to appropriate location; 2. Place objects in appropriate order and location(s); 3. Click to identify; 4. Change the variable factors to observe the effects; and 5. System personification and…
Davis, Tyler; Love, Bradley C.; Preston, Alison R.
2012-01-01
Category learning is a complex phenomenon that engages multiple cognitive processes, many of which occur simultaneously and unfold dynamically over time. For example, as people encounter objects in the world, they simultaneously engage processes to determine their fit with current knowledge structures, gather new information about the objects, and adjust their representations to support behavior in future encounters. Many techniques that are available to understand the neural basis of category learning assume that the multiple processes that subserve it can be neatly separated between different trials of an experiment. Model-based functional magnetic resonance imaging offers a promising tool to separate multiple, simultaneously occurring processes and bring the analysis of neuroimaging data more in line with category learning’s dynamic and multifaceted nature. We use model-based imaging to explore the neural basis of recognition and entropy signals in the medial temporal lobe and striatum that are engaged while participants learn to categorize novel stimuli. Consistent with theories suggesting a role for the anterior hippocampus and ventral striatum in motivated learning in response to uncertainty, we find that activation in both regions correlates with a model-based measure of entropy. Simultaneously, separate subregions of the hippocampus and striatum exhibit activation correlated with a model-based recognition strength measure. Our results suggest that model-based analyses are exceptionally useful for extracting information about cognitive processes from neuroimaging data. Models provide a basis for identifying the multiple neural processes that contribute to behavior, and neuroimaging data can provide a powerful test bed for constraining and testing model predictions. PMID:22746951
Object Recognition using Feature- and Color-Based Methods
NASA Technical Reports Server (NTRS)
Duong, Tuan; Duong, Vu; Stubberud, Allen
2008-01-01
An improved adaptive method of processing image data in an artificial neural network has been developed to enable automated, real-time recognition of possibly moving objects under changing (including suddenly changing) conditions of illumination and perspective. The method involves a combination of two prior object-recognition methods, one based on adaptive detection of shape features and one based on adaptive color segmentation, to enable recognition in situations in which either prior method by itself may be inadequate. The chosen prior feature-based method is known as adaptive principal-component analysis (APCA); the chosen prior color-based method is known as adaptive color segmentation (ACOSE). These methods are made to interact with each other in a closed-loop system to obtain an optimal solution of the object-recognition problem in a dynamic environment. One result of the interaction is to increase, beyond what would otherwise be possible, the accuracy of the determination of a region of interest (containing an object that one seeks to recognize) within an image. Another result is to provide a minimized adaptive step that can be used to update the results obtained by the two component methods when changes of color and apparent shape occur. The net effect is to enable the neural network to update its recognition output and improve its recognition capability via an adaptive learning sequence. In principle, the improved method could readily be implemented in integrated circuitry to make a compact, low-power, real-time object-recognition system. It has been proposed to demonstrate the feasibility of such a system by integrating a 256-by-256 active-pixel sensor with APCA, ACOSE, and neural processing circuitry on a single chip. It has been estimated that such a system on a chip would have a volume no larger than a few cubic centimeters, could operate at a rate as high as 1,000 frames per second, and would consume on the order of milliwatts of power.
Bonin, Patrick; Guillemard-Tsaparina, Diana; Méot, Alain
2013-09-01
We report object-naming and object recognition times collected from Russian native speakers for the colorized version of the Snodgrass and Vanderwart (Journal of Experimental Psychology: Human Learning and Memory 6:174-215, 1980) pictures (Rossion & Pourtois, Perception 33:217-236, 2004). New norms for image variability, body-object interaction [BOI], and subjective frequency collected in Russian, as well as new name agreement scores for the colorized pictures in French, are also reported. In both object-naming and object comprehension times, the name agreement, image agreement, and age-of-acquisition variables made significant independent contributions. Objective word frequency was reliable in object-naming latencies only. The variables of image variability, BOI, and subjective frequency were not significant in either object naming or object comprehension. Finally, imageability was reliable in both tasks. The new norms and object-naming and object recognition times are provided as supplemental materials.
Staffaroni, Adam M; Melrose, Rebecca J; Leskin, Lorraine P; Riskin-Jones, Hannah; Harwood, Dylan; Mandelkern, Mark; Sultzer, David L
2017-09-01
The objective of this study was to distinguish the functional neuroanatomy of verbal learning and recognition in Alzheimer's disease (AD) using the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) Word Learning task. In 81 Veterans diagnosed with dementia due to AD, we conducted a cluster-based correlation analysis to assess the relationships between recency and recognition memory scores from the CERAD Word Learning Task and cortical metabolic activity measured using [18F]-fluoro-2-deoxy-D-glucose positron emission tomography (FDG-PET). AD patients (Mini-Mental State Examination, MMSE mean = 20.2) performed significantly better on the recall of recency items during learning trials than of primacy and middle items. Recency memory was associated with cerebral metabolism in the left middle and inferior temporal gyri and left fusiform gyrus (p < .05 at the corrected cluster level). In contrast, recognition memory was correlated with metabolic activity in two clusters: (a) a large cluster that included the left hippocampus, parahippocampal gyrus, entorhinal cortex, anterior temporal lobe, and inferior and middle temporal gyri; (b) the bilateral orbitofrontal cortices (OFC). The present study further informs our understanding of the disparate functional neuroanatomy of recency memory and recognition memory in AD. We anticipated that the recency effect would be relatively preserved and associated with temporoparietal brain regions implicated in short-term verbal memory, while recognition memory would be associated with the medial temporal lobe and possibly the OFC. Consistent with our a priori hypotheses, list learning in our AD sample was characterized by a reduced primacy effect and a relatively spared recency effect; however, recency memory was associated with cerebral metabolism in inferior and lateral temporal regions associated with the semantic memory network, rather than regions associated with short-term verbal memory. The correlates of recognition memory included the medial temporal lobe and OFC, replicating prior studies.
Orthographic recognition in late adolescents: an assessment through event-related brain potentials.
González-Garrido, Andrés Antonio; Gómez-Velázquez, Fabiola Reveca; Rodríguez-Santillán, Elizabeth
2014-04-01
Reading speed and efficiency are achieved through the automatic recognition of written words. Difficulties in learning and recognizing the orthography of words can arise despite reiterative exposure to texts. This study aimed to investigate, in native Spanish-speaking late adolescents, how different levels of orthographic knowledge might result in behavioral and event-related brain potential differences during the recognition of orthographic errors. Forty-five healthy high school students were selected and divided into 3 equal groups (High, Medium, Low) according to their performance on a 5-test battery of orthographic knowledge. All participants performed an orthographic recognition task consisting of the sequential presentation of a picture (object, fruit, or animal) followed by a correctly, or incorrectly, written word (orthographic mismatch) that named the picture just shown. Electroencephalogram (EEG) recording took place simultaneously. Behavioral results showed that the Low group had a significantly lower number of correct responses and increased reaction times while processing orthographical errors. Tests showed significant positive correlations between higher performance on the experimental task and faster and more accurate reading. The P150 and P450 components showed higher voltages in the High group when processing orthographic errors, whereas N170 seemed less lateralized to the left hemisphere in the lower orthographic performers. Also, trials with orthographic errors elicited a frontal P450 component that was only evident in the High group. The present results show that higher levels of orthographic knowledge correlate with high reading performance, likely because of faster and more accurate perceptual processing, better visual orthographic representations, and top-down supervision, as the event-related brain potential findings seem to suggest.
Do capuchin monkeys (Cebus apella) diagnose causal relations in the absence of a direct reward?
Edwards, Brian J; Rottman, Benjamin M; Shankar, Maya; Betzler, Riana; Chituc, Vladimir; Rodriguez, Ricardo; Silva, Liara; Wibecan, Leah; Widness, Jane; Santos, Laurie R
2014-01-01
We adapted a method from developmental psychology to explore whether capuchin monkeys (Cebus apella) would place objects on a "blicket detector" machine to diagnose causal relations in the absence of a direct reward. Across five experiments, monkeys could place different objects on the machine and obtain evidence about the objects' causal properties based on whether each object "activated" the machine. In Experiments 1-3, monkeys received both audiovisual cues and a food reward whenever the machine activated. In these experiments, monkeys spontaneously placed objects on the machine and succeeded at discriminating various patterns of statistical evidence. In Experiments 4 and 5, we modified the procedure so that in the learning trials, monkeys received the audiovisual cues when the machine activated, but did not receive a food reward. In these experiments, monkeys failed to test novel objects in the absence of an immediate food reward, even when doing so could provide critical information about how to obtain a reward in future test trials in which the food reward delivery device was reattached. The present studies suggest that the gap between human and animal causal cognition may be in part a gap of motivation. Specifically, we propose that monkey causal learning is motivated by the desire to obtain a direct reward, and that unlike humans, monkeys do not engage in learning for learning's sake.
Learning the 3-D structure of objects from 2-D views depends on shape, not format
Tian, Moqian; Yamins, Daniel; Grill-Spector, Kalanit
2016-01-01
Humans can learn to recognize new objects just from observing example views. However, it is unknown what structural information enables this learning. To address this question, we manipulated the amount of structural information given to subjects during unsupervised learning by varying the format of the trained views. We then tested how format affected participants' ability to discriminate similar objects across views that were rotated 90° apart. We found that, after training, participants' performance increased and generalized to new views in the same format. Surprisingly, the improvement was similar across line drawings, shape from shading, and shape from shading + stereo even though the latter two formats provide richer depth information compared to line drawings. In contrast, participants' improvement was significantly lower when training used silhouettes, suggesting that silhouettes do not have enough information to generate a robust 3-D structure. To test whether the learned object representations were format-specific or format-invariant, we examined if learning novel objects from example views transfers across formats. We found that learning objects from example line drawings transferred to shape from shading and vice versa. These results have important implications for theories of object recognition because they suggest that (a) learning the 3-D structure of objects does not require rich structural cues during training as long as shape information of internal and external features is provided and (b) learning generates shape-based object representations independent of the training format. PMID:27153196
Hopkins, Michael E.; Bucci, David J.
2010-01-01
Physical exercise induces widespread neurobiological adaptations and improves learning and memory. Most research in this field has focused on hippocampus-based spatial tasks and changes in brain-derived neurotrophic factor (BDNF) as a putative substrate underlying exercise-induced cognitive improvements. Chronic exercise can also be anxiolytic and causes adaptive changes in stress reactivity. The present study employed a perirhinal cortex-dependent object recognition task as well as the elevated plus maze to directly test for interactions between the cognitive and anxiolytic effects of exercise in male Long Evans rats. Hippocampal and perirhinal cortex tissue was collected to determine whether the relationship between BDNF and cognitive performance extends to this non-spatial and non-hippocampal-dependent task. We also examined whether the cognitive improvements persisted once the exercise regimen was terminated. Our data indicate that 4 weeks of voluntary exercise every-other-day improved object recognition memory. Importantly, BDNF expression in the perirhinal cortex of exercising rats was strongly correlated with object recognition memory. Exercise also decreased anxiety-like behavior, however there was no evidence to support a relationship between anxiety-like behavior and performance on the novel object recognition task. There was a trend for a negative relationship between anxiety-like behavior and hippocampal BDNF. Neither the cognitive improvements nor the relationship between cognitive function and perirhinal BDNF levels persisted after 2 weeks of inactivity. These are the first data demonstrating that region-specific changes in BDNF protein levels are correlated with exercise-induced improvements in non-spatial memory, mediated by structures outside the hippocampus and are consistent with the theory that, with regard to object recognition, the anxiolytic and cognitive effects of exercise may be mediated through separable mechanisms. PMID:20601027
Zhao, Zaorui; Fan, Lu; Fortress, Ashley M.; Boulware, Marissa I.; Frick, Karyn M.
2012-01-01
Histone acetylation has recently been implicated in learning and memory processes, yet necessity of histone acetylation for such processes has not been demonstrated using pharmacological inhibitors of histone acetyltransferases (HATs). As such, the present study tested whether garcinol, a potent HAT inhibitor in vitro, could impair hippocampal memory consolidation and block the memory-enhancing effects of the modulatory hormone 17β-estradiol (E2). We first showed that bilateral infusion of garcinol (0.1, 1, or 10 μg/side) into the dorsal hippocampus (DH) immediately after training impaired object recognition memory consolidation in ovariectomized female mice. A behaviorally effective dose of garcinol (10 μg/side) also significantly decreased DH HAT activity. We next examined whether DH infusion of a behaviorally subeffective dose of garcinol (1 ng/side) could block the effects of DH E2 infusion on object recognition and epigenetic processes. Immediately after training, ovariectomized female mice received bilateral DH infusions of vehicle, E2 (5 μg/side), garcinol (1 ng/side), or E2 plus garcinol. Forty-eight hours later, garcinol blocked the memory-enhancing effects of E2. Garcinol also reversed the E2-induced increase in DH histone H3 acetylation, HAT activity, and levels of the de novo methyltransferase DNMT3B, as well as the E2-induced decrease in levels of the memory repressor protein histone deacetylase 2 (HDAC2). Collectively, these findings suggest that histone acetylation is critical for object recognition memory consolidation and the beneficial effects of E2 on object recognition. Importantly, this work demonstrates that the role of histone acetylation in memory processes can be studied using a HAT inhibitor. PMID:22396409
Erkens, Mirthe; Bakker, Brenda; van Duijn, Lucette M; Hendriks, Wiljan J A J; Van der Zee, Catharina E E M
2014-05-15
Mouse gene Ptprr encodes multiple protein tyrosine phosphatase receptor type R (PTPRR) isoforms that negatively regulate mitogen-activated protein kinase (MAPK) signaling pathways. In the mouse brain, PTPRR proteins are expressed in cerebellum, olfactory bulb, hippocampus, amygdala and perirhinal cortex but their precise role in these regions remains to be determined. Here, we evaluated phenotypic consequences of loss of PTPRR activity and found that basal smell was normal for Ptprr(-/-) mice. Also, spatial learning and fear-associated contextual learning were unaffected. PTPRR deficiency, however, resulted in impaired novel object recognition and a striking increase in exploratory activity in a new environment. The data corroborate the importance of proper control of MAPK signaling in cerebral functions and put forward PTPRR as a novel target to modulate synaptic processes. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kruithof, Maarten C.; Bouma, Henri; Fischer, Noëlle M.; Schutte, Klamer
2016-10-01
Object recognition is important to understand the content of video and allow flexible querying in a large number of cameras, especially for security applications. Recent benchmarks show that deep convolutional neural networks are excellent approaches for object recognition. This paper describes an approach of domain transfer, where features learned from a large annotated dataset are transferred to a target domain where fewer annotated examples are available, as is typical for the security and defense domain. Many of these networks trained on natural images appear to learn features similar to Gabor filters and color blobs in the first layer. These first-layer features appear to be generic for many datasets and tasks while the last layer is specific. In this paper, we study the effect of copying all layers and fine-tuning a variable number. We performed an experiment with a Caffe-based network on 1000 ImageNet classes that are randomly divided into two equal subgroups for the transfer from one to the other. We copy all layers and vary the number of layers that are fine-tuned and the size of the target dataset. We performed additional experiments with the Keras platform on the CIFAR-10 dataset to validate general applicability. We show with both platforms and both datasets that the accuracy on the target dataset improves when more target data is used. When the target dataset is large, it is beneficial to freeze only a few layers. For a large target dataset, the network without transfer learning performs better than the transfer network, especially if many layers are frozen. When the target dataset is small, it is beneficial to transfer (and freeze) many layers. For a small target dataset, the transfer network boosts generalization and it performs much better than the network without transfer learning. Learning time can be reduced by freezing many layers in a network.
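A minimal Keras sketch of the freeze-and-fine-tune procedure studied above: all layers are copied from a source model, the first few layers are frozen, and the remainder is fine-tuned on a small target set (here a CIFAR-10 subset, since the abstract mentions that dataset). The architecture and hyperparameters are illustrative assumptions, not the authors' configuration.

# Sketch: copy all layers, freeze the first n_frozen, fine-tune the rest.
import tensorflow as tf

def build_small_cnn(num_classes=10):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

# Pretend `source_model` was trained on a large source domain; its weights
# are copied as the starting point for the target domain.
source_model = build_small_cnn()
target_model = build_small_cnn()
target_model.set_weights(source_model.get_weights())   # "copy all layers"

n_frozen = 3                                            # freeze the early layers
for layer in target_model.layers[:n_frozen]:
    layer.trainable = False

target_model.compile(optimizer="adam",
                     loss="sparse_categorical_crossentropy",
                     metrics=["accuracy"])

# Small target dataset: a CIFAR-10 subset stands in for the target domain.
(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
x_small, y_small = x_train[:2000] / 255.0, y_train[:2000]
target_model.fit(x_small, y_small, epochs=3, batch_size=64)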
Wilson, C R E; Baxter, M G; Easton, A; Gaffan, D
2008-04-01
Both frontal-inferotemporal disconnection and fornix transection (Fx) in the monkey impair object-in-place scene learning, a model of human episodic memory. If the contribution of the fornix to scene learning is via interaction with or modulation of frontal-temporal interaction--that is, if they form a unitary system--then Fx should have no further effect when added to frontal-temporal disconnection. However, if the contribution of the fornix is to some extent distinct, then fornix lesions may produce an additional deficit in scene learning beyond that caused by frontal-temporal disconnection. To distinguish between these possibilities, we trained three male rhesus monkeys on the object-in-place scene-learning task. We tested their learning on the task following frontal-temporal disconnection, achieved by crossed unilateral aspiration of the frontal cortex in one hemisphere and the inferotemporal cortex in the other, and again following the addition of Fx. The monkeys were significantly impaired in scene learning following frontal-temporal disconnection, and furthermore showed a significant increase in this impairment following the addition of Fx, from 32.8% error to 40.5% error (chance = 50%). The increased impairment following the addition of Fx provides evidence that the fornix and frontal-inferotemporal interaction make distinct contributions to episodic memory.
Deep learning for EEG-Based preference classification
NASA Astrophysics Data System (ADS)
Teo, Jason; Hou, Chew Lin; Mountstephens, James
2017-10-01
Electroencephalogram (EEG)-based emotion classification is rapidly becoming one of the most intensely studied areas of brain-computer interfacing (BCI). The ability to passively identify yet accurately correlate brainwaves with our immediate emotions opens up truly meaningful and previously unattainable human-computer interactions such as in forensic neuroscience, rehabilitative medicine, affective entertainment and neuro-marketing. One particularly useful yet rarely explored area of EEG-based emotion classification is preference recognition [1], which is simply the detection of like versus dislike. Within the limited investigations into preference classification, all reported studies were based on musically-induced stimuli except for a single study which used 2D images. The main objective of this study is to apply deep learning, which has been shown to produce state-of-the-art results in diverse hard problems such as computer vision, natural language processing and audio recognition, to 3D object preference classification over a larger group of test subjects. A cohort of 16 users was shown 60 bracelet-like objects as rotating visual stimuli on a computer display while their preferences and EEGs were recorded. After training a variety of machine learning approaches which included deep neural networks, we then attempted to classify the users' preferences for the 3D visual stimuli based on their EEGs. Here, we show that deep learning outperforms a variety of other machine learning classifiers for this EEG-based preference classification task, particularly in a highly challenging dataset with large inter- and intra-subject variability.
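As an illustration only, the sketch below shows a small fully connected Keras network for binary like/dislike classification from per-trial EEG feature vectors on synthetic data; it is an assumption for exposition, not the architecture used in the study.

# Sketch: binary preference classification from EEG feature vectors.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
n_trials, n_features = 960, 160          # e.g. channels x band-power features
X = rng.normal(size=(n_trials, n_features)).astype("float32")
y = (X[:, :5].sum(axis=1) > 0).astype("int32")   # synthetic like/dislike labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(n_features,)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print("overall accuracy:", model.evaluate(X, y, verbose=0)[1])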
Neurotoxic lesions of ventrolateral prefrontal cortex impair object-in-place scene memory
Wilson, Charles R E; Gaffan, David; Mitchell, Anna S; Baxter, Mark G
2007-01-01
Disconnection of the frontal lobe from the inferotemporal cortex produces deficits in a number of cognitive tasks that require the application of memory-dependent rules to visual stimuli. The specific regions of frontal cortex that interact with the temporal lobe in performance of these tasks remain undefined. One capacity that is impaired by frontal–temporal disconnection is rapid learning of new object-in-place scene problems, in which visual discriminations between two small typographic characters are learned in the context of different visually complex scenes. In the present study, we examined whether neurotoxic lesions of ventrolateral prefrontal cortex in one hemisphere, combined with ablation of inferior temporal cortex in the contralateral hemisphere, would impair learning of new object-in-place scene problems. Male macaque monkeys learned 10 or 20 new object-in-place problems in each daily test session. Unilateral neurotoxic lesions of ventrolateral prefrontal cortex produced by multiple injections of a mixture of ibotenate and N-methyl-d-aspartate did not affect performance. However, when disconnection from inferotemporal cortex was completed by ablating this region contralateral to the neurotoxic prefrontal lesion, new learning was substantially impaired. Sham disconnection (injecting saline instead of neurotoxin contralateral to the inferotemporal lesion) did not affect performance. These findings support two conclusions: first, that the ventrolateral prefrontal cortex is a critical area within the frontal lobe for scene memory; and second, the effects of ablations of prefrontal cortex can be confidently attributed to the loss of cell bodies within the prefrontal cortex rather than to interruption of fibres of passage through the lesioned area. PMID:17445247
Tomporowski, Phillip D; Albrecht, Chelesa; Pendleton, Daniel M
2017-03-01
The purpose of this study was to determine if physical arousal produced by isometric hand-dynamometer contraction performed during word-list learning affects young adults' free recall or recognition memory. Twenty-four young adults (12 female; M age = 22 years) were presented with 4 20-item word lists. Moderate arousal was induced in 12 adults by an initial 30-s maximal hand-dynamometer squeeze with force productions of 50% maximum; low arousal was induced in 12 adults by an initial 1-s maximal dynamometer squeeze with force production of 10% maximum during learning. Memory performances following dual-task conditions experienced during the encoding, consolidation, and recall phases of learning were compared to a single-task control condition during which words were learned in the absence of isometric exercise. Planned contrasts revealed that arousal coinciding with word encoding led to significantly poorer immediate recall, F(1, 23) = 10.13, p < .05, ηp² = .31, delayed free recall, F(1, 23) = 15.81, p < .05, ηp² = .41, and recognition memory, F(1, 23) = 6.07, p < .05, ηp² = .21, compared with when there was no arousal. Neither arousal condition facilitated participants' memory performance. The reduction in long-term memory performance specific to the encoding phase of learning is explained in terms of the dual-task attentional demands placed on participants.
Lost in Second Life: Virtual Embodiment and Language Learning via Multimodal Communication
ERIC Educational Resources Information Center
Pasfield-Neofitou, Sarah; Huang, Hui; Grant, Scott
2015-01-01
Increased recognition of the role of the body and environment in cognition has taken place in recent decades in the form of new theories of embodied and extended cognition. The growing use of ever more sophisticated computer-generated 3D virtual worlds and avatars has added a new dimension to these theories of cognition. Both developments provide…
A Human Activity Recognition System Using Skeleton Data from RGBD Sensors.
Cippitelli, Enea; Gasparrini, Samuele; Gambi, Ennio; Spinsante, Susanna
2016-01-01
The aim of Active and Assisted Living is to develop tools to promote the ageing in place of elderly people, and human activity recognition algorithms can help to monitor aged people in home environments. Different types of sensors can be used to address this task and the RGBD sensors, especially the ones used for gaming, are cost-effective and provide much information about the environment. This work aims to propose an activity recognition algorithm exploiting skeleton data extracted by RGBD sensors. The system is based on the extraction of key poses to compose a feature vector, and a multiclass Support Vector Machine to perform classification. Computation and association of key poses are carried out using a clustering algorithm, without the need of a learning algorithm. The proposed approach is evaluated on five publicly available datasets for activity recognition, showing promising results especially when applied for the recognition of AAL related actions. Finally, the current applicability of this solution in AAL scenarios and the future improvements needed are discussed.
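One simple way to realize the described pipeline, sketched below with scikit-learn on synthetic skeleton data: frames are clustered into key poses with k-means, each sequence is summarized by its key-pose histogram, and an SVM classifies the sequences. The feature construction here is a simplification of, not a reproduction of, the paper's method.

# Sketch: key poses by clustering, per-sequence key-pose histograms, SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_joints, seq_len = 15, 40

def synth_sequence(activity):
    """Toy skeleton sequence: each activity oscillates around a different pose."""
    base = rng.normal(size=3 * n_joints) if activity == 0 else \
           rng.normal(loc=1.0, size=3 * n_joints)
    return base + 0.1 * rng.normal(size=(seq_len, 3 * n_joints))

sequences = [synth_sequence(i % 2) for i in range(60)]
labels = np.arange(60) % 2

# 1) Learn key poses by clustering all frames from all sequences.
all_frames = np.vstack(sequences)
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(all_frames)

# 2) Represent each sequence as a normalized histogram of its key poses.
def key_pose_histogram(seq):
    assignments = kmeans.predict(seq)
    return np.bincount(assignments, minlength=8) / len(seq)

X = np.array([key_pose_histogram(s) for s in sequences])

# 3) SVM on the key-pose features (multiclass-capable; two classes here).
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))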
Advanced Age Dissociates Dual Functions of the Perirhinal Cortex
Burke, Sara N.; Maurer, Andrew P.; Nematollahi, Saman; Uprety, Ajay; Wallace, Jenelle L.
2014-01-01
The perirhinal cortex (PRC) is proposed to both represent high-order sensory information and maintain those representations across delays. These cognitive processes are required for recognition memory, which declines during normal aging. Whether or not advanced age affects the ability of PRC principal cells to support these dual roles, however, is not known. The current experiment recorded PRC neurons as young and aged rats traversed a track. When objects were placed on the track, a subset of the neurons became active at discrete locations adjacent to objects. Importantly, the aged rats had a lower proportion of neurons that were activated by objects. Once PRC activity patterns in the presence of objects were established, however, both age groups maintained these representations across delays up to 2 h. These data support the hypothesis that age-associated deficits in stimulus recognition arise from impairments in high-order stimulus representation rather than difficulty in sustaining stable activity patterns over time. PMID:24403147
van den Berg, Ronald; Roerdink, Jos B T M; Cornelissen, Frans W
2010-01-22
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
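A hedged numpy sketch of the kind of population-code integration such a model builds on: orientation-tuned responses to a peripheral target and its flankers are pooled, and the decoded orientation is pulled toward their average, mirroring compulsory averaging. This illustrates the principle only and is not the authors' full crowding model.

# Sketch: pooling orientation population codes pulls the decoded orientation
# toward the average of target and flankers ("compulsory averaging").
import numpy as np

preferred = np.deg2rad(np.arange(0, 180, 5))            # preferred orientations

def population_response(theta_deg, kappa=4.0):
    """Von Mises tuning curves over orientation (180-degree period)."""
    theta = np.deg2rad(theta_deg)
    return np.exp(kappa * np.cos(2 * (preferred - theta)))

def decode(resp):
    """Population-vector decoding on the doubled-angle circle."""
    angle = np.angle(np.sum(resp * np.exp(2j * preferred)))
    return np.rad2deg(angle / 2) % 180

target, flankers = 20.0, [70.0, 70.0]
pooled = population_response(target) + sum(population_response(f) for f in flankers)
print("decoded alone   :", round(decode(population_response(target)), 1))
print("decoded crowded :", round(decode(pooled), 1))   # pulled toward the flankers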
Nelissen, Ellis; Prickaerts, Jos; Blokland, Arjan
2018-06-01
It is well known that stress affects memory performance. However, there still appears to be inconsistency in the literature about how acute stress affects the different stages of memory: acquisition, consolidation and retrieval. In this study, we exposed rats to acute stress and measured the effect on memory performance in the object recognition task as a measure of episodic memory. Stress was induced 30 min prior to the learning phase to affect acquisition, directly after the learning phase to affect consolidation, or 30 min before the retrieval phase to affect retrieval. Additionally, we induced stress both 30 min prior to the learning phase and 30 min prior to the retrieval phase to test whether the effects were related to state-dependency. As expected, we found that acute stress did not affect acquisition but had a negative impact on retrieval. To our knowledge, we are the first to show that early consolidation was negatively affected by acute stress. We also show that stress does not have a state-dependent effect on memory. Copyright © 2018 Elsevier B.V. All rights reserved.
García-Capdevila, Sílvia; Portell-Cortés, Isabel; Torras-Garcia, Meritxell; Coll-Andreu, Margalida; Costa-Miserachs, David
2009-09-14
The effect of long-term voluntary exercise (running wheel) on anxiety-like behaviour (plus maze and open field) and learning and memory processes (object recognition and two-way active avoidance) was examined on Wistar rats. Because major individual differences in running wheel behaviour were observed, the data were analysed considering the exercising animals both as a whole and grouped according to the time spent in the running wheel (low, high, and very-high running). Although some variables related to anxiety-like behaviour seem to reflect an anxiogenic compatible effect, the view of the complete set of variables could be interpreted as an enhancement of defensive and risk assessment behaviours in exercised animals, without major differences depending on the exercise level. Effects on learning and memory processes were dependent on task and level of exercise. Two-way avoidance was not affected either in the acquisition or in the retention session, while the retention of object recognition task was affected. In this latter task, an enhancement in low running subjects and impairment in high and very-high running animals were observed.
PCANet: A Simple Deep Learning Baseline for Image Classification?
Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi
2015-12-01
In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
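A simplified, single-stage sketch of the PCANet pipeline on synthetic images, assuming numpy and scipy: PCA over image patches yields convolution filters, filter responses are binarized and combined into an integer code map, and blockwise histograms of the codes form the feature vector. The real PCANet cascades two PCA stages and feeds these features to a linear classifier; this toy version only illustrates the mechanics.

# Sketch: one-stage PCANet-style features (PCA filters -> binary codes -> histograms).
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
k, n_filters = 5, 4                       # patch size and number of PCA filters

def extract_patches(img, k):
    H, W = img.shape
    return np.array([img[i:i + k, j:j + k].ravel()
                     for i in range(H - k + 1) for j in range(W - k + 1)])

def learn_pca_filters(images, k, n_filters):
    patches = np.vstack([extract_patches(im, k) for im in images])
    patches = patches - patches.mean(axis=1, keepdims=True)   # remove patch mean
    _, _, Vt = np.linalg.svd(patches, full_matrices=False)
    return Vt[:n_filters].reshape(n_filters, k, k)             # top principal filters

def pcanet_feature(img, filters, block=8):
    maps = [convolve2d(img, f, mode="same") for f in filters]
    code = sum((m > 0).astype(int) << i for i, m in enumerate(maps))
    hists = []
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            patch = code[i:i + block, j:j + block].ravel()
            hists.append(np.bincount(patch, minlength=2 ** len(filters)))
    return np.concatenate(hists)

images = [rng.normal(size=(16, 16)) for _ in range(10)]
filters = learn_pca_filters(images, k, n_filters)
print("feature length:", pcanet_feature(images[0], filters).shape[0])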
NASA Astrophysics Data System (ADS)
Siswanto, Didik
2017-12-01
Schools, as places of study, require learning media. Instructional media contain information about the lessons that teachers use to convey their material. The early childhood education school Al-Kindy Pekanbaru still uses conventional learning media for teaching the hijaiyah letters, but such media are not very attractive to use, so an engaging learning medium is needed that can make children interested in learning. The purpose of this study was to create a multimedia learning application for introducing the hijaiyah letters, as a renewal of the learning media at the Al-Kindy Pekanbaru early childhood education school. In this study the authors built a learning application that contains basic knowledge of the hijaiyah letters, accompanied by animation, audio, and explanations of how to read the letters, in order to make the hijaiyah learning media more interactive.
The Initial Development of Object Knowledge by a Learning Robot
Modayil, Joseph; Kuipers, Benjamin
2008-01-01
We describe how a robot can develop knowledge of the objects in its environment directly from unsupervised sensorimotor experience. The object knowledge consists of multiple integrated representations: trackers that form spatio-temporal clusters of sensory experience, percepts that represent properties for the tracked objects, classes that support efficient generalization from past experience, and actions that reliably change object percepts. We evaluate how well this intrinsically acquired object knowledge can be used to solve externally specified tasks including object recognition and achieving goals that require both planning and continuous control. PMID:19953188
ERIC Educational Resources Information Center
Ojo, Olugbenga David; Olakulehin, Felix Kayode
2006-01-01
This paper examined the nature of open and distance learning institutions as organizations where synergy of efforts of all personnel is required in order to achieve the aims and objectives of the institution. It explored the huge infrastructural and personnel requirements of distance learning institutions, especially at inception, and the…
NASA Astrophysics Data System (ADS)
Zou, Jie; Gattani, Abhishek
2005-01-01
When completely automated systems don't yield acceptable accuracy, many practical pattern recognition systems involve the human either at the beginning (pre-processing) or towards the end (handling rejects). We believe that it may be more useful to involve the human throughout the recognition process rather than just at the beginning or end. We describe a methodology of interactive visual recognition for human-centered low-throughput applications, Computer Assisted Visual InterActive Recognition (CAVIAR), and discuss the prospects of implementing CAVIAR over the Internet. The novelty of CAVIAR is image-based interaction through a domain-specific parameterized geometrical model, which reduces the semantic gap between humans and computers. The user may interact with the computer anytime that she considers its response unsatisfactory. The interaction improves the accuracy of the classification features by improving the fit of the computer-proposed model. The computer makes subsequent use of the parameters of the improved model to refine not only its own statistical model-fitting process, but also its internal classifier. The CAVIAR methodology was applied to implement a flower recognition system. The principal conclusions from the evaluation of the system include: 1) the average recognition time of the CAVIAR system is significantly shorter than that of the unaided human; 2) its accuracy is significantly higher than that of the unaided machine; 3) it can be initialized with as few as one training sample per class and still achieve high accuracy; and 4) it demonstrates a self-learning ability. We have also implemented a Mobile CAVIAR system, where a pocket PC, as a client, connects to a server through wireless communication. The motivation behind a mobile platform for CAVIAR is to apply the methodology in a human-centered pervasive environment, where the user can seamlessly interact with the system for classifying field-data. Deploying CAVIAR to a networked mobile platform poses the challenge of classifying field images and programming under constraints of display size, network bandwidth, processor speed, and memory size. Editing of the computer-proposed model is performed on the handheld while statistical model fitting and classification take place on the server. The possibility that the user can easily take several photos of the object poses an interesting information fusion problem. The advantage of the Internet is that the patterns identified by different users can be pooled together to benefit all peer users. When users identify patterns with CAVIAR in a networked setting, they also collect training samples and provide opportunities for machine learning from their intervention. CAVIAR implemented over the Internet provides a perfect test bed for, and extends, the concept of Open Mind Initiative proposed by David Stork. Our experimental evaluation focuses on human time, machine and human accuracy, and machine learning. We devoted much effort to evaluating the use of our image-based user interface and on developing principles for the evaluation of interactive pattern recognition system. The Internet architecture and Mobile CAVIAR methodology have many applications. We are exploring in the directions of teledermatology, face recognition, and education.
Hopkins, Michael E.; Nitecki, Roni; Bucci, David J.
2011-01-01
It is well established that physical exercise can enhance hippocampal-dependent forms of learning and memory in laboratory animals, commensurate with increases in hippocampal neural plasticity (BDNF mRNA/protein, neurogenesis, LTP). However, very little is known about the effects of exercise on other, non-spatial forms of learning and memory. In addition, there has been little investigation of the duration of the effects of exercise on behavior or plasticity. Likewise, few studies have compared the effects of exercising during adulthood versus adolescence. This is particularly important since exercise may capitalize on the peak of neural plasticity observed during adolescence, resulting in a different pattern of behavioral and neurobiological effects. The present study addressed these gaps in the literature by comparing the effects of 4 weeks of voluntary exercise (wheel running) during adulthood or adolescence on novel object recognition and BDNF levels in the perirhinal cortex (PER) and hippocampus (HP). Exercising during adulthood improved object recognition memory when rats were tested immediately after 4 weeks of exercise, an effect that was accompanied by increased BDNF levels in PER and HP. When rats were tested again 2 weeks after exercise ended, the effects of exercise on recognition memory and BDNF levels were no longer present. Exercising during adolescence had a very different pattern of effects. First, both exercising and non-exercising rats could discriminate between novel and familiar objects immediately after the exercise regimen ended; furthermore there was no group difference in BDNF levels. Two or four weeks later, however, rats that had previously exercised as adolescents could still discriminate between novel and familiar objects, while non-exercising rats could not. Moreover, the formerly exercising rats exhibited higher levels of BDNF in PER compared to HP, while the reverse was true in the non-exercising rats. These findings reveal a novel interaction between exercise, development, and medial temporal lobe memory systems. PMID:21839807
Markant, Julie; Worden, Michael S.; Amso, Dima
2015-01-01
Learning through visual exploration often requires orienting of attention to meaningful information in a cluttered world. Previous work has shown that attention modulates visual cortex activity, with enhanced activity for attended targets and suppressed activity for competing inputs, thus enhancing the visual experience. Here we examined the idea that learning may be engaged differentially with variations in attention orienting mechanisms that drive eye movements during visual search and exploration. We hypothesized that attention orienting mechanisms that engage suppression of a previously attended location will boost memory encoding of the currently attended target objects to a greater extent than those that involve target enhancement alone. To test this hypothesis, we capitalized on the classic spatial cueing task and the inhibition of return (IOR) mechanism (Posner, Rafal, & Choate, 1985; Posner, 1980) to demonstrate that object images encoded in the context of concurrent suppression at a previously attended location were encoded more effectively and remembered better than those encoded without concurrent suppression. Furthermore, fMRI analyses revealed that this memory benefit was driven by attention modulation of visual cortex activity, as increased suppression of the previously attended location in visual cortex during target object encoding predicted better subsequent recognition memory performance. These results suggest that not all attention orienting impacts learning and memory equally. PMID:25701278
Deep learning based hand gesture recognition in complex scenes
NASA Astrophysics Data System (ADS)
Ni, Zihan; Sang, Nong; Tan, Cheng
2018-03-01
Recently, region-based convolutional neural networks (R-CNNs) have achieved significant success in the field of object detection, but their accuracy is limited for small and similar objects, such as hand gestures. To solve this problem, we present an online hard example testing (OHET) technique to evaluate the confidence of the R-CNNs' outputs and regard those outputs with low confidence as hard examples. In this paper, we propose a cascaded network to recognize the gestures. First, we use the region-based fully convolutional network (R-FCN), which is capable of detecting small objects, to detect the gestures, and then use OHET to select the hard examples. To enhance the accuracy of gesture recognition, we re-classify the hard examples through a VGG-19 classification network to obtain the final output of the gesture recognition system. Comparative experiments with other methods show that the cascaded network combined with OHET achieves state-of-the-art results of 99.3% mAP on small and similar gestures in complex scenes.
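As an illustrative aside, the following minimal Python sketch shows the kind of confidence-gated cascade the abstract describes: detections below an assumed confidence threshold are treated as hard examples and handed to a second-stage classifier. The threshold value, the `reclassify` callable, and the dummy data are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

CONF_THRESHOLD = 0.8  # hypothetical confidence cut-off for "hard" examples

def cascade_recognize(detections, reclassify):
    """Route low-confidence detections to a second-stage classifier.

    detections: list of dicts with 'crop' (image patch), 'label', 'score'
    reclassify: callable mapping an image crop to (label, score),
                standing in for the VGG-19 stage described in the abstract.
    """
    results = []
    for det in detections:
        if det["score"] >= CONF_THRESHOLD:
            # Confident first-stage output: keep as-is.
            results.append((det["label"], det["score"]))
        else:
            # Hard example: defer to the second-stage classifier.
            results.append(reclassify(det["crop"]))
    return results

# Toy usage with a dummy second-stage classifier.
dummy = lambda crop: ("gesture_ok", float(np.clip(crop.mean(), 0, 1)))
dets = [{"crop": np.random.rand(64, 64), "label": "gesture_fist", "score": 0.95},
        {"crop": np.random.rand(64, 64), "label": "gesture_fist", "score": 0.40}]
print(cascade_recognize(dets, dummy))
```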
Behavioral model of visual perception and recognition
NASA Astrophysics Data System (ADS)
Rybak, Ilya A.; Golovan, Alexander V.; Gusakova, Valentina I.
1993-09-01
In the processes of visual perception and recognition, human eyes actively select essential information by way of successive fixations at the most informative points of the image. A behavioral program defining a scanpath of the image is formed at the stage of learning (object memorizing) and consists of sequential motor actions, which are shifts of attention from one point of fixation to another, and sensory signals expected to arrive in response to each shift of attention. In the modern view of the problem, invariant object recognition is provided by the following: (1) separate processing of `what' (object features) and `where' (spatial features) information at high levels of the visual system; (2) mechanisms of visual attention using `where' information; (3) representation of `what' information in an object-based frame of reference (OFR). However, most recent models of vision based on the OFR have demonstrated invariant recognition of only simple objects like letters or binary objects without background, i.e. objects to which a frame of reference is easily attached. In contrast, we use not an OFR but a feature-based frame of reference (FFR), connected with the basic feature (edge) at the fixation point. This gives our model the ability to form invariant representations of complex objects in gray-level images, but demands realization of the behavioral aspects of vision described above. The developed model contains a neural network subsystem of low-level vision, which extracts a set of primary features (edges) at each fixation, and a high-level subsystem consisting of `what' (Sensory Memory) and `where' (Motor Memory) modules. The resolution of primary feature extraction decreases with distance from the point of fixation. The FFR provides both the invariant representation of object features in Sensory Memory and the shifts of attention in Motor Memory. Object recognition consists of successive recall (from Motor Memory) and execution of shifts of attention, and successive verification of the expected sets of features (stored in Sensory Memory). The model demonstrates recognition of complex objects (such as faces) in gray-level images, invariant with respect to shift, rotation, and scale.
Comparison of Object Recognition Behavior in Human and Monkey
Rajalingham, Rishi; Schmidt, Kailyn
2015-01-01
Although the rhesus monkey is used widely as an animal model of human visual processing, it is not known whether invariant visual object recognition behavior is quantitatively comparable across monkeys and humans. To address this question, we systematically compared the core object recognition behavior of two monkeys with that of human subjects. To test true object recognition behavior (rather than image matching), we generated several thousand naturalistic synthetic images of 24 basic-level objects with high variation in viewing parameters and image background. Monkeys were trained to perform binary object recognition tasks on a match-to-sample paradigm. Data from 605 human subjects performing the same tasks on Mechanical Turk were aggregated to characterize “pooled human” object recognition behavior, as well as 33 separate Mechanical Turk subjects to characterize individual human subject behavior. Our results show that monkeys learn each new object in a few days, after which they not only match mean human performance but show a pattern of object confusion that is highly correlated with pooled human confusion patterns and is statistically indistinguishable from individual human subjects. Importantly, this shared human and monkey pattern of 3D object confusion is not shared with low-level visual representations (pixels, V1+; models of the retina and primary visual cortex) but is shared with a state-of-the-art computer vision feature representation. Together, these results are consistent with the hypothesis that rhesus monkeys and humans share a common neural shape representation that directly supports object perception. SIGNIFICANCE STATEMENT To date, several mammalian species have shown promise as animal models for studying the neural mechanisms underlying high-level visual processing in humans. In light of this diversity, making tight comparisons between nonhuman and human primates is particularly critical in determining the best use of nonhuman primates to further the goal of the field of translating knowledge gained from animal models to humans. To the best of our knowledge, this study is the first systematic attempt at comparing a high-level visual behavior of humans and macaque monkeys. PMID:26338324
LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval
NASA Astrophysics Data System (ADS)
Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan
2013-01-01
As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.
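The color-histogram-based object detector mentioned above can be illustrated with a simplified sketch: a normalized color histogram of a template is compared against sliding windows of a frame using histogram intersection. The window size, stride, threshold, and random test data are assumptions made for illustration; the LABRADOR system additionally fuses a shape-feature TLD tracker, which is omitted here.

```python
import numpy as np

def color_hist(img, bins=8):
    """Normalized RGB color histogram of an image region (H x W x 3, values in [0, 1])."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3, range=[(0, 1)] * 3)
    return h / max(h.sum(), 1)

def hist_intersection(h1, h2):
    return np.minimum(h1, h2).sum()

def detect_by_histogram(frame, template_hist, win=32, stride=16, thresh=0.6):
    """Return (row, col, score) for windows whose color histogram matches the template."""
    hits = []
    for r in range(0, frame.shape[0] - win + 1, stride):
        for c in range(0, frame.shape[1] - win + 1, stride):
            score = hist_intersection(color_hist(frame[r:r + win, c:c + win]), template_hist)
            if score >= thresh:
                hits.append((r, c, float(score)))
    return hits

# Toy usage: a reddish "object" pasted into a random frame is found by its histogram.
template = np.clip(np.array([0.9, 0.1, 0.1]) + 0.05 * np.random.rand(32, 32, 3), 0, 1)
frame = np.random.rand(128, 128, 3)
frame[32:64, 48:80] = template
print(detect_by_histogram(frame, color_hist(template)))
```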
Fazl, Arash; Grossberg, Stephen; Mingolla, Ennio
2009-02-01
How does the brain learn to recognize an object from multiple viewpoints while scanning a scene with eye movements? How does the brain avoid the problem of erroneously classifying parts of different objects together? How are attention and eye movements intelligently coordinated to facilitate object learning? A neural model provides a unified mechanistic explanation of how spatial and object attention work together to search a scene and learn what is in it. The ARTSCAN model predicts how an object's surface representation generates a form-fitting distribution of spatial attention, or "attentional shroud". All surface representations dynamically compete for spatial attention to form a shroud. The winning shroud persists during active scanning of the object. The shroud maintains sustained activity of an emerging view-invariant category representation while multiple view-specific category representations are learned and are linked through associative learning to the view-invariant object category. The shroud also helps to restrict scanning eye movements to salient features on the attended object. Object attention plays a role in controlling and stabilizing the learning of view-specific object categories. Spatial attention hereby coordinates the deployment of object attention during object category learning. Shroud collapse releases a reset signal that inhibits the active view-invariant category in the What cortical processing stream. Then a new shroud, corresponding to a different object, forms in the Where cortical processing stream, and search using attention shifts and eye movements continues to learn new objects throughout a scene. The model mechanistically clarifies basic properties of attention shifts (engage, move, disengage) and inhibition of return. It simulates human reaction time data about object-based spatial attention shifts, and learns with 98.1% accuracy and a compression of 430 on a letter database whose letters vary in size, position, and orientation. The model provides a powerful framework for unifying many data about spatial and object attention, and their interactions during perception, cognition, and action.
Conversion of short-term to long-term memory in the novel object recognition paradigm
Moore, Shannon J.; Deshpande, Kaivalya; Stinnett, Gwen S.; Seasholtz, Audrey F.; Murphy, Geoffrey G.
2013-01-01
It is well-known that stress can significantly impact learning; however, whether this effect facilitates or impairs the resultant memory depends on the characteristics of the stressor. Investigation of these dynamics can be confounded by the role of the stressor in motivating performance in a task. Positing a cohesive model of the effect of stress on learning and memory necessitates elucidating the consequences of stressful stimuli independently from task-specific functions. Therefore, the goal of this study was to examine the effect of manipulating a task-independent stressor (elevated light level) on short-term and long-term memory in the novel object recognition paradigm. Short-term memory was elicited in both low light and high light conditions, but long-term memory specifically required high light conditions during the acquisition phase (familiarization trial) and was independent of the light level during retrieval (test trial). Additionally, long-term memory appeared to be independent of stress-mediated glucocorticoid release, as both low and high light produced similar levels of plasma corticosterone, which further did not correlate with subsequent memory performance. Finally, both short-term and long-term memory showed no savings between repeated experiments suggesting that this novel object recognition paradigm may be useful for longitudinal studies, particularly when investigating treatments to stabilize or enhance weak memories in neurodegenerative diseases or during age-related cognitive decline. PMID:23835143
Colloff, Melissa F; Flowe, Heather D
2016-06-01
False face recognition rates are sometimes higher when faces are learned while under the influence of alcohol. Alcohol myopia theory (AMT) proposes that acute alcohol intoxication during face learning causes people to attend to only the most salient features of a face, impairing the encoding of less salient facial features. Yet, there is currently no direct evidence to support this claim. Our objective was to test whether acute alcohol intoxication impairs face learning by causing subjects to attend to a salient (i.e., distinctive) facial feature over other facial features, as per AMT. We employed a balanced placebo design (N = 100). Subjects in the alcohol group were dosed to achieve a blood alcohol concentration (BAC) of 0.06 %, whereas the no alcohol group consumed tonic water. Alcohol expectancy was controlled. Subjects studied faces with or without a distinctive feature (e.g., scar, piercing). An old-new recognition test followed. Some of the test faces were "old" (i.e., previously studied), and some were "new" (i.e., not previously studied). We varied whether the new test faces had a previously studied distinctive feature versus other familiar characteristics. Intoxicated and sober recognition accuracy was comparable, but subjects in the alcohol group made more positive identifications overall compared to the no alcohol group. The results are not in keeping with AMT. Rather, a more general cognitive mechanism appears to underlie false face recognition in intoxicated subjects. Specifically, acute alcohol intoxication during face learning results in more liberal choosing, perhaps because of an increased reliance on familiarity.
ERIC Educational Resources Information Center
Downes, Stephen
2005-01-01
When compared with, say, blogging, the deployment of learning objects has been slow indeed. While blog aggregation services are recording millions of blogs and hundreds of millions of blog posts, academic learning object repositories number their resources only in the thousands, and even major corporate repositories have only one or two million…
A cultural side effect: learning to read interferes with identity processing of familiar objects
Kolinsky, Régine; Fernandes, Tânia
2014-01-01
Based on the neuronal recycling hypothesis (Dehaene and Cohen, 2007), we examined whether reading acquisition has a cost for the recognition of non-linguistic visual materials. More specifically, we checked whether the ability to discriminate between mirror images, which develops through literacy acquisition, interferes with object identity judgments, and whether interference strength varies as a function of the nature of the non-linguistic material. To these aims we presented illiterate, late literate (who learned to read at adult age), and early literate adults with an orientation-independent, identity-based same-different comparison task in which they had to respond “same” to both physically identical and mirrored or plane-rotated images of pictures of familiar objects (Experiment 1) or of geometric shapes (Experiment 2). Interference from irrelevant orientation variations was stronger with plane rotations than with mirror images, and stronger with geometric shapes than with objects. Illiterates were the only participants almost immune to mirror variations, but only for familiar objects. Thus, the process of unlearning mirror-image generalization, necessary to acquire literacy in the Latin alphabet, has a cost for a basic function of the visual ventral object recognition stream, i.e., identification of familiar objects. This demonstrates that neural recycling is not just an adaptation to multi-use but a process of at least partial exaptation. PMID:25400605
Rules and construction effects in learning the argument structure of verbs.
Demuth, Katherine; Machobane, Malillo; Moloi, Francina
2003-11-01
Theorists of language acquisition have long debated the means by which children learn the argument structure of verbs (e.g. Bowerman, 1974, 1990; Pinker, 1984, 1989; Tomasello, 1992). Central to this controversy has been the possible role of verb semantics, especially in learning which verbs undergo dative-shift alternation in languages like English. The learning problem is somewhat simplified in Bantu double object constructions, where all applicative verbs show the same order of postverbal objects. However, Bantu languages differ as to what that order is, some placing the benefactive argument first, and others placing the animate argument first. Learning the language-specific word-order restrictions on Bantu double object applicative constructions is therefore more akin to setting a parameter (cf. Hyams, 1986). This study examined 100 three- to eight-year-old children's knowledge of word order restrictions in Sesotho double object applicatives. Performance on forced choice elicited production tasks found that four-year-olds showed evidence of rule learning, although eight-year-olds had not yet attained adult levels of performance. Further investigation found lexical construction effects for three-year-olds. These findings suggest that learning the argument structure of verbs, even when lexical semantics is not involved, may be more sensitive to lexical construction effects than previously thought.
Christie, Lori-Ann; Saunders, Richard C.; Kowalska, Danuta M.; MacKay, William A.; Head, Elizabeth; Cotman, Carl W.; Milgram, Norton W.
2014-01-01
To examine the effects of rhinal and dorsolateral prefrontal cortex lesions on object and spatial recognition memory in canines, we used a protocol in which both an object (delayed non-matching to sample, or DNMS) and a spatial (delayed non-matching to position, or DNMP) recognition task were administered daily. The tasks used similar procedures such that only the type of stimulus information to be remembered differed. Rhinal cortex (RC) lesions produced a selective deficit on the DNMS task, both in retention of the task rules at short delays and in object recognition memory. By contrast, performance on the DNMP task remained intact at both short and long delay intervals in RC animals. Subjects who received dorsolateral prefrontal cortex (dlPFC) lesions were impaired on a spatial task at a short, 5-sec delay, suggesting disrupted retention of the general task rules; however, this impairment was transient, and long-term spatial memory performance was unaffected in dlPFC subjects. The present results provide support for the involvement of the RC in object, but not visuospatial, processing and recognition memory, whereas the dlPFC appears to mediate retention of a non-matching rule. These findings support theories of functional specialization within the medial temporal lobe and frontal cortex and suggest that rhinal and dorsolateral prefrontal cortices in canines are functionally similar to analogous regions in other mammals. PMID:18792072
Building machines that learn and think like people.
Lake, Brenden M; Ullman, Tomer D; Tenenbaum, Joshua B; Gershman, Samuel J
2017-01-01
Recent progress in artificial intelligence has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats that of humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it. Specifically, we argue that these machines should (1) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (2) ground learning in intuitive theories of physics and psychology to support and enrich the knowledge that is learned; and (3) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes toward these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
Han, Ren-Wen; Zhang, Rui-San; Xu, Hong-Jiao; Chang, Min; Peng, Ya-Li; Wang, Rui
2013-07-01
Neuropeptide S (NPS), the endogenous ligand of NPSR, has been shown to promote arousal and anxiolytic-like effects. According to the predominant distribution of NPSR in brain tissues associated with learning and memory, NPS has been reported to modulate cognitive function in rodents. Here, we investigated the role of NPS in memory formation, and determined whether NPS could mitigate memory impairment induced by selective N-methyl-D-aspartate receptor antagonist MK801, muscarinic cholinergic receptor antagonist scopolamine or Aβ₁₋₄₂ in mice, using novel object and object location recognition tasks. Intracerebroventricular (i.c.v.) injection of 1 nmol NPS 5 min after training not only facilitated object recognition memory formation, but also prolonged memory retention in both tasks. The improvement of object recognition memory induced by NPS could be blocked by the selective NPSR antagonist SHA 68, indicating pharmacological specificity. Then, we found that i.c.v. injection of NPS reversed memory disruption induced by MK801, scopolamine or Aβ₁₋₄₂ in both tasks. In summary, our results indicate that NPS facilitates memory formation and prolongs the retention of memory through activation of the NPSR, and mitigates amnesia induced by blockage of glutamatergic or cholinergic system or by Aβ₁₋₄₂, suggesting that NPS/NPSR system may be a new target for enhancing memory and treating amnesia. Copyright © 2013 Elsevier Ltd. All rights reserved.
Goh, Jinzhong Jeremy; Manahan-Vaughan, Denise
2013-02-01
Learning-facilitated synaptic plasticity describes the ability of hippocampal synapses to respond with persistent plasticity to afferent stimulation when coupled with a spatial learning event, whereby the afferent stimulation normally produces short-term plasticity or no change in synaptic strength if given in the absence of novel learning. Recently, it was reported that in the mouse hippocampus intrinsic long-term depression (LTD > 24 h) occurs when test-pulse afferent stimulation is coupled with novel spatial learning. It is not known to what extent this phenomenon shares molecular properties with synaptic plasticity that is typically induced by means of patterned electrical afferent stimulation. In previous work, we showed that a novel spatial object recognition task facilitates LTD at the Schaffer collateral-CA1 synapse of freely behaving adult mice, whereas reexposure to the familiar spatial configuration ∼24 h later elicited no such facilitation. Here we report that treatment with the NMDA receptor antagonist, (±)-3-(2-Carboxypiperazin-4-yl)-propanephosphonic acid (CPP), or antagonism of the metabotropic glutamate (mGlu) receptor, mGlu5, using 2-methyl-6-(phenylethynyl) pyridine (MPEP), completely prevented LTD under the novel learning conditions. Behavioral assessment during re-exposure after application of the antagonists revealed that the animals did not remember the objects and treated them as if they were novel. Under these circumstances, where the acquisition of novel spatial information was involved, LTD was facilitated. Our data support the conclusion that the endogenous LTD that is enabled through novel spatial learning in adult mice is critically dependent on the activation of both the NMDA receptors and mGlu5. Copyright © 2012 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Kroll, Christine; von der Werth, Monika; Leuck, Holger; Stahl, Christoph; Schertler, Klaus
2017-05-01
For Intelligence, Surveillance, Reconnaissance (ISR) missions of manned and unmanned air systems, typical electro-optical payloads provide high-definition video data that has to be exploited in real time with respect to relevant ground targets by automatic/assisted target recognition software. Airbus Defence and Space has been developing the required technologies for real-time sensor exploitation for years and has combined the latest advances in deep convolutional neural networks (CNNs) with a proprietary high-speed Support Vector Machine (SVM) learning method into a powerful object recognition system, with impressive results on relevant high-definition video scenes compared to conventional target recognition approaches. This paper describes the principal requirements for real-time target recognition in high-definition video for ISR missions and the Airbus approach of combining invariant feature extraction using pre-trained CNNs with the high-speed training and classification ability of a novel frequency-domain SVM training method. The frequency-domain approach allows for a highly optimized implementation for General Purpose Computation on a Graphics Processing Unit (GPGPU) and also for efficient training on large training samples. The selected CNN, which is pre-trained only once on domain-extrinsic data, yields highly invariant feature extraction. This allows for significantly reduced adaptation and training of the target recognition method for new target classes and mission scenarios. A comprehensive training and test dataset was defined and prepared using relevant high-definition airborne video sequences. The assessment concept is explained and performance results are given using established precision-recall diagrams, average precision, and runtime figures on representative test data. A comparison to legacy target recognition approaches shows the impressive performance increase achieved by the proposed CNN+SVM machine-learning approach and the capability for real-time high-definition video exploitation.
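A hedged sketch of the general pipeline (fixed, pre-trained CNN features fed to a fast linear SVM) is given below. It substitutes scikit-learn's LinearSVC and synthetic feature vectors for the proprietary frequency-domain SVM and the real CNN features, so it only illustrates the division of labor, not the Airbus implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

# Placeholder for CNN features: in the described pipeline these would come from a
# pre-trained convolutional network applied to image chips; random vectors with a
# simple class structure stand in here so the sketch runs without the real CNN.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 512))                        # 500 chips, 512-D "CNN" features
labels = (features[:, 0] + features[:, 1] > 0).astype(int)    # target vs. clutter

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
svm = LinearSVC(C=1.0)   # fast linear SVM trained on the fixed feature representation
svm.fit(X_tr, y_tr)
print("held-out accuracy:", svm.score(X_te, y_te))
```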
Integrating visual learning within a model-based ATR system
NASA Astrophysics Data System (ADS)
Carlotto, Mark; Nebrich, Mark
2017-05-01
Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.
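The bootstrap learning idea, training a classifier on labeled chips and then folding in its own high-confidence predictions on unlabeled chips, can be sketched as a generic self-training loop. The feature vectors, the 0.9 confidence gate, and the logistic-regression classifier below are placeholders rather than the IDC component described in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Hypothetical feature vectors for image chips (stand-ins for the cortex-model features).
X_labeled = rng.normal(size=(100, 20))
y_labeled = (X_labeled[:, 0] > 0).astype(int)          # target / clutter labels
X_unlabeled = rng.normal(size=(400, 20))

clf = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
for _ in range(3):                                     # a few bootstrap rounds
    proba = clf.predict_proba(X_unlabeled)
    confident = proba.max(axis=1) > 0.9                # confidence gate on pseudo-labels
    X_aug = np.vstack([X_labeled, X_unlabeled[confident]])
    y_aug = np.concatenate([y_labeled, proba[confident].argmax(axis=1)])
    clf = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)

print("training-set size after bootstrapping:", len(y_aug))
```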
Extraction of edge-based and region-based features for object recognition
NASA Astrophysics Data System (ADS)
Coutts, Benjamin; Ravi, Srinivas; Hu, Gongzhu; Shrikhande, Neelima
1993-08-01
One of the central problems of computer vision is object recognition. A catalogue of model objects is described as a set of features such as edges and surfaces. The same features are extracted from the scene and matched against the models for object recognition. Edges and surfaces extracted from the scenes are often noisy and imperfect. In this paper algorithms are described for improving low level edge and surface features. Existing edge extraction algorithms are applied to the intensity image to obtain edge features. Initial edges are traced by following directions of the current contour. These are improved by using corresponding depth and intensity information for decision making at branch points. Surface fitting routines are applied to the range image to obtain planar surface patches. An algorithm of region growing is developed that starts with a coarse segmentation and uses quadric surface fitting to iteratively merge adjacent regions into quadric surfaces based on approximate orthogonal distance regression. Surface information obtained is returned to the edge extraction routine to detect and remove fake edges. This process repeats until no more merging or edge improvement can take place. Both synthetic (with Gaussian noise) and real images containing multiple object scenes have been tested using the merging criteria. Results appeared quite encouraging.
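A much-simplified sketch of the region-merging idea follows: adjacent regions are merged while a joint least-squares plane fit stays within a residual tolerance. The actual algorithm fits quadric surfaces with approximate orthogonal distance regression and uses image adjacency; the planar fit, tolerance value, and list-based adjacency here are illustrative assumptions.

```python
import numpy as np

def plane_fit_residual(points):
    """RMS residual of a least-squares plane z = ax + by + c fitted to Nx3 points."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return np.sqrt(np.mean((A @ coeffs - points[:, 2]) ** 2))

def merge_regions(regions, tol=0.02):
    """Greedily merge adjacent regions while the joint surface fit stays within tol."""
    merged = True
    while merged and len(regions) > 1:
        merged = False
        for i in range(len(regions) - 1):
            joint = np.vstack([regions[i], regions[i + 1]])
            if plane_fit_residual(joint) < tol:
                regions[i:i + 2] = [joint]   # replace the pair with the merged region
                merged = True
                break
    return regions

# Toy usage: two noisy patches of the same plane merge; a tilted patch stays separate.
rng = np.random.default_rng(2)
def patch(slope):
    xy = rng.uniform(0, 1, size=(50, 2))
    z = slope * xy[:, 0] + 0.01 * rng.normal(size=50)
    return np.c_[xy, z]
print(len(merge_regions([patch(0.5), patch(0.5), patch(2.0)])))
```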
Age- and sex-related disturbance in a battery of sensorimotor and cognitive tasks in Kunming mice.
Chen, Gui-Hai; Wang, Yue-Ju; Zhang, Li-Qun; Zhou, Jiang-Ning
2004-12-15
A battery of tasks, i.e. beam walking, open field, tightrope, radial six-arm water maze (RAWM), novel-object recognition and olfactory discrimination, was used to determine whether there was age- and sex-related memory deterioration in Kunming (KM) mice, and whether these tasks are independent or correlated with each other. Two age groups of KM mice were used: a younger group (7-8 months old, 12 males and 11 females) and an older group (17-18 months old, 12 males and 12 females). The results showed that the spatial learning ability and memory in the RAWM were lower in older female KM mice relative to younger female mice and older male mice. Consistent with this, in the novel-object recognition task, a non-spatial cognitive task, older female mice but not older male mice had impairment of short-term memory. In olfactory discrimination, another non-spatial task, the older mice retained this ability. Interestingly, female mice performed better than males, especially in the younger group. The older females exhibited sensorimotor impairment in the tightrope task and low locomotor activity in the open-field task. Moreover, older mice spent a longer time in the peripheral squares of the open-field than younger ones. The non-spatial cognitive performance in the novel-object recognition and olfactory discrimination tasks was related to performance in the open-field, whereas the spatial cognitive performance in the RAWM was not related to performance in any of the three sensorimotor tasks. These results suggest that disturbance of spatial learning and memory, as well as selective impairment of non-spatial learning and memory, existed in older female KM mice.
Exploiting core knowledge for visual object recognition.
Schurgin, Mark W; Flombaum, Jonathan I
2017-03-01
Humans recognize thousands of objects, and with relative tolerance to variable retinal inputs. The acquisition of this ability is not fully understood, and it remains an area in which artificial systems have yet to surpass people. We sought to investigate the memory process that supports object recognition. Specifically, we investigated the association of inputs that co-occur over short periods of time. We tested the hypothesis that human perception exploits expectations about object kinematics to limit the scope of association to inputs that are likely to have the same token as a source. In several experiments we exposed participants to images of objects, and we then tested recognition sensitivity. Using motion, we manipulated whether successive encounters with an image took place through kinematics that implied the same or a different token as the source of those encounters. Images were injected with noise, or shown at varying orientations, and we included 2 manipulations of motion kinematics. Across all experiments, memory performance was better for images that had been previously encountered with kinematics that implied a single token. A model-based analysis similarly showed greater memory strength when images were shown via kinematics that implied a single token. These results suggest that constraints from physics are built into the mechanisms that support memory about objects. Such constraints-often characterized as 'Core Knowledge'-are known to support perception and cognition broadly, even in young infants. But they have never been considered as a mechanism for memory with respect to recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Cadieu, Charles F.; Hong, Ha; Yamins, Daniel L. K.; Pinto, Nicolas; Ardila, Diego; Solomon, Ethan A.; Majaj, Najib J.; DiCarlo, James J.
2014-01-01
The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of “kernel analysis” that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds. PMID:25521294
Expression of HIV-Tat protein is associated with learning and memory deficits in the mouse
Carey, Amanda N.; Sypek, Elizabeth I.; Singh, Harminder D.; Kaufman, Marc J.; McLaughlin, Jay P.
2012-01-01
HIV-Tat protein has been implicated in the pathogenesis of HIV-1 neurological complications (i.e., neuroAIDS), but direct demonstrations of the effects of Tat on behavior are limited. GT-tg mice with a doxycycline (Dox)-inducible and brain-selective tat gene coding for Tat protein were used to test the hypothesis that the activity of Tat in brain is sufficient to impair learning and memory processes. Western blot analysis of GT-tg mouse brains demonstrated an increase in Tat antibody labeling that seemed to be dependent on the dose and duration of Dox pretreatment. Dox-treated GT-tg mice tested in the Barnes maze demonstrated longer latencies to find an escape hole and displayed deficits in probe trial performance, versus uninduced GT-tg littermates, suggesting Tat-induced impairments of spatial learning and memory. Reversal learning was also impaired in Tat-induced mice. Tat-induced mice additionally demonstrated long-lasting (up to one month) deficiencies in novel object recognition learning and memory performance. Furthermore, novel object recognition impairment was dependent on the dose and duration of Dox exposure, suggesting that Tat exposure progressively mediated deficits. These experiments provide evidence that Tat protein expression is sufficient to mediate cognitive abnormalities seen in HIV-infected individuals. Moreover, the genetically engineered GT-tg mouse may be useful for improving our understanding of the neurological underpinnings of neuroAIDS-related behaviors. PMID:22197678
Interidentity amnesia for neutral, episodic information in dissociative identity disorder.
Huntjens, Rafaële J C; Postma, Albert; Peters, Madelon L; Woertman, Liesbeth; van der Hart, Onno
2003-05-01
Interidentity amnesia is considered a hallmark of dissociative identity disorder (DID) in clinical practice. In this study, objective methods of testing episodic memory transfer between identities were used. Tests of both recall (interference paradigm) and recognition were used. A sample of 31 DID patients was included. Additionally, 50 control subjects participated, half functioning as normal controls and the other half simulating interidentity amnesia. Twenty-one patients subjectively reported complete one-way amnesia for the learning episode. However, objectively, neither recall nor recognition scores of patients were different from those of normal controls. It is suggested that clinical models of amnesia in DID may be specified to exclude episodic memory impairments for emotionally neutral material.
NASA Astrophysics Data System (ADS)
Kozoderov, V. V.; Kondranin, T. V.; Dmitriev, E. V.
2017-12-01
A basic model for recognizing natural and anthropogenic objects from their spectral and textural features is described for the problem of processing hyperspectral airborne and spaceborne imagery. The model is based on improvements to the Bayesian classifier, a computational procedure for statistical decision making used in machine-learning methods of pattern recognition. The principal component method is implemented to decompose the hyperspectral measurements on the basis of empirical orthogonal functions. Application examples of various modifications of the Bayesian classifier and the Support Vector Machine method are shown. Examples are provided comparing these classifiers with a metric classifier based on finding the minimal Euclidean distance between points and sets in the multidimensional feature space. A comparison is also carried out with the "K-weighted neighbors" method, which is close to the nonparametric Bayesian classifier.
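A rough, library-based analogue of the described comparison can be sketched with scikit-learn: principal components reduce the hyperspectral signatures, after which a Gaussian Bayes classifier, an SVM, and a minimum-distance (nearest-centroid) classifier are compared. The synthetic data, band count, and component count are assumptions, and NearestCentroid merely stands in for the paper's Euclidean-distance classifier.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import NearestCentroid
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Synthetic stand-in for per-pixel hyperspectral signatures (200 bands, 3 classes).
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(200, 200)) for m in (0.0, 0.5, 1.0)])
y = np.repeat([0, 1, 2], 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pca = PCA(n_components=10).fit(X_tr)          # empirical-orthogonal-function style reduction
Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

for name, clf in [("Gaussian Bayes", GaussianNB()),
                  ("SVM (RBF)", SVC()),
                  ("minimum-distance", NearestCentroid())]:
    clf.fit(Z_tr, y_tr)
    print(name, clf.score(Z_te, y_te))
```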
Learning during processing: Word learning doesn’t wait for word recognition to finish
Apfelbaum, Keith S.; McMurray, Bob
2017-01-01
Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete. PMID:27471082
Scaling up spike-and-slab models for unsupervised feature learning.
Goodfellow, Ian J; Courville, Aaron; Bengio, Yoshua
2013-08-01
We describe the use of two spike-and-slab models for modeling real-valued data, with an emphasis on their applications to object recognition. The first model, which we call spike-and-slab sparse coding (S3C), is a preexisting model for which we introduce a faster approximate inference algorithm. We introduce a deep variant of S3C, which we call the partially directed deep Boltzmann machine (PD-DBM) and extend our S3C inference algorithm for use on this model. We describe learning procedures for each. We demonstrate that our inference procedure for S3C enables scaling the model to unprecedented large problem sizes, and demonstrate that using S3C as a feature extractor results in very good object recognition performance, particularly when the number of labeled examples is low. We show that the PD-DBM generates better samples than its shallow counterpart, and that unlike DBMs or DBNs, the PD-DBM may be trained successfully without greedy layerwise training.
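The spike-and-slab generative form underlying S3C can be illustrated by sampling from it: binary spike variables gate Gaussian slab variables, and the resulting sparse codes are mixed through a dictionary with additive noise. The dimensions, priors, and noise level below are arbitrary assumptions, and inference and learning (the paper's actual contribution) are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
n_visible, n_hidden = 64, 16

W = rng.normal(scale=0.1, size=(n_visible, n_hidden))   # dictionary / weights
spike_prob = 0.1                                         # prior probability a unit is "on"
slab_std = 1.0                                           # std of the Gaussian slab
noise_std = 0.05                                         # visible (observation) noise

def sample_s3c(n_samples):
    """Draw samples from a spike-and-slab sparse-coding style generative model."""
    h = rng.binomial(1, spike_prob, size=(n_samples, n_hidden))   # spike variables
    s = rng.normal(scale=slab_std, size=(n_samples, n_hidden))    # slab variables
    codes = h * s                                                  # sparse real-valued codes
    v = codes @ W.T + rng.normal(scale=noise_std, size=(n_samples, n_visible))
    return v, codes

v, codes = sample_s3c(5)
print(v.shape, "active units per sample:", (codes != 0).sum(axis=1))
```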
A rodent model for the study of invariant visual object recognition
Zoccolan, Davide; Oertelt, Nadja; DiCarlo, James J.; Cox, David D.
2009-01-01
The human visual system is able to recognize objects despite tremendous variation in their appearance on the retina resulting from variation in view, size, lighting, etc. This ability—known as “invariant” object recognition—is central to visual perception, yet its computational underpinnings are poorly understood. Traditionally, nonhuman primates have been the animal model-of-choice for investigating the neuronal substrates of invariant recognition, because their visual systems closely mirror our own. Meanwhile, simpler and more accessible animal models such as rodents have been largely overlooked as possible models of higher-level visual functions, because their brains are often assumed to lack advanced visual processing machinery. As a result, little is known about rodents' ability to process complex visual stimuli in the face of real-world image variation. In the present work, we show that rats possess more advanced visual abilities than previously appreciated. Specifically, we trained pigmented rats to perform a visual task that required them to recognize objects despite substantial variation in their appearance, due to changes in size, view, and lighting. Critically, rats were able to spontaneously generalize to previously unseen transformations of learned objects. These results provide the first systematic evidence for invariant object recognition in rats and argue for an increased focus on rodents as models for studying high-level visual processing. PMID:19429704
Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images.
Udupa, Jayaram K; Odhner, Dewey; Zhao, Liming; Tong, Yubing; Matsumoto, Monica M S; Ciesielski, Krzysztof C; Falcao, Alexandre X; Vaideeswaran, Pavithra; Ciesielski, Victoria; Saboury, Babak; Mohammadianrasanani, Syedmehrdad; Sin, Sanghun; Arens, Raanan; Torigian, Drew A
2014-07-01
To make Quantitative Radiology (QR) a reality in radiological practice, computerized body-wide Automatic Anatomy Recognition (AAR) becomes essential. With the goal of building a general AAR system that is not tied to any specific organ system, body region, or image modality, this paper presents an AAR methodology for localizing and delineating all major organs in different body regions based on fuzzy modeling ideas and a tight integration of fuzzy models with an Iterative Relative Fuzzy Connectedness (IRFC) delineation algorithm. The methodology consists of five main steps: (a) gathering image data for both building models and testing the AAR algorithms from patient image sets existing in our health system; (b) formulating precise definitions of each body region and organ and delineating them following these definitions; (c) building hierarchical fuzzy anatomy models of organs for each body region; (d) recognizing and locating organs in given images by employing the hierarchical models; and (e) delineating the organs following the hierarchy. In Step (c), we explicitly encode object size and positional relationships into the hierarchy and subsequently exploit this information in object recognition in Step (d) and delineation in Step (e). Modality-independent and dependent aspects are carefully separated in model encoding. At the model building stage, a learning process is carried out for rehearsing an optimal threshold-based object recognition method. The recognition process in Step (d) starts from large, well-defined objects and proceeds down the hierarchy in a global to local manner. A fuzzy model-based version of the IRFC algorithm is created by naturally integrating the fuzzy model constraints into the delineation algorithm. The AAR system is tested on three body regions - thorax (on CT), abdomen (on CT and MRI), and neck (on MRI and CT) - involving a total of over 35 organs and 130 data sets (the total used for model building and testing). The training and testing data sets are divided into equal size in all cases except for the neck. Overall the AAR method achieves a mean accuracy of about 2 voxels in localizing non-sparse blob-like objects and most sparse tubular objects. The delineation accuracy in terms of mean false positive and negative volume fractions is 2% and 8%, respectively, for non-sparse objects, and 5% and 15%, respectively, for sparse objects. The two object groups achieve mean boundary distance relative to ground truth of 0.9 and 1.5 voxels, respectively. Some sparse objects - venous system (in the thorax on CT), inferior vena cava (in the abdomen on CT), and mandible and naso-pharynx (in neck on MRI, but not on CT) - pose challenges at all levels, leading to poor recognition and/or delineation results. The AAR method fares quite favorably when compared with methods from the recent literature for liver, kidneys, and spleen on CT images. We conclude that separation of modality-independent from dependent aspects, organization of objects in a hierarchy, encoding of object relationship information explicitly into the hierarchy, optimal threshold-based recognition learning, and fuzzy model-based IRFC are effective concepts which allowed us to demonstrate the feasibility of a general AAR system that works in different body regions on a variety of organs and on different modalities. Copyright © 2014 Elsevier B.V. All rights reserved.
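As a loose illustration of the "optimal threshold-based recognition learning" step, the sketch below picks, from training images and their delineations, the threshold that maximizes mean Dice overlap. This is a deliberately simplified stand-in: the AAR method searches thresholds on a fuzzy-model recognition criterion within an organ hierarchy, not on raw intensity, and the toy images here are invented for the example.

```python
import numpy as np

def learn_optimal_threshold(images, masks, candidates=np.linspace(0, 1, 51)):
    """Pick the threshold whose binary result best overlaps the training delineations."""
    def dice(a, b):
        return 2 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)
    scores = [np.mean([dice(img > t, m) for img, m in zip(images, masks)])
              for t in candidates]
    return candidates[int(np.argmax(scores))]

# Toy usage: a bright blob (the "organ") on a dark background.
rng = np.random.default_rng(8)
imgs, msks = [], []
for _ in range(5):
    m = np.zeros((32, 32), bool)
    m[10:20, 10:20] = True
    imgs.append(np.where(m, 0.8, 0.2) + 0.05 * rng.normal(size=(32, 32)))
    msks.append(m)
print("learned threshold:", learn_optimal_threshold(imgs, msks))
```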
Large-scale weakly supervised object localization via latent category learning.
Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve
2015-04-01
Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image conditions, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts, or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose an online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Classes (VOC) 2007 and the large-scale ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve annotation precision by 10% over previous methods. More importantly, we achieve a detection precision which outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.
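The first two LCL steps can be mimicked with a small scikit-learn sketch: latent semantic analysis (TruncatedSVD) over bag-of-visual-words counts yields candidate latent categories, and the category whose activation best separates positive from negative image-level labels is selected. The codebook size, component count, and synthetic counts are assumptions; the semantic object representation and the online variant are not reproduced.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(5)
# Synthetic bag-of-visual-words counts for 200 images over a 300-word codebook;
# positive images (label 1) over-use the first 30 visual words (the "object" words).
labels = np.repeat([1, 0], 100)
counts = rng.poisson(1.0, size=(200, 300)).astype(float)
counts[labels == 1, :30] += rng.poisson(3.0, size=(100, 30))

# Latent semantic analysis: each component is a candidate latent category
# (object, object part, or background).
lsa = TruncatedSVD(n_components=10, random_state=0)
topics = lsa.fit_transform(counts)

# Category selection: keep the component whose activation best separates
# positive from negative image-level labels.
discrimination = np.abs(topics[labels == 1].mean(0) - topics[labels == 0].mean(0))
best = int(np.argmax(discrimination))
print("selected latent category:", best, "discrimination:", discrimination[best].round(3))
```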
Advances in the behavioural testing and network imaging of rodent recognition memory
Kinnavane, Lisa; Albasser, Mathieu M.; Aggleton, John P.
2015-01-01
Research into object recognition memory has been galvanised by the introduction of spontaneous preference tests for rodents. The standard task, however, contains a number of inherent shortcomings that reduce its power. Particular issues include the problem that individual trials are time consuming, so limiting the total number of trials in any condition. In addition, the spontaneous nature of the behaviour and the variability between test objects add unwanted noise. To combat these issues, the ‘bow-tie maze’ was introduced. Although still based on the spontaneous preference of novel over familiar stimuli, the ability to give multiple trials within a session without handling the rodents, as well as using the same objects as both novel and familiar samples on different trials, overcomes key limitations in the standard task. Giving multiple trials within a single session also creates new opportunities for functional imaging of object recognition memory. A series of studies are described that examine the expression of the immediate-early gene, c-fos. Object recognition memory is associated with increases in perirhinal cortex and area Te2 c-fos activity. When rats explore novel objects the pathway from the perirhinal cortex to lateral entorhinal cortex, and then to the dentate gyrus and CA3, is engaged. In contrast, when familiar objects are explored the pathway from the perirhinal cortex to lateral entorhinal cortex, and then to CA1, takes precedence. The switch to the perforant pathway (novel stimuli) from the temporoammonic pathway (familiar stimuli) may assist the enhanced associative learning promoted by novel stimuli. PMID:25106740
Bruining, Hilgo; Matsui, Asuka; Oguro-Ando, Asami; Kahn, René S; Van't Spijker, Heleen M; Akkermans, Guus; Stiedl, Oliver; van Engeland, Herman; Koopmans, Bastijn; van Lith, Hein A; Oppelaar, Hugo; Tieland, Liselotte; Nonkes, Lourens J; Yagi, Takeshi; Kaneko, Ryosuke; Burbach, J Peter H; Yamamoto, Nobuhiko; Kas, Martien J
2015-10-01
Quantitative genetic analysis of basic mouse behaviors is a powerful tool to identify novel genetic phenotypes contributing to neurobehavioral disorders. Here, we analyzed genetic contributions to single-trial, long-term social and nonsocial recognition and subsequently studied the functional impact of an identified candidate gene on behavioral development. Genetic mapping of single-trial social recognition was performed in chromosome substitution strains, a sophisticated tool for detecting quantitative trait loci (QTL) of complex traits. Follow-up occurred by generating and testing knockout (KO) mice of a selected QTL candidate gene. Functional characterization of these mice was performed through behavioral and neurological assessments across developmental stages and analyses of gene expression and brain morphology. Chromosome substitution strain 14 mapping studies revealed an overlapping QTL related to long-term social and object recognition harboring Pcdh9, a cell-adhesion gene previously associated with autism spectrum disorder. Specific long-term social and object recognition deficits were confirmed in homozygous (KO) Pcdh9-deficient mice, while heterozygous mice only showed long-term social recognition impairment. The recognition deficits in KO mice were not associated with alterations in perception, multi-trial discrimination learning, sociability, behavioral flexibility, or fear memory. Rather, KO mice showed additional impairments in sensorimotor development reflected by early touch-evoked biting, rotarod performance, and sensory gating deficits. This profile emerged with structural changes in deep layers of sensory cortices, where Pcdh9 is selectively expressed. This behavior-to-gene study implicates Pcdh9 in cognitive functions required for long-term social and nonsocial recognition. This role is supported by the involvement of Pcdh9 in sensory cortex development and sensorimotor phenotypes. Copyright © 2015 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Revisiting the earliest electrophysiological correlate of familiar face recognition.
Huang, Wanyi; Wu, Xia; Hu, Liping; Wang, Lei; Ding, Yulong; Qu, Zhe
2017-10-01
The present study used event-related potentials (ERPs) to reinvestigate the earliest face familiarity effect (FFE: ERP differences between familiar and unfamiliar faces) that genuinely reflects cognitive processes underlying recognition of familiar faces in long-term memory. To trigger relatively early FFEs, participants were required to categorize upright and inverted famous faces and unknown faces in a task that placed high demand on face recognition. More importantly, to determine whether an observed FFE was linked to on-line face recognition, systematical investigation about the relationship between the FFE and behavioral performance of face recognition was conducted. The results showed significant FFEs on P1, N170, N250, and P300 waves. The FFEs on occipital P1 and N170 (<200ms) showed reversed polarities for upright and inverted faces, and were not correlated with any behavioral measure (accuracy, response time) or modulated by learning, indicating that they might merely reflect low-level visual differences between face sets. In contrast, the later FFEs on occipito-temporal N250 (~230ms) and centro-parietal P300 (~350ms) showed consistent polarities for upright and inverted faces. The N250 FFE was individually correlated with recognition speed for upright faces, and could be obtained for inverted faces through learning. The P300 FFE was also related to behavior in many aspects. These findings provide novel evidence supporting that cognitive discrimination of familiar and unfamiliar faces starts no less than 200ms after stimulus onset, and the familiarity effect on N250 may be the first electrophysiological correlate underlying recognition of familiar faces in long-term memory. Copyright © 2017 Elsevier B.V. All rights reserved.
A Study on Mobile Learning as a Learning Style in Modern Research Practice
ERIC Educational Resources Information Center
Joan, D. R. Robert
2013-01-01
Mobile learning is a kind of learning that takes place via a portable handheld electronic device. It also refers to learning via other kinds of mobile devices such as tablet computers, net-books and digital readers. The objective of mobile learning is to provide the learner the ability to assimilate learning anywhere and at anytime. Mobile devices…
Eguchi, Akihiro; Mender, Bedeho M. W.; Evans, Benjamin D.; Humphreys, Glyn W.; Stringer, Simon M.
2015-01-01
Neurons in successive stages of the primate ventral visual pathway encode the spatial structure of visual objects. In this paper, we investigate through computer simulation how these cell firing properties may develop through unsupervised visually-guided learning. Individual neurons in the model are shown to exploit statistical regularity and temporal continuity of the visual inputs during training to learn firing properties that are similar to neurons in V4 and TEO. Neurons in V4 encode the conformation of boundary contour elements at a particular position within an object regardless of the location of the object on the retina, while neurons in TEO integrate information from multiple boundary contour elements. This representation goes beyond mere object recognition, in which neurons simply respond to the presence of a whole object, but provides an essential foundation from which the brain is subsequently able to recognize the whole object. PMID:26300766
Tcheng, David K.; Nayak, Ashwin K.; Fowlkes, Charless C.; Punyasena, Surangi W.
2016-01-01
Discriminating between black and white spruce (Picea mariana and Picea glauca) is a difficult palynological classification problem that, if solved, would provide valuable data for paleoclimate reconstructions. We developed an open-source visual recognition software (ARLO, Automated Recognition with Layered Optimization) capable of differentiating between these two species at an accuracy on par with human experts. The system applies pattern recognition and machine learning to the analysis of pollen images and discovers general-purpose image features, defined by simple features of lines and grids of pixels taken at different dimensions, size, spacing, and resolution. It adapts to a given problem by searching for the most effective combination of both feature representation and learning strategy. This results in a powerful and flexible framework for image classification. We worked with images acquired using an automated slide scanner. We first applied a hash-based “pollen spotting” model to segment pollen grains from the slide background. We next tested ARLO’s ability to reconstruct black to white spruce pollen ratios using artificially constructed slides of known ratios. We then developed a more scalable hash-based method of image analysis that was able to distinguish between the pollen of black and white spruce with an estimated accuracy of 83.61%, comparable to human expert performance. Our results demonstrate the capability of machine learning systems to automate challenging taxonomic classifications in pollen analysis, and our success with simple image representations suggests that our approach is generalizable to many other object recognition problems. PMID:26867017
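A toy analogue of ARLO's grid-of-pixels features is sketched below: mean-pixel grids at several resolutions are concatenated into a feature vector and fed to an off-the-shelf classifier. The grid sizes, the random-forest classifier, and the synthetic "grain" images are placeholders; ARLO additionally searches over feature representations and learning strategies, which is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def grid_features(img, sizes=(4, 8, 16)):
    """Concatenate mean-pixel grids at several resolutions (a crude stand-in
    for ARLO's line/grid pixel features)."""
    feats = []
    for s in sizes:
        h, w = img.shape[0] // s, img.shape[1] // s
        block = img[:h * s, :w * s].reshape(h, s, w, s).mean(axis=(1, 3))
        feats.append(block.ravel())
    return np.concatenate(feats)

# Synthetic "pollen grain" images for two classes differing in texture scale.
rng = np.random.default_rng(6)
def fake_grain(coarse):
    base = rng.normal(size=(8, 8) if coarse else (32, 32))
    return np.kron(base, np.ones((8, 8) if coarse else (2, 2)))[:64, :64]

X = np.array([grid_features(fake_grain(c)) for c in ([True] * 50 + [False] * 50)])
y = np.array([1] * 50 + [0] * 50)
clf = RandomForestClassifier(random_state=0).fit(X[::2], y[::2])  # train on half
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```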
Mathematical Abstraction: Constructing Concept of Parallel Coordinates
NASA Astrophysics Data System (ADS)
Nurhasanah, F.; Kusumah, Y. S.; Sabandar, J.; Suryadi, D.
2017-09-01
Mathematical abstraction is an important process in teaching and learning mathematics, so pre-service mathematics teachers need to understand and experience this process. One theoretical-methodological framework for studying this process is Abstraction in Context (AiC). In this framework, the abstraction process comprises the observable epistemic actions Recognition, Building-With, Construction, and Consolidation, known as the RBC + C model. This study investigates and analyzes how pre-service mathematics teachers constructed and consolidated the concept of Parallel Coordinates in a group discussion. It uses the AiC framework to analyze the mathematical abstraction of a group of four pre-service teachers learning Parallel Coordinates concepts. The data were collected through video recording, students' worksheets, a test, and field notes. The results show that the students' prior knowledge of the Cartesian coordinate system played a significant role in constructing the Parallel Coordinates concept as new knowledge. The consolidation process was influenced by the social interaction between group members. The abstraction process that took place in this group was dominated by empirical abstraction, which emphasizes identifying characteristics of manipulated or imagined objects during the processes of recognizing and building-with.
Reinforcement learning in computer vision
NASA Astrophysics Data System (ADS)
Bernstein, A. V.; Burnaev, E. V.
2018-04-01
Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition and tracking. In many applications, complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving the corresponding computer vision tasks. The solutions to these tasks are used for making decisions about possible future actions. It is therefore not surprising that, when solving computer vision tasks, we should take into account the specifics of their subsequent use in model-based predictive control. Reinforcement learning is a modern machine learning technology in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for applied tasks such as the processing and analysis of visual information, and for specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes reinforcement learning technology and its use for solving computer vision problems.
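Since the abstract stays at the survey level, a minimal tabular Q-learning loop is sketched below to show the core update rule that such vision applications build upon. The toy one-dimensional corridor, learning rate, and reward scheme are arbitrary illustrative choices, not anything from the reviewed paper.

```python
# Generic tabular Q-learning on a toy 1-D corridor; illustrative only,
# not a vision-specific method from the reviewed paper.
import numpy as np

n_states, n_actions = 10, 2          # actions: 0 = step left, 1 = step right
goal = n_states - 1
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(1)

def choose_action(q_row):
    """Epsilon-greedy with random tie-breaking."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    best = np.flatnonzero(q_row == q_row.max())
    return int(rng.choice(best))

for episode in range(500):
    s = 0
    for step in range(200):                       # cap episode length
        a = choose_action(Q[s])
        s_next = max(0, s - 1) if a == 0 else min(goal, s + 1)
        r = 1.0 if s_next == goal else 0.0
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == goal:
            break

print(Q.argmax(axis=1)[:goal])   # greedy actions before the goal: all 1 (right)
```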
Learning effect of computerized cognitive tests in older adults
de Oliveira, Rafaela Sanches; Trezza, Beatriz Maria; Busse, Alexandre Leopold; Jacob-Filho, Wilson
2014-01-01
ABSTRACT Objective: To evaluate the learning effect of computerized cognitive testing in the elderly. Methods: Cross-sectional study with 20 elderly adults (10 women and 10 men) with an average age of 77.5 (±4.28) years. The volunteers performed two series of computerized cognitive tests in sequence and their results were compared. The applied tests were: Trail Making A and B, Spatial Recognition, Go/No Go, Memory Span, Pattern Recognition Memory and Reverse Span. Results: Based on the comparison of the results, a learning effect was observed only in the Trail Making A test (p=0.019). The other tests showed no significant performance improvements. There was no correlation between the learning effect and age (p=0.337) or education (p=0.362), and no difference between genders (p=0.465). Conclusion: The computerized cognitive tests, when repeated immediately afterwards in the elderly, revealed no change in performance, with the exception of the Trail Making test, demonstrating high clinical applicability even at short intervals. PMID:25003917
Ramachers, Stefanie; Brouwer, Susanne; Fikkert, Paula
2017-01-01
In this study, Limburgian and Dutch 2.5- to 4-year-olds and adults took part in a word learning experiment. Following the procedure employed by Quam and Swingley (2010) and Singh et al. (2014), participants learned two novel word-object mappings. After training, word recognition was tested in correct pronunciation (CP) trials and mispronunciation (MP) trials featuring a pitch change. Since Limburgian is considered a restricted tone language, we expected that the pitch change would hinder word recognition in Limburgian, but not in non-tonal Dutch listeners. Contrary to our expectations, both Limburgian and Dutch children appeared to be sensitive to pitch changes in newly learned words, indicated by a significant decrease in target fixation in MP trials compared to CP trials. Limburgian and Dutch adults showed very strong naming effects in both trial types. The results are discussed against the background of the influence of the native prosodic system. PMID:29018382
Image processing and machine learning in the morphological analysis of blood cells.
Rodellar, J; Alférez, S; Acevedo, A; Molina, A; Merino, A
2018-05-01
This review focuses on how image processing and machine learning can be useful for the morphological characterization and automatic recognition of cell images captured from peripheral blood smears. The basics of the 3 core elements (segmentation, quantitative features, and classification) are outlined, and recent literature is discussed. Although red blood cells are a significant part of this context, this study focuses on malignant lymphoid cells and blast cells. There is no doubt that these technologies may help the cytologist to perform efficient, objective, and fast morphological analysis of blood cells. They may also help in the interpretation of some morphological features and may serve as learning and survey tools. Although research is still needed, it is important to define screening strategies to exploit the potential of image-based automatic recognition systems integrated in the daily routine of laboratories along with other analysis methodologies. © 2018 John Wiley & Sons Ltd.
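The review names three core elements (segmentation, quantitative features, classification); the sketch below strings a generic version of those steps together with scikit-image and scikit-learn. The synthetic blob image, the four hand-picked region features, and the placeholder class labels are assumptions for illustration only, not a validated cytology workflow.

```python
# Hedged sketch of the generic three-step pipeline named in the review
# (segmentation -> quantitative features -> classification).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from sklearn.svm import SVC

def segment_and_describe(gray_image):
    """Otsu segmentation followed by simple per-cell shape/intensity features."""
    mask = gray_image > threshold_otsu(gray_image)
    regions = regionprops(label(mask), intensity_image=gray_image)
    return np.array([[r.area, r.eccentricity, r.perimeter, r.mean_intensity]
                     for r in regions])

# Placeholder "smear": three Gaussian blobs standing in for cells on a slide.
yy, xx = np.mgrid[0:256, 0:256]
smear = np.zeros((256, 256))
for cy, cx in [(60, 60), (150, 180), (200, 80)]:
    smear += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 15.0 ** 2))

features = segment_and_describe(smear)
cell_labels = np.arange(len(features)) % 2        # placeholder class labels

clf = SVC(kernel="rbf").fit(features, cell_labels)
print(clf.predict(features))
```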
Matsumoto, Narihisa; Eldridge, Mark A G; Saunders, Richard C; Reoli, Rachel; Richmond, Barry J
2016-01-06
In primates, visual recognition of complex objects depends on the inferior temporal lobe. By extension, categorizing visual stimuli based on similarity ought to depend on the integrity of the same area. We tested three monkeys before and after bilateral anterior inferior temporal cortex (area TE) removal. Although mildly impaired after the removals, they retained the ability to assign stimuli to previously learned categories, e.g., cats versus dogs, and human versus monkey faces, even with trial-unique exemplars. After the TE removals, they learned in one session to classify members from a new pair of categories, cars versus trucks, as quickly as they had learned the cats versus dogs before the removals. As with the dogs and cats, they generalized across trial-unique exemplars of cars and trucks. However, as seen in earlier studies, these monkeys with TE removals had difficulty learning to discriminate between two simple black and white stimuli. These results raise the possibility that TE is needed for memory of simple conjunctions of basic features, but that it plays only a small role in generalizing overall configural similarity across a large set of stimuli, such as would be needed for perceptual categorical assignment. The process of seeing and recognizing objects is attributed to a set of sequentially connected brain regions stretching forward from the primary visual cortex through the temporal lobe to the anterior inferior temporal cortex, a region designated area TE. Area TE is considered the final stage for recognizing complex visual objects, e.g., faces. It has been assumed, but not tested directly, that this area would be critical for visual generalization, i.e., the ability to place objects such as cats and dogs into their correct categories. Here, we demonstrate that monkeys rapidly and seemingly effortlessly categorize large sets of complex images (cats vs dogs, cars vs trucks), surprisingly, even after removal of area TE, leaving a puzzle about how this generalization is done. Copyright © 2016 the authors 0270-6474/16/360043-11$15.00/0.
Bidirectional Modulation of Recognition Memory
Ho, Jonathan W.; Poeta, Devon L.; Jacobson, Tara K.; Zolnik, Timothy A.; Neske, Garrett T.; Connors, Barry W.
2015-01-01
Perirhinal cortex (PER) has a well established role in the familiarity-based recognition of individual items and objects. For example, animals and humans with perirhinal damage are unable to distinguish familiar from novel objects in recognition memory tasks. In the normal brain, perirhinal neurons respond to novelty and familiarity by increasing or decreasing firing rates. Recent work also implicates oscillatory activity in the low-beta and low-gamma frequency bands in sensory detection, perception, and recognition. Using optogenetic methods in a spontaneous object exploration (SOR) task, we altered recognition memory performance in rats. In the SOR task, normal rats preferentially explore novel images over familiar ones. We modulated exploratory behavior in this task by optically stimulating channelrhodopsin-expressing perirhinal neurons at various frequencies while rats looked at novel or familiar 2D images. Stimulation at 30–40 Hz during looking caused rats to treat a familiar image as if it were novel by increasing time looking at the image. Stimulation at 30–40 Hz was not effective in increasing exploration of novel images. Stimulation at 10–15 Hz caused animals to treat a novel image as familiar by decreasing time looking at the image, but did not affect looking times for images that were already familiar. We conclude that optical stimulation of PER at different frequencies can alter visual recognition memory bidirectionally. SIGNIFICANCE STATEMENT Recognition of novelty and familiarity are important for learning, memory, and decision making. Perirhinal cortex (PER) has a well established role in the familiarity-based recognition of individual items and objects, but how novelty and familiarity are encoded and transmitted in the brain is not known. Perirhinal neurons respond to novelty and familiarity by changing firing rates, but recent work suggests that brain oscillations may also be important for recognition. In this study, we showed that stimulation of the PER could increase or decrease exploration of novel and familiar images depending on the frequency of stimulation. Our findings suggest that optical stimulation of PER at specific frequencies can predictably alter recognition memory. PMID:26424881
ERIC Educational Resources Information Center
Weisberg, Renee; Balajthy, Ernest
A study investigated transfer effects of training below average high school readers in the use of graphic organizers and summary writing on their recognition of compare/contrast text structure. Subjects, 32 high school students with below-expectancy standardized test scores, were placed in two groups: an experimental group (five males and 11…
Maximal likelihood correspondence estimation for face recognition across pose.
Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang
2014-10-01
Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems in previous image matching-based correspondence learning methods: 1) they fail to fully exploit face-specific structure information in correspondence estimation, and 2) they fail to learn personalized correspondence for each probe image. To this end, we first build a model, termed morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on a maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using a linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in complex wild environments, i.e., the Labeled Faces in the Wild database.
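Only the final recognition step mentioned above (linear discriminant analysis on pixel-intensity features of frontalized faces) is sketched below; the MDF/MLCE correspondence estimation itself is not reproduced. The `frontalized` images and `identities` labels are synthetic placeholders.

```python
# Hedged sketch of the last stage only: LDA on pixel intensities of
# (already frontalized) face crops; all data here are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_ids, per_id, h, w = 5, 8, 32, 32
identities = np.repeat(np.arange(n_ids), per_id)
# Fake "frontalized" images: one prototype per identity plus pixel noise.
prototypes = rng.random((n_ids, h * w))
frontalized = prototypes[identities] + 0.1 * rng.standard_normal((n_ids * per_id, h * w))

lda = LinearDiscriminantAnalysis()
lda.fit(frontalized[::2], identities[::2])          # every other image for training
print("held-out accuracy:", lda.score(frontalized[1::2], identities[1::2]))
```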
Some of the thousand words a picture is worth.
Mandler, J M; Johnson, N S
1976-09-01
The effects of real-world schemata on recognition of complex pictures were studied. Two kinds of pictures were used: pictures of objects forming real-world scenes and unorganized collections of the same objects. The recognition test employed distractors that varied four types of information: inventory, spatial location, descriptive, and spatial composition. Results emphasized the selective nature of schemata since superior recognition of one kind of information was offset by loss of another. Spatial location information was better recognized in real-world scenes and spatial composition information was better recognized in unorganized scenes. Organized and unorganized pictures did not differ with respect to inventory and descriptive information. The longer the pictures were studied, the longer subjects took to recognize them. Reaction time for hits, misses, and false alarms increased dramatically as presentation time increased from 5 to 60 sec. It was suggested that detection of a difference in a distractor terminated search, but that when no difference was detected, an exhaustive search of the available information took place.
Arcos-García, Álvaro; Álvarez-García, Juan A; Soria-Morillo, Luis M
2018-03-01
This paper presents a Deep Learning approach for traffic sign recognition systems. Several classification experiments are conducted over publicly available traffic sign datasets from Germany and Belgium using a Deep Neural Network which comprises Convolutional layers and Spatial Transformer Networks. Such trials are built to measure the impact of diverse factors with the end goal of designing a Convolutional Neural Network that can improve the state-of-the-art of traffic sign classification task. First, different adaptive and non-adaptive stochastic gradient descent optimisation algorithms such as SGD, SGD-Nesterov, RMSprop and Adam are evaluated. Subsequently, multiple combinations of Spatial Transformer Networks placed at distinct positions within the main neural network are analysed. The recognition rate of the proposed Convolutional Neural Network reports an accuracy of 99.71% in the German Traffic Sign Recognition Benchmark, outperforming previous state-of-the-art methods and also being more efficient in terms of memory requirements. Copyright © 2018 Elsevier Ltd. All rights reserved.
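A minimal PyTorch sketch of the kind of setup being compared is given below: a small convolutional classifier for 43-class sign images with a swappable optimizer (SGD, SGD with Nesterov momentum, RMSprop, or Adam). It omits the Spatial Transformer modules and is not the paper's architecture; the input size, layer sizes, and learning rates are assumptions.

```python
# Minimal sketch (not the paper's architecture, and without Spatial Transformer
# modules): a small CNN for sign classification with a swappable optimizer.
import torch
import torch.nn as nn

class SmallSignNet(nn.Module):
    def __init__(self, n_classes=43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, n_classes)   # assumes 32x32 inputs

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def make_optimizer(name, params):
    if name == "sgd":
        return torch.optim.SGD(params, lr=0.01)
    if name == "sgd-nesterov":
        return torch.optim.SGD(params, lr=0.01, momentum=0.9, nesterov=True)
    if name == "rmsprop":
        return torch.optim.RMSprop(params, lr=0.001)
    return torch.optim.Adam(params, lr=0.001)

model = SmallSignNet()
opt = make_optimizer("adam", model.parameters())
x, y = torch.randn(16, 3, 32, 32), torch.randint(0, 43, (16,))  # placeholder batch
loss = nn.CrossEntropyLoss()(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```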
Levodopa enhances explicit new-word learning in healthy adults: a preliminary study.
Shellshear, Leanne; MacDonald, Anna D; Mahoney, Jeffrey; Finch, Emma; McMahon, Katie; Silburn, Peter; Nathan, Pradeep J; Copland, David A
2015-09-01
While the role of dopamine in modulating executive function, working memory and associative learning has been established, its role in word learning and language processing more generally is not clear. This preliminary study investigated the impact of increased synaptic dopamine levels on new-word learning ability in healthy young adults using an explicit learning paradigm. A double-blind, placebo-controlled, between-groups design was used. Participants completed five learning sessions over 1 week with levodopa or placebo administered at each session (five doses, 100 mg). Each session involved a study phase followed by a test phase. Test phases involved recall and recognition tests of the new (non-word) names previously paired with unfamiliar objects (half with semantic descriptions) during the study phase. The levodopa group showed superior recall accuracy for new words over five learning sessions compared with the placebo group and better recognition accuracy at a 1-month follow-up for words learnt with a semantic description. These findings suggest that dopamine boosts initial lexical acquisition and enhances longer-term consolidation of words learnt with semantic information, consistent with dopaminergic enhancement of semantic salience. Copyright © 2015 John Wiley & Sons, Ltd.
van den Berg, Ronald; Roerdink, Jos B. T. M.; Cornelissen, Frans W.
2010-01-01
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called “crowding”. Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, “compulsory averaging”, and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality. PMID:20098499
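The toy script below illustrates, in heavily simplified form, how pooling two orientation population codes and decoding the result yields something like "compulsory averaging". It is not the authors' published model; the tuning-curve shape, pooling rule, and decoding scheme are assumptions chosen for brevity.

```python
# Illustrative sketch only (not the published model): pooling two orientation
# population codes and decoding the sum returns roughly the average orientation.
import numpy as np

prefs = np.linspace(0, np.pi, 64, endpoint=False)        # preferred orientations

def population_response(theta, kappa=4.0):
    """Von-Mises-like tuning curves over orientation (period pi)."""
    return np.exp(kappa * np.cos(2 * (prefs - theta)))

def decode(response):
    """Population-vector decoding on the doubled angle."""
    angle = np.angle(np.sum(response * np.exp(2j * prefs)))
    return (angle / 2) % np.pi

target, flanker = np.deg2rad(20), np.deg2rad(60)
pooled = population_response(target) + population_response(flanker)
print("decoded (deg):", np.rad2deg(decode(pooled)))       # close to 40 degrees
```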
Social cognition in schizophrenia and healthy aging: differences and similarities.
Silver, Henry; Bilker, Warren B
2014-12-01
Social cognition is impaired in schizophrenia but it is not clear whether this is specific for the illness and whether emotion perception is selectively affected. To study this we examined the perception of emotional and non-emotional clues in facial expressions, a key social cognitive skill, in schizophrenia patients and old healthy individuals using young healthy individuals as reference. Tests of object recognition, visual orientation, psychomotor speed, and working memory were included to allow multivariate analysis taking into account other cognitive functions. Schizophrenia patients showed impairments in recognition of identity and emotional facial clues compared to young and old healthy groups. Severity was similar to that for object recognition and visuospatial processing. Older and younger healthy groups did not differ from each other on these tests. Schizophrenia patients and old healthy individuals were similarly impaired in the ability to automatically learn new faces during the testing procedure (measured by the CSTFAC index) compared to young healthy individuals. Social cognition is distinctly impaired in schizophrenia compared to healthy aging. Further study is needed to identify the mechanisms of automatic social cognitive learning impairment in schizophrenia patients and healthy aging individuals and determine whether similar neural systems are affected. Copyright © 2014 Elsevier B.V. All rights reserved.
Matheson, Heath E; Familiar, Ariana M; Thompson-Schill, Sharon L
2018-03-02
Theories of embodied cognition propose that we recognize tools in part by reactivating sensorimotor representations of tool use in a process of simulation. If motor simulations play a causal role in tool recognition, then performing a concurrent motor task should differentially modulate recognition of experienced vs. non-experienced tools. We sought to test the hypothesis that an incompatible concurrent motor task modulates conceptual processing of learned vs. non-learned objects by directly manipulating the embodied experience of participants. We trained one group to use a set of novel, 3-D printed tools under the pretense that they were preparing for an archeological expedition to Mars (manipulation group); we trained a second group to report declarative information about how the tools are stored (storage group). With this design, familiarity and visual attention to different object parts were similar for both groups, though their qualitative interactions differed. After learning, participants made familiarity judgments of auditorily presented tool names while performing a concurrent motor task or simply sitting at rest. We showed that familiarity judgments were facilitated by motor state-dependence; specifically, in the manipulation group, familiarity was facilitated by a concurrent motor task, whereas in the storage group familiarity was facilitated while sitting at rest. These results are the first to directly show that manipulation experience differentially modulates conceptual processing of familiar vs. unfamiliar objects, suggesting that embodied representations contribute to recognizing tools.
Markant, Julie; Worden, Michael S; Amso, Dima
2015-04-01
Learning through visual exploration often requires orienting of attention to meaningful information in a cluttered world. Previous work has shown that attention modulates visual cortex activity, with enhanced activity for attended targets and suppressed activity for competing inputs, thus enhancing the visual experience. Here we examined the idea that learning may be engaged differentially with variations in attention orienting mechanisms that drive eye movements during visual search and exploration. We hypothesized that attention orienting mechanisms that engaged suppression of a previously attended location would boost memory encoding of the currently attended target objects to a greater extent than those that involve target enhancement alone. To test this hypothesis we capitalized on the classic spatial cueing task and the inhibition of return (IOR) mechanism (Posner, 1980; Posner, Rafal, & Choate, 1985) to demonstrate that object images encoded in the context of concurrent suppression at a previously attended location were encoded more effectively and remembered better than those encoded without concurrent suppression. Furthermore, fMRI analyses revealed that this memory benefit was driven by attention modulation of visual cortex activity, as increased suppression of the previously attended location in visual cortex during target object encoding predicted better subsequent recognition memory performance. These results suggest that not all attention orienting impacts learning and memory equally. Copyright © 2015 Elsevier Inc. All rights reserved.
Chen, Guangdong; Lin, Xiaodong; Li, Gongying; Jiang, Diego; Lib, Zhiruo; Jiang, Ronghuan; Zhuo, Chuanjun
2017-01-01
The aim of the present study was to investigate the effects of a commonly-used atypical antipsychotic, risperidone, on alterations in spatial learning and in the hippocampal brain-derived neurotrophic factor (BDNF)-tyrosine receptor kinase B (TrkB) signalling system caused by acute dizocilpine maleate (MK-801) treatment. In experiment 1, adult male Sprague-Dawley rats subjected to acute treatment of either low-dose MK-801 (0.1 mg/kg) or normal saline (vehicle) were tested for spatial object recognition and hippocampal expression levels of BDNF, TrkB and the phosphorylation of TrkB (p-TrkB). We found that compared to the vehicle, MK-801 treatment impaired spatial object recognition of animals and downregulated the expression levels of p-TrkB. In experiment 2, MK-801- or vehicle-treated animals were further injected with risperidone (0.1 mg/kg) or vehicle before behavioural testing and sacrifice. Of note, we found that risperidone successfully reversed the deleterious effects of MK-801 on spatial object recognition and upregulated the hippocampal BDNF-TrkB signalling system. Collectively, the findings suggest that cognitive deficits from acute N-methyl-D-aspartate receptor blockade may be associated with the hypofunction of hippocampal BDNF-TrkB signalling system and that risperidone was able to reverse these alterations. PMID:28451387
Havranek, Tomas; Zatkova, Martina; Lestanova, Zuzana; Bacova, Zuzana; Mravec, Boris; Hodosy, Julius; Strbak, Vladimir; Bakos, Jan
2015-06-01
Brain oxytocin regulates a variety of social and affiliative behaviors and also affects learning and memory. However, mechanisms of its action at the level of neuronal circuits are not fully understood. The present study tests the hypothesis that molecular factors required for memory formation and synaptic plasticity, including brain-derived neurotrophic factor, neural growth factor, nestin, microtubule-associated protein 2 (MAP2), and synapsin I, are enhanced by central administration of oxytocin. We also investigated whether oxytocin enhances object recognition and acts as anxiolytic agent. Therefore, male Wistar rats were infused continuously with oxytocin (20 ng/µl) via an osmotic minipump into the lateral cerebral ventricle for 7 days; controls were infused with vehicle. The object recognition test, open field test, and elevated plus maze test were performed on the sixth, seventh, and eighth days from starting the infusion. No significant effects of oxytocin on anxious-like behavior were observed. The object recognition test showed that oxytocin-treated rats significantly preferred unknown objects. Oxytocin treatment significantly increased gene expression and protein levels of neurotrophins, MAP2, and synapsin I in the hippocampus. No changes were observed in nestin expression. Our results provide the first direct evidence implicating oxytocin as a regulator of brain plasticity at the level of changes of neuronal growth factors, cytoskeletal proteins, and behavior. The data support the assumption that oxytocin is important for short-term hippocampus-dependent memory. © 2015 Wiley Periodicals, Inc.
Automatic Sound Generation for Spherical Objects Hitting Straight Beams Based on Physical Models.
ERIC Educational Resources Information Center
Rauterberg, M.; And Others
Sounds are the result of one or several interactions between one or several objects at a certain place and in a certain environment; the attributes of every interaction influence the generated sound. The following factors influence users in human/computer interaction: the organization of the learning environment, the content of the learning tasks,…
Hippocampus lesions induced deficits in social and spatial recognition in Octodon degus.
Uekita, Tomoko; Okanoya, Kazuo
2011-06-01
Previous studies of rodents reported that the hippocampus plays an important role in social behavior as well as spatial behavior. However, there are inconsistencies between reports of the effects of hippocampal lesions on social behavior. The present study sought to clarify the aspects of social behavior in which the hippocampus plays a role in the degu, Octodon degus, a social rodent. We examined the effects of hippocampal lesions on social behavior in the degu using familiar and novel partners. When placed in a familiar environment with a familiar partner after surgery, sham operation control (S.Cont) degus exhibited affinitive behavior longer compared with hippocampal lesioned (HPC) degus. In a novel environment, S.Cont degus exhibited longer aggressive behavior toward novel partners, and longer affinitive behavior with familiar partners compared with HPC degus. HPC degus did not show evidence of differentiation in social behavior, regardless of partner's novelty. The results of an anxiety test confirmed that these findings could not be attributed to changes in emotional state. We conducted an object-recognition test with the same subjects. HPC degus showed an impairment in spatial recognition but not object recognition. Taken together, these results suggest that the degu hippocampus plays an important role not only in spatial recognition but also social recognition. The changes in social behavior resulting from hippocampal lesions were interpreted as due to an impairment of social recognition rather than an impairment in novelty detection. Copyright © 2011 Elsevier B.V. All rights reserved.
An adaptive deep Q-learning strategy for handwritten digit recognition.
Qiao, Junfei; Wang, Gongming; Li, Wenjing; Chen, Min
2018-02-22
Handwritten digit recognition has been a challenging problem in recent years. Although many deep learning-based classification algorithms have been studied for handwritten digit recognition, the recognition accuracy and running time still need to be further improved. In this paper, an adaptive deep Q-learning strategy is proposed to improve accuracy and shorten running time for handwritten digit recognition. The adaptive deep Q-learning strategy combines the feature-extracting capability of deep learning and the decision-making of reinforcement learning to form an adaptive Q-learning deep belief network (Q-ADBN). First, Q-ADBN extracts the features of original images using an adaptive deep auto-encoder (ADAE), and the extracted features are considered as the current states of the Q-learning algorithm. Second, Q-ADBN receives the Q-function (reward signal) during recognition of the current states, and the final handwritten digit recognition is implemented by maximizing the Q-function using the Q-learning algorithm. Finally, experimental results from the well-known MNIST dataset show that the proposed Q-ADBN is superior to other similar methods in terms of accuracy and running time. Copyright © 2018 Elsevier Ltd. All rights reserved.
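The sketch below conveys only the broad idea described above (extracted features treated as states, digit labels as actions, and a reward for correct recognition driving a Q-value update); it is not the Q-ADBN architecture, and the features are random placeholders rather than auto-encoder outputs.

```python
# Loose sketch of the broad idea only (features as states, labels as actions,
# reward for correct recognition), not the Q-ADBN model itself.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_digits = 2000, 20, 10
W_true = rng.standard_normal((n_features, n_digits))
X = rng.standard_normal((n_samples, n_features))          # placeholder features
y = X.dot(W_true).argmax(axis=1)                           # synthetic digit labels

Q = np.zeros((n_features, n_digits))    # linear Q-function: Q(s, a) = s @ Q[:, a]
alpha, eps = 0.01, 0.1
for s, label_ in zip(X, y):
    q_vals = s @ Q
    a = int(rng.integers(n_digits)) if rng.random() < eps else int(q_vals.argmax())
    reward = 1.0 if a == label_ else 0.0
    # One-step (bandit-style) Q update toward the observed reward.
    Q[:, a] += alpha * (reward - q_vals[a]) * s

accuracy = ((X @ Q).argmax(axis=1) == y).mean()
print("greedy accuracy on the training stream:", accuracy)
```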
Lexical leverage: Category knowledge boosts real-time novel word recognition in two-year-olds
Borovsky, Arielle; Ellis, Erica M.; Evans, Julia L.; Elman, Jeffrey L.
2016-01-01
Recent research suggests that infants tend to add words to their vocabulary that are semantically related to other known words, though it is not clear why this pattern emerges. In this paper, we explore whether infants leverage their existing vocabulary and semantic knowledge when interpreting novel label-object mappings in real time. We initially identified categorical domains for which individual 24-month-old infants have relatively higher and lower levels of knowledge, irrespective of overall vocabulary size. Next, we taught infants novel words in these higher and lower knowledge domains and then asked whether their subsequent real-time recognition of these items varied as a function of their category knowledge. While our participants successfully acquired the novel label-object mappings in our task, there were important differences in the way infants recognized these words in real time. Namely, infants showed more robust recognition of high (vs. low) domain knowledge words. These findings suggest that dense semantic structure facilitates early word learning and real-time novel word recognition. PMID:26452444
ERIC Educational Resources Information Center
Brücknerová, Karla; Novotný, Petr
2017-01-01
This article interprets data from qualitative research into intergenerational learning (IGL) among teachers at Czech primary and secondary schools. The objective of the text is to answer the question: "What are teachers of different generations learning from one another in schools and in what ways does this learning take place?" Drawing…
Creating objects and object categories for studying perception and perceptual learning.
Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay
2012-11-02
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties. Many innovative and useful methods currently exist for creating novel objects and object categories (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection. Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
Two Pathways to Stimulus Encoding in Category Learning?
Davis, Tyler; Love, Bradley C.; Maddox, W. Todd
2008-01-01
Category learning theorists tacitly assume that stimuli are encoded by a single pathway. Motivated by theories of object recognition, we evaluate a dual-pathway account of stimulus encoding. The part-based pathway establishes mappings between sensory input and symbols that encode discrete stimulus features, whereas the image-based pathway applies holistic templates to sensory input. Our experiments use rule-plus-exception structures in which one exception item in each category violates a salient regularity and must be distinguished from other items. In Experiment 1, we find that discrete representations are crucial for recognition of exceptions following brief training. Experiments 2 and 3 involve multi-session training regimens designed to encourage either part or image-based encoding. We find that both pathways are able to support exception encoding, but have unique characteristics. We speculate that one advantage of the part-based pathway is the ability to generalize across domains, whereas the image-based pathway provides faster and more effortless recognition. PMID:19460948
Goulart, B K; de Lima, M N M; de Farias, C B; Reolon, G K; Almeida, V R; Quevedo, J; Kapczinski, F; Schröder, N; Roesler, R
2010-06-02
The non-competitive N-methyl-d-aspartate (NMDA) glutamate receptor antagonist ketamine has been shown to produce cognitive deficits. However, the effects of ketamine on the consolidation phase of memory remain poorly characterized. Here we show that systemic administration of ketamine immediately after training dose-dependently impairs long-term retention of memory for a novel object recognition (NOR) task in rats. Control experiments showed that the impairing effects of ketamine could not be attributed to an influence on memory retrieval or sensorimotor effects. In addition, ketamine prevented the increase in hippocampal brain-derived neurotrophic factor (BDNF) levels induced by NOR learning. Our results show for the first time that ketamine disrupts the consolidation phase of long-term recognition memory. In addition, the findings suggest that the amnestic effects of ketamine might be at least partially mediated by an influence on BDNF signaling in the hippocampus. Copyright 2010 IBRO. Published by Elsevier Ltd. All rights reserved.
Attention during memory retrieval enhances future remembering.
Dudukovic, Nicole M; Dubrow, Sarah; Wagner, Anthony D
2009-10-01
Memory retrieval is a powerful learning event that influences whether an experience will be remembered in the future. Although retrieval can succeed in the presence of distraction, dividing attention during retrieval may reduce the power of remembering as an encoding event. In the present experiments, participants studied pictures of objects under full attention and then engaged in item recognition and source memory retrieval under full or divided attention. Two days later, a second recognition and source recollection test assessed the impact of attention during initial retrieval on long-term retention. On this latter test, performance was superior for items that had been tested initially under full versus divided attention. More importantly, even when items were correctly recognized on the first test, divided attention reduced the likelihood of subsequent recognition on the second test. The same held true for source recollection. Additionally, foils presented during the first test were also less likely to be later recognized if they had been encountered initially under divided attention. These findings demonstrate that attentive retrieval is critical for learning through remembering.
Cross-label Suppression: a Discriminative and Fast Dictionary Learning with Group Regularization.
Wang, Xiudong; Gu, Yuantao
2017-05-10
This paper addresses image classification through learning a compact and discriminative dictionary efficiently. Given a structured dictionary with each atom (a column of the dictionary matrix) related to some label, we propose a cross-label suppression constraint to enlarge the difference among representations for different classes. Meanwhile, we introduce group regularization to enforce representations to preserve the label properties of the original samples, meaning that representations for the same class are encouraged to be similar. With the cross-label suppression, we do not resort to the frequently-used ℓ0-norm or ℓ1-norm for coding, and obtain computational efficiency without losing the discriminative power for categorization. Moreover, two simple classification schemes are also developed to take full advantage of the learnt dictionary. Extensive experiments on six data sets covering face recognition, object categorization, scene classification, texture recognition and sport action categorization are conducted, and the results show that the proposed approach outperforms many recently presented dictionary algorithms in both recognition accuracy and computational efficiency.
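A loose numerical sketch of the flavor of this approach is given below: atoms carry class labels, coding avoids ℓ0/ℓ1 penalties, and coefficients on other classes' atoms are suppressed by an extra quadratic weight. The objective, dictionary, and penalty values are illustrative assumptions, not the paper's actual formulation or training procedure.

```python
# Loose sketch of the general flavor only (class-associated atoms, ridge-style
# coding, extra penalty on other classes' coefficients); not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
n_classes, atoms_per_class, dim = 3, 5, 20
D = rng.standard_normal((dim, n_classes * atoms_per_class))
D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms
atom_labels = np.repeat(np.arange(n_classes), atoms_per_class)

def code(x, y, lam=0.1, suppress=10.0):
    """Solve min_a ||x - D a||^2 + lam ||a||^2 + suppress * ||a_off-class||^2."""
    penalty = lam + suppress * (atom_labels != y)    # heavier weight off-class
    A = D.T @ D + np.diag(penalty)
    return np.linalg.solve(A, D.T @ x)

x = D[:, 2] + 0.05 * rng.standard_normal(dim)        # sample near a class-0 atom
a = code(x, y=0)
print("energy per class:", [np.sum(a[atom_labels == c] ** 2) for c in range(n_classes)])
```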
Event Recognition Based on Deep Learning in Chinese Texts
Zhang, Yajun; Liu, Zongtian; Zhou, Wen
2016-01-01
Event recognition is the most fundamental and critical task in event-based natural language processing systems. Existing event recognition methods based on rules and shallow neural networks have certain limitations. For example, extracting features using methods based on rules is difficult; methods based on shallow neural networks converge too quickly to a local minimum, resulting in low recognition precision. To address these problems, we propose the Chinese emergency event recognition model based on deep learning (CEERM). Firstly, we use a word segmentation system to segment sentences. According to event elements labeled in the CEC 2.0 corpus, we classify words into five categories: trigger words, participants, objects, time and location. Each word is vectorized according to the following six feature layers: part of speech, dependency grammar, length, location, distance between trigger word and core word and trigger word frequency. We obtain deep semantic features of words by training a feature vector set using a deep belief network (DBN), then analyze those features in order to identify trigger words by means of a back propagation neural network. Extensive testing shows that the CEERM achieves excellent recognition performance, with a maximum F-measure value of 85.17%. Moreover, we propose the dynamic-supervised DBN, which adds supervised fine-tuning to a restricted Boltzmann machine layer by monitoring its training performance. Test analysis reveals that the new DBN improves recognition performance and effectively controls the training time. Although the F-measure increases to 88.11%, the training time increases by only 25.35%. PMID:27501231
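As a rough illustration of the word-level representation described above, the sketch below assembles the six feature slots named in the abstract and trains a small neural classifier to flag trigger words. The DBN pre-training stage is omitted, and all feature values and labels are invented placeholders rather than CEC 2.0 annotations.

```python
# Hedged sketch: six word-level feature slots feeding a small neural classifier
# for trigger-word detection; all values below are invented placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_words = 500
features = np.column_stack([
    rng.integers(0, 30, n_words),    # part-of-speech tag id
    rng.integers(0, 40, n_words),    # dependency-relation id
    rng.integers(1, 5, n_words),     # word length (characters)
    rng.integers(0, 50, n_words),    # position in sentence
    rng.integers(0, 20, n_words),    # distance to the core word
    rng.random(n_words),             # trigger-word frequency estimate
])
is_trigger = (features[:, 5] > 0.8).astype(int)   # toy labelling rule

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(features[:400], is_trigger[:400])
print("held-out accuracy:", clf.score(features[400:], is_trigger[400:]))
```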
Baldominos, Alejandro; Saez, Yago; Isasi, Pedro
2018-04-23
Human activity recognition is a challenging problem for context-aware systems and applications. It is gaining interest due to the ubiquity of different sensor sources, wearable smart objects, ambient sensors, etc. This task is usually approached as a supervised machine learning problem, where a label is to be predicted given some input data, such as the signals retrieved from different sensors. For tackling the human activity recognition problem in sensor network environments, in this paper we propose the use of deep learning (convolutional neural networks) to perform activity recognition using the publicly available OPPORTUNITY dataset. Instead of manually choosing a suitable topology, we will let an evolutionary algorithm design the optimal topology in order to maximize the classification F1 score. After that, we will also explore the performance of committees of the models resulting from the evolutionary process. Results analysis indicates that the proposed model was able to perform activity recognition within a heterogeneous sensor network environment, achieving very high accuracies when tested with new sensor data. Based on all conducted experiments, the proposed neuroevolutionary system has proved to be able to systematically find a classification model which is capable of outperforming previous results reported in the state-of-the-art, showing that this approach is useful and improves upon previously manually-designed architectures. PMID:29690587
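A bare-bones sketch of the evolutionary topology search is shown below. The genome fields, mutation rule, and especially the fitness function are placeholders: in the actual system, fitness would be the validation F1 score of a convolutional network trained on the OPPORTUNITY data with the encoded topology.

```python
# Bare-bones sketch of evolving CNN topologies; the fitness function below is a
# placeholder for "train a CNN with this topology and return its validation F1".
import random

random.seed(0)

def random_genome():
    return {"conv_layers": random.randint(1, 4),
            "filters": random.choice([16, 32, 64, 128]),
            "kernel": random.choice([3, 5, 7]),
            "dropout": random.choice([0.0, 0.25, 0.5])}

def mutate(genome):
    child = dict(genome)
    key = random.choice(list(child))          # resample one field
    child[key] = random_genome()[key]
    return child

def fitness(genome):
    # Placeholder stand-in for training and evaluating the encoded CNN.
    return (genome["conv_layers"] * genome["filters"] * genome["kernel"]) % 97 / 97.0

population = [random_genome() for _ in range(10)]
for generation in range(5):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:4]                      # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(6)]

print("best topology found:", max(population, key=fitness))
```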
Tinsley, C J; Narduzzo, K E; Ho, J W; Barker, G R; Brown, M W; Warburton, E C
2009-09-01
The aim was to investigate the role of calcium-calmodulin-dependent protein kinase (CAMK)II in object recognition memory. The performance of rats in a preferential object recognition test was examined after local infusion of the CAMKII inhibitors KN-62 or autocamtide-2-related inhibitory peptide (AIP) into the perirhinal cortex. KN-62 or AIP infused after acquisition impaired memory tested at 24 h, indicating an involvement of CAMKII in the consolidation of recognition memory. Memory was impaired when KN-62 was infused at 20 min after acquisition or when AIP was infused at 20, 40, 60 or 100 min after acquisition. The time-course of CAMKII activation in rats was further examined by immunohistochemical staining for phospho-CAMKII(Thr286)alpha at 10, 40, 70 and 100 min following the viewing of novel and familiar images. At 70 min, processing novel images resulted in more phospho-CAMKII(Thr286)alpha-stained neurons in the perirhinal cortex than did the processing of familiar images, consistent with the viewing of novel images increasing the activity of CAMKII at this time. This difference was eliminated by prior infusion of AIP. These findings establish that CAMKII is active within the perirhinal region between approximately 20 and 100 min following learning and then returns to baseline. Thus, increased CAMKII activity is essential for the consolidation of long-term object recognition memory but continuation of that increased activity throughout the 24 h memory delay is not necessary for maintenance of the memory.
A bio-inspired system for spatio-temporal recognition in static and video imagery
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Moore, Christopher K.; Chelian, Suhas
2007-04-01
This paper presents a bio-inspired method for spatio-temporal recognition in static and video imagery. It builds upon and extends our previous work on a bio-inspired Visual Attention and object Recognition System (VARS). The VARS approach locates and recognizes objects in a single frame. This work presents two extensions of VARS. The first extension is a Scene Recognition Engine (SCE) that learns to recognize spatial relationships between objects that compose a particular scene category in static imagery. This could be used for recognizing the category of a scene, e.g., office vs. kitchen scene. The second extension is the Event Recognition Engine (ERE) that recognizes spatio-temporal sequences or events in sequences. This extension uses a working memory model to recognize events and behaviors in video imagery by maintaining and recognizing ordered spatio-temporal sequences. The working memory model is based on an ARTSTORE neural network that combines an ART-based neural network with a cascade of sustained temporal order recurrent (STORE) neural networks. A series of Default ARTMAP classifiers ascribes event labels to these sequences. Our preliminary studies have shown that this extension is robust to variations in an object's motion profile. We evaluated the performance of the SCE and ERE on real datasets. The SCE module was tested on a visual scene classification task using the LabelMe dataset. The ERE was tested on real world video footage of vehicles and pedestrians in a street scene. Our system is able to recognize the events in this footage involving vehicles and pedestrians.
Do Capuchin Monkeys (Cebus apella) Diagnose Causal Relations in the Absence of a Direct Reward?
Edwards, Brian J.; Rottman, Benjamin M.; Shankar, Maya; Betzler, Riana; Chituc, Vladimir; Rodriguez, Ricardo; Silva, Liara; Wibecan, Leah; Widness, Jane; Santos, Laurie R.
2014-01-01
We adapted a method from developmental psychology [1] to explore whether capuchin monkeys (Cebus apella) would place objects on a “blicket detector” machine to diagnose causal relations in the absence of a direct reward. Across five experiments, monkeys could place different objects on the machine and obtain evidence about the objects’ causal properties based on whether each object “activated” the machine. In Experiments 1–3, monkeys received both audiovisual cues and a food reward whenever the machine activated. In these experiments, monkeys spontaneously placed objects on the machine and succeeded at discriminating various patterns of statistical evidence. In Experiments 4 and 5, we modified the procedure so that in the learning trials, monkeys received the audiovisual cues when the machine activated, but did not receive a food reward. In these experiments, monkeys failed to test novel objects in the absence of an immediate food reward, even when doing so could provide critical information about how to obtain a reward in future test trials in which the food reward delivery device was reattached. The present studies suggest that the gap between human and animal causal cognition may be in part a gap of motivation. Specifically, we propose that monkey causal learning is motivated by the desire to obtain a direct reward, and that unlike humans, monkeys do not engage in learning for learning’s sake. PMID:24586347
Zhao, Yu; Ge, Fangfei; Liu, Tianming
2018-07-01
fMRI data decomposition techniques have advanced significantly from shallow models such as Independent Component Analysis (ICA) and Sparse Coding and Dictionary Learning (SCDL) to deep learning models such as Deep Belief Networks (DBN) and Deep Convolutional Autoencoders (DCAE). However, the interpretation of those decomposed networks remains an open question due to the lack of functional brain atlases, the absence of correspondence across decomposed or reconstructed networks from different subjects, and significant individual variability. Recent studies showed that deep learning, especially deep convolutional neural networks (CNN), has an extraordinary ability to accommodate spatial object patterns; e.g., our recent work using 3D CNNs for fMRI-derived network classification achieved high accuracy with a remarkable tolerance for mistakenly labelled training brain networks. However, training data preparation is one of the biggest obstacles in these supervised deep learning models for functional brain network map recognition, since manual labelling requires tedious and time-consuming labour and can even introduce label mistakes. Especially for mapping functional networks in large-scale datasets, such as the hundreds of thousands of brain networks used in this paper, manual labelling becomes almost infeasible. In response, in this work, we tackled both the network recognition and training data labelling tasks by proposing a new iteratively optimized deep learning CNN (IO-CNN) framework with an automatic weak label initialization, which turns the functional brain network recognition task into a fully automatic large-scale classification procedure. Our extensive experiments based on ABIDE-II 1099 brains' fMRI data showed the great promise of our IO-CNN framework. Copyright © 2018 Elsevier B.V. All rights reserved.
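The iterative weak-label idea can be illustrated with a generic self-training loop, sketched below with a logistic regression standing in for the 3D CNN. The data, the crude initial labelling rule, and the confidence threshold are synthetic assumptions, not fMRI network maps or the IO-CNN procedure itself.

```python
# Generic sketch of weak-label initialization plus iterative retraining;
# a logistic regression stands in for the CNN, and all data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((600, 30))
true_y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
# Weak initial labels: a deliberately imperfect rule (roughly 85% correct here).
labels = (X[:, 0] > 0).astype(int)

for iteration in range(5):
    clf = LogisticRegression().fit(X, labels)           # retrain on current labels
    proba = clf.predict_proba(X)[:, 1]
    confident = np.abs(proba - 0.5) > 0.3                # relabel confident cases only
    labels = np.where(confident, (proba > 0.5).astype(int), labels)
    print(f"iter {iteration}: agreement with ground truth = "
          f"{(labels == true_y).mean():.3f}")
```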
Extending the Pedagogy of Mobility
ERIC Educational Resources Information Center
Hedberg, John G.
2014-01-01
Direct student experience of the real organism, object, place or environment is recognised by teachers as having powerful potential for high-quality learning. Mobile technologies offer a way for students to capture their authentic learning experiences, but rendering this rich experience into explicit and highly situated learning contexts for…
Tian, Jia; Geng, Fei; Gao, Feng; Chen, Yi-Hua; Liu, Ji-Hong; Wu, Jian-Lin; Lan, Yu-Jie; Zeng, Yuan-Ning; Li, Xiao-Wen; Yang, Jian-Ming; Gao, Tian-Ming
2017-08-01
Hippocampal function is important for learning and memory, and dysfunction of the hippocampus has been linked to the pathophysiology of neuropsychiatric diseases such as schizophrenia. Neuregulin1 (NRG1) and ErbB4, two susceptibility genes for schizophrenia, reportedly modulate long-term potentiation (LTP) at hippocampal Schaffer collateral (SC)-CA1 synapses. However, little is known regarding the contribution of hippocampal NRG1/ErbB4 signaling to learning and memory function. Here, quantitative real-time PCR and Western blotting were used to assess the mRNA and protein levels of NRG1 and ErbB4. Pharmacological and genetic approaches were used to manipulate NRG1/ErbB4 signaling, following which learning and memory behaviors were evaluated using the Morris water maze, Y-maze test, and the novel object recognition test. Spatial learning was found to reduce hippocampal NRG1 and ErbB4 expression. The blockade of NRG1/ErbB4 signaling in hippocampal CA1, either by neutralizing endogenous NRG1 or inhibiting/ablating ErbB4 receptor activity, enhanced hippocampus-dependent spatial learning, spatial working memory, and novel object recognition memory. Accordingly, administration of exogenous NRG1 impaired those functions. More importantly, the specific ablation of ErbB4 in parvalbumin interneurons also improved learning and memory performance. The manipulation of NRG1/ErbB4 signaling in the present study revealed that NRG1/ErbB4 activity in the hippocampus is critical for learning and memory. These findings might provide novel insights on the pathophysiological mechanisms of schizophrenia and a new target for the treatment of Alzheimer's disease, which is characterized by a progressive decline in cognitive function.
NASA Technical Reports Server (NTRS)
Wolf, Jared J.
1977-01-01
The following research was discussed: (1) speech signal processing; (2) automatic speech recognition; (3) continuous speech understanding; (4) speaker recognition; (5) speech compression; (6) subjective and objective evaluation of speech communication systems; (7) measurement of the intelligibility and quality of speech when degraded by noise or other masking stimuli; (8) speech synthesis; (9) instructional aids for second-language learning and for training of the deaf; and (10) investigation of speech correlates of psychological stress. Experimental psychology, control systems, and human factors engineering, which are often relevant to the proper design and operation of speech systems, are also described.
Deep kernel learning method for SAR image target recognition
NASA Astrophysics Data System (ADS)
Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao
2017-10-01
With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep learning and kernel learning. The model, which has a multilayer multiple-kernel structure, is optimized layer by layer using the parameters of a Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.
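The abstract describes a multilayer multiple-kernel model optimized layer by layer with SVM parameters and gradient descent; the exact formulation is not given, so the sketch below shows only a single-layer multiple-kernel combination and substitutes a simple kernel-target-alignment weighting for the paper's optimization. Kernel choices and parameters are assumptions.

# Single-layer multiple-kernel sketch (an assumption-laden stand-in, not the
# paper's multilayer model): kernels are weighted by kernel-target alignment
# and an SVM is trained on the precomputed combination.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def alignment_weights(kernels, y):
    """Weight each kernel by its alignment with the label kernel yy^T (binary 0/1 labels assumed)."""
    y_pm = np.where(np.asarray(y) > 0, 1.0, -1.0)
    yy = np.outer(y_pm, y_pm)
    scores = np.array([np.sum(K * yy) / np.linalg.norm(K) for K in kernels])
    scores = np.clip(scores, 1e-12, None)
    return scores / scores.sum()

def fit_multi_kernel_svm(X, y, gammas=(0.01, 0.1, 1.0), C=1.0):
    kernels = [rbf_kernel(X, X, gamma=g) for g in gammas]
    beta = alignment_weights(kernels, y)
    K = sum(b * Km for b, Km in zip(beta, kernels))
    clf = SVC(C=C, kernel="precomputed").fit(K, y)
    return clf, beta

def predict_multi_kernel_svm(clf, beta, X_train, X_test, gammas=(0.01, 0.1, 1.0)):
    K_test = sum(b * rbf_kernel(X_test, X_train, gamma=g)
                 for b, g in zip(beta, gammas))
    return clf.predict(K_test)

# toy usage with random features standing in for SAR image descriptors
rng = np.random.default_rng(0)
X, y = rng.normal(size=(60, 32)), np.repeat([0, 1], 30)
clf, beta = fit_multi_kernel_svm(X, y)
print(predict_multi_kernel_svm(clf, beta, X, X[:5]))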
Health needs: the interface between the discourse of health professionals and victimized women
de Oliveira, Rebeca Nunes Guedes; da Fonseca, Rosa Maria Godoy Serpa
2015-01-01
Objective: to understand the limits and the evaluative possibilities of the Family Health Strategy regarding the recognition of the health needs of women who experience violence. Method: a study with a qualitative approach, grounded in the perspective of gender, which adopted health needs as the analytical category. The data were collected through interviews with health professionals and with women who used a health service, and were analyzed using the method of discourse analysis. Results: the meeting between the discourses of the women who use the services and those of the health professionals revealed, at their interface, human needs such as autonomy and bonds. In the professionals' discourses, the understanding of needs was limited to the recognition of health problems of a physical and psychological nature, and the predominance of recognizing needs for maintaining life over essentially human needs emerged as an important limitation of the practices. Conclusion: emphasis is placed on the gender perspective as a tool that must be incorporated into routine professional health practices in order to confirm or deny the transformative character of the care provided with regard to recognizing and confronting women's health needs. PMID:26039301
Asymmetry of Neuronal Combinatorial Codes Arises from Minimizing Synaptic Weight Change.
Leibold, Christian; Monsalve-Mercado, Mauro M
2016-08-01
Synaptic change is a costly resource, particularly for brain structures that have a high demand for synaptic plasticity. For example, building memories of object positions requires efficient use of plasticity resources, since objects can easily change their location in space and yet we can memorize their locations. But how should a neural circuit ideally be set up to integrate two input streams (object location and identity) if the overall synaptic change during ongoing learning is to be minimized? This letter provides a theoretical framework for how the two input pathways should ideally be specified. Generally, the model predicts that the information-rich pathway should be plastic and encoded sparsely, whereas the pathway conveying less information should be encoded densely and undergo learning only if a neuronal representation of a novel object has to be established. As an example, we consider hippocampal area CA1, which combines place and object information. The model thereby provides a normative account of hippocampal rate remapping, that is, modulations of place field activity by changes of local cues. It may also be applicable to other brain areas (such as neocortical layer V) that learn combinatorial codes from multiple input streams.
ERIC Educational Resources Information Center
Pooley, Robert C.; Golub, Lester S.
Emphasizing the behavioral and social aspects of language as a foundation for instruction, 16 concepts for learning the structure of English in grades 7-9 are outlined in an attempt to set down in logical order the basic concepts involved in the understanding of the English language. The concepts begin with a recognition of the social purposes of…
NASA Astrophysics Data System (ADS)
Maas, Christian; Schmalzl, Jörg
2013-08-01
Ground Penetrating Radar (GPR) is used for the localization of supply lines, land mines, pipes and many other buried objects. These objects can be recognized in the recorded data as reflection hyperbolas with a typical shape depending on the depth and material of the object and the surrounding material. To obtain the parameters, the shape of the hyperbola has to be fitted. In recent years several methods have been developed to automate this task during post-processing. In this paper we show another approach for the automated localization of reflection hyperbolas in GPR data by solving a pattern recognition problem in grayscale images. In contrast to other methods, our detection program is also able to immediately mark potential objects in real time. For this task we use a version of the Viola-Jones learning algorithm, which is part of the open source library "OpenCV". This algorithm was initially developed for face recognition, but can be adapted to any other simple shape. In our program it is used to narrow down the location of reflection hyperbolas to certain areas in the GPR data. In order to extract the exact location and the velocity of the hyperbolas, we apply a simple Hough Transform for hyperbolas. Because the Viola-Jones algorithm dramatically reduces the input to the computationally expensive Hough Transform, the detection system can also be implemented on ordinary field computers, so on-site application is possible. The developed detection system shows promising results and detection rates in unprocessed radargrams. In order to improve the detection results and apply the program to noisy radar images, more data from different GPR systems are needed as input for the learning algorithm.
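A rough sketch of the two-stage pipeline (cascade detection followed by hyperbola parameter estimation), under stated assumptions: "hyperbola_cascade.xml" is a placeholder for a Viola-Jones cascade trained on hyperbola patches, and a least-squares fit of t^2 against (x - x0)^2 stands in for the paper's Hough transform.

# Sketch of the two-stage idea: an OpenCV cascade narrows down candidate
# regions, then hyperbola parameters are estimated inside each region.
# "hyperbola_cascade.xml" is a placeholder for a cascade trained on hyperbola
# patches; the least-squares fit below replaces the paper's Hough transform.
import cv2
import numpy as np

def detect_hyperbola_regions(radargram_gray, cascade_path="hyperbola_cascade.xml"):
    cascade = cv2.CascadeClassifier(cascade_path)
    # returns (x, y, w, h) boxes around candidate reflection hyperbolas
    return cascade.detectMultiScale(radargram_gray, scaleFactor=1.1, minNeighbors=3)

def fit_hyperbola(roi, x0=None):
    """Fit t^2 = t0^2 + k * (x - x0)^2 to edge points of one 8-bit grayscale region.

    x is the trace index, t the travel-time sample index; the apex column x0
    is assumed to sit at the centre of the detected box (a simplification).
    Converting k into a velocity requires the trace spacing and time sampling."""
    edges = cv2.Canny(roi, 50, 150)
    ts, xs = np.nonzero(edges)            # row = time sample, column = trace
    if xs.size < 5:
        return None
    if x0 is None:
        x0 = roi.shape[1] / 2.0
    A = np.stack([np.ones_like(xs, dtype=float), (xs - x0) ** 2], axis=1)
    coef, *_ = np.linalg.lstsq(A, ts.astype(float) ** 2, rcond=None)
    t0_sq, k = coef
    return {"t0": np.sqrt(max(t0_sq, 0.0)), "k": k, "x0": x0}

# usage (img: 8-bit grayscale radargram):
# boxes = detect_hyperbola_regions(img)
# params = [fit_hyperbola(img[y:y + h, x:x + w]) for x, y, w, h in boxes]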
Peer-to-Peer Recognition of Learning in Open Education
ERIC Educational Resources Information Center
Schmidt, Jan Philipp; Geith, Christine; Haklev, Stian; Thierstein, Joel
2009-01-01
Recognition in education is the acknowledgment of learning achievements. Accreditation is certification of such recognition by an institution, an organization, a government, a community, etc. There are a number of assessment methods by which learning can be evaluated (exam, practicum, etc.) for the purpose of recognition and accreditation, and…
Prince, Toni-Moi; Wimmer, Mathieu; Choi, Jennifer; Havekes, Robbert; Aton, Sara; Abel, Ted
2014-01-01
Sleep deprivation disrupts hippocampal function and plasticity. In particular, long-term memory consolidation is impaired by sleep deprivation, suggesting that a specific critical period exists following learning during which sleep is necessary. To elucidate the impact of sleep deprivation on long-term memory consolidation and synaptic plasticity, long-term memory was assessed when mice were sleep deprived following training in the hippocampus-dependent object place recognition task. We found that 3 hours of sleep deprivation significantly impaired memory when deprivation began 1 hour after training. In contrast, 3 hours of deprivation beginning immediately post-training did not impair spatial memory. Furthermore, a 3-hour sleep deprivation beginning 1 hour after training impaired hippocampal long-term potentiation (LTP), whereas sleep deprivation immediately after training did not affect LTP. Together, our findings define a specific 3-hour critical period, extending from 1 to 4 hours after training, during which sleep deprivation impairs hippocampal function. PMID:24380868
Invariant visual object recognition: a model, with lighting invariance.
Rolls, Edmund T; Stringer, Simon M
2006-01-01
How are invariant representations of objects formed in the visual cortex? We describe a neurophysiological and computational approach which focusses on a feature hierarchy model in which invariant representations can be built by self-organizing learning based on the statistics of the visual input. The model can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in Continuous Transformation learning. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size and, as we show in this paper, lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects, such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene.
The Place of E-Learning in Africa's Institutions of Higher Learning
ERIC Educational Resources Information Center
Nafukho, Fredrick Muyia
2007-01-01
The paper seeks to accomplish four objectives. The first is to examine the need for e-learning in Africa's institutions of higher learning. The second is to discuss the policy, institutional, pedagogical, copyright, and quality assurance issues that need to be addressed. The third is to critically examine the advantages and disadvantages of…
An update on the role of the 5-hydroxytryptamine6 receptor in cognitive function.
Fone, Kevin C F
2008-11-01
As the 5-hydroxytryptamine(6) (5-HT(6)) receptor is almost exclusively expressed in the CNS, particularly in areas associated with learning and memory, many studies have examined its role in cognitive function in the rodent, as reviewed herein. Most studies, in healthy adult rats, report that 5-HT(6) receptor antagonists enhance retention of spatial learning in the Morris water maze, improve consolidation in autoshaping tasks and reverse natural forgetting in object recognition. Antagonists appear to facilitate both cholinergic and glutamatergic neurotransmission, reversing scopolamine- and NMDA receptor antagonist-induced memory impairments. Recent reports show that the 5-HT(6) receptor antagonist PRX-07034 reverses the impairment of novel object recognition produced in rats reared in social isolation, a neurodevelopmental model producing behavioural changes similar to several core symptoms seen in schizophrenia. The 5-HT(6) receptor antagonist Ro 04-6790 modestly improved reversal learning in the water maze in isolation-reared rats but not in group-housed controls. Ro 04-6790 also improved novel object discrimination both in adult rats that received chronic intermittent phencyclidine and in drug-naïve 18-month-old rats. However, more information on their effect in animal models of schizophrenia and Alzheimer's disease is required. Several selective high-affinity 5-HT(6) receptor agonists developed recently also improve object discrimination and extra-dimensional set-shifting behaviour. Thus both 5-HT(6) receptor agonist and antagonist compounds show promise as pro-cognitive agents in pre-clinical studies, but the explanation for their paradoxically analogous effect is currently unclear and is discussed in this article.
ERIC Educational Resources Information Center
Lee, Inah; Kim, Jangjin
2010-01-01
Hippocampal-dependent tasks often involve specific associations among stimuli (including egocentric information), and such tasks are therefore prone to interference from irrelevant task strategies before a correct strategy is found. Using an object-place paired-associate task, we investigated changes in neural firing patterns in the hippocampus in…
Place Names: Making the Basics of Geography Fun to Learn.
ERIC Educational Resources Information Center
Vogeler, Ingolf
1988-01-01
Arguing that students need to have knowledge about places and regions to understand current and past world affairs, a college-level geography course (University of Wisconsin Eau Claire) which teaches physical and cultural place names is described. Presents course objectives, topics, and activities and states that it serves student needs and…
Slow feature analysis: unsupervised learning of invariances.
Wiskott, Laurenz; Sejnowski, Terrence J
2002-04-01
Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.
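A compact numpy sketch of the SFA recipe summarized above: nonlinear (quadratic) expansion, whitening via principal component analysis, and extraction of the directions in which the time derivative of the whitened signal has the smallest variance. The toy signal at the end is an illustrative assumption.

# Compact sketch of linear/quadratic SFA as described above: expand, whiten,
# then take the directions in which the whitened signal varies most slowly.
import numpy as np

def quadratic_expansion(X):
    """Append all monomials x_i * x_j (i <= j) to the input signal."""
    n, d = X.shape
    cross = [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack([X] + cross)

def sfa(X, n_components=2, expand=True):
    Z = quadratic_expansion(X) if expand else X
    Z = Z - Z.mean(axis=0)
    # whiten via PCA of the covariance matrix
    cov = np.cov(Z, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    keep = evals > 1e-10
    W = evecs[:, keep] / np.sqrt(evals[keep])
    Zw = Z @ W
    # slow directions = smallest-variance directions of the time derivative
    dZ = np.diff(Zw, axis=0)
    devals, devecs = np.linalg.eigh(np.cov(dZ, rowvar=False))
    P = devecs[:, :n_components]          # eigh sorts eigenvalues ascending
    return Zw @ P

# toy usage: a slow sine hidden inside a faster, mixed signal
t = np.linspace(0, 2 * np.pi, 500)
X = np.column_stack([np.sin(t) + 0.1 * np.sin(40 * t), np.cos(17 * t)])
slow_features = sfa(X, n_components=1)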
Volumetric segmentation of range images for printed circuit board inspection
NASA Astrophysics Data System (ADS)
Van Dop, Erik R.; Regtien, Paul P. L.
1996-10-01
Conventional computer vision approaches towards object recognition and pose estimation employ 2D grey-value or color imaging. As a consequence, these images contain information about projections of a 3D scene only. The subsequent image processing is then difficult, because object coordinates are represented with just image coordinates. Only complicated low-level vision modules like depth from stereo or depth from shading can recover some of the surface geometry of the scene. Recent advances in fast range imaging have, however, paved the way towards 3D computer vision, since range data of the scene can now be obtained with sufficient accuracy and speed for object recognition and pose estimation purposes. This article proposes the coded-light range-imaging method together with superquadric segmentation to approach this task. Superquadric segments are volumetric primitives that describe global object properties with 5 parameters, which provide the main features for object recognition. In addition, the principal axes of a superquadric segment determine the pose of an object in the scene. The volumetric segmentation of a range image can be used to detect missing, false or badly placed components on assembled printed circuit boards. Furthermore, this approach will be useful to recognize and extract valuable or toxic electronic components from printed circuit board scrap that currently burdens the environment during electronic waste processing. Results on synthetic range images with errors constructed according to a verified noise model illustrate the capabilities of this approach.
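For reference, the standard superquadric inside-outside function is shown below in one common form; the abstract does not give the authors' exact parameterization, so the symbols are illustrative. Three size parameters and two shape exponents account for the five parameters mentioned.

% Standard superquadric inside-outside function (one common form; the paper's
% exact parameterization is not stated in the abstract). a_1, a_2, a_3 are the
% sizes along the principal axes and \varepsilon_1, \varepsilon_2 the shape exponents.
F(x, y, z) =
\left(
  \left(\frac{x}{a_1}\right)^{\frac{2}{\varepsilon_2}} +
  \left(\frac{y}{a_2}\right)^{\frac{2}{\varepsilon_2}}
\right)^{\frac{\varepsilon_2}{\varepsilon_1}} +
\left(\frac{z}{a_3}\right)^{\frac{2}{\varepsilon_1}},
\qquad
F < 1 \ \text{inside}, \quad F = 1 \ \text{on the surface}, \quad F > 1 \ \text{outside}.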
NASA Astrophysics Data System (ADS)
Madokoro, H.; Tsukada, M.; Sato, K.
2013-07-01
This paper presents an unsupervised learning-based object category formation and recognition method for mobile robot vision. Our method has the following features: detection of feature points and description of features using a scale-invariant feature transform (SIFT), selection of target feature points using one-class support vector machines (OC-SVMs), generation of visual words using self-organizing maps (SOMs), formation of labels using adaptive resonance theory 2 (ART-2), and creation and classification of categories on a category map of counter propagation networks (CPNs) for visualizing spatial relations between categories. Classification results for dynamic images, using time-series images obtained with two robots of different sizes and under different movements, demonstrate that our method can visualize spatial relations between categories while maintaining time-series characteristics. Moreover, we emphasize the effectiveness of our method for category formation under appearance changes of objects.
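Only the front end of the pipeline is easy to sketch from the abstract; the fragment below covers SIFT description and OC-SVM selection of target feature points, omits the SOM, ART-2, and CPN stages, and uses illustrative parameter values (cv2.SIFT_create requires a reasonably recent OpenCV build).

# Sketch of the first two stages of the pipeline only (SIFT description and
# OC-SVM selection of target feature points); the SOM, ART-2 and CPN stages
# are omitted. Parameter values are illustrative, not the authors'.
import cv2
from sklearn.svm import OneClassSVM

def sift_descriptors(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors        # descriptors: (n_keypoints, 128)

def select_target_features(descriptors, nu=0.1):
    """Keep only descriptors the one-class SVM regards as 'typical' of the scene."""
    ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=nu).fit(descriptors)
    mask = ocsvm.predict(descriptors) == 1   # +1 = inlier, -1 = outlier
    return descriptors[mask], mask

# usage:
# kps, descs = sift_descriptors(cv2.imread("frame.png"))
# selected, mask = select_target_features(descs)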
Evidence for view-invariant face recognition units in unfamiliar face learning.
Etchells, David B; Brooks, Joseph L; Johnston, Robert A
2017-05-01
Many models of face recognition incorporate the idea of a face recognition unit (FRU), an abstracted representation formed from each experience of a face which aids recognition under novel viewing conditions. Some previous studies have failed to find evidence of this FRU representation. Here, we report three experiments which investigated this theoretical construct by modifying the face learning procedure from that in previous work. During learning, one or two views of previously unfamiliar faces were shown to participants in a serial matching task. Later, participants attempted to recognize both seen and novel views of the learned faces (recognition phase). Experiment 1 tested participants' recognition of a novel view, a day after learning. Experiment 2 was identical, but tested participants on the same day as learning. Experiment 3 repeated Experiment 1, but tested participants on a novel view that was outside the rotation of those views learned. Results revealed a significant advantage, across all experiments, for recognizing a novel view when two views had been learned compared to single view learning. The observed view invariance supports the notion that an FRU representation is established during multi-view face learning under particular learning conditions.
Sleep Enhances Recognition Memory for Conspecifics as Bound into Spatial Context
Sawangjit, Anuck; Kelemen, Eduard; Born, Jan; Inostroza, Marion
2017-01-01
Social memory refers to the fundamental ability of social species to recognize their conspecifics in quite different contexts. Sleep has been shown to benefit consolidation, especially of hippocampus-dependent episodic memory whereas effects of sleep on social memory are less well studied. Here, we examined the effect of sleep on memory for conspecifics in rats. To discriminate interactions between the consolidation of social memory and of spatial context during sleep, adult Long Evans rats performed on a social discrimination task in a radial arm maze. The Learning phase comprised three 10-min sampling sessions in which the rats explored a juvenile rat presented at a different arm of the maze in each session. Then the rats were allowed to sleep (n = 18) or stayed awake (n = 18) for 120 min. During the following 10-min Test phase, the familiar juvenile rat (of the Learning phase) was presented along with a novel juvenile rat, each rat at an opposite arm of the maze. Significant social recognition memory, as indicated by preferential exploration of the novel over the familiar conspecific, occurred only after post-learning sleep, but not after wakefulness. Sleep, compared with wakefulness, significantly enhanced social recognition during the first minute of the Test phase. However, memory expression depended on the spatial configuration: Significant social recognition memory emerged only after sleep when the rat encountered the novel conspecific at a place different from that of the familiar juvenile in the last sampling session before sleep. Though unspecific retrieval-related effects cannot entirely be excluded, our findings suggest that sleep, rather than independently enhancing social and spatial aspects of memory, consolidates social memory by acting on an episodic representation that binds the memory of the conspecific together with the spatial context in which it was recently encountered. PMID:28270755
Pattern Recognition Using Artificial Neural Network: A Review
NASA Astrophysics Data System (ADS)
Kim, Tai-Hoon
Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, artificial neural network techniques have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques. The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system using ANNs and to identify research topics and applications which are at the forefront of this exciting and challenging field.
Gold, Carl A; Marchant, Natalie L; Koutstaal, Wilma; Schacter, Daniel L; Budson, Andrew E
2007-09-20
The presence or absence of conceptual information in pictorial stimuli may explain the mixed findings of previous studies of false recognition in patients with mild Alzheimer's disease (AD). To test this hypothesis, 48 patients with AD were compared to 48 healthy older adults on a recognition task first described by Koutstaal et al. [Koutstaal, W., Reddy, C., Jackson, E. M., Prince, S., Cendan, D. L., & Schacter D. L. (2003). False recognition of abstract versus common objects in older and younger adults: Testing the semantic categorization account. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 499-510]. Participants studied and were tested on their memory for categorized ambiguous pictures of common objects. The presence of conceptual information at study and/or test was manipulated by providing or withholding disambiguating semantic labels. Analyses focused on testing two competing theories. The semantic encoding hypothesis, which posits that the inter-item perceptual details are not encoded by AD patients when conceptual information is present in the stimuli, was not supported by the findings. In contrast, the conceptual fluency hypothesis was supported. Enhanced conceptual fluency at test dramatically shifted AD patients to a more liberal response bias, raising their false recognition. These results suggest that patients with AD rely on the fluency of test items in making recognition memory decisions. We speculate that AD patients' over reliance upon fluency may be attributable to (1) dysfunction of the hippocampus, disrupting recollection, and/or (2) dysfunction of prefrontal cortex, disrupting post-retrieval processes.
Enhanced learning of natural visual sequences in newborn chicks.
Wood, Justin N; Prasad, Aditya; Goldman, Jason G; Wood, Samantha M W
2016-07-01
To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks' object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.
Learning Category-Specific Dictionary and Shared Dictionary for Fine-Grained Image Categorization.
Gao, Shenghua; Tsang, Ivor Wai-Hung; Ma, Yi
2014-02-01
This paper targets fine-grained image categorization by learning a category-specific dictionary for each category and a shared dictionary for all the categories. Such category-specific dictionaries encode subtle visual differences among different categories, while the shared dictionary encodes common visual patterns among all the categories. To this end, we impose incoherence constraints among the different dictionaries in the objective of feature coding. In addition, to make the learnt dictionary stable, we also impose the constraint that each dictionary should be self-incoherent. Our proposed dictionary learning formulation not only applies to fine-grained classification, but also improves conventional basic-level object categorization and other tasks such as event recognition. Experimental results on five data sets show that our method can outperform the state-of-the-art fine-grained image categorization frameworks as well as sparse coding based dictionary learning frameworks. All these results demonstrate the effectiveness of our method.
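The abstract does not state the exact objective, but a category-specific plus shared dictionary formulation with incoherence and self-incoherence terms can plausibly be written as follows; all symbols and weighting factors are illustrative assumptions.

% One plausible form of such an objective (the abstract does not give the exact
% formulation; all symbols here are illustrative). X_c are the features of
% category c, D_c its category-specific dictionary, D_0 the shared dictionary,
% and A_c the sparse codes.
\min_{\{D_c\}, D_0, \{A_c\}} \;
\sum_{c} \Big( \big\| X_c - [\, D_c, D_0 \,] A_c \big\|_F^2
              + \lambda \,\| A_c \|_1 \Big)
\; + \; \eta \sum_{c \neq c'} \big\| D_c^{\top} D_{c'} \big\|_F^2
\; + \; \eta_0 \sum_{c} \big\| D_c^{\top} D_0 \big\|_F^2
\; + \; \gamma \sum_{c} \big\| D_c^{\top} D_c - I \big\|_F^2

The cross terms penalize coherence between different dictionaries (and between each category dictionary and the shared one), while the last term keeps each dictionary close to orthonormal, i.e. self-incoherent.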
Three learning phases for radial-basis-function networks.
Schwenker, F; Kestler, H A; Palm, G
2001-05-01
In this paper, learning algorithms for radial basis function (RBF) networks are discussed. Whereas multilayer perceptrons (MLP) are typically trained with backpropagation algorithms, starting the training procedure with a random initialization of the MLP's parameters, an RBF network may be trained in many different ways. We categorize these RBF training methods into one-, two-, and three-phase learning schemes. Two-phase RBF learning is a very common learning scheme. The two layers of an RBF network are learnt separately; first the RBF layer is trained, including the adaptation of centers and scaling parameters, and then the weights of the output layer are adapted. RBF centers may be trained by clustering, vector quantization and classification tree algorithms, and the output layer by supervised learning (through gradient descent or pseudo-inverse solution). Results from numerical experiments with RBF classifiers trained by two-phase learning are presented for three completely different pattern recognition applications: (a) the classification of 3D visual objects; (b) the recognition of hand-written digits (2D objects); and (c) the categorization of high-resolution electrocardiograms given as a time series (1D objects) and as a set of features extracted from these time series. In these applications, it can be observed that the performance of RBF classifiers trained with two-phase learning can be improved through a third backpropagation-like training phase of the RBF network, adapting the whole set of parameters (RBF centers, scaling parameters, and output layer weights) simultaneously. This we call three-phase learning in RBF networks. A practical advantage of two- and three-phase learning in RBF networks is the possibility of using unlabeled training data for the first training phase. Support vector (SV) learning in RBF networks is a different learning approach. SV learning can be considered, in this context, as a special type of one-phase learning, where only the output layer weights of the RBF network are calculated, and the RBF centers are restricted to be a subset of the training data. Numerical experiments with several classifier schemes, including k-nearest-neighbor, learning vector quantization and RBF classifiers trained through two-phase, three-phase and support vector learning, are given. The performance of the RBF classifiers trained through SV learning and three-phase learning is superior to the results of two-phase learning, but SV learning often leads to complex network structures, since the number of support vectors is not a small fraction of the total number of data points.
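A short sketch of the two-phase scheme described above, under stated assumptions: centres are placed by k-means (phase 1), output weights are solved by pseudo-inverse (phase 2), and the width heuristic is a simple stand-in; the third, backpropagation-like phase that also adapts centres and widths is omitted.

# Sketch of two-phase RBF training: phase 1 places the RBF centres by
# clustering, phase 2 solves the output weights by pseudo-inverse.
# (The third, backpropagation-like phase adapting all parameters is omitted.)
import numpy as np
from sklearn.cluster import KMeans

def rbf_design_matrix(X, centres, width):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbf_two_phase(X, Y, n_centres=20, random_state=0):
    # phase 1: unsupervised placement of centres (unlabeled data would suffice here)
    km = KMeans(n_clusters=n_centres, n_init=10, random_state=random_state).fit(X)
    centres = km.cluster_centers_
    # a single global width from the classic d_max / sqrt(2M) heuristic
    dists = np.linalg.norm(centres[:, None] - centres[None, :], axis=2)
    width = dists.max() / np.sqrt(2 * n_centres)
    # phase 2: supervised output weights via pseudo-inverse
    Phi = rbf_design_matrix(X, centres, width)
    W = np.linalg.pinv(Phi) @ Y              # Y one-hot for classification
    return centres, width, W

def predict_rbf(X, centres, width, W):
    return rbf_design_matrix(X, centres, width) @ W

# toy usage: two Gaussian blobs, one-hot targets
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
Y = np.vstack([np.tile([1, 0], (50, 1)), np.tile([0, 1], (50, 1))])
params = train_rbf_two_phase(X, Y, n_centres=6)
pred = predict_rbf(X, *params).argmax(axis=1)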
Schneider, Tomasz; Ziòłkowska, Barbara; Gieryk, Agnieszka; Tyminska, Anna; Przewłocki, Ryszard
2007-09-01
It has been suggested that behavioral aberrations observed in autism could be the result of dysfunction of the neuroregulatory role performed by the endogenous opioid peptides. Many of those aberrations have recently been modeled in rats exposed to valproic acid (VPA) on the 12th day of gestation (VPA rats). The aim of the present study was to elucidate the functioning of the enkephalinergic system, one of the endogenous opioid peptide systems strongly involved in emotional responses, in VPA rats using both biochemical and behavioral methods. In situ hybridization was used to measure proenkephalin mRNA expression in the central nucleus of the amygdala, the dorsal striatum, and the nucleus accumbens of adult VPA rats. Additional groups of animals were examined in conditioned place aversion to naloxone, elevated plus maze, and object recognition tests to assess their basal hedonic tone, anxiety, and learning and memory, respectively. Prenatal exposure to VPA decreased proenkephalin mRNA expression in the dorsal striatum and the nucleus accumbens but not in the central nucleus of the amygdala. It also increased anxiety and attenuated conditioned place aversion to naloxone but had no impact on learning and memory. The present results suggest that prenatal exposure to VPA may lead to decreased activity of the striatal enkephalinergic system and, in consequence, to the increased anxiety and dysregulated basal hedonic tone observed in VPA rats. The results are discussed in light of interactions between the enkephalinergic, GABAergic, and dopaminergic systems in the striatum and mesolimbic areas of the brain.
Pedagogical Tools to Address Clinical Anatomy and Athletic Training Student Learning Styles
ERIC Educational Resources Information Center
Mazerolle, Stephanie; Yeargin, Susan
2010-01-01
Context: A thorough knowledge of anatomy is needed in four of the six domains of athletic training: prevention, injury/condition recognition, immediate care, and treatment/rehabilitation. Students with a solid foundation can achieve competency in these specific domains. Objective: To provide educators with pedagogical tools to promote a deeper…
ERIC Educational Resources Information Center
Hasselblad, Judith
The format for this curriculum guide, written for nurse practitioner faculty, consists of learning objectives, content outline, teaching methodology suggestions, references and recommended readings. Part 1 of the guide, Recognition of Early and Chronic Alcoholism, deals with features of alcoholism such as epidemiological data and theories,…
Harnessing Spatial Thinking to Support STEM Learning. OECD Education Working Papers, No. 161
ERIC Educational Resources Information Center
Newcombe, Nora
2017-01-01
Spatial intelligence concerns the locations of objects, their shapes, their relations, and the paths they take as they move. Recognition of spatial skills enriches the traditional educational focus on developing literacy and numerical skills to include a cognitive domain particularly relevant to achievement in science, technology, engineering and…
Organizational learning contributes to guidance for managing wildland fires for multiple objectives
Tom Zimmerman; Tim Sexton
2010-01-01
Since the inception of organized fire suppression in the early 1900s, wildland fire management has dramatically evolved in operational complexity; ecological significance; social, economic, and political magnitude; areas and timing of application; and recognition of potentially serious consequences. Throughout the past 100 years, fire management has matured from a...
Diversity in School: A Brazilian Educational Policy against Homophobia
ERIC Educational Resources Information Center
Carrara, Sergio; Nascimento, Marcos; Duque, Aline; Tramontano, Lucas
2016-01-01
Diversity in School is a Brazilian initiative that seeks to increase understanding, recognition, respect for, and the valuing of social and cultural differences by offering an e-learning course on gender, sexuality, and ethnic relations for teachers and school administrators in the public school system. The course and its objectives aim to enable staff…
Effects of prolonged agmatine treatment in aged male Sprague-Dawley rats.
Rushaidhi, M; Zhang, H; Liu, P
2013-03-27
Increasing evidence suggests that altered arginine metabolism contributes to cognitive decline during ageing. Agmatine, decarboxylated arginine, has a variety of pharmacological effects, including the modulation of behavioural function. A recent study demonstrated the beneficial effects of short-term agmatine treatment in aged rats. The present study investigated how intraperitoneal administration of agmatine (40 mg/kg, once daily) over 4-6 weeks affected behavioural function and neurochemistry in aged Sprague-Dawley rats. Aged rats treated with saline displayed significantly reduced exploratory activity in the open field, impaired spatial learning and memory in the water maze, and impaired object recognition memory relative to young rats. Prolonged agmatine treatment improved animals' performance in the reversal test of the water maze and in the object recognition memory test, and significantly suppressed the age-related elevation in nitric oxide synthase activity in the dentate gyrus of the hippocampus and the prefrontal cortex. However, this prolonged supplementation was unable to improve exploratory activity and spatial reference learning and memory in aged rats. These findings further demonstrate that exogenous agmatine selectively improves behavioural function in aged rats. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
Ngwenya, Laura B.; Mazumder, Sarmistha; Porter, Zachary R.; Oswald, Duane J.
2018-01-01
Cognitive deficits after traumatic brain injury (TBI) are debilitating and contribute to the morbidity and loss of productivity of over 10 million people worldwide. Cell transplantation has been linked to enhanced cognitive function after experimental traumatic brain injury, yet the mechanism of recovery is poorly understood. Since the hippocampus is a critical structure for learning and memory, supports adult neurogenesis, and is particularly vulnerable after TBI, we hypothesized that stem cell transplantation after TBI enhances cognitive recovery by modulation of endogenous hippocampal neurogenesis. We performed lateral fluid percussion injury (LFPI) in adult mice and transplanted embryonic stem cell-derived neural progenitor cells (NPC). Our data confirm an injury-induced cognitive deficit in novel object recognition, a hippocampal-dependent learning task, which is reversed one week after NPC transplantation. While LFPI alone promotes hippocampal neurogenesis, as revealed by doublecortin immunolabeling of immature neurons, subsequent NPC transplantation prevents increased neurogenesis and is not associated with morphological maturation of endogenous injury-induced immature neurons. Thus, NPC transplantation enhances cognitive recovery early after LFPI without a concomitant increase in neuron numbers or maturation. PMID:29531536
Object instance recognition using motion cues and instance specific appearance models
NASA Astrophysics Data System (ADS)
Schumann, Arne
2014-03-01
In this paper we present an object instance retrieval approach. The baseline approach consists of a pool of image features which are computed on the bounding boxes of a query object track and compared to a database of tracks in order to find additional appearances of the same object instance. We improve over this simple baseline approach in multiple ways: 1) we include motion cues to achieve improved robustness to viewpoint and rotation changes, 2) we include operator feedback to iteratively re-rank the resulting retrieval lists and 3) we use operator feedback and location constraints to train classifiers and learn an instance specific appearance model. We use these classifiers to further improve the retrieval results. The approach is evaluated on two popular public datasets for two different applications. We evaluate person re-identification on the CAVIAR shopping mall surveillance dataset and vehicle instance recognition on the VIVID aerial dataset and achieve significant improvements over our baseline results.
The Limited Effect of Coincident Orientation on the Choice of Intrinsic Axis.
Li, Jing; Su, Wei
2015-06-01
The allocentric system computes and represents general object-to-object spatial relationships to provide a spatial frame of reference other than the egocentric system. The intrinsic frame-of-reference system theory, which suggests people learn the locations of objects based upon an intrinsic axis, is important in research about the allocentric system. The purpose of the current study was to determine whether the effect of coincident orientation on the choice of intrinsic axis was limited. Two groups of participants (24 men, 24 women; M age = 24 yr., SD = 2) encoded different spatial layouts in which the objects shared a coincident orientation of 315° and 225°, respectively, at the learning perspective (0°). The response pattern in the partial-scene-recognition task following learning reflected different strategies for choosing the intrinsic axis under different conditions. Under the 315° object-orientation condition, the objects' coincident orientation was as important as the symmetric axis in the choice of the intrinsic axis. However, participants were more likely to choose the symmetric axis as the intrinsic axis under the 225° object-orientation condition. The results suggest the effect of coincident orientation on the choice of intrinsic axis is limited.
Rural science education as social justice
NASA Astrophysics Data System (ADS)
Eppley, Karen
2017-03-01
What part can science education play in the dismantling of obstacles to social justice in rural places? In this Forum contribution, I use "Learning in and about Rural Places: Connections and Tensions Between Students' Everyday Experiences and Environmental Quality Issues in their Community"(Zimmerman and Weible 2016) to explicitly position rural education as a project of social justice that seeks full participatory parity for rural citizens. Fraser's (2009) conceptualization of social justice in rural education requires attention to the just distribution of resources, the recognition of the inherent capacities of rural people, and the right to equal participation in democratic processes that lead to opportunities to make decisions affecting local, regional, and global lives. This Forum piece considers the potential of place-based science education to contribute to this project.
Effects of Pictorial Cues on Reaching Depend on the Distinctiveness of Target Objects
Himmelbach, Marc
2013-01-01
There is an ongoing debate under what conditions learned object sizes influence visuomotor control under preserved stereovision. Using meaningful objects (matchboxes of locally well-known brands in the UK), a previous study has nicely shown that the recognition of these objects influences action programming, in terms of reach amplitude and grasp pre-shaping, even under binocular vision. Using the same paradigm, we demonstrated that short-term learning of colour-size associations was not sufficient to induce any visuomotor effects under binocular viewing conditions. Here we used the same matchboxes, for which the familiarity effect was shown in the UK, with German participants who had never seen these objects before. We addressed the question of whether a high degree of distinctness alone, or instead actual prior familiarity with these objects, is required to affect motor computations. We found that under monocular and binocular viewing conditions the learned size and location significantly influenced the amplitude of the reaching component. In contrast, the maximum grip aperture remained unaffected under binocular vision. We conclude that visual distinctness is sufficient to form reliable associations in short-term learning that influence reaching even under preserved stereovision. Grasp pre-shaping, in contrast, seems to be less susceptible to such perceptual effects. PMID:23382882
Cross-domain expression recognition based on sparse coding and transfer learning
NASA Astrophysics Data System (ADS)
Yang, Yong; Zhang, Weiyi; Huang, Yong
2017-05-01
Traditional facial expression recognition methods usually assume that the training set and the test set are independent and identically distributed. However, in actual expression recognition applications, the conditions of independent and identical distribution are hardly satisfied for the training set and the test set because of differences in lighting, shade, race, and so on. In order to solve this problem and improve the performance of expression recognition in actual applications, a novel method based on transfer learning and sparse coding is applied to facial expression recognition. First of all, a common primitive model, that is, a dictionary, is learnt. Then, based on the idea of transfer learning, the learned primitive patterns are transferred to facial expressions and the corresponding feature representation is obtained by sparse coding. Experimental results on the CK+, JAFFE and NVIE databases show that the transfer learning method based on sparse coding can effectively improve the expression recognition rate in cross-domain expression recognition tasks and is suitable for practical facial expression recognition applications.
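A minimal sketch of the general idea (dictionary learnt on a source domain, reused to sparse-code a target domain, codes then classified); the data here are random stand-ins for face features, and the dictionary size, sparsity level, and classifier are assumptions rather than the authors' settings.

# Sketch of the general idea only: learn a dictionary on source-domain faces,
# transfer it to sparse-code target-domain faces, and classify the codes.
# Data here are random stand-ins; n_components/alpha are illustrative.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
source_faces = rng.normal(size=(200, 256))      # e.g. vectorized source-domain patches
target_faces = rng.normal(size=(80, 256))       # e.g. vectorized target-domain patches
target_labels = rng.integers(0, 6, size=80)     # 6 expression classes

# learn the shared "primitive" dictionary on the source domain
dico = DictionaryLearning(n_components=64, alpha=1.0, max_iter=50,
                          transform_algorithm="lasso_lars",
                          transform_alpha=1.0, random_state=0)
dico.fit(source_faces)

# transfer: represent target-domain faces as sparse codes over that dictionary
codes = dico.transform(target_faces)

# train and evaluate an expression classifier on the transferred representation
clf = LinearSVC().fit(codes[:60], target_labels[:60])
print("held-out accuracy:", clf.score(codes[60:], target_labels[60:]))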
ERIC Educational Resources Information Center
Marcinowski, Emily C.; Campbell, Julie Marie
2017-01-01
Object construction involves organizing multiple objects into a unified structure (e.g., stacking blocks into a tower) and may provide infants with unique spatial information. Because object construction entails placing objects in spatial locations relative to one another, infants can acquire information about spatial relations during construction…
Odors as effective retrieval cues for stressful episodes.
Wiemers, Uta S; Sauvage, Magdalena M; Wolf, Oliver T
2014-07-01
Olfactory information seems to play a special role in memory due to the fast and direct processing of olfactory information in limbic areas like the amygdala and the hippocampus. This has led to the assumption that odors can serve as effective retrieval cues for autobiographic memories, especially emotional memories. The current study sought to investigate whether an olfactory cue can serve as an effective retrieval cue for memories of a stressful episode. A total of 95 participants were exposed to a psychosocial stressor or a well-matched but not stressful control condition. During both conditions, visual objects were present, either bound to the situation (central objects) or not (peripheral objects). Additionally, an ambient odor was present during both conditions. The next day, participants engaged in an unexpected object recognition task either under the influence of the same odor as was present during encoding (congruent odor) or a different odor (non-congruent odor). Results show that stressed participants had better memory for all objects, and especially for central visual objects, if recognition took place under the influence of the congruent odor. An olfactory cue thus indeed seems to be an effective retrieval cue for stressful memories. Copyright © 2013 Elsevier Inc. All rights reserved.
Rosselli, Federica B.; Alemi, Alireza; Ansuini, Alessio; Zoccolan, Davide
2015-01-01
In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning. PMID:25814936
Knowing Me, Knowing Who? Getting to Know Your Students' Preferred Learning Style
ERIC Educational Resources Information Center
Reed, Julian A.; Banks, Aaron L.; Carlisle, Cynthia S.
2004-01-01
Recognizing each student's preferred learning style not only enhances the teaching and learning experience, but helps make the gymnasium a fun place to learn new skills and be physically active. This article addresses three objectives that form a pedagogical strategy with the potential to "get to know" the students in a more personal way. First,…
ERIC Educational Resources Information Center
Holzinger, Andreas; Kickmeier-Rust, Michael D.; Wassertheurer, Sigi; Hessinger, Michael
2009-01-01
Objective: Since simulations are often accepted uncritically, with excessive emphasis being placed on technological sophistication at the expense of underlying psychological and educational theories, we evaluated the learning performance of simulation software, in order to gain insight into the proper use of simulations for application in medical…
Effects of Personal Learning Devices and Their Usages on Student Learning and Engagement
ERIC Educational Resources Information Center
Labrensz, Jonathan; Ayebo, Abraham
2018-01-01
The objective of this study was to investigate the effects of using Personal Learning Devices as interactive white boards on students' learning and engagement. The study took place in an Algebra 2 classroom during the 2015-2016 school year. Baseline scores were gathered in the fall of 2015 and control and experimental scores were gathered in the…
ERIC Educational Resources Information Center
Zheng, Dongping; Schmidt, Matthew; Hu, Ying; Liu, Min; Hsu, Jesse
2017-01-01
The purpose of this research was to explore the relationships between design, learning, and translanguaging in a 3D collaborative virtual learning environment for adolescent learners of Chinese and English. We designed an open-ended space congruent with ecological and dialogical perspectives on second language acquisition. In such a space,…
ERIC Educational Resources Information Center
Sean, Michael; Ihanainen, Pekka
2015-01-01
This paper proposed a method for developing capacity for lifelong learning in open spaces, defined here as places without predefined learning structures or objectives, through the cultivation of aesthetic literacy. This discussion will be situated within fieldwork performed by the authors in Helsinki, Finland, and Tallinn, Estonia, in 2013. Based…
Characterizing age-related decline of recognition memory and brain activation profile in mice.
Belblidia, Hassina; Leger, Marianne; Abdelmalek, Abdelouadoud; Quiedeville, Anne; Calocer, Floriane; Boulouard, Michel; Jozet-Alves, Christelle; Freret, Thomas; Schumann-Bard, Pascale
2018-06-01
Episodic memory decline is one of the earlier deficits occurring during normal aging in humans. The question of spatial versus non-spatial sensitivity to age-related memory decline is of importance for a full understanding of these changes. Here, we characterized the effect of normal aging on both non-spatial (object) and spatial (object location) memory performance as well as on the associated neuronal activation in mice. Novel-object (NOR) and object-location (OLR) recognition tests, respectively assessing the identity and spatial features of object memory, were examined at different ages. We show that memory performance in both tests was altered by aging as early as 15 months of age: NOR memory was partially impaired, whereas OLR memory was found to be fully disrupted at 15 months of age. Brain activation profiles were assessed for both tests using immunohistochemical detection of c-Fos (a neuronal activation marker) in 3- and 15-month-old mice. Normal performance in the NOR task by 3-month-old mice was associated with activation of the hippocampus and a trend towards activation of the perirhinal cortex, in a way that differed significantly from 15-month-old mice. During the OLR task, brain activation took place in the hippocampus in 3-month-old but not significantly in 15-month-old mice, which were fully impaired at this task. These differential alterations of object and object-location recognition memory may be linked to differential alteration of the neuronal networks supporting these tasks. Copyright © 2018 Elsevier Inc. All rights reserved.
The roles of scene priming and location priming in object-scene consistency effects
Heise, Nils; Ansorge, Ulrich
2014-01-01
Presenting consistent objects in scenes facilitates object recognition as compared to inconsistent objects. Yet the mechanisms by which scenes influence object recognition are still not understood. According to one theory, consistent scenes facilitate visual search for objects at expected places. Here, we investigated two predictions following from this theory: if visual search is responsible for consistency effects, consistency effects could be weaker (1) with better-primed than less-primed object locations, and (2) with less-primed than better-primed scenes. In Experiments 1 and 2, locations of objects were varied within a scene to a different degree (one, two, or four possible locations). In addition, object-scene consistency was studied as a function of progressive numbers of repetitions of the backgrounds. Because repeating locations and backgrounds could facilitate visual search for objects, these repetitions might alter the object-scene consistency effect by lowering location uncertainty. Although we find evidence for a significant consistency effect, we find no clear support for an impact of scene priming or location priming on the size of the consistency effect. Additionally, we find evidence that the consistency effect depends on the eccentricity of the target objects. These results point to only small influences of priming on object-scene consistency effects, but all in all the findings can be reconciled with a visual-search explanation of the consistency effect. PMID:24910628
Building Knowledge through Portfolio Learning in Prior Learning Assessment and Recognition
ERIC Educational Resources Information Center
Conrad, Dianne
2008-01-01
It is important for academic credibility that the process of prior learning assessment and recognition (PLAR) keeps learning and knowledge as its foundational tenets. Doing so ensures PLAR's recognition as a fertile ground for learners' cognitive and personal growth. In many postsecondary venues, PLAR is often misunderstood and confused with…
ERIC Educational Resources Information Center
Bjornavold, Jens
Policies and practices in the areas of identification, assessment, and recognition of nonformal learning in the European Union (EU) were reviewed. The review focused on national and EU-level experiences regarding the following areas and issues: recognition of the contextual nature of learning; identification of methodological requirements for…
Comparison of Feature Learning Methods for Human Activity Recognition Using Wearable Sensors.
Li, Frédéric; Shirahama, Kimiaki; Nisar, Muhammad Adeel; Köping, Lukas; Grzegorzek, Marcin
2018-02-24
Getting a good feature representation of data is paramount for Human Activity Recognition (HAR) using wearable sensors. An increasing number of feature learning approaches-in particular deep-learning based-have been proposed to extract an effective feature representation by analyzing large amounts of data. However, getting an objective interpretation of their performances faces two problems: the lack of a baseline evaluation setup, which makes a strict comparison between them impossible, and the insufficiency of implementation details, which can hinder their use. In this paper, we attempt to address both issues: we firstly propose an evaluation framework allowing a rigorous comparison of features extracted by different methods, and use it to carry out extensive experiments with state-of-the-art feature learning approaches. We then provide all the codes and implementation details to make both the reproduction of the results reported in this paper and the re-use of our framework easier for other researchers. Our studies carried out on the OPPORTUNITY and UniMiB-SHAR datasets highlight the effectiveness of hybrid deep-learning architectures involving convolutional and Long-Short-Term-Memory (LSTM) to obtain features characterising both short- and long-term time dependencies in the data.
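A minimal PyTorch sketch of a hybrid convolutional-plus-LSTM feature extractor of the kind the study finds effective; the layer sizes, window length, and sensor-channel count are assumptions, not the paper's configuration.

# Minimal sketch of a hybrid CNN + LSTM feature extractor of the kind found
# effective above (layer sizes, window length and sensor-channel count are
# assumptions, not the paper's configuration).
import torch
import torch.nn as nn

class ConvLSTMHAR(nn.Module):
    def __init__(self, n_channels=9, n_classes=6, conv_dim=64, lstm_dim=128):
        super().__init__()
        # 1-D convolutions over time capture short-term patterns within a window
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, conv_dim, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(conv_dim, conv_dim, kernel_size=5, padding=2), nn.ReLU())
        # the LSTM then models longer-term dependencies across the window
        self.lstm = nn.LSTM(conv_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, n_classes)

    def forward(self, x):                    # x: (batch, channels, time)
        h = self.conv(x)                     # (batch, conv_dim, time)
        h = h.permute(0, 2, 1)               # (batch, time, conv_dim) for the LSTM
        _, (h_n, _) = self.lstm(h)
        return self.head(h_n[-1])            # logits from the last hidden state

# toy usage: a batch of 16 windows, 9 sensor channels, 128 time steps
logits = ConvLSTMHAR()(torch.randn(16, 9, 128))
print(logits.shape)                          # torch.Size([16, 6])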
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay
2012-01-01
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties [1]. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties [2]. Many innovative and useful methods currently exist for creating novel objects and object categories [3-6] (also see refs. [7, 8]). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter [5, 9, 10], and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects [11-13]. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis [14]. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection [9, 12, 13]. Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics [15, 16]. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects [9, 13]. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis. PMID:23149420
Is spacing really the “friend of induction”?
Verkoeijen, Peter P. J. L.; Bouwmeester, Samantha
2014-01-01
Inductive learning takes place when people learn a new concept or category by observing a variety of exemplars. Kornell and Bjork (2008) asked participants to learn new painting styles either by presenting different paintings of the same artist consecutively (massed presentation) or by mixing paintings of different artists (spaced presentation). In their second experiment, Kornell and Bjork (2008) showed, with a final style-recognition test, that spacing resulted in better inductive learning than massing. Also, by using this style recognition test, they ruled out the possibility that spacing merely resulted in a better memory for the labels of the newly learned painting styles. The findings from Kornell and Bjork’s (2008) second experiment are important because they show that the benefit of spaced learning generalizes to complex learning tasks and outcomes, and that it is not confined to rote memory learning. However, the findings from Kornell and Bjork’s (2008) second experiment have never been replicated. In the present study we performed an exact and high-powered replication of Kornell and Bjork’s (2008) second experiment with a Web-based sample. Such a replication contributes to establishing the reliability of the original finding and hence to more conclusive evidence of the spacing effect in inductive learning. The findings from the present replication attempt revealed a medium-sized advantage of spacing over massing in inductive learning, which was comparable to the original effect in the experiment by Kornell and Bjork (2008). Also, the 95% confidence intervals (CI) of the effect sizes from both experiments overlapped considerably. Hence, the findings from the present replication experiment and the original experiment clearly reinforce each other. PMID:24744742
3-Dimensional Scene Perception during Active Electrolocation in a Weakly Electric Pulse Fish
von der Emde, Gerhard; Behr, Katharina; Bouton, Béatrice; Engelmann, Jacob; Fetz, Steffen; Folde, Caroline
2010-01-01
Weakly electric fish use active electrolocation for object detection and orientation in their environment even in complete darkness. The African mormyrid Gnathonemus petersii can detect object parameters, such as material, size, shape, and distance. Here, we tested whether individuals of this species can learn to identify 3-dimensional objects independently of the training conditions and independently of the object's position in space (rotation-invariance; size-constancy). Individual G. petersii were trained in a two-alternative forced-choice procedure to electrically discriminate between a 3-dimensional object (S+) and several alternative objects (S−). Fish were then tested whether they could identify the S+ among novel objects and whether single components of S+ were sufficient for recognition. Size-constancy was investigated by presenting the S+ together with a larger version at different distances. Rotation-invariance was tested by rotating S+ and/or S− in 3D. Our results show that electrolocating G. petersii could (1) recognize an object independently of the S− used during training. When only single components of a complex S+ were offered, recognition of S+ was more or less affected depending on which part was used. (2) Object-size was detected independently of object distance, i.e. fish showed size-constancy. (3) The majority of the fishes tested recognized their S+ even if it was rotated in space, i.e. these fishes showed rotation-invariance. (4) Object recognition was restricted to the near field around the fish and failed when objects were moved more than about 4 cm away from the animals. Our results indicate that even in complete darkness our G. petersii were capable of complex 3-dimensional scene perception using active electrolocation. PMID:20577635
Long, Chengjiang; Hua, Gang; Kapoor, Ashish
2015-01-01
We present a noise-resilient probabilistic model for active learning of a Gaussian process classifier from crowds, i.e., a set of noisy labelers. It explicitly models both the overall label noise and the expertise level of each individual labeler with two levels of flip models. Expectation propagation is adopted for efficient approximate Bayesian inference of our probabilistic model for classification, based on which a generalized EM algorithm is derived to estimate both the global label noise and the expertise of each individual labeler. The probabilistic nature of our model immediately allows the adoption of the prediction entropy for active selection of data samples to be labeled, and active selection of high-quality labelers based on their estimated expertise to label the data. We apply the proposed model to four visual recognition tasks, i.e., object category recognition, multi-modal activity recognition, gender recognition, and fine-grained classification, on four datasets with real crowd-sourced labels from Amazon Mechanical Turk. The experiments clearly demonstrate the efficacy of the proposed model. In addition, we extend the proposed model with the Predictive Active Set Selection Method to speed up the active learning system, whose efficacy is verified by conducting experiments on the first three datasets. The results show that our extended model not only preserves high accuracy but also achieves higher efficiency. PMID:26924892
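The entropy-based active selection mentioned above can be sketched independently of the Gaussian process machinery: given each unlabeled sample's predicted class probabilities, query the sample whose predictive distribution has maximum entropy. The snippet below is a generic illustration with assumed array shapes, not the paper's EP-based model or its labeler-selection step.

```python
import numpy as np

def select_most_uncertain(probs):
    """probs: (n_samples, n_classes) predictive probabilities.
    Returns the index of the sample with maximum predictive entropy."""
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return int(np.argmax(entropy))

# usage: query the label of the most ambiguous of three samples
probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.7, 0.3]])
print(select_most_uncertain(probs))  # -> 1
```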
Automated Recognition of 3D Features in GPIR Images
NASA Technical Reports Server (NTRS)
Park, Han; Stough, Timothy; Fijany, Amir
2007-01-01
A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts. Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
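The object-linking step described above, which connects features found in successive 2-D slices when they fall within a threshold radius of each other, can be sketched as a small graph-building routine. The coordinate format, radius value, and simple Euclidean test below are illustrative assumptions; the actual system also fuses features produced by different extraction algorithms.

```python
import math

def link_features(slices, radius=2.0):
    """slices: list of lists of (x, y) feature centers, one list per depth slice.
    Returns directed edges ((slice_i, feat_j) -> (slice_i+1, feat_k)) linking
    features in adjacent slices that lie within `radius` of each other."""
    edges = []
    for i in range(len(slices) - 1):
        for j, (x1, y1) in enumerate(slices[i]):
            for k, (x2, y2) in enumerate(slices[i + 1]):
                if math.hypot(x2 - x1, y2 - y1) <= radius:
                    edges.append(((i, j), (i + 1, k)))
    return edges

# usage: three slices through a roughly straight pipe
print(link_features([[(0.0, 0.0)], [(0.5, 0.1)], [(1.1, 0.2)]], radius=1.0))
```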
Wolff, J Gerard
2014-01-01
The SP theory of intelligence aims to simplify and integrate concepts in computing and cognition, with information compression as a unifying theme. This article is about how the SP theory may, with advantage, be applied to the understanding of natural vision and the development of computer vision. Potential benefits include an overall simplification of concepts in a universal framework for knowledge and seamless integration of vision with other sensory modalities and other aspects of intelligence. Low level perceptual features such as edges or corners may be identified by the extraction of redundancy in uniform areas in the manner of the run-length encoding technique for information compression. The concept of multiple alignment in the SP theory may be applied to the recognition of objects, and to scene analysis, with a hierarchy of parts and sub-parts, at multiple levels of abstraction, and with family-resemblance or polythetic categories. The theory has potential for the unsupervised learning of visual objects and classes of objects, and suggests how coherent concepts may be derived from fragments. As in natural vision, both recognition and learning in the SP system are robust in the face of errors of omission, commission and substitution. The theory suggests how, via vision, we may piece together a knowledge of the three-dimensional structure of objects and of our environment, it provides an account of how we may see things that are not objectively present in an image, how we may recognise something despite variations in the size of its retinal image, and how raster graphics and vector graphics may be unified. And it has things to say about the phenomena of lightness constancy and colour constancy, the role of context in recognition, ambiguities in visual perception, and the integration of vision with other senses and other aspects of intelligence.
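The run-length encoding idea mentioned above, where uniform regions compress away and the informative residue lies at the boundaries, can be illustrated in a few lines. This is a generic sketch of the compression technique, not of the SP system itself; the pixel row is made up.

```python
def run_length_encode(row):
    """Encode a 1-D sequence of pixel values as (value, run_length) pairs."""
    runs = []
    for value in row:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1
        else:
            runs.append([value, 1])
    return runs

def edge_positions(row):
    """Indices where the value changes, i.e. the boundaries between runs."""
    return [i for i in range(1, len(row)) if row[i] != row[i - 1]]

row = [0, 0, 0, 7, 7, 7, 7, 2, 2]
print(run_length_encode(row))  # [[0, 3], [7, 4], [2, 2]]
print(edge_positions(row))     # [3, 7]  -- candidate edge locations
```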
Signed reward prediction errors drive declarative learning.
De Loof, Esther; Ergo, Kate; Naert, Lien; Janssens, Clio; Talsma, Durk; Van Opstal, Filip; Verguts, Tom
2018-01-01
Reward prediction errors (RPEs) are thought to drive learning. This has been established in procedural learning (e.g., classical and operant conditioning). However, empirical evidence on whether RPEs drive declarative learning-a quintessentially human form of learning-remains surprisingly absent. We therefore coupled RPEs to the acquisition of Dutch-Swahili word pairs in a declarative learning paradigm. Signed RPEs (SRPEs; "better-than-expected" signals) during declarative learning improved recognition in a follow-up test, with increasingly positive RPEs leading to better recognition. In addition, classic declarative memory mechanisms such as time-on-task failed to explain recognition performance. The beneficial effect of SRPEs on recognition was subsequently affirmed in a replication study with visual stimuli.
On-chip learning of hyper-spectral data for real time target recognition
NASA Technical Reports Server (NTRS)
Duong, T. A.; Daud, T.; Thakoor, A.
2000-01-01
As the focus of the present paper, we have used the cascade error projection (CEP) learning algorithm (shown to be hardware-implementable) with an on-chip learning (OCL) scheme to obtain a three-orders-of-magnitude speed-up in target recognition compared to software-based learning schemes. It is thus shown that real-time learning, as well as data processing for target recognition, can be achieved.
Fardell, Joanna E; Vardy, Janette; Johnston, Ian N
2013-10-17
Previous animal studies have examined the potential for cytostatic drugs to induce learning and memory deficits in laboratory animals but, to date, there is no pre-clinical evidence that taxanes have the potential to cause cognitive impairment. Therefore our aim was to explore the short- and long-term cognitive effects of different dosing schedules of the taxane docetaxel (DTX) on laboratory rodents. Healthy male hooded Wistar rats were treated with DTX (6 mg/kg, 10mg/kg) or physiological saline (control), once a week for 3 weeks (Experiment 1) or once only (10mg/kg; Experiment 2). Cognitive function was assessed using the novel object recognition (NOR) task and spatial water maze (WM) task 1 to 3 weeks after treatment and again 4 months after treatment. Shortly after DTX treatment, rats perform poorly on NOR regardless of treatment regimen. Treatment with a single injection of 10mg/kg DTX does not appear to induce sustained deficits in object recognition or peripheral neuropathy. Overall these findings show that treatment with the taxane DTX in the absence of cancer and other anti-cancer treatments causes cognitive impairment in healthy rodents. Copyright © 2013 Elsevier Inc. All rights reserved.
Reverse control for humanoid robot task recognition.
Hak, Sovannara; Mansard, Nicolas; Stasse, Olivier; Laumond, Jean Paul
2012-12-01
Efficient methods to perform motion recognition have been developed using statistical tools. Those methods rely on primitive learning in a suitable space, for example, the latent space of the joint angle and/or adequate task spaces. Learned primitives are often sequential: A motion is segmented according to the time axis. When working with a humanoid robot, a motion can be decomposed into parallel subtasks. For example, in a waiter scenario, the robot has to keep some plates horizontal with one of its arms while placing a plate on the table with its free hand. Recognition can thus not be limited to one task per consecutive segment of time. The method presented in this paper takes advantage of the knowledge of what tasks the robot is able to do and how the motion is generated from this set of known controllers, to perform a reverse engineering of an observed motion. This analysis is intended to recognize parallel tasks that have been used to generate a motion. The method relies on the task-function formalism and the projection operation into the null space of a task to decouple the controllers. The approach is successfully applied on a real robot to disambiguate motion in different scenarios where two motions look similar but have different purposes.
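The task-function formalism referred to above uses projection into the null space of a task to decouple controllers. A minimal numerical sketch of that projection is given below, assuming numpy, two task Jacobians J1 and J2 and desired task velocities v1 and v2; damping, joint limits and the recognition step itself are omitted.

```python
import numpy as np

def two_task_velocity(J1, v1, J2, v2):
    """Joint velocities realising primary task velocity v1 exactly, and
    secondary task velocity v2 only inside the null space of task 1."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1                      # null-space projector of task 1
    q_dot = J1_pinv @ v1                                         # achieve task 1
    q_dot += N1 @ np.linalg.pinv(J2 @ N1) @ (v2 - J2 @ q_dot)    # task 2 inside the null space
    return q_dot

# usage: a toy 3-DoF arm with a 1-D primary task and a 1-D secondary task
J1 = np.array([[1.0, 0.5, 0.0]])
J2 = np.array([[0.0, 1.0, 1.0]])
print(two_task_velocity(J1, np.array([0.2]), J2, np.array([0.1])))
```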
Sparse and redundant representations for inverse problems and recognition
NASA Astrophysics Data System (ADS)
Patel, Vishal M.
Sparse and redundant representation of data enables the description of signals as linear combinations of a few atoms from a dictionary. In this dissertation, we study applications of sparse and redundant representations in inverse problems and object recognition. Furthermore, we propose two novel imaging modalities based on the recently introduced theory of Compressed Sensing (CS). This dissertation consists of four major parts. In the first part of the dissertation, we study a new type of deconvolution algorithm that is based on estimating the image from a shearlet decomposition. Shearlets provide a multi-directional and multi-scale decomposition that has been mathematically shown to represent distributed discontinuities such as edges better than traditional wavelets. We develop a deconvolution algorithm that allows the approximate inversion operator to be controlled on a multi-scale and multi-directional basis. Furthermore, we develop a method for the automatic determination of the threshold values for the noise shrinkage for each scale and direction, without explicit knowledge of the noise variance, using a generalized cross validation method. In the second part of the dissertation, we study a reconstruction method that recovers highly undersampled images, assumed to have a sparse representation in a gradient domain, by using partial measurement samples that are collected in the Fourier domain. Our method makes use of a robust generalized Poisson solver that greatly aids in achieving a significantly improved performance over similar proposed methods. We demonstrate by experiments that this new technique is more flexible than its competitors in working with either random or restricted sampling scenarios. In the third part of the dissertation, we introduce a novel Synthetic Aperture Radar (SAR) imaging modality which can provide a high resolution map of the spatial distribution of targets and terrain using a significantly reduced number of needed transmitted and/or received electromagnetic waveforms. We demonstrate that this new imaging scheme requires no new hardware components and allows the aperture to be compressed. It also presents many new applications and advantages, which include strong resistance to countermeasures and interception, imaging of much wider swaths, and reduced on-board storage requirements. The last part of the dissertation deals with object recognition based on learning dictionaries for simultaneous sparse signal approximations and feature extraction. A dictionary is learned for each object class from the given training examples by minimizing the representation error under a sparseness constraint. A novel test image is then projected onto the span of the atoms in each learned dictionary. The residual vectors along with the coefficients are then used for recognition. Applications to illumination-robust face recognition and automatic target recognition are presented.
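The recognition scheme described in the last part, classifying a test sample by sparsely coding it over each class dictionary and comparing residuals, can be sketched as below. Scikit-learn's orthogonal matching pursuit is used here as a stand-in sparse solver; the dictionary shapes, sparsity level, and the dictionaries themselves (random, untrained) are assumptions for illustration only, and the dictionary learning step is not shown.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def classify_by_residual(dictionaries, y, n_nonzero=5):
    """dictionaries: {class_label: (n_features, n_atoms) array}; y: (n_features,) sample.
    Returns the class whose dictionary reconstructs y with the smallest residual."""
    best_label, best_err = None, np.inf
    for label, D in dictionaries.items():
        code = orthogonal_mp(D, y, n_nonzero_coefs=n_nonzero)
        err = np.linalg.norm(y - D @ code)
        if err < best_err:
            best_label, best_err = label, err
    return best_label

# usage with two random stand-in dictionaries, purely to show the call pattern
rng = np.random.default_rng(0)
def random_dict():
    D = rng.normal(size=(32, 40))
    return D / np.linalg.norm(D, axis=0)   # unit-norm atoms, as OMP expects
dicts = {"faces": random_dict(), "targets": random_dict()}
print(classify_by_residual(dicts, rng.normal(size=32)))
```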
Spoerer, Courtney J; Eguchi, Akihiro; Stringer, Simon M
2016-02-01
In order to develop transformation invariant representations of objects, the visual system must make use of constraints placed upon object transformation by the environment. For example, objects transform continuously from one point to another in both space and time. These two constraints have been exploited separately in order to develop translation and view invariance in a hierarchical multilayer model of the primate ventral visual pathway in the form of continuous transformation learning and temporal trace learning. We show for the first time that these two learning rules can work cooperatively in the model. Using these two learning rules together can support the development of invariance in cells and help maintain object selectivity when stimuli are presented over a large number of locations or when trained separately over a large number of viewing angles. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
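The temporal trace learning rule invoked above can be written compactly: the postsynaptic activity is low-pass filtered into a trace, so inputs arriving close together in time (for example, successive views of the same object) strengthen the same weights. The sketch below is a generic illustration with made-up parameter values, not the specific network of the paper, and the companion continuous-transformation rule is not shown.

```python
import numpy as np

def trace_learning(inputs, w, eta=0.5, alpha=0.1):
    """inputs: (n_steps, n_inputs) successive stimulus vectors; w: (n_inputs,) weights.
    The output trace y_bar carries activity across steps, binding temporally
    adjacent views of an object onto the same weight vector."""
    y_bar = 0.0
    for x in inputs:
        y = float(w @ x)                      # instantaneous output
        y_bar = (1 - eta) * y_bar + eta * y   # exponentially decaying trace
        w = w + alpha * y_bar * x             # Hebbian update using the trace
    return w

# usage: two slightly shifted views of the same pattern presented in sequence
views = np.array([[1.0, 0.8, 0.0], [0.8, 1.0, 0.0]])
print(trace_learning(views, np.array([0.2, 0.2, 0.2])))
```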
Fuzzy support vector machines for adaptive Morse code recognition.
Yang, Cheng-Hong; Jin, Li-Cheng; Chuang, Li-Yeh
2006-11-01
Morse code is now being harnessed for use in rehabilitation applications of augmentative-alternative communication and assistive technology, facilitating mobility, environmental control and adapted worksite access. In this paper, Morse code is selected as a communication adaptive device for persons who suffer from muscle atrophy, cerebral palsy or other severe handicaps. A stable typing rate is strictly required for Morse code to be effective as a communication tool. Therefore, an adaptive automatic recognition method with a high recognition rate is needed. The proposed system uses both fuzzy support vector machines and the variable-degree variable-step-size least-mean-square algorithm to achieve these objectives. We apply fuzzy memberships to each point, and provide different contributions to the decision learning function for support vector machines. Statistical analyses demonstrated that the proposed method elicited a higher recognition rate than other algorithms in the literature.
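The central idea of weighting each training point by a fuzzy membership, so that uncertain samples contribute less to the decision function, can be approximated with per-sample weights in a standard SVM solver. The sketch below uses scikit-learn's SVC with sample_weight as a stand-in; it is not the authors' fuzzy SVM, the variable-step-size LMS stage is omitted, and the features and membership values are invented.

```python
import numpy as np
from sklearn.svm import SVC

# toy "dot/dash" feature vectors (e.g. normalised key-press durations) and labels
X = np.array([[0.10], [0.12], [0.15], [0.30], [0.35], [0.40]])
y = np.array([0, 0, 0, 1, 1, 1])             # 0 = dot, 1 = dash

# fuzzy memberships: typical points get weight near 1, doubtful points less
memberships = np.array([1.0, 0.9, 0.6, 0.7, 0.9, 1.0])

clf = SVC(kernel="rbf", C=10.0)
clf.fit(X, y, sample_weight=memberships)      # weights scale each point's penalty
print(clf.predict([[0.14], [0.33]]))          # expected: [0 1]
```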
Ruiz, Sergio; Lee, Sangkyun; Soekadar, Surjo R; Caria, Andrea; Veit, Ralf; Kircher, Tilo; Birbaumer, Niels; Sitaram, Ranganatha
2013-01-01
Real-time functional magnetic resonance imaging (rtfMRI) is a novel technique that has allowed subjects to achieve self-regulation of circumscribed brain regions. Despite its anticipated therapeutic benefits, there is no report on successful application of this technique in psychiatric populations. The objectives of the present study were to train schizophrenia patients to achieve volitional control of bilateral anterior insula cortex on multiple days, and to explore the effect of learned self-regulation on face emotion recognition (an extensively studied deficit in schizophrenia) and on brain network connectivity. Nine patients with schizophrenia were trained to regulate the hemodynamic response in bilateral anterior insula with contingent rtfMRI neurofeedback over a 2-week training period. At the end of the training stage, patients performed a face emotion recognition task to explore behavioral effects of learned self-regulation. A learning effect in self-regulation was found for bilateral anterior insula, which persisted through the training. Following successful self-regulation, patients recognized disgust faces more accurately and happy faces less accurately. Improvements in disgust recognition were correlated with levels of self-activation of right insula. The rtfMRI training led to an increase in the number of incoming and outgoing effective connections of the anterior insula. This study shows for the first time that patients with schizophrenia can learn volitional brain regulation by rtfMRI feedback training, leading to changes in the perception of emotions and modulation of brain network connectivity. These findings open the door for further studies of rtfMRI in severely ill psychiatric populations, and possible therapeutic applications. Copyright © 2011 Wiley Periodicals, Inc.
Software for Partly Automated Recognition of Targets
NASA Technical Reports Server (NTRS)
Opitz, David; Blundell, Stuart; Bain, William; Morris, Matthew; Carlson, Ian; Mangrich, Mark; Selinsky, T.
2002-01-01
The Feature Analyst is a computer program for assisted (partially automated) recognition of targets in images. This program was developed to accelerate the processing of high-resolution satellite image data for incorporation into geographic information systems (GIS). This program creates an advanced user interface that embeds proprietary machine-learning algorithms in commercial image-processing and GIS software. A human analyst provides samples of target features from multiple sets of data, then the software develops a data-fusion model that automatically extracts the remaining features from selected sets of data. The program thus leverages the natural ability of humans to recognize objects in complex scenes, without requiring the user to explain the human visual recognition process by means of lengthy software. Two major subprograms are the reactive agent and the thinking agent. The reactive agent strives to quickly learn the user's tendencies while the user is selecting targets and to increase the user's productivity by immediately suggesting the next set of pixels that the user may wish to select. The thinking agent utilizes all available resources, taking as much time as needed, to produce the most accurate autonomous feature-extraction model possible.
Perna, Judith Camats; Wotjak, Carsten T; Stork, Oliver; Engelmann, Mario
2015-05-01
The present study was designed to further investigate the nature of stimuli and the timing of their presentation, which can induce retroactive interference with social recognition memory in mice. In accordance with our previous observations, confrontation with an unfamiliar conspecific juvenile 3h and 6h, but not 22 h, after the initial learning session resulted in retroactive interference. The same effect was observed with the exposure to both enantiomers of the monomolecular odour carvone, and with a novel object. Exposure to a loud tone (12 KHz, 90 dB) caused retroactive interference at 6h, but not 3h and 22 h, after sampling. Our data show that retroactive interference of social recognition memory can be induced by exposing the experimental subjects to the defined stimuli presented <22 h after learning in their home cage. The distinct interference triggered by the tone presentation at 6h after sampling may be linked to the intrinsic aversiveness of the loud tone and suggests that at this time point memory consolidation is particularly sensitive to stress. Copyright © 2015 Elsevier Inc. All rights reserved.
Visual Recognition of the Elderly Concerning Risks of Falling or Stumbling Indoors in the Home
Katsura, Toshiki; Miura, Norio; Hoshino, Akiko; Usui, Kanae; Takahashi, Yasuro; Hisamoto, Seiichi
2011-01-01
Objective: The objective of this study was to verify the recognition of dangers and obstacles within a house in the elderly when walking based on analyses of gaze point fixation. Materials and Methods: The rate of recognizing indoor dangers was compared among 30 elderly, 14 middle-aged and 11 young individuals using the Eye Mark Recorder. Results: 1) All of the elderly, middle-aged and young individuals showed a high recognition rate of 100% or near 100% when ascending outdoor steps but a low rate of recognizing obstacles placed on the steps. They showed a recognition rate of about 60% when descending steps from residential premises to the street. The rate of recognizing middle steps in the elderly was significantly lower than that in younger and middle-aged individuals. Regarding recognition indoors, when ascending stairs, all of the elderly, middle-aged and young individuals showed a high recognition rate of nearly 100%. When descending stairs, they showed a recognition rate of 70-90%. However, although the recognition rate in the elderly was lower than in younger and middle-aged individuals, no significant difference was observed. 2) When moving indoors, all of the elderly, middle-aged and young individuals showed a recognition rate of 70%-80%. The recognition rate was high regarding obstacles such as floors, televisions and chests of drawers but low for obstacles in the bathroom and steps on the path. The rate of recognizing steps of doorsills forming the division between a Japanese-style room and corridor as well as obstacles in a Japanese-style room was low, and the rate in the elderly was low, being 40% or less. Conclusion: The rate of recognizing steps of doorsills as well as obstacles in a Japanese-style room was lower in the elderly in comparison with middle-aged or young individuals. PMID:25648876
Involving a young person in the development of a digital resource in nurse education.
Fenton, Gaynor
2014-01-01
Health policies across western societies have embedded the need for service user and carer perspectives in service design and delivery of educational programmes. There is a growing recognition of the need to include the perspectives of children and young people as service users in the design and delivery of child-focused educational programmes. Digital storytelling provides a strategy for student nurses to gain insight into the lived experiences of children and young people. Engaging with these stories enables students to develop an understanding of a young person's experience of healthcare. This paper outlines a project that developed a digital learning object based upon a young person's experience of cancer and student evaluations of the digital learning object as a teaching and learning strategy. Over 80% of students rated the digital learning object as interesting and were motivated to explore its content. In addition, the evaluation highlighted that listening to the young person's experiences of her treatment regimes was informative and assisted understanding of a patient's perspective of care delivery. Copyright © 2013 Elsevier Ltd. All rights reserved.
A new selective developmental deficit: Impaired object recognition with normal face recognition.
Germine, Laura; Cashdollar, Nathan; Düzel, Emrah; Duchaine, Bradley
2011-05-01
Studies of developmental deficits in face recognition, or developmental prosopagnosia, have shown that individuals who have not suffered brain damage can show face recognition impairments coupled with normal object recognition (Duchaine and Nakayama, 2005; Duchaine et al., 2006; Nunn et al., 2001). However, no developmental cases with the opposite dissociation - normal face recognition with impaired object recognition - have been reported. The existence of a case of non-face developmental visual agnosia would indicate that the development of normal face recognition mechanisms does not rely on the development of normal object recognition mechanisms. To see whether a developmental variant of non-face visual object agnosia exists, we conducted a series of web-based object and face recognition tests to screen for individuals showing object recognition memory impairments but not face recognition impairments. Through this screening process, we identified AW, an otherwise normal 19-year-old female, who was then tested in the lab on face and object recognition tests. AW's performance was impaired in within-class visual recognition memory across six different visual categories (guns, horses, scenes, tools, doors, and cars). In contrast, she scored normally on seven tests of face recognition, tests of memory for two other object categories (houses and glasses), and tests of recall memory for visual shapes. Testing confirmed that her impairment was not related to a general deficit in lower-level perception, object perception, basic-level recognition, or memory. AW's results provide the first neuropsychological evidence that recognition memory for non-face visual object categories can be selectively impaired in individuals without brain damage or other memory impairment. These results indicate that the development of recognition memory for faces does not depend on intact object recognition memory and provide further evidence for category-specific dissociations in visual recognition. Copyright © 2010 Elsevier Srl. All rights reserved.
The Eyes Know Time: A Novel Paradigm to Reveal the Development of Temporal Memory
ERIC Educational Resources Information Center
Pathman, Thanujeni; Ghetti, Simona
2014-01-01
Temporal memory in 7-year-olds, 10-year-olds, and young adults (N = 78) was examined introducing a novel eye-movement paradigm. Participants learned object sequences and were tested under three conditions: temporal order, temporal context, and recognition. Age-related improvements in accuracy were found across conditions; accuracy in the temporal…
ERIC Educational Resources Information Center
Jessberger, Sebastian; Clark, Robert E.; Broadbent, Nicola J.; Clemenson, Gregory D., Jr.; Consiglio, Antonella; Lie, D. Chichung; Squire, Larry R.; Gage, Fred H.
2009-01-01
New granule cells are born throughout life in the dentate gyrus of the hippocampal formation. Given the fundamental role of the hippocampus in processes underlying certain forms of learning and memory, it has been speculated that newborn granule cells contribute to cognition. However, previous strategies aiming to causally link newborn neurons…
Atzori, Manfredo; Cognolato, Matteo; Müller, Henning
2016-01-01
Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years, deep learning has revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed us to run several tests to evaluate the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied on the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks. This fact suggests that it may be interesting to evaluate whether larger networks can increase sEMG classification accuracy as well. PMID:27656140
Hasanein, Parisa; Teimuri Far, Massoud
2015-04-01
Cannabinoid and endocannabinoid systems have been implicated in several physiological functions including modulation of cognition. In this study we evaluated the effects and interaction between fatty-acid amide hydrolase (FAAH) inhibitor URB597 and CB1 receptor agonist WIN55, 212-2 on memory using object recognition and passive avoidance learning (PAL) tests. Learning and memory impairment was induced by WIN 55, 212-2 administration (1mg/kg, i.p.) 30min before the acquisition trial. URB597 (0.1, 0.3 and 1mg/kg, i.p.) or SR141716A (1mg/kg, i.p.) was injected to rats 10min before WIN 55, 212-2 or URB597 respectively. URB597 (0.3 and 1mg/kg) but not 0.1mg/kg induced higher discrimination index (DI) in object recognition test and enhanced memory acquisition in PAL test. The cognitive enhancing effect of URB597 was blocked by a CB1 receptor antagonist, SR141716A which at this dose alone had no effect on cognition. WIN55, 212-2 caused cognition deficits in both tests. URB597 (0.3 and 1mg/kg) treatment could alleviate the negative influence of WIN 55, 212-2 on cognition and memory. These results indicate URB597 potential to protect against memory deficits induced by cannabinoid. Therefore, in combination with URB597 beneficial effects, this study suggests that URB597 has recognition and acquisition memory enhancing effects. It may also constitute a novel approach for the treatment of cannabinoid induced memory deficits and lead to a better understanding of the brain mechanisms underlying cognition. Copyright © 2015 Elsevier Inc. All rights reserved.
Flexible Learning Strategies in First through Fourth-Year Courses
ERIC Educational Resources Information Center
Cassidy, Alice; Fu, Guopeng; Valley, Will; Lomas, Cyprien; Jovel, Eduardo; Riseman, Andrew
2016-01-01
Flexible Learning (FL) is a pedagogical approach allowing for flexibility of time, place, and audience, including but not solely focused on the use of technologies. We describe Flexible Learning as a pedagogical approach in four courses framed by three key themes: 1) objectives and aspects of course design, 2) evaluation and assessment, and 3)…
Corticosterone mediates some but not other behavioural changes induced by prenatal stress in rats.
Salomon, S; Bejar, C; Schorer-Apelbaum, D; Weinstock, M
2011-02-01
The effect of daily varied stress from days 13-21 of gestation in Wistar rats was investigated by tests of learning and memory and anxiogenic behaviour in the 60-day-old offspring of both sexes. Prenatal stress decreased the anogenital distance in males at 1 day of age. Anxiogenic behaviour in the elevated plus maze was seen in prenatally-stressed rats of both genders. There was no significant gender difference in the rate of spatial learning in the Morris water maze but prenatal stress only slowed that of males. In the object recognition test with an inter-trial interval of 40 min, females but not males, discriminated between a familiar and novel object. Prenatal stress did not affect object discrimination in females but feminised that in males. Maternal adrenalectomy with replacement of basal corticosterone levels in the drinking fluid prevented all of the above effects of prenatal stress in the offspring. To mimic the peak corticosterone levels and time course of elevation in response to stress, corticosterone (3 mg/kg) was injected twice (0 and 30 min) on days 13-16 and once on days 17-20 of gestation to adrenalectomised mothers. This treatment re-instated anxiogenic behaviour similar to that induced by prenatal stress, indicating that it is mediated by exposure of the foetal brain to raised levels of corticosterone. However, steroid administration to adrenalectomised dams did not decrease anogenital distance, feminise object recognition memory or slow spatial learning in their male offspring. The findings indicate that other adrenal hormones are necessary to induce these effects of prenatal stress. © 2011 The Authors. Journal of Neuroendocrinology © 2011 Blackwell Publishing Ltd.
Baxter, Mark G; Gaffan, David; Kyriazis, Diana A; Mitchell, Anna S
2007-10-17
The orbital prefrontal cortex is thought to be involved in behavioral flexibility in primates, and human neuroimaging studies have identified orbital prefrontal activation during episodic memory encoding. The goal of the present study was to ascertain whether deficits in strategy implementation and episodic memory that occur after ablation of the entire prefrontal cortex can be ascribed to damage to the orbital prefrontal cortex. Rhesus monkeys were preoperatively trained on two behavioral tasks, the performance of both of which is severely impaired by the disconnection of frontal cortex from inferotemporal cortex. In the strategy implementation task, monkeys were required to learn about two categories of objects, each associated with a different strategy that had to be performed to obtain food reward. The different strategies had to be applied flexibly to optimize the rate of reward delivery. In the scene memory task, monkeys learned 20 new object-in-place discrimination problems in each session. Monkeys were tested on both tasks before and after bilateral ablation of orbital prefrontal cortex. These lesions impaired new scene learning but had no effect on strategy implementation. This finding supports a role for the orbital prefrontal cortex in memory but places limits on the involvement of orbital prefrontal cortex in the representation and implementation of behavioral goals and strategies.
Bio-inspired approach for intelligent unattended ground sensors
NASA Astrophysics Data System (ADS)
Hueber, Nicolas; Raymond, Pierre; Hennequin, Christophe; Pichler, Alexander; Perrot, Maxime; Voisin, Philippe; Moeglin, Jean-Pierre
2015-05-01
Improving the surveillance capacity over wide zones requires a set of smart battery-powered Unattended Ground Sensors capable of issuing an alarm to a decision-making center. Only high-level information has to be sent when a relevant suspicious situation occurs. In this paper we propose an innovative bio-inspired approach that mimics the human bi-modal vision mechanism and the parallel processing ability of the human brain. The designed prototype exploits two levels of analysis: a low-level panoramic motion analysis, the peripheral vision, and a high-level event-focused analysis, the foveal vision. By tracking moving objects and fusing multiple criteria (size, speed, trajectory, etc.), the peripheral vision module acts as a fast relevant event detector. The foveal vision module focuses on the detected events to extract more detailed features (texture, color, shape, etc.) in order to improve the recognition efficiency. The implemented recognition core is able to acquire human knowledge and to classify in real-time a huge amount of heterogeneous data thanks to its natively parallel hardware structure. This UGS prototype validates our system approach under laboratory tests. The peripheral analysis module demonstrates a low false alarm rate whereas the foveal vision correctly focuses on the detected events. A parallel FPGA implementation of the recognition core succeeds in fulfilling the embedded application requirements. These results are paving the way of future reconfigurable virtual field agents. By locally processing the data and sending only high-level information, their energy requirements and electromagnetic signature are optimized. Moreover, the embedded Artificial Intelligence core enables these bio-inspired systems to recognize and learn new significant events. By duplicating human expertise in potentially hazardous places, our miniature visual event detector will allow early warning and contribute to better human decision making.
Good work - how is it recognised by the nurse?
Christiansen, Bjørg
2008-06-01
The aim of this paper is to shed light on how nurses describe situations that reflect achievement and provide confirmation that they have done good work. Nurses' recognition of good work does not seem to have been the object of direct investigation, but is indirectly reflected in studies focusing on nurses' perceptions on work environments and the multifaceted nature of nursing. However, acknowledging high-quality performance in professional nurses can facilitate nurses in maintaining and strengthening the goals and values of the profession. This in turn can help nurses shoulder the multifaceted responsibilities they have to patients and next of kin. This paper is part of the Professional Learning in a Changing Society project, Institute of Educational Research, University of Oslo, funded by the Research Council of Norway. The project involves four professional groups. This paper, however, focuses on a group of 10 nurses, nine of whom work in hospitals and one in an outpatient clinic. A qualitative approach was chosen to gain insight into how nurses, as well as the other professional groups in the project, engage in processes of knowledge production and quality assurance work. Data presented in this paper derive from semi-structured in-depth interviews conducted during spring 2005 and focuses on the recognition of good work. The following themes were identified as essential in confirming that one did good work: securing fundamental needs of patients and next of kin; managing the flow of responsibilities; positive feedback. CONCLUSIONS. Good work seems to be related to specific situations and a sense of achievement by the respondents. Recognition of good work is not only rewarding and enjoyable; it may also serve as a source of consciousness raising for professional and ethical guidelines in the work place.
Toward Sustainable Communities: Problems And Prerequisites Of Developing Sustainably
This presentation is intended to explain to the community why the PLACES program was developed and how it can meet local and institutional objectives. Our hope is that this application will help develop the PLACES program and foster learning between Germany and the US. The appl...
Muscillo, Rossana; Conforto, Silvia; Schmid, Maurizio; Caselli, Paolo; D'Alessio, Tommaso
2007-01-01
In the context of tele-monitoring, great interest is presently devoted to physical activity, mainly for the elderly or for people with disabilities. In this context, many researchers have studied the recognition of activities of daily living by using accelerometers. The present work proposes a novel algorithm for activity recognition that considers the variability in movement speed by using dynamic programming. This objective is realized by means of a matching and recognition technique that determines the distance between the input signal and a set of previously defined templates. Two different approaches are presented here, one based on Dynamic Time Warping (DTW) and the other based on Derivative Dynamic Time Warping (DDTW). The algorithm was applied to the recognition of gait and of climbing and descending stairs, using a biaxial accelerometer placed on the shin. The results for DDTW, obtained using only one sensor channel on the shin, showed an average recognition score of 95%, higher than the values obtained with DTW (around 85%). Both DTW and DDTW consistently show higher classification rates than classical Linear Time Warping (LTW).
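The dynamic-time-warping distance at the heart of the matching step can be written as a short dynamic-programming routine; the derivative variant (DDTW) applies the same routine to first differences of the signals. The code below is a generic sketch with no windowing constraint and made-up sequences, not the authors' implementation.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def ddtw_distance(a, b):
    """Derivative DTW: DTW applied to first differences of the sequences."""
    return dtw_distance(np.diff(a), np.diff(b))

template = np.array([0.0, 0.5, 1.0, 0.5, 0.0])    # e.g. one stored movement cycle
query = np.array([0.0, 0.4, 0.9, 1.0, 0.4, 0.1])  # same movement at a different speed
print(dtw_distance(template, query), ddtw_distance(template, query))
```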
Local structure preserving sparse coding for infrared target recognition
Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lianfa
2017-01-01
Sparse coding performs well in image classification. However, robust target recognition requires a lot of comprehensive template images and the sparse learning process is complex. We incorporate sparsity into a template matching concept to construct a local sparse structure matching (LSSM) model for general infrared target recognition. A local structure preserving sparse coding (LSPSc) formulation is proposed to simultaneously preserve the local sparse and structural information of objects. By adding a spatial local structure constraint into the classical sparse coding algorithm, LSPSc can improve the stability of sparse representation for targets and inhibit background interference in infrared images. Furthermore, a kernel LSPSc (K-LSPSc) formulation is proposed, which extends LSPSc to the kernel space to weaken the influence of the linear structure constraint in nonlinear natural data. Because of the anti-interference and fault-tolerant capabilities, both LSPSc- and K-LSPSc-based LSSM can implement target identification based on a simple template set, which just needs several images containing enough local sparse structures to learn a sufficient sparse structure dictionary of a target class. Specifically, this LSSM approach has stable performance in the target detection with scene, shape and occlusions variations. High performance is demonstrated on several datasets, indicating robust infrared target recognition in diverse environments and imaging conditions. PMID:28323824
An object memory bias induced by communicative reference.
Marno, Hanna; Davelaar, Eddy J; Csibra, Gergely
2016-01-01
In humans, a good proportion of knowledge, including knowledge about objects and object kinds, is acquired via social learning by direct communication from others. If communicative signals raise the expectation of social learning about objects, intrinsic (permanent) features that support object recognition are relevant to store into memory, while extrinsic (accidental) object properties can be ignored. We investigated this hypothesis by instructing participants to memorise shape-colour associations that constituted either an extrinsic object property (the colour of the box that contained the object, Experiment 1) or an intrinsic one (the colour of the object, Experiment 2). Compared to a non-communicative context, communicative presentation of the objects impaired participants' performance when they recalled extrinsic object properties, while their incidental memory of the intrinsic shape-colour associations was not affected. Communicative signals had no effect on performance when the task required the memorisation of intrinsic object properties. The negative effect of communicative reference on the memory of extrinsic properties was also confirmed in Experiment 3, where this property was object location. Such a memory bias suggests that referent objects in communication tend to be seen as representatives of their kind rather than as individuals. Copyright © 2015 Elsevier B.V. All rights reserved.
Towards Real-Time Speech Emotion Recognition for Affective E-Learning
ERIC Educational Resources Information Center
Bahreini, Kiavash; Nadolski, Rob; Westera, Wim
2016-01-01
This paper presents the voice emotion recognition part of the FILTWAM framework for real-time emotion recognition in affective e-learning settings. FILTWAM (Framework for Improving Learning Through Webcams And Microphones) intends to offer timely and appropriate online feedback based upon learner's vocal intonations and facial expressions in order…
ICPR-2016 - International Conference on Pattern Recognition
3D interactive augmented reality-enhanced digital learning systems for mobile devices
NASA Astrophysics Data System (ADS)
Feng, Kai-Ten; Tseng, Po-Hsuan; Chiu, Pei-Shuan; Yang, Jia-Lin; Chiu, Chun-Jie
2013-03-01
With the enhanced processing capability of mobile platforms, augmented reality (AR) has been considered a promising technology for achieving enhanced user experiences (UX). Augmented reality imposes virtual information, e.g., videos and images, onto a live-view digital display. UX of the real-world environment via the display can be effectively enhanced with the adoption of interactive AR technology. Enhancement of UX can be beneficial for digital learning systems. There are existing research works based on AR targeting the design of e-learning systems. However, none of these works focuses on providing three-dimensional (3-D) object modeling for enhanced UX based on interactive AR techniques. In this paper, 3-D interactive augmented reality-enhanced learning (IARL) systems are proposed to provide enhanced UX for digital learning. The proposed IARL systems consist of two major components: markerless pattern recognition (MPR) for 3-D models and velocity-based object tracking (VOT) algorithms. A realistic implementation of the proposed IARL system is conducted on Android-based mobile platforms. UX in digital learning can be greatly improved with the adoption of the proposed IARL systems.
Learning object-to-class kernels for scene classification.
Zhang, Lei; Zhen, Xiantong; Shao, Ling
2014-08-01
High-level image representations have drawn increasing attention in visual recognition, e.g., scene classification, since the invention of the object bank. The object bank represents an image as a response map of a large number of pretrained object detectors and has achieved superior performance for visual recognition. In this paper, based on the object bank representation, we propose the object-to-class (O2C) distances to model scene images. In particular, four variants of O2C distances are presented, and with the O2C distances, we can represent the images using the object bank by lower-dimensional but more discriminative spaces, called distance spaces, which are spanned by the O2C distances. Due to the explicit computation of O2C distances based on the object bank, the obtained representations can possess more semantic meaning. To combine the discriminative ability of the O2C distances across all scene classes, we further propose to kernelize the distance representation for the final classification. We have conducted extensive experiments on four benchmark data sets, UIUC-Sports, Scene-15, MIT Indoor, and Caltech-101, which demonstrate that the proposed approaches can significantly improve the original object bank approach and achieve state-of-the-art performance.
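One simple variant of an object-to-class distance is the distance from an image's object-bank response vector to its nearest neighbour among the training vectors of each class; the image is then re-represented by this vector of per-class distances. The sketch below illustrates only that nearest-neighbour variant with made-up feature dimensions; the paper defines four variants and places a kernelised classifier on top.

```python
import numpy as np

def o2c_representation(x, class_vectors):
    """x: (d,) object-bank response of one image.
    class_vectors: {class_label: (n_i, d) training responses of that class}.
    Returns the per-class minimum distances as a new low-dimensional feature."""
    labels = sorted(class_vectors)
    dists = np.array([np.min(np.linalg.norm(class_vectors[c] - x, axis=1))
                      for c in labels])
    return dists, labels

rng = np.random.default_rng(1)
classes = {"kitchen": rng.normal(size=(20, 200)), "bedroom": rng.normal(size=(20, 200))}
features, order = o2c_representation(rng.normal(size=200), classes)
print(order, features)   # two distances replace the 200-D object-bank vector
```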
Scene recognition based on integrating active learning with dictionary learning
NASA Astrophysics Data System (ADS)
Wang, Chengxi; Yin, Xueyan; Yang, Lin; Gong, Chengrong; Zheng, Caixia; Yi, Yugen
2018-04-01
Scene recognition is a significant topic in the field of computer vision. Most existing scene recognition models require a large amount of labeled training samples to achieve good performance. However, labeling images manually is a time-consuming task and often unrealistic in practice. In order to gain satisfactory recognition results when labeled samples are insufficient, this paper proposes a scene recognition algorithm named Integrating Active Learning and Dictionary Learning (IALDL). IALDL adopts projective dictionary pair learning (DPL) as the classifier and introduces an active learning mechanism into DPL to improve its performance. When constructing the sampling criterion for active learning, IALDL considers both uncertainty and representativeness so as to effectively select useful unlabeled samples from a given sample set for expanding the training dataset. Experimental results on three standard databases demonstrate the feasibility and validity of the proposed IALDL.
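The sampling criterion combining uncertainty with representativeness can be sketched generically: score each unlabeled sample by the entropy of the current classifier's prediction plus its average similarity to the rest of the unlabeled pool, and query the highest-scoring one. The weighting, the cosine similarity, and the array shapes below are assumptions, and the dictionary-pair learning classifier is not included.

```python
import numpy as np

def active_query(probs, features, beta=0.5):
    """probs: (n, k) predicted class probabilities for the unlabeled samples.
    features: (n, d) their feature vectors. Returns the index of the sample to label."""
    eps = 1e-12
    uncertainty = -np.sum(probs * np.log(probs + eps), axis=1)
    # representativeness: mean cosine similarity to the other unlabeled samples
    normed = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
    sims = normed @ normed.T
    representativeness = (sims.sum(axis=1) - 1.0) / (len(features) - 1)
    return int(np.argmax(uncertainty + beta * representativeness))

rng = np.random.default_rng(0)
print(active_query(rng.dirichlet(np.ones(3), size=10), rng.normal(size=(10, 8))))
```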
Third-Graders Learn about Fractions Using Virtual Manipulatives: A Classroom Study
ERIC Educational Resources Information Center
Reimer, Kelly; Moyer, Patricia S.
2005-01-01
With recent advances in computer technology, it is no surprise that the manipulation of objects in mathematics classrooms now includes the manipulation of objects on the computer screen. These objects, referred to as "virtual manipulatives," are essentially replicas of physical manipulatives placed on the World Wide Web in the form of computer…
Fast neutron irradiation deteriorates hippocampus-related memory ability in adult mice.
Yang, Miyoung; Kim, Hwanseong; Kim, Juhwan; Kim, Sung-Ho; Kim, Jong-Choon; Bae, Chun-Sik; Kim, Joong-Sun; Shin, Taekyun; Moon, Changjong
2012-03-01
Object recognition memory and contextual fear conditioning task performance in adult C57BL/6 mice exposed to cranial fast neutron irradiation (0.8 Gy) were examined to evaluate hippocampus-related behavioral dysfunction following acute exposure to relatively low doses of fast neutrons. In addition, hippocampal neurogenesis changes in adult murine brain after cranial irradiation were analyzed using the neurogenesis immunohistochemical markers Ki-67 and doublecortin (DCX). In the object recognition memory test and contextual fear conditioning, mice trained 1 and 7 days after irradiation displayed significant memory deficits compared to the sham-irradiated controls. The number of Ki-67- and DCX-positive cells decreased significantly 24 h post-irradiation. These results indicate that acute exposure of the adult mouse brain to a relatively low dose of fast neutrons interrupts hippocampal functions, including learning and memory, possibly by inhibiting neurogenesis.
Effects of exposure to facial expression variation in face learning and recognition.
Liu, Chang Hong; Chen, Wenfeng; Ward, James
2015-11-01
Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.
Huang, Tao; Li, Xiao-yu; Jin, Rui; Ku, Jing; Xu, Sen-miao; Xu, Meng-ling; Wu, Zhen-zhong; Kong, De-guo
2015-04-01
The present paper puts forward a non-destructive detection method that combines semi-transmission hyperspectral imaging technology with manifold learning dimension reduction algorithms and a least squares support vector machine (LSSVM) to recognize internal and external defects in potatoes simultaneously. Three hundred fifteen potatoes bought at a farmers' market were used as research objects, and a semi-transmission hyperspectral image acquisition system was constructed to acquire hyperspectral images of normal potatoes, potatoes with external defects (bud and green rind), and potatoes with an internal defect (hollow heart). To reflect actual production conditions, the defective part was randomly oriented toward, to the side of, or away from the acquisition probe when the hyperspectral images of externally defective potatoes were acquired. Average spectra (390-1,040 nm) were extracted from the regions of interest for spectral preprocessing. Three manifold learning algorithms, supervised locally linear embedding (SLLE), locally linear embedding (LLE), and isometric mapping (ISOMAP), were then used to reduce the dimension of the spectral data; the low-dimensional data obtained by each manifold learning algorithm were used as model input, and Error Correcting Output Codes (ECOC) were combined with LSSVM to develop the multi-target classification model. By comparing and analyzing the results of the three models, we concluded that SLLE is the optimal manifold learning dimension reduction algorithm and that the SLLE-LSSVM model achieves the best recognition rate for internal and external potato defects. For the test set, the individual recognition rates for normal, bud, green-rind, and hollow-heart potatoes reached 96.83%, 86.96%, 86.96%, and 95%, respectively, and the overall recognition rate was 93.02%. The results indicate that combining semi-transmission hyperspectral imaging with SLLE-LSSVM is a feasible qualitative analytical method that can simultaneously recognize internal and external potato defects, and it provides a technical reference for rapid online non-destructive detection of such defects.
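The overall pipeline (manifold dimension reduction followed by a kernel classifier) can be sketched as below. Standard unsupervised LLE and an RBF-kernel SVC from scikit-learn stand in for the paper's SLLE and ECOC-LSSVM, which are not available off the shelf; the numbers of components and neighbors are assumptions.

```python
# Rough sketch of the manifold-reduction + classification pipeline. Unsupervised
# LLE and an RBF SVC stand in for the paper's SLLE and ECOC-LSSVM.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def classify_spectra(spectra, labels, n_components=10, n_neighbors=12):
    X_train, X_test, y_train, y_test = train_test_split(
        spectra, labels, test_size=0.3, random_state=0)

    # Reduce the high-dimensional average spectra to a low-dimensional embedding.
    lle = LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=n_components)
    Z_train = lle.fit_transform(X_train)
    Z_test = lle.transform(X_test)

    # Multi-class classification of normal / bud / green-rind / hollow-heart samples.
    clf = SVC(kernel="rbf").fit(Z_train, y_train)
    return clf.score(Z_test, y_test)
```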
Wang, Kai; Lu, Jun-Mei; Xing, Zhen-He; Zhao, Qian-Ru; Hu, Lin-Qi; Xue, Lei; Zhang, Jie; Mei, Yan-Ai
2017-01-01
Mounting evidence suggests that exposure to radiofrequency electromagnetic radiation (RF-EMR) can influence learning and memory in rodents. In this study, we examined the effects of single exposure to 1.8 GHz RF-EMR for 30 min on subsequent recognition memory in mice, using the novel object recognition task (NORT). RF-EMR exposure at an intensity of >2.2 W/kg specific absorption rate (SAR) power density induced a significant density-dependent increase in NORT index with no corresponding changes in spontaneous locomotor activity. RF-EMR exposure increased dendritic-spine density and length in hippocampal and prefrontal cortical neurons, as shown by Golgi staining. Whole-cell recordings in acute hippocampal and medial prefrontal cortical slices showed that RF-EMR exposure significantly altered the resting membrane potential and action potential frequency, and reduced the action potential half-width, threshold, and onset delay in pyramidal neurons. These results demonstrate that exposure to 1.8 GHz RF-EMR for 30 min can significantly increase recognition memory in mice, and can change dendritic-spine morphology and neuronal excitability in the hippocampus and prefrontal cortex. The SAR in this study (3.3 W/kg) was outside the range encountered in normal daily life, and its relevance as a potential therapeutic approach for disorders associated with recognition memory deficits remains to be clarified. PMID:28303965
Automatic target recognition and detection in infrared imagery under cluttered background
NASA Astrophysics Data System (ADS)
Gundogdu, Erhan; Koç, Aykut; Alatan, A. Aydın.
2017-10-01
Visual object classification has long been studied in the visible spectrum using conventional cameras. Since labeled images have recently increased in number, it is possible to train deep Convolutional Neural Networks (CNNs) with significant numbers of parameters. As infrared (IR) sensor technology has improved over the last two decades, labeled images from IR sensors have begun to be used for object detection and recognition tasks. We address the problem of infrared object recognition and detection using 15K real-field images from long-wave and mid-wave IR sensors. For feature learning, a stacked denoising autoencoder is trained on this IR dataset. To recognize the objects, the trained stacked denoising autoencoder is fine-tuned according to the binary classification loss of the target object. Once training is completed, test samples are propagated through the network, and the probability that a test sample belongs to a class is computed. Moreover, the trained classifier is utilized in a detect-by-classification method, in which classification is performed over a set of candidate object boxes and the maximum confidence score in a particular location is accepted as the score of the detected object. To decrease the computational complexity, the detection step is not run at every frame; instead, an efficient correlation-filter-based tracker is used, and detection is performed only when the tracker confidence falls below a pre-defined threshold. Experiments conducted on the real-field images demonstrate that the proposed detection and tracking framework produces satisfactory results for detecting tanks against cluttered backgrounds.
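A single denoising-autoencoder layer of the kind stacked for feature learning above might look roughly like the following PyTorch sketch; the layer sizes, noise level, and optimizer settings are assumptions, and the paper additionally stacks several such layers and fine-tunes with a classification loss.

```python
# Minimal single-layer denoising autoencoder in PyTorch. Layer sizes, noise
# level, and optimizer settings are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_in=4096, n_hidden=512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_dae(model, data_loader, n_epochs=10, noise_std=0.1):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(n_epochs):
        for x in data_loader:                 # x: batch of flattened IR patches
            noisy = x + noise_std * torch.randn_like(x)
            opt.zero_grad()
            loss = loss_fn(model(noisy), x)   # reconstruct the clean input
            loss.backward()
            opt.step()
    return model
```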
Deep Neural Networks as a Computational Model for Human Shape Sensitivity
Op de Beeck, Hans P.
2016-01-01
Theories of object recognition agree that shape is of primordial importance, but there is no consensus about how shape might be represented, and so far attempts to implement a model of shape perception that would work with realistic stimuli have largely failed. Recent studies suggest that state-of-the-art convolutional ‘deep’ neural networks (DNNs) capture important aspects of human object perception. We hypothesized that these successes might be partially related to a human-like representation of object shape. Here we demonstrate that sensitivity for shape features, characteristic to human and primate vision, emerges in DNNs when trained for generic object recognition from natural photographs. We show that these models explain human shape judgments for several benchmark behavioral and neural stimulus sets on which earlier models mostly failed. In particular, although never explicitly trained for such stimuli, DNNs develop acute sensitivity to minute variations in shape and to non-accidental properties that have long been implicated to form the basis for object recognition. Even more strikingly, when tested with a challenging stimulus set in which shape and category membership are dissociated, the most complex model architectures capture human shape sensitivity as well as some aspects of the category structure that emerges from human judgments. As a whole, these results indicate that convolutional neural networks not only learn physically correct representations of object categories but also develop perceptually accurate representational spaces of shapes. An even more complete model of human object representations might be in sight by training deep architectures for multiple tasks, which is so characteristic in human development. PMID:27124699
SLurtles: Supporting Constructionist Learning in "Second Life"
ERIC Educational Resources Information Center
Girvan, Carina; Tangney, Brendan; Savage, Timothy
2013-01-01
Constructionism places an emphasis on the process of constructing shareable artefacts. Many virtual worlds, such as "Second Life", provide learners with tools for the construction of objects and hence may facilitate in-world constructionist learning experiences. However, the construction tools available present learners with a significant barrier…
Takeda, A; Suzuki, M; Tempaku, M; Ohashi, K; Tamano, H
2015-09-24
Physiological significance of synaptic Zn(2+) signaling was examined in the CA1 of young rats. In vivo CA1 long-term potentiation (LTP) was induced using a recording electrode attached to a microdialysis probe, and the recording region was locally perfused with artificial cerebrospinal fluid (ACSF) via the microdialysis probe. In vivo CA1 LTP was inhibited under perfusion with CaEDTA and ZnAF-2DA, extracellular and intracellular Zn(2+) chelators, respectively, suggesting that the influx of extracellular Zn(2+) is required for in vivo CA1 LTP induction. The increase in intracellular Zn(2+) was chelated by intracellular ZnAF-2 in the CA1 1 h after local injection of ZnAF-2DA into the CA1, suggesting that intracellular Zn(2+) signaling induced during learning is blocked by intracellular ZnAF-2 when learning takes place 1 h after ZnAF-2DA injection. Accordingly, object recognition was affected when training for the object recognition test was performed 1 h after ZnAF-2DA injection. These data suggest that intracellular Zn(2+) signaling in the CA1 is required for object recognition memory via LTP. Surprisingly, in vivo CA1 LTP was also affected under perfusion with 0.1-1 μM ZnCl2, in contrast to previous data showing that in vitro CA1 LTP was enhanced in the presence of 1-5 μM ZnCl2; thus, the influx of extracellular Zn(2+) into CA1 pyramidal cells has a bidirectional action on CA1 LTP. The present study indicates that the degree of extracellular Zn(2+) influx into CA1 neurons is critical for LTP and cognitive performance. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Can, Mao Van; Tran, Anh Hai; Pham, Dam Minh; Dinh, Bao Quoc; Le, Quan Van; Nguyen, Ba Van; Nguyen, Mai Thanh Thi; Nguyen, Hai Xuan; Nguyen, Nhan Trung; Nishijo, Hisao
2018-03-25
Willughbeia cochinchinensis (WC) has been used in Vietnamese traditional medicine for the treatment of dementia as well as diarrhea, heartburn, and cutaneous abscess and as a diuretic. Alzheimer's disease (AD) is one of the most prevalent diseases in elderly individuals. Acetylcholinesterase (AChE) and butyrylcholinesterase (BChE) inhibitors have been widely used to treat patients with AD. In the present study, we investigated anti-AChE and anti-BChE activities of a natural product, WC, for its potential applications in therapies to prevent/treat dementia. First, compounds extracted from WC were tested for their AChE and BChE inhibitory activities in vitro. Second, in vivo behavioral experiments were performed to investigate the effects of WC at doses of 100, 150, and 200mg/kg on scopolamine (1.5mg/kg)-induced memory and cognitive deficits in mice. The behavior of mice treated with and without WC and/or scopolamine was tested using the Y-maze, Morris water maze, and novel object recognition task. The results of the in vitro assay demonstrated anti-AChE and anti-BChE activities of the compounds extracted from WC. The results of behavioral experiments showed that the administration of WC prevented 1) scopolamine-induced decrease in spontaneous alternation (%) behavior in the Y-maze, 2) scopolamine-induced deficits in spatial learning and memory in the Morris water maze, and 3) scopolamine-induced deficits in novel object recognition. These results indicate that WC prevents cognitive and memory deficits induced by scopolamine injection. Our findings suggest that WC may represent a novel candidate for the treatment of memory and cognitive deficits in humans with dementia. Copyright © 2017. Published by Elsevier B.V.
ERIC Educational Resources Information Center
Lima, Licínio C.; Guimarães, Paula
2016-01-01
This paper focuses on recognition of prior learning as part of a national policy based on European Union guidelines for lifelong learning, and it explains how recognition of prior learning has been perceived since it was implemented in Portugal in 2000. Data discussed are the result of a mixed method research project that surveyed adult learners,…
Spaced Learning Enhances Subsequent Recognition Memory by Reducing Neural Repetition Suppression
Xue, Gui; Mei, Leilei; Chen, Chuansheng; Lu, Zhong-Lin; Poldrack, Russell; Dong, Qi
2012-01-01
Spaced learning usually leads to better recognition memory as compared with massed learning, yet the underlying neural mechanisms remain elusive. One open question is whether the spacing effect is achieved by reducing neural repetition suppression. In this fMRI study, participants were scanned while intentionally memorizing 120 novel faces, half under the massed learning condition (i.e., four consecutive repetitions with jittered interstimulus interval) and the other half under the spaced learning condition (i.e., the four repetitions were interleaved). Recognition memory tests afterward revealed a significant spacing effect: Participants recognized more items learned under the spaced learning condition than under the massed learning condition. Successful face memory encoding was associated with stronger activation in the bilateral fusiform gyrus, which showed a significant repetition suppression effect modulated by subsequent memory status and spaced learning. Specifically, remembered faces showed smaller repetition suppression than forgotten faces under both learning conditions, and spaced learning significantly reduced repetition suppression. These results suggest that spaced learning enhances recognition memory by reducing neural repetition suppression. PMID:20617892
Oyanedel, Carlos N; Binder, Sonja; Kelemen, Eduard; Petersen, Kimberley; Born, Jan; Inostroza, Marion
2014-12-15
Our previous experiments showed that sleep in rats enhances consolidation of hippocampus-dependent episodic-like memory, i.e. the ability to remember an event bound into a specific spatio-temporal context. Here we tested the hypothesis that this enhancing effect of sleep is linked to the occurrence of slow oscillatory and spindle activity during slow wave sleep (SWS). Rats were tested on an episodic-like memory task and on three additional tasks covering separately the where (object place recognition), when (temporal memory), and what (novel object recognition) components of episodic memory. In each task, the sample phase (encoding) was followed by an 80-min retention interval that covered either a period of regular morning sleep or sleep deprivation. Memory during retrieval was tested using preferential exploration of novelty vs. familiarity. Consistent with previous findings, the rats that had slept during the retention interval showed significantly stronger episodic-like memory and spatial memory, and a trend toward improved temporal memory (although not significant). Object recognition memory was similarly retained across sleep and sleep deprivation retention intervals. Recall of episodic-like memory was associated with increased slow oscillatory activity (0.85-2.0 Hz) during SWS in the retention interval. Spatial memory was associated with increased proportions of SWS. Against our hypothesis, a relationship between spindle activity and episodic-like memory performance was not detected, but spindle activity was associated with object recognition memory. The results provide support for the role of SWS and slow oscillatory activity in consolidating hippocampus-dependent memory; the role of spindles in this process needs to be further examined. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Place recognition and heading retrieval are mediated by dissociable cognitive systems in mice.
Julian, Joshua B; Keinath, Alexander T; Muzzio, Isabel A; Epstein, Russell A
2015-05-19
A lost navigator must identify its current location and recover its facing direction to restore its bearings. We tested the idea that these two tasks--place recognition and heading retrieval--might be mediated by distinct cognitive systems in mice. Previous work has shown that numerous species, including young children and rodents, use the geometric shape of local space to regain their sense of direction after disorientation, often ignoring nongeometric cues even when they are informative. Notably, these experiments have almost always been performed in single-chamber environments in which there is no ambiguity about place identity. We examined the navigational behavior of mice in a two-chamber paradigm in which animals had to both recognize the chamber in which they were located (place recognition) and recover their facing direction within that chamber (heading retrieval). In two experiments, we found that mice used nongeometric features for place recognition, but simultaneously failed to use these same features for heading retrieval, instead relying exclusively on spatial geometry. These results suggest the existence of separate systems for place recognition and heading retrieval in mice that are differentially sensitive to geometric and nongeometric cues. We speculate that a similar cognitive architecture may underlie human navigational behavior.
Herring, Nicole R.; Schaefer, Tori L.; Gudelsky, Gary A.; Vorhees, Charles V.; Williams, Michael T.
2008-01-01
Rationale Methamphetamine (MA) has been implicated in cognitive deficits in humans after chronic use. Animal models of neurotoxic MA exposure reveal persistent damage to monoaminergic systems, but few associated cognitive effects. Objectives Since questions have been raised about the typical neurotoxic dosing regimen used in animals and whether it adequately models human cumulative drug exposure, these experiments examined two different dosing regimens. Methods Rats were treated with one of two regimens, one the typical neurotoxic regimen (4 × 10 mg/kg every 2 h) and one based on pharmacokinetic modeling (Cho et al. 2001) designed to better represent accumulating plasma concentrations of MA as seen in human users (24 × 1.67 mg/kg once every 15 min), matched for total daily dose. In two separate experiments, dosing regimens were compared for their effects on markers of neurotoxicity or on behavior. Results On markers of neurotoxicity, MA decreased DA and 5-HT and increased glial fibrillary acidic protein and corticosterone levels, regardless of dosing regimen, 3 days post-treatment. Behaviorally, MA-treated groups, regardless of dosing regimen, showed hypoactivity, increased initial hyperactivity to a subsequent MA challenge, impaired novel object recognition, impaired learning in a multiple-T water maze test of path integration, and no differences on spatial navigation or reference memory in the Morris water maze. After behavioral testing, reductions of DA and 5-HT remained. Conclusions MA treatment induces an effect on path integration learning not previously reported. Dosing regimen had no differential effects on behavior or neurotoxicity. PMID:18509623
Visual object recognition for automatic micropropagation of plants
NASA Astrophysics Data System (ADS)
Brendel, Thorsten; Schwanke, Joerg; Jensch, Peter F.
1994-11-01
Micropropagation of plants is done by cutting juvenile plants and placing the pieces into special container boxes with nutrient solution, where they can grow and be cut again several times. To produce large amounts of biomass, plant micropropagation must be carried out by a robotic system. In this paper we describe parts of the vision system that recognizes plants and their particular cutting points. For this purpose, it is necessary to extract elements of the plants and the relations between these elements (for example root, stem, leaf). Different species vary in their morphological appearance, and variation is also inherent in plants of the same species. We therefore introduce several morphological classes of plants for which we expect the same recognition methods to apply.
International Service Learning and Short-Term Business Study Abroad Programs: A Case Study
ERIC Educational Resources Information Center
Le, Quan V.; Raven, Peter V.; Chen, Stanley
2013-01-01
A service learning project was recently incorporated into a short-term business study abroad program. The main objective was to assess whether there is a place for service learning projects and how they should be integrated into the program. A combination of two surveys was used, one taken before the project and one after. Reflection papers were…
van Assche, Mitsouko; Kebets, Valeria; Lopez, Ursula; Saj, Arnaud; Goldstein, Rachel; Bernasconi, Françoise; Vuilleumier, Patrik; Assal, Frédéric
2016-01-01
The parahippocampal cortex (PHC) participates in both perception and memory. However, the way perceptual and memory processes cooperate when we navigate in our everyday life environment remains poorly understood. We studied a stroke patient with a brain lesion in the right PHC, which resulted in a mild and quantifiable topographic agnosia and allowed us to investigate the role of this structure in overt place recognition. Photographs of personally familiar and unfamiliar places were displayed during functional magnetic resonance imaging (fMRI). Familiar places were either recognized or unrecognized by the patient and 6 age- and education-matched controls in a visual post-scan recognition test. In fMRI, recognized places were associated with a network comprising the fusiform gyrus on the intact side, but also the right anterior PHC, which included the lesion site. Moreover, this right PHC showed increased connectivity with the left homologous PHC in the intact hemisphere. By contrasting recognized with unrecognized familiar places, we replicate the finding of the joint involvement of the retrosplenial cortex, occipito-temporal areas, and posterior parietal cortex in place recognition. This study shows that the ability of the left and right anterior PHC to communicate despite the neurological damage determined place recognition success in this patient. It further highlights a hemispheric asymmetry in this process, by showing the fundamental role of the right PHC in topographic agnosia.
iLab 20M: A Large-scale Controlled Object Dataset to Investigate Deep Learning
2016-07-01
Images are split into sets (test and train) and annotated with rotation labels; AlexNet is fine-tuned on the training set, with the learning rate for all layers set to 0.001.
Lai, Ying-Hui; Tsao, Yu; Lu, Xugang; Chen, Fei; Su, Yu-Ting; Chen, Kuang-Chao; Chen, Yu-Hsuan; Chen, Li-Ching; Po-Hung Li, Lieber; Lee, Chin-Hui
2018-01-20
We investigate the clinical effectiveness of a novel deep learning-based noise reduction (NR) approach under noisy conditions with challenging noise types at low signal-to-noise ratio (SNR) levels for Mandarin-speaking cochlear implant (CI) recipients. The deep learning-based NR approach used in this study consists of two modules, a noise classifier (NC) and a deep denoising autoencoder (DDAE), and is thus termed NC + DDAE. In a series of comprehensive experiments, we conduct qualitative and quantitative analyses on the NC module and the overall NC + DDAE approach. Moreover, we evaluate the speech recognition performance of the NC + DDAE NR and classical single-microphone NR approaches for Mandarin-speaking CI recipients under different noisy conditions. The testing set contains Mandarin sentences corrupted by two types of maskers, two-talker babble noise and construction jackhammer noise, at 0 and 5 dB SNR levels. Two conventional NR techniques and the proposed deep learning-based approach are used to process the noisy utterances. We qualitatively compare the NR approaches using the amplitude envelope and spectrogram plots of the processed utterances. Quantitative objective measures include (1) the normalized covariance measure, to test the intelligibility of the utterances processed by each of the NR approaches; and (2) speech recognition tests conducted by nine Mandarin-speaking CI recipients. These nine CI recipients use their own clinical speech processors during testing. The experimental results of the objective evaluation and the listening tests indicate that under challenging listening conditions, the proposed NC + DDAE NR approach yields higher intelligibility scores than the two compared classical NR techniques, under both matched and mismatched training-testing conditions. When compared to the two well-known conventional NR techniques under challenging listening conditions, the proposed NC + DDAE NR approach has superior noise suppression capabilities and gives less distortion of the key speech envelope information, thus improving speech recognition more effectively for Mandarin CI recipients. The results suggest that the proposed deep learning-based NR approach can potentially be integrated into existing CI signal processors to overcome the degradation of speech perception caused by noise.
ERIC Educational Resources Information Center
Ghatala, Elizabeth S.; And Others
This study applied a frequency theory to measure the superiority of pictures over words in both discrimination learning and recognition memory tasks. Three groups of sixth grade students were given separate instructions before viewing slides of either common objects or words. The first group (control) was asked to study the items shown, the second…
The origin and function of mirror neurons: the missing link.
Lingnau, Angelika; Caramazza, Alfonso
2014-04-01
We argue, by analogy to the neural organization of the object recognition system, that demonstration of modulation of mirror neurons by associative learning does not imply absence of genetic adaptation. Innate connectivity defines the types of processes mirror neurons can participate in while allowing for extensive local plasticity. However, the proper function of these neurons remains to be worked out.
Caetano, Tibério S; McAuley, Julian J; Cheng, Li; Le, Quoc V; Smola, Alex J
2009-06-01
As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this area has been the design of efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the 'labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.
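For intuition, the inference step alone (matching nodes of two graphs by linear assignment over a node-compatibility matrix) can be sketched as follows. The paper's contribution is learning the compatibility function from example matches; the simple negative-squared-distance compatibility used here is an assumption, not the learned function.

```python
# Sketch of the linear-assignment inference step only, with an assumed
# negative-squared-distance node compatibility. Not the paper's learned model.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_nodes(features_g1, features_g2):
    # Compatibility: higher when node features of the two graphs are closer.
    diff = features_g1[:, None, :] - features_g2[None, :, :]
    compatibility = -np.sum(diff ** 2, axis=2)

    # linear_sum_assignment minimises cost, so negate the compatibility matrix.
    rows, cols = linear_sum_assignment(-compatibility)
    return list(zip(rows, cols))
```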
Application of the SNoW machine learning paradigm to a set of transportation imaging problems
NASA Astrophysics Data System (ADS)
Paul, Peter; Burry, Aaron M.; Wang, Yuheng; Kozitsky, Vladimir
2012-01-01
Machine learning methods have been successfully applied to image object classification problems where there is a clear distinction between classes and where a comprehensive set of training samples and ground truth are readily available. The transportation domain is an area where machine learning methods are particularly applicable, since the classification problems typically have well defined class boundaries and, due to high traffic volumes in most applications, massive roadway data is available. Though these classes tend to be well defined, the particular image noise and variations can be challenging. Another challenge is the extremely high accuracy typically required in most traffic applications. Incorrect assignment of fines or tolls due to imaging mistakes is not acceptable in most applications. For the front-seat vehicle occupancy detection problem, classification amounts to determining whether one face (driver only) or two faces (driver + passenger) are detected in the front seat of a vehicle on a roadway. For automatic license plate recognition, the classification problem is a type of optical character recognition problem involving multi-class classification. The SNoW machine learning classifier using local SMQT features is shown to be successful in these two transportation imaging applications.
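SNoW is built from Winnow units over sparse features; a single Winnow unit with the classic multiplicative update can be sketched as below. This is not the SNoW system or its SMQT features, and the promotion factor and threshold are assumed defaults.

```python
# Sketch of a single Winnow unit over sparse binary features, the building
# block behind SNoW-style networks. Not the SNoW system itself; alpha and the
# threshold are assumed defaults.
import numpy as np

class WinnowUnit:
    def __init__(self, n_features, alpha=2.0):
        self.w = np.ones(n_features)
        self.theta = n_features / 2.0   # decision threshold
        self.alpha = alpha              # multiplicative promotion/demotion factor

    def predict(self, x):               # x: binary (0/1) feature vector
        return int(self.w @ x >= self.theta)

    def update(self, x, y):
        if self.predict(x) == y:
            return                      # no mistake, no weight change
        active = x.astype(bool)
        if y == 1:                      # missed a positive: promote active weights
            self.w[active] *= self.alpha
        else:                           # false positive: demote active weights
            self.w[active] /= self.alpha
```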
NASA Astrophysics Data System (ADS)
Sayekti, Retno
2018-03-01
The objective of this study is to examine student usage patterns and perceptions of a MOODLE-based e-learning system that was first used in 2014. The methodology was a survey of 165 respondents from several classes across various subjects. This study investigates the intensity of students' use of e-learning; their choice of time and the justification for that choice; the span of time spent using e-learning; the choice of access place; the media or devices used to access e-learning; and the activities conducted in e-learning. With regard to students' perspectives, the author also examined students' thoughts and feelings about using e-learning. The findings suggest that students tend to use various devices to access e-learning in any place that provides fast internet access. This study also revealed that most students feel that the learning process becomes more effective and efficient with e-learning compared with the traditional in-class learning system.
Journal Writing and Learning: Reading between the Structural, Holistic, and Post-Structural Lines.
ERIC Educational Resources Information Center
Mannion, Greg
2001-01-01
Structural approaches to journal writing enable learners to manage subjectivity while seeking "objective truth." Holistic approaches attempt to synthesize ways of learning, giving a false sense of completion and inclusion. Poststructuralism places journal text in the context of discourses; through reflection and deconstruction, the…
Curating Cartographies of Knowledge: Reading Institutional Study Abroad Portfolio as Text
ERIC Educational Resources Information Center
Ficarra, Julie M.
2017-01-01
The overarching assumption within popular approaches to global learning is that it takes place either in classrooms at home or in the case of study abroad, in experiential learning environments overseas. Policies and programs are carefully crafted to respond to particular institutional goals and objectives towards internationalization. These…
Getting Results: Small Changes, Big Cohorts and Technology
ERIC Educational Resources Information Center
Kenney, Jacqueline L.
2012-01-01
This paper presents an example of constructive alignment in practice. Integrated technology supports were deployed to increase the consistency between learning objectives, activities and assessment and to foster student-centred, higher-order learning processes in the unit. Modifications took place over nine iterations of a second-year Marketing…
Object recognition through a multi-mode fiber
NASA Astrophysics Data System (ADS)
Takagi, Ryosuke; Horisaki, Ryoichi; Tanida, Jun
2017-04-01
We present a method for recognizing an object through a multi-mode fiber. A number of speckle patterns transmitted through a multi-mode fiber are provided to a classifier based on machine learning. We experimentally demonstrated binary classification of face and non-face targets with this method. The measurement process of the experimental setup was random and nonlinear because a multi-mode fiber is a typical strongly scattering medium and no reference light was used in our setup. Comparisons between three supervised learning methods, support vector machine, adaptive boosting, and neural network, are also provided. All of these learning methods achieved high accuracy rates of about 90% for the classification. The approach presented here can realize a compact and smart optical sensor. It is practically useful for medical applications such as endoscopy. Our study also indicates a promising use of artificial intelligence, which has progressed rapidly, for reducing optical and computational costs in optical sensing systems.
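The comparison of the three supervised learners on flattened speckle patterns could be reproduced in spirit with off-the-shelf scikit-learn models, as in the sketch below; the feature representation and hyperparameters are assumptions rather than the authors' setup.

```python
# Rough sketch of the three-classifier comparison on flattened speckle images.
# Hyperparameters and the flattened-pixel representation are assumptions.
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def compare_classifiers(speckle_images, labels):
    # speckle_images: (n_samples, n_pixels) array of flattened speckle patterns
    models = {
        "SVM": SVC(kernel="rbf"),
        "AdaBoost": AdaBoostClassifier(n_estimators=200),
        "NeuralNet": MLPClassifier(hidden_layer_sizes=(256,), max_iter=500),
    }
    return {name: cross_val_score(m, speckle_images, labels, cv=5).mean()
            for name, m in models.items()}
```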
Signed reward prediction errors drive declarative learning
Naert, Lien; Janssens, Clio; Talsma, Durk; Van Opstal, Filip; Verguts, Tom
2018-01-01
Reward prediction errors (RPEs) are thought to drive learning. This has been established in procedural learning (e.g., classical and operant conditioning). However, empirical evidence on whether RPEs drive declarative learning–a quintessentially human form of learning–remains surprisingly absent. We therefore coupled RPEs to the acquisition of Dutch-Swahili word pairs in a declarative learning paradigm. Signed RPEs (SRPEs; “better-than-expected” signals) during declarative learning improved recognition in a follow-up test, with increasingly positive RPEs leading to better recognition. In addition, classic declarative memory mechanisms such as time-on-task failed to explain recognition performance. The beneficial effect of SRPEs on recognition was subsequently affirmed in a replication study with visual stimuli. PMID:29293493
Scheirer, Walter J; de Rezende Rocha, Anderson; Sapkota, Archana; Boult, Terrance E
2013-07-01
To date, almost all experimental evaluations of machine learning-based recognition algorithms in computer vision have taken the form of "closed set" recognition, whereby all testing classes are known at training time. A more realistic scenario for vision applications is "open set" recognition, where incomplete knowledge of the world is present at training time, and unknown classes can be submitted to an algorithm during testing. This paper explores the nature of open set recognition and formalizes its definition as a constrained minimization problem. The open set recognition problem is not well addressed by existing algorithms because it requires strong generalization. As a step toward a solution, we introduce a novel "1-vs-set machine," which sculpts a decision space from the marginal distances of a 1-class or binary SVM with a linear kernel. This methodology applies to several different applications in computer vision where open set recognition is a challenging problem, including object recognition and face verification. We consider both in this work, with large scale cross-dataset experiments performed over the Caltech 256 and ImageNet sets, as well as face matching experiments performed over the Labeled Faces in the Wild set. The experiments highlight the effectiveness of machines adapted for open set evaluation compared to existing 1-class and binary SVMs for the same tasks.
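A much simpler baseline than the 1-vs-set machine still conveys the open-set idea: classify among known classes but reject low-scoring test samples as unknown. The following sketch thresholds one-vs-rest SVM decision scores; the threshold value and integer-coded labels are assumptions, and this is not the paper's method.

```python
# Simplified open-set baseline: threshold one-vs-rest SVM scores so that
# low-confidence samples are rejected as "unknown" (-1). Not the paper's
# 1-vs-set machine; the threshold is an assumption.
import numpy as np
from sklearn.svm import SVC

class OpenSetClassifier:
    def __init__(self, threshold=0.0):
        self.clf = SVC(kernel="linear", decision_function_shape="ovr")
        self.threshold = threshold

    def fit(self, X, y):
        # y: integer class labels for the known classes (reserve -1 for unknown).
        self.clf.fit(X, y)
        return self

    def predict(self, X):
        # Assumes more than two known classes, so decision_function returns one
        # one-vs-rest score per class for each sample.
        scores = self.clf.decision_function(X)
        labels = self.clf.classes_[scores.argmax(axis=1)]
        return np.where(scores.max(axis=1) >= self.threshold, labels, -1)
```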
From fish to fashion: experimental and theoretical insights into the evolution of culture
Laland, K. N.; Atton, N.; Webster, M. M.
2011-01-01
Recent years have witnessed a re-evaluation of the cognitive capabilities of fishes, including with respect to social learning. Indeed, some of the best experimental evidence for animal traditions can be found in fishes. Laboratory experimental studies reveal that many fishes acquire dietary, food site and mating preferences, predator recognition and avoidance behaviour, and learn pathways, through copying other fishes. Concentrating on foraging behaviour, we will present the findings of laboratory experiments that reveal social learning, behavioural innovation, the diffusion of novel behaviour through populations and traditional use of food sites. Further studies reveal surprisingly complex social learning strategies deployed by sticklebacks. We will go on to place these observations of fish in a phylogenetic context, describing in which respects the learning and traditionality of fish are similar to, and differ from, that observed in other animals. We end by drawing on theoretical insights to suggest processes that may have played important roles in the evolution of the human cultural capability. PMID:21357218
Shahidi, Siamak; Asl, Sara Soleimani; Komaki, Alireza; Hashemi-Firouzi, Nasrin
2018-05-01
Alzheimer's disease (AD) is a neurodegenerative disorder characterized by memory impairment, neuronal death, and synaptic loss in the hippocampus. Long-term potentiation (LTP), a type of synaptic plasticity, occurs during learning and memory. Serotonin receptor type 7 (5-HTR7) activation is suggested as a possible therapeutic target for AD. The aim of the present study was to examine the effects of chronic treatment with the 5-HTR7 agonist, AS19, on cognitive function, memory, hippocampal plasticity, amyloid beta (Aβ) plaque accumulation, and apoptosis in an adult rat model of AD. AD was induced in rats using Aβ (single 1 μg/μL intracerebroventricular (icv) injection during surgery). The following experimental groups were included: control, sham-operated, Aβ + saline (1 μL icv for 30 days), and Aβ + AS19 (1 μg/μL icv for 30 days) groups. The animals were tested for cognition and memory performance using the novel object recognition and passive avoidance tests, respectively. Next, anesthetized rats were placed in a stereotaxic apparatus for electrode implantation, and field potentials were recorded in the hippocampal dentate gyrus. Lastly, brains were removed and Aβ plaques and neuronal apoptosis were evaluated using Congo red staining and TUNEL assay, respectively. Administration of AS19 in the Aβ rats increased the discrimination index of the novel object recognition test. Furthermore, AS19 treatment decreased time spent in the dark compartment during the passive avoidance test. AS19 also enhanced both the population spike (PS) amplitude and the field excitatory postsynaptic potential (fEPSP) slope evoked potentials of the LTP components. Aβ plaques and neuronal apoptosis were decreased in the AS19-treated Aβ rats. These results indicate that chronic treatment with a 5-HTR7 agonist can prevent Aβ-related impairments in cognition and memory performance by alleviating Aβ plaque accumulation and neuronal apoptosis, hence improving neuronal plasticity. AS19 may be useful as a therapeutic agent for AD.
Jurado-Berbel, Patricia; Costa-Miserachs, David; Torras-Garcia, Meritxell; Coll-Andreu, Margalida; Portell-Cortés, Isabel
2010-02-11
The present work examined whether post-training systemic epinephrine (EPI) is able to modulate short-term (3h) and long-term (24 h and 48 h) memory of standard object recognition, as well as long-term (24 h) memory of separate "what" (object identity) and "where" (object location) components of object recognition. Although object recognition training is associated to low arousal levels, all the animals received habituation to the training box in order to further reduce emotional arousal. Post-training EPI improved long-term (24 h and 48 h), but not short-term (3 h), memory in the standard object recognition task, as well as 24 h memory for both object identity and object location. These data indicate that post-training epinephrine: (1) facilitates long-term memory for standard object recognition; (2) exerts separate facilitatory effects on "what" (object identity) and "where" (object location) components of object recognition; and (3) is capable of improving memory for a low arousing task even in highly habituated rats.
ERIC Educational Resources Information Center
Mackey, Ellen; Dodd, Karen
2011-01-01
Following Beacroft & Dodd's (2009) audit of pain recognition and management within learning disability services in Surrey, it was recommended that learning disability services should receive training in pain recognition and management. Two hundred and seventy-five services were invited to participate, of which 197 services in Surrey accepted…
ERIC Educational Resources Information Center
Sheehy, Kieron
2005-01-01
Children with severe learning difficulties who fail to begin word recognition can learn to recognise pictures and symbols relatively easily. However, finding an effective means of using pictures to teach word recognition has proved problematic. This research explores the use of morphing software to support the transition from picture to word…
Experience moderates overlap between object and face recognition, suggesting a common ability
Gauthier, Isabel; McGugin, Rankin W.; Richler, Jennifer J.; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E.
2014-01-01
Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. If a subject has a lot of experience with objects and is found to perform poorly, they also prove to have a low ability with faces. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience. PMID:24993021
Biological complexity and adaptability of simple mammalian olfactory memory systems.
Brennan, P; Keverne, E B
2015-03-01
Chemosensory systems play vital roles in the lives of most mammals, including the detection and identification of predators, as well as sex and reproductive status and the identification of individual conspecifics. All of these capabilities require a process of recognition involving a combination of innate (kairomonal/pheromonal) and learned responses. Across very different phylogenies, the mechanisms for pheromonal and odour learning have much in common. They are frequently associated with plasticity of GABA-ergic feedback at the initial level of processing the chemosensory information, which enhances its pattern separation capability. Association of odourant features into an odour object primarily involves anterior piriform cortex for non-social odours. However, the medial amygdala appears to be involved in both the recognition of social odours and their association with chemosensory information sensed by the vomeronasal system. Unusually not only the sensory neurons themselves, but also the GABA-ergic interneurons in the olfactory bulb are continually being replaced, with implications for the induction and maintenance of learned chemosensory responses. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Schlichter, Carol
1978-01-01
The final installment of a series of articles on the "Mushroom Place" learning center program, which involves creative thinking activities for young, gifted students, describes "Doing It the Hard Way," a performance task which involves the actual construction of objects from a selected set of materials in the absence of the usual project tools.…
Learning during Processing: Word Learning Doesn't Wait for Word Recognition to Finish
ERIC Educational Resources Information Center
Apfelbaum, Keith S.; McMurray, Bob
2017-01-01
Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed…
Hu, T H; Wan, L; Liu, T A; Wang, M W; Chen, T; Wang, Y H
2017-12-01
Deep learning and neural network models have become new research directions and hot topics in the fields of machine learning and artificial intelligence in recent years. Deep learning has achieved breakthroughs in image and speech recognition and has been used extensively in face recognition and information retrieval because of its particular advantages. Bone X-ray images express different variations in black-white-gray gradations, with image features of black-and-white contrast and level differences. Based on these advantages of deep learning in image recognition, we combine it with research on bone age assessment to provide basic data for constructing an automatic forensic system for bone age assessment. This paper reviews the basic concepts and network architectures of deep learning, describes its recent research progress on image recognition in different research fields at home and abroad, and explores its advantages and application prospects in bone age assessment. Copyright© by the Editorial Department of Journal of Forensic Medicine.
Generalization between canonical and non-canonical views in object recognition
Ghose, Tandra; Liu, Zili
2013-01-01
Viewpoint generalization in object recognition is the process that allows recognition of a given 3D object from many different viewpoints despite variations in its 2D projections. We used the canonical view effects as a foundation to empirically test the validity of a major theory in object recognition, the view-approximation model (Poggio & Edelman, 1990). This model predicts that generalization should be better when an object is first seen from a non-canonical view and then a canonical view than when seen in the reversed order. We also manipulated object similarity to study the degree to which this view generalization was constrained by shape details and task instructions (object vs. image recognition). Old-new recognition performance for basic and subordinate level objects was measured in separate blocks. We found that for object recognition, view generalization between canonical and non-canonical views was comparable for basic level objects. For subordinate level objects, recognition performance was more accurate from non-canonical to canonical views than the other way around. When the task was changed from object recognition to image recognition, the pattern of the results reversed. Interestingly, participants responded “old” to “new” images of “old” objects with a substantially higher rate than to “new” objects, despite instructions to the contrary, thereby indicating involuntary view generalization. Our empirical findings are incompatible with the prediction of the view-approximation theory, and argue against the hypothesis that views are stored independently. PMID:23283692
3D abnormal behavior recognition in power generation
NASA Astrophysics Data System (ADS)
Wei, Zhenhua; Li, Xuesen; Su, Jie; Lin, Jie
2011-06-01
Most research on human behavior recognition has so far focused on simple individual behaviors such as waving, crouching, jumping, and bending. This paper focuses on abnormal behaviors involving carried objects in power-generation settings, such as using a mobile communication device in the main control room, taking off a helmet during work, and lying down in a high place. Because the color and shape of these objects are fixed, we adopt edge detection with color tracking to recognize objects carried by workers. We introduce a method that uses the geometric characteristics of the skeleton and its joint angles to represent sequences of three-dimensional human behavior data, and then adopt a semi-join critical-step Hidden Markov Model, weighting the output probabilities of the critical steps to reduce computational complexity. A model is trained for every behavior, and skeleton frames are selected from the 3D behavior samples to form a critical-step set. This set bridges 2D observed behavior and 3D human joint features, so 3D reconstruction is not required during the 2D behavior recognition phase. At the beginning of the recognition process, the best match for every frame of the 2D observed sample is found in the 3D skeleton set; the sequence of 2D observed skeleton frames is then identified as a specific 3D behavior by the behavior classifier. The effectiveness of the proposed algorithm is demonstrated with experiments in a realistic power-generation environment.
Colour agnosia impairs the recognition of natural but not of non-natural scenes.
Nijboer, Tanja C W; Van Der Smagt, Maarten J; Van Zandvoort, Martine J E; De Haan, Edward H F
2007-03-01
Scene recognition can be enhanced by appropriate colour information, yet the level of visual processing at which colour exerts its effects is still unclear. It has been suggested that colour supports low-level sensory processing, while others have claimed that colour information aids semantic categorization and recognition of objects and scenes. We investigated the effect of colour on scene recognition in a case of colour agnosia, M.A.H. In a scene identification task, participants had to name images of natural or non-natural scenes in six different formats. Irrespective of scene format, M.A.H. was much slower on the natural than on the non-natural scenes. As expected, neither M.A.H. nor control participants showed any difference in performance for the non-natural scenes. However, for the natural scenes, appropriate colour facilitated scene recognition in control participants (i.e., shorter reaction times), whereas M.A.H.'s performance did not differ across formats. Our data thus support the hypothesis that the effect of colour occurs at the level of learned associations.
Transfer Learning with Convolutional Neural Networks for SAR Ship Recognition
NASA Astrophysics Data System (ADS)
Zhang, Di; Liu, Jia; Heng, Wang; Ren, Kaijun; Song, Junqiang
2018-03-01
Ship recognition is the backbone of marine surveillance systems. Recent deep learning methods, e.g. Convolutional Neural Networks (CNNs), have shown high performance on optical images. Learning CNNs, however, requires a large number of annotated samples to estimate the numerous model parameters, which hinders their application to Synthetic Aperture Radar (SAR) images, for which annotated training samples are limited. Transfer learning has been a promising technique for applications with limited data. To this end, a novel SAR ship recognition method based on CNNs with transfer learning has been developed. In this work, we first start with a CNN model that has been trained in advance on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database. Next, based on the knowledge gained from this image recognition task, we fine-tune the CNN on a new task: recognizing three types of ships in the OpenSARShip database. The experimental results show that our proposed approach clearly increases the recognition rate compared with applying CNNs alone. In addition, compared to existing methods, the proposed method proves to be very competitive and can learn discriminative features directly from the training data without requiring manual pre-specification or pre-selection.
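The fine-tuning step can be sketched with an off-the-shelf backbone as below; an ImageNet-pretrained ResNet-18 from torchvision is used purely to illustrate the mechanics (the paper pre-trains on MSTAR), and freezing the backbone is an assumed choice.

```python
# Sketch of transfer learning: reuse a pre-trained CNN and train only a new
# final layer for three ship classes. ImageNet-pretrained ResNet-18 stands in
# for the paper's MSTAR-pretrained network.
import torch.nn as nn
import torchvision.models as models

def build_finetune_model(n_classes=3, freeze_backbone=True):
    model = models.resnet18(pretrained=True)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False          # keep pre-trained features fixed
    model.fc = nn.Linear(model.fc.in_features, n_classes)  # new trainable head
    return model
```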
The effects of deep network topology on mortality prediction.
Hao Du; Ghassemi, Mohammad M; Mengling Feng
2016-08-01
Deep learning has achieved remarkable results in the areas of computer vision, speech recognition, natural language processing and, most recently, even playing Go. The application of deep learning to problems in healthcare, however, has gained attention only in recent years, and its ultimate place at the bedside remains a topic of skeptical discussion. While there is a growing academic interest in the application of Machine Learning (ML) techniques to clinical problems, many in the clinical community see little incentive to upgrade from simpler methods, such as logistic regression, to deep learning. Logistic regression, after all, provides odds ratios, p-values and confidence intervals that allow for ease of interpretation, while deep nets are often seen as 'black boxes' that are difficult to understand and, as of yet, have not demonstrated performance levels far exceeding their simpler counterparts. If deep learning is ever to take a place at the bedside, it will require studies that (1) showcase the performance of deep-learning methods relative to other approaches and (2) interpret the relationships between network structure, model performance, features and outcomes. We have chosen these two requirements as the goal of this study. In our investigation, we utilized a publicly available EMR dataset of over 32,000 intensive care unit patients and trained a Deep Belief Network (DBN) to predict patient mortality at discharge. Utilizing an evolutionary algorithm, we demonstrate automated topology selection for DBNs. We demonstrate that with the correct topology selection, DBNs can achieve better prediction performance compared to several benchmarking methods.
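In the spirit of requirement (1), a minimal benchmarking sketch comparing logistic regression with a small feed-forward network on an ICU feature matrix is shown below; it is not the paper's Deep Belief Network or its evolutionary topology search, and the hidden-layer sizes and scoring are assumptions.

```python
# Minimal benchmark sketch: logistic regression vs. a small feed-forward net
# for mortality prediction, scored by ROC AUC. Not the paper's DBN or its
# evolutionary topology selection.
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def compare_mortality_models(X, y):
    # X: patient features (e.g. vitals, labs), y: mortality at discharge (0/1)
    models = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "feedforward_net": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),
    }
    return {name: cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
            for name, m in models.items()}
```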
Software for Partly Automated Recognition of Targets
NASA Technical Reports Server (NTRS)
Opitz, David; Blundell, Stuart; Bain, William; Morris, Matthew; Carlson, Ian; Mangrich, Mark
2003-01-01
The Feature Analyst is a computer program for assisted (partially automated) recognition of targets in images. This program was developed to accelerate the processing of high-resolution satellite image data for incorporation into geographic information systems (GIS). This program creates an advanced user interface that embeds proprietary machine-learning algorithms in commercial image-processing and GIS software. A human analyst provides samples of target features from multiple sets of data, and the software then develops a data-fusion model that automatically extracts the remaining features from selected sets of data. The program thus leverages the natural ability of humans to recognize objects in complex scenes, without requiring the user to codify the human visual recognition process in lengthy software. Two major subprograms are the reactive agent and the thinking agent. The reactive agent strives to quickly learn the user's tendencies while the user is selecting targets and to increase the user's productivity by immediately suggesting the next set of pixels that the user may wish to select. The thinking agent utilizes all available resources, taking as much time as needed, to produce the most accurate autonomous feature-extraction model possible.
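The assisted workflow described here, in which an analyst labels a small sample of target features and the software classifies the rest of the scene, can be sketched as a supervised learner trained on a handful of labeled pixels. The example below uses synthetic pixel features and a random forest as stand-ins for the imagery and the proprietary algorithms; it is illustrative only.

```python
# Minimal sketch of assisted feature extraction: a few analyst-labeled
# examples train a model that classifies the rest of the image.
# Hypothetical: synthetic features stand in for satellite imagery.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Each "pixel" is described by spectral bands plus simple neighbourhood stats.
pixels = rng.normal(size=(10_000, 6))
true_target = pixels[:, 0] + 0.5 * pixels[:, 3] > 1.0   # unknown to the model

# 1) Analyst digitizes a small set of example target / non-target pixels.
labeled_idx = rng.choice(len(pixels), size=200, replace=False)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(pixels[labeled_idx], true_target[labeled_idx])

# 2) Software extracts the remaining features across the full scene.
predicted = model.predict(pixels)
print("agreement with ground truth:", (predicted == true_target).mean())

# 3) In the interactive loop, the analyst corrects errors and the model is
#    refit -- roughly the "reactive agent" behaviour the abstract describes.
```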
McMurray, Bob; Horst, Jessica S; Samuelson, Larissa K
2012-10-01
Classic approaches to word learning emphasize referential ambiguity: in naming situations, a novel word could refer to many possible objects, properties, actions, and so forth. To solve this, researchers have posited constraints and inference strategies, but these assume that determining the referent of a novel word is isomorphic to learning. We present an alternative in which referent selection is an online process and independent of long-term learning. We illustrate this theoretical approach with a dynamic associative model in which referent selection emerges from real-time competition between referents and learning is associative (Hebbian). This model accounts for a range of findings, including the differences in expressive and receptive vocabulary, cross-situational learning under high degrees of ambiguity, accelerating (vocabulary explosion) and decelerating (power law) learning, fast mapping by mutual exclusivity (and differences in bilinguals), improvements in familiar word recognition with development, and correlations between speed of processing and learning. Together, these results suggest that (a) association learning buttressed by dynamic competition can account for much of the literature; (b) familiar word recognition is subserved by the same processes that identify the referents of novel words (fast mapping); (c) online competition may allow children to leverage information available in the task to augment performance despite slow learning; (d) in complex systems, associative learning is highly multifaceted; and (e) learning and referent selection, though logically distinct, can be subtly related. This suggests more sophisticated ways of describing the interaction between situation- and developmental-time processes and points to the need for considering such interactions as a primary determinant of development. PsycINFO Database Record (c) 2012 APA, all rights reserved.
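A minimal way to see how slow associative learning plus fast online competition can yield correct word-object mappings despite referential ambiguity is sketched below. The vocabulary size, learning rate, and winner-take-all rule are illustrative assumptions, not the published model's parameters.

```python
# Minimal sketch of the dynamic associative idea: referent selection as
# real-time competition, long-term learning as slow Hebbian association.
# Hypothetical toy parameters, not the model's actual implementation.
import numpy as np

rng = np.random.default_rng(0)
n_words, n_objects = 5, 5
W = np.full((n_words, n_objects), 0.1)      # weak initial word-object associations

def select_referent(word, present_objects):
    """Online referent selection: noisy winner-take-all competition among
    the objects actually present in the scene."""
    activation = W[word, present_objects] + rng.normal(0, 0.01, len(present_objects))
    return present_objects[int(np.argmax(activation))]

def hebbian_update(word, present_objects, winner, lr=0.02):
    """Slow associative learning: every co-present object gains a little,
    and the competition winner gains a little more."""
    W[word, present_objects] += lr
    W[word, winner] += lr

# Cross-situational exposure: the named object is always present alongside
# ambiguous foils, so its association accumulates fastest over trials.
for trial in range(300):
    word = int(rng.integers(n_words))
    foils = rng.choice(n_objects, size=2, replace=False)
    present = np.unique(np.concatenate(([word], foils)))
    hebbian_update(word, present, select_referent(word, present))

print(np.round(W, 2))   # diagonal (correct word-object pairings) dominates
```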
Pohl, Rüdiger F; Michalkiewicz, Martha; Erdfelder, Edgar; Hilbig, Benjamin E
2017-07-01
According to the recognition-heuristic theory, decision makers solve paired comparisons in which one object is recognized and the other is not by recognition alone, inferring that recognized objects have higher criterion values than unrecognized ones. However, the success, and thus the usefulness, of this heuristic depends on the validity of recognition as a cue, and adaptive decision making, in turn, requires that decision makers are sensitive to this validity. To this end, decision makers could base their evaluation of the recognition validity either on the selected set of objects (the set's recognition validity) or on the underlying domain from which the objects were drawn (the domain's recognition validity). In two experiments, we manipulated the recognition validity both within the selected set of objects and between the domains from which the sets were drawn. The results clearly show that use of the recognition heuristic depends on the domain's recognition validity, not on the set's recognition validity. In other words, participants treat all sets as roughly representative of the underlying domain and adjust their decision strategy adaptively (only) with respect to the more general environment rather than the specific items they are faced with.
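The heuristic itself, and the recognition validity it depends on, can be stated in a few lines: if exactly one object in a pair is recognized, infer that it has the higher criterion value; the validity of that rule over a set is the proportion of such pairs in which the recognized object really does. The toy sketch below illustrates this with made-up data, not the experiments' materials.

```python
# Minimal sketch of the recognition heuristic and set-level recognition
# validity. Hypothetical toy objects: (name, criterion value, recognized?).
from itertools import combinations

objects = [("A", 3.5, True), ("B", 1.2, True), ("C", 0.9, False), ("D", 0.4, False)]

def recognition_heuristic(a, b):
    """If exactly one object is recognized, infer it has the higher value."""
    if a[2] and not b[2]:
        return a
    if b[2] and not a[2]:
        return b
    return None  # heuristic does not apply: fall back on knowledge or guessing

# Set recognition validity: proportion of applicable pairs in which the
# recognized object actually has the higher criterion value.
applicable = [(a, b) for a, b in combinations(objects, 2) if a[2] != b[2]]
correct = sum(recognition_heuristic(a, b)[1] == max(a[1], b[1]) for a, b in applicable)
print("set recognition validity:", correct / len(applicable))
```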
Sungur, A Özge; Jochner, Magdalena C E; Harb, Hani; Kılıç, Ayşe; Garn, Holger; Schwarting, Rainer K W; Wöhr, Markus
2017-08-01
Autism spectrum disorder (ASD) is a class of neurodevelopmental disorders characterized by persistent deficits in social communication/interaction, together with restricted/repetitive patterns of behavior. ASD is among the most heritable neuropsychiatric conditions, and while available evidence points to a complex set of genetic factors, the SHANK gene family has emerged as one of the most promising candidates. Here, we assessed ASD-related phenotypes, with particular emphasis on social behavior and cognition, in Shank1 mouse mutants in comparison to heterozygous and wildtype littermate controls across development in both sexes. While social approach behavior was evident in all experimental conditions and social recognition was only mildly affected by genotype, Shank1-/- null mutant mice were severely impaired in object recognition memory. This effect was particularly prominent in juveniles, was not due to impairments in object discrimination, and was replicated in independent mouse cohorts. At the neurobiological level, object recognition deficits were paralleled by increased brain-derived neurotrophic factor (BDNF) protein expression in the hippocampus of Shank1-/- mice; yet BDNF levels did not differ under baseline conditions. We therefore investigated changes in the epigenetic regulation of hippocampal BDNF expression and detected an enrichment of histone H3 acetylation at the Bdnf promoter 1 in Shank1-/- mice, consistent with increased learning-associated BDNF expression. Together, our findings indicate that Shank1 deletions lead to an aberrant cognitive phenotype characterized by severe impairments in object recognition memory and increased hippocampal BDNF levels, possibly due to epigenetic modifications. This result supports the link between ASD and intellectual disability, and suggests epigenetic regulation as a potential therapeutic target. © 2017 Wiley Periodicals, Inc.
Pérez-García, Georgina; Guzmán-Quevedo, Omar; Da Silva Aragão, Raquel; Bolaños-Jiménez, Francisco
2016-02-17
Numerous epidemiological studies indicate that malnutrition during in utero development and/or childhood induces long-lasting learning disabilities and an enhanced susceptibility to developing psychiatric disorders. However, animal studies aimed at addressing this question have yielded inconsistent results, due to the use of learning tasks involving negative or positive reinforcers that interfere with the enduring changes in emotional reactivity and motivation produced by in utero and neonatal malnutrition. Consequently, the mechanisms underlying the learning deficits associated with malnutrition in early life remain unknown. Here we implemented a behavioural paradigm based on the combination of the novel object recognition and the novel object location tasks to define the impact of early protein restriction on the behavioural, cellular and molecular basis of memory processing. Adult rats born to dams fed a low-protein diet during pregnancy and lactation exhibited impaired encoding and consolidation of memory resulting from impaired pattern separation. This learning deficit was associated with reduced production of newly born hippocampal neurons and downregulation of BDNF gene expression. These data support the existence of a causal relationship between early malnutrition and impaired learning in adulthood and show that decreased adult neurogenesis is associated with the cognitive deficits induced by childhood exposure to poor nutrition. PMID:26882991
Motivation and Learning Strategies in the Use of ICTs among University Students
ERIC Educational Resources Information Center
Valentin, Alberto; Mateos, Pedro M.; Gonzalez-Tablas, Maria M.; Perez, Lourdes; Lopez, Estrella; Garcia, Inmaculada
2013-01-01
Within the European Higher Education Area (EHEA), considerable efforts are being made to promote the incorporation of Information and Communication Technologies (ICTs) in Higher Education (HE), together with an emphasis on the cognitive and motivational components underlying learning. The objectives of this research were to analyze: (a) the…
An Aural Learning Project: Assimilating Jazz Education Methods for Traditional Applied Pedagogy
ERIC Educational Resources Information Center
Gamso, Nancy M.
2011-01-01
The Aural Learning Project (ALP) was developed to incorporate jazz method components into the author's classical practice and her applied woodwind lesson curriculum. The primary objective was to place a more focused pedagogical emphasis on listening and hearing than is traditionally used in the classical applied curriculum. The components of the…