Jurado-Berbel, Patricia; Costa-Miserachs, David; Torras-Garcia, Meritxell; Coll-Andreu, Margalida; Portell-Cortés, Isabel
2010-02-11
The present work examined whether post-training systemic epinephrine (EPI) is able to modulate short-term (3 h) and long-term (24 h and 48 h) memory of standard object recognition, as well as long-term (24 h) memory of the separate "what" (object identity) and "where" (object location) components of object recognition. Although object recognition training is associated with low arousal levels, all the animals were habituated to the training box in order to further reduce emotional arousal. Post-training EPI improved long-term (24 h and 48 h), but not short-term (3 h), memory in the standard object recognition task, as well as 24 h memory for both object identity and object location. These data indicate that post-training epinephrine: (1) facilitates long-term memory for standard object recognition; (2) exerts separate facilitatory effects on the "what" (object identity) and "where" (object location) components of object recognition; and (3) is capable of improving memory for a low-arousal task even in highly habituated rats.
A new selective developmental deficit: Impaired object recognition with normal face recognition.
Germine, Laura; Cashdollar, Nathan; Düzel, Emrah; Duchaine, Bradley
2011-05-01
Studies of developmental deficits in face recognition, or developmental prosopagnosia, have shown that individuals who have not suffered brain damage can show face recognition impairments coupled with normal object recognition (Duchaine and Nakayama, 2005; Duchaine et al., 2006; Nunn et al., 2001). However, no developmental cases with the opposite dissociation - normal face recognition with impaired object recognition - have been reported. The existence of a case of non-face developmental visual agnosia would indicate that the development of normal face recognition mechanisms does not rely on the development of normal object recognition mechanisms. To see whether a developmental variant of non-face visual object agnosia exists, we conducted a series of web-based object and face recognition tests to screen for individuals showing object recognition memory impairments but not face recognition impairments. Through this screening process, we identified AW, an otherwise normal 19-year-old female, who was then tested in the lab on face and object recognition tests. AW's performance was impaired in within-class visual recognition memory across six different visual categories (guns, horses, scenes, tools, doors, and cars). In contrast, she scored normally on seven tests of face recognition, tests of memory for two other object categories (houses and glasses), and tests of recall memory for visual shapes. Testing confirmed that her impairment was not related to a general deficit in lower-level perception, object perception, basic-level recognition, or memory. AW's results provide the first neuropsychological evidence that recognition memory for non-face visual object categories can be selectively impaired in individuals without brain damage or other memory impairment. 
These results indicate that the development of recognition memory for faces does not depend on intact object recognition memory and provide further evidence for category-specific dissociations in visual recognition.
Generalization between canonical and non-canonical views in object recognition
Ghose, Tandra; Liu, Zili
2013-01-01
Viewpoint generalization in object recognition is the process that allows recognition of a given 3D object from many different viewpoints despite variations in its 2D projections. We used the canonical view effects as a foundation to empirically test the validity of a major theory in object recognition, the view-approximation model (Poggio & Edelman, 1990). This model predicts that generalization should be better when an object is first seen from a non-canonical view and then a canonical view than when seen in the reversed order. We also manipulated object similarity to study the degree to which this view generalization was constrained by shape details and task instructions (object vs. image recognition). Old-new recognition performance for basic and subordinate level objects was measured in separate blocks. We found that for object recognition, view generalization between canonical and non-canonical views was comparable for basic level objects. For subordinate level objects, recognition performance was more accurate from non-canonical to canonical views than the other way around. When the task was changed from object recognition to image recognition, the pattern of the results reversed. Interestingly, participants responded “old” to “new” images of “old” objects with a substantially higher rate than to “new” objects, despite instructions to the contrary, thereby indicating involuntary view generalization. Our empirical findings are incompatible with the prediction of the view-approximation theory, and argue against the hypothesis that views are stored independently. PMID:23283692
Peterson, M A; de Gelder, B; Rapcsak, S Z; Gerhardstein, P C; Bachoud-Lévi, A
2000-01-01
In three experiments we investigated whether conscious object recognition is necessary or sufficient for effects of object memories on figure assignment. In experiment 1, we examined a brain-damaged participant, AD, whose conscious object recognition is severely impaired. AD's responses about figure assignment do reveal effects from memories of object structure, indicating that conscious object recognition is not necessary for these effects, and identifying the figure-ground test employed here as a new implicit test of access to memories of object structure. In experiments 2 and 3, we tested a second brain-damaged participant, WG, for whom conscious object recognition was relatively spared. Nevertheless, effects from memories of object structure on figure assignment were not evident in WG's responses about figure assignment in experiment 2, indicating that conscious object recognition is not sufficient for effects of object memories on figure assignment. WG's performance sheds light on AD's performance, and has implications for the theoretical understanding of object memory effects on figure assignment.
Experience moderates overlap between object and face recognition, suggesting a common ability
Gauthier, Isabel; McGugin, Rankin W.; Richler, Jennifer J.; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E.
2014-01-01
Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: face recognition performance is increasingly similar to object recognition performance with increasing object experience. A subject with extensive object experience who nonetheless performs poorly with objects will also tend to show low face recognition ability. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience. PMID:24993021
The role of color information on object recognition: a review and meta-analysis.
Bramão, Inês; Reis, Alexandra; Petersson, Karl Magnus; Faísca, Luís
2011-09-01
In this study, we systematically review the scientific literature on the effect of color on object recognition. Thirty-five independent experiments, comprising 1535 participants, were included in a meta-analysis. We found a moderate effect of color on object recognition (d=0.28). Specific effects of moderator variables were analyzed, and we found that color diagnosticity is the factor with the greatest moderator effect on the influence of color in object recognition; studies using color-diagnostic objects showed a significant color effect (d=0.43), whereas a marginal color effect was found in studies that used non-color-diagnostic objects (d=0.18). The present study did not permit the drawing of specific conclusions about the moderator effect of the object recognition task; while the meta-analytic review showed that color information improves object recognition mainly in studies using naming tasks (d=0.36), the literature review revealed a large body of evidence showing positive effects of color information on object recognition in studies using a large variety of visual recognition tasks. We also found that color is important for the ability to recognize artifacts and natural objects, to recognize objects presented as types (line drawings) or as tokens (photographs), and to recognize objects that are presented without surface details, such as texture or shadow. Taken together, the results of the meta-analysis strongly support the contention that color plays a role in object recognition. This suggests that the role of color should be taken into account in models of visual object recognition.
Analysis and Recognition of Curve Type as The Basis of Object Recognition in Image
NASA Astrophysics Data System (ADS)
Nugraha, Nurma; Madenda, Sarifuddin; Indarti, Dina; Dewi Agushinta, R.; Ernastuti
2016-06-01
When analyzed, an object in an image exhibits characteristics that distinguish it from the other objects in the image. The characteristics used for object recognition in an image can be color, shape, pattern, texture, and spatial information, which can represent objects in the digital image. A recently developed method for image feature extraction characterizes objects through curve analysis (simple curves) and a chain-code-based feature search. This study develops an algorithm for the analysis and recognition of curve types as the basis for object recognition in images, proposing the addition of complex-curve characteristics, with a maximum of four branches, for the object recognition process. A complex curve is defined as a curve that has a point of intersection. Using several edge-detected images, the algorithm was able to analyze and recognize complex curve shapes well.
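The chain-code representation the abstract refers to is, in standard practice, the Freeman 8-direction chain code, and the abstract's own definition of a complex curve (a curve with a point of intersection) amounts to a contour that revisits a pixel. A minimal sketch under those assumptions (the paper's actual algorithm is not published as code; these helper names are hypothetical):

```python
# Freeman 8-direction chain code: each step between successive contour
# pixels is encoded as a direction index 0-7 (E, NE, N, NW, W, SW, S, SE).
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def chain_code(contour):
    """Encode a list of 8-connected pixel coordinates as direction codes."""
    return [DIRS.index((x1 - x0, y1 - y0))
            for (x0, y0), (x1, y1) in zip(contour, contour[1:])]

def is_complex_curve(contour):
    """Per the abstract's definition: a complex curve has a point of
    intersection, i.e. the traced contour revisits a pixel."""
    return len(set(contour)) < len(contour)

simple = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(chain_code(simple))        # [0, 2, 4]
print(is_complex_curve(simple))  # False: no pixel is revisited
```

A branch-counting step (up to the paper's maximum of four branches) would then operate on pixels that appear more than once in the traced contour.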
The development of newborn object recognition in fast and slow visual worlds
Wood, Justin N.; Wood, Samantha M. W.
2016-01-01
Object recognition is central to perception and cognition. Yet relatively little is known about the environmental factors that cause invariant object recognition to emerge in the newborn brain. Is this ability a hardwired property of vision? Or does the development of invariant object recognition require experience with a particular kind of visual environment? Here, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) require visual experience with slowly changing objects to develop invariant object recognition abilities. When newborn chicks were raised with a slowly rotating virtual object, the chicks built invariant object representations that generalized across novel viewpoints and rotation speeds. In contrast, when newborn chicks were raised with a virtual object that rotated more quickly, the chicks built viewpoint-specific object representations that failed to generalize to novel viewpoints and rotation speeds. Moreover, there was a direct relationship between the speed of the object and the amount of invariance in the chick's object representation. Thus, visual experience with slowly changing objects plays a critical role in the development of invariant object recognition. These results indicate that invariant object recognition is not a hardwired property of vision, but is learned rapidly when newborns encounter a slowly changing visual world. PMID:27097925
Method and System for Object Recognition Search
NASA Technical Reports Server (NTRS)
Duong, Tuan A. (Inventor); Duong, Vu A. (Inventor); Stubberud, Allen R. (Inventor)
2012-01-01
A method for object recognition using shape and color features of the object to be recognized. An adaptive architecture is used to recognize and adapt the shape and color features for moving objects to enable object recognition.
It Takes Two–Skilled Recognition of Objects Engages Lateral Areas in Both Hemispheres
Bilalić, Merim; Kiesel, Andrea; Pohl, Carsten; Erb, Michael; Grodd, Wolfgang
2011-01-01
Our object recognition abilities, a direct product of our experience with objects, are fine-tuned to perfection. Left temporal and lateral areas along the dorsal, action-related stream, as well as left infero-temporal areas along the ventral, object-related stream, are engaged in object recognition. Here we show that expertise modulates the activity of dorsal areas in the recognition of man-made objects with clearly specified functions. Expert chess players were faster than chess novices in identifying chess objects and their functional relations. The experts' advantage was domain-specific, as there were no differences between groups in a control task featuring geometrical shapes. The pattern of eye movements supported the notion that experts' extensive knowledge about domain objects and their functions enabled superior recognition even when experts were not directly fixating the objects of interest. Functional magnetic resonance imaging (fMRI) related the areas along the dorsal stream exclusively to chess-specific object recognition. Besides the commonly involved left temporal and parietal lateral brain areas, we found that only in experts were homologous areas in the right hemisphere also engaged in chess-specific object recognition. Based on these results, we discuss whether skilled object recognition involves not only a more efficient version of the processes found in non-skilled recognition, but also qualitatively different cognitive processes that engage additional brain areas. PMID:21283683
Eye movements during object recognition in visual agnosia.
Charles Leek, E; Patterson, Candy; Paul, Matthew A; Rafal, Robert; Cristino, Filipe
2012-07-01
This paper reports the first detailed study of eye movement patterns during single object recognition in visual agnosia. Eye movements were recorded in a patient with an integrative agnosic deficit during two recognition tasks: common object naming and novel object recognition memory. The patient showed normal directional biases in saccades and fixation dwell times in both tasks and was as likely as controls to fixate within the object's bounding contour, regardless of recognition accuracy. In contrast, following initial saccades of similar amplitude to those of controls, the patient showed a bias for short saccades. In object naming, but not in recognition memory, the similarity of the spatial distributions of patient and control fixations was modulated by recognition accuracy. The study provides new evidence about how eye movements can be used to elucidate the functional impairments underlying object recognition deficits. We argue that the results reflect a breakdown in the normal functional processes involved in the integration of shape information across object structure during the visual perception of shape.
Yamashita, Wakayo; Wang, Gang; Tanaka, Keiji
2010-01-01
One usually fails to recognize an unfamiliar object across changes in viewing angle when it has to be discriminated from similar distractor objects. Previous work has demonstrated that after long-term experience in discriminating among a set of objects seen from the same viewing angle, immediate recognition of the objects across changes in viewing angle of 30-60 degrees becomes possible. The capability for view-invariant object recognition should develop during the within-viewing-angle discrimination, which includes two kinds of experience: seeing individual views and discriminating among the objects. The aim of the present study was to determine the relative contribution of each factor to the development of view-invariant object recognition capability. Monkeys were first extensively trained in a task that required view-invariant object recognition (Object task) with several sets of objects. The animals were then exposed to a new set of objects over 26 days in one of two preparatory tasks: one in which each object view was seen individually, and a second that required discrimination among the objects at each of four viewing angles. After the preparatory period, we measured the monkeys' ability to recognize the objects across changes in viewing angle, by introducing the object set to the Object task. Results indicated significant view-invariant recognition after the second but not the first preparatory task. These results suggest that discrimination of objects from distractors at each of several viewing angles is required for the development of view-invariant recognition of the objects when the distractors are similar to the objects.
Higher-Order Neural Networks Applied to 2D and 3D Object Recognition
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly; Reid, Max B.
1994-01-01
A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.
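The core idea behind a HONN's built-in invariance is that higher-order units combine tuples of input pixels, and weights are shared across all tuples with the same geometric relationship. For triples of pixels, the interior angles of the triangle they form are unchanged by translation, scaling, and in-plane rotation, so a feature built from those angles is invariant by construction rather than by training. The following toy sketch illustrates only that invariant triple encoding, not the full trained network of Spirkovska and Reid; function names are hypothetical:

```python
import math
from itertools import combinations

def triangle_angles(p, q, r):
    """Sorted, rounded interior angles (degrees) of the triangle p, q, r."""
    def angle_at(a, b, c):
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - a[0], c[1] - a[1])
        cos = ((v1[0] * v2[0] + v1[1] * v2[1]) /
               (math.hypot(*v1) * math.hypot(*v2)))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos))))
    return tuple(sorted(round(t) for t in
                        (angle_at(p, q, r), angle_at(q, p, r), angle_at(r, p, q))))

def honn_features(pixels):
    """Histogram over the angle triples of all non-degenerate pixel triples.

    Interior angles are preserved by translation, scaling, and in-plane
    rotation, so the histogram is invariant to those transformations."""
    feats = {}
    for p, q, r in combinations(pixels, 3):
        # skip collinear triples (zero-area "triangles")
        if (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0]):
            continue
        key = triangle_angles(p, q, r)
        feats[key] = feats.get(key, 0) + 1
    return feats
```

A translated or 90-degree-rotated copy of a pixel pattern yields an identical feature histogram, which is why, as the abstract notes, such a network needs only one training view per class. The drawback the abstract mentions is also visible here: the number of triples grows cubically with the number of active pixels.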
Interactive object recognition assistance: an approach to recognition starting from target objects
NASA Astrophysics Data System (ADS)
Geisler, Juergen; Littfass, Michael
1999-07-01
Recognition of target objects in remotely sensed imagery requires detailed knowledge about the target object domain as well as about the mapping properties of the sensing system. The art of object recognition is to combine both worlds appropriately and to provide models of target appearance with respect to sensor characteristics. Common approaches to supporting interactive object recognition are either driven from the sensor point of view, addressing the problem of displaying images in a manner adequate to the sensing system, or they focus on target objects and provide exhaustive encyclopedic information about that domain. Our paper discusses an approach to assisting interactive object recognition based on knowledge about target objects, taking into account the significance of object features with respect to characteristics of the sensed imagery, e.g. spatial and spectral resolution. An "interactive recognition assistant" takes the image analyst through the interpretation process by indicating, step by step, the most significant features of the objects in the current set of candidates. The significance of object features is expressed by pregenerated trees of significance and by the dynamic computation of decision relevance for every feature at each step of the recognition process. In the context of this approach we discuss the question of modeling and storing the multisensorial/multispectral appearances of target objects and object classes, as well as the problem of an adequate dynamic human-machine interface that takes into account the various mental models of human image interpretation.
DORSAL HIPPOCAMPAL PROGESTERONE INFUSIONS ENHANCE OBJECT RECOGNITION IN YOUNG FEMALE MICE
Orr, Patrick T.; Lewis, Michael C.; Frick, Karyn M.
2009-01-01
The effects of progesterone on memory are not nearly as well studied as the effects of estrogens. Although progesterone can reportedly enhance spatial and/or object recognition in female rodents when given immediately after training, previous studies have injected progesterone systemically, and therefore, the brain regions mediating this enhancement are not clear. As such, this study was designed to determine the role of the dorsal hippocampus in mediating the beneficial effect of progesterone on object recognition. Young ovariectomized C57BL/6 mice were trained in a hippocampal-dependent object recognition task utilizing two identical objects, and then immediately or 2 hrs afterwards, received bilateral dorsal hippocampal infusions of vehicle or 0.01, 0.1, or 1.0 μg/μl water-soluble progesterone. Forty-eight hours later, object recognition memory was tested using a previously explored object and a novel object. Relative to the vehicle group, memory for the familiar object was enhanced in all groups receiving immediate infusions of progesterone. Progesterone infusion delayed 2 hrs after training did not affect object recognition. These data suggest that the dorsal hippocampus may play a critical role in progesterone-induced enhancement of object recognition. PMID:19477194
O'Neil, Edward B; Watson, Hilary C; Dhillon, Sonya; Lobaugh, Nancy J; Lee, Andy C H
2015-09-01
Recent work has demonstrated that the perirhinal cortex (PRC) supports conjunctive object representations that aid object recognition memory following visual object interference. It is unclear, however, how these representations interact with other brain regions implicated in mnemonic retrieval and how congruent and incongruent interference influences the processing of targets and foils during object recognition. To address this, multivariate partial least squares was applied to fMRI data acquired during an interference match-to-sample task, in which participants made object or scene recognition judgments after object or scene interference. This revealed a pattern of activity sensitive to object recognition following congruent (i.e., object) interference that included PRC, prefrontal, and parietal regions. Moreover, functional connectivity analysis revealed a common pattern of PRC connectivity across interference and recognition conditions. Examination of eye movements during the same task in a separate study revealed that participants gazed more at targets than foils during correct object recognition decisions, regardless of interference congruency. By contrast, participants viewed foils more than targets for incorrect object memory judgments, but only after congruent interference. Our findings suggest that congruent interference makes object foils appear familiar and that a network of regions, including PRC, is recruited to overcome the effects of interference.
Graf, M; Kaping, D; Bülthoff, H H
2005-03-01
How do observers recognize objects after spatial transformations? Recent neurocomputational models have proposed that object recognition is based on coordinate transformations that align memory and stimulus representations. If the recognition of a misoriented object is achieved by adjusting a coordinate system (or reference frame), then recognition should be facilitated when the object is preceded by a different object in the same orientation. In the two experiments reported here, two objects were presented in brief masked displays that were in close temporal contiguity; the objects were in either congruent or incongruent picture-plane orientations. Results showed that naming accuracy was higher for congruent than for incongruent orientations. The congruency effect was independent of superordinate category membership (Experiment 1) and was found for objects with different main axes of elongation (Experiment 2). The results indicate congruency effects for common familiar objects even when they have dissimilar shapes. These findings are compatible with models in which object recognition is achieved by an adjustment of a perceptual coordinate system.
Niimi, Ryosuke; Yokosawa, Kazuhiko
2009-01-01
Visual recognition of three-dimensional (3-D) objects is relatively impaired for some particular views, called accidental views. For most familiar objects, the front and top views are considered to be accidental views. Previous studies have shown that foreshortening of the axes of elongation of objects in these views impairs recognition, but the influence of other possible factors is largely unknown. Using familiar objects without a salient axis of elongation, we found that a foreshortened symmetry plane of the object and low familiarity of the viewpoint accounted for the relatively worse recognition for front views and top views, independently of the effect of a foreshortened axis of elongation. We found no evidence that foreshortened front-back axes impaired recognition in front views. These results suggest that the viewpoint dependence of familiar object recognition is not a unitary phenomenon. The possible role of symmetry (either 2-D or 3-D) in familiar object recognition is also discussed.
Automatic anatomy recognition via multiobject oriented active shape models.
Chen, Xinjian; Udupa, Jayaram K; Alavi, Abass; Torigian, Drew A
2010-12-01
This paper studies the feasibility of developing an automatic anatomy recognition (AAR) system in clinical radiology and demonstrates its operation on clinical 2D images. The anatomy recognition method described here consists of two main components: (a) a multiobject generalization of the oriented active shape model (OASM) and (b) object recognition strategies. The OASM algorithm is generalized to multiple objects by including a model for each object and assigning a cost structure specific to each object, in the spirit of live wire. The delineation of multiobject boundaries is done in MOASM via a three-level dynamic programming algorithm: the first level operates at the pixel level to find optimal oriented boundary segments between successive landmarks, the second at the landmark level to find optimal locations for the landmarks, and the third at the object level to find the optimal arrangement of object boundaries over all objects. The object recognition strategy attempts to find the pose vector (consisting of translation, rotation, and scale components) for the multiobject model that yields the smallest total boundary cost over all objects. The delineation and recognition accuracies were evaluated separately using routine clinical chest CT, abdominal CT, and foot MRI data sets. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF and FPVF). The recognition accuracy was assessed (1) in terms of the size of the space of pose vectors for the model assembly that yielded high delineation accuracy, (2) as a function of the number, distribution, and size of objects in the model, (3) in terms of the interdependence between delineation and recognition, and (4) in terms of the closeness of the optimum recognition result to the global optimum. When multiple objects are included in the model, the delineation accuracy in terms of TPVF can be improved to 97%-98% with a low FPVF of 0.1%-0.2%.
Typically, a recognition accuracy of ≥90% yielded a TPVF ≥95% and an FPVF ≤0.5%. Over the three data sets and over all tested objects, in 97% of the cases the optimal solutions found by the proposed method constituted the true global optimum. The experimental results showed the feasibility and efficacy of the proposed automatic anatomy recognition system. Increasing the number of objects in the model can significantly improve both recognition and delineation accuracy. A more spread-out arrangement of objects in the model can lead to improved recognition and delineation accuracy. Including larger objects in the model also improved recognition and delineation. The proposed method almost always finds globally optimal solutions.
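The recognition step described above searches pose space for the pose that minimizes total boundary cost. A toy illustration of that idea, restricted to the translation component of the pose vector (the paper's pose also includes rotation and scale, and its cost structure is far richer than a per-pixel cost map); the function and parameter names here are hypothetical:

```python
import numpy as np

def recognize_translation(cost_map, model_points, search_range):
    """Exhaustively search candidate translations for the one that
    minimizes the total boundary cost of the model points.

    cost_map     : 2D array of per-pixel boundary costs (low = boundary-like)
    model_points : (N, 2) integer array of model boundary pixel coordinates
    search_range : translations dx, dy in [-search_range, search_range]
    """
    h, w = cost_map.shape
    best_cost, best_shift = float("inf"), None
    for dx in range(-search_range, search_range + 1):
        for dy in range(-search_range, search_range + 1):
            pts = model_points + np.array([dx, dy])
            # reject poses that place any model point outside the image
            if (pts < 0).any() or (pts[:, 0] >= h).any() or (pts[:, 1] >= w).any():
                continue
            total = cost_map[pts[:, 0], pts[:, 1]].sum()
            if total < best_cost:
                best_cost, best_shift = total, (dx, dy)
    return best_shift, best_cost
```

In the full method this search would run over the complete pose vector and per-object cost structures, with the DP-based delineation evaluating each candidate; the sketch only conveys why the optimum pose is the one with the smallest summed boundary cost.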
The role of perceptual load in object recognition.
Lavie, Nilli; Lin, Zhicheng; Zokaei, Nahid; Thoma, Volker
2009-10-01
Predictions from perceptual load theory (Lavie, 1995, 2005) regarding object recognition across the same or different viewpoints were tested. Results showed that high perceptual load reduces distracter recognition levels despite always presenting distracter objects from the same view. They also showed that the levels of distracter recognition were unaffected by a change in the distracter object view under conditions of low perceptual load. These results were found both with repetition priming measures of distracter recognition and with performance on a surprise recognition memory test. The results support load theory proposals that distracter recognition critically depends on the level of perceptual load. The implications for the role of attention in object recognition theories are discussed.
Holdstock, J S; Mayes, A R; Roberts, N; Cezayirli, E; Isaac, C L; O'Reilly, R C; Norman, K A
2002-01-01
The claim that recognition memory is spared relative to recall after focal hippocampal damage has been disputed in the literature. We examined this claim by investigating object and object-location recall and recognition memory in a patient, YR, who has adult-onset selective hippocampal damage. Our aim was to identify the conditions under which recognition was spared relative to recall in this patient. She showed unimpaired forced-choice object recognition but clearly impaired recall, even when her control subjects found the object recognition task to be numerically harder than the object recall task. However, on two other recognition tests, YR's performance was not relatively spared. First, she was clearly impaired at an equivalently difficult yes/no object recognition task, but only when targets and foils were very similar. Second, YR was clearly impaired at forced-choice recognition of object-location associations. This impairment was also unrelated to difficulty because this task was no more difficult than the forced-choice object recognition task for control subjects. The clear impairment of yes/no, but not of forced-choice, object recognition after focal hippocampal damage, when targets and foils are very similar, is predicted by the neural network-based Complementary Learning Systems model of recognition. This model postulates that recognition is mediated by hippocampally dependent recollection and cortically dependent familiarity; thus hippocampal damage should not impair item familiarity. The model postulates that familiarity is ineffective when very similar targets and foils are shown one at a time and subjects have to identify which items are old (yes/no recognition). In contrast, familiarity is effective in discriminating which of similar targets and foils, seen together, is old (forced-choice recognition). Independent evidence from the remember/know procedure also indicates that YR's familiarity is normal. 
The Complementary Learning Systems model can also accommodate the clear impairment of forced-choice object-location recognition memory if it incorporates the view that the most complete convergence of spatial and object information, represented in different cortical regions, occurs in the hippocampus.
Fast neuromimetic object recognition using FPGA outperforms GPU implementations.
Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph
2013-08-01
Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial field-programmable gate array (FPGA), specifically the Xilinx Virtex 6 ML605 evaluation board with XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with a less than 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks.
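The alternating S/C structure at the heart of HMAX (template matching followed by local max pooling) can be sketched in plain NumPy. This is a conceptual illustration of the model's layer structure only, not the paper's FPGA implementation; real HMAX uses Gabor filters and Gaussian tuning at the S stages, and the templates and sizes below are arbitrary.

```python
import numpy as np

def s_layer(image, templates):
    # S ("simple") stage: correlate each template with every image patch.
    # HMAX proper uses Gabor filters (S1) and stored prototypes (S2) here.
    th, tw = templates.shape[1:]
    h, w = image.shape
    out = np.zeros((len(templates), h - th + 1, w - tw + 1))
    for k, t in enumerate(templates):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(image[i:i + th, j:j + tw] * t)
    return out

def c_layer(s_maps, pool=2):
    # C ("complex") stage: max over local neighborhoods, trading spatial
    # precision for position invariance.
    k, h, w = s_maps.shape
    out = np.zeros((k, h // pool, w // pool))
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            out[:, i, j] = s_maps[:, i * pool:(i + 1) * pool,
                                  j * pool:(j + 1) * pool].max(axis=(1, 2))
    return out

# Two toy 3x3 diagonal-edge templates applied to a random 8x8 "image".
rng = np.random.default_rng(0)
image = rng.random((8, 8))
templates = np.stack([np.eye(3), np.fliplr(np.eye(3))])
features = c_layer(s_layer(image, templates))
print(features.shape)  # (2, 3, 3)
```

Stacking further S/C pairs on top of such features yields the deep, increasingly invariant representation whose cost motivates the FPGA port described in the abstract.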
Recognition-induced forgetting is not due to category-based set size.
Maxcey, Ashleigh M
2016-01-01
What are the consequences of accessing a visual long-term memory representation? Previous work has shown that accessing a long-term memory representation via retrieval improves memory for the targeted item and hurts memory for related items, a phenomenon called retrieval-induced forgetting. Recently we found a similar forgetting phenomenon with recognition of visual objects. Recognition-induced forgetting occurs when practice recognizing an object during a two-alternative forced-choice task, from a group of objects learned at the same time, leads to worse memory for objects from that group that were not practiced. An alternative explanation of this effect is that category-based set size is inducing forgetting, not recognition practice as claimed by some researchers. This alternative explanation is possible because during recognition practice subjects make old-new judgments in a two-alternative forced-choice task, and are thus exposed to more objects from practiced categories, potentially inducing forgetting due to set-size. Herein I pitted the category-based set size hypothesis against the recognition-induced forgetting hypothesis. To this end, I parametrically manipulated the amount of practice objects received in the recognition-induced forgetting paradigm. If forgetting is due to category-based set size, then the magnitude of forgetting of related objects will increase as the number of practice trials increases. If forgetting is recognition induced, the set size of exemplars from any given category should not be predictive of memory for practiced objects. Consistent with this latter hypothesis, additional practice systematically improved memory for practiced objects, but did not systematically affect forgetting of related objects. These results firmly establish that recognition practice induces forgetting of related memories. 
Future directions and important real-world applications of using recognition to access our visual memories of previously encountered objects are discussed.
Mechanisms of object recognition: what we have learned from pigeons
Soto, Fabian A.; Wasserman, Edward A.
2014-01-01
Behavioral studies of object recognition in pigeons have been conducted for 50 years, yielding a large body of data. Recent work has been directed toward synthesizing this evidence and understanding the visual, associative, and cognitive mechanisms that are involved. The outcome is that pigeons are likely to be the non-primate species for which the computational mechanisms of object recognition are best understood. Here, we review this research and suggest that a core set of mechanisms for object recognition might be present in all vertebrates, including pigeons and people, making pigeons an excellent candidate model to study the neural mechanisms of object recognition. Behavioral and computational evidence suggests that error-driven learning participates in object category learning by pigeons and people, and recent neuroscientific research suggests that the basal ganglia, which are homologous in these species, may implement error-driven learning of stimulus-response associations. Furthermore, learning of abstract category representations can be observed in pigeons and other vertebrates. Finally, there is evidence that feedforward visual processing, a central mechanism in models of object recognition in the primate ventral stream, plays a role in object recognition by pigeons. We also highlight differences between pigeons and people in object recognition abilities, and propose candidate adaptive specializations which may explain them, such as holistic face processing and rule-based category learning in primates. From a modern comparative perspective, such specializations are to be expected regardless of the model species under study. The fact that we have a good idea of which aspects of object recognition differ in people and pigeons should be seen as an advantage over other animal models. From this perspective, we suggest that there is much to learn about human object recognition from studying the “simple” brains of pigeons. PMID:25352784
Pohl, Rüdiger F; Michalkiewicz, Martha; Erdfelder, Edgar; Hilbig, Benjamin E
2017-07-01
According to the recognition-heuristic theory, decision makers solve paired comparisons in which one object is recognized and the other is not by recognition alone, inferring that recognized objects have higher criterion values than unrecognized ones. However, the success, and thus the usefulness, of this heuristic depends on the validity of recognition as a cue, and adaptive decision making, in turn, requires that decision makers be sensitive to it. To this end, decision makers could base their evaluation of the recognition validity either on the selected set of objects (the set's recognition validity) or on the underlying domain from which the objects were drawn (the domain's recognition validity). In two experiments, we manipulated the recognition validity both in the selected set of objects and between the domains from which the sets were drawn. The results clearly show that use of the recognition heuristic depends on the domain's recognition validity, not on the set's recognition validity. In other words, participants treat all sets as roughly representative of the underlying domain and adjust their decision strategy adaptively (only) with respect to the more general environment rather than the specific items they are faced with.
Purpura, Giulia; Cioni, Giovanni; Tinelli, Francesca
2018-07-01
Object recognition is a long and complex adaptive process and its full maturation requires combination of many different sensory experiences as well as cognitive abilities to manipulate previous experiences in order to develop new percepts and subsequently to learn from the environment. It is well recognized that the transfer of visual and haptic information facilitates object recognition in adults, but less is known about development of this ability. In this study, we explored the developmental course of object recognition capacity in children using unimodal visual information, unimodal haptic information, and visuo-haptic information transfer in children from 4 years to 10 years and 11 months of age. Participants were tested through a clinical protocol, involving visual exploration of black-and-white photographs of common objects, haptic exploration of real objects, and visuo-haptic transfer of these two types of information. Results show an age-dependent development of object recognition abilities for visual, haptic, and visuo-haptic modalities. A significant effect of time on development of unimodal and crossmodal recognition skills was found. Moreover, our data suggest that multisensory processes for common object recognition are active at 4 years of age. They facilitate recognition of common objects, and, although not fully mature, are significant in adaptive behavior from the first years of age. The study of typical development of visuo-haptic processes in childhood is a starting point for future studies regarding object recognition in impaired populations.
Measuring the Speed of Newborn Object Recognition in Controlled Visual Worlds
ERIC Educational Resources Information Center
Wood, Justin N.; Wood, Samantha M. W.
2017-01-01
How long does it take for a newborn to recognize an object? Adults can recognize objects rapidly, but measuring object recognition speed in newborns has not previously been possible. Here we introduce an automated controlled-rearing method for measuring the speed of newborn object recognition in controlled visual worlds. We raised newborn chicks…
Poth, Christian H; Schneider, Werner X
2016-09-01
Rapid saccadic eye movements bring the foveal region of the eye's retina onto objects for high-acuity vision. Saccades change the location and resolution of objects' retinal images. To perceive objects as visually stable across saccades, correspondence between the objects before and after the saccade must be established. We have previously shown that breaking object correspondence across the saccade causes a decrement in object recognition (Poth, Herwig, & Schneider, 2015). Color and luminance can establish object correspondence, but it is unknown how these surface features contribute to transsaccadic visual processing. Here, we investigated whether changing color and luminance together, or color alone, across saccades impairs postsaccadic object recognition. Participants made saccades to peripheral objects, which either maintained or changed their surface features across the saccade. After the saccade, participants briefly viewed a letter within the saccade target object (terminated by a pattern mask). Postsaccadic object recognition was assessed as participants' accuracy in reporting the letter. Experiment A used green and red with different luminances as surface features; Experiment B used blue and yellow with approximately equal luminance. Changing the surface features across the saccade deteriorated postsaccadic object recognition in both experiments. These findings reveal a link between object recognition and object correspondence relying on the surface features color and luminance, which is currently not addressed in theories of transsaccadic perception. We interpret the findings within a recent theory ascribing this link to visual attention (Schneider, 2013).
Cognitive object recognition system (CORS)
NASA Astrophysics Data System (ADS)
Raju, Chaitanya; Varadarajan, Karthik Mahesh; Krishnamurthi, Niyant; Xu, Shuli; Biederman, Irving; Kelley, Troy
2010-04-01
We have developed a framework, Cognitive Object Recognition System (CORS), inspired by current neurocomputational models and psychophysical research, in which multiple recognition algorithms (shape-based geometric primitives, 'geons,' and non-geometric feature-based algorithms) are integrated to provide a comprehensive solution to object recognition and landmarking. Objects are defined as a combination of geons, corresponding to their simple parts, and the relations among the parts. However, those objects that are not easily decomposable into geons, such as bushes and trees, are recognized by CORS using "feature-based" algorithms. The unique interaction between these algorithms is a novel approach that combines the effectiveness of both algorithms and takes us closer to a generalized approach to object recognition. CORS allows recognition of objects through a larger range of poses using geometric primitives and performs well under heavy occlusion (about 35% of the object surface is sufficient). Furthermore, the geon composition of an object allows image understanding and reasoning even with novel objects. With reliable landmarking capability, the system improves vision-based robot navigation in GPS-denied environments. Feasibility of the CORS system was demonstrated with real stereo images captured from a Pioneer robot. The system can currently identify doors, door handles, staircases, trashcans and other relevant landmarks in the indoor environment.
Mitchnick, Krista A; Wideman, Cassidy E; Huff, Andrew E; Palmer, Daniel; McNaughton, Bruce L; Winters, Boyer D
2018-05-15
The capacity to recognize objects from different viewpoints or angles, referred to as view-invariance, is an essential process that humans engage in daily. Currently, the ability to investigate the neurobiological underpinnings of this phenomenon is limited, as few ethologically valid view-invariant object recognition tasks exist for rodents. Here, we report two complementary, novel view-invariant object recognition tasks in which rodents physically interact with three-dimensional objects. Prior to experimentation, rats and mice were given extensive experience with a set of 'pre-exposure' objects. In a variant of the spontaneous object recognition task, novelty preference for pre-exposed or new objects was assessed at various angles of rotation (45°, 90° or 180°); unlike control rodents, for whom the objects were novel, rats and mice tested with pre-exposed objects did not discriminate between rotated and un-rotated objects in the choice phase, indicating substantial view-invariant object recognition. Secondly, using automated operant touchscreen chambers, rats were tested on pre-exposed or novel objects in a pairwise discrimination task, where the rewarded stimulus (S+) was rotated (180°) once rats had reached acquisition criterion; rats tested with pre-exposed objects re-acquired the pairwise discrimination following S+ rotation more effectively than those tested with new objects. Systemic scopolamine impaired performance on both tasks, suggesting involvement of acetylcholine at muscarinic receptors in view-invariant object processing. These tasks present novel means of studying the behavioral and neural bases of view-invariant object recognition in rodents. Copyright © 2018 Elsevier B.V. All rights reserved.
Incidental Memory of Younger and Older Adults for Objects Encountered in a Real World Context
Qin, Xiaoyan; Bochsler, Tiana M.; Aizpurua, Alaitz; Cheong, Allen M. Y.; Koutstaal, Wilma; Legge, Gordon E.
2014-01-01
Effects of context on the perception of, and incidental memory for, real-world objects have predominantly been investigated in younger individuals, under conditions involving a single static viewpoint. We examined the effects of prior object context and object familiarity on both older and younger adults’ incidental memory for real objects encountered while they traversed a conference room. Recognition memory for context-typical and context-atypical objects was compared with a third group of unfamiliar objects that were not readily named and that had no strongly associated context. Both older and younger adults demonstrated a typicality effect, showing significantly lower 2-alternative-forced-choice recognition of context-typical than context-atypical objects; for these objects, the recognition of older adults either significantly exceeded, or numerically surpassed, that of younger adults. Testing-awareness elevated recognition but did not interact with age or with object type. Older adults showed significantly higher recognition for context-atypical objects than for unfamiliar objects that had no prior strongly associated context. The observation of a typicality effect in both age groups is consistent with preserved semantic schemata processing in aging. The incidental recognition advantage of older over younger adults for the context-typical and context-atypical objects may reflect aging-related differences in goal-related processing, with older adults under comparatively more novel circumstances being more likely to direct their attention to the external environment, or age-related differences in top-down effortful distraction regulation, with older individuals’ attention more readily captured by salient objects in the environment. Older adults’ reduced recognition of unfamiliar objects compared to context-atypical objects may reflect possible age differences in contextually driven expectancy violations. 
The latter finding underscores the theoretical and methodological value of including a third type of objects, comparatively neutral with respect to their contextual associations, to help differentiate between contextual integration effects (for schema-consistent objects) and expectancy violations (for schema-inconsistent objects). PMID:24941065
Lawson, Rebecca
2014-02-01
The limits of generalization of our 3-D shape recognition system to identifying objects by touch were investigated by testing exploration at unusual locations and using untrained effectors. In Experiments 1 and 2, people found identification by hand of real objects, plastic 3-D models of objects, and raised line drawings placed in front of themselves no easier than when exploration was behind their back. Experiment 3 compared one-handed, two-handed, one-footed, and two-footed haptic object recognition of familiar objects. Recognition by foot was slower (13 vs. 7 s) and much less accurate (47% vs. 9% errors) than recognition by either one or both hands. Nevertheless, item difficulty was similar across hand and foot exploration, and there was a strong correlation between an individual's hand and foot performance. Furthermore, foot recognition was better with the largest 20 of the 80 items (32% errors), suggesting that physical limitations hampered exploration by foot. Thus, object recognition by hand generalized efficiently across the spatial location of stimuli, while object recognition by foot seemed surprisingly good given that no prior training was provided. Active touch (haptics) thus efficiently extracts 3-D shape information and accesses stored representations of familiar objects from novel modes of input.
Representation of 3-Dimensional Objects by the Rat Perirhinal Cortex
Burke, S.N.; Maurer, A.P.; Hartzell, A.L.; Nematollahi, S.; Uprety, A.; Wallace, J.L.; Barnes, C.A.
2012-01-01
The perirhinal cortex (PRC) is known to play an important role in object recognition. Little is known, however, regarding the activity of PRC neurons during the presentation of stimuli that are commonly used for recognition memory tasks in rodents, that is, 3-dimensional objects. Rats in the present study were exposed to 3-dimensional objects while they traversed a circular track for food reward. Under some behavioral conditions the track contained novel objects, familiar objects, or no objects. Approximately 38% of PRC neurons demonstrated ‘object fields’ (a selective increase in firing at the location of one or more objects). Although the rats spent more time exploring the objects when they were novel compared to familiar, indicating successful recognition memory, the proportion of object fields and the firing rates of PRC neurons were not affected by the rats’ previous experience with the objects. Together these data indicate that the activity of PRC cells is powerfully affected by the presence of objects while animals navigate through an environment, but under these conditions, the firing patterns are not altered by the relative novelty of objects during successful object recognition. PMID:22987680
Object Recognition and Localization: The Role of Tactile Sensors
Aggarwal, Achint; Kirchner, Frank
2014-01-01
Tactile sensors, because of their intrinsic insensitivity to lighting conditions and water turbidity, provide promising opportunities for augmenting the capabilities of vision sensors in applications involving object recognition and localization. This paper presents two approaches for haptic object recognition and localization for ground and underwater environments. The first approach called Batch Ransac and Iterative Closest Point augmented Particle Filter (BRICPPF) is based on an innovative combination of particle filters, Iterative-Closest-Point algorithm, and a feature-based Random Sampling and Consensus (RANSAC) algorithm for database matching. It can handle a large database of 3D-objects of complex shapes and performs a complete six-degree-of-freedom localization of static objects. The algorithms are validated by experimentation in ground and underwater environments using real hardware. To our knowledge this is the first instance of haptic object recognition and localization in underwater environments. The second approach is biologically inspired, and provides a close integration between exploration and recognition. An edge following exploration strategy is developed that receives feedback from the current state of recognition. A recognition by parts approach is developed which uses the BRICPPF for object sub-part recognition. Object exploration is either directed to explore a part until it is successfully recognized, or is directed towards new parts to endorse the current recognition belief. This approach is validated by simulation experiments. PMID:24553087
The Role of Perceptual Load in Object Recognition
ERIC Educational Resources Information Center
Lavie, Nilli; Lin, Zhicheng; Zokaei, Nahid; Thoma, Volker
2009-01-01
Predictions from perceptual load theory (Lavie, 1995, 2005) regarding object recognition across the same or different viewpoints were tested. Results showed that high perceptual load reduces distracter recognition levels despite always presenting distracter objects from the same view. They also showed that the levels of distracter recognition were…
Appearance-based face recognition and light-fields.
Gross, Ralph; Matthews, Iain; Baker, Simon
2004-04-01
Arguably the most important decision to be made when developing an object recognition algorithm is selecting the scene measurements or features on which to base the algorithm. In appearance-based object recognition, the features are chosen to be the pixel intensity values in an image of the object. These pixel intensities correspond directly to the radiance of light emitted from the object along certain rays in space. The set of all such radiance values over all possible rays is known as the plenoptic function or light-field. In this paper, we develop a theory of appearance-based object recognition from light-fields. This theory leads directly to an algorithm for face recognition across pose that uses as many images of the face as are available, from one upwards. All of the pixels, whichever image they come from, are treated equally and used to estimate the (eigen) light-field of the object. The eigen light-field is then used as the set of features on which to base recognition, analogously to how the pixel intensities are used in appearance-based face and object recognition.
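The pixel-intensity feature extraction described above is, at its core, the classical eigen-decomposition of vectorized images. Below is a minimal PCA sketch of that step with nearest-neighbor recognition in coefficient space; the paper's actual contribution, extending this from single images to light-fields, is not reproduced here, and the gallery and sizes are arbitrary.

```python
import numpy as np

def eigen_basis(images, n_components):
    # Stack vectorized images, center them, and take the top principal
    # axes ("eigen-images") via SVD of the centered data matrix.
    X = np.array([im.ravel() for im in images], dtype=float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def features(image, mean, basis):
    # Appearance-based feature vector: coefficients in the eigen basis.
    return basis @ (image.ravel() - mean)

# Recognition: nearest gallery item in coefficient space.
rng = np.random.default_rng(1)
gallery = [rng.random((8, 8)) for _ in range(5)]
mean, basis = eigen_basis(gallery, n_components=3)
coeffs = [features(g, mean, basis) for g in gallery]
probe = gallery[2].copy()  # a new view would be projected the same way
best = min(range(5),
           key=lambda i: np.linalg.norm(features(probe, mean, basis) - coeffs[i]))
print(best)  # 2
```

In the eigen light-field generalization, the rows of the data matrix hold radiance samples over many rays rather than the pixels of a single image, so images from any available pose contribute to the same coefficient estimate.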
Shape and Color Features for Object Recognition Search
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Duong, Vu A.; Stubberud, Allen R.
2012-01-01
A bio-inspired shape feature of an object of interest emulates the integration of the saccadic eye movement and the horizontal layer in the vertebrate retina for object recognition search, in which a single object is processed at a time. An optimal computational model for shape-extraction-based principal component analysis (PCA) was also developed to reduce processing time and enable real-time adaptive system capability. A color feature of the object is employed for color segmentation to complement shape-based recognition in heterogeneous environments, where a single technique (shape or color) may encounter difficulties. To enable an effective system, an adaptive architecture and autonomous mechanism were developed to recognize and adapt to the shape and color features of a moving object. Object recognition based on bio-inspired shape and color features can thus be effective for recognizing a person of interest in heterogeneous environments where a single technique would struggle. Moreover, this work also demonstrates the mechanism and architecture of an autonomous adaptive system, enabling a realistic system for practical use in the future.
Infant Visual Attention and Object Recognition
Reynolds, Greg D.
2015-01-01
This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. PMID:25596333
Exploring the feasibility of traditional image querying tasks for industrial radiographs
NASA Astrophysics Data System (ADS)
Bray, Iliana E.; Tsai, Stephany J.; Jimenez, Edward S.
2015-08-01
Although there have been great strides in object recognition with optical images (photographs), there has been comparatively little research into object recognition for X-ray radiographs. Our exploratory work contributes to this area by creating an object recognition system designed to recognize components from a related database of radiographs. Object recognition for radiographs must be approached differently than for optical images, because radiographs have much less color-based information to distinguish objects, and they exhibit transmission overlap that alters perceived object shapes. The dataset used in this work contained more than 55,000 intermixed radiographs and photographs, all in a compressed JPEG form and with multiple ways of describing pixel information. For this work, a robust and efficient system is needed to combat problems presented by properties of the X-ray imaging modality, the large size of the given database, and the quality of the images contained in said database. We have explored various pre-processing techniques to clean the cluttered and low-quality images in the database, and we have developed our object recognition system by combining multiple object detection and feature extraction methods. We present the preliminary results of the still-evolving hybrid object recognition system.
An ERP Study on Self-Relevant Object Recognition
ERIC Educational Resources Information Center
Miyakoshi, Makoto; Nomura, Michio; Ohira, Hideki
2007-01-01
We performed an event-related potential study to investigate the self-relevance effect in object recognition. Three stimulus categories were prepared: SELF (participant's own objects), FAMILIAR (disposable and public objects, defined as objects with less-self-relevant familiarity), and UNFAMILIAR (others' objects). The participants' task was to…
Fields, Chris
2011-01-01
The perception of persisting visual objects is mediated by transient intermediate representations, object files, that are instantiated in response to some, but not all, visual trajectories. The standard object file concept does not, however, provide a mechanism sufficient to account for all experimental data on visual object persistence, object tracking, and the ability to perceive spatially disconnected stimuli as continuously existing objects. Based on relevant anatomical, functional, and developmental data, a functional model is constructed that bases visual object individuation on the recognition of temporal sequences of apparent center-of-mass positions that are specifically identified as trajectories by dedicated “trajectory recognition networks” downstream of the medial–temporal motion-detection area. This model is shown to account for a wide range of data, and to generate a variety of testable predictions. Individual differences in the recognition, abstraction, and encoding of trajectory information are expected to generate distinct object persistence judgments and object recognition abilities. Dominance of trajectory information over feature information in stored object tokens during early infancy, in particular, is expected to disrupt the ability to re-identify human and other individuals across perceptual episodes, and lead to developmental outcomes with characteristics of autism spectrum disorders. PMID:21716599
Nie, Haitao; Long, Kehui; Ma, Jun; Yue, Dan; Liu, Jinguo
2015-01-01
Partial occlusions, large pose variations, and extreme ambient illumination conditions generally cause performance degradation in object recognition systems. Therefore, this paper presents a novel approach for fast and robust object recognition in cluttered scenes based on an improved scale invariant feature transform (SIFT) algorithm and a fuzzy closed-loop control method. First, a fast SIFT algorithm is proposed by classifying SIFT features into several clusters based on several attributes computed from the sub-orientation histogram (SOH); in the feature matching phase, only features that share nearly the same corresponding attributes are compared. Second, a feature matching step is performed following a prioritized order based on the scale factor, which is calculated between the object image and the target object image, guaranteeing robust feature matching. Finally, a fuzzy closed-loop control strategy is applied to increase the accuracy of the object recognition and is essential for the autonomous object manipulation process. Compared to the original SIFT algorithm for object recognition, the results of the proposed method show that the number of SIFT features extracted from an object increases significantly, and the computing speed of the object recognition process increases by more than 40%. The experimental results confirmed that the proposed method performs effectively and accurately in cluttered scenes. PMID:25714094
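The cluster-restricted matching idea can be illustrated as follows. Note that `cluster_id` below is a toy stand-in for the paper's SOH-derived attributes (the actual attribute computation is not described in enough detail here to reproduce), and the ratio test is standard Lowe-style matching rather than anything specific to this method; the point is only that bucketing by a cheap attribute cuts the number of descriptor comparisons.

```python
import numpy as np

def cluster_id(descriptor, n_bins=4):
    # Toy attribute standing in for the paper's SOH-derived attributes:
    # a coarse index computed from the descriptor's dominant dimension.
    return int(np.argmax(descriptor)) % n_bins

def match(query_feats, target_feats, ratio=0.8):
    # Bucket target features by cluster attribute, then compare each query
    # feature only against its own bucket and apply Lowe's ratio test.
    buckets = {}
    for idx, d in enumerate(target_feats):
        buckets.setdefault(cluster_id(d), []).append(idx)
    matches = []
    for qi, q in enumerate(query_feats):
        cand = buckets.get(cluster_id(q), [])
        if len(cand) < 2:
            continue  # ratio test needs at least two candidates
        dists = sorted((np.linalg.norm(q - target_feats[ti]), ti) for ti in cand)
        if dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches

query = np.array([[1.0, 0.0, 0.0, 0.0]])
targets = np.array([[1.0, 0.0, 0.0, 0.0],   # same cluster, exact match
                    [3.0, 0.5, 0.0, 0.0],   # same cluster, far away
                    [0.0, 5.0, 0.0, 0.0]])  # different cluster, never compared
print(match(query, targets))  # [(0, 0)]
```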
Recognition-induced forgetting of faces in visual long-term memory.
Rugo, Kelsi F; Tamler, Kendall N; Woodman, Geoffrey F; Maxcey, Ashleigh M
2017-10-01
Despite more than a century of evidence that long-term memory for pictures and words differs, much of what we know about memory comes from studies using words. Recent research examining visual long-term memory has demonstrated that recognizing an object induces the forgetting of objects from the same category. This recognition-induced forgetting has been shown with a variety of everyday objects. However, unlike everyday objects, faces are objects of expertise, and as a result faces might be immune to recognition-induced forgetting. Yet despite excellent memory for such stimuli, we found that faces were susceptible to recognition-induced forgetting. Our findings have implications for how models of human memory account for recognition-induced forgetting and represent objects of expertise, and they carry consequences for eyewitness testimony and the justice system.
Feedforward object-vision models only tolerate small image variations compared to human
Ghodrati, Masoud; Farzmahdi, Amirhossein; Rajaei, Karim; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi
2014-01-01
Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanisms have constantly been under intense investigation. Computational modeling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well in image categorization under more complex image variations. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that the models perform similarly to humans in categorization tasks only under low-level image variations. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modeling. We show that this approach is of little help in solving the computational crux of object recognition (i.e., invariance) when the identity-preserving image variations become more complex. PMID:25100986
Parts and Relations in Young Children's Shape-Based Object Recognition
ERIC Educational Resources Information Center
Augustine, Elaine; Smith, Linda B.; Jones, Susan S.
2011-01-01
The ability to recognize common objects from sparse information about geometric shape emerges during the same period in which children learn object names and object categories. Hummel and Biederman's (1992) theory of object recognition proposes that the geometric shapes of objects have two components--geometric volumes representing major object…
Object, spatial and social recognition testing in a single test paradigm.
Lian, Bin; Gao, Jun; Sui, Nan; Feng, Tingyong; Li, Ming
2018-07-01
Animals have the ability to process information about an object's or a conspecific's physical features and location, and to alter their behavior when such information is updated. In the laboratory, object, spatial, and social recognition are often studied in separate tasks, making it difficult to examine the potential dissociations and interactions among the various types of recognition memory. The present study introduced a single paradigm to detect object and spatial recognition, as well as social recognition of familiar and novel conspecifics. Specifically, male and female Sprague-Dawley adult (>75 days old) or preadolescent (25-28 days old) rats were tested with two objects and one social partner in an open-field arena for four 10-min sessions with a 20-min inter-session interval. After the first sample session, a new object replaced one of the sampled objects in the second session, and the location of one of the old objects was changed in the third session. Finally, a new social partner was introduced in the fourth session and replaced the familiar one. Exploration time with each stimulus was recorded, and measures for the three types of recognition were calculated based on the discrimination ratio. Overall, adult and preadolescent male and female rats spent more time exploring the social partner than the objects, showing a clear preference for the social stimulus over the nonsocial ones. They also did not differ in their abilities to discriminate a new object, a new location, and a new social partner from familiar ones, or to recognize a familiar conspecific. Acute administration of MK-801 (an NMDA receptor antagonist, 0.025 and 0.10 mg/kg, i.p.) after the sample session dose-dependently reduced the total time spent exploring the social partner and objects in the adult rats, and had a significantly larger effect in females than in males. MK-801 also dose-dependently increased motor activity.
However, it did not alter object, spatial, or social recognition. These findings indicate that the new triple recognition paradigm can measure object, spatial location, and social recognition together and reveal potential sex and age differences. This paradigm is also useful for studying object and social exploration concurrently and can be used to evaluate cognition-altering drugs at various stages of recognition memory. Copyright © 2018. Published by Elsevier Inc.
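The discrimination ratio used in novelty-preference paradigms like this one is conventionally computed from exploration times as (novel − familiar) / (novel + familiar). The abstract does not state the exact formula the authors used, so the sketch below assumes this common form:

```python
def discrimination_ratio(novel_time, familiar_time):
    """Discrimination index from exploration times (seconds).
    +1 = only the novel stimulus explored, 0 = chance, -1 = only familiar."""
    total = novel_time + familiar_time
    if total == 0:
        raise ValueError("no exploration recorded")
    return (novel_time - familiar_time) / total
```

For example, a rat that explores the novel stimulus for 30 s and the familiar one for 10 s scores 0.5, a clear novelty preference.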
Automatic anatomy recognition on CT images with pathology
NASA Astrophysics Data System (ADS)
Huang, Lidong; Udupa, Jayaram K.; Tong, Yubing; Odhner, Dewey; Torigian, Drew A.
2016-03-01
Body-wide anatomy recognition on CT images with pathology is crucial for quantifying body-wide disease burden. This, however, is a challenging problem because different diseases produce diverse abnormalities in object shape and intensity patterns. We previously developed an automatic anatomy recognition (AAR) system [1] whose applicability was demonstrated on 35 organs across different body regions in near-normal diagnostic CT images. The aim of this paper is to investigate strategies for adapting the previous AAR system to diagnostic CT images of patients with various pathologies, as a first step toward automated body-wide disease quantification. The AAR approach consists of three main steps: model building, object recognition, and object delineation. In this paper, within the broader AAR framework, we describe a new object recognition strategy for handling abnormal images. In the model building stage, an optimal threshold interval is learned from near-normal training images for each object. This threshold is then optimally tuned to the pathological manifestation of the object in the test image. Recognition is performed following a hierarchical representation of the objects. Experimental results for the abdominal body region, based on 50 near-normal images used for model building and 20 abnormal images used for object recognition, show that with the new strategy object localization accuracy within 2 voxels for liver and spleen and within 3 voxels for kidney can be achieved.
Modeling recall memory for emotional objects in Alzheimer's disease.
Sundstrøm, Martin
2011-07-01
To examine whether emotional memory (EM) of objects with self-reference in Alzheimer's disease (AD) can be modeled with binomial logistic regression in a free recall and an object recognition test to predict EM enhancement. Twenty patients with AD and twenty healthy controls were studied. Six objects (three presented as gifts) were shown to each participant. Ten minutes later, a free recall and a recognition test were applied. The recognition test mixed the target objects with six similar distracter objects. Participants were asked to name any object in the recall test and to identify each object in the recognition test as known or unknown. The proportion of gift objects recalled by AD patients (41.6%) was larger than that of neutral objects (13.3%), and a significant EM recall effect for gifts was found (Wilcoxon: p < .003). No EM effect was found for recognition in AD patients due to a ceiling effect. Healthy older adults scored higher overall in recall and recognition but showed no EM enhancement, also due to a ceiling effect. A logistic regression showed that the likelihood of emotional recall memory can be modeled as a function of MMSE score (p < .014) and object status (p < .0001) as gift or non-gift. Recall memory was enhanced in AD patients for emotional objects, indicating that EM in mild to moderate AD, although impaired, can be provoked with a strong emotional load. The logistic regression model suggests that EM declines with the progression of AD rather than being abruptly disrupted, and may be a useful tool for evaluating the magnitude of emotional load.
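The fitted model takes the usual binomial logistic form, with recall probability driven by MMSE score and gift status. The coefficients in the sketch below are illustrative placeholders, not the study's fitted values (the abstract reports only p-values, not effect sizes):

```python
import math

def p_recall(mmse, is_gift, b0=-6.0, b_mmse=0.25, b_gift=2.0):
    """Logistic model of the probability of recalling an object.
    b0, b_mmse, and b_gift are made-up illustrative coefficients."""
    z = b0 + b_mmse * mmse + b_gift * (1.0 if is_gift else 0.0)
    return 1.0 / (1.0 + math.exp(-z))
```

With any positive coefficients of this shape, the model predicts higher recall for gift objects at every MMSE score and a gradual decline in recall as MMSE drops with disease progression, matching the abstract's "declines rather than disrupts" interpretation.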
Vision-based object detection and recognition system for intelligent vehicles
NASA Astrophysics Data System (ADS)
Ran, Bin; Liu, Henry X.; Martono, Wilfung
1999-01-01
Recently, a proactive crash mitigation system has been proposed to enhance the crash avoidance and survivability of Intelligent Vehicles. An accurate object detection and recognition system is a prerequisite for a proactive crash mitigation system, as component deployment algorithms rely on accurate hazard detection, recognition, and tracking information. In this paper, we present a vision-based approach to detect and recognize vehicles and traffic signs, obtain their information, and track multiple objects using a sequence of color images taken from a moving vehicle. The entire system consists of two sub-systems: the vehicle detection and recognition sub-system and the traffic sign detection and recognition sub-system. Both sub-systems consist of four models: an object detection model, an object recognition model, an object information model, and an object tracking model. In order to detect potential objects on the road, several features of the objects are investigated, including the symmetrical shape and aspect ratio of a vehicle and the color and shape information of the signs. A two-layer neural network is trained to recognize different types of vehicles, and a parameterized traffic sign model is established in the process of recognizing a sign. Tracking is accomplished by combining the analysis of single image frames with the analysis of consecutive image frames; the single-frame analysis is performed every ten full-size images. The information model obtains information related to the object, such as time to collision for the object vehicle and relative distance from the traffic signs. Experimental results demonstrated a robust and accurate system for real-time object detection and recognition over thousands of image frames.
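The vehicle-detection cues named in this abstract (bounding-box aspect ratio and left-right symmetry) are simple enough to sketch on a binary mask. The following is a hypothetical illustration of that kind of candidate test; the mask representation and the thresholds are assumptions, not the paper's values:

```python
def aspect_ratio(mask):
    """Width/height of the tight bounding box of nonzero pixels
    in a binary mask given as a list of rows."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for row in mask for c, v in enumerate(row) if v]
    return (max(cols) - min(cols) + 1) / (max(rows) - min(rows) + 1)

def symmetry_score(mask):
    """Fraction of pixels equal to their horizontal mirror image."""
    total = hits = 0
    for row in mask:
        for c, v in enumerate(row):
            total += 1
            hits += v == row[len(row) - 1 - c]
    return hits / total

def is_vehicle_candidate(mask, ar_range=(0.8, 3.0), min_sym=0.9):
    """Accept a region only if it is roughly vehicle-shaped:
    wider than tall (within limits) and close to symmetric."""
    return (ar_range[0] <= aspect_ratio(mask) <= ar_range[1]
            and symmetry_score(mask) >= min_sym)
```

In a full pipeline, a cheap geometric filter like this prunes candidate regions before a classifier (here, the paper's two-layer neural network) sees them.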
Rapid effects of dorsal hippocampal G-protein coupled estrogen receptor on learning in female mice.
Lymer, Jennifer; Robinson, Alana; Winters, Boyer D; Choleris, Elena
2017-03-01
Through rapid mechanisms of action, estrogens affect learning and memory processes. It has been shown that 17β-estradiol and an estrogen receptor (ER) α agonist enhance performance in social recognition, object recognition, and object placement tasks when administered systemically or infused into the dorsal hippocampus. In contrast, systemic and dorsal hippocampal ERβ activation only promotes spatial learning. In addition, 17β-estradiol and agonists of ERα and the G-protein coupled estrogen receptor (GPER) increase dendritic spine density in the CA1 hippocampus. Recently, we have shown that selective systemic activation of the GPER also rapidly facilitates social recognition, object recognition, and object placement learning in female mice. Whether activation of the GPER specifically in the dorsal hippocampus can also rapidly improve learning and memory prior to acquisition was unknown. Here, we investigated the rapid effects of infusion of the GPER agonist G-1 (doses: 50 nM, 100 nM, 200 nM) into the dorsal hippocampus on social recognition, object recognition, and object placement learning tasks in the home cage. These paradigms were completed within 40 min, which is within the range of rapid estrogenic effects. Dorsal hippocampal administration of G-1 improved social (doses: 50 nM, 200 nM) and object (dose: 200 nM) recognition, with no effect on object placement. Additionally, when spatial cues were minimized by testing in a Y-apparatus, G-1 administration promoted social (doses: 100 nM, 200 nM) and object (doses: 50 nM, 100 nM, 200 nM) recognition. Therefore, like ERα, the GPER in the hippocampus appears to be sufficient for the rapid facilitation of social and object recognition in female mice, but not for the rapid facilitation of object placement learning. Thus, the GPER in the dorsal hippocampus is involved in estrogenic mediation of learning and memory, and these effects likely occur through rapid signalling mechanisms. Copyright © 2016 Elsevier Ltd.
All rights reserved.
Dopamine D1 receptor activation leads to object recognition memory in a coral reef fish.
Hamilton, Trevor J; Tresguerres, Martin; Kline, David I
2017-07-01
Object recognition memory is the ability to identify previously seen objects and is an adaptive mechanism that increases survival for many species throughout the animal kingdom. Previously believed to be possessed by only the highest-order mammals, it is now becoming clear that fish are also capable of this type of memory formation. Similar to the mammalian hippocampus, the dorsolateral pallium regulates distinct memory processes and is modulated by neurotransmitters such as dopamine. Caribbean bicolour damselfish (Stegastes partitus) live in complex environments dominated by coral reef structures and thus likely possess many types of complex memory abilities, including object recognition. This study used a novel object recognition test in which fish were first presented two identical objects; then, after a retention interval of 10 min with no objects, the fish were presented with a novel object and one of the objects they had previously encountered in the first trial. We demonstrate that the dopamine D1-receptor agonist SKF 38393 induces the formation of object recognition memories in these fish. Thus, our results suggest that dopamine-receptor mediated enhancement of memory formation in fish represents an evolutionarily conserved mechanism in vertebrates. © 2017 The Author(s).
Pezze, Marie A.; Marshall, Hayley J.; Fone, Kevin C.F.; Cassaday, Helen J.
2015-01-01
Previous studies have shown that dopamine D1 receptor antagonists impair novel object recognition memory but the effects of dopamine D1 receptor stimulation remain to be determined. This study investigated the effects of the selective dopamine D1 receptor agonist SKF81297 on acquisition and retrieval in the novel object recognition task in male Wistar rats. SKF81297 (0.4 and 0.8 mg/kg s.c.) given 15 min before the sampling phase impaired novel object recognition evaluated 10 min or 24 h later. The same treatments also reduced novel object recognition memory tested 24 h after the sampling phase and when given 15 min before the choice session. These data indicate that D1 receptor stimulation modulates both the encoding and retrieval of object recognition memory. Microinfusion of SKF81297 (0.025 or 0.05 μg/side) into the prelimbic sub-region of the medial prefrontal cortex (mPFC) in this case 10 min before the sampling phase also impaired novel object recognition memory, suggesting that the mPFC is one important site mediating the effects of D1 receptor stimulation on visual recognition memory. PMID:26277743
Method of synthesized phase objects for pattern recognition with rotation invariance
NASA Astrophysics Data System (ADS)
Ostroukh, Alexander P.; Butok, Alexander M.; Shvets, Rostislav A.; Yezhov, Pavel V.; Kim, Jin-Tae; Kuzmenko, Alexander V.
2015-11-01
We present a development of the method of synthesized phase objects (SPO method) [1] for rotation-invariant pattern recognition. For both the standard recognition method and the SPO method, the parameters of the correlation signals for a number of amplitude objects under rotation are compared in an optical-digital correlator with joint Fourier transformation. It is shown that joint correlation of synthesized phase objects (SP objects) not only achieves rotation invariance but also retains the main advantage of the SP-object method over the reference one: a unified δ-like recognition signal with the largest possible signal-to-noise ratio, independent of the type of object.
Infant visual attention and object recognition.
Reynolds, Greg D
2015-05-15
This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. Copyright © 2015 Elsevier B.V. All rights reserved.
Deletion of the GluA1 AMPA receptor subunit impairs recency-dependent object recognition memory
Sanderson, David J.; Hindley, Emma; Smeaton, Emily; Denny, Nick; Taylor, Amy; Barkus, Chris; Sprengel, Rolf; Seeburg, Peter H.; Bannerman, David M.
2011-01-01
Deletion of the GluA1 AMPA receptor subunit impairs short-term spatial recognition memory. It has been suggested that short-term recognition depends upon memory caused by the recent presentation of a stimulus that is independent of contextual–retrieval processes. The aim of the present set of experiments was to test whether the role of GluA1 extends to nonspatial recognition memory. Wild-type and GluA1 knockout mice were tested on the standard object recognition task and a context-independent recognition task that required recency-dependent memory. In a first set of experiments it was found that GluA1 deletion failed to impair performance on either of the object recognition or recency-dependent tasks. However, GluA1 knockout mice displayed increased levels of exploration of the objects in both the sample and test phases compared to controls. In contrast, when the time that GluA1 knockout mice spent exploring the objects was yoked to control mice during the sample phase, it was found that GluA1 deletion now impaired performance on both the object recognition and the recency-dependent tasks. GluA1 deletion failed to impair performance on a context-dependent recognition task regardless of whether object exposure in knockout mice was yoked to controls or not. These results demonstrate that GluA1 is necessary for nonspatial as well as spatial recognition memory and plays an important role in recency-dependent memory processes. PMID:21378100
Recognition Of Complex Three Dimensional Objects Using Three Dimensional Moment Invariants
NASA Astrophysics Data System (ADS)
Sadjadi, Firooz A.
1985-01-01
A technique for the recognition of complex three-dimensional objects is presented. The complex 3-D objects are represented in terms of their 3-D moment invariants: algebraic expressions that remain unchanged regardless of the 3-D objects' orientations and locations in the field of view. The technique of 3-D moment invariants has been used successfully for simple 3-D object recognition in the past; in this work we extend the method to the representation of more complex objects. Two complex objects are represented digitally, their 3-D moment invariants are calculated, and the invariance of these moment expressions is then verified by changing the orientation and the location of the objects in the field of view. The results of this study have significant impact on 3-D robotic vision, 3-D target recognition, scene analysis, and artificial intelligence.
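The simplest such invariant is built from the second-order central moments: J1 = μ200 + μ020 + μ002 (the trace of the inertia-like matrix), which is unchanged by translating or rotating the object. The sketch below verifies this on a toy voxel grid; it illustrates the general idea only and is not the paper's full invariant set:

```python
def central_moment(vox, p, q, r):
    """Central moment mu_pqr of a binary/weighted 3D grid vox[z][y][x]."""
    m000 = sx = sy = sz = 0.0
    for z, layer in enumerate(vox):
        for y, row in enumerate(layer):
            for x, v in enumerate(row):
                m000 += v
                sx += v * x
                sy += v * y
                sz += v * z
    cx, cy, cz = sx / m000, sy / m000, sz / m000  # center of mass
    mu = 0.0
    for z, layer in enumerate(vox):
        for y, row in enumerate(layer):
            for x, v in enumerate(row):
                mu += v * (x - cx) ** p * (y - cy) ** q * (z - cz) ** r
    return mu

def j1(vox):
    """First second-order 3-D moment invariant: mu200 + mu020 + mu002."""
    return (central_moment(vox, 2, 0, 0)
            + central_moment(vox, 0, 2, 0)
            + central_moment(vox, 0, 0, 2))
```

Rotating the voxel grid by 90° in-plane (or translating it) leaves j1 unchanged, which is exactly the verification procedure the abstract describes for its more complex invariant expressions.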
A Taxonomy of 3D Occluded Objects Recognition Techniques
NASA Astrophysics Data System (ADS)
Soleimanizadeh, Shiva; Mohamad, Dzulkifli; Saba, Tanzila; Al-ghamdi, Jarallah Saleh
2016-03-01
The overall performance of object recognition techniques under different conditions (e.g., occlusion, viewpoint, and illumination) has improved significantly in recent years. New applications and hardware have shifted towards digital photography and digital media, and growing Internet usage requires object recognition for certain applications, particularly for occluded objects. However, occlusion remains an unhandled issue that entangles the relations between the feature points extracted from an image, and research is ongoing to develop efficient techniques and easy-to-use algorithms that help users search images despite it. The aim of this research is to review algorithms for recognizing occluded objects and to identify their pros and cons in solving the occlusion problem: distinguishing an occluded object from other co-existing objects using features extracted from its visible fragments and sections within an image.
Learned Non-Rigid Object Motion is a View-Invariant Cue to Recognizing Novel Objects
Chuang, Lewis L.; Vuong, Quoc C.; Bülthoff, Heinrich H.
2012-01-01
There is evidence that observers use learned object motion to recognize objects. For instance, studies have shown that reversing the learned direction in which a rigid object rotated in depth impaired recognition accuracy. This motion reversal can be achieved by playing animation sequences of moving objects in reverse frame order. In the current study, we used this sequence-reversal manipulation to investigate whether observers encode the motion of dynamic objects in visual memory, and whether such dynamic representations are encoded in a way that is dependent on the viewing conditions. Participants first learned dynamic novel objects, presented as animation sequences. Following learning, they were then tested on their ability to recognize these learned objects when their animation sequence was shown in the same sequence order as during learning or in the reverse sequence order. In Experiment 1, we found that non-rigid motion contributed to recognition performance; that is, sequence-reversal decreased sensitivity across different tasks. In subsequent experiments, we tested the recognition of non-rigidly deforming (Experiment 2) and rigidly rotating (Experiment 3) objects across novel viewpoints. Recognition performance was affected by viewpoint changes for both experiments. Learned non-rigid motion continued to contribute to recognition performance and this benefit was the same across all viewpoint changes. By comparison, learned rigid motion did not contribute to recognition performance. These results suggest that non-rigid motion provides a source of information for recognizing dynamic objects, which is not affected by changes to viewpoint. PMID:22661939
Decreased acetylcholine release delays the consolidation of object recognition memory.
De Jaeger, Xavier; Cammarota, Martín; Prado, Marco A M; Izquierdo, Iván; Prado, Vania F; Pereira, Grace S
2013-02-01
Acetylcholine (ACh) is important for different cognitive functions such as learning, memory and attention. The release of ACh depends on its vesicular loading by the vesicular acetylcholine transporter (VAChT). It has been demonstrated that VAChT expression can modulate object recognition memory. However, the role of VAChT expression in the persistence of object recognition memory remains to be understood. To address this question we used distinct mouse lines with reduced expression of VAChT, as well as pharmacological manipulations of the cholinergic system. We showed that reduction of cholinergic tone impairs object recognition memory measured at 24 h. Surprisingly, object recognition memory measured 4 days after training was impaired by a substantial, but not a moderate, reduction in VAChT expression. Our results suggest that levels of acetylcholine release strongly modulate object recognition memory consolidation and appear to be of particular importance for memory persistence 4 days after training. Copyright © 2012 Elsevier B.V. All rights reserved.
Sticht, Martin A; Jacklin, Derek L; Mechoulam, Raphael; Parker, Linda A; Winters, Boyer D
2015-03-25
Cannabinoids disrupt learning and memory in human and nonhuman participants. Object recognition memory, which is particularly susceptible to the impairing effects of cannabinoids, relies critically on the perirhinal cortex (PRh); however, to date, the effects of cannabinoids within PRh have not been assessed. In the present study, we evaluated the effects of localized administration of the synthetic cannabinoid, HU210 (0.01, 1.0 μg/hemisphere), into PRh on spontaneous object recognition in Long-Evans rats. Animals received intra-PRh infusions of HU210 before the sample phase, and object recognition memory was assessed at various delays in a subsequent retention test. We found that presample intra-PRh HU210 dose dependently (1.0 μg but not 0.01 μg) interfered with spontaneous object recognition performance, exerting an apparently more pronounced effect when memory demands were increased. These novel findings show that cannabinoid agonists in PRh disrupt object recognition memory. Copyright © 2015 Wolters Kluwer Health, Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Yan, Fengxia; Udupa, Jayaram K.; Tong, Yubing; Xu, Guoping; Odhner, Dewey; Torigian, Drew A.
2018-03-01
The recently developed body-wide Automatic Anatomy Recognition (AAR) methodology depends on fuzzy modeling of individual objects, hierarchically arranging objects, constructing an anatomy ensemble of these models, and a dichotomous object recognition-delineation process. The parent-to-offspring spatial relationship in the object hierarchy is crucial in the AAR method. We have found this relationship to be quite complex, and as such any improvement in capturing this relationship information in the anatomy model will improve the process of recognition itself. Currently, the method encodes this relationship based on the layout of the geometric centers of the objects. Motivated by the concept of virtual landmarks (VLs), this paper presents a new one-shot AAR recognition method that utilizes the VLs to learn object relationships by training a neural network to predict the pose and the VLs of an offspring object given the VLs of the parent object in the hierarchy. We set up two neural networks for each parent-offspring object pair in a body region, one for predicting the VLs and another for predicting the pose parameters. The VL-based learning/prediction method is evaluated on two object hierarchies involving 14 objects. We utilize 54 computed tomography (CT) image data sets of head and neck cancer patients and the associated object contours drawn by dosimetrists for routine radiation therapy treatment planning. The VL neural network method is found to yield more accurate object localization than the currently used simple AAR method.
Tran, Dominic M D; Westbrook, R Frederick
2018-05-31
Exposure to a high-fat high-sugar (HFHS) diet rapidly impairs novel-place- but not novel-object-recognition memory in rats (Tran & Westbrook, 2015, 2017). Three experiments sought to investigate the generality of diet-induced cognitive deficits by examining whether there are conditions under which object-recognition memory is impaired. Experiments 1 and 3 tested the strength of short- and long-term object-memory trace, respectively, by varying the interval of time between object familiarization and subsequent novel object test. Experiment 2 tested the effect of increasing working memory load on object-recognition memory by interleaving additional object exposures between familiarization and test in an n-back style task. Experiments 1-3 failed to detect any differences in object recognition between HFHS and control rats. Experiment 4 controlled for object novelty by separately familiarizing both objects presented at test, which included one remote-familiar and one recent-familiar object. Under these conditions, when test objects differed in their relative recency, HFHS rats showed a weaker memory trace for the remote object compared to chow rats. This result suggests that the diet leaves intact recollection judgments, but impairs familiarity judgments. We speculate that the HFHS diet adversely affects "where" memories as well as the quality of "what" memories, and discuss these effects in relation to recollection and familiarity memory models, hippocampal-dependent functions, and episodic food memories. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
El Bekri, Nadia; Angele, Susanne; Ruckhäberle, Martin; Peinsipp-Byma, Elisabeth; Haelke, Bruno
2015-10-01
This paper introduces an interactive recognition assistance system for imaging reconnaissance. The system supports aerial image analysts on missions during two main tasks: object recognition and infrastructure analysis. Object recognition concentrates on the classification of a single object, while infrastructure analysis deals with describing the components of an infrastructure and recognizing the infrastructure type (e.g., military airfield). Based on satellite or aerial images, aerial image analysts extract single-object features and thereby recognize different object types; this is one of the most challenging tasks in imaging reconnaissance. Currently, no high-potential ATR (automatic target recognition) applications are available, and consequently the human observer cannot be replaced entirely. State-of-the-art ATR applications cannot match human perception and interpretation. Why is this still such a critical issue? First, cluttered and noisy images make it difficult to automatically extract, classify, and identify object types. Second, due to changed warfare and the rise of asymmetric threats, it is nearly impossible to create an underlying data set containing all features, objects, or infrastructure types. Many other factors, such as environmental parameters or aspect angles, further complicate the application of ATR. Due to the lack of suitable ATR procedures, the human factor remains important and so far irreplaceable. In order to use the potential benefits of human perception and computational methods in a synergistic way, both are unified in an interactive assistance system. RecceMan® (Reconnaissance Manual) offers two different modes for aerial image analysts on missions: the object recognition mode and the infrastructure analysis mode. The aim of the object recognition mode is to recognize a certain object type based on the object features derived from the image signatures.
The infrastructure analysis mode pursues the goal of analyzing the function of the infrastructure. The image analyst visually extracts certain target object signatures, assigns them to corresponding object features and is finally able to recognize the object type. The system offers the analyst the possibility of assigning the image signatures to features given by sample images. The underlying data set contains a wide range of object features and object types for different domains such as ships or land vehicles. Each domain has its own feature tree developed by expert aerial image analysts. By selecting the corresponding features, the possible solution set of objects is automatically reduced to only those objects that contain the selected features. Moreover, we give an outlook on current research in the field of ground target analysis, in which we deal with partly automated methods to extract image signatures and assign them to the corresponding features. This research includes methods for automatically determining the orientation of an object and geometric features such as the width and length of the object. This step makes it possible to automatically reduce the set of possible object types offered to the image analyst by the interactive recognition assistance system.
Changes in Visual Object Recognition Precede the Shape Bias in Early Noun Learning
Yee, Meagan; Jones, Susan S.; Smith, Linda B.
2012-01-01
Two of the most formidable skills that characterize human beings are language and our prowess in visual object recognition. They may also be developmentally intertwined. Two experiments, a large-sample cross-sectional study and a smaller-sample 6-month longitudinal study of 18- to 24-month-olds, tested a hypothesized developmental link between changes in visual object representation and noun learning. Previous findings in visual object recognition indicate that children’s ability to recognize common basic-level categories from sparse structural representations of object shape emerges between the ages of 18 and 24 months, is related to noun vocabulary size, and is lacking in children with language delay. Other research using artificial noun learning tasks shows that during this same developmental period, young children systematically generalize object names by shape, and that this shape bias predicts future noun learning and is lacking in children with language delay. The two experiments examine the developmental relation between visual object recognition and the shape bias for the first time. The results show that developmental changes in visual object recognition systematically precede the emergence of the shape bias. They suggest a developmental pathway in which early changes in visual object recognition, themselves linked to category learning, enable the discovery of higher-order regularities in category structure and thus the shape bias in novel noun learning tasks. The proposed developmental pathway has implications for understanding the role of specific experience in the development of both visual object recognition and the shape bias in early noun learning. PMID:23227015
Huang, Lijie; Song, Yiying; Li, Jingguang; Zhen, Zonglei; Yang, Zetian; Liu, Jia
2014-01-01
In functional magnetic resonance imaging studies, object selectivity is defined as a higher neural response to an object category than other object categories. Importantly, object selectivity is widely considered as a neural signature of a functionally-specialized area in processing its preferred object category in the human brain. However, the behavioral significance of the object selectivity remains unclear. In the present study, we used the individual differences approach to correlate participants' face selectivity in the face-selective regions with their behavioral performance in face recognition measured outside the scanner in a large sample of healthy adults. Face selectivity was defined as the z score of activation with the contrast of faces vs. non-face objects, and the face recognition ability was indexed as the normalized residual of the accuracy in recognizing previously-learned faces after regressing out that for non-face objects in an old/new memory task. We found that the participants with higher face selectivity in the fusiform face area (FFA) and the occipital face area (OFA), but not in the posterior part of the superior temporal sulcus (pSTS), possessed higher face recognition ability. Importantly, the association of face selectivity in the FFA and face recognition ability cannot be accounted for by FFA response to objects or behavioral performance in object recognition, suggesting that the association is domain-specific. Finally, the association is reliable, confirmed by the replication from another independent participant group. In sum, our finding provides empirical evidence on the validity of using object selectivity as a neural signature in defining object-selective regions in the human brain. PMID:25071513
McGugin, Rankin W.; Richler, Jennifer J.; Herzmann, Grit; Speegle, Magen; Gauthier, Isabel
2012-01-01
Individual differences in face recognition are often contrasted with differences in object recognition using a single object category. Likewise, individual differences in perceptual expertise for a given object domain have typically been measured relative to only a single category baseline. In Experiment 1, we present a new test of object recognition, the Vanderbilt Expertise Test (VET), which is comparable in methods to the Cambridge Face Memory Test (CFMT) but uses eight different object categories. Principal component analysis reveals that the underlying structure of the VET can be largely explained by two independent factors, which demonstrate good reliability and capture interesting sex differences inherent in the VET structure. In Experiment 2, we show how the VET can be used to separate domain-specific from domain-general contributions to a standard measure of perceptual expertise. While domain-specific contributions are found for car matching for both men and women and for plane matching in men, women in this sample appear to use more domain-general strategies to match planes. In Experiment 3, we use the VET to demonstrate that holistic processing of faces predicts face recognition independently of general object recognition ability, which has a sex-specific contribution to face recognition. Overall, the results suggest that the VET is a reliable and valid measure of object recognition abilities and can measure both domain-general skills and domain-specific expertise, which were both found to depend on the sex of observers. PMID:22877929
Affective and contextual values modulate spatial frequency use in object recognition
Caplette, Laurent; West, Gregory; Gomot, Marie; Gosselin, Frédéric; Wicker, Bruno
2014-01-01
Visual object recognition is of fundamental importance in our everyday interaction with the environment. Recent models of visual perception emphasize the role of top-down predictions in facilitating object recognition via initial guesses that limit the number of object representations that need to be considered. Several results suggest that this rapid and efficient object processing relies on the early extraction and processing of low spatial frequencies (LSFs). The present study aimed to investigate the spatial frequency (SF) content of visual object representations and its modulation by the contextual and affective values of the perceived object during a picture-name verification task. Stimuli consisted of pictures of objects equalized in SF content and categorized as having low or high affective and contextual values. To access the SF content of stored visual representations of objects, the SFs of each image were randomly sampled on a trial-by-trial basis. Results reveal that intermediate SFs between 14 and 24 cycles per object (2.3–4 cycles per degree) are correlated with fast and accurate identification for all categories of objects. Moreover, there was a significant interaction between affective and contextual values over the SFs correlating with fast recognition. These results suggest that the affective and contextual values of a visual object modulate the SF content of its internal representation, thus highlighting the flexibility of the visual recognition system. PMID:24904514
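The trial-by-trial random sampling of spatial frequencies described in this abstract can be illustrated with a small sketch. This is not the authors' code: the radial band count, the random per-band gains, and the function names are assumptions, in the spirit of SF-sampling ("SF bubbles") methods.

```python
import numpy as np

def sample_spatial_frequencies(img, rng, n_bins=8):
    """Randomly weight radial spatial-frequency bands of an image
    (illustrative stand-in for trial-by-trial SF sampling)."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    # Radial frequency of every FFT coefficient, in cycles per image.
    fy = np.fft.fftshift(np.fft.fftfreq(h)) * h
    fx = np.fft.fftshift(np.fft.fftfreq(w)) * w
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    weights = rng.random(n_bins)  # one random gain per SF band
    band = np.minimum((radius / radius.max() * n_bins).astype(int),
                      n_bins - 1)
    filtered = F * weights[band]
    # Back to the image domain; imaginary parts are numerical noise.
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered))), weights
```

Correlating the random `weights` of each trial with response speed and accuracy, as the study does, would then reveal which SF bands drive fast recognition.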
Distinct roles of basal forebrain cholinergic neurons in spatial and object recognition memory.
Okada, Kana; Nishizawa, Kayo; Kobayashi, Tomoko; Sakata, Shogo; Kobayashi, Kazuto
2015-08-06
Recognition memory requires processing of various types of information such as objects and locations. Impairment in recognition memory is a prominent feature of amnesia and a symptom of Alzheimer's disease (AD). Basal forebrain cholinergic neurons contain two major groups, one localized in the medial septum (MS)/vertical diagonal band of Broca (vDB), and the other in the nucleus basalis magnocellularis (NBM). The roles of these cell groups in recognition memory have been debated, and it remains unclear how they contribute to it. We use a genetic cell targeting technique to selectively eliminate cholinergic cell groups and then test spatial and object recognition memory through different behavioural tasks. Eliminating MS/vDB neurons impairs spatial but not object recognition memory in the reference and working memory tasks, whereas NBM elimination undermines only object recognition memory in the working memory task. These impairments are restored by treatment with acetylcholinesterase inhibitors, anti-dementia drugs for AD. Our results highlight that MS/vDB and NBM cholinergic neurons are not only implicated in recognition memory but also have essential roles in different types of recognition memory.
Han, Ren-Wen; Xu, Hong-Jiao; Zhang, Rui-San; Wang, Pei; Chang, Min; Peng, Ya-Li; Deng, Ke-Yu; Wang, Rui
2014-01-01
The noradrenergic activity in the basolateral amygdala (BLA) was reported to be involved in the regulation of object recognition memory. As the BLA expresses a high density of receptors for Neuropeptide S (NPS), we investigated whether the BLA is involved in mediating NPS's effects on object recognition memory consolidation and whether such effects require noradrenergic activity. Intracerebroventricular infusion of NPS (1 nmol) post training facilitated 24-h memory in a mouse novel object recognition task. The memory-enhancing effect of NPS could be blocked by the β-adrenoceptor antagonist propranolol. Furthermore, post-training intra-BLA infusions of NPS (0.5 nmol/side) improved 24-h memory for objects; this improvement was impaired by co-administration of propranolol (0.5 μg/side). Taken together, these results indicate that NPS interacts with the BLA noradrenergic system to improve object recognition memory during consolidation. Copyright © 2013 Elsevier Inc. All rights reserved.
Three-dimensional object recognition using similar triangles and decision trees
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly
1993-01-01
A system, TRIDEC, that is capable of distinguishing between a set of objects despite changes in the objects' positions in the input field, their size, or their rotational orientation in 3D space is described. TRIDEC combines very simple yet effective features with the classification capabilities of inductive decision tree methods. The feature vector is a list of all similar triangles defined by connecting all combinations of three pixels in a coarse coded 127 x 127 pixel input field. The classification is accomplished by building a decision tree using the information provided from a limited number of translated, scaled, and rotated samples. Simulation results are presented which show that TRIDEC achieves 94 percent recognition accuracy in the 2D invariant object recognition domain and 98 percent recognition accuracy in the 3D invariant object recognition domain after training on only a small sample of transformed views of the objects.
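The similar-triangles feature described in this abstract can be sketched as follows. This is a simplified reading, not TRIDEC itself: the coarse-coded 127 x 127 input field and the exact feature coding are omitted, and the rounding granularity and function names are assumptions. The key idea is that the similarity class of a triangle (its side-length ratios) is unchanged by translation, scaling, and rotation.

```python
from itertools import combinations
import math

def triangle_signature(p1, p2, p3):
    """Translation/scale/rotation-invariant signature of a triangle:
    side lengths sorted and normalized by the longest side."""
    sides = sorted(math.dist(a, b)
                   for a, b in combinations((p1, p2, p3), 2))
    longest = sides[-1]
    if longest == 0:
        return (0.0, 0.0)  # degenerate: coincident points
    # Two ratios fully determine the similarity class of the triangle.
    return (round(sides[0] / longest, 2), round(sides[1] / longest, 2))

def similar_triangle_histogram(pixels):
    """Histogram of similarity classes over all 3-pixel combinations,
    usable as an invariant feature vector for a decision tree."""
    hist = {}
    for tri in combinations(pixels, 3):
        sig = triangle_signature(*tri)
        hist[sig] = hist.get(sig, 0) + 1
    return hist
```

Because the histogram is identical for translated, scaled, or rotated copies of a shape, a decision tree trained on it needs only a limited number of transformed samples, as the abstract reports.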
Angelone, Bonnie L; Levin, Daniel T; Simons, Daniel J
2003-01-01
Observers typically detect changes to central objects more readily than changes to marginal objects, but they sometimes miss changes to central, attended objects as well. However, even if observers do not report such changes, they may be able to recognize the changed object. In three experiments we explored change detection and recognition memory for several types of changes to central objects in motion pictures. Observers who failed to detect a change still performed at above chance levels on a recognition task in almost all conditions. In addition, observers who detected the change were no more accurate in their recognition than those who did not detect the change. Despite large differences in the detectability of changes across conditions, those observers who missed the change did not vary in their ability to recognize the changing object.
Bello-Medina, Paola C; Sánchez-Carrasco, Livia; González-Ornelas, Nadia R; Jeffery, Kathryn J; Ramírez-Amaya, Víctor
2013-08-01
Here we tested whether the well-known superiority of spaced training over massed training is equally evident in both object identity and object location recognition memory. We trained animals with objects placed in a variable or in a fixed location to produce a location-independent object identity memory or a location-dependent object representation. The training consisted of 5 trials that occurred either on one day (Massed) or over the course of 5 consecutive days (Spaced). The memory test was done in independent groups of animals either 24 h or 7 days after the last training trial. In each test the animals were exposed to either a novel object, when trained with the objects in variable locations, or to a familiar object in a novel location, when trained with objects in fixed locations. The difference in time spent exploring the changed versus the familiar objects was used as a measure of recognition memory. For the object-identity-trained animals, spaced training produced clear evidence of recognition memory after both 24 h and 7 days, but massed-training animals showed it only after 24 h. In contrast, for the object-location-trained animals, recognition memory was evident after both retention intervals and with both training procedures. When objects were placed in variable locations for the two types of training and the test was done with a brand-new location, only the spaced-training animals showed recognition at 24 h, but surprisingly, after 7 days, animals trained using both procedures were able to recognize the change, suggesting a post-training consolidation process. We suggest that the two training procedures trigger different neural mechanisms that may differ in the two segregated streams that process object information and that may consolidate differently. Copyright © 2013 Elsevier B.V. All rights reserved.
Gerlach, Christian; Starrfelt, Randi
2018-03-20
There has been an increase in studies adopting an individual difference approach to examine visual cognition and in particular in studies trying to relate face recognition performance with measures of holistic processing (the face composite effect and the part-whole effect). In the present study we examine whether global precedence effects, measured by means of non-face stimuli in Navon's paradigm, can also account for individual differences in face recognition and, if so, whether the effect is of similar magnitude for faces and objects. We find evidence that global precedence effects facilitate both face and object recognition, and to a similar extent. Our results suggest that both face and object recognition are characterized by a coarse-to-fine temporal dynamic, where global shape information is derived prior to local shape information, and that the efficiency of face and object recognition is related to the magnitude of the global precedence effect.
Post-Training Reversible Inactivation of the Hippocampus Enhances Novel Object Recognition Memory
ERIC Educational Resources Information Center
Oliveira, Ana M. M.; Hawk, Joshua D.; Abel, Ted; Havekes, Robbert
2010-01-01
Research on the role of the hippocampus in object recognition memory has produced conflicting results. Previous studies have used permanent hippocampal lesions to assess the requirement for the hippocampus in the object recognition task. However, permanent hippocampal lesions may impact performance through effects on processes besides memory…
Shape and texture fused recognition of flying targets
NASA Astrophysics Data System (ADS)
Kovács, Levente; Utasi, Ákos; Kovács, Andrea; Szirányi, Tamás
2011-06-01
This paper presents visual detection and recognition of flying targets (e.g. planes, missiles) based on automatically extracted shape and object texture information, for application areas like alerting, recognition and tracking. Targets are extracted based on robust background modeling and a novel contour extraction approach, and object recognition is done by comparisons to shape and texture based query results on a previously gathered real life object dataset. Application areas involve passive defense scenarios, including automatic object detection and tracking with cheap commodity hardware components (CPU, camera and GPS).
Examining object recognition and object-in-Place memory in plateau zokors, Eospalax baileyi.
Hegab, Ibrahim M; Tan, Yuchen; Wang, Chan; Yao, Baohui; Wang, Haifang; Ji, Weihong; Su, Junhu
2018-01-01
Recognition memory is important for the survival and fitness of subterranean rodents due to the barren underground conditions, which require avoiding the burden of higher energy costs or possible conflict with conspecifics. Our study aims to examine object and object-in-place recognition memory in plateau zokors (Eospalax baileyi) and to test whether their underground life exerts sex-specific differences in memory function, using Novel Object Recognition (NOR) and Object-in-Place (OiP) paradigms. Animals were tested in the NOR with short (10 min) and long (24 h) inter-trial intervals (ITI) and in the OiP with a 30-min ITI between the familiarization and testing sessions. Plateau zokors showed a strong preference for novel objects, manifested by a longer exploration time for the novel object after the 10 min ITI, but failed to remember the familiar object when tested after 24 h, suggesting a lack of long-term memory. In the OiP test, zokors effectively formed an association between the objects and the place where they were formerly encountered, resulting in a longer duration of exploration of the switched objects. However, both sexes showed equivalent exploration times during the NOR and OiP tests, which rules out sex-specific variations in memory performance. Taken together, our study illustrates robust novelty preference and effective short-term recognition memory without marked sex-specific differences, which might elucidate the dynamics of recognition memory formation and retrieval in plateau zokors. Copyright © 2017 Elsevier B.V. All rights reserved.
Object recognition of ladar with support vector machine
NASA Astrophysics Data System (ADS)
Sun, Jian-Feng; Li, Qi; Wang, Qi
2005-01-01
Intensity, range and Doppler images can be obtained using laser radar. Laser radar can capture much more object information than other sensing modalities, such as passive infrared imaging and synthetic aperture radar (SAR), so it is well suited as a sensor for object recognition. Traditional methods of laser radar object recognition extract target features, a step that can be affected by noise. In this paper, a laser radar recognition method based on the Support Vector Machine (SVM) is introduced. The SVM became a focus of recognition research after neural networks, and it performs well on handwritten digit and face recognition. Two series of experiments, with and without sample preprocessing, were performed on real laser radar images, and the results are compared.
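The classification stage described in this abstract can be sketched with a minimal linear SVM trained by sub-gradient descent on the hinge loss. This is a Pegasos-style stand-in, not the solver or kernel the authors used; the feature extraction from ladar images is assumed to have happened upstream, and all names and hyperparameters are illustrative.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=100, lr=0.1):
    """Sub-gradient descent on the regularized hinge loss.
    X: (n, d) feature vectors; y: labels in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:      # point inside the margin
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                          # only the regularizer acts
                w -= lr * lam * w
    return w, b

def predict(w, b, X):
    """Sign of the decision function gives the predicted class."""
    return np.sign(X @ w + b)
```

In practice a kernel SVM (or a library solver) would replace this toy trainer, but the decision rule, the sign of a learned linear function in feature space, is the same.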
Implicit and Explicit Contributions to Object Recognition: Evidence from Rapid Perceptual Learning
Hassler, Uwe; Friese, Uwe; Gruber, Thomas
2012-01-01
The present study investigated implicit and explicit recognition processes of rapidly perceptually learned objects by means of steady-state visual evoked potentials (SSVEP). Participants were initially exposed to object pictures within an incidental learning task (living/non-living categorization). Subsequently, degraded versions of some of these learned pictures were presented together with degraded versions of unlearned pictures and participants had to judge, whether they recognized an object or not. During this test phase, stimuli were presented at 15 Hz eliciting an SSVEP at the same frequency. Source localizations of SSVEP effects revealed for implicit and explicit processes overlapping activations in orbito-frontal and temporal regions. Correlates of explicit object recognition were additionally found in the superior parietal lobe. These findings are discussed to reflect facilitation of object-specific processing areas within the temporal lobe by an orbito-frontal top-down signal as proposed by bi-directional accounts of object recognition. PMID:23056558
Haettig, Jakob; Stefanko, Daniel P.; Multani, Monica L.; Figueroa, Dario X.; McQuown, Susan C.; Wood, Marcelo A.
2011-01-01
Transcription of genes required for long-term memory not only involves transcription factors, but also enzymatic protein complexes that modify chromatin structure. Chromatin-modifying enzymes, such as the histone acetyltransferase (HAT) CREB (cyclic-AMP response element binding) binding protein (CBP), are pivotal for the transcriptional regulation required for long-term memory. Several studies have shown that CBP and histone acetylation are necessary for hippocampus-dependent long-term memory and hippocampal long-term potentiation (LTP). Importantly, every genetically modified Cbp mutant mouse exhibits long-term memory impairments in object recognition. However, the role of the hippocampus in object recognition is controversial. To better understand how chromatin-modifying enzymes modulate long-term memory for object recognition, we first examined the role of the hippocampus in retrieval of long-term memory for object recognition or object location. Muscimol inactivation of the dorsal hippocampus prior to retrieval had no effect on long-term memory for object recognition, but completely blocked long-term memory for object location. This was consistent with experiments showing that muscimol inactivation of the hippocampus had no effect on long-term memory for the object itself, supporting the idea that the hippocampus encodes spatial information about an object (such as location or context), whereas cortical areas (such as the perirhinal or insular cortex) encode information about the object itself. Using location-dependent object recognition tasks that engage the hippocampus, we demonstrate that CBP is essential for the modulation of long-term memory via HDAC inhibition. Together, these results indicate that HDAC inhibition modulates memory in the hippocampus via CBP and that different brain regions utilize different chromatin-modifying enzymes to regulate learning and memory. PMID:21224411
Robust Pedestrian Tracking and Recognition from FLIR Video: A Unified Approach via Sparse Coding
Li, Xin; Guo, Rui; Chen, Chao
2014-01-01
Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we propose to explore a sparse coding-based approach toward joint object tracking-and-recognition and explore its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches toward tracking and recognition under the same framework, so that they can benefit from each other in a closed loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamic updating of the template/dictionary and combining multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIA Pedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach. PMID:24961216
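The recognition half of a sparse-coding pipeline like the one in this abstract is often realized as sparse-representation classification: represent the test sample over a dictionary of class templates, then pick the class whose own atoms reconstruct it best. The sketch below uses greedy Orthogonal Matching Pursuit and a toy dictionary; it is not the authors' implementation, and all names are illustrative.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: greedily select up to k atoms of
    dictionary D (columns) to represent x, refitting by least squares."""
    residual, support = x.astype(float).copy(), []
    c = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        c, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ c
    coef = np.zeros(D.shape[1])
    coef[support] = c
    return coef

def src_classify(D, labels, x, k=2):
    """Class whose own atoms give the smallest reconstruction error."""
    coef = omp(D, x, k)
    errors = {}
    for cls in set(labels):
        mask = np.array([lab == cls for lab in labels])
        errors[cls] = np.linalg.norm(x - D[:, mask] @ coef[mask])
    return min(errors, key=errors.get)
```

Tracking closes the loop by supplying new, confidently tracked image patches that can be appended to `D`, the "dynamic updating of the template/dictionary" the abstract describes.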
Palmer, Daniel; Creighton, Samantha; Prado, Vania F; Prado, Marco A M; Choleris, Elena; Winters, Boyer D
2016-09-15
Substantial evidence implicates acetylcholine (ACh) in the acquisition of object memories. While most research has focused on the role of the cholinergic basal forebrain and its cortical targets, there are additional cholinergic networks that may contribute to object recognition. The striatum contains an independent cholinergic network comprised of interneurons. In the current study, we investigated the role of this cholinergic signalling in object recognition using mice deficient for the Vesicular Acetylcholine Transporter (VAChT) within interneurons of the striatum. We tested whether these striatal VAChT(D2-Cre-flox/flox) mice would display normal short-term (5 or 15 min retention delay) and long-term (3 h retention delay) object recognition memory. In a home cage object recognition task, male and female VAChT(D2-Cre-flox/flox) mice were impaired selectively with a 15 min retention delay. When tested on an object location task, VAChT(D2-Cre-flox/flox) mice displayed intact spatial memory. Finally, when object recognition was tested in a Y-shaped apparatus, designed to minimize the influence of spatial and contextual cues, only females displayed impaired recognition with a 5 min retention delay, but when males were challenged with a 15 min retention delay, they were also impaired; neither males nor females were impaired with the 3 h delay. The pattern of results suggests that striatal cholinergic transmission plays a role in short-term memory for object features, but not spatial location. Copyright © 2016 Elsevier B.V. All rights reserved.
Zelinsky, Gregory J; Peng, Yifan; Berg, Alexander C; Samaras, Dimitris
2013-10-08
Search is commonly described as a repeating cycle of guidance to target-like objects, followed by the recognition of these objects as targets or distractors. Are these indeed separate processes using different visual features? We addressed this question by comparing observer behavior to that of support vector machine (SVM) models trained on guidance and recognition tasks. Observers searched for a categorically defined teddy bear target in four-object arrays. Target-absent trials consisted of random category distractors rated in their visual similarity to teddy bears. Guidance, quantified as first-fixated objects during search, was strongest for targets, followed by target-similar, medium-similarity, and target-dissimilar distractors. False positive errors to first-fixated distractors also decreased with increasing dissimilarity to the target category. To model guidance, nine teddy bear detectors, using features ranging in biological plausibility, were trained on unblurred bears then tested on blurred versions of the same objects appearing in each search display. Guidance estimates were based on target probabilities obtained from these detectors. To model recognition, nine bear/nonbear classifiers, trained and tested on unblurred objects, were used to classify the object that would be fixated first (based on the detector estimates) as a teddy bear or a distractor. Patterns of categorical guidance and recognition accuracy were modeled almost perfectly by an HMAX model in combination with a color histogram feature. We conclude that guidance and recognition in the context of search are not separate processes mediated by different features, and that what the literature knows as guidance is really recognition performed on blurred objects viewed in the visual periphery.
Track Everything: Limiting Prior Knowledge in Online Multi-Object Recognition.
Wong, Sebastien C; Stamatescu, Victor; Gatt, Adam; Kearney, David; Lee, Ivan; McDonnell, Mark D
2017-10-01
This paper addresses the problem of online tracking and classification of multiple objects in an image sequence. Our proposed solution is to first track all objects in the scene without relying on object-specific prior knowledge, which in other systems can take the form of hand-crafted features or user-based track initialization. We then classify the tracked objects with a fast-learning image classifier that is based on a shallow convolutional neural network architecture, and demonstrate that object recognition improves when this is combined with object state information from the tracking algorithm. We argue that by transferring the use of prior knowledge from the detection and tracking stages to the classification stage, we can design a robust, general purpose object recognition system with the ability to detect and track a variety of object types. We describe our biologically inspired implementation, which adaptively learns the shape and motion of tracked objects, and apply it to the Neovision2 Tower benchmark data set, which contains multiple object types. An experimental evaluation demonstrates that our approach is competitive with state-of-the-art video object recognition systems that do make use of object-specific prior knowledge in detection and tracking, while providing additional practical advantages by virtue of its generality.
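The track-first, classify-later structure described in this abstract can be sketched with a minimal greedy IoU data-association step; a classifier (a shallow CNN in the paper) would then run on crops from each accumulated track. This is a generic illustration, not the authors' system; the threshold, dictionary keys, and function names are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def update_tracks(tracks, detections, thresh=0.3):
    """Greedily extend each track with its best-overlapping unused
    detection; detections left over start new tracks."""
    used = set()
    for track in tracks:
        scores = [(iou(track["boxes"][-1], det), i)
                  for i, det in enumerate(detections) if i not in used]
        if scores:
            best_iou, best_i = max(scores)
            if best_iou > thresh:
                track["boxes"].append(detections[best_i])
                used.add(best_i)
    for i, det in enumerate(detections):
        if i not in used:
            tracks.append({"boxes": [det]})
    return tracks
```

Because association here uses only geometry, any object type can be tracked; class-specific knowledge enters only when the classifier later labels each track, which is the division of labor the paper argues for.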
Running Improves Pattern Separation during Novel Object Recognition.
Bolz, Leoni; Heigele, Stefanie; Bischofberger, Josef
2015-10-09
Running increases adult neurogenesis and improves pattern separation in various memory tasks, including context fear conditioning and touch-screen based spatial learning. However, it is unknown whether pattern separation is improved in spontaneous behavior not emotionally biased by positive or negative reinforcement. Here we investigated the effect of voluntary running on pattern separation during novel object recognition in mice, using relatively similar or substantially different objects. We show that running increases hippocampal neurogenesis but does not affect object recognition memory with a 1.5 h delay after the sample phase. By contrast, at a 24 h delay, running significantly improves recognition memory for similar objects, whereas highly different objects can be distinguished by both running and sedentary mice. These data show that physical exercise improves pattern separation, independent of negative or positive reinforcement. In sedentary mice there is a pronounced temporal gradient for remembering object details. In running mice, however, increased neurogenesis improves hippocampal coding and temporally preserves the distinction of novel objects from familiar ones.
ERIC Educational Resources Information Center
Balderas, Israela; Rodriguez-Ortiz, Carlos J.; Salgado-Tonda, Paloma; Chavez-Hurtado, Julio; McGaugh, James L.; Bermudez-Rattoni, Federico
2008-01-01
These experiments investigated the involvement of several temporal lobe regions in consolidation of recognition memory. Anisomycin, a protein synthesis inhibitor, was infused into the hippocampus, perirhinal cortex, insular cortex, or basolateral amygdala of rats immediately after the sample phase of object or object-in-context recognition memory…
Peterson, M A; Gibson, B S
1994-11-01
In previous research, replicated here, we found that some object recognition processes influence figure-ground organization. We have proposed that these object recognition processes operate on edges (or contours) detected early in visual processing, rather than on regions. Consistent with this proposal, influences from object recognition on figure-ground organization were previously observed in both pictures and stereograms depicting regions of different luminance, but not in random-dot stereograms, where edges arise late in processing (Peterson & Gibson, 1993). In the present experiments, we examined whether or not two other types of contours--outlines and subjective contours--enable object recognition influences on figure-ground organization. For both types of contours we observed a pattern of effects similar to that originally obtained with luminance edges. The results of these experiments are valuable for distinguishing between alternative views of the mechanisms mediating object recognition influences on figure-ground organization. In addition, in both Experiments 1 and 2, fixated regions were seen as figure longer than nonfixated regions, suggesting that fixation location must be included among the variables relevant to figure-ground organization.
Superior voice recognition in a patient with acquired prosopagnosia and object agnosia.
Hoover, Adria E N; Démonet, Jean-François; Steeves, Jennifer K E
2010-11-01
Anecdotally, it has been reported that individuals with acquired prosopagnosia compensate for their inability to recognize faces by using other person identity cues such as hair, gait or the voice. Are they therefore superior at the use of non-face cues, specifically voices, to person identity? Here, we empirically measure person and object identity recognition in a patient with acquired prosopagnosia and object agnosia. We quantify person identity (face and voice) and object identity (car and horn) recognition for visual, auditory, and bimodal (visual and auditory) stimuli. The patient is unable to recognize faces or cars, consistent with his prosopagnosia and object agnosia, respectively. He is perfectly able to recognize people's voices, car horns, and bimodal stimuli. These data show a reverse shift in the typical weighting of visual over auditory information for audiovisual stimuli in a compromised visual recognition system. Moreover, the patient shows selectively superior voice recognition compared to the controls, revealing that two different stimulus domains, persons and objects, may not be equally affected by sensory adaptation effects. This also implies that person and object identity recognition are processed in separate pathways. These data demonstrate that an individual with acquired prosopagnosia and object agnosia can compensate for the visual impairment and become quite skilled at using spared aspects of sensory processing. In the case of acquired prosopagnosia it is advantageous to develop a superior use of voices for person identity recognition in everyday life. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Babayan, Pavel; Smirnov, Sergey; Strotov, Valery
2017-10-01
This paper describes an aerial object recognition algorithm for on-board and stationary vision systems. The suggested algorithm is intended to recognize objects of a specific kind using a set of reference objects defined by 3D models. The proposed algorithm is based on building an outer-contour descriptor. The algorithm consists of two stages: learning and recognition. The learning stage is devoted to exploring the reference objects. Using the 3D models, we can build a database of training images by rendering each 3D model from viewpoints evenly distributed on a sphere. The sphere points are distributed according to the geosphere principle. The gathered training image set is used to calculate descriptors, which are then used in the recognition stage of the algorithm. The recognition stage focuses on estimating the similarity of the captured object to the reference objects by matching an observed image descriptor against the reference object descriptors. The experimental research was performed using a set of models of aircraft of different types (airplanes, helicopters, UAVs). The proposed orientation estimation algorithm showed good accuracy in all case studies. Real-time performance of the algorithm in an FPGA-based vision system was demonstrated.
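The training-view generation above needs viewpoints evenly distributed on a sphere; the abstract cites "the geosphere principle" without giving the construction, so as an illustrative stand-in the sketch below uses a Fibonacci-sphere sampler, another common way to place near-uniform viewpoints.

```python
import math

def fibonacci_sphere(n):
    """Return n near-uniformly distributed unit vectors on a sphere.

    A stand-in for the paper's geosphere construction: each point is a
    candidate camera direction from which a 3D model could be rendered
    into a training image.
    """
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle in radians
    points = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n          # z strictly inside (-1, 1)
        r = math.sqrt(1.0 - z * z)             # radius of the z-slice
        theta = golden * i                     # spiral around the axis
        points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points

views = fibonacci_sphere(100)
# Every viewpoint lies on the unit sphere.
assert all(abs(x * x + y * y + z * z - 1.0) < 1e-9 for x, y, z in views)
```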
Raber, Jacob
2015-05-15
Object recognition is a sensitive cognitive test to detect effects of genetic and environmental factors on cognition in rodents. There are various versions of object recognition that have been used since the original test was reported by Ennaceur and Delacour in 1988. There are nonhuman primate and human primate versions of object recognition as well, allowing cross-species comparisons. As no language is required for test performance, object recognition is a very valuable test for human research studies in distinct parts of the world, including areas where there might be less years of formal education. The main focus of this review is to illustrate how object recognition can be used to assess cognition in humans under normal physiological and neurological conditions. Copyright © 2015 Elsevier B.V. All rights reserved.
Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.
2015-01-01
To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT (“face patches”) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. 
To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
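The winning linking hypothesis, a learned weighted sum of distributed mean firing rates, is simply a linear readout of the neural population. A minimal sketch on synthetic data (the rates, labels, and dimensions here are random stand-ins, not the authors' recordings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for mean firing rates: n_images x n_neurons.
n_images, n_neurons = 200, 50
rates = rng.normal(size=(n_images, n_neurons))

# Synthetic binary task labels (e.g., "is this image object A?"),
# constructed to be linearly decodable from the population.
true_w = rng.normal(size=n_neurons)
labels = (rates @ true_w > 0).astype(float)

# Learn the weighted sum by least squares on a training split.
train, test = slice(0, 150), slice(150, None)
w, *_ = np.linalg.lstsq(rates[train], labels[train] * 2 - 1, rcond=None)

# Execute the task as a simple weighted sum of firing rates.
pred = (rates[test] @ w > 0).astype(float)
accuracy = (pred == labels[test]).mean()
print(round(accuracy, 2))
```

The same recipe, applied to real IT population rates instead of this synthetic matrix, is what the abstract reports as sufficient to predict the human behavioral pattern.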
A validated set of tool pictures with matched objects and non-objects for laterality research.
Verma, Ark; Brysbaert, Marc
2015-01-01
Neuropsychological and neuroimaging research has established that knowledge related to tool use and tool recognition is lateralized to the left cerebral hemisphere. Recently, behavioural studies with the visual half-field technique have confirmed the lateralization. A limitation of this research was that different sets of stimuli had to be used for the comparison of tools to other objects and objects to non-objects. Therefore, we developed a new set of stimuli containing matched triplets of tools, other objects and non-objects. With the new stimulus set, we successfully replicated the findings of no visual field advantage for objects in an object recognition task combined with a significant right visual field advantage for tools in a tool recognition task. The set of stimuli is available as supplemental data to this article.
A method of object recognition for single pixel imaging
NASA Astrophysics Data System (ADS)
Li, Boxuan; Zhang, Wenwen
2018-01-01
Computational ghost imaging (CGI), utilizing a single-pixel detector, has been extensively used in many fields. However, in order to achieve a high-quality reconstructed image, a large number of iterations is needed, which limits the flexibility of using CGI in practical situations, especially in the field of object recognition. In this paper, we propose a method utilizing feature matching to identify number objects. In the given system, a recognition accuracy of approximately 90% can be achieved, which provides a new idea for the application of single-pixel imaging in the field of object recognition.
Contini, Erika W; Wardle, Susan G; Carlson, Thomas A
2017-10-01
Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
Markman, Adam; Shen, Xin; Hua, Hong; Javidi, Bahram
2016-01-15
An augmented reality (AR) smartglass display combines real-world scenes with digital information enabling the rapid growth of AR-based applications. We present an augmented reality-based approach for three-dimensional (3D) optical visualization and object recognition using axially distributed sensing (ADS). For object recognition, the 3D scene is reconstructed, and feature extraction is performed by calculating the histogram of oriented gradients (HOG) of a sliding window. A support vector machine (SVM) is then used for classification. Once an object has been identified, the 3D reconstructed scene with the detected object is optically displayed in the smartglasses allowing the user to see the object, remove partial occlusions of the object, and provide critical information about the object such as 3D coordinates, which are not possible with conventional AR devices. To the best of our knowledge, this is the first report on combining axially distributed sensing with 3D object visualization and recognition for applications to augmented reality. The proposed approach can have benefits for many applications, including medical, military, transportation, and manufacturing.
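The recognition pipeline described (HOG features of a sliding window classified by an SVM) can be illustrated in miniature. The numpy-only sketch below reduces HOG to a single global orientation histogram and substitutes a nearest-centroid rule for the SVM, so it conveys the idea rather than reproducing the authors' implementation:

```python
import numpy as np

def orientation_histogram(patch, bins=9):
    """Simplified HOG-style feature: one histogram of gradient
    orientations over the whole patch, weighted by gradient magnitude.
    (Full HOG uses many cells with block normalization.)"""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Toy "training": a vertical-edge and a horizontal-edge patch.
vert = np.tile([0., 0., 1., 1.], (4, 1))     # intensity changes left-to-right
centroids = {"vertical": orientation_histogram(vert),
             "horizontal": orientation_histogram(vert.T)}

def classify(patch):
    """Nearest-centroid stand-in for the SVM classification step."""
    f = orientation_histogram(patch)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))

print(classify(np.tile([0., 0., 1., 1.], (4, 1))))  # vertical
```

In the actual system this classification would run on each sliding-window position of the 3D scene reconstructed from the axially distributed sensors.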
HWDA: A coherence recognition and resolution algorithm for hybrid web data aggregation
NASA Astrophysics Data System (ADS)
Guo, Shuhang; Wang, Jian; Wang, Tong
2017-09-01
Aiming at the object-conflict recognition and resolution problem in hybrid distributed data stream aggregation, a distributed data stream object coherence solution is proposed. First, a framework, named HWDA, is defined for object coherence conflict recognition and resolution. Second, an object coherence recognition technique is proposed based on a formal-language description logic and hierarchical dependency relationships between logic rules. Third, a conflict traversal recognition algorithm is proposed based on the defined dependency graph. Next, the conflict resolution technique is presented based on resolution pattern matching, including definitions of the three types of conflict, the conflict resolution matching patterns, and the arbitration resolution method. Finally, experiments on two kinds of web test data sets validate the effectiveness of the conflict recognition and resolution technology of HWDA.
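The abstract does not spell out HWDA's conflict definitions, but the core of any such system is recognizing when aggregated records describing the same object disagree. A generic, hypothetical sketch of that recognition step (the grouping key and attribute layout are assumptions, not the paper's schema):

```python
from collections import defaultdict

def find_conflicts(records):
    """Group aggregated web records by object key and flag attributes
    whose values disagree across sources. A generic stand-in for
    HWDA's conflict recognition; the paper's actual rule-based,
    dependency-graph traversal is not reproduced here."""
    by_key = defaultdict(lambda: defaultdict(set))
    for rec in records:
        for attr, value in rec["attrs"].items():
            by_key[rec["key"]][attr].add(value)
    # Keep only objects with at least one multi-valued attribute.
    return {key: {a: sorted(v) for a, v in attrs.items() if len(v) > 1}
            for key, attrs in by_key.items()
            if any(len(v) > 1 for v in attrs.values())}

records = [
    {"key": "item1", "attrs": {"price": 10, "stock": "yes"}},
    {"key": "item1", "attrs": {"price": 12, "stock": "yes"}},
    {"key": "item2", "attrs": {"price": 5}},
]
print(find_conflicts(records))  # {'item1': {'price': [10, 12]}}
```

Resolution would then apply a policy (e.g., most recent source wins, or arbitration among sources) to each flagged attribute.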
Perirhinal Cortex Lesions in Rats: Novelty Detection and Sensitivity to Interference
2015-01-01
Rats with perirhinal cortex lesions received multiple object recognition trials within a continuous session to examine whether they show false memories. Experiment 1 focused on exploration patterns during the first object recognition test postsurgery, in which each trial contained 1 novel and 1 familiar object. The perirhinal cortex lesions reduced time spent exploring novel objects, but did not affect overall time spent exploring the test objects (novel plus familiar). Replications with subsequent cohorts of rats (Experiments 2, 3, 4.1) repeated this pattern of results. When all recognition memory data were combined (Experiments 1–4), giving totals of 44 perirhinal lesion rats and 40 surgical sham controls, the perirhinal cortex lesions caused a marginal reduction in total exploration time. That decrease in time with novel objects was often compensated by increased exploration of familiar objects. Experiment 4 also assessed the impact of proactive interference on recognition memory. Evidence emerged that prior object experience could additionally impair recognition performance in rats with perirhinal cortex lesions. Experiment 5 examined exploration levels when rats were just given pairs of novel objects to explore. Despite their perirhinal cortex lesions, exploration levels were comparable with those of control rats. While the results of Experiment 4 support the notion that perirhinal lesions can increase sensitivity to proactive interference, the overall findings question whether rats lacking a perirhinal cortex typically behave as if novel objects are familiar, that is, show false recognition. Rather, the rats retain a signal of novelty but struggle to discriminate the identity of that signal. PMID:26030425
General object recognition is specific: Evidence from novel and familiar objects.
Richler, Jennifer J; Wilmer, Jeremy B; Gauthier, Isabel
2017-09-01
In tests of object recognition, individual differences typically correlate modestly but nontrivially across familiar categories (e.g. cars, faces, shoes, birds, mushrooms). In theory, these correlations could reflect either global, non-specific mechanisms, such as general intelligence (IQ), or more specific mechanisms. Here, we introduce two separate methods for effectively capturing category-general performance variation, one that uses novel objects and one that uses familiar objects. In each case, we show that category-general performance variance is unrelated to IQ, thereby implicating more specific mechanisms. The first approach examines three newly developed novel object memory tests (NOMTs). We predicted that NOMTs would exhibit more shared, category-general variance than familiar object memory tests (FOMTs) because novel objects, unlike familiar objects, lack category-specific environmental influences (e.g. exposure to car magazines or botany classes). This prediction held, and remarkably, virtually none of the substantial shared variance among NOMTs was explained by IQ. Also, while NOMTs correlated nontrivially with two FOMTs (faces, cars), these correlations were smaller than among NOMTs and no larger than between the face and car tests themselves, suggesting that the category-general variance captured by NOMTs is specific not only relative to IQ, but also, to some degree, relative to both face and car recognition. The second approach averaged performance across multiple FOMTs, which we predicted would increase category-general variance by averaging out category-specific factors. This prediction held, and as with NOMTs, virtually none of the shared variance among FOMTs was explained by IQ. Overall, these results support the existence of object recognition mechanisms that, though category-general, are specific relative to IQ and substantially separable from face and car recognition. 
They also add sensitive, well-normed NOMTs to the tools available to study object recognition. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
de la Rosa, Stephan; Choudhery, Rabia N.; Chatziastros, Astros
2011-01-01
Recent evidence suggests that the recognition of an object's presence and its explicit recognition are temporally closely related. Here we re-examined the time course (using a fine and a coarse temporal resolution) and the sensitivity of three possible component processes of visual object recognition. In particular, participants saw briefly…
ERIC Educational Resources Information Center
Richler, Jennifer J.; Gauthier, Isabel; Palmeri, Thomas J.
2011-01-01
Are there consequences of calling objects by their names? Lupyan (2008) suggested that overtly labeling objects impairs subsequent recognition memory because labeling shifts stored memory representations of objects toward the category prototype (representational shift hypothesis). In Experiment 1, we show that processing objects at the basic…
Faillace, M P; Pisera-Fuster, A; Medrano, M P; Bejarano, A C; Bernabeu, R O
2017-03-01
Zebrafish have a sophisticated color- and shape-sensitive visual system, so we examined color cue-based novel object recognition in zebrafish. We evaluated preference in the absence or presence of drugs that affect attention and memory retention in rodents: nicotine and the histone deacetylase inhibitor (HDACi) phenylbutyrate (PhB). The objective of this study was to evaluate whether nicotine and PhB affect innate preferences of zebrafish for familiar and novel objects after short- and long-retention intervals. We developed modified object recognition (OR) tasks using neutral novel and familiar objects in different colors. We also tested objects which differed with respect to the exploratory behavior they elicited from naïve zebrafish. Zebrafish showed an innate preference for exploring red or green objects rather than yellow or blue objects. Zebrafish were better at discriminating color changes than changes in object shape or size. Nicotine significantly enhanced or changed short-term innate novel object preference whereas PhB had similar effects when preference was assessed 24 h after training. Analysis of other zebrafish behaviors corroborated these results. Zebrafish were innately reluctant or prone to explore colored novel objects, so drug effects on innate preference for objects can be evaluated changing the color of objects with a simple geometry. Zebrafish exhibited recognition memory for novel objects with similar innate significance. Interestingly, nicotine and PhB significantly modified innate object preference.
Does object view influence the scene consistency effect?
Sastyin, Gergo; Niimi, Ryosuke; Yokosawa, Kazuhiko
2015-04-01
Traditional research on the scene consistency effect only used clearly recognizable object stimuli to show mutually interactive context effects for both the object and background components on scene perception (Davenport & Potter in Psychological Science, 15, 559-564, 2004). However, in real environments, objects are viewed from multiple viewpoints, including an accidental, hard-to-recognize one. When the observers named target objects in scenes (Experiments 1a and 1b, object recognition task), we replicated the scene consistency effect (i.e., there was higher accuracy for the objects with consistent backgrounds). However, there was a significant interaction effect between consistency and object viewpoint, which indicated that the scene consistency effect was more important for identifying objects in the accidental view condition than in the canonical view condition. Therefore, the object recognition system may rely more on the scene context when the object is difficult to recognize. In Experiment 2, the observers identified the background (background recognition task) while the scene consistency and object views were manipulated. The results showed that object viewpoint had no effect, while the scene consistency effect was observed. More specifically, the canonical and accidental views both equally provided contextual information for scene perception. These findings suggested that the mechanism for conscious recognition of objects could be dissociated from the mechanism for visual analysis of object images that were part of a scene. The "context" that the object images provided may have been derived from its view-invariant, relatively low-level visual features (e.g., color), rather than its semantic information.
Short temporal asynchrony disrupts visual object recognition
Singer, Jedediah M.; Kreiman, Gabriel
2014-01-01
Humans can recognize objects and scenes in a small fraction of a second. The cascade of signals underlying rapid recognition might be disrupted by temporally jittering different parts of complex objects. Here we investigated the time course over which shape information can be integrated to allow for recognition of complex objects. We presented fragments of object images in an asynchronous fashion and behaviorally evaluated categorization performance. We observed that visual recognition was significantly disrupted by asynchronies of approximately 30 ms, suggesting that spatiotemporal integration begins to break down with even small deviations from simultaneity. However, moderate temporal asynchrony did not completely obliterate recognition; in fact, integration of visual shape information persisted even with an asynchrony of 100 ms. We describe the data with a concise model based on the dynamic reduction of uncertainty about what image was presented. These results emphasize the importance of timing in visual processing and provide strong constraints for the development of dynamical models of visual shape recognition. PMID:24819738
Rolls, Edmund T; Mills, W Patrick C
2018-05-01
When objects transform into different views, some properties are maintained, such as whether the edges are convex or concave, and these non-accidental properties are likely to be important in view-invariant object recognition. The metric properties, such as the degree of curvature, may change with different views, and are less likely to be useful in object recognition. It is shown that in a model of invariant visual object recognition in the ventral visual stream, VisNet, non-accidental properties are encoded much more than metric properties by neurons. Moreover, it is shown how with the temporal trace rule training in VisNet, non-accidental properties of objects become encoded by neurons, and how metric properties are treated invariantly. We also show how VisNet can generalize between different objects if they have the same non-accidental property, because the metric properties are likely to overlap. VisNet is a 4-layer unsupervised model of visual object recognition trained by competitive learning that utilizes a temporal trace learning rule to implement the learning of invariance using views that occur close together in time. A second crucial property of this model of object recognition is whether, when neurons in the level corresponding to the inferior temporal visual cortex respond selectively to objects, neurons in the intermediate layers can respond to combinations of features that may be parts of two or more objects. In an investigation using the four sides of a square presented in every possible combination, it was shown that even though different layer 4 neurons are tuned to encode each feature or feature combination orthogonally, neurons in the intermediate layers can respond to features or feature combinations present in several objects. This property is an important part of the way in which high capacity can be achieved in the four-layer ventral visual cortical pathway.
These findings concerning non-accidental properties and the use of neurons in intermediate layers of the hierarchy help to emphasise fundamental underlying principles of the computations that may be implemented in the ventral cortical visual stream used in object recognition. Copyright © 2018 Elsevier Inc. All rights reserved.
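The temporal trace learning rule that VisNet uses to bind views occurring close together in time has a standard form in the literature: the Hebbian postsynaptic term is replaced by an exponentially decaying trace of recent activity. A minimal sketch for a single output neuron (the parameter values and inputs are illustrative, not VisNet's):

```python
import numpy as np

def trace_rule_update(w, x_seq, eta=0.8, alpha=0.1):
    """One pass of a temporal-trace Hebbian update.

    w     : weight vector of one output neuron
    x_seq : sequence of input vectors (successive views of an object)
    eta   : trace decay; eta = 0 recovers plain Hebbian learning
    alpha : learning rate

    The trace y_bar(t) = (1 - eta) * y(t) + eta * y_bar(t - 1) makes
    the neuron associate views that occur close together in time.
    """
    w = w.copy()
    y_bar = 0.0
    for x in x_seq:
        y = float(w @ x)                       # postsynaptic activation
        y_bar = (1.0 - eta) * y + eta * y_bar  # decaying activity trace
        w += alpha * y_bar * x                 # Hebbian update with trace
    return w

rng = np.random.default_rng(1)
views = [rng.normal(size=16) for _ in range(5)]  # successive object views
w0 = rng.normal(size=16)
w1 = trace_rule_update(w0, views)
```

Because the trace carries activity from one view into the next, weights strengthen onto inputs from all views of the same object, which is how view invariance emerges from temporal continuity.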
Analysis of objects in binary images. M.S. Thesis - Old Dominion Univ.
NASA Technical Reports Server (NTRS)
Leonard, Desiree M.
1991-01-01
Digital image processing techniques are typically used to produce improved digital images through the application of successive enhancement techniques to a given image or to generate quantitative data about the objects within that image. In support of researchers in a wide range of disciplines, e.g., interferometry, heavy rain effects on aerodynamics, and structure recognition research, it is often desirable to count objects in an image and compute their geometric properties. Therefore, an image analysis application package, focusing on a subset of image analysis techniques used for object recognition in binary images, was developed. This report describes the techniques and algorithms utilized in the three main phases of the application, categorized as image segmentation, object recognition, and quantitative analysis. Appendices provide supplemental formulas for the algorithms employed, as well as examples and results from the various image segmentation techniques and the object recognition algorithm implemented.
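Counting objects in a binary image, the starting point of the report's object recognition phase, is typically done with connected-component labeling. A pure-Python sketch with 4-connectivity (the report's actual algorithms may differ):

```python
from collections import deque

def label_components(image):
    """Label 4-connected foreground components in a binary image.

    Returns (labels, count) where labels[r][c] is 0 for background or
    the component id. The labels are a simple basis for geometric
    properties such as area and centroid.
    """
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not labels[r][c]:
                count += 1
                queue = deque([(r, c)])
                labels[r][c] = count
                while queue:                   # flood fill (BFS)
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count

img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
labels, n = label_components(img)
print(n)  # 2
```

Component areas then fall out of the label map directly (e.g., area of object k is the number of cells labeled k), which is the kind of quantitative analysis the third phase performs.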
The Last Meter: Blind Visual Guidance to a Target.
Manduchi, Roberto; Coughlan, James M
2014-01-01
Smartphone apps can use object recognition software to provide information to blind or low vision users about objects in the visual environment. A crucial challenge for these users is aiming the camera properly to take a well-framed picture of the desired target object. We investigate the effects of two fundamental constraints of object recognition - frame rate and camera field of view - on a blind person's ability to use an object recognition smartphone app. The app was used by 18 blind participants to find visual targets beyond arm's reach and approach them to within 30 cm. While we expected that a faster frame rate or wider camera field of view should always improve search performance, our experimental results show that in many cases increasing the field of view does not help, and may even hurt, performance. These results have important implications for the design of object recognition systems for blind users.
Cross, Laura; Brown, Malcolm W; Aggleton, John P; Warburton, E Clea
2012-12-21
In humans, recognition memory deficits, a typical feature of diencephalic amnesia, have been tentatively linked to mediodorsal thalamic nucleus (MD) damage. Animal studies have occasionally investigated the role of the MD in single-item recognition, but have not systematically analyzed its involvement in other recognition memory processes. In Experiment 1 rats with bilateral excitotoxic lesions in the MD or the medial prefrontal cortex (mPFC) were tested in tasks that assessed single-item recognition (novel object preference), associative recognition memory (object-in-place), and recency discrimination (recency memory task). Experiment 2 examined the functional importance of the interactions between the MD and mPFC using disconnection techniques. Unilateral excitotoxic lesions were placed in both the MD and the mPFC in either the same (MD + mPFC Ipsi) or opposite hemispheres (MD + mPFC Contra group). Bilateral lesions in the MD or mPFC impaired object-in-place and recency memory tasks, but had no effect on novel object preference. In Experiment 2 the MD + mPFC Contra group was significantly impaired in the object-in-place and recency memory tasks compared with the MD + mPFC Ipsi group, but novel object preference was intact. Thus, connections between the MD and mPFC are critical for recognition memory when the discriminations involve associative or recency information. However, the rodent MD is not necessary for single-item recognition memory.
Recognition memory in tree shrew (Tupaia belangeri) after repeated familiarization sessions.
Khani, Abbas; Rainer, Gregor
2012-07-01
Recognition memories are formed during perceptual experience and allow subsequent recognition of previously encountered objects as well as their distinction from novel objects. As a consequence, novel objects are generally explored longer than familiar objects by many species. This novelty preference has been documented in rodents using the novel object recognition (NOR) test, as well as in primates including humans using preferential looking time paradigms. Here, we examine novelty preference using the NOR task in tree shrew, a small animal species that is considered to be an intermediary between rodents and primates. Our paradigm consisted of three phases: arena familiarization, object familiarization sessions with two identical objects in the arena and finally a test session following a 24-h retention period with a familiar and a novel object in the arena. We employed two different object familiarization durations: one and three sessions on consecutive days. After three object familiarization sessions, tree shrews exhibited robust preference for novel objects on the test day. This was accompanied by significant reduction in familiar object exploration time, occurring largely between the first and second day of object familiarization. By contrast, tree shrews did not show a significant preference for the novel object after a one-session object familiarization. Nonetheless, they spent significantly less time exploring the familiar object on the test day compared to the object familiarization day, indicating that they did maintain a memory trace for the familiar object. Our study revealed different time courses for familiar object habituation and emergence of novelty preference, suggesting that novelty preference is dependent on well-consolidated memory of the competing familiar object. Taken together, our results demonstrate robust novelty preference of tree shrews, in general similarity to previous findings in rodents and primates. Copyright © 2012 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Collin, Charles A.; Liu, Chang Hong; Troje, Nikolaus F.; McMullen, Patricia A.; Chaudhuri, Avi
2004-01-01
Previous studies have suggested that face identification is more sensitive to variations in spatial frequency content than object recognition, but none have compared how sensitive the 2 processes are to variations in spatial frequency overlap (SFO). The authors tested face and object matching accuracy under varying SFO conditions. Their results…
ERIC Educational Resources Information Center
Kuusikko-Gauffin, Sanna; Jansson-Verkasalo, Eira; Carter, Alice; Pollock-Wurman, Rachel; Jussila, Katja; Mattila, Marja-Leena; Rahko, Jukka; Ebeling, Hanna; Pauls, David; Moilanen, Irma
2011-01-01
Children with Autism Spectrum Disorders (ASDs) have been reported to have impairments in face recognition and face memory, but intact object recognition and object memory. Potential abnormalities in these fields at the family level of high-functioning children with ASD remain understudied despite the ever-mounting evidence that ASDs are genetic and…
NASA Astrophysics Data System (ADS)
Buryi, E. V.
1998-05-01
The main problems in the synthesis of an object recognition system based on the operating principles of neural networks are considered. The advantages of a hierarchical structure for the recognition algorithm are demonstrated. The use of amplitude-spectrum readings of signals as information tags is justified, and a method is developed for determining the dimensionality of the tag space. Methods are suggested for ensuring the stability of object recognition in the optical range. It is concluded that recognition of complex objects from different perspectives should be possible.
Lateral entorhinal cortex is necessary for associative but not nonassociative recognition memory
Wilson, David IG; Watanabe, Sakurako; Milner, Helen; Ainge, James A
2013-01-01
The lateral entorhinal cortex (LEC) provides one of the two major input pathways to the hippocampus and has been suggested to process the nonspatial contextual details of episodic memory. Combined with spatial information from the medial entorhinal cortex it is hypothesised that this contextual information is used to form an integrated spatially selective, context-specific response in the hippocampus that underlies episodic memory. Recently, we reported that the LEC is required for recognition of objects that have been experienced in a specific context (Wilson et al. (2013) Hippocampus 23:352-366). Here, we sought to extend this work to assess the role of the LEC in recognition of all associative combinations of objects, places and contexts within an episode. Unlike controls, rats with excitotoxic lesions of the LEC showed no evidence of recognizing familiar combinations of object in place, place in context, or object in place and context. However, LEC lesioned rats showed normal recognition of objects and places independently from each other (nonassociative recognition). Together with our previous findings, these data suggest that the LEC is critical for associative recognition memory and may bind together information relating to objects, places, and contexts needed for episodic memory formation. PMID:23836525
Sensor agnostic object recognition using a map seeking circuit
NASA Astrophysics Data System (ADS)
Overman, Timothy L.; Hart, Michael
2012-05-01
Automatic object recognition capabilities are traditionally tuned to exploit the specific sensing modality they were designed for. Their successes (and shortcomings) are tied to object segmentation from the background, they typically require highly skilled personnel to train them, and they become cumbersome with the introduction of new objects. In this paper we describe a sensor-independent algorithm based on the biologically inspired technology of map seeking circuits (MSC) which overcomes many of these obstacles. In particular, the MSC concept offers transparency in object recognition through a common interface to all sensor types, analogous to a USB device. It also provides a common core framework that is independent of the sensor and expandable to support high-dimensionality decision spaces. Ease of training is assured by using commercially available 3D models from the video game community. The search time remains linear no matter how many objects are introduced, ensuring rapid object recognition. Here, we report results of an MSC algorithm applied to object recognition and pose estimation from high range resolution radar (1D), electro-optical imagery (2D), and LIDAR point clouds (3D) separately. By abstracting the sensor phenomenology from the underlying a priori knowledge base, MSC shows promise as an easily adaptable tool for incorporating additional sensor inputs.
Calderone, Daniel J.; Hoptman, Matthew J.; Martínez, Antígona; Nair-Collins, Sangeeta; Mauro, Cristina J.; Bar, Moshe; Javitt, Daniel C.; Butler, Pamela D.
2013-01-01
Patients with schizophrenia exhibit cognitive and sensory impairment, and object recognition deficits have been linked to sensory deficits. The “frame and fill” model of object recognition posits that low spatial frequency (LSF) information rapidly reaches the prefrontal cortex (PFC) and creates a general shape of an object that feeds back to the ventral temporal cortex to assist object recognition. Visual dysfunction findings in schizophrenia suggest a preferential loss of LSF information. This study used functional magnetic resonance imaging (fMRI) and resting state functional connectivity (RSFC) to investigate the contribution of visual deficits to impaired object “framing” circuitry in schizophrenia. Participants were shown object stimuli that were intact or contained only LSF or high spatial frequency (HSF) information. For controls, fMRI revealed preferential activation to LSF information in precuneus, superior temporal, and medial and dorsolateral PFC areas, whereas patients showed a preference for HSF information or no preference. RSFC revealed a lack of connectivity between early visual areas and PFC for patients. These results demonstrate impaired processing of LSF information during object recognition in schizophrenia, with patients instead displaying increased processing of HSF information. This is consistent with findings of a preference for local over global visual information in schizophrenia. PMID:22735157
ERIC Educational Resources Information Center
Acres, K.; Taylor, K. I.; Moss, H. E.; Stamatakis, E. A.; Tyler, L. K.
2009-01-01
Cognitive neuroscientific research proposes complementary hemispheric asymmetries in naming and recognising visual objects, with a left temporal lobe advantage for object naming and a right temporal lobe advantage for object recognition. Specifically, it has been proposed that the left inferior temporal lobe plays a mediational role linking…
Priming Contour-Deleted Images: Evidence for Intermediate Representations in Visual Object Recognition.
ERIC Educational Resources Information Center
Biederman, Irving; Cooper, Eric E.
1991-01-01
Speed and accuracy of identification of pictures of objects are facilitated by prior viewing. Contributions of image features, convex or concave components, and object models in a repetition priming task were explored in 2 studies involving 96 college students. Results provide evidence of intermediate representations in visual object recognition.…
Electrophysiological evidence for effects of color knowledge in object recognition.
Lu, Aitao; Xu, Guiping; Jin, Hua; Mo, Lei; Zhang, Jijia; Zhang, John X
2010-01-29
Knowledge about the typical colors of familiar everyday objects (e.g., that strawberries are red) is well known to be represented in the conceptual semantic system. Evidence that such knowledge may also play a role in early perceptual processes for object recognition is scant. In the present ERP study, participants viewed a list of object pictures and detected infrequent stimulus repetitions. Results show that shortly after stimulus onset, ERP components indexing early perceptual processes, including N1, P2, and N2, differentiated objects shown in their appropriate (congruent) color from the same objects shown in an inappropriate (incongruent) color. This congruence effect also occurred in the N3 component, associated with semantic processing of pictures, but not in the N4, associated with domain-general semantic processing. Our results demonstrate a clear effect of color knowledge at early stages of object recognition and support the proposal that color, as a surface property, is stored in a multiple-memory system in which pre-semantic perceptual and semantic conceptual representations interact during object recognition. (c) 2009 Elsevier Ireland Ltd. All rights reserved.
Coding of visual object features and feature conjunctions in the human brain.
Martinovic, Jasna; Gruber, Thomas; Müller, Matthias M
2008-01-01
Object recognition is achieved through neural mechanisms reliant on the activity of distributed coordinated neural assemblies. In the initial steps of this process, an object's features are thought to be coded very rapidly in distinct neural assemblies. These features play different functional roles in the recognition process: while colour facilitates recognition, additional contours and edges delay it. Here, we selectively varied the amount and role of object features in an entry-level categorization paradigm and related them to the electrical activity of the human brain. We found that early synchronizations (approx. 100 ms) increased quantitatively when more image features had to be coded, without reflecting their qualitative contribution to the recognition process. Later activity (approx. 200-400 ms) was modulated by the representational role of object features. These findings demonstrate that although early synchronizations may be sufficient for relatively crude discrimination of objects in visual scenes, they cannot support entry-level categorization. This was subserved by later processes of object model selection, which utilized the representational value of object features such as colour or edges to select the appropriate model and achieve identification.
Yamada, Kazuo; Arai, Misaki; Suenaga, Toshiko; Ichitani, Yukio
2017-07-28
The hippocampus is thought to be involved in object location recognition memory, yet the contribution of hippocampal NMDA receptors to the component memory processes, such as encoding, retention and retrieval, is unknown. First, we confirmed that hippocampal infusion of a competitive NMDA receptor antagonist, AP5 (2-amino-5-phosphonopentanoic acid, 20-40 nmol), impaired performance in the spontaneous object location recognition test but not in the novel object recognition test in Wistar rats. Next, the effects of hippocampal AP5 treatment on each process of object location recognition memory were examined with three different injection times using a test with an interposed 120 min delay: 15 min before the sample phase (Time I), immediately after the sample phase (Time II), and 15 min before the test phase (Time III). The blockade of hippocampal NMDA receptors before and immediately after the sample phase, but not before the test phase, markedly impaired performance in the object location recognition test, suggesting that hippocampal NMDA receptors play an important role in encoding and consolidation/retention, but not retrieval, of spontaneous object location memory. Copyright © 2017 Elsevier B.V. All rights reserved.
Combining color and shape information for illumination-viewpoint invariant object recognition.
Diplaros, Aristeidis; Gevers, Theo; Patras, Ioannis
2006-01-01
In this paper, we propose a new scheme that merges color- and shape-invariant information for object recognition. To obtain robustness against photometric changes, color-invariant derivatives are computed first. Color invariance is an important aspect of any object recognition scheme, as color changes considerably with variation in illumination, object pose, and camera viewpoint. These color-invariant derivatives are then used to obtain similarity-invariant shape descriptors. Shape invariance is equally important as, under a change in camera viewpoint and object pose, the shape of a rigid object undergoes a perspective projection on the image plane. The color and shape invariants are then combined in a multidimensional color-shape context which is subsequently used as an index. As the indexing scheme makes use of a color-shape invariant context, it provides a highly discriminative information cue that is robust against varying imaging conditions. The matching function of the color-shape context allows for fast recognition, even in the presence of object occlusion and clutter. The experimental results show that the method recognizes rigid objects with high accuracy in complex 3-D scenes and is robust against changing illumination, camera viewpoint, object pose, and noise.
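The color-invariant derivatives mentioned above can be illustrated with the simplest member of that family: gradients of normalized rgb chromaticities, which discount uniform intensity changes. This is a generic sketch of the idea, not the specific invariants used in the paper:

```python
import numpy as np

def chromaticity_gradients(img):
    """Illustrative illumination-invariant edge map.

    img: H x W x 3 float array (RGB). Normalized rgb chromaticities are
    invariant to uniform intensity scaling, so gradients computed on them
    respond to material/color edges rather than to shading or brightness.
    """
    intensity = img.sum(axis=2, keepdims=True) + 1e-8
    chrom = img / intensity                          # r, g, b chromaticities
    gy, gx = np.gradient(chrom, axis=(0, 1))         # spatial derivatives
    return np.sqrt((gx ** 2 + gy ** 2).sum(axis=2))  # gradient magnitude

# A flat color patch with a pure brightness step yields (near-)zero
# invariant gradients: the step is discounted as an illumination change.
patch = np.ones((8, 8, 3)) * [0.2, 0.5, 0.3]
patch[:, 4:] *= 2.0                                  # brightness change only
print(float(chromaticity_gradients(patch).max()))    # ~0.0
```

A genuine chromaticity edge (different r:g:b ratios on the two sides) would survive this normalization, which is what makes such derivatives useful as a first stage before shape descriptors.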
Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu
2018-01-01
Current retinal prostheses can generate only low-resolution visual percepts, composed of a limited number of phosphenes elicited by an electrode array, with uncontrollable color and restricted grayscale. With such percepts, prosthetic recipients can complete only simple visual tasks; more complex tasks like face identification and object recognition are extremely difficult. Therefore, it is necessary to investigate and apply image processing strategies for optimizing the visual perception of the recipients. This study focuses on recognition of the object of interest employing simulated prosthetic vision. We used a saliency segmentation method based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive-iterative optimization framework to automatically extract foreground objects. Based on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further utilized to enhance the extracted foreground objects. (i) Psychophysical experiments verified that, under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. (ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was positively affected by paired, interrelated objects in the scene. The use of the saliency segmentation method and image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance for recipients implanted with a high-density array. Copyright © 2017 Elsevier B.V. All rights reserved.
Object recognition with severe spatial deficits in Williams syndrome: sparing and breakdown.
Landau, Barbara; Hoffman, James E; Kurz, Nicole
2006-07-01
Williams syndrome (WS) is a rare genetic disorder that results in severe visual-spatial cognitive deficits coupled with relative sparing in language, face recognition, and certain aspects of motion processing. Here, we look for evidence for sparing or impairment in another cognitive system: object recognition. Children with WS, normal mental-age-matched (MA) and chronological-age-matched (CA) children, and normal adults viewed pictures of a large range of objects briefly presented under various conditions of degradation, including canonical and unusual orientations, and clear or blurred contours. Objects were shown as either full-color views (Experiment 1) or line drawings (Experiment 2). Across both experiments, WS and MA children performed similarly in all conditions, while CA children performed better than both the WS and MA groups with unusual views. This advantage, however, was eliminated when images were also blurred. The error types and relative difficulty of different objects were similar across all participant groups. The results indicate selective sparing of basic mechanisms of object recognition in WS, together with developmental delay or arrest in recognition of objects from unusual viewpoints. These findings are consistent with the growing literature on brain abnormalities in WS, which points to selective impairment in the parietal areas of the brain. As a whole, the results lend further support to the growing literature on the functional separability of object recognition mechanisms from other spatial functions, and raise intriguing questions about the link between genetic deficits and cognition.
Parallel and distributed computation for fault-tolerant object recognition
NASA Technical Reports Server (NTRS)
Wechsler, Harry
1988-01-01
The distributed associative memory (DAM) model is suggested for distributed and fault-tolerant computation as it relates to object recognition tasks. The fault-tolerance is with respect to geometrical distortions (scale and rotation), noisy inputs, occlusion/overlap, and memory faults. An experimental system was developed for fault-tolerant structure recognition which shows the feasibility of such an approach. The approach is further extended to the problem of multisensory data integration and applied successfully to the recognition of colored polyhedral objects.
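The DAM concept can be illustrated with the simplest linear associative memory: superimposed outer products of key/value pairs, recalled by a single matrix product. Because every association is spread across all weights, corrupting part of the probe (or of the memory) degrades recall gracefully rather than catastrophically. A toy sketch, not the paper's system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlation-matrix (linear) associative memory. Five random +/-1
# "feature" patterns are associated with one-hot object labels by
# superimposed outer products; recall is one matrix product.
keys = rng.choice([-1.0, 1.0], size=(5, 64))     # stored input patterns
values = np.eye(5)                               # one-hot object labels
M = values.T @ keys / keys.shape[1]              # superimposed associations

def recall(probe):
    """Return the index of the best-matching stored pattern."""
    return int(np.argmax(M @ probe))

# Fault tolerance: corrupt 10 of 64 elements and recall still succeeds.
noisy = keys[2].copy()
noisy[:10] *= -1.0
print(recall(noisy))  # 2
```

The same graceful degradation applies when rows of `M` are zeroed (memory faults), which is the sense in which such memories are fault-tolerant.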
Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J
2015-09-30
To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ("face patches") did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. Significance statement: We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. 
To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. Copyright © 2015 the authors 0270-6474/15/3513402-17$15.00/0.
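The winning linking hypothesis above, a learned weighted sum of distributed mean firing rates, is easy to sketch on synthetic data. Everything below (neuron counts, rate distributions, noise level) is invented for illustration; only the "weighted sum of population rates" readout reflects the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two objects, each with a distinct mean-rate profile across a
# population of "IT-like" units; single trials add Gaussian noise.
n_neurons = 100
profiles = rng.gamma(2.0, 5.0, size=(2, n_neurons))    # per-object mean rates (Hz)

def simulate(n_trials):
    """Draw noisy single-trial population rate vectors and their labels."""
    labels = rng.integers(0, 2, size=n_trials)
    rates = profiles[labels] + rng.normal(0.0, 4.0, size=(n_trials, n_neurons))
    return rates, labels

train_X, train_y = simulate(300)
test_X, test_y = simulate(100)

# Learn the weighted sum (plus bias) by least squares on a +1/-1 target.
A = np.hstack([train_X, np.ones((300, 1))])
w, *_ = np.linalg.lstsq(A, 2.0 * train_y - 1.0, rcond=None)

pred = (np.hstack([test_X, np.ones((100, 1))]) @ w > 0).astype(int)
accuracy = float((pred == test_y).mean())
print(accuracy)  # near-perfect on this deliberately easy toy problem
```

The point of the sketch is the form of the decoder, not the numbers: once the weights are learned, "executing" a recognition task is a single inner product with the population rate vector.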
Neural-Network Object-Recognition Program
NASA Technical Reports Server (NTRS)
Spirkovska, L.; Reid, M. B.
1993-01-01
HONTIOR computer program implements third-order neural network exhibiting invariance under translation, change of scale, and in-plane rotation. Invariance incorporated directly into architecture of network. Only one view of each object needed to train network for two-dimensional-translation-invariant recognition of object. Also used for three-dimensional-transformation-invariant recognition by training network on only set of out-of-plane rotated views. Written in C language.
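The invariance that HONTIOR builds directly into its architecture comes from tying third-order weights to triples of input pixels according to the interior angles of the triangle they form; those angles are unchanged by translation, uniform scaling, and in-plane rotation. A minimal Python sketch of that invariant feature (the original program is written in C; the point lists and rounding here are illustrative):

```python
import itertools
import math

def triangle_features(points, digits=1):
    """Set of sorted, rounded interior-angle triples over all point triples.

    Interior angles are unchanged by translation, uniform scaling, and
    in-plane rotation, so the feature set is invariant to those transforms.
    """
    feats = set()
    for tri in itertools.combinations(points, 3):
        angles = []
        for p, q, r in ((tri[0], tri[1], tri[2]),
                        (tri[1], tri[2], tri[0]),
                        (tri[2], tri[0], tri[1])):
            v1 = (q[0] - p[0], q[1] - p[1])
            v2 = (r[0] - p[0], r[1] - p[1])
            cos = ((v1[0] * v2[0] + v1[1] * v2[1])
                   / (math.hypot(*v1) * math.hypot(*v2)))
            cos = max(-1.0, min(1.0, cos))          # clamp rounding error
            angles.append(round(math.degrees(math.acos(cos)), digits))
        feats.add(tuple(sorted(angles)))
    return feats

shape = [(0, 0), (4, 0), (0, 3)]                    # a 3-4-5 right triangle
shifted = [(x + 10, y + 5) for x, y in shape]       # translated copy
scaled = [(2 * x, 2 * y) for x, y in shape]         # uniformly scaled copy
print(triangle_features(shape) == triangle_features(shifted)
      == triangle_features(scaled))                 # True
```

In the network itself, weights are shared across all pixel triples with the same angle triple, which is why a single training view suffices for transformation-invariant recognition.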
On the three-quarter view advantage of familiar object recognition.
Nonose, Kohei; Niimi, Ryosuke; Yokosawa, Kazuhiko
2016-11-01
A three-quarter view, i.e., an oblique view, of familiar objects often leads to a higher subjective goodness rating when compared with other orientations. What is the source of the high goodness for oblique views? First, we confirmed that object recognition performance was also best for oblique views around 30°, even when the foreshortening disadvantage of front- and side-views was minimized (Experiments 1 and 2). In Experiment 3, we measured subjective ratings of view goodness and two possible determinants of view goodness: familiarity of view, and subjective impression of three-dimensionality. Three-dimensionality was measured as the subjective saliency of visual depth information. The oblique views were rated best, most familiar, and as having the greatest three-dimensionality on average; however, cluster analyses showed that the "best" orientation systematically varied among objects. We found three clusters of objects: front-preferred objects, oblique-preferred objects, and side-preferred objects. Interestingly, recognition performance and the three-dimensionality rating were higher for oblique views irrespective of the clusters. It appears that recognition efficiency is not the major source of the three-quarter view advantage. There are multiple determinants and variability among objects. This study suggests that the classical idea that a canonical view has a unique advantage in object perception requires further discussion.
Eye Movements to Pictures Reveal Transient Semantic Activation during Spoken Word Recognition
ERIC Educational Resources Information Center
Yee, Eiling; Sedivy, Julie C.
2006-01-01
Two experiments explore the activation of semantic information during spoken word recognition. Experiment 1 shows that as the name of an object unfolds (e.g., lock), eye movements are drawn to pictorial representations of both the named object and semantically related objects (e.g., key). Experiment 2 shows that objects semantically related to an…
It's all connected: Pathways in visual object recognition and early noun learning.
Smith, Linda B
2013-11-01
A developmental pathway may be defined as the route, or chain of events, through which a new structure or function forms. For many human behaviors, including object name learning and visual object recognition, these pathways are often complex and multicausal and include unexpected dependencies. This article presents three principles of development that suggest the value of a developmental psychology that explicitly seeks to trace these pathways and uses empirical evidence on developmental dependencies among motor development, action on objects, visual object recognition, and object name learning in 12- to 24-month-old infants to make the case. The article concludes with a consideration of the theoretical implications of this approach. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Detection and recognition of targets by using signal polarization properties
NASA Astrophysics Data System (ADS)
Ponomaryov, Volodymyr I.; Peralta-Fabi, Ricardo; Popov, Anatoly V.; Babakov, Mikhail F.
1999-08-01
The quality of radar target recognition can be enhanced by exploiting polarization signatures. A specialized X-band polarimetric radar was used for target recognition in experimental investigations. The following polarization characteristics, connected to the geometrical properties of the object, were investigated: the amplitudes of the polarization matrix elements; an anisotropy coefficient; a depolarization coefficient; an asymmetry coefficient; the energy of the backscattered signal; and an object shape factor. A large quantity of polarimetric radar data was measured and processed to form a database of different objects under different weather conditions. The histograms of the polarization signatures were approximated by a Nakagami distribution and then used for real-time target recognition. The Neyman-Pearson criterion was used for target detection, and the criterion of maximum a posteriori probability was used for the recognition problem. Some results of experimental verification of pattern recognition and detection of objects with different electrophysical and geometrical characteristics in urban clutter are presented in this paper.
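The detection scheme described above (Nakagami-fitted amplitude histograms plus a Neyman-Pearson threshold) can be sketched with SciPy's `nakagami` distribution. All parameters below are invented for illustration; thresholding the raw amplitude realizes the Neyman-Pearson test only when the likelihood ratio is monotone in amplitude, which holds for these parameter choices:

```python
from scipy import stats

# Toy Neyman-Pearson amplitude detector with Nakagami-distributed returns.
clutter = stats.nakagami(nu=1.0, scale=1.0)   # background (clutter) amplitudes
target = stats.nakagami(nu=2.0, scale=2.5)    # object returns (stronger, less fading)

# Neyman-Pearson design for a scalar statistic: fix the false-alarm
# probability and take the threshold from the clutter distribution.
p_fa = 0.01
threshold = clutter.ppf(1.0 - p_fa)           # about 2.15 for this clutter model

detections = target.rvs(10000, random_state=7) > threshold
print(round(float(detections.mean()), 2))     # empirical detection probability
```

In practice the `nu` and `scale` parameters would be estimated from the measured histograms (e.g., by moment matching on the squared amplitudes) rather than assumed.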
The effect of colour congruency on shape discriminations of novel objects.
Nicholson, Karen G; Humphrey, G Keith
2004-01-01
Although visual object recognition is primarily shape driven, colour assists the recognition of some objects. It is unclear, however, just how colour information is coded with respect to shape in long-term memory and how the availability of colour in the visual image facilitates object recognition. We examined the role of colour in the recognition of novel, 3-D objects by manipulating the congruency of object colour across the study and test phases, using an old/new shape-identification task. In experiment 1, we found that participants were faster at correctly identifying old objects on the basis of shape information when these objects were presented in their original colour, rather than in a different colour. In experiments 2 and 3, we found that participants were faster at correctly identifying old objects on the basis of shape information when these objects were presented with their original part-colour conjunctions, rather than in different or in reversed part-colour conjunctions. In experiment 4, we found that participants were quite poor at the verbal recall of part-colour conjunctions for correctly identified old objects, presented as grey-scale images at test. In experiment 5, we found that participants were significantly slower at correctly identifying old objects when object colour was incongruent across study and test, than when background colour was incongruent across study and test. The results of these experiments suggest that both shape and colour information are stored as part of the long-term representation of these novel objects. Results are discussed in terms of how colour might be coded with respect to shape in stored object representations.
Exogenous temporal cues enhance recognition memory in an object-based manner.
Ohyama, Junji; Watanabe, Katsumi
2010-11-01
Exogenous attention enhances the perception of attended items in both a space-based and an object-based manner. Exogenous attention also improves recognition memory for attended items in the space-based mode. However, it has not been examined whether object-based exogenous attention enhances recognition memory. To address this issue, we examined whether a sudden visual change in a task-irrelevant stimulus (an exogenous cue) would affect participants' recognition memory for items that were serially presented around a cued time. The results showed that recognition accuracy for an item was strongly enhanced when the visual cue occurred at the same location and time as the item (Experiments 1 and 2). The memory enhancement effect occurred when the exogenous visual cue and an item belonged to the same object (Experiments 3 and 4) and even when the cue was counterpredictive of the timing of an item to be asked about (Experiment 5). The present study suggests that an exogenous temporal cue automatically enhances the recognition accuracy for an item that is presented at close temporal proximity to the cue and that recognition memory enhancement occurs in an object-based manner.
Toward a unified model of face and object recognition in the human visual system
Wallis, Guy
2013-01-01
Our understanding of the mechanisms and neural substrates underlying visual recognition has made considerable progress over the past 30 years. During this period, accumulating evidence has led many scientists to conclude that objects and faces are recognised in fundamentally distinct ways, and in fundamentally distinct cortical areas. In the psychological literature, in particular, this dissociation has led to a palpable disconnect between theories of how we process and represent the two classes of object. This paper follows a trend in part of the recognition literature to try to reconcile what we know about these two forms of recognition by considering the effects of learning. Taking a widely accepted, self-organizing model of object recognition, this paper explains how such a system is affected by repeated exposure to specific stimulus classes. In so doing, it explains how many aspects of recognition generally regarded as unusual to faces (holistic processing, configural processing, sensitivity to inversion, the other-race effect, the prototype effect, etc.) are emergent properties of category-specific learning within such a system. Overall, the paper describes how a single model of recognition learning can and does produce the seemingly very different types of representation associated with faces and objects. PMID:23966963
Semantic and visual determinants of face recognition in a prosopagnosic patient.
Dixon, M J; Bub, D N; Arguin, M
1998-05-01
Prosopagnosia is the neuropathological inability to recognize familiar people by their faces. It can occur in isolation or can coincide with recognition deficits for other nonface objects. Often, patients whose prosopagnosia is accompanied by object recognition difficulties have more trouble identifying certain categories of objects relative to others. In previous research, we demonstrated that objects that shared multiple visual features and were semantically close posed severe recognition difficulties for a patient with temporal lobe damage. We now demonstrate that this patient's face recognition is constrained by these same parameters. The prosopagnosic patient ELM had difficulties pairing faces to names when the faces shared visual features and the names were semantically related (e.g., Tonya Harding, Nancy Kerrigan, and Josee Chouinard, three ice skaters). He made tenfold fewer errors when the exact same faces were associated with semantically unrelated people (e.g., singer Celine Dion, actress Betty Grable, and First Lady Hillary Clinton). We conclude that prosopagnosia and co-occurring category-specific recognition problems both stem from difficulties disambiguating the stored representations of objects that share multiple visual features and refer to semantically close identities or concepts.
Picture object recognition in an American black bear (Ursus americanus).
Johnson-Ulrich, Zoe; Vonk, Jennifer; Humbyrd, Mary; Crowley, Marilyn; Wojtkowski, Ela; Yates, Florence; Allard, Stephanie
2016-11-01
Many animals have been tested for conceptual discriminations using two-dimensional images as stimuli, and many of these species appear to transfer knowledge from 2D images to analogous real-life objects. We tested an American black bear for picture-object recognition using a two-alternative forced-choice task. She was presented with four unique sets of objects and corresponding pictures. The bear showed generalization both from objects to pictures and from pictures to objects; however, transfer was superior from real objects to pictures, suggesting that bears can recognize visual features from real objects within photographic images during discriminations.
Mechanisms and neural basis of object and pattern recognition: a study with chess experts.
Bilalić, Merim; Langner, Robert; Erb, Michael; Grodd, Wolfgang
2010-11-01
Comparing experts with novices offers unique insights into the functioning of cognition, based on the maximization of individual differences. Here we used this expertise approach to disentangle the mechanisms and neural basis behind two processes that contribute to everyday expertise: object and pattern recognition. We compared chess experts and novices performing chess-related and -unrelated (visual) search tasks. As expected, the superiority of experts was limited to the chess-specific task, as there were no differences in a control task that used the same chess stimuli but did not require chess-specific recognition. The analysis of eye movements showed that experts immediately and exclusively focused on the relevant aspects in the chess task, whereas novices also examined irrelevant aspects. With random chess positions, when pattern knowledge could not be used to guide perception, experts nevertheless maintained an advantage. Experts' superior domain-specific parafoveal vision, a consequence of their knowledge about individual domain-specific symbols, enabled improved object recognition. Functional magnetic resonance imaging corroborated this differentiation between object and pattern recognition and showed that chess-specific object recognition was accompanied by bilateral activation of the occipitotemporal junction, whereas chess-specific pattern recognition was related to bilateral activations in the middle part of the collateral sulci. Using the expertise approach together with carefully chosen controls and multiple dependent measures, we identified object and pattern recognition as two essential cognitive processes in expert visual cognition, which may also help to explain the mechanisms of everyday perception.
Single prolonged stress impairs social and object novelty recognition in rats.
Eagle, Andrew L; Fitzpatrick, Chris J; Perrine, Shane A
2013-11-01
Posttraumatic stress disorder (PTSD) results from exposure to a traumatic event and manifests as re-experiencing, arousal, avoidance, and negative cognition/mood symptoms. Avoidant symptoms, as well as the newly defined negative cognitions/mood, are a serious complication leading to diminished interest in once important or positive activities, such as social interaction; however, the basis of these symptoms remains poorly understood. PTSD patients also exhibit impaired object and social recognition, which may underlie the avoidance and symptoms of negative cognition, such as social estrangement or diminished interest in activities. Previous studies have demonstrated that single prolonged stress (SPS) models PTSD phenotypes, including impairments in learning and memory. Therefore, it was hypothesized that SPS would impair social and object recognition memory. Male Sprague Dawley rats were exposed to SPS and then tested in the social choice test (SCT) or novel object recognition test (NOR). These tests measure recognition of novelty over familiarity, a natural preference of rodents. Results show that SPS impaired preference for both social and object novelty. In addition, the SPS-induced impairment in social recognition may reflect impaired behavioral flexibility, i.e., an inability to shift behavior during the SCT. These results demonstrate that traumatic stress can impair social and object recognition memory, which may underlie certain avoidant symptoms or negative cognition in PTSD and be related to impaired behavioral flexibility.
Progestogens’ effects and mechanisms for object recognition memory across the lifespan
Walf, Alicia A.; Koonce, Carolyn J.; Frye, Cheryl A.
2016-01-01
This review explores the effects of female reproductive hormones, estrogens and progestogens, with a focus on progesterone and allopregnanolone, on object memory. Progesterone and its metabolites, in particular allopregnanolone, exert various effects on both cognitive and non-mnemonic functions in females. The well-known object recognition task is a valuable experimental paradigm that can be used to determine the effects and mechanisms of progestogens for mnemonic effects across the lifespan, which will be discussed herein. In this task there is little test decay when different objects are used as targets and baseline valence for objects is controlled. This allows repeated testing, within-subjects designs, and longitudinal assessments, which aid understanding of changes in hormonal milieu. Objects are not aversive or food-based, which are hormone-sensitive factors. This review focuses on published data from our laboratory, and others, using the object recognition task in rodents to assess the role and mechanisms of progestogens throughout the lifespan. Improvements in object recognition performance of rodents are often associated with higher hormone levels in the hippocampus and prefrontal cortex during natural cycles, with hormone replacement following ovariectomy in young animals, or with aging. The capacity for reversal of age- and reproductive senescence-related decline in cognitive performance, and changes in neural plasticity that may be dissociated from peripheral effects with such decline, are discussed. The focus here will be on the effects of brain-derived factors, such as the neurosteroid allopregnanolone, and other hormones, for enhancing object recognition across the lifespan. PMID:26235328
Cippitelli, Andrea; Zook, Michelle; Bell, Lauren; Damadzic, Ruslan; Eskay, Robert L.; Schwandt, Melanie; Heilig, Markus
2010-01-01
Excessive alcohol use leads to neurodegeneration in several brain structures including the hippocampal dentate gyrus and the entorhinal cortex. Cognitive deficits that result are among the most insidious and debilitating consequences of alcoholism. The object exploration task (OET) provides a sensitive measurement of spatial memory impairment induced by hippocampal and cortical damage. In this study, we examine whether the observed neurotoxicity produced by a 4-day binge ethanol treatment results in long-term memory impairment by observing the time course of reactions to spatial change (object configuration) and non-spatial change (object recognition). Wistar rats were assessed for their abilities to detect spatial configuration in the OET at 1 week and 10 weeks following the ethanol treatment, in which ethanol groups received 9–15 g/kg/day and achieved blood alcohol levels over 300 mg/dl. At 1 week, results indicated that the binge alcohol treatment produced impairment in both spatial memory and non-spatial object recognition performance. Unlike the controls, ethanol treated rats did not increase the duration or number of contacts with the displaced object in the spatial memory task, nor did they increase the duration of contacts with the novel object in the object recognition task. After 10 weeks, spatial memory remained impaired in the ethanol treated rats but object recognition ability was recovered. Our data suggest that episodes of binge-like alcohol exposure result in long-term and possibly permanent impairments in memory for the configuration of objects during exploration, whereas the ability to detect non-spatial changes is only temporarily affected. PMID:20849966
1989-10-01
weight based on how powerful the corresponding feature is for object recognition and discrimination. For example, consider an arbitrary weight, denoted...quality of the segmentation, how powerful the features and spatial constraints in the knowledge base are (as far as object recognition is concern...that are powerful for object recognition and discrimination. At this point, this selection is performed heuristically through trial-and-error. As a
1988-04-30
Keywords: haptic hand, touch, vision, robot, object recognition, categorization. ...established that the haptic system has remarkable capabilities for object recognition. We define haptics as purposive touch. The basic tactual system...gathered ratings of the importance of dimensions for categorizing common objects by touch. Texture and hardness ratings strongly co-vary, which is
High speed optical object recognition processor with massive holographic memory
NASA Technical Reports Server (NTRS)
Chao, T.; Zhou, H.; Reyes, G.
2002-01-01
Real-time object recognition using a compact grayscale optical correlator will be introduced. A holographic memory module for storing a large bank of optimum correlation filters, to accommodate the large data throughput rate needed for many real-world applications, has also been developed. System architecture of the optical processor and the holographic memory will be presented. Application examples of this object recognition technology will also be demonstrated.
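The optical correlator described in this abstract matches an input scene against stored correlation filters; the same matched-filter idea can be sketched digitally with FFTs. This is a toy illustration of correlation-based recognition, not the authors' optical or holographic implementation:

```python
import numpy as np

def correlate_fft(scene, template):
    """Circular cross-correlation via the FFT: the digital analogue of
    a matched-filter optical correlator. A correct match produces a
    sharp correlation peak at the object's offset."""
    S = np.fft.fft2(scene)
    T = np.fft.fft2(template, s=scene.shape)
    return np.fft.ifft2(S * np.conj(T)).real

# Toy scene containing the 'filter' pattern at offset (10, 5).
rng = np.random.default_rng(0)
template = rng.random((8, 8))
scene = np.zeros((32, 32))
scene[10:18, 5:13] = template

corr = correlate_fft(scene, template)
peak = np.unravel_index(np.argmax(corr), corr.shape)  # location of best match
```

In an optical correlator the Fourier transforms and the product are performed by lenses and the filter bank at the speed of light; the digital version above only mimics the mathematics.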
Combining heterogenous features for 3D hand-held object recognition
NASA Astrophysics Data System (ADS)
Lv, Xiong; Wang, Shuang; Li, Xiangyang; Jiang, Shuqiang
2014-10-01
Object recognition has wide applications in human-machine interaction and multimedia retrieval. However, owing to visual polysemy and concept polymorphism, obtaining reliable recognition results from 2D images alone remains a great challenge. Recently, with the emergence and easy availability of RGB-D equipment such as the Kinect, this challenge can be relieved because the depth channel brings additional information. A special and important case of object recognition is hand-held object recognition, as the hand is a direct and natural medium for both human-human and human-machine interaction. In this paper, we study the problem of 3D object recognition by combining heterogeneous features with different modalities and extraction techniques. Hand-crafted features preserve low-level information such as shape and color, but are weaker at representing high-level semantic information than automatically learned features, especially deep features. Deep features have shown great advantages on large-scale recognition datasets but are not always as robust to rotation or scale variance as hand-crafted features. We propose a method that combines hand-crafted point-cloud features and deep features learned from the RGB and depth channels. First, the hand-held object is segmented using depth cues and human skeleton information. Second, the extracted heterogeneous 3D features are combined at different stages using linear concatenation and multiple kernel learning (MKL). A trained model is then used to recognize 3D hand-held objects. Experimental results validate the effectiveness and generalization ability of the proposed method.
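The simpler of the two fusion strategies mentioned in this abstract, linear concatenation of heterogeneous features, can be sketched minimally as follows. The feature names and dimensions are hypothetical, purely for illustration:

```python
import numpy as np

def combine(features):
    """Late fusion by linear concatenation: L2-normalise each feature
    block so no modality dominates by scale, then stack them into one
    joint vector for a downstream classifier."""
    return np.concatenate([f / (np.linalg.norm(f) + 1e-12) for f in features])

# Illustrative feature blocks (names and dimensions are assumptions):
color = np.array([0.2, 0.8, 0.1])          # hand-crafted colour feature, 3-D
shape = np.array([1.0, 3.0, 2.0, 0.5])     # hand-crafted shape feature, 4-D
deep = np.random.default_rng(1).random(8)  # learned 'deep' feature, 8-D

fused = combine([color, shape, deep])      # 15-D joint vector
```

The paper's alternative, multiple kernel learning, instead learns a weighted combination of per-modality kernels rather than fixing equal weights by concatenation.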
ERIC Educational Resources Information Center
Lawson, Rebecca
2009-01-01
A sequential matching task was used to compare how the difficulty of shape discrimination influences the achievement of object constancy for depth rotations across haptic and visual object recognition. Stimuli were nameable, 3-dimensional plastic models of familiar objects (e.g., bed, chair) and morphs midway between these endpoint shapes (e.g., a…
Single-pixel non-imaging object recognition by means of Fourier spectrum acquisition
NASA Astrophysics Data System (ADS)
Chen, Huichao; Shi, Jianhong; Liu, Xialin; Niu, Zhouzhou; Zeng, Guihua
2018-04-01
Single-pixel imaging has emerged over recent years as a novel imaging technique with significant application prospects. In this paper, we propose and experimentally demonstrate a scheme that achieves single-pixel non-imaging object recognition by acquiring the Fourier spectrum. In the experiment, four-step phase-shifting sinusoidal illumination irradiates the object image, the light intensity is measured with a single-pixel detection unit, and the Fourier coefficients of the object image are obtained by differential measurement. The Fourier coefficients are then cast into binary numbers to obtain a hash value, using a new perceptual hashing algorithm combined with the discrete Fourier transform. The hash distance is obtained by calculating the difference between the hash values of the object image and of contrast images. By setting an appropriate threshold, the object image can be quickly and accurately recognized. The proposed scheme realizes single-pixel non-imaging perceptual-hashing object recognition using fewer measurements. Our result might open a new path toward object recognition without imaging.
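The hashing idea, binarized Fourier coefficients compared by hash distance, can be sketched as follows. The k x k low-frequency crop and the median threshold are illustrative assumptions, not the authors' exact algorithm, and the spectrum here is computed from a full image rather than from single-pixel measurements:

```python
import numpy as np

def fourier_hash(img, k=8):
    """Binary perceptual hash: take the k x k lowest-frequency Fourier
    magnitudes and set each bit by comparison to their median."""
    spec = np.abs(np.fft.fft2(img))[:k, :k]
    return (spec > np.median(spec)).astype(np.uint8).ravel()

def hamming(h1, h2):
    """Hash distance = number of differing bits."""
    return int(np.sum(h1 != h2))

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = a + 0.01 * rng.random((32, 32))  # near-duplicate of a
c = rng.random((32, 32))             # unrelated image

d_same = hamming(fourier_hash(a), fourier_hash(b))  # expected small
d_diff = hamming(fourier_hash(a), fourier_hash(c))  # expected larger
```

Recognition then amounts to thresholding the hash distance, as the abstract describes: below the threshold, the measured object is declared a match to the reference image.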
Preserved Haptic Shape Processing after Bilateral LOC Lesions.
Snow, Jacqueline C; Goodale, Melvyn A; Culham, Jody C
2015-10-07
The visual and haptic perceptual systems are understood to share a common neural representation of object shape. A region thought to be critical for recognizing visual and haptic shape information is the lateral occipital complex (LOC). We investigated whether LOC is essential for haptic shape recognition in humans by studying behavioral responses and brain activation for haptically explored objects in a patient (M.C.) with bilateral lesions of the occipitotemporal cortex, including LOC. Despite severe deficits in recognizing objects using vision, M.C. was able to accurately recognize objects via touch. M.C.'s psychophysical response profile to haptically explored shapes was also indistinguishable from controls. Using fMRI, M.C. showed no object-selective visual or haptic responses in LOC, but her pattern of haptic activation in other brain regions was remarkably similar to healthy controls. Although LOC is routinely active during visual and haptic shape recognition tasks, it is not essential for haptic recognition of object shape. The lateral occipital complex (LOC) is a brain region regarded as critical for recognizing object shape, both in vision and in touch. However, causal evidence linking LOC with haptic shape processing is lacking. We studied recognition performance, psychophysical sensitivity, and brain response to touched objects in a patient (M.C.) with extensive lesions involving LOC bilaterally. Despite being severely impaired in visual shape recognition, M.C. was able to identify objects via touch, and she showed normal sensitivity to a haptic shape illusion. M.C.'s brain response to touched objects in areas of undamaged cortex was also very similar to that observed in neurologically healthy controls. These results demonstrate that LOC is not necessary for recognizing objects via touch.
Ding, Fang; Zheng, Limin; Liu, Min; Chen, Rongfa; Leung, L Stan; Luo, Tao
2016-08-01
Exposure to volatile anesthetics has been reported to cause temporary or sustained impairments in learning and memory in pre-clinical studies. Selective antagonists of the histamine H3 receptor (H3R) are considered a promising group of novel therapeutic agents for the treatment of cognitive disorders. The aim of this study was to evaluate the effect of the H3R antagonist ciproxifan on isoflurane-induced deficits in an object recognition task. Adult C57BL/6J mice were exposed to isoflurane (1.3%) or vehicle gas for 2 h. Object recognition tests were carried out at 24 h or 7 days after anesthesia, exploiting the tendency of mice to prefer exploring novel objects in an environment when a familiar object is also present. During the training phase, two identical objects were placed at two defined sites in the chamber. During the test phase, performed 1 or 24 h after the training phase, one of the objects was replaced by a new object with a different shape. The time spent exploring each object was recorded. A robust deficit in object recognition memory occurred 1 day after exposure to isoflurane anesthesia: isoflurane-treated mice spent significantly less time exploring a novel object at 1 h, but not at 24 h, after the training phase. This deficit in short-term memory was reversed by the administration of ciproxifan 30 min before behavioral training. Thus, isoflurane exposure induces reversible deficits in object recognition memory, and ciproxifan appears to be a potential therapeutic agent for improving post-anesthesia cognitive performance.
Central administration of angiotensin IV rapidly enhances novel object recognition among mice.
Paris, Jason J; Eans, Shainnel O; Mizrachi, Elisa; Reilley, Kate J; Ganno, Michelle L; McLaughlin, Jay P
2013-07-01
Angiotensin IV (Val(1)-Tyr(2)-Ile(3)-His(4)-Pro(5)-Phe(6)) has demonstrated potential cognitive-enhancing effects. The present investigation assessed and characterized: (1) dose-dependency of angiotensin IV's cognitive enhancement in a C57BL/6J mouse model of novel object recognition, (2) the time-course for these effects, (3) the identity of residues in the hexapeptide important to these effects, and (4) the necessity of actions at angiotensin IV receptors for procognitive activity. Assessment of C57BL/6J mice in a novel object recognition task demonstrated that prior administration of angiotensin IV (0.1, 1.0, or 10.0, but not 0.01 nmol, i.c.v.) significantly enhanced novel object recognition in a dose-dependent manner. These effects were time dependent, with improved novel object recognition observed when angiotensin IV (0.1 nmol, i.c.v.) was administered 10 or 20, but not 30, min prior to the onset of novel object recognition testing. An alanine scan of the angiotensin IV peptide revealed that replacement of the Val(1), Ile(3), His(4), or Phe(6) residues with Ala attenuated peptide-induced improvements in novel object recognition, whereas Tyr(2) or Pro(5) replacement did not significantly affect performance. Administration of the angiotensin IV receptor antagonist, divalinal-Ang IV (20 nmol, i.c.v.), reduced (but did not abolish) novel object recognition; however, this antagonist completely blocked the procognitive effects of angiotensin IV (0.1 nmol, i.c.v.) in this task. Rotarod testing demonstrated no locomotor effects with any angiotensin IV or divalinal-Ang IV dose tested. These data demonstrate that angiotensin IV produces a rapid enhancement of associative learning and memory performance in a mouse model that was dependent on the angiotensin IV receptor.
Integration trumps selection in object recognition.
Saarela, Toni P; Landy, Michael S
2015-03-30
Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several "cues" (color, luminance, texture, etc.), and humans can integrate sensory cues to improve detection and recognition [1-3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue invariance by responding to a given shape independent of the visual cue defining it [5-8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10, 11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11, 12], imaging [13-16], and single-cell and neural population recordings [17, 18]. Besides single features, attention can select whole objects [19-21]. Objects are among the suggested "units" of attention because attention to a single feature of an object causes the selection of all of its features [19-21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near-optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection.
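The "near-optimal" integration benchmark invoked in this abstract is classically modeled as inverse-variance (reliability-weighted) cue combination. A minimal sketch of that standard model, not the authors' analysis code:

```python
import numpy as np

def fuse(estimates, sigmas):
    """Inverse-variance cue combination: each cue's estimate is
    weighted by its reliability (1/variance), and the fused estimate
    has lower variance than any single cue."""
    var = np.asarray(sigmas, dtype=float) ** 2
    w = (1.0 / var) / np.sum(1.0 / var)            # weights sum to 1
    fused_mean = float(np.dot(w, estimates))
    fused_sigma = float(np.sqrt(1.0 / np.sum(1.0 / var)))
    return fused_mean, fused_sigma

# Two equally reliable cues (e.g. texture and luminance signals):
mean, sigma = fuse(estimates=[1.0, 2.0], sigmas=[1.0, 1.0])
# Equal reliabilities give the simple average (1.5), and the fused
# uncertainty (1/sqrt(2)) is lower than either cue's sigma alone.
```

Observed human thresholds are compared against this prediction to decide whether integration is "near-optimal".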
Compensation for Blur Requires Increase in Field of View and Viewing Time
Kwon, MiYoung; Liu, Rong; Chien, Lillian
2016-01-01
Spatial resolution is an important factor for human pattern recognition. In particular, low resolution (blur) is a defining characteristic of low vision. Here, we examined spatial (field of view) and temporal (stimulus duration) requirements for blurry object recognition. The spatial resolution of an image, such as a letter or face, was manipulated with a low-pass filter. In experiment 1, studying the spatial requirement, observers viewed a fixed-size object through a window of varying sizes, which was repositioned until object identification (moving window paradigm). The field of view requirement, quantified as the number of "views" (window repositions) needed for correct recognition, was obtained for three blur levels, including no blur. In experiment 2, studying the temporal requirement, we determined threshold viewing time, the stimulus duration yielding criterion recognition accuracy, at six blur levels, including no blur. For letter and face recognition, we found that blur significantly increased the number of views, suggesting a larger field of view is required to recognize blurry objects. We also found that blur significantly increased threshold viewing time, suggesting longer temporal integration is necessary to recognize blurry objects. The temporal integration reflects the tradeoff between stimulus intensity and time. While humans excel at recognizing blurry objects, our findings suggest that compensating for blur requires increased field of view and viewing time. The need for larger spatial and longer temporal integration for recognizing blurry objects may further challenge object recognition in low vision. Thus, interactions between blur and field of view should be considered when developing low vision rehabilitation or assistive aids. PMID:27622710
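The low-pass filtering used to manipulate spatial resolution can be sketched as follows. This uses an ideal circular low-pass filter in the frequency domain; the abstract does not specify the authors' exact filter shape or cutoffs, so these are illustrative choices:

```python
import numpy as np

def low_pass(img, cutoff):
    """Ideal circular low-pass filter: keep frequency components
    within `cutoff` cycles per image, zero everything above."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    mask = (yy ** 2 + xx ** 2) <= cutoff ** 2
    return np.fft.ifft2(np.fft.ifftshift(f * mask)).real

rng = np.random.default_rng(0)
img = rng.random((64, 64))            # stand-in for a letter/face image
blurred = low_pass(img, cutoff=8)     # heavily blurred version
```

Lower cutoffs remove more fine detail, producing the progressively blurrier stimuli that drive up the number of views and the threshold viewing time reported above.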
Tian, Moqian; Grill-Spector, Kalanit
2015-01-01
Recognizing objects is difficult because it requires both linking views of an object that can be different and distinguishing objects with similar appearance. Interestingly, people can learn to recognize objects across views in an unsupervised way, without feedback, just from the natural viewing statistics. However, there is intense debate regarding what information during unsupervised learning is used to link among object views. Specifically, researchers argue whether temporal proximity, motion, or spatiotemporal continuity among object views during unsupervised learning is beneficial. Here, we untangled the role of each of these factors in unsupervised learning of novel three-dimensional (3-D) objects. We found that after unsupervised training with 24 object views spanning a 180° view space, participants showed significant improvement in their ability to recognize 3-D objects across rotation. Surprisingly, there was no advantage of unsupervised learning with spatiotemporal continuity or motion information over training with temporal proximity. However, we discovered that when participants were trained with just a third of the views spanning the same view space, unsupervised learning via spatiotemporal continuity yielded significantly better recognition performance on novel views than learning via temporal proximity. These results suggest that while it is possible to obtain view-invariant recognition just from observing many views of an object presented in temporal proximity, spatiotemporal information enhances performance by producing representations with broader view tuning than learning via temporal association. Our findings have important implications for theories of object recognition and for the development of computational algorithms that learn from examples. PMID:26024454
An Effective 3D Shape Descriptor for Object Recognition with RGB-D Sensors
Liu, Zhong; Zhao, Changchen; Wu, Xingming; Chen, Weihai
2017-01-01
RGB-D sensors have been widely used in various areas of computer vision and graphics, and a good descriptor can effectively improve recognition performance. This article further analyzes the recognition performance of shape features extracted from the multi-modality source data provided by RGB-D sensors. A hybrid shape descriptor is proposed as a representation of objects for recognition. We first extracted five 2D shape features from contour-based images and five 3D shape features over point cloud data to capture the global and local shape characteristics of an object. Recognition performance was tested for both category recognition and instance recognition. Experimental results show that the proposed shape descriptor outperforms several common global-to-global shape descriptors and is comparable to some partial-to-global shape descriptors that achieved the best accuracies in category and instance recognition. The contributions of the component features and the computational complexity were also analyzed. The results indicate that the proposed shape features are strong cues for object recognition and can be combined with other features to boost accuracy. PMID:28245553
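A minimal sketch of the hybrid-descriptor idea: concatenate a 2D silhouette feature with 3D point-cloud extents. The specific features below (bounding-box fill ratio, per-axis extent) are illustrative stand-ins for the paper's five 2D and five 3D features:

```python
import numpy as np

def hybrid_descriptor(mask, cloud):
    """Toy hybrid descriptor: one 2D feature from the silhouette
    (fill ratio of its bounding box) concatenated with a 3D feature
    from the point cloud (extent along each axis)."""
    ys, xs = np.nonzero(mask)
    box = (np.ptp(ys) + 1) * (np.ptp(xs) + 1)
    fill = mask.sum() / box                           # 2D: how 'solid' the shape is
    extent = cloud.max(axis=0) - cloud.min(axis=0)    # 3D: size per axis
    return np.concatenate([[fill], extent])

mask = np.zeros((16, 16), dtype=bool)
mask[4:12, 4:12] = True                               # an 8x8 solid square
cloud = np.array([[0, 0, 0], [1, 2, 3], [0.5, 1, 0]])
desc = hybrid_descriptor(mask, cloud)                 # 4-D descriptor
```

In practice each such descriptor would feed a classifier for category or instance recognition, with the 2D and 3D blocks capturing complementary global and local shape cues.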
Coordinate Transformations in Object Recognition
ERIC Educational Resources Information Center
Graf, Markus
2006-01-01
A basic problem of visual perception is how human beings recognize objects after spatial transformations. Three central classes of findings have to be accounted for: (a) Recognition performance varies systematically with orientation, size, and position; (b) recognition latencies are sequentially additive, suggesting analogue transformation…
Object memory and change detection: dissociation as a function of visual and conceptual similarity.
Yeh, Yei-Yu; Yang, Cheng-Ta
2008-01-01
People often fail to detect a change between two visual scenes, a phenomenon referred to as change blindness. This study investigates how a post-change object's similarity to the pre-change object influences memory of the pre-change object and affects change detection. The results of Experiment 1 showed that similarity lowered detection sensitivity but did not affect the speed of identifying the pre-change object, suggesting that similarity between the pre- and post-change objects does not degrade the pre-change representation. Identification speed for the pre-change object was faster than naming the new object regardless of detection accuracy. Similarity also decreased detection sensitivity in Experiment 2 but improved the recognition of the pre-change object under both correct detection and detection failure. The similarity effect on recognition was greatly reduced when 20% of each pre-change stimulus was masked by random dots in Experiment 3. Together the results suggest that the level of pre-change representation under detection failure is equivalent to the level under correct detection and that the pre-change representation is almost complete. Similarity lowers detection sensitivity but improves explicit access in recognition. Dissociation arises between recognition and change detection as the two judgments rely on the match-to-mismatch signal and mismatch-to-match signal, respectively.
Jung, Jaehoon; Yoon, Inhye; Paik, Joonki
2016-01-01
This paper presents an object occlusion detection algorithm that uses object depth information estimated by automatic camera calibration. Object occlusion is a major factor degrading the performance of object tracking and recognition. To detect occlusions, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, it can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978
Multi-objects recognition for distributed intelligent sensor networks
NASA Astrophysics Data System (ADS)
He, Haibo; Chen, Sheng; Cao, Yuan; Desai, Sachi; Hohil, Myron E.
2008-04-01
This paper proposes an innovative approach to multi-object recognition for homeland security and defense based on intelligent sensor networks. Unlike conventional information analysis, data mining in such networks is typically characterized by high information ambiguity/uncertainty, data redundancy, high dimensionality, and real-time constraints. Furthermore, since a typical military network normally includes multiple mobile sensor platforms, ground forces, fortified tanks, combat aircraft, and other resources, it is critical to develop intelligent data mining approaches that fuse different information resources to understand dynamic environments, to support decision making processes, and finally to achieve mission goals. This paper aims to address these issues with a focus on multi-object recognition. Instead of classifying a single object as in traditional image classification problems, the proposed method can automatically learn multiple objects simultaneously. Image segmentation techniques are used to identify the interesting regions in the field, which correspond to multiple objects such as soldiers or tanks. Since different objects come with different feature sizes, we propose a feature scaling method to represent each object with the same number of dimensions. This is achieved by linear/nonlinear scaling and sampling techniques. Finally, support vector machine (SVM) based learning algorithms are developed to learn and build the associations for different objects, and such knowledge is adaptively accumulated for object recognition in the testing stage. We test the effectiveness of the proposed method in different simulated military environments.
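The feature-scaling step, mapping objects of different sizes onto a fixed dimensionality before SVM training, can be sketched with simple linear interpolation. This is an assumed instantiation for illustration; the paper's exact scaling/sampling scheme is not detailed in the abstract:

```python
import numpy as np

def scale_features(feat, n=16):
    """Resample a variable-length feature vector to a fixed length by
    linear interpolation, so objects of different sizes share one
    classifier input dimension."""
    feat = np.asarray(feat, dtype=float)
    old = np.linspace(0.0, 1.0, len(feat))   # original sample positions
    new = np.linspace(0.0, 1.0, n)           # fixed target positions
    return np.interp(new, old, feat)

short = scale_features([1.0, 2.0, 3.0])      # 3 dims  -> 16 dims (upsampled)
long_ = scale_features(np.arange(40.0))      # 40 dims -> 16 dims (downsampled)
```

Once every segmented region yields a fixed-length vector, a standard SVM can be trained over all object classes regardless of their original sizes.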
Aniracetam restores object recognition impaired by age, scopolamine, and nucleus basalis lesions.
Bartolini, L; Casamenti, F; Pepeu, G
1996-02-01
Object recognition was investigated in adult and aging male rats in a two-trial, unrewarded test that assessed a form of working-episodic memory. Exploration time in the first trial, in which two copies of the same object were presented, was recorded. In the second trial, in which one of the familiar objects and a new object were presented, the time spent exploring the two objects was recorded separately and a discrimination index was calculated. Adult rats explored the new object longer than the familiar object when the intertrial time ranged from 1 to 60 min. Rats older than 20 months of age did not discriminate between familiar and new objects. Object discrimination was also lost in adult rats after scopolamine (0.2 mg/kg SC) administration and after lesions of the nucleus basalis resulting in a 40% decrease in cortical ChAT activity. Both aniracetam (25, 50, 100 mg/kg os) and oxiracetam (50 mg/kg os) restored object recognition in aging rats, in rats treated with scopolamine, and in rats with nucleus basalis lesions. In the rat, object discrimination appears to depend on the integrity of the cholinergic system, and nootropic drugs can correct its disruption.
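The discrimination index in this abstract is not spelled out. A common formulation in the object recognition literature (an assumption here, not necessarily the authors' exact formula) normalizes the novel-minus-familiar exploration-time difference by total exploration:

```python
def discrimination_index(t_novel, t_familiar):
    """Discrimination index for the two-trial object recognition test:
    (novel - familiar) / (novel + familiar) exploration time.
    Positive values mean the animal preferred the novel object;
    zero means no discrimination."""
    total = t_novel + t_familiar
    if total == 0:
        raise ValueError("no exploration recorded in the test trial")
    return (t_novel - t_familiar) / total

# An adult rat exploring the new object for 30 s and the familiar
# one for 10 s discriminates clearly:
print(discrimination_index(30.0, 10.0))  # 0.5
```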
Critical object recognition in millimeter-wave images with robustness to rotation and scale.
Mohammadzade, Hoda; Ghojogh, Benyamin; Faezi, Sina; Shabany, Mahdi
2017-06-01
Locating critical objects is crucial in various security applications and industries. In security applications, such as at airports, these objects might be hidden or covered under shields or secret sheaths. Millimeter-wave images can be utilized to discover and recognize critical objects in such hidden cases without any health risk, owing to their non-ionizing nature. However, millimeter-wave images usually contain wave artifacts in and around the detected objects, making object recognition difficult. Thus, regular image processing and classification methods cannot be applied directly to these images, and additional pre-processing and classification methods are required. This paper proposes a novel pre-processing method for canceling rotation and scale using principal component analysis. In addition, a two-layer classification method is introduced and utilized for recognition. Moreover, a large dataset of millimeter-wave images was collected and created for the experiments. Experimental results show that a typical classification method such as support vector machines recognizes only 45.5% of one type of critical object at a 34.2% false alarm rate (FAR), a drastically poor result. The same method within the proposed recognition framework achieves a 92.9% recognition rate at 0.43% FAR, a highly significant improvement. The significant contribution of this work is to introduce a new method for analyzing millimeter-wave images based on machine vision and learning approaches, which has not yet been widely explored in the field of millimeter-wave image analysis.
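The rotation- and scale-canceling pre-processing via principal component analysis can be illustrated on a 2-D point set (e.g., the foreground pixel coordinates of a segmented object). This sketch shows only the general idea; the paper's actual pre-processing pipeline is not given in the abstract:

```python
import numpy as np

def normalize_pose(points):
    """Cancel translation, rotation, and scale of a 2-D point set
    with PCA: move the centroid to the origin, rotate into the
    principal-axis frame, and divide by the spread along the first
    principal axis."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Principal axes from the covariance of the coordinates.
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    order = np.argsort(eigvals)[::-1]       # largest variance first
    rotated = centered @ eigvecs[:, order]  # align to principal axes
    scale = rotated[:, 0].std() or 1.0
    return rotated / scale
```

Because PCA leaves a sign ambiguity per axis, a real pipeline would also fix the axis orientations (e.g., from higher-order moments) before feeding the result to a classifier.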
Barker, Gareth R I; Warburton, Elizabeth Clea
2018-03-28
Recognition memory for single items requires the perirhinal cortex (PRH), whereas recognition of an item and its associated location requires a functional interaction among the PRH, hippocampus (HPC), and medial prefrontal cortex (mPFC). Although the precise mechanisms through which these interactions are effected are unknown, the nucleus reuniens (NRe) has bidirectional connections with each of these regions and thus may play a role in recognition memory. Here we investigated, in male rats, whether specific manipulations of NRe function affected performance of recognition memory for single items, object location, or object-in-place associations. Permanent lesions in the NRe significantly impaired long-term, but not short-term, object-in-place associative recognition memory, whereas single item recognition memory and object location memory were unaffected. Temporary inactivation of the NRe during distinct phases of the object-in-place task revealed its importance in both the encoding and retrieval stages of long-term associative recognition memory. Infusions of specific receptor antagonists showed that encoding was dependent on muscarinic and nicotinic cholinergic neurotransmission, whereas NMDA receptor neurotransmission was not required. Finally, we found that long-term object-in-place memory required protein synthesis within the NRe. These data reveal a specific role for the NRe in long-term associative recognition memory through its interactions with the HPC and mPFC, but not the PRH. The delay-dependent involvement of the NRe suggests that it is not a simple relay station between brain regions but, rather, during high mnemonic demand, facilitates interactions between the mPFC and HPC, a process that requires both cholinergic neurotransmission and protein synthesis. 
SIGNIFICANCE STATEMENT Recognizing an object and its associated location, which is fundamental to our everyday memory, requires specific hippocampal-cortical interactions, potentially facilitated by the nucleus reuniens (NRe) of the thalamus. However, the role of the NRe itself in associative recognition memory is unknown. Here, we reveal the crucial role of the NRe in encoding and retrieval of long-term object-in-place memory, but not in memory for an individual object or individual location, and show that this involvement is dependent on cholinergic receptors and protein synthesis. This is the first demonstration that the NRe is a key node within an associative recognition memory network and not just a simple relay for information within the network. Rather, we argue, the NRe actively modulates information processing during long-term associative memory formation.
Binary optical filters for scale invariant pattern recognition
NASA Technical Reports Server (NTRS)
Reid, Max B.; Downie, John D.; Hine, Butler P.
1992-01-01
Binary synthetic discriminant function (BSDF) optical filters which are invariant to scale changes in the target object of more than 50 percent are demonstrated in simulation and experiment. Efficient databases of scale invariant BSDF filters can be designed which discriminate between two very similar objects at any view scaled over a factor of 2 or more. The BSDF technique has considerable advantages over other methods for achieving scale invariant object recognition, as it also allows determination of the object's scale. In addition to scale, the technique can be used to design recognition systems invariant to other geometric distortions.
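The matched-filter principle behind correlation-based recognition, locating an object by the peak of its frequency-domain correlation with a filter, can be sketched as below. This illustrates only the generic correlation idea, not the binary synthetic discriminant function design itself:

```python
import numpy as np

def correlate_filter(image, filt):
    """Correlate an image with a filter in the frequency domain and
    return the location and height of the correlation peak, which
    indicates where (and how strongly) the filter matches."""
    F = np.fft.fft2(image)
    H = np.fft.fft2(filt, s=image.shape)   # zero-pad filter to image size
    corr = np.real(np.fft.ifft2(F * np.conj(H)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(int(i) for i in peak), float(corr.max())

# A 3x3 patch pasted at (5, 7) is found at exactly that position:
image = np.zeros((16, 16))
image[5:8, 7:10] = 1.0
peak, height = correlate_filter(image, np.ones((3, 3)))
print(peak)  # (5, 7)
```

A BSDF would replace the simple filter here with a binarized composite filter synthesized from many training views, which is what gives the scale-invariant behavior described in the abstract.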
Newborn chickens generate invariant object representations at the onset of visual object experience
Wood, Justin N.
2013-01-01
To recognize objects quickly and accurately, mature visual systems build invariant object representations that generalize across a range of novel viewing conditions (e.g., changes in viewpoint). To date, however, the origins of this core cognitive ability have not yet been established. To examine how invariant object recognition develops in a newborn visual system, I raised chickens from birth for 2 weeks within controlled-rearing chambers. These chambers provided complete control over all visual object experiences. In the first week of life, subjects’ visual object experience was limited to a single virtual object rotating through a 60° viewpoint range. In the second week of life, I examined whether subjects could recognize that virtual object from novel viewpoints. Newborn chickens were able to generate viewpoint-invariant representations that supported object recognition across large, novel, and complex changes in the object’s appearance. Thus, newborn visual systems can begin building invariant object representations at the onset of visual object experience. These abstract representations can be generated from sparse data, in this case from a visual world containing a single virtual object seen from a limited range of viewpoints. This study shows that powerful, robust, and invariant object recognition machinery is an inherent feature of the newborn brain. PMID:23918372
Perceptual Learning of Object Shape
Golcu, Doruk; Gilbert, Charles D.
2009-01-01
Recognition of objects is accomplished through the use of cues that depend on internal representations of familiar shapes. We used a paradigm of perceptual learning during visual search to explore what features human observers use to identify objects. Human subjects were trained to search for a target object embedded in an array of distractors, until their performance improved from near-chance levels to over 80% of trials in an object specific manner. We determined the role of specific object components in the recognition of the object as a whole by measuring the transfer of learning from the trained object to other objects sharing components with it. Depending on the geometric relationship of the trained object with untrained objects, transfer to untrained objects was observed. Novel objects that shared a component with the trained object were identified at much higher levels than those that did not, and this could be used as an indicator of which features of the object were important for recognition. Training on an object also transferred to the components of the object when these components were embedded in an array of distractors of similar complexity. These results suggest that objects are not represented in a holistic manner during learning, but that their individual components are encoded. Transfer between objects was not complete, and occurred for more than one component, regardless of how well they distinguish the object from distractors. This suggests that a joint involvement of multiple components was necessary for full performance. PMID:19864574
Ultra-fast Object Recognition from Few Spikes
2005-07-06
Hung, Chou; Kreiman, Gabriel; Poggio, Tomaso
Computer Science and Artificial Intelligence Laboratory. Chou Hung and Gabriel Kreiman contributed equally to this work. Supplementary material is available at http://ramonycajal.mit.edu/kreiman/resources/ultrafast
The Neural Regions Sustaining Episodic Encoding and Recognition of Objects
ERIC Educational Resources Information Center
Hofer, Alex; Siedentopf, Christian M.; Ischebeck, Anja; Rettenbacher, Maria A.; Widschwendter, Christian G.; Verius, Michael; Golaszewski, Stefan M.; Koppelstaetter, Florian; Felber, Stephan; Wolfgang Fleischhacker, W.
2007-01-01
In this functional MRI experiment, encoding of objects was associated with activation in left ventrolateral prefrontal/insular and right dorsolateral prefrontal and fusiform regions as well as in the left putamen. By contrast, correct recognition of previously learned objects (R judgments) produced activation in left superior frontal, bilateral…
Implications of Animal Object Memory Research for Human Amnesia
ERIC Educational Resources Information Center
Winters, Boyer D.; Saksida, Lisa M.; Bussey, Timothy J.
2010-01-01
Damage to structures in the human medial temporal lobe causes severe memory impairment. Animal object recognition tests gained prominence from attempts to model "global" human medial temporal lobe amnesia, such as that observed in patient HM. These tasks, such as delayed nonmatching-to-sample and spontaneous object recognition, for assessing…
Crowded and Sparse Domains in Object Recognition: Consequences for Categorization and Naming
ERIC Educational Resources Information Center
Gale, Tim M.; Laws, Keith R.; Foley, Kerry
2006-01-01
Some models of object recognition propose that items from structurally crowded categories (e.g., living things) permit faster access to superordinate semantic information than structurally dissimilar categories (e.g., nonliving things), but slower access to individual object information when naming items. We present four experiments that utilize…
Automatic anatomy recognition in whole-body PET/CT images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Huiqian; Udupa, Jayaram K., E-mail: jay@mail.med.upenn.edu; Odhner, Dewey
Purpose: Whole-body positron emission tomography/computed tomography (PET/CT) has become a standard method of imaging patients with various disease conditions, especially cancer. Body-wide accurate quantification of disease burden in PET/CT images is important for characterizing lesions, staging disease, prognosticating patient outcome, planning treatment, and evaluating disease response to therapeutic interventions. Body-wide anatomy recognition in PET/CT is a critical first step for accurately and automatically quantifying disease body-wide, body-region-wise, and organ-wise. This latter process, however, has remained a challenge due to the lower quality of the anatomic information portrayed in the CT component of this imaging modality and the paucity of anatomic details in the PET component. In this paper, the authors demonstrate the adaptation of a recently developed automatic anatomy recognition (AAR) methodology [Udupa et al., “Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images,” Med. Image Anal. 18, 752–771 (2014)] to PET/CT images. Their goal was to test what level of object localization accuracy can be achieved on PET/CT compared to that achieved on diagnostic CT images. Methods: The authors advance the AAR approach in this work on three fronts: (i) from body-region-wise treatment in the work of Udupa et al. to whole body; (ii) from the use of image intensity in optimal object recognition in the work of Udupa et al. to intensity plus object-specific texture properties; and (iii) from the intramodality model-building-recognition strategy to the intermodality approach. The whole-body approach allows consideration of relationships among objects in different body regions, which was previously not possible. 
Consideration of object texture allows generalizing the previous optimal threshold-based fuzzy model recognition method from intensity images to any derived fuzzy membership image, and in the process, brings performance to the level achieved on diagnostic CT and MR images in body-region-wise approaches. The intermodality approach fosters the use of already existing fuzzy models, previously created from diagnostic CT images, on PET/CT and other derived images, thus truly separating the modality-independent object assembly anatomy from modality-specific tissue property portrayal in the image. Results: Key ways of combining the above three basic ideas lead to 15 different strategies for recognizing objects in PET/CT images. Utilizing 50 diagnostic CT image data sets from the thoracic and abdominal body regions and 16 whole-body PET/CT image data sets, the authors compare the recognition performance among these 15 strategies on 18 objects from the thorax, abdomen, and pelvis in terms of object localization error and size estimation error. Particularly on texture membership images, object localization is within 3 voxels of known true locations on whole-body low-dose CT images and within 2 voxels on body-region-wise low-dose images. Surprisingly, even on direct body-region-wise PET images, localization error within 3 voxels seems possible. Conclusions: The previous body-region-wise approach can be extended to the whole-body torso with similar object localization performance. Combined use of image texture and intensity properties yields the best object localization accuracy. In both body-region-wise and whole-body approaches, recognition performance on low-dose CT images reaches levels previously achieved on diagnostic CT images. The best object recognition strategy varies among objects; however, the proposed framework allows employing a strategy that is optimal for each object.
Lawson, Rebecca
2004-10-01
In two experiments, the identification of novel 3-D objects was worse for depth-rotated and mirror-reflected views, compared with the study view in an implicit affective preference memory task, as well as in an explicit recognition memory task. In Experiment 1, recognition was worse and preference was lower when depth-rotated views of an object were paired with an unstudied object relative to trials when the study view of that object was shown. There was a similar trend for mirror-reflected views. In Experiment 2, the study view of an object was both recognized and preferred above chance when it was paired with either depth-rotated or mirror-reflected views of that object. These results suggest that view-sensitive representations of objects mediate performance in implicit, as well as explicit, memory tasks. The findings do not support the claim that separate episodic and structural description representations underlie performance in implicit and explicit memory tasks, respectively.
Muñoz, Pablo C; Aspé, Mauricio A; Contreras, Luis S; Palacios, Adrián G
2010-01-01
Object recognition memory allows discrimination between novel and familiar objects. This kind of memory consists of two components: recollection, which depends on the hippocampus, and familiarity, which depends on the perirhinal cortex (Pcx). The importance of brain-derived neurotrophic factor (BDNF) for recognition memory has already been recognized. Recent evidence suggests that DNA methylation regulates the expression of BDNF and memory. Behavioral and molecular approaches were used to understand the potential contribution of DNA methylation to recognition memory. To that end, rats were tested for their ability to distinguish novel from familiar objects by using a spontaneous object recognition task. Furthermore, the level of DNA methylation was estimated after trials with a methyl-sensitive PCR. We found a significant correlation between performance on the novel object task and the expression of BDNF, negatively in hippocampal slices and positively in perirhinal cortical slices. By contrast, methylation of DNA in CpG island 1 in the promoter of exon 1 of BDNF correlated only in hippocampal slices, not in the Pcx slices from trained animals. These results suggest that DNA methylation may be involved in the regulation of the BDNF gene during recognition memory, at least in the hippocampus.
Cippitelli, Andrea; Zook, Michelle; Bell, Lauren; Damadzic, Ruslan; Eskay, Robert L; Schwandt, Melanie; Heilig, Markus
2010-11-01
Excessive alcohol use leads to neurodegeneration in several brain structures including the hippocampal dentate gyrus and the entorhinal cortex. Cognitive deficits that result are among the most insidious and debilitating consequences of alcoholism. The object exploration task (OET) provides a sensitive measurement of spatial memory impairment induced by hippocampal and cortical damage. In this study, we examine whether the observed neurotoxicity produced by a 4-day binge ethanol treatment results in long-term memory impairment by observing the time course of reactions to spatial change (object configuration) and non-spatial change (object recognition). Wistar rats were assessed for their abilities to detect spatial configuration in the OET at 1 week and 10 weeks following the ethanol treatment, in which ethanol groups received 9-15 g/kg/day and achieved blood alcohol levels over 300 mg/dl. At 1 week, results indicated that the binge alcohol treatment produced impairment in both spatial memory and non-spatial object recognition performance. Unlike the controls, ethanol treated rats did not increase the duration or number of contacts with the displaced object in the spatial memory task, nor did they increase the duration of contacts with the novel object in the object recognition task. After 10 weeks, spatial memory remained impaired in the ethanol treated rats but object recognition ability was recovered. Our data suggest that episodes of binge-like alcohol exposure result in long-term and possibly permanent impairments in memory for the configuration of objects during exploration, whereas the ability to detect non-spatial changes is only temporarily affected.
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen; Liu, Yuan; Liang, Fuxun; Wang, Yongjun
2017-04-01
Updating the inventory of road infrastructure based on field work is labor-intensive, time-consuming, and costly. Fortunately, vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. However, robust recognition of road facilities from huge volumes of 3D point clouds is still a challenging issue because of complicated and incomplete structures, occlusions, and varied point densities. Most existing methods utilize point- or object-based features to recognize object candidates, and can only extract limited types of objects with a relatively low recognition rate, especially for incomplete and small objects. To overcome these drawbacks, this paper proposes a semantic labeling framework that combines multiple aggregation levels (point-segment-object) of features and contextual features to recognize road facilities, such as road surfaces, road boundaries, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, and cars, for highway infrastructure inventory. The proposed method first identifies ground and non-ground points, and extracts road surface facilities from ground points. Non-ground points are segmented into individual candidate objects based on the proposed multi-rule region growing method. Then, the multiple aggregation levels of features and the contextual features (relative positions, relative directions, and spatial patterns) associated with each candidate object are calculated and fed into an SVM classifier to label the corresponding candidate object. The recognition performance of combining multiple aggregation levels and contextual features was compared with single-level (point, segment, or object) features using large-scale highway scene point clouds. 
Comparative studies demonstrated that the proposed semantic labeling framework significantly improves road facilities recognition precision (90.6%) and recall (91.2%), particularly for incomplete and small objects.
Neurocomputational bases of object and face recognition.
Biederman, I; Kalocsai, P
1997-01-01
A number of behavioural phenomena distinguish the recognition of faces and objects, even when members of a set of objects are highly similar. Because faces have the same parts in approximately the same relations, individuation of faces typically requires specification of the metric variation in a holistic and integral representation of the facial surface. The direct mapping of a hypercolumn-like pattern of activation onto a representation layer that preserves relative spatial filter values in a two-dimensional (2D) coordinate space, as proposed by C. von der Malsburg and his associates, may account for many of the phenomena associated with face recognition. An additional refinement, in which each column of filters (termed a 'jet') is centred on a particular facial feature (or fiducial point), allows selectivity of the input into the holistic representation to avoid incorporation of occluding or nearby surfaces. The initial hypercolumn representation also characterizes the first stage of object perception, but the image variation for objects at a given location in a 2D coordinate space may be too great to yield sufficient predictability directly from the output of spatial kernels. Consequently, objects can be represented by a structural description specifying qualitative (typically, non-accidental) characterizations of an object's parts, the attributes of the parts, and the relations among the parts, largely based on orientation and depth discontinuities (as shown by Hummel & Biederman). A series of experiments on the name priming or physical matching of complementary images (in the Fourier domain) of objects and faces documents that whereas face recognition is strongly dependent on the original spatial filter values, evidence from object recognition indicates strong invariance to these values, even when distinguishing among objects that are as similar as faces. PMID:9304687
Developmental Commonalities between Object and Face Recognition in Adolescence
Jüttner, Martin; Wakui, Elley; Petters, Dean; Davidoff, Jules
2016-01-01
In the visual perception literature, the recognition of faces has often been contrasted with that of non-face objects, in terms of differences with regard to the role of parts, part relations and holistic processing. However, recent evidence from developmental studies has begun to blur this sharp distinction. We review evidence for a protracted development of object recognition that is reminiscent of the well-documented slow maturation observed for faces. The prolonged development manifests itself in a retarded processing of metric part relations as opposed to that of individual parts and offers surprising parallels to developmental accounts of face recognition, even though the interpretation of the data is less clear with regard to holistic processing. We conclude that such results might indicate functional commonalities between the mechanisms underlying the recognition of faces and non-face objects, which are modulated by different task requirements in the two stimulus domains. PMID:27014176
NASA Technical Reports Server (NTRS)
Tescher, Andrew G. (Editor)
1989-01-01
Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.
Ball-scale based hierarchical multi-object recognition in 3D medical images
NASA Astrophysics Data System (ADS)
Bağci, Ulas; Udupa, Jayaram K.; Chen, Xinjian
2010-03-01
This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) a semi-automatic way of constructing a multi-object shape model assembly; (b) a novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images; (c) a hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate the following: (1) Incorporating a large number of objects improves the recognition accuracy dramatically. (2) The recognition algorithm can be thought of as a hierarchical framework in which quick placement of the model assembly constitutes coarse recognition and delineation itself constitutes the finest recognition. (3) Scale yields useful information about the relationship between the model assembly and any given image, such that recognition results in a placement of the model close to the actual pose without any elaborate searches or optimization. (4) Effective object recognition can make delineation most accurate.
Harris, Jill D; Cutmore, Tim R H; O'Gorman, John; Finnigan, Simon; Shum, David
2009-02-01
The aim of this study was to identify ERP correlates of perceptual object priming that are insensitive to factors affecting explicit, episodic memory. EEG was recorded from 21 participants while they performed a visual object recognition test on a combination of unstudied items and old items that were previously encountered during either a 'deep' or 'shallow' levels-of-processing (LOP) study task. The results demonstrated a midline P150 old/new effect which was sensitive only to objects' old/new status and not to the accuracy of recognition responses to old items, or to the LOP manipulation. Similar outcomes were observed for the subsequent P200 and N400 effects, the former of which had a parietal scalp maximum and the latter, a broadly distributed topography. In addition an LPC old/new effect typical of those reported in past ERP recognition studies was observed. These outcomes support the proposal that the P150 effect is reflective of perceptual object priming and moreover, provide novel evidence that this and the P200 effect are independent of explicit recognition memory process(es).
Lateral Entorhinal Cortex is Critical for Novel Object-Context Recognition
Wilson, David IG; Langston, Rosamund F; Schlesiger, Magdalene I; Wagner, Monica; Watanabe, Sakurako; Ainge, James A
2013-01-01
Episodic memory incorporates information about specific events or occasions including spatial locations and the contextual features of the environment in which the event took place. It has been modeled in rats using spontaneous exploration of novel configurations of objects, their locations, and the contexts in which they are presented. While we have a detailed understanding of how spatial location is processed in the brain relatively little is known about where the nonspatial contextual components of episodic memory are processed. Initial experiments measured c-fos expression during an object-context recognition (OCR) task to examine which networks within the brain process contextual features of an event. Increased c-fos expression was found in the lateral entorhinal cortex (LEC; a major hippocampal afferent) during OCR relative to control conditions. In a subsequent experiment it was demonstrated that rats with lesions of LEC were unable to recognize object-context associations yet showed normal object recognition and normal context recognition. These data suggest that contextual features of the environment are integrated with object identity in LEC and demonstrate that recognition of such object-context associations requires the LEC. This is consistent with the suggestion that contextual features of an event are processed in LEC and that this information is combined with spatial information from medial entorhinal cortex to form episodic memory in the hippocampus. © 2013 Wiley Periodicals, Inc. PMID:23389958
Auditory-visual object recognition time suggests specific processing for animal sounds.
Suied, Clara; Viaud-Delmon, Isabelle
2009-01-01
Recognizing an object requires binding together several cues, which may be distributed across different sensory modalities, and ignoring competing information originating from other objects. In addition, knowledge of the semantic category of an object is fundamental to determine how we should react to it. Here we investigate the role of semantic categories in the processing of auditory-visual objects. We used an auditory-visual object-recognition task (go/no-go paradigm). We compared recognition times for two categories: a biologically relevant one (animals) and a non-biologically relevant one (means of transport). Participants were asked to react as fast as possible to target objects, presented in the visual and/or the auditory modality, and to withhold their response for distractor objects. A first main finding was that, when participants were presented with unimodal or bimodal congruent stimuli (an image and a sound from the same object), similar reaction times were observed for all object categories. Thus, there was no advantage in the speed of recognition for biologically relevant compared to non-biologically relevant objects. A second finding was that, in the presence of a biologically relevant auditory distractor, the processing of a target object was slowed down, whether or not it was itself biologically relevant. It seems impossible to effectively ignore an animal sound, even when it is irrelevant to the task. These results suggest a specific and mandatory processing of animal sounds, possibly due to phylogenetic memory and consistent with the idea that hearing is particularly efficient as an alerting sense. They also highlight the importance of taking into account the auditory modality when investigating the way object concepts of biologically relevant categories are stored and retrieved.
Behavioral model of visual perception and recognition
NASA Astrophysics Data System (ADS)
Rybak, Ilya A.; Golovan, Alexander V.; Gusakova, Valentina I.
1993-09-01
In the processes of visual perception and recognition, human eyes actively select essential information by way of successive fixations at the most informative points of the image. A behavioral program defining a scanpath of the image is formed at the stage of learning (object memorizing) and consists of sequential motor actions, which are shifts of attention from one point of fixation to another, and sensory signals expected to arrive in response to each shift of attention. In the modern view of the problem, invariant object recognition is provided by the following: (1) separate processing of `what' (object features) and `where' (spatial features) information at high levels of the visual system; (2) mechanisms of visual attention using `where' information; (3) representation of `what' information in an object-based frame of reference (OFR). However, most recent models of vision based on an OFR have demonstrated invariant recognition only of simple objects, such as letters or binary objects without background, i.e. objects to which a frame of reference is easily attached. In contrast, we use not an OFR but a feature-based frame of reference (FFR), connected with the basic feature (an edge) at the fixation point. This provides our model with the ability to represent complex objects in gray-level images invariantly, but demands realization of the behavioral aspects of vision described above. The developed model contains a neural network subsystem of low-level vision, which extracts a set of primary features (edges) at each fixation, and a high-level subsystem consisting of `what' (Sensory Memory) and `where' (Motor Memory) modules. The resolution of primary feature extraction decreases with distance from the point of fixation. The FFR provides both the invariant representation of object features in Sensory Memory and the shifts of attention in Motor Memory.
Object recognition consists of successive recall (from Motor Memory) and execution of shifts of attention, together with successive verification of the expected sets of features (stored in Sensory Memory). The model demonstrates recognition of complex objects (such as faces) in gray-level images, invariant with respect to shift, rotation, and scale.
Comparison of Object Recognition Behavior in Human and Monkey
Rajalingham, Rishi; Schmidt, Kailyn
2015-01-01
Although the rhesus monkey is used widely as an animal model of human visual processing, it is not known whether invariant visual object recognition behavior is quantitatively comparable across monkeys and humans. To address this question, we systematically compared the core object recognition behavior of two monkeys with that of human subjects. To test true object recognition behavior (rather than image matching), we generated several thousand naturalistic synthetic images of 24 basic-level objects with high variation in viewing parameters and image background. Monkeys were trained to perform binary object recognition tasks using a match-to-sample paradigm. Data from 605 human subjects performing the same tasks on Mechanical Turk were aggregated to characterize “pooled human” object recognition behavior, and data from 33 separate Mechanical Turk subjects were used to characterize individual human subject behavior. Our results show that monkeys learn each new object in a few days, after which they not only match mean human performance but show a pattern of object confusion that is highly correlated with pooled human confusion patterns and is statistically indistinguishable from individual human subjects. Importantly, this shared human and monkey pattern of 3D object confusion is not shared with low-level visual representations (pixels, V1+; models of the retina and primary visual cortex) but is shared with a state-of-the-art computer vision feature representation. Together, these results are consistent with the hypothesis that rhesus monkeys and humans share a common neural shape representation that directly supports object perception. SIGNIFICANCE STATEMENT To date, several mammalian species have shown promise as animal models for studying the neural mechanisms underlying high-level visual processing in humans.
In light of this diversity, making tight comparisons between nonhuman and human primates is particularly critical in determining the best use of nonhuman primates to further the goal of the field of translating knowledge gained from animal models to humans. To the best of our knowledge, this study is the first systematic attempt at comparing a high-level visual behavior of humans and macaque monkeys. PMID:26338324
USDA-ARS?s Scientific Manuscript database
Objective Previously, four months of a blueberry-enriched (BB) antioxidant diet prevented impaired object recognition memory in aged rats. Experiment 1 determined whether one and two-month BB diets would have a similar effect and whether the benefits would disappear promptly after terminating the d...
Qualitative Differences in the Representation of Spatial Relations for Different Object Classes
ERIC Educational Resources Information Center
Cooper, Eric E.; Brooks, Brian E.
2004-01-01
Two experiments investigated whether the representations used for animal, produce, and object recognition code spatial relations in a similar manner. Experiment 1 tested the effects of planar rotation on the recognition of animals and nonanimal objects. Response times for recognizing animals followed an inverted U-shaped function, whereas those…
Bimodal Benefits on Objective and Subjective Outcomes for Adult Cochlear Implant Users
Heo, Ji-Hye; Lee, Won-Sang
2013-01-01
Background and Objectives Given that only a few studies have focused on the bimodal benefits on objective and subjective outcomes and emphasized the importance of individual data, the present study aimed to measure the bimodal benefits on the objective and subjective outcomes for adults with cochlear implant. Subjects and Methods Fourteen listeners with bimodal devices were tested on the localization and recognition abilities using environmental sounds, 1-talker, and 2-talker speech materials. The localization ability was measured through an 8-loudspeaker array. For the recognition measures, listeners were asked to repeat the sentences or say the environmental sounds the listeners heard. As a subjective questionnaire, three domains of Korean-version of Speech, Spatial, Qualities of Hearing scale (K-SSQ) were used to explore any relationships between objective and subjective outcomes. Results Based on the group-mean data, the bimodal hearing enhanced both localization and recognition regardless of test material. However, the inter- and intra-subject variability appeared to be large across test materials for both localization and recognition abilities. Correlation analyses revealed that the relationships were not always consistent between the objective outcomes and the subjective self-reports with bimodal devices. Conclusions Overall, this study supports significant bimodal advantages on localization and recognition measures, yet the large individual variability in bimodal benefits should be considered carefully for the clinical assessment as well as counseling. The discrepant relations between objective and subjective results suggest that the bimodal benefits in traditional localization or recognition measures might not necessarily correspond to the self-reported subjective advantages in everyday listening environments. PMID:24653909
Object Recognition Memory and the Rodent Hippocampus
ERIC Educational Resources Information Center
Broadbent, Nicola J.; Gaskin, Stephane; Squire, Larry R.; Clark, Robert E.
2010-01-01
In rodents, the novel object recognition task (NOR) has become a benchmark task for assessing recognition memory. Yet, despite its widespread use, a consensus has not developed about which brain structures are important for task performance. We assessed both the anterograde and retrograde effects of hippocampal lesions on performance in the NOR…
Self-Recognition in Autistic Children.
ERIC Educational Resources Information Center
Dawson, Geraldine; McKissick, Fawn Celeste
1984-01-01
Fifteen autistic children (four to six years old) were assessed for visual self-recognition ability, as well as for object permanence and gestural imitation. It was found that 13 of 15 autistic children showed evidence of self-recognition. Consistent relationships were suggested between self-cognition and object permanence but not between…
Aging and solid shape recognition: Vision and haptics.
Norman, J Farley; Cheeseman, Jacob R; Adkins, Olivia C; Cox, Andrea G; Rogers, Connor E; Dowell, Catherine J; Baxter, Michael W; Norman, Hideko F; Reyes, Cecia M
2015-10-01
The ability of 114 younger and older adults to recognize naturally-shaped objects was evaluated in three experiments. The participants viewed or haptically explored six randomly-chosen bell peppers (Capsicum annuum) in a study session and were later required to judge whether each of twelve bell peppers was "old" (previously presented during the study session) or "new" (not presented during the study session). When recognition memory was tested immediately after study, the younger adults' (Experiment 1) performance for vision and haptics was identical when the individual study objects were presented once. Vision became superior to haptics, however, when the individual study objects were presented multiple times. When 10- and 20-min delays (Experiment 2) were inserted in between study and test sessions, no significant differences occurred between vision and haptics: recognition performance in both modalities was comparable. When the recognition performance of older adults was evaluated (Experiment 3), a negative effect of age was found for visual shape recognition (younger adults' overall recognition performance was 60% higher). There was no age effect, however, for haptic shape recognition. The results of the present experiments indicate that the visual recognition of natural object shape is different from haptic recognition in multiple ways: visual shape recognition can be superior to that of haptics and is affected by aging, while haptic shape recognition is less accurate and unaffected by aging. Copyright © 2015 Elsevier Ltd. All rights reserved.
Intelligent data processing of an ultrasonic sensor system for pattern recognition improvements
NASA Astrophysics Data System (ADS)
Na, Seung You; Park, Min-Sang; Hwang, Won-Gul; Kee, Chang-Doo
1999-05-01
Though conventional time-of-flight ultrasonic sensor systems are popular due to their low cost and simplicity, their usage is rather narrowly restricted to object detection and distance readings. There is a strong need to enlarge the amount of environmental information available to mobile applications in order to provide intelligent autonomy. Wide sectors of such neighboring-object recognition problems can be satisfactorily handled with coarse vision data such as sonar maps instead of accurate laser or optic measurements. For object pattern recognition, ultrasonic sensors have the inherent shortcomings of poor directionality and specularity, which result in low spatial resolution and indistinctiveness of object patterns. To resolve these problems, an array with an increased number of sensor elements has been used for large objects. In this paper we propose a sensor array system with improved recognition capability, using electronic circuits accompanying the sensor array and neuro-fuzzy processing for data fusion. The circuit changes the transmitter output voltages of the array elements in several steps. Relying upon the known sensor characteristics, a set of different return signals from neighboring sensors is manipulated to provide enhanced pattern recognition with respect to the inclination angle, size, and shift, as well as the distance, of objects. The results show improved resolution of the measurements for smaller targets.
Development of a sonar-based object recognition system
NASA Astrophysics Data System (ADS)
Ecemis, Mustafa Ihsan
2001-02-01
Sonars are used extensively in mobile robotics for obstacle detection, ranging and avoidance. However, these range-finding applications do not exploit the full range of information carried in sonar echoes. In addition, mobile robots need robust object recognition systems. Therefore, a simple and robust object recognition system using ultrasonic sensors may have a wide range of applications in robotics. This dissertation develops and analyzes an object recognition system that uses ultrasonic sensors of the type commonly found on mobile robots. Three principal experiments are used to test the sonar recognition system: object recognition at various distances, object recognition during unconstrained motion, and softness discrimination. The hardware setup, consisting of an inexpensive Polaroid sonar and a data acquisition board, is described first. The software for ultrasound signal generation, echo detection, data collection, and data processing is then presented. Next, the dissertation describes two methods to extract information from the echoes, one in the frequency domain and the other in the time domain. The system uses the fuzzy ARTMAP neural network to recognize objects on the basis of the information content of their echoes. In order to demonstrate that the performance of the system does not depend on the specific classification method being used, the K-Nearest Neighbors (KNN) algorithm is also implemented. KNN yields a test accuracy similar to fuzzy ARTMAP in all experiments. Finally, the dissertation describes a method for extracting features from the envelope function in order to reduce the dimension of the input vector used by the classifiers. Decreasing the size of the input vectors reduces the memory requirements of the system and makes it run faster. It is shown that this method does not affect the performance of the system dramatically and is more appropriate for some tasks.
The results of these experiments demonstrate that sonar can be used to develop a low-cost, low-computation system for real-time object recognition tasks on mobile robots. This system differs from all previous approaches in that it is relatively simple, robust, fast, and inexpensive.
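The pipeline this dissertation describes (reduce each echo to a low-dimensional envelope feature vector, then classify with KNN) can be sketched as follows. This is an illustrative reconstruction, not the dissertation's actual code; the binning scheme and all function names are assumptions:

```python
import numpy as np

def envelope_features(echo, n_bins=8):
    """Crude envelope descriptor: rectify the echo and average its
    magnitude over fixed time bins, reducing the input dimension
    (a stand-in for the envelope feature extraction in the abstract)."""
    env = np.abs(np.asarray(echo, dtype=float))
    usable = len(env) - len(env) % n_bins  # drop trailing samples
    return env[:usable].reshape(n_bins, -1).mean(axis=1)

def knn_predict(train_x, train_y, query, k=3):
    """Classify a feature vector by majority vote of its k nearest
    training vectors under Euclidean distance."""
    d = np.linalg.norm(train_x - query, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]
```

With features this small, KNN needs no training phase at all, which is consistent with the abstract's point that the system's performance does not hinge on the particular classifier.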
Object recognition for autonomous robot utilizing distributed knowledge database
NASA Astrophysics Data System (ADS)
Takatori, Jiro; Suzuki, Kenji; Hartono, Pitoyo; Hashimoto, Shuji
2003-10-01
In this paper we present a novel method of object recognition utilizing a remote knowledge database for an autonomous robot. The developed robot has three robot arms with different sensors: two CCD cameras and haptic sensors. It can see, touch, and move the target object from different directions. Referring to a remote knowledge database of geometry and materials, the robot observes and handles objects to understand them, including their physical characteristics.
The Dark Side of Context: Context Reinstatement Can Distort Memory.
Doss, Manoj K; Picart, Jamila K; Gallo, David A
2018-04-01
It is widely assumed that context reinstatement benefits memory, but our experiments revealed that context reinstatement can systematically distort memory. Participants viewed pictures of objects superimposed over scenes, and we later tested their ability to differentiate these old objects from similar new objects. Context reinstatement was manipulated by presenting objects on the reinstated or switched scene at test. Not only did context reinstatement increase correct recognition of old objects, but it also consistently increased incorrect recognition of similar objects as old ones. This false recognition effect was robust, as it was found in several experiments, occurred after both immediate and delayed testing, and persisted with high confidence even after participants were warned to avoid the distorting effects of context. To explain this memory illusion, we propose that context reinstatement increases the likelihood of confusing conceptual and perceptual information, potentially in medial temporal brain regions that integrate this information.
Spatiotemporal dynamics underlying object completion in human ventral visual cortex.
Tang, Hanlin; Buia, Calin; Madhavan, Radhika; Crone, Nathan E; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel
2014-08-06
Natural vision often involves recognizing objects from partial information. Recognition of objects from parts presents a significant challenge for theories of vision because it requires spatial integration and extrapolation from prior knowledge. Here we recorded intracranial field potentials of 113 visually selective electrodes from epilepsy patients in response to whole and partial objects. Responses along the ventral visual stream, particularly the inferior occipital and fusiform gyri, remained selective despite showing only 9%-25% of the object areas. However, these visually selective signals emerged ∼100 ms later for partial versus whole objects. These processing delays were particularly pronounced in higher visual areas within the ventral stream. This latency difference persisted when controlling for changes in contrast, signal amplitude, and the strength of selectivity. These results argue against a purely feedforward explanation of recognition from partial information, and provide spatiotemporal constraints on theories of object recognition that involve recurrent processing. Copyright © 2014 Elsevier Inc. All rights reserved.
3D visual mechanism by neural networking
NASA Astrophysics Data System (ADS)
Sugiyama, Shigeki
2007-04-01
Some computer vision systems are commercially available, but they fall well short of everyday use for tasks such as security monitoring or recognizing the behaviour of a target object. Such sensing applications require a detailed description of an object, such as the distance to the object, its detailed figure, and its edges, and the mechanisms for obtaining these are not clearly understood in present recognition systems. To address this, this paper studies how a pair of human eyes recognizes distance, object edges, and objects themselves, in order to extract the basic essences of visual mechanisms. These basic mechanisms of object recognition are then simplified and extended logically for application to a computer vision system. Some results of these studies are presented in this paper.
Products recognition on shop-racks from local scale-invariant features
NASA Astrophysics Data System (ADS)
Zawistowski, Jacek; Kurzejamski, Grzegorz; Garbat, Piotr; Naruniec, Jacek
2016-04-01
This paper presents a system designed for multi-object detection, adapted to searching for products on market shelves. The system uses well-known binary keypoint detection algorithms to find characteristic points in the image. One of its main ideas is object recognition based on the Implicit Shape Model method. The authors propose many improvements to the algorithm. In the original method, fiducial points are matched with a very simple function, which limits the number of object parts that can be successfully separated; various classification methods may instead be validated in order to achieve higher performance. Such an extension implies research on a training procedure able to deal with many object categories. The proposed solution opens new possibilities for many algorithms demanding fast and robust multi-object recognition.
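The Implicit-Shape-Model-style detection this abstract builds on can be sketched minimally: each matched keypoint carries a stored offset to the object centre learned during training, matches cast votes into a coarse accumulator, and peaks mark candidate detections. The grid resolution, match format, and function name below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def ism_vote(matches, img_size, cell=20):
    """Accumulate Hough-style votes for object centres.

    matches  -- iterable of ((kx, ky), (ox, oy)) pairs: a keypoint
                location and its stored offset to the object centre.
    img_size -- (height, width) of the image.
    Returns a coarse accumulator grid; local maxima are candidate
    object centres.
    """
    h, w = img_size
    acc = np.zeros((h // cell + 1, w // cell + 1))
    for (kx, ky), (ox, oy) in matches:
        cx, cy = kx + ox, ky + oy          # voted centre position
        if 0 <= cx < w and 0 <= cy < h:    # ignore out-of-image votes
            acc[int(cy) // cell, int(cx) // cell] += 1
    return acc
```

Three keypoints agreeing on one centre produce a single strong peak, while mismatched keypoints scatter their votes, which is why the quality of the match function directly limits how many object parts can be separated.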
Motion Imagery Processing and Exploitation (MIPE)
2013-01-01
facial recognition, i.e., the identification of a specific person. Object detection is often (but not always) considered a prerequisite for instance... The goal of segmentation is to distinguish objects and identify boundaries in images. Some of the earliest approaches to facial recognition involved... methods of instance recognition are at varying levels of maturity. Facial recognition methods are arguably the most mature; the technology is well
Grouping in object recognition: the role of a Gestalt law in letter identification.
Pelli, Denis G; Majaj, Najib J; Raizman, Noah; Christian, Christopher J; Kim, Edward; Palomares, Melanie C
2009-02-01
The Gestalt psychologists reported a set of laws describing how vision groups elements to recognize objects. The Gestalt laws "prescribe for us what we are to recognize 'as one thing'" (Kohler, 1920). Were they right? Does object recognition involve grouping? Tests of the laws of grouping have been favourable, but mostly assessed only detection, not identification, of the compound object. The grouping of elements seen in the detection experiments with lattices and "snakes in the grass" is compelling, but falls far short of the vivid everyday experience of recognizing a familiar, meaningful, named thing, which mediates the ordinary identification of an object. Thus, after nearly a century, there is hardly any evidence that grouping plays a role in ordinary object recognition. To assess grouping in object recognition, we made letters out of grating patches and measured threshold contrast for identifying these letters in visual noise as a function of perturbation of grating orientation, phase, and offset. We define a new measure, "wiggle", to characterize the degree to which these various perturbations violate the Gestalt law of good continuation. We find that efficiency for letter identification is inversely proportional to wiggle and is wholly determined by wiggle, independent of how the wiggle was produced. Thus the effects of three different kinds of shape perturbation on letter identifiability are predicted by a single measure of goodness of continuation. This shows that letter identification obeys the Gestalt law of good continuation and may be the first confirmation of the original Gestalt claim that object recognition involves grouping.
Visual memory in unilateral spatial neglect: immediate recall versus delayed recognition.
Moreh, Elior; Malkinson, Tal Seidel; Zohary, Ehud; Soroker, Nachum
2014-09-01
Patients with unilateral spatial neglect (USN) often show impaired performance in spatial working memory tasks, apart from the difficulty retrieving "left-sided" spatial data from long-term memory, shown in the "piazza effect" by Bisiach and colleagues. This study's aim was to compare the effect of the spatial position of a visual object on immediate and delayed memory performance in USN patients. Specifically, immediate verbal recall performance, tested using a simultaneous presentation of four visual objects in four quadrants, was compared with memory in a later-provided recognition task, in which objects were individually shown at the screen center. Unlike healthy controls, USN patients showed a left-side disadvantage and a vertical bias in the immediate free recall task (69% vs. 42% recall for right- and left-sided objects, respectively). In the recognition task, the patients correctly recognized half of "old" items, and their correct rejection rate was 95.5%. Importantly, when the analysis focused on previously recalled items (in the immediate task), no statistically significant difference was found in the delayed recognition of objects according to their original quadrant of presentation. Furthermore, USN patients were able to recollect the correct original location of the recognized objects in 60% of the cases, well beyond chance level. This suggests that the memory trace formed in these cases was not only semantic but also contained a visuospatial tag. Finally, successful recognition of objects missed in recall trials points to formation of memory traces for neglected contralesional objects, which may become accessible to retrieval processes in explicit memory.
Rotation And Scale Invariant Object Recognition Using A Distributed Associative Memory
NASA Astrophysics Data System (ADS)
Wechsler, Harry; Zimmerman, George Lee
1988-04-01
This paper describes an approach to 2-dimensional object recognition. Complex-log conformal mapping is combined with a distributed associative memory to create a system which recognizes objects regardless of changes in rotation or scale. Recalled information from the memorized database is used to classify an object, reconstruct the memorized version of the object, and estimate the magnitude of changes in scale or rotation. The system response is resistant to moderate amounts of noise and occlusion. Several experiments, using real, gray scale images, are presented to show the feasibility of our approach.
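The complex-log (log-polar) conformal mapping at the heart of this approach turns rotation about the image centre into a circular shift along the angle axis and uniform scaling into a shift along the log-radius axis, so a translation-tolerant matcher (such as the distributed associative memory used here) becomes rotation- and scale-tolerant. A minimal nearest-neighbour resampling sketch, with illustrative parameter names:

```python
import numpy as np

def logpolar(img, n_r=64, n_theta=64):
    """Resample a square grayscale image onto a log-polar grid.

    Rotation about the centre becomes a circular shift along the
    theta axis; uniform scaling becomes a shift along the log-r axis.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    # log-spaced radii (from 1 pixel out to the image border) and uniform angles
    rs = np.exp(np.linspace(0.0, np.log(r_max), n_r))
    ts = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(rs, ts, indexing="ij")
    # nearest-neighbour lookup back into the Cartesian image
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]
```

Cross-correlating two log-polar maps then recovers the relative rotation and scale as a 2-D shift, which matches the abstract's claim that the system can estimate the magnitude of those changes.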
Brown, M.W.; Barker, G.R.I.; Aggleton, J.P.; Warburton, E.C.
2012-01-01
Findings of pharmacological studies that have investigated the involvement of specific regions of the brain in recognition memory are reviewed. The particular emphasis of the review concerns what such studies indicate about the role of the perirhinal cortex in recognition memory. Most of the studies involve rats, and most have investigated recognition memory for objects. Pharmacological studies provide a large body of evidence supporting the essential role of the perirhinal cortex in the acquisition, consolidation and retrieval of object recognition memory. Such studies provide increasingly detailed evidence concerning both the neurotransmitter systems and the underlying intracellular mechanisms involved in recognition memory processes. They have provided evidence in support of synaptic weakening as a major synaptic plastic process within perirhinal cortex underlying object recognition memory. They have also supplied confirmatory evidence that there is more than one synaptic plastic process involved. The demonstrated necessity of intracellular signalling mechanisms related to synaptic modification within perirhinal cortex for long-term recognition memory establishes a central role for the region in the information storage underlying such memory. Perirhinal cortex is thereby established as an information storage site rather than solely a processing station. Pharmacological studies have also supplied new evidence concerning the detailed roles of other regions, including the hippocampus and the medial prefrontal cortex, in different types of recognition memory tasks that include a spatial or temporal component. In so doing, they have also further defined the contribution of perirhinal cortex to such tasks.
To date it appears that the contribution of perirhinal cortex to associative and temporal order memory reflects that in simple object recognition memory, namely that perirhinal cortex provides information concerning objects and their prior occurrence (novelty/familiarity). PMID:22841990
Recognition vs Reverse Engineering in Boolean Concepts Learning
ERIC Educational Resources Information Center
Shafat, Gabriel; Levin, Ilya
2012-01-01
This paper deals with two types of logical problems--recognition problems and reverse engineering problems, and with the interrelations between these types of problems. The recognition problems are modeled in the form of a visual representation of various objects in a common pattern, with a composition of represented objects in the pattern.…
Orientation estimation of anatomical structures in medical images for object recognition
NASA Astrophysics Data System (ADS)
Bağci, Ulaş; Udupa, Jayaram K.; Chen, Xinjian
2011-03-01
Recognition of anatomical structures is an important step in model-based medical image segmentation. It provides pose estimates of objects and information about roughly "where" the objects are in the image, distinguishing them from other object-like entities. In previous work,1 we presented a general method of model-based multi-object recognition to assist in segmentation (delineation) tasks. It exploits the pose relationship that can be encoded, via the concept of ball scale (b-scale), between the binary training objects and their associated grey images. The goal was to place the model, in a single shot, close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. Unlike position and scale parameters, we observe that orientation parameters require more attention when estimating the pose of the model, as even small differences in orientation can lead to inappropriate recognition. Motivated by the non-Euclidean nature of the pose information, we propose in this paper the use of non-Euclidean metrics to estimate the orientation of anatomical structures for more accurate recognition and segmentation. We statistically analyze and evaluate the following metrics for orientation estimation: Euclidean, Log-Euclidean, Root-Euclidean, Procrustes Size-and-Shape, and mean Hermitian metrics. The results show that the mean Hermitian and Cholesky decomposition metrics provide more accurate orientation estimates than the other Euclidean and non-Euclidean metrics.
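Among the metrics this abstract compares, the Log-Euclidean distance has a particularly simple concrete form: map each matrix through the matrix logarithm and take the Frobenius norm of the difference. The sketch below assumes orientation is encoded as a symmetric positive-definite matrix (e.g. a scatter or tensor representation); that encoding and the function names are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def spd_log(m):
    """Matrix logarithm of a symmetric positive-definite matrix,
    via eigendecomposition: log(M) = V diag(log w) V^T."""
    w, v = np.linalg.eigh(m)
    return (v * np.log(w)) @ v.T

def log_euclidean_dist(a, b):
    """Log-Euclidean distance between two SPD matrices:
    the Frobenius norm of log(A) - log(B)."""
    return np.linalg.norm(spd_log(a) - spd_log(b), "fro")
```

Unlike the plain Euclidean distance between matrices, this metric respects the curved geometry of the SPD manifold, which is the motivation the abstract gives for preferring non-Euclidean metrics when small orientation differences matter.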
Cultural differences in visual object recognition in 3-year-old children
Kuwabara, Megumi; Smith, Linda B.
2016-01-01
Recent research indicates that culture penetrates fundamental processes of perception and cognition (e.g. Nisbett & Miyamoto, 2005). Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. The three tasks (n=128) examined the degree to which nonface object recognition by 3-year-olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects in which only 3 diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children, and the likelihood of recognition increased for U.S., but not Japanese, children when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural processing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children’s recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than U.S. children. The pattern suggests early cross-cultural differences in visual processing; findings that raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. PMID:26985576
Associative (prosop)agnosia without (apparent) perceptual deficits: a case-study.
Anaki, David; Kaufman, Yakir; Freedman, Morris; Moscovitch, Morris
2007-04-09
In associative agnosia, early perceptual processing of faces or objects is considered to be intact, while the ability to access stored semantic information about the individual face or object is impaired. Recent claims, however, have asserted that associative agnosia is also characterized by deficits at the perceptual level, which are too subtle to be detected by current neuropsychological tests. On this view, the impaired identification of famous faces or common objects in associative agnosia stems from difficulties in extracting the minute perceptual details required to identify a face or an object. In the present study, we report the case of patient DBO, who has a left occipital infarct and shows impaired object and famous-face recognition. Despite his disability, he exhibits a face inversion effect and is able to select a famous face from among non-famous distractors. In addition, his performance is normal on immediate and delayed recognition memory for faces whose external features were deleted. His deficits in face recognition are apparent only when he is required to name a famous face, or to select two faces from among a triad of famous figures based on their semantic relationships (a task which does not require access to names). The nature of his deficits in object perception and recognition is similar to that of his impairments in the face domain. This pattern of behavior supports the notion that apperceptive and associative agnosia reflect distinct and dissociated deficits, which result from damage to different stages of the face and object recognition process.
Tc1 mouse model of trisomy-21 dissociates properties of short- and long-term recognition memory.
Hall, Jessica H; Wiseman, Frances K; Fisher, Elizabeth M C; Tybulewicz, Victor L J; Harwood, John L; Good, Mark A
2016-04-01
The present study examined memory function in Tc1 mice, a transchromosomic model of Down syndrome (DS). Tc1 mice demonstrated an unusual delay-dependent deficit in recognition memory. More specifically, Tc1 mice showed intact immediate (30 s), impaired short-term (10 min) and intact long-term (24 h) memory for objects. A similar pattern was observed for olfactory stimuli, confirming the generality of the pattern across sensory modalities. The specificity of the behavioural deficits in Tc1 mice was confirmed using APP overexpressing mice that showed the opposite pattern of object memory deficits. In contrast to object memory, Tc1 mice showed no deficit in either immediate or long-term memory for object-in-place information. Similarly, Tc1 mice showed no deficit in short-term memory for object-location information. The latter result indicates that Tc1 mice were able to detect and react to spatial novelty at the same delay interval that was sensitive to an object novelty recognition impairment. These results demonstrate (1) that novelty detection per se and (2) the encoding of visuo-spatial information was not disrupted in adult Tc1 mice. The authors conclude that the task specific nature of the short-term recognition memory deficit suggests that the trisomy of genes on human chromosome 21 in Tc1 mice impacts on (perirhinal) cortical systems supporting short-term object and olfactory recognition memory. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Figure-ground organization and object recognition processes: an interactive account.
Vecera, S P; O'Reilly, R C
1998-04-01
Traditional bottom-up models of visual processing assume that figure-ground organization precedes object recognition. This assumption seems logically necessary: How can object recognition occur before a region is labeled as figure? However, some behavioral studies find that familiar regions are more likely to be labeled figure than less familiar regions, a problematic finding for bottom-up models. An interactive account is proposed in which figure-ground processes receive top-down input from object representations in a hierarchical system. A graded, interactive computational model is presented that accounts for behavioral results in which familiarity effects are found. The interactive model offers an alternative conception of visual processing to bottom-up models.
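A toy numerical sketch of such an interactive account (all numbers, weights, and function names here are illustrative assumptions, not the published model): figure-ground units receive bottom-up support plus top-down excitation from an object layer, so a region that matches a familiar object settles to a higher figure value even when bottom-up support is equal.

```python
import numpy as np

def settle(bottom_up, familiarity, steps=100, lr=0.2, top_down=0.5):
    # Graded, interactive settling: each candidate figure region gets
    # bottom-up support plus top-down excitation from an object layer
    # that responds only to regions it "knows" (familiarity weights).
    fig = np.zeros_like(bottom_up)
    for _ in range(steps):
        obj = familiarity * fig                          # object-layer activity
        net = bottom_up + top_down * obj                 # combined input
        fig += lr * (1.0 / (1.0 + np.exp(-net)) - fig)   # move toward sigmoid(net)
    return fig

# Two regions with identical bottom-up support; only region 0 matches
# a stored object representation, so it settles to the higher value.
support = np.array([1.0, 1.0])
familiarity = np.array([1.0, 0.0])
print(settle(support, familiarity))
```

The familiarity effect falls out of the loop itself: because figure activity feeds the object layer and the object layer feeds back, the familiar region's advantage is graded rather than all-or-none, which is the signature of the interactive account.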
The uncrowded window of object recognition
Pelli, Denis G; Tillman, Katharine A
2009-01-01
It is now emerging that vision is usually limited by object spacing rather than size. The visual system recognizes an object by detecting and then combining its features. ‘Crowding’ occurs when objects are too close together and features from several objects are combined into a jumbled percept. Here, we review the explosion of studies on crowding—in grating discrimination, letter and face recognition, visual search, selective attention, and reading—and find a universal principle, the Bouma law. The critical spacing required to prevent crowding is equal for all objects, although the effect is weaker between dissimilar objects. Furthermore, critical spacing at the cortex is independent of object position, and critical spacing at the visual field is proportional to object distance from fixation. The region where object spacing exceeds critical spacing is the ‘uncrowded window’. Observers cannot recognize objects outside of this window and its size limits the speed of reading and search. PMID:18828191
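The Bouma law summarized above can be written down directly. The sketch below assumes the commonly cited proportionality constant of roughly 0.5; the constant and the function names are illustrative, not taken from this review:

```python
def critical_spacing(eccentricity_deg, bouma_constant=0.5):
    # Bouma law: the spacing needed to escape crowding grows linearly
    # with distance from fixation (eccentricity, in degrees).
    return bouma_constant * eccentricity_deg

def is_crowded(spacing_deg, eccentricity_deg, bouma_constant=0.5):
    # An object is crowded (and hence unrecognizable) when its
    # neighbors fall closer than the critical spacing.
    return spacing_deg < critical_spacing(eccentricity_deg, bouma_constant)

# A letter 10 degrees from fixation needs about 5 degrees of clear space:
print(critical_spacing(10.0))  # -> 5.0
print(is_crowded(2.0, 10.0))   # -> True: inside the crowded zone
print(is_crowded(6.0, 10.0))   # -> False: inside the uncrowded window
```

The "uncrowded window" is then simply the set of visual-field locations where actual object spacing exceeds this critical spacing.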
Size-Sensitive Perceptual Representations Underlie Visual and Haptic Object Recognition
Craddock, Matt; Lawson, Rebecca
2009-01-01
A variety of similarities between visual and haptic object recognition suggests that the two modalities may share common representations. However, it is unclear whether such common representations preserve low-level perceptual features or whether transfer between vision and haptics is mediated by high-level, abstract representations. Two experiments used a sequential shape-matching task to examine the effects of size changes on unimodal and crossmodal visual and haptic object recognition. Participants felt or saw 3D plastic models of familiar objects. The two objects presented on a trial were either the same size or different sizes and were the same shape or different but similar shapes. Participants were told to ignore size changes and to match on shape alone. In Experiment 1, size changes on same-shape trials impaired performance similarly for both visual-to-visual and haptic-to-haptic shape matching. In Experiment 2, size changes impaired performance on both visual-to-haptic and haptic-to-visual shape matching and there was no interaction between the cost of size changes and direction of transfer. Together the unimodal and crossmodal matching results suggest that the same, size-specific perceptual representations underlie both visual and haptic object recognition, and indicate that crossmodal memory for objects must be at least partly based on common perceptual representations. PMID:19956685
Pattern recognition of the targets with help of polarization properties of the signal
NASA Astrophysics Data System (ADS)
Ponomaryov, Volodymyr I.; de Rivera, Luis N.; Castellanos, Aldo B.; Popov, Anatoly V.
1999-10-01
We propose using polarimetric 3-cm radar to recognize targets against background scattering from the surface and from weather objects. The following polarization characteristics were investigated: the amplitudes of the polarization matrix elements, an anisotropy coefficient, a depolarization coefficient, and an asymmetry coefficient. The energy cross-section varied by less than 1 dB at ranges up to 15 km and by less than 1.5 dB at ranges up to 100 km. The experiments used urban objects and 6 ships of small displacement having the closest values of backscattering cross-section. The analysis showed that the polarization selection factor for anisotropic objects and weather objects had values of about 0.02-0.08, and that the polarimetric correlation factor was about 0.7-0.8 for hydrometeors, about 0.8-0.9 for the earth surface, and from 0.33 to 0.7 for the sea surface. Results of a recognition algorithm for the classes 'concrete objects' and 'metal objects' are presented as an example in the paper. The experiments showed that the probability of correct recognition of the identified objects ranged from 0.93 to 0.97.
Exploiting range imagery: techniques and applications
NASA Astrophysics Data System (ADS)
Armbruster, Walter
2009-07-01
Practically no applications exist for which automatic processing of 2D intensity imagery can equal human visual perception. This is not the case for range imagery. The paper gives examples of 3D laser radar applications, for which automatic data processing can exceed human visual cognition capabilities and describes basic processing techniques for attaining these results. The examples are drawn from the fields of helicopter obstacle avoidance, object detection in surveillance applications, object recognition at high range, multi-object-tracking, and object re-identification in range image sequences. Processing times and recognition performances are summarized. The techniques used exploit the bijective continuity of the imaging process as well as its independence of object reflectivity, emissivity and illumination. This allows precise formulations of the probability distributions involved in figure-ground segmentation, feature-based object classification and model based object recognition. The probabilistic approach guarantees optimal solutions for single images and enables Bayesian learning in range image sequences. Finally, due to recent results in 3D-surface completion, no prior model libraries are required for recognizing and re-identifying objects of quite general object categories, opening the way to unsupervised learning and fully autonomous cognitive systems.
Three-dimensional object recognition based on planar images
NASA Astrophysics Data System (ADS)
Mital, Dinesh P.; Teoh, Eam-Khwang; Au, K. C.; Chng, E. K.
1993-01-01
This paper presents the development and realization of a robotic vision system for the recognition of 3-dimensional (3-D) objects. The system can recognize a single object from among a group of known regular convex polyhedron objects that is constrained to lie on a calibrated flat platform. The approach adopted comprises a series of image processing operations on a single 2-dimensional (2-D) intensity image to derive an image line drawing. Subsequently, a feature matching technique is employed to determine 2-D spatial correspondences of the image line drawing with the model in the database. Besides its identification ability, the system can also provide important position and orientation information about the recognized object. The system was implemented on an IBM-PC AT machine executing at 8 MHz without the 80287 Maths Co-processor. In our overall performance evaluation, based on a test of 600 recognition cycles, the system demonstrated an accuracy of above 80% with recognition time well within 10 seconds. The recognition time is, however, indirectly dependent on the number of models in the database. The reliability of the system is also affected by illumination conditions, which must be clinically controlled as in any industrial robotic vision system.
1992-12-23
predominance of structural models of recognition, of which a recent example is the Recognition By Components (RBC) theory (Biederman, 1987). Structural… related to recent statistical theory (Huber, 1985; Friedman, 1987) and is derived from a biologically motivated computational theory (Bienenstock et… dimensional object recognition (Intrator and Gold, 1991). The method is related to recent statistical theory (Huber, 1985; Friedman, 1987) and is derived
Halliday, Drew W R; MacDonald, Stuart W S; Scherf, K Suzanne; Tanaka, James W
2014-01-01
Although not a core symptom of the disorder, individuals with autism often exhibit selective impairments in their face processing abilities. Importantly, the reciprocal connection between autistic traits and face perception has rarely been examined within the typically developing population. In this study, university participants from the social sciences, physical sciences, and humanities completed a battery of measures that assessed face, object and emotion recognition abilities, general perceptual-cognitive style, and sub-clinical autistic traits (the Autism Quotient (AQ)). We employed separate hierarchical multiple regression analyses to evaluate which factors could predict face recognition scores and AQ scores. Gender, object recognition performance, and AQ scores predicted face recognition behaviour. Specifically, males, individuals with more autistic traits, and those with lower object recognition scores performed more poorly on the face recognition test. Conversely, university major, gender and face recognition performance reliably predicted AQ scores. Science majors, males, and individuals with poor face recognition skills showed more autistic-like traits. These results suggest that the broader autism phenotype is associated with lower face recognition abilities, even among typically developing individuals.
What Three-Year-Olds Remember from Their Past: Long-Term Memory for Persons, Objects, and Actions
ERIC Educational Resources Information Center
Hirte, Monika; Graf, Frauke; Kim, Ziyon; Knopf, Monika
2017-01-01
From birth on, infants show long-term recognition memory for persons. Furthermore, infants from six months onwards are able to store and retrieve demonstrated actions over long-term intervals in deferred imitation tasks. Thus, information about the model demonstrating the object-related actions is stored and recognition memory for the objects as…
Representations of Shape in Object Recognition and Long-Term Visual Memory
1993-02-11
in anything other than linguistic terms (Biederman, 1987, for example). STATUS 1. Viewpoint-Dependent Features in Object Representation. Tarr and… is object-based orientation-independent representations sufficient for "basic-level" categorization (Biederman, 1987; Corballis, 1988). Alternatively… space. REFERENCES: Biederman, I. (1987). Recognition-by-components: A theory of human image understanding. Psychological Review, 94, 115-147. Cooper, L
Impaired recognition of faces and objects in dyslexia: Evidence for ventral stream dysfunction?
Sigurdardottir, Heida Maria; Ívarsson, Eysteinn; Kristinsdóttir, Kristjana; Kristjánsson, Árni
2015-09-01
The objective of this study was to establish whether or not dyslexics are impaired at the recognition of faces and other complex nonword visual objects. This would be expected based on a meta-analysis revealing that children and adult dyslexics show functional abnormalities within the left fusiform gyrus, a brain region high up in the ventral visual stream, which is thought to support the recognition of words, faces, and other objects. 20 adult dyslexics (M = 29 years) and 20 matched typical readers (M = 29 years) participated in the study. One dyslexic-typical reader pair was excluded based on Adult Reading History Questionnaire scores and IS-FORM reading scores. Performance was measured on 3 high-level visual processing tasks: the Cambridge Face Memory Test, the Vanderbilt Holistic Face Processing Test, and the Vanderbilt Expertise Test. People with dyslexia are impaired in their recognition of faces and other visually complex objects. Their holistic processing of faces appears to be intact, suggesting that dyslexics may instead be specifically impaired at part-based processing of visual objects. The difficulty that people with dyslexia experience with reading might be the most salient manifestation of a more general high-level visual deficit. (c) 2015 APA, all rights reserved.
Chao, Owen Y; Huston, Joseph P; Li, Jay-Shake; Wang, An-Li; de Souza Silva, Maria A
2016-05-01
The prefrontal cortex directly projects to the lateral entorhinal cortex (LEC), an important substrate for engaging item-associated information and relaying the information to the hippocampus. Here we ask to what extent the communication between the prefrontal cortex and LEC is critically involved in the processing of episodic-like memory. We applied a disconnection procedure to test whether the interaction between the medial prefrontal cortex (mPFC) and LEC is essential for the expression of recognition memory. It was found that male rats that received unilateral NMDA lesions of the mPFC and LEC in the same hemisphere, exhibited intact episodic-like (what-where-when) and object-recognition memories. When these lesions were placed in the opposite hemispheres (disconnection), episodic-like and associative memories for object identity, location and context were impaired. However, the disconnection did not impair the components of episodic memory, namely memory for novel object (what), object place (where) and temporal order (when), per se. Thus, the present findings suggest that the mPFC and LEC are a critical part of a neural circuit that underlies episodic-like and associative object-recognition memory. © 2015 Wiley Periodicals, Inc.
Okamura, Jun-ya; Yamaguchi, Reona; Honda, Kazunari; Tanaka, Keiji
2014-01-01
One fails to recognize an unfamiliar object across changes in viewing angle when it must be discriminated from similar distractor objects. View-invariant recognition gradually develops as the viewer repeatedly sees the objects in rotation. It is assumed that different views of each object are associated with one another while their successive appearance is experienced in rotation. However, natural experience of objects also contains ample opportunities to discriminate among objects at each of multiple viewing angles. Our previous behavioral experiments showed that after experiencing a new set of object stimuli during a task that required only discrimination at each of four viewing angles at 30° intervals, monkeys could recognize the objects across changes in viewing angle up to 60°. By recording activities of neurons from the inferotemporal cortex after various types of preparatory experience, we here found a possible neural substrate for the monkeys' performance. For object sets that the monkeys had experienced during the task that required only discrimination at each of four viewing angles, many inferotemporal neurons showed object selectivity covering multiple views. The degree of view generalization found for these object sets was similar to that found for stimulus sets with which the monkeys had been trained to conduct view-invariant recognition. These results suggest that the experience of discriminating new objects at each of several viewing angles develops partially view-generalized object selectivity distributed over many neurons in the inferotemporal cortex, which in turn underlies the monkeys' emergent capability to discriminate the objects across changes in viewing angle. PMID:25378169
ERIC Educational Resources Information Center
McNulty, Susan E.; Barrett, Ruth M.; Vogel-Ciernia, Annie; Malvaez, Melissa; Hernandez, Nicole; Davatolhagh, M. Felicia; Matheos, Dina P.; Schiffman, Aaron; Wood, Marcelo A.
2012-01-01
"Nr4a1" and "Nr4a2" are transcription factors and immediate early genes belonging to the nuclear receptor Nr4a family. In this study, we examine their role in long-term memory formation for object location and object recognition. Using siRNA to block expression of either "Nr4a1" or "Nr4a2", we found that "Nr4a2" is necessary for both long-term…
Dashniani, M G; Burjanadze, M A; Naneishvili, T L; Chkhikvishvili, N C; Beselia, G V; Kruashvili, L B; Pochkhidze, N O; Chighladze, M R
2015-01-01
In the present study, the effect of medial septal (MS) lesions on exploratory activity in the open field and on spatial and object recognition memory was investigated. The experiment compares three types of MS lesions: electrolytic lesions, which destroy cells and fibers of passage; neurotoxic (ibotenic acid) lesions, which spare fibers of passage but predominantly affect septal noncholinergic neurons; and immunotoxin (192 IgG-saporin) infusions, which eliminate only cholinergic neurons. The main results are: MS electrolytic-lesioned rats were impaired in habituating to a repeated spatial environment, whereas rats with immuno- or neurotoxic lesions of the MS did not differ from controls; MS electrolytic and ibotenic acid lesioned rats showed increased exploratory activity toward the objects and were impaired in habituating to the objects in the repeated spatial environment, whereas rats with immunolesions of the MS did not differ from controls; electrolytic lesions of the MS disrupted spatial recognition memory; rats with immuno- or neurotoxic lesions of the MS were normal in detecting spatial novelty; and all MS-lesioned and control rats clearly reacted to object novelty by exploring the new object more than familiar ones.
Results observed across lesion techniques indicate that: (i) the deficits after nonselective damage to the MS are limited to a subset of cognitive processes dependent on the hippocampus; (ii) the MS is essential for spatial, but not for object, recognition memory - object recognition memory can be supported outside the septohippocampal system; (iii) the selective loss of septohippocampal cholinergic or noncholinergic projections does not disrupt hippocampal function to a sufficient extent to impair spatial recognition memory; and (iv) there is a dissociation between the two major components (cholinergic and noncholinergic) of the septohippocampal pathway in exploratory behavior assessed in the open field - the memory exhibited by decrements in exploration of repeated object presentations is affected by either electrolytic or ibotenic lesions, but not by saporin.
ERIC Educational Resources Information Center
Wolk, D.A.; Coslett, H.B.; Glosser, G.
2005-01-01
The role of sensory-motor representations in object recognition was investigated in experiments involving AD, a patient with mild visual agnosia who was impaired in the recognition of visually presented living as compared to non-living entities. AD named visually presented items for which sensory-motor information was available significantly more…
Use of Authentic-Speech Technique for Teaching Sound Recognition to EFL Students
ERIC Educational Resources Information Center
Sersen, William J.
2011-01-01
The main objective of this research was to test an authentic-speech technique for improving the sound-recognition skills of EFL (English as a foreign language) students at Roi-Et Rajabhat University. The secondary objective was to determine the correlation, if any, between students' self-evaluation of sound-recognition progress and the actual…
Gerasimenko, N Iu; Slavutskaia, A V; Kalinin, S A; Kulikov, M A; Mikhaĭlova, E S
2013-01-01
In 38 healthy subjects, accuracy and response time were examined during recognition of two categories of images, animals and nonliving objects, under forward masking. We obtained new data showing that masking effects depended on the categorical similarity of the target and masking stimuli. Recognition accuracy was lowest and response time slowest when the target and masking stimuli belonged to the same category, and this was accompanied by a high dispersion of response times. These effects were clearer in the animal-recognition task than in the recognition of nonliving objects. We suppose that the observed effects arise from interference between cortical representations of the target and masking stimuli, and we discuss our results in the context of cortical interference and negative priming.
Gomes, Karin M; Souza, Renan P; Valvassori, Samira S; Réus, Gislaine Z; Inácio, Cecília G; Martins, Márcio R; Comim, Clarissa M; Quevedo, João
2009-11-01
In this study, the effects of age, circadian rhythm, and methylphenidate administration on open-field habituation and object recognition were analyzed. Young and adult male Wistar rats were treated with saline or methylphenidate 2.0 mg/kg for 28 days. Experiments were performed during the light and the dark cycle. Locomotor activity was significantly altered by circadian cycle and methylphenidate treatment during the training session, and by drug treatment during the testing session. Exploratory activity was significantly modulated by age during the training session, and by age and drug treatment during the testing session. Object recognition memory was altered by cycle at the training session, by age 1.5 h later, and by cycle and age 24 h after the training session. These results show that methylphenidate treatment was the major modulating factor in the open-field test, while cycle and age had an important effect on the object recognition experiment.
The roles of perceptual and conceptual information in face recognition.
Schwartz, Linoy; Yovel, Galit
2016-11-01
The representation of familiar objects comprises perceptual information about their visual properties as well as the conceptual knowledge that we have about them. What is the relative contribution of perceptual and conceptual information to object recognition? Here, we examined this question by designing a face familiarization protocol during which participants were exposed either to rich perceptual information (viewing each face from different angles and under different illuminations) or to conceptual information (associating each face with a different name). Both conditions were compared with single-view faces presented with no labels. Recognition was tested on new images of the same identities to assess whether learning generated a view-invariant representation. Results showed better recognition of novel images of the learned identities following association of a face with a name label, but no enhancement following exposure to multiple face views. Whereas these findings may be consistent with the role of category learning in object recognition, face recognition was better for labeled faces only when faces were associated with person-related labels (name, occupation), but not with person-unrelated labels (object names or symbols). These findings suggest that associating meaningful conceptual information with an image shifts its representation from an image-based percept to a view-invariant concept. They further indicate that the role of conceptual information should be considered to account for the superior recognition that we have for familiar faces and objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Paris, Jason J; Frye, Cheryl A
2008-01-01
Ovarian hormone elevations are associated with enhanced learning/memory. During behavioral estrus or pregnancy, progestins, such as progesterone (P4) and its metabolite 5α-pregnan-3α-ol-20-one (3α,5α-THP), are elevated due, in part, to corpora luteal and placental secretion. During ‘pseudopregnancy’, the induction of corpora luteal functioning results in a hormonal milieu analogous to pregnancy, which ceases after about 12 days, due to the lack of placental formation. Multiparity is also associated with enhanced learning/memory, perhaps due to prior steroid exposure during pregnancy. Given evidence that progestins and/or parity may influence cognition, we investigated how natural alterations in the progestin milieu influence cognitive performance. In Experiment 1, virgin rats (nulliparous) or rats with two prior pregnancies (multiparous) were assessed on the object placement and recognition tasks, when in high-estrogen/P4 (behavioral estrus) or low-estrogen/P4 (diestrus) phases of the estrous cycle. In Experiment 2, primiparous or multiparous rats were tested in the object placement and recognition tasks when not pregnant, pseudopregnant, or pregnant (between gestational days (GDs) 6 and 12). In Experiment 3, pregnant primiparous or multiparous rats were assessed daily in the object placement or recognition tasks. Females in natural states associated with higher endogenous progestins (behavioral estrus, pregnancy, multiparity) outperformed rats in low progestin states (diestrus, non-pregnancy, nulliparity) on the object placement and recognition tasks. In earlier pregnancy, multiparous, compared with primiparous, rats had a lower corticosterone, but higher estrogen levels, concomitant with better object placement performance. From GD 13 until post partum, primiparous rats had higher 3α,5α-THP levels and improved object placement performance compared with multiparous rats. PMID:18390689
Real-time optical multiple object recognition and tracking system and method
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Liu, Hua Kuang (Inventor)
1987-01-01
The invention relates to an apparatus and associated methods for the optical recognition and tracking of multiple objects in real time. Multiple point spatial filters are employed that pre-define the objects to be recognized at run-time. The system takes the basic technology of a Vander Lugt filter and adds a hololens. The technique replaces time-, space- and cost-intensive digital techniques. In place of multiple objects, the system can also recognize multiple orientations of a single object. This latter capability has potential for space applications where space and weight are at a premium.
Effect of tDCS on task relevant and irrelevant perceptual learning of complex objects.
Van Meel, Chayenne; Daniels, Nicky; de Beeck, Hans Op; Baeck, Annelies
2016-01-01
During perceptual learning the visual representations in the brain are altered, but the causal role of these changes has not yet been fully characterized. We used transcranial direct current stimulation (tDCS) to investigate the role of higher visual regions in lateral occipital cortex (LO) in perceptual learning with complex objects. We also investigated whether object learning depends on the relevance of the objects for the learning task. Participants were trained in two tasks: object recognition using a backward masking paradigm and an orientation judgment task. During both tasks, an object with a red line on top of it was presented in each trial. The crucial difference between the two tasks was the relevance of the object: the object was relevant for the object recognition task, but not for the orientation judgment task. During training, half of the participants received anodal tDCS stimulation targeted at LO. Afterwards, participants were tested on how well they recognized the trained objects, the irrelevant objects presented during the orientation judgment task, and a set of completely new objects. Participants stimulated with tDCS during training showed larger improvements in performance than participants in the sham condition. No learning effect was found for the objects presented during the orientation judgment task. To conclude, this study suggests a causal role of LO in relevant object learning, but given the rather low spatial resolution of tDCS, more research on the specificity of this effect is needed. Further, mere exposure is not sufficient to train object recognition in our paradigm.
A knowledge-based object recognition system for applications in the space station
NASA Technical Reports Server (NTRS)
Dhawan, Atam P.
1988-01-01
A knowledge-based three-dimensional (3D) object recognition system is being developed. The system uses primitive-based hierarchical relational and structural matching for the recognition of 3D objects in the two-dimensional (2D) image for interpretation of the 3D scene. At present, the pre-processing, low-level preliminary segmentation, rule-based segmentation, and feature extraction are completed. The data structure of the primitive viewing knowledge-base (PVKB) is also completed. Algorithms and programs based on attribute-tree matching for decomposing the segmented data into valid primitives were developed. The frame-based structural and relational descriptions of some objects were created and stored in a knowledge-base. This knowledge-base of frame-based descriptions was developed on the MICROVAX-AI microcomputer in a LISP environment. Both a simulated 3D scene of simple non-overlapping objects and real camera images of low-complexity 3D objects have been successfully interpreted.
Bonin, Patrick; Guillemard-Tsaparina, Diana; Méot, Alain
2013-09-01
We report object-naming and object recognition times collected from Russian native speakers for the colorized version of the Snodgrass and Vanderwart (Journal of Experimental Psychology: Human Learning and Memory 6:174-215, 1980) pictures (Rossion & Pourtois, Perception 33:217-236, 2004). New norms for image variability, body-object interaction [BOI], and subjective frequency collected in Russian, as well as new name agreement scores for the colorized pictures in French, are also reported. In both object-naming and object comprehension times, the name agreement, image agreement, and age-of-acquisition variables made significant independent contributions. Objective word frequency was reliable in object-naming latencies only. The variables of image variability, BOI, and subjective frequency were not significant in either object naming or object comprehension. Finally, imageability was reliable in both tasks. The new norms and object-naming and object recognition times are provided as supplemental materials.
Hilbig, Benjamin E; Pohl, Rüdiger F
2009-09-01
According to part of the adaptive toolbox notion of decision making known as the recognition heuristic (RH), the decision process in comparative judgments, and its duration, is determined by whether recognition discriminates between objects. By contrast, some recently proposed alternative models predict that choices largely depend on the amount of evidence speaking for each of the objects, and that decision times thus depend on the evidential difference between objects, or the degree of conflict between options. This article presents 3 experiments that tested predictions derived from the RH against those from alternative models. All experiments used naturally recognized objects without teaching participants any information and thus provided optimal conditions for application of the RH. However, results supported the alternative, evidence-based models and often conflicted with the RH. Recognition was not the key determinant of decision times, whereas differences between objects with respect to (both positive and negative) evidence predicted effects well. In sum, alternative models that allow for the integration of different pieces of information may well provide a better account of comparative judgments. (c) 2009 APA, all rights reserved.
Generation, recognition, and consistent fusion of partial boundary representations from range images
NASA Astrophysics Data System (ADS)
Kohlhepp, Peter; Hanczak, Andrzej M.; Li, Gang
1994-10-01
This paper presents SOMBRERO, a new system for recognizing and locating 3D, rigid, non-moving objects from range data. The objects may be polyhedral or curved, partially occluding, touching or lying flush with each other. For data collection, we employ 2D time-of-flight laser scanners mounted on a moving gantry robot. By combining sensor and robot coordinates, we obtain 3D cartesian coordinates. Boundary representations (Breps) provide view-independent geometry models that are both efficiently recognizable and derivable automatically from sensor data. SOMBRERO's methods for generating, matching and fusing Breps are highly synergetic. A split-and-merge segmentation algorithm with dynamic triangulation builds a partial (2½D) Brep from scattered data. The recognition module matches this scene description with a model database and outputs recognized objects, their positions and orientations, and possibly surfaces corresponding to unknown objects. We present preliminary results in scene segmentation and recognition. Partial Breps corresponding to different range sensors or viewpoints can be merged into a consistent, complete and irredundant 3D object or scene model. This fusion algorithm itself uses the recognition and segmentation methods.
Using Prosopagnosia to Test and Modify Visual Recognition Theory.
O'Brien, Alexander M
2018-02-01
Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions or categorical object differences in visual discrimination tasks versus the control participants' additional ability to use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli in eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.
A rat in the sewer: How mental imagery interacts with object recognition
Karimpur, Harun; Hamburger, Kai
2018-01-01
The role of mental imagery has been puzzling researchers for more than two millennia. Both positive and negative effects of mental imagery on information processing have been discussed. The aim of this work was to examine how mental imagery affects object recognition and associative learning. Based on different perceptual and cognitive accounts we tested our imagery-induced interaction hypothesis in a series of two experiments. According to that, mental imagery could lead to (1) a superior performance in object recognition and associative learning if these objects are imagery-congruent (semantically) and to (2) an inferior performance if these objects are imagery-incongruent. In the first experiment, we used a static environment and tested associative learning. In the second experiment, subjects encoded object information in a dynamic environment by means of a virtual sewer system. Our results demonstrate that subjects who received a role adoption task (by means of guided mental imagery) performed better when imagery-congruent objects were used and worse when imagery-incongruent objects were used. We finally discuss our findings also with respect to alternative accounts and plead for a multi-methodological approach for future research in order to solve this issue. PMID:29590161
Hayes, Scott M; Nadel, Lynn; Ryan, Lee
2007-01-01
Previous research has investigated intentional retrieval of contextual information and contextual influences on object identification and word recognition, yet few studies have investigated context effects in episodic memory for objects. To address this issue, unique objects embedded in a visually rich scene or on a white background were presented to participants. At test, objects were presented either in the original scene or on a white background. A series of behavioral studies with young adults demonstrated a context shift decrement (CSD): decreased recognition performance when context is changed between encoding and retrieval. The CSD was not attenuated by encoding or retrieval manipulations, suggesting that binding of object and context may be automatic. A final experiment explored the neural correlates of the CSD using functional magnetic resonance imaging. Parahippocampal cortex (PHC) activation (right greater than left) during incidental encoding was associated with subsequent memory of objects in the context shift condition. Greater activity in right PHC was also observed during successful recognition of objects previously presented in a scene. Finally, a subset of regions activated during scene encoding, such as bilateral PHC, was reactivated when the object was presented on a white background at retrieval. Although participants were not required to intentionally retrieve contextual information, the results suggest that PHC may reinstate visual context to mediate successful episodic memory retrieval. The CSD is attributed to automatic and obligatory binding of object and context. The results suggest that PHC is important not only for processing of scene information, but also plays a role in successful episodic memory encoding and retrieval. These findings are consistent with the view that spatial information is stored in the hippocampal complex, one of the central tenets of Multiple Trace Theory. (c) 2007 Wiley-Liss, Inc.
Object Recognition using Feature- and Color-Based Methods
NASA Technical Reports Server (NTRS)
Duong, Tuan; Duong, Vu; Stubberud, Allen
2008-01-01
An improved adaptive method of processing image data in an artificial neural network has been developed to enable automated, real-time recognition of possibly moving objects under changing (including suddenly changing) conditions of illumination and perspective. The method involves a combination of two prior object-recognition methods, one based on adaptive detection of shape features and one based on adaptive color segmentation, to enable recognition in situations in which either prior method by itself may be inadequate. The chosen prior feature-based method is known as adaptive principal-component analysis (APCA); the chosen prior color-based method is known as adaptive color segmentation (ACOSE). These methods are made to interact with each other in a closed-loop system to obtain an optimal solution of the object-recognition problem in a dynamic environment. One of the results of the interaction is to increase, beyond what would otherwise be possible, the accuracy of the determination of a region of interest (containing an object that one seeks to recognize) within an image. Another result is to provide a minimized adaptive step that can be used to update the results obtained by the two component methods when changes of color and apparent shape occur. The net effect is to enable the neural network to update its recognition output and improve its recognition capability via an adaptive learning sequence. In principle, the improved method could readily be implemented in integrated circuitry to make a compact, low-power, real-time object-recognition system. It has been proposed to demonstrate the feasibility of such a system by integrating a 256-by-256 active-pixel sensor with APCA, ACOSE, and neural processing circuitry on a single chip. It has been estimated that such a system on a chip would have a volume no larger than a few cubic centimeters, could operate at a rate as high as 1,000 frames per second, and would consume on the order of milliwatts of power.
Akirav, Irit; Maroun, Mouna
2006-12-01
Once consolidated, a long-term memory item could regain susceptibility to consolidation blockers, that is, reconsolidate, upon its reactivation. Both consolidation and reconsolidation require protein synthesis, but it is not yet known how similar these processes are in terms of molecular, cellular, and neural circuit mechanisms. Whereas most previous studies focused on aversive conditioning in the amygdala and the hippocampus, here we examine the role of the ventromedial prefrontal cortex (vmPFC) in consolidation and reconsolidation of object recognition memory. Object recognition memory is the ability to discriminate the familiarity of previously encountered objects. We found that microinfusion of the protein synthesis inhibitor anisomycin or the N-methyl-D-aspartate (NMDA) receptor antagonist D,L-2-amino-5-phosphonovaleric acid (APV) into the vmPFC, immediately after training, resulted in impairment of long-term (24 h) but not short-term (3 h) recognition memory. Similarly, microinfusion of anisomycin or APV into the vmPFC immediately after reactivation of the long-term memory impaired recognition memory 24 h, but not 3 h, post-reactivation. These results indicate that both protein synthesis and NMDA receptors are required for consolidation and reconsolidation of recognition memory in the vmPFC.
Winters, Boyer D; Tucci, Mark C; Jacklin, Derek L; Reid, James M; Newsome, James
2011-11-30
Research has implicated the perirhinal cortex (PRh) in several aspects of object recognition memory. The specific role of the hippocampus (HPC) remains controversial, but its involvement in object recognition may pertain to processing contextual information in relation to objects rather than object representation per se. Here we investigated the roles of the PRh and HPC in object memory reconsolidation using the spontaneous object recognition task for rats. Intra-PRh infusions of the protein synthesis inhibitor anisomycin immediately following memory reactivation prevented object memory reconsolidation. Similar deficits were observed when a novel object or a salient contextual change was introduced during the reactivation phase. Intra-HPC infusions of anisomycin, however, blocked object memory reconsolidation only when a contextual change was introduced during reactivation. Moreover, disrupting functional interaction between the HPC and PRh by infusing anisomycin unilaterally into each structure in opposite hemispheres also impaired reconsolidation when reactivation was done in an altered context. These results show for the first time that the PRh is critical for reconsolidation of object memory traces and provide insight into the dynamic process of object memory storage; the selective requirement for hippocampal involvement following reactivation in an altered context suggests a substantial circuit level object trace reorganization whereby an initially PRh-dependent object memory becomes reliant on both the HPC and PRh and their interaction. Such trace reorganization may play a central role in reconsolidation-mediated memory updating and could represent an important aspect of lingering consolidation processes proposed to underlie long-term memory modulation and stabilization.
Jacklin, Derek L; Cloke, Jacob M; Potvin, Alphonse; Garrett, Inara; Winters, Boyer D
2016-01-27
Rats, humans, and monkeys demonstrate robust crossmodal object recognition (CMOR), identifying objects across sensory modalities. We have shown that rats' performance of a spontaneous tactile-to-visual CMOR task requires functional integration of perirhinal (PRh) and posterior parietal (PPC) cortices, which seemingly provide visual and tactile object feature processing, respectively. However, research with primates has suggested that PRh is sufficient for multisensory object representation. We tested this hypothesis in rats using a modification of the CMOR task in which multimodal preexposure to the to-be-remembered objects significantly facilitates performance. In the original CMOR task, with no preexposure, reversible lesions of PRh or PPC produced patterns of impairment consistent with modality-specific contributions. Conversely, in the CMOR task with preexposure, PPC lesions had no effect, whereas PRh involvement was robust, proving necessary for phases of the task that did not require PRh activity when rats did not have preexposure; this pattern was supported by results from c-fos imaging. We suggest that multimodal preexposure alters the circuitry responsible for object recognition, in this case obviating the need for PPC contributions and expanding PRh involvement, consistent with the polymodal nature of PRh connections and results from primates indicating a key role for PRh in multisensory object representation. These findings have significant implications for our understanding of multisensory information processing, suggesting that the nature of an individual's past experience with an object strongly determines the brain circuitry involved in representing that object's multisensory features in memory. The ability to integrate information from multiple sensory modalities is crucial to the survival of organisms living in complex environments. Appropriate responses to behaviorally relevant objects are informed by integration of multisensory object features. 
We used crossmodal object recognition tasks in rats to study the neurobiological basis of multisensory object representation. When rats had no prior exposure to the to-be-remembered objects, the spontaneous ability to recognize objects across sensory modalities relied on functional interaction between multiple cortical regions. However, prior multisensory exploration of the task-relevant objects remapped cortical contributions, negating the involvement of one region and significantly expanding the role of another. This finding emphasizes the dynamic nature of cortical representation of objects in relation to past experience. Copyright © 2016 the authors 0270-6474/16/361273-17$15.00/0.
Sparse aperture 3D passive image sensing and recognition
NASA Astrophysics Data System (ADS)
Daneshpanah, Mehdi
The way we perceive, capture, store, communicate and visualize the world has greatly changed in the past century. Novel three-dimensional (3D) imaging and display systems are being pursued in both academic and industrial settings. In many cases, these systems have revolutionized traditional approaches and/or enabled new technologies in other disciplines including medical imaging and diagnostics, industrial metrology, entertainment, robotics as well as defense and security. In this dissertation, we focus on novel aspects of sparse aperture multi-view imaging systems and their application in quantum-limited object recognition in two separate parts. In the first part, two concepts are proposed. First, a solution is presented that involves a generalized framework for 3D imaging using randomly distributed sparse apertures. Second, a method is suggested to extract the profile of objects in the scene through statistical properties of the reconstructed light field. In both cases, experimental results are presented that demonstrate the feasibility of the techniques. In the second part, the application of 3D imaging systems in sensing and recognition of objects is addressed. In particular, we focus on the scenario in which only tens of photons reach the sensor from the object of interest, as opposed to hundreds of billions of photons in normal imaging conditions. At this level, the quantum-limited behavior of light will dominate and traditional object recognition practices may fail. We suggest a likelihood-based object recognition framework that incorporates the physics of sensing at quantum-limited conditions. Sensor dark noise has been modeled and taken into account. This framework is applied to 3D sensing of thermal objects using visible spectrum detectors. Thermal objects as cold as 250K are shown to provide enough signature photons to be sensed and recognized within background and dark noise with mature, visible band, image forming optics and detector arrays.
The results suggest that one might not need to venture into exotic and expensive detector arrays and associated optics for sensing room-temperature thermal objects in complete darkness.
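The likelihood-based recognition framework described in this abstract can be illustrated with a maximum-likelihood classifier under Poisson photon statistics, which is the standard noise model at quantum-limited light levels. The sketch below is a generic toy illustration, not the dissertation's actual model; the 4-pixel templates and the dark-noise rate are made-up numbers.

```python
import math

def poisson_log_likelihood(counts, template, dark_rate=0.1):
    """Log-likelihood of observed photon counts given an expected
    photon-rate template, with a constant sensor dark-noise rate added
    to every pixel. The constant log(n!) terms are dropped because they
    do not depend on the template being scored."""
    ll = 0.0
    for n, lam in zip(counts, template):
        rate = lam + dark_rate          # expected counts incl. dark noise
        ll += n * math.log(rate) - rate
    return ll

def classify(counts, templates):
    """Pick the template with the highest Poisson log-likelihood."""
    return max(templates, key=lambda name: poisson_log_likelihood(counts, templates[name]))

# Hypothetical 4-pixel photon-rate templates for two objects.
templates = {"objA": [5.0, 0.5, 0.5, 5.0],
             "objB": [0.5, 5.0, 5.0, 0.5]}
observed = [6, 1, 0, 4]                  # a sparse photon-count "image"
best = classify(observed, templates)     # -> "objA"
```

Even with only about a dozen photons, the likelihood comparison separates the two hypotheses cleanly, which is the intuition behind recognizing cold thermal objects from a handful of signature photons.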
Multiple degree of freedom optical pattern recognition
NASA Technical Reports Server (NTRS)
Casasent, D.
1987-01-01
Three general optical approaches to multiple degree of freedom object pattern recognition (where no stable object rest position exists) are advanced. These techniques include: feature extraction, correlation, and artificial intelligence. The details of the various processors are advanced together with initial results.
Branstetter, Brian K; DeLong, Caroline M; Dziedzic, Brandon; Black, Amy; Bakhtiari, Kimberly
2016-01-01
Bottlenose dolphins (Tursiops truncatus) use the frequency contour of whistles produced by conspecifics for individual recognition. Here we tested a bottlenose dolphin's (Tursiops truncatus) ability to recognize frequency modulated whistle-like sounds using a three alternative matching-to-sample paradigm. The dolphin was first trained to select a specific object (object A) in response to a specific sound (sound A) for a total of three object-sound associations. The sounds were then transformed by amplitude, duration, or frequency transposition while still preserving the frequency contour of each sound. For comparison purposes, 30 human participants completed an identical task with the same sounds, objects, and training procedure. The dolphin's ability to correctly match objects to sounds was robust to changes in amplitude with only a minor decrement in performance for short durations. The dolphin failed to recognize sounds that were frequency transposed by plus or minus ½ octaves. Human participants demonstrated robust recognition with all acoustic transformations. The results indicate that this dolphin's acoustic recognition of whistle-like sounds was constrained by absolute pitch. Unlike human speech, which varies considerably in average frequency, signature whistles are relatively stable in frequency, which may have selected for a whistle recognition system invariant to frequency transposition.
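Frequency transposition by plus or minus half an octave, as used in the stimulus transformations above, corresponds to multiplying every frequency in the whistle contour by 2^(±0.5). A minimal sketch (illustrative only, not the authors' stimulus-generation code; the example contour values are made up):

```python
def transpose_contour(freqs_hz, octaves):
    """Shift a frequency contour by a given number of octaves.

    Multiplying by 2**octaves preserves the contour's shape on a
    log-frequency axis (all frequency ratios are unchanged), which is
    what distinguishes transposition from contour distortion.
    """
    factor = 2.0 ** octaves
    return [f * factor for f in freqs_hz]

contour = [5000.0, 7500.0, 6000.0]            # a toy whistle-like contour (Hz)
up_half_octave = transpose_contour(contour, 0.5)

# Ratios between successive points are preserved under transposition.
orig_ratios = [b / a for a, b in zip(contour, contour[1:])]
new_ratios = [b / a for a, b in zip(up_half_octave, up_half_octave[1:])]
```

A recognizer with relative pitch would treat the two contours as equivalent because the ratios match; the dolphin's failure on transposed sounds suggests it instead matched absolute frequencies.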
Okamura, Jun-Ya; Yamaguchi, Reona; Honda, Kazunari; Wang, Gang; Tanaka, Keiji
2014-11-05
One fails to recognize an unfamiliar object across changes in viewing angle when it must be discriminated from similar distractor objects. View-invariant recognition gradually develops as the viewer repeatedly sees the objects in rotation. It is assumed that different views of each object are associated with one another while their successive appearance is experienced in rotation. However, natural experience of objects also contains ample opportunities to discriminate among objects at each of the multiple viewing angles. Our previous behavioral experiments showed that after experiencing a new set of object stimuli during a task that required only discrimination at each of four viewing angles at 30° intervals, monkeys could recognize the objects across changes in viewing angle up to 60°. By recording activities of neurons from the inferotemporal cortex after various types of preparatory experience, we here found a possible neural substrate for the monkeys' performance. For object sets that the monkeys had experienced during the task that required only discrimination at each of four viewing angles, many inferotemporal neurons showed object selectivity covering multiple views. The degree of view generalization found for these object sets was similar to that found for stimulus sets with which the monkeys had been trained to conduct view-invariant recognition. These results suggest that the experience of discriminating new objects in each of several viewing angles develops the partially view-generalized object selectivity distributed over many neurons in the inferotemporal cortex, which in turn underlies the monkeys' emergent capability to discriminate the objects across changes in viewing angle. Copyright © 2014 the authors 0270-6474/14/3415047-13$15.00/0.
Two speed factors of visual recognition independently correlated with fluid intelligence.
Tachibana, Ryosuke; Namba, Yuri; Noguchi, Yasuki
2014-01-01
Growing evidence indicates a moderate but significant relationship between processing speed in visuo-cognitive tasks and general intelligence. On the other hand, findings from neuroscience suggest that the primate visual system consists of two major pathways: the ventral pathway for object recognition and the dorsal pathway for spatial processing and attentive analysis. Previous studies seeking visuo-cognitive factors of human intelligence indicated a significant correlation between fluid intelligence and inspection time (IT), an index of the speed of object recognition performed in the ventral pathway. We therefore examined the possibility that neural processing speed in the dorsal pathway also represents a factor of intelligence. Specifically, we used the mental rotation (MR) task, a popular psychometric measure of the mental speed of spatial processing in the dorsal pathway. We found that the speed of MR was significantly correlated with intelligence scores, while it had no correlation with IT (the recognition speed of visual objects). Our results support the new possibility that intelligence could be explained by two types of mental speed, one related to object recognition (IT) and another to the manipulation of mental images (MR).
Visual working memory is more tolerant than visual long-term memory.
Schurgin, Mark W; Flombaum, Jonathan I
2018-05-07
Human visual memory is tolerant, meaning that it supports object recognition despite variability across encounters at the image level. Tolerant object recognition remains one capacity in which artificial intelligence trails humans. Typically, tolerance is described as a property of human visual long-term memory (VLTM). In contrast, visual working memory (VWM) is not usually ascribed a role in tolerant recognition, with tests of that system usually demanding discriminatory power: identifying changes, not sameness. There are good reasons to expect that VLTM is more tolerant: functionally, recognition over the long term must accommodate the fact that objects will not be viewed under identical conditions; and practically, the passive and massive nature of VLTM may impose relatively permissive criteria for thinking that two inputs are the same. But empirically, tolerance has never been compared across working and long-term visual memory. We therefore developed a novel paradigm for equating encoding and test across different memory types. In each experiment trial, participants saw two objects, with memory for one tested immediately (VWM) and for the other tested later (VLTM). VWM performance was better than VLTM performance and remained robust despite the introduction of image and object variability. In contrast, VLTM performance suffered linearly as more variability was introduced into test stimuli. Additional experiments excluded interference effects as causes for the observed differences. These results suggest the possibility of a previously unidentified role for VWM in the acquisition of tolerant representations for object recognition. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Mesa-Gresa, Patricia; Pérez-Martinez, Asunción; Redolat, Rosa
2013-01-01
Environmental enrichment (EE) is an experimental paradigm in which rodents are housed in complex environments containing stimulating objects, with the aim of improving the animals' welfare. EE has been shown to considerably improve learning and memory in rodents. However, knowledge about the effects of EE on social interaction is limited and rather controversial. Thus, our aim was to evaluate both novel object recognition and agonistic behavior in NMRI mice receiving EE, hypothesizing enhanced cognition and slightly enhanced agonistic interaction upon EE rearing. During a 4-week period, half the mice (n = 16) were exposed to EE and the other half (n = 16) remained in a standard environment (SE). On PND 56-57, animals performed the object recognition test, in which recognition memory was measured using a discrimination index. The social interaction test consisted of an encounter between an experimental animal and a standard opponent. Results indicated that EE mice explored the new object for longer periods than SE animals (P < .05). During social encounters, EE mice devoted more time to sociability and agonistic behavior (P < .05) than their non-EE counterparts. In conclusion, EE improved object recognition and increased agonistic behavior in adolescent/early-adulthood mice. In the future we intend to extend this study longitudinally in order to assess in more depth the effect of EE and the consistency of the above-mentioned observations in NMRI mice. Copyright © 2013 Wiley Periodicals, Inc.
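The discrimination index used in such novel object recognition tests is conventionally the difference in exploration time between the novel and familiar objects divided by total exploration time. The sketch below assumes that standard formulation, since the abstract itself does not spell out the formula:

```python
def discrimination_index(t_novel, t_familiar):
    """Discrimination index: (novel - familiar) / total exploration time.

    Positive values indicate preferential exploration of the novel
    object, taken as evidence that the familiar one is recognized.
    Times are in seconds (any consistent unit works).
    """
    total = t_novel + t_familiar
    if total == 0:
        raise ValueError("no exploration recorded")
    return (t_novel - t_familiar) / total
```

For example, a mouse spending 30 s on the novel object and 10 s on the familiar one yields an index of 0.5, whereas equal exploration yields 0.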
Robust selectivity to two-object images in human visual cortex
Agam, Yigal; Liu, Hesheng; Papanastassiou, Alexander; Buia, Calin; Golby, Alexandra J.; Madsen, Joseph R.; Kreiman, Gabriel
2010-01-01
We can recognize objects in a fraction of a second in spite of the presence of other objects [1–3]. The responses in macaque areas V4 and inferior temporal cortex [4–15] to a neuron's preferred stimuli are typically suppressed by the addition of a second object within the receptive field (but see [16, 17]). How can this suppression be reconciled with rapid visual recognition in complex scenes? One option is that certain "special categories" are unaffected by other objects [18], but this leaves the problem unsolved for other categories. Another possibility is that serial attentional shifts help ameliorate the problem of distractor objects [19–21]. Yet psychophysical studies [1–3], scalp recordings [1] and neurophysiological recordings [14, 16, 22–24] suggest that the initial sweep of visual processing contains a significant amount of information. We recorded intracranial field potentials in human visual cortex during presentation of flashes of two-object images. Visual selectivity from temporal cortex during the initial ~200 ms was largely robust to the presence of other objects. We could train linear decoders on the responses to isolated objects and decode information in two-object images. These observations are compatible with parallel, hierarchical and feed-forward theories of rapid visual recognition [25] and may provide a neural substrate to begin to unravel rapid recognition in natural scenes. PMID:20417105
Neural Correlates of Object-Associated Choice Behavior in the Perirhinal Cortex of Rats
Ahn, Jae-Rong
2015-01-01
The perirhinal cortex (PRC) is reportedly important for object recognition memory, with supporting physiological evidence obtained largely from primate studies. Whether neurons in the rodent PRC also exhibit similar physiological correlates of object recognition, however, remains to be determined. We recorded single units from the PRC in a PRC-dependent, object-cued spatial choice task in which, when cued by an object image, the rat chose the associated spatial target from two identical discs appearing on a touchscreen monitor. The firing rates of PRC neurons were significantly modulated by critical events in the task, such as object sampling and choice response. Neuronal firing in the PRC was correlated primarily with the conjunctive relationships between an object and its associated choice response, although some neurons also responded to the choice response alone. However, we rarely observed a PRC neuron that represented a specific object exclusively regardless of spatial response in rats, although the neurons were influenced by the perceptual ambiguity of the object at the population level. Some PRC neurons fired maximally after a choice response, and this post-choice feedback signal significantly enhanced the neuronal specificity for the choice response in the subsequent trial. Our findings suggest that neurons in the rat PRC may not participate exclusively in object recognition memory but that their activity may be more dynamically modulated in conjunction with other variables, such as choice response and its outcomes. PMID:25632144
The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex
Leibo, Joel Z.; Liao, Qianli; Anselmi, Fabio; Poggio, Tomaso
2015-01-01
Is visual cortex made up of general-purpose information processing machinery, or does it consist of a collection of specialized modules? If prior knowledge, acquired from learning a set of objects, is only transferable to new objects that share properties with the old, then the recognition system's optimal organization must be one containing specialized modules for different object classes. Our analysis starts from a premise we call the invariance hypothesis: that the computational goal of the ventral stream is to compute an invariant-to-transformations and discriminative signature for recognition. The key condition enabling approximate transfer of invariance without sacrificing discriminability turns out to be that the learned and novel objects transform similarly. This implies that the optimal recognition system must contain subsystems trained only with data from similarly-transforming objects and suggests a novel interpretation of domain-specific regions like the fusiform face area (FFA). Furthermore, we can define an index of transformation-compatibility, computable from videos, that can be combined with information about the statistics of natural vision to yield predictions for which object categories ought to have domain-specific regions, in agreement with the available data. The result is a unifying account linking the large literature on view-based recognition with the wealth of experimental evidence concerning domain-specific regions. PMID:26496457
Gandarias, Juan M; Gómez-de-Gabriel, Jesús M; García-Cerezo, Alfonso J
2018-02-26
Tactile perception can help first-response robotic teams in disaster scenarios, where visibility is often reduced by dust, mud, or smoke, to distinguish human limbs from other objects of similar shape. Here, the integration of a tactile sensor into adaptive grippers is evaluated by measuring the performance of an object recognition task based on deep convolutional neural networks (DCNNs) using a flexible sensor mounted in adaptive grippers. A total of 15 classes with 50 tactile images each, including human body parts and common environment objects, were trained in semi-rigid and flexible adaptive grippers based on the fin-ray effect. The classifier was compared against the rigid configuration and a support vector machine (SVM) classifier. Finally, a two-level output network is proposed that provides both object-type recognition and human/non-human classification. Sensors in adaptive grippers register a higher number of non-null tactels (up to 37% more) and a lower mean pressure (up to 72% less) than a rigid sensor, giving the softer grip needed in physical human-robot interaction (pHRI). A semi-rigid implementation with a 95.13% object recognition rate was chosen, even though human/non-human classification performed better (98.78%) with a rigid sensor.
Bonardi, Charlotte; Pardon, Marie-Christine; Armstrong, Paul
2016-10-15
Performance was examined on three variants of the spontaneous object recognition (SOR) task, in 5-month-old APPswe/PS1dE9 mice and wild-type littermate controls. A deficit was observed in an object-in-place (OIP) task, in which mice are preexposed to four different objects in specific locations, and then at test two of the objects swap locations (Experiment 2). Typically more exploration is seen of the objects which have switched location, which is taken as evidence of a retrieval-generated priming mechanism. However, no significant transgenic deficit was found in a relative recency (RR) task (Experiment 1), in which mice are exposed to two different objects in two separate sample phases, and then tested with both objects. Typically more exploration of the first-presented object is observed, which is taken as evidence of a self-generated priming mechanism. Nor was there any impairment in the simplest variant, the spontaneous object recognition (SOR) task, in which mice are preexposed to one object and then tested with the familiar and a novel object. This was true regardless of whether the sample-test interval was 5 min (Experiment 1) or 24 h (Experiments 1 and 2). It is argued that SOR performance depends on retrieval-generated priming as well as self-generated priming, and our preliminary evidence suggests that the retrieval-generated priming process is especially impaired in these young transgenic animals. Copyright © 2016 Elsevier B.V. All rights reserved.
Nicotine Administration Attenuates Methamphetamine-Induced Novel Object Recognition Deficits
Vieira-Brock, Paula L.; McFadden, Lisa M.; Nielsen, Shannon M.; Smith, Misty D.; Hanson, Glen R.
2015-01-01
Background: Previous studies have demonstrated that methamphetamine abuse leads to memory deficits and these are associated with relapse. Furthermore, extensive evidence indicates that nicotine prevents and/or improves memory deficits in different models of cognitive dysfunction and these nicotinic effects might be mediated by hippocampal or cortical nicotinic acetylcholine receptors. The present study investigated whether nicotine attenuates methamphetamine-induced novel object recognition deficits in rats and explored potential underlying mechanisms. Methods: Adolescent or adult male Sprague-Dawley rats received either nicotine water (10–75 μg/mL) or tap water for several weeks. Methamphetamine (4×7.5mg/kg/injection) or saline was administered either before or after chronic nicotine exposure. Novel object recognition was evaluated 6 days after methamphetamine or saline. Serotonin transporter function and density and α4β2 nicotinic acetylcholine receptor density were assessed on the following day. Results: Chronic nicotine intake via drinking water beginning during either adolescence or adulthood attenuated the novel object recognition deficits caused by a high-dose methamphetamine administration. Similarly, nicotine attenuated methamphetamine-induced deficits in novel object recognition when administered after methamphetamine treatment. However, nicotine did not attenuate the serotonergic deficits caused by methamphetamine in adults. Conversely, nicotine attenuated methamphetamine-induced deficits in α4β2 nicotinic acetylcholine receptor density in the hippocampal CA1 region. Furthermore, nicotine increased α4β2 nicotinic acetylcholine receptor density in the hippocampal CA3, dentate gyrus and perirhinal cortex in both saline- and methamphetamine-treated rats. 
Conclusions: Overall, these findings suggest that nicotine-induced increases in α4β2 nicotinic acetylcholine receptors in the hippocampus and perirhinal cortex might be one mechanism by which novel object recognition deficits are attenuated by nicotine in methamphetamine-treated rats. PMID:26164716
Hopkins, Michael E.; Bucci, David J.
2010-01-01
Physical exercise induces widespread neurobiological adaptations and improves learning and memory. Most research in this field has focused on hippocampus-based spatial tasks and changes in brain-derived neurotrophic factor (BDNF) as a putative substrate underlying exercise-induced cognitive improvements. Chronic exercise can also be anxiolytic and causes adaptive changes in stress reactivity. The present study employed a perirhinal cortex-dependent object recognition task as well as the elevated plus maze to directly test for interactions between the cognitive and anxiolytic effects of exercise in male Long Evans rats. Hippocampal and perirhinal cortex tissue was collected to determine whether the relationship between BDNF and cognitive performance extends to this non-spatial, non-hippocampal-dependent task. We also examined whether the cognitive improvements persisted once the exercise regimen was terminated. Our data indicate that 4 weeks of voluntary exercise every other day improved object recognition memory. Importantly, BDNF expression in the perirhinal cortex of exercising rats was strongly correlated with object recognition memory. Exercise also decreased anxiety-like behavior; however, there was no evidence to support a relationship between anxiety-like behavior and performance on the novel object recognition task. There was a trend for a negative relationship between anxiety-like behavior and hippocampal BDNF. Neither the cognitive improvements nor the relationship between cognitive function and perirhinal BDNF levels persisted after 2 weeks of inactivity. These are the first data demonstrating that region-specific changes in BDNF protein levels are correlated with exercise-induced improvements in non-spatial memory mediated by structures outside the hippocampus, and they are consistent with the theory that, with regard to object recognition, the anxiolytic and cognitive effects of exercise may be mediated through separable mechanisms. PMID:20601027
Zhao, Zaorui; Fan, Lu; Fortress, Ashley M.; Boulware, Marissa I.; Frick, Karyn M.
2012-01-01
Histone acetylation has recently been implicated in learning and memory processes, yet the necessity of histone acetylation for such processes has not been demonstrated using pharmacological inhibitors of histone acetyltransferases (HATs). As such, the present study tested whether garcinol, a potent HAT inhibitor in vitro, could impair hippocampal memory consolidation and block the memory-enhancing effects of the modulatory hormone 17β-estradiol (E2). We first showed that bilateral infusion of garcinol (0.1, 1, or 10 μg/side) into the dorsal hippocampus (DH) immediately after training impaired object recognition memory consolidation in ovariectomized female mice. A behaviorally effective dose of garcinol (10 μg/side) also significantly decreased DH HAT activity. We next examined whether DH infusion of a behaviorally subeffective dose of garcinol (1 ng/side) could block the effects of DH E2 infusion on object recognition and epigenetic processes. Immediately after training, ovariectomized female mice received bilateral DH infusions of vehicle, E2 (5 μg/side), garcinol (1 ng/side), or E2 plus garcinol. Forty-eight hours later, garcinol blocked the memory-enhancing effects of E2. Garcinol also reversed the E2-induced increase in DH histone H3 acetylation, HAT activity, and levels of the de novo methyltransferase DNMT3B, as well as the E2-induced decrease in levels of the memory repressor protein histone deacetylase 2 (HDAC2). Collectively, these findings suggest that histone acetylation is critical for object recognition memory consolidation and the beneficial effects of E2 on object recognition. Importantly, this work demonstrates that the role of histone acetylation in memory processes can be studied using a HAT inhibitor. PMID:22396409
Culture modulates implicit ownership-induced self-bias in memory.
Sparks, Samuel; Cunningham, Sheila J; Kritikos, Ada
2016-08-01
The relation of incoming stimuli to the self implicitly determines the allocation of cognitive resources. Cultural variations in the self-concept shape cognition, but the extent is unclear because the majority of studies sample only Western participants. We report cultural differences (Asian versus Western) in ownership-induced self-bias in recognition memory for objects. In two experiments, participants allocated a series of images depicting household objects to self-owned or other-owned virtual baskets based on colour cues before completing a surprise recognition memory test for the objects. The 'other' was either a stranger or a close other. In both experiments, Western participants showed greater recognition memory accuracy for self-owned compared with other-owned objects, consistent with an independent self-construal. In Experiment 1, which required minimal attention to the owned objects, Asian participants showed no such ownership-related bias in recognition accuracy. In Experiment 2, which required attention to owned objects to move them along the screen, Asian participants again showed no overall memory advantage for self-owned items and actually exhibited higher recognition accuracy for mother-owned than self-owned objects, reversing the pattern observed for Westerners. This is consistent with an interdependent self-construal which is sensitive to the particular relationship between the self and other. Overall, our results suggest that the self acts as an organising principle for allocating cognitive resources, but that the way it is constructed depends upon cultural experience. Additionally, the manifestation of these cultural differences in self-representation depends on the allocation of attentional resources to self- and other-associated stimuli. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
How landmark suitability shapes recognition memory signals for objects in the medial temporal lobes.
Martin, Chris B; Sullivan, Jacqueline A; Wright, Jessey; Köhler, Stefan
2018-02-01
A role of perirhinal cortex (PrC) in recognition memory for objects has been well established. Contributions of parahippocampal cortex (PhC) to this function, while documented, remain less well understood. Here, we used fMRI to examine whether the organization of item-based recognition memory signals across these two structures is shaped by object category, independent of any difference in representing episodic context. Guided by research suggesting that PhC plays a critical role in processing landmarks, we focused on three categories of objects that differ from each other in their landmark suitability as confirmed with behavioral ratings (buildings > trees > aircraft). Participants made item-based recognition-memory decisions for novel and previously studied objects from these categories, which were matched in accuracy. Multi-voxel pattern classification revealed category-specific item-recognition memory signals along the long axis of PrC and PhC, with no sharp functional boundaries between these structures. Memory signals for buildings were observed in the mid to posterior extent of PhC, signals for trees in anterior to posterior segments of PhC, and signals for aircraft in mid to posterior aspects of PrC and the anterior extent of PhC. Notably, item-based memory signals for the category with highest landmark suitability ratings were observed only in those posterior segments of PhC that also allowed for classification of landmark suitability of objects when memory status was held constant. These findings provide new evidence in support of the notion that item-based memory signals for objects are not limited to PrC, and that the organization of these signals along the longitudinal axis that crosses PrC and PhC can be captured with reference to landmark suitability. Copyright © 2017 Elsevier Inc. All rights reserved.
Agnosic vision is like peripheral vision, which is limited by crowding.
Strappini, Francesca; Pelli, Denis G; Di Pace, Enrico; Martelli, Marialuisa
2017-04-01
Visual agnosia is a neuropsychological impairment of visual object recognition despite near-normal acuity and visual fields. A century of research has provided only a rudimentary account of the functional damage underlying this deficit. We find that the object-recognition ability of agnosic patients viewing an object directly is like that of normally-sighted observers viewing it indirectly, with peripheral vision. Thus, agnosic vision is like peripheral vision. We obtained 14 visual-object-recognition tests that are commonly used for diagnosis of visual agnosia. Our "standard" normal observer took these tests at various eccentricities in his periphery. Analyzing the published data of 32 apperceptive agnosia patients and a group of 14 posterior cortical atrophy (PCA) patients on these tests, we find that each patient's pattern of object recognition deficits is well characterized by one number, the equivalent eccentricity at which our standard observer's peripheral vision is like the central vision of the agnosic patient. In other words, each agnosic patient's equivalent eccentricity is conserved across tests. Across patients, equivalent eccentricity ranges from 4 to 40 deg, which rates severity of the visual deficit. In normal peripheral vision, the required size to perceive a simple image (e.g., an isolated letter) is limited by acuity, and that for a complex image (e.g., a face or a word) is limited by crowding. In crowding, adjacent simple objects appear unrecognizably jumbled unless their spacing exceeds the crowding distance, which grows linearly with eccentricity. Besides conservation of equivalent eccentricity across object-recognition tests, we also find conservation, from eccentricity to agnosia, of the relative susceptibility of recognition of ten visual tests. These findings show that agnosic vision is like eccentric vision. Whence crowding? 
Peripheral vision, strabismic amblyopia, and possibly apperceptive agnosia are all limited by crowding, making it urgent to know what drives crowding. Acuity does not (Song et al., 2014), but neural density might: neurons per deg² in the crowding-relevant cortical area. Copyright © 2017 Elsevier Ltd. All rights reserved.
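The linear growth of crowding distance with eccentricity described above is often summarized as Bouma's rule, with critical spacing roughly half the eccentricity. The 0.5 factor in this sketch is that conventional approximation, not a value fitted in this study:

```python
def crowding_distance(eccentricity_deg, bouma=0.5):
    """Approximate critical spacing (deg) below which adjacent simple
    objects appear unrecognizably jumbled. The default factor of 0.5
    is the commonly cited Bouma approximation (an assumption here)."""
    return bouma * eccentricity_deg

def is_crowded(spacing_deg, eccentricity_deg, bouma=0.5):
    """A complex image (e.g. a face or word) is recognizable only if
    the spacing of its parts exceeds the crowding distance."""
    return spacing_deg < crowding_distance(eccentricity_deg, bouma)
```

Under this rule, a patient whose equivalent eccentricity is 40 deg would need part spacings beyond roughly 20 deg for complex-image recognition, which conveys how severely the largest equivalent eccentricities in the patient sample limit recognition.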
The Functional Architecture of Visual Object Recognition
1991-07-01
different forms of agnosia can provide clues to the representations underlying normal object recognition (Farah, 1990). For example, the pair-wise... patterns of deficit and sparing occur. In a review of 99 published cases of agnosia, the observed patterns of co-occurrence implicated two underlying
Real-Time pedestrian detection : layered object recognition system for pedestrian collision sensing.
DOT National Transportation Integrated Search
2010-01-01
In 2005 alone, 64,000 pedestrians were injured and 4,882 were killed in the United States, with pedestrians accounting for 11 percent of all traffic fatalities and 2 percent of injuries. The focus of "Layered Object Recognition System for Pedestrian ...
Evaluating structural pattern recognition for handwritten math via primitive label graphs
NASA Astrophysics Data System (ADS)
Zanibbi, Richard; Mouchère, Harold; Viard-Gaudin, Christian
2013-01-01
Currently, structural pattern recognizer evaluations compare graphs of detected structure to target structures (i.e. ground truth) using recognition rates, recall and precision for object segmentation, classification and relationships. In document recognition, these target objects (e.g. symbols) are frequently comprised of multiple primitives (e.g. connected components, or strokes for online handwritten data), but current metrics do not characterize errors at the primitive level, from which object-level structure is obtained. Primitive label graphs are directed graphs defined over primitives and primitive pairs. We define new metrics obtained by Hamming distances over label graphs, which allow classification, segmentation and parsing errors to be characterized separately, or using a single measure. Recall and precision for detected objects may also be computed directly from label graphs. We illustrate the new metrics by comparing a new primitive-level evaluation to the symbol-level evaluation performed for the CROHME 2012 handwritten math recognition competition. A Python-based set of utilities for evaluating, visualizing and translating label graphs is publicly available.
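A minimal sketch of a Hamming distance over primitive label graphs as described above, assuming a simplified representation: a dict of labels on primitives plus a dict of labels on primitive pairs, with a null label for absent pairs. The actual label-graph format and any error weighting in the paper may differ:

```python
def label_graph(primitive_labels, pair_labels):
    """A primitive label graph: labels on primitives (nodes) and on
    primitive pairs (edges). Pairs not listed carry the null label."""
    return dict(primitive_labels), dict(pair_labels)

def hamming_distance(g1, g2, null="_"):
    """Count label disagreements between two label graphs defined over
    the same set of primitives. Node mismatches capture classification
    errors; pair mismatches capture segmentation/relationship errors."""
    nodes1, edges1 = g1
    nodes2, edges2 = g2
    d = sum(nodes1[p] != nodes2[p] for p in nodes1)
    pairs = set(edges1) | set(edges2)
    d += sum(edges1.get(p, null) != edges2.get(p, null) for p in pairs)
    return d
```

Because node and pair disagreements are counted separately before being summed, the same machinery can report classification, segmentation and parsing errors either individually or as a single combined measure, which is the property the abstract highlights.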
a Two-Step Classification Approach to Distinguishing Similar Objects in Mobile LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
He, H.; Khoshelham, K.; Fraser, C.
2017-09-01
Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless-car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of an object, local features recording fractional details are more discriminative and are applicable to object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp posts, street lights and traffic signs are grouped as one category in the first-step classification because of their mutual similarity relative to trees and vehicles. A finer classification of lamp posts, street lights and traffic signs, based on the result of the first step, is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.
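The coarse-then-fine control flow of such a two-step scheme can be illustrated with a toy classifier. The paper uses point feature histograms with a bag-of-features model; this sketch substitutes nearest-centroid matching on made-up feature vectors purely to show the structure:

```python
import numpy as np

def nearest(feat, centroids):
    """Return the name of the centroid closest to the feature vector."""
    names = list(centroids)
    dists = [np.linalg.norm(feat - centroids[n]) for n in names]
    return names[int(np.argmin(dists))]

def two_step_classify(feat, coarse_centroids, fine_centroids):
    """Step 1: assign a coarse label (pole-like vs tree vs vehicle).
    Step 2: refine pole-like objects into lamp post / street light /
    traffic sign using a second, finer set of centroids."""
    label = nearest(feat, coarse_centroids)
    if label == "pole-like":
        label = nearest(feat, fine_centroids)
    return label
```

The design rationale mirrors the abstract: the easy separations (pole-like objects vs trees vs vehicles) are resolved first with one model, so the second model only has to discriminate among the mutually similar pole-like classes.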
Reconciling change blindness with long-term memory for objects.
Wood, Katherine; Simons, Daniel J
2017-02-01
How can we reconcile remarkably precise long-term memory for thousands of images with failures to detect changes to similar images? We explored whether people can use detailed, long-term memory to improve change detection performance. Subjects studied a set of images of objects and then performed recognition and change detection tasks with those images. Recognition memory performance exceeded change detection performance, even when a single familiar object in the postchange display consistently indicated the change location. In fact, participants were no better when a familiar object predicted the change location than when the displays consisted of unfamiliar objects. When given an explicit strategy to search for a familiar object as a way to improve performance on the change detection task, they performed no better than in a 6-alternative recognition memory task. Subjects only benefited from the presence of familiar objects in the change detection task when they had more time to view the prechange array before it switched. Once the cost to using the change detection information decreased, subjects made use of it in conjunction with memory to boost performance on the familiar-item change detection task. This suggests that even useful information will go unused if it is sufficiently difficult to extract.
ERIC Educational Resources Information Center
Shinskey, Jeanne L.; Jachens, Liza J.
2014-01-01
Infants' transfer of information from pictures to objects was tested by familiarizing 9-month-olds (N = 31) with either a color or black-and-white photograph of an object and observing their preferential reaching for the real target object versus a distractor. One condition tested object recognition by keeping both objects visible, and the…
Three-dimensional passive sensing photon counting for object classification
NASA Astrophysics Data System (ADS)
Yeom, Seokwon; Javidi, Bahram; Watson, Edward
2007-04-01
In this keynote address, we consider three-dimensional (3D) distortion-tolerant object recognition using photon-counting integral imaging (II). A photon-counting linear discriminant analysis (LDA) is discussed for classification of photon-limited images. We develop a compact distortion-tolerant recognition system based on the multiple-perspective imaging of II. Experimental and simulation results show that a low level of photons is sufficient to classify out-of-plane rotated objects.
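Photon-limited images of the kind classified above are commonly modeled by treating the normalized irradiance as a probability map and drawing Poisson counts from it. This sketch assumes that standard model, not the authors' exact simulation:

```python
import numpy as np

def photon_limited(image, n_photons, seed=None):
    """Simulate a photon-counting image: normalize the irradiance to a
    probability map, then draw independent Poisson counts per pixel
    with an expected total of n_photons across the image."""
    rng = np.random.default_rng(seed)
    p = np.asarray(image, dtype=float)
    p = p / p.sum()
    return rng.poisson(n_photons * p)
```

Lowering `n_photons` makes the counts sparser and noisier, which is the regime in which a photon-counting LDA classifier would be evaluated.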
Viewpoint-Specific Representations in Three-Dimensional Object Recognition
1990-08-01
Dalrymple, Kirsten A; Elison, Jed T; Duchaine, Brad
2017-02-01
Evidence suggests that face and object recognition depend on distinct neural circuitry within the visual system. Work with adults with developmental prosopagnosia (DP) demonstrates that some individuals have preserved object recognition despite severe face recognition deficits. This face selectivity in adults with DP indicates that face- and object-processing systems can develop independently, but it is unclear at what point in development these mechanisms are separable. Determining when individuals with DP first show dissociations between faces and objects is one means to address this question. In the current study, we investigated face and object processing in six children with DP (5-12-years-old). Each child was assessed with one face perception test, two different face memory tests, and two object memory tests that were matched to the face memory tests in format and difficulty. Scores from the DP children on the matched face and object tasks were compared to within-subject data from age-matched controls. Four of the six DP children, including the 5-year-old, showed evidence of face-specific deficits, while one child appeared to have more general visual-processing deficits. The remaining child had inconsistent results. The presence of face-specific deficits in children with DP suggests that face and object perception depend on dissociable processes in childhood.
Beyond sensory images: Object-based representation in the human ventral pathway
Pietrini, Pietro; Furey, Maura L.; Ricciardi, Emiliano; Gobbini, M. Ida; Wu, W.-H. Carolyn; Cohen, Leonardo; Guazzelli, Mario; Haxby, James V.
2004-01-01
We investigated whether the topographically organized, category-related patterns of neural response in the ventral visual pathway are a representation of sensory images or a more abstract representation of object form that is not dependent on sensory modality. We used functional MRI to measure patterns of response evoked during visual and tactile recognition of faces and manmade objects in sighted subjects and during tactile recognition in blind subjects. Results showed that visual and tactile recognition evoked category-related patterns of response in a ventral extrastriate visual area in the inferior temporal gyrus that were correlated across modality for manmade objects. Blind subjects also demonstrated category-related patterns of response in this “visual” area, and in more ventral cortical regions in the fusiform gyrus, indicating that these patterns are not due to visual imagery and, furthermore, that visual experience is not necessary for category-related representations to develop in these cortices. These results demonstrate that the representation of objects in the ventral visual pathway is not simply a representation of visual images but, rather, is a representation of more abstract features of object form. PMID:15064396
Do we understand high-level vision?
Cox, David Daniel
2014-04-01
'High-level' vision lacks a single, agreed-upon definition, but it might usefully be defined as those stages of visual processing that transition from analyzing local image structure to analyzing the structure of the external world that produced those images. Much work in the last several decades has focused on object recognition as a framing problem for the study of high-level visual cortex, and much progress has been made in this direction. This approach presumes that the operational goal of the visual system is to read out the identity of an object (or objects) in a scene, in spite of variation in position, size, lighting and the presence of other nearby objects. However, while object recognition is intuitively appealing as an operational framing of high-level vision, it is by no means the only task that visual cortex might perform, and the study of object recognition is beset by challenges in building stimulus sets that adequately sample the infinite space of possible stimuli. Here I review the successes and limitations of this work, and ask whether we should reframe our approaches to understanding high-level vision. Copyright © 2014. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.; Birch, Phil M.
2009-04-01
The window unit in the design of the complex logarithmic r-θ mapping for the hybrid optical neural network filter allows multiple objects of the same class to be detected within the input image. Additionally, the architecture of the neural network unit of the complex logarithmic r-θ mapping for the hybrid optical neural network filter becomes attractive for accommodating the recognition of multiple objects of different classes within the input image by modifying the output layer of the unit. We test the overall filter for recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The complex logarithmic r-θ mapping for the hybrid optical neural network filter is shown, with a single pass over the input data, to exhibit simultaneous in-plane rotation, out-of-plane rotation, scale, log r-θ map translation and shift invariance, together with good clutter tolerance, by correctly recognizing the different objects within the cluttered scenes. We record in our results additional extracted information from the cluttered scenes about the objects' relative position, scale and in-plane rotation.
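The complex logarithmic r-θ coordinate transform underlying this filter can be sketched in a few lines. The following is a minimal numpy version with invented grid sizes, nearest-neighbour sampling, and the map taken about the image centre; the actual optical implementation differs, so treat this only as an illustration of why the transform confers rotation and scale invariance:

```python
import numpy as np

def log_r_theta_map(img, n_r=64, n_theta=64):
    """Resample an image onto a (log r, theta) grid about its centre.

    In this space an in-plane rotation of the input becomes a shift
    along the theta axis and a scale change becomes a shift along the
    log r axis, so a shift-invariant filter applied to the mapped image
    becomes rotation- and scale-invariant with respect to the original.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    # log-spaced radii (starting at 1 pixel to avoid log 0), uniform angles
    radii = np.exp(np.linspace(0.0, np.log(r_max), n_r))
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]  # shape (n_r, n_theta)

img = np.zeros((65, 65))
img[32, 40:50] = 1.0  # a short horizontal bar right of centre
mapped = log_r_theta_map(img)
```

Rotating `img` about its centre would translate the bright samples along the theta axis of `mapped`; rescaling would translate them along the log r axis.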
A role for the CAMKK pathway in visual object recognition memory.
Tinsley, Chris J; Narduzzo, Katherine E; Brown, Malcolm W; Warburton, E Clea
2012-03-01
The role of the CAMKK pathway in object recognition memory was investigated. Rats' performance in a preferential object recognition test was examined after local infusion into the perirhinal cortex of the CAMKK inhibitor STO-609. STO-609 infused either before or immediately after acquisition impaired memory tested after a 24 h but not a 20-min delay. Memory was not impaired when STO-609 was infused 20 min after acquisition. The expression of a downstream reaction product of CAMKK was measured by immunohistochemical staining for phospho-CAMKI(Thr177) at 10, 40, 70, and 100 min following the viewing of novel and familiar images of objects. Processing familiar images resulted in more pCAMKI stained neurons in the perirhinal cortex than processing novel images at the 10- and 40-min delays. Prior infusion of STO-609 caused a reduction in pCAMKI stained neurons in response to viewing either novel or familiar images, consistent with its role as an inhibitor of CAMKK. The results establish that the CAMKK pathway within the perirhinal cortex is important for the consolidation of object recognition memory. The activation of pCAMKI after acquisition is earlier than previously reported for pCAMKII. Copyright © 2011 Wiley Periodicals, Inc.
Huberle, Elisabeth; Karnath, Hans-Otto
2006-01-01
Simultanagnosia is a rare deficit that impairs the ability to perceive several objects at the same time. It is usually observed following bilateral parieto-occipital brain damage. Despite the restrictions in perceiving the global aspect of a scene, processing of individual objects remains unaffected. The mechanisms underlying simultanagnosia are not well understood. Previous findings indicated that the integration of multiple objects into a holistic representation of the environment is not impossible per se, but might depend on the spatial relationship between individual objects. The present study examined the influence of inter-element distances between individual objects on the recognition of global shapes in two patients with simultanagnosia. We presented Navon hierarchical letter stimuli with different inter-element distances between the letters at the local scale. Improved recognition at the global scale was observed in both patients when the inter-element distance was reduced. Global shape recognition in simultanagnosia thus seems to be modulated by the spatial distance of local elements and does not appear to be an all-or-nothing phenomenon depending on spatial continuity. The findings seem to argue against a deficit in visual working memory capacity as the primary deficit in simultanagnosia. However, further research is necessary to investigate alternative interpretations.
A new method of edge detection for object recognition
Maddox, Brian G.; Rhew, Benjamin
2004-01-01
Traditional edge detection systems function by returning every edge in an input image. This can result in a large amount of clutter and make certain vectorization algorithms less accurate. Accuracy problems can then have a large impact on automated object recognition systems that depend on edge information. A new method of directed edge detection can be used to limit the number of edges returned based on a particular feature. This results in a cleaner image that is easier for vectorization. Vectorized edges from this process could then feed an object recognition system where the edge data would also contain information as to what type of feature it bordered.
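The idea of limiting returned edges to those consistent with a sought feature can be sketched with ordinary gradient operators. The following numpy-only sketch (Sobel kernels, a made-up orientation tolerance and magnitude threshold, and a toy square image) is not the paper's algorithm, only one simple realisation of "directed" edge detection by orientation:

```python
import numpy as np

def directed_edges(img, target_angle, tol=np.pi / 8, thresh=0.5):
    """Keep only edges whose gradient orientation is near target_angle.

    Instead of returning every edge, return only those consistent with
    the feature sought, leaving a cleaner map for later vectorization.
    The tolerance and threshold values are illustrative choices.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):          # naive convolution over the valid region
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    # angular distance to target, folded so opposite-sign gradients match too
    d = np.abs((ang - target_angle + np.pi / 2) % np.pi - np.pi / 2)
    return (mag > thresh * mag.max()) & (d < tol)

# a filled square has both vertical and horizontal edges
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
# target_angle=0 selects horizontal gradients, i.e. vertical edges only
vertical = directed_edges(img, target_angle=0.0)
```

The returned mask keeps the square's vertical sides and suppresses its horizontal sides and corners, which is the clutter reduction the abstract describes.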
Invariant visual object recognition and shape processing in rats
Zoccolan, Davide
2015-01-01
Invariant visual object recognition is the ability to recognize visual objects despite the vastly different images that each object can project onto the retina during natural vision, depending on its position and size within the visual field, its orientation relative to the viewer, etc. Achieving invariant recognition represents such a formidable computational challenge that is often assumed to be a unique hallmark of primate vision. Historically, this has limited the invasive investigation of its neuronal underpinnings to monkey studies, in spite of the narrow range of experimental approaches that these animal models allow. Meanwhile, rodents have been largely neglected as models of object vision, because of the widespread belief that they are incapable of advanced visual processing. However, the powerful array of experimental tools that have been developed to dissect neuronal circuits in rodents has made these species very attractive to vision scientists too, promoting a new tide of studies that have started to systematically explore visual functions in rats and mice. Rats, in particular, have been the subjects of several behavioral studies, aimed at assessing how advanced object recognition and shape processing is in this species. Here, I review these recent investigations, as well as earlier studies of rat pattern vision, to provide an historical overview and a critical summary of the status of the knowledge about rat object vision. The picture emerging from this survey is very encouraging with regard to the possibility of using rats as complementary models to monkeys in the study of higher-level vision. PMID:25561421
Amesz, Sarah; Tessari, Alessia; Ottoboni, Giovanni; Marsden, Jon
2016-01-01
Objective: to explore the relationship between laterality recognition after stroke and impairments in attention, 3D object rotation and functional ability. Design: observational cross-sectional study. Setting: acute care teaching hospital. Participants: thirty-two acute and sub-acute people with stroke and 36 healthy, age-matched controls. Main measures: laterality recognition, attention and mental rotation of objects. Within the stroke group, the relationship between laterality recognition and functional ability, neglect, hemianopia and dyspraxia was further explored. People with stroke were significantly less accurate (69% vs 80%) and showed delayed reaction times (3.0 vs 1.9 seconds) when determining the laterality of a pictured hand. Deficits in either accuracy or reaction time were seen in 53% of people with stroke. The accuracy of laterality recognition was associated with reduced functional ability (R(2) = 0.21), less accurate mental rotation of objects (R(2) = 0.20) and dyspraxia (p = 0.03). Implicit motor imagery is affected in a significant number of patients after stroke, with these deficits related to lesions to the motor networks as well as other deficits seen after stroke. This research provides new insights into how laterality recognition is related to a number of other deficits after stroke, including the mental rotation of 3D objects, attention and dyspraxia. Further research is required to determine if treatment programmes can improve deficits in laterality recognition and impact functional outcomes after stroke.
Dissociated active and passive tactile shape recognition: a case study of pure tactile apraxia.
Valenza, N; Ptak, R; Zimine, I; Badan, M; Lazeyras, F; Schnider, A
2001-11-01
Disorders of tactile object recognition (TOR) may result from primary motor or sensory deficits or higher cognitive impairment of tactile shape representations or semantic memory. Studies with healthy participants suggest the existence of exploratory motor procedures directly linked to the extraction of specific properties of objects. A pure deficit of these procedures without concomitant gnostic disorders has never been described in a brain-damaged patient. Here, we present a patient with a right hemispheric infarction who, in spite of intact sensorimotor functions, had impaired TOR with the left hand. Recognition of 2D shapes and objects was severely deficient under the condition of spontaneous exploration. Tactile exploration of shapes was disorganized and exploratory procedures, such as the contour-following strategy, which is necessary to identify the precise shape of an object, were severely disturbed. However, recognition of 2D shapes under manually or verbally guided exploration and the recognition of shapes traced on the skin were intact, indicating a dissociation in shape recognition between active and passive touch. Functional MRI during sensory stimulation of the left hand showed preserved activation of the spared primary sensory cortex in the right hemisphere. We interpret the deficit of our patient as a pure tactile apraxia without tactile agnosia, i.e. a specific inability to use tactile feedback to generate the exploratory procedures necessary for tactile shape recognition.
Tactile recognition and localization using object models: the case of polyhedra on a plane.
Gaston, P C; Lozano-Perez, T
1984-03-01
This paper discusses how data from multiple tactile sensors may be used to identify and locate one object, from among a set of known objects. We use only local information from sensors: 1) the position of contact points and 2) ranges of surface normals at the contact points. The recognition and localization process is structured as the development and pruning of a tree of consistent hypotheses about pairings between contact points and object surfaces. In this paper, we deal with polyhedral objects constrained to lie on a known plane, i.e., having three degrees of positioning freedom relative to the sensors. We illustrate the performance of the algorithm by simulation.
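The tree of consistent hypotheses can be illustrated with a toy in-plane version. The model object, normal angles, and tolerances below are invented for illustration, and only the surface-normal constraint is applied; the actual method also prunes with pairwise distance and angle constraints between contacts, and a real implementation would handle angular wraparound:

```python
# Toy interpretation tree: each hypothesis pairs every contact point
# with a model face, and a branch is pruned as soon as a measured
# surface-normal range is inconsistent with the candidate face.

# a model "object": face name -> outward normal angle (degrees, in-plane)
SQUARE = {"left": 180, "right": 0, "top": 90, "bottom": 270}

def consistent(face_normal, measured_range):
    lo, hi = measured_range
    return lo <= face_normal <= hi

def interpretations(model, contacts):
    """Enumerate all face assignments that survive local pruning.

    contacts: list of (normal_lo, normal_hi) ranges from tactile sensors.
    Returns a list of tuples of face names, one name per contact.
    """
    results = []
    def grow(assigned, remaining):
        if not remaining:
            results.append(tuple(assigned))
            return
        rng = remaining[0]
        for face, normal in model.items():
            if consistent(normal, rng):   # prune inconsistent branches early
                grow(assigned + [face], remaining[1:])
    grow([], contacts)
    return results

# two contacts: one normal near 0 degrees, one between 80 and 100 degrees
hyps = interpretations(SQUARE, [(-10, 10), (80, 100)])
```

Here the two sensor readings prune the sixteen possible pairings down to the single hypothesis `("right", "top")`; with real data, the surviving hypotheses would then be checked for a consistent object pose.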
ERIC Educational Resources Information Center
Foley, Nicholas C.; Grossberg, Stephen; Mingolla, Ennio
2012-01-01
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued…
Teaching Object Permanence: An Action Research Study
ERIC Educational Resources Information Center
Bruce, Susan M.; Vargas, Claudia
2013-01-01
"Object permanence," also known as "object concept" in the field of visual impairment, is one of the most important early developmental milestones. The achievement of object permanence is associated with the onset of representational thought and language. Object permanence is important to orientation, including the recognition of landmarks.…
The evolution of meaning: spatio-temporal dynamics of visual object recognition.
Clarke, Alex; Taylor, Kirsten I; Tyler, Lorraine K
2011-08-01
Research on the spatio-temporal dynamics of visual object recognition suggests a recurrent, interactive model whereby an initial feedforward sweep through the ventral stream to prefrontal cortex is followed by recurrent interactions. However, critical questions remain regarding the factors that mediate the degree of recurrent interactions necessary for meaningful object recognition. The novel prediction we test here is that recurrent interactivity is driven by increasing semantic integration demands, as defined by the complexity of the semantic information required by the task and the stimuli. To test this prediction, we recorded magnetoencephalography data while participants named living and nonliving objects during two naming tasks. We found that the spatio-temporal dynamics of neural activity were modulated by the level of semantic integration required. Specifically, source reconstructed time courses and phase synchronization measures showed increased recurrent interactions as a function of semantic integration demands. These findings demonstrate that the cortical dynamics of object processing are modulated by the complexity of semantic information required from the visual input.
Incremental concept learning with few training examples and hierarchical classification
NASA Astrophysics Data System (ADS)
Bouma, Henri; Eendebak, Pieter T.; Schutte, Klamer; Azzopardi, George; Burghouts, Gertjan J.
2015-10-01
Object recognition and localization are important to automatically interpret video and allow better querying on its content. We propose a method for object localization that learns incrementally and addresses four key aspects. Firstly, we show that for certain applications, recognition is feasible with only a few training samples. Secondly, we show that novel objects can be added incrementally without retraining existing objects, which is important for fast interaction. Thirdly, we show that an unbalanced number of positive training samples leads to biased classifier scores that can be corrected by modifying weights. Fourthly, we show that the detector performance can deteriorate due to hard-negative mining for similar or closely related classes (e.g., for Barbie and dress, because the doll is wearing a dress). This can be solved by our hierarchical classification. We introduce a new dataset, which we call TOSO, and use it to demonstrate the effectiveness of the proposed method for the localization and recognition of multiple objects in images.
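The third point, correcting biased scores that arise from an unbalanced number of positive training samples, can be illustrated with a simple prior correction. The scores and sample counts below are invented, and the paper's own correction modifies classifier weights rather than output scores, so this is only one simple realisation of the same idea:

```python
import math

# When one class is trained with many more positives than another, its
# raw scores run systematically higher, biasing cross-class comparison.
# For a log-odds score, subtracting the log of the class prior removes
# that offset and makes scores from differently-sized classes comparable.

def corrected_score(raw_log_odds, n_pos, n_total):
    prior = n_pos / n_total
    return raw_log_odds - math.log(prior / (1.0 - prior))

# class A trained on 900/1000 positives, class B on only 100/1000
a = corrected_score(2.5, 900, 1000)   # 2.5 - log(9): pushed down
b = corrected_score(0.6, 100, 1000)   # 0.6 + log(9): pushed up
```

Before correction class A's raw score (2.5) dominates class B's (0.6); after removing the prior offsets the under-represented class B actually ranks higher, which is the kind of bias the weight modification is meant to repair.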
Multiple degree of freedom object recognition using optical relational graph decision nets
NASA Technical Reports Server (NTRS)
Casasent, David P.; Lee, Andrew J.
1988-01-01
Multiple-degree-of-freedom object recognition concerns objects with no stable rest position, with all scale, rotation, and aspect distortions possible. It is assumed that the objects are in a fairly benign background, so that feature extractors are usable. In-plane distortion invariance is provided by use of a polar-log coordinate transform feature space, and out-of-plane distortion invariance is provided by linear discriminant function design. Relational graph decision nets are considered for multiple-degree-of-freedom pattern recognition. The design of Fisher (1936) linear discriminant functions and synthetic discriminant functions for use at the nodes of binary and multidecision nets is discussed. Case studies are detailed for two-class and multiclass problems. Simulation results demonstrate the robustness of the processors to quantization of the filter coefficients and to noise.
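A Fisher linear discriminant of the kind used at such decision-net nodes projects feature vectors onto w = Sw⁻¹(m₁ − m₂), the direction maximising between-class separation relative to within-class scatter. The two toy feature clouds below are invented; per-node thresholds in the actual net would be designed for each class pair:

```python
import numpy as np

def fisher_direction(X1, X2):
    """Fisher linear discriminant direction for two feature-vector sets."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # within-class scatter: sum of the two class scatter matrices
    S1 = (X1 - m1).T @ (X1 - m1)
    S2 = (X2 - m2).T @ (X2 - m2)
    Sw = S1 + S2
    return np.linalg.solve(Sw, m1 - m2)  # w = Sw^-1 (m1 - m2)

rng = np.random.default_rng(0)
X1 = rng.normal([0, 0], 0.5, size=(50, 2))  # class 1 features
X2 = rng.normal([3, 1], 0.5, size=(50, 2))  # class 2 features
w = fisher_direction(X1, X2)
# projections onto w should separate the two classes cleanly
p1, p2 = X1 @ w, X2 @ w
```

Thresholding the scalar projection at, say, the midpoint of the two projected means gives the binary decision made at one node; a net chains such nodes to handle the multiclass case.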
Language comprehenders retain implied shape and orientation of objects.
Pecher, Diane; van Dantzig, Saskia; Zwaan, Rolf A; Zeelenberg, René
2009-06-01
According to theories of embodied cognition, language comprehenders simulate sensorimotor experiences to represent the meaning of what they read. Previous studies have shown that picture recognition is better if the object in the picture matches the orientation or shape implied by a preceding sentence. In order to test whether strategic imagery may explain previous findings, language comprehenders first read a list of sentences in which objects were mentioned. Only once the complete list had been read was recognition memory tested with pictures. Recognition performance was better if the orientation or shape of the object matched that implied by the sentence, both immediately after reading the complete list of sentences and after a 45-min delay. These results suggest that previously found match effects were not due to strategic imagery and show that details of sensorimotor simulations are retained over longer periods.
Behavior analysis of video object in complicated background
NASA Astrophysics Data System (ADS)
Zhao, Wenting; Wang, Shigang; Liang, Chao; Wu, Wei; Lu, Yang
2016-10-01
This paper aims to achieve robust behavior recognition of video objects against complicated backgrounds. Features of the video object are described and modeled according to the depth information of three-dimensional video. Multi-dimensional eigenvectors are constructed and used to process high-dimensional data. Stable object tracking in complex scenes can be achieved with multi-feature based behavior analysis, so as to obtain the motion trail. Subsequently, effective behavior recognition of the video object is obtained according to the decision criteria. Moreover, both the real-time performance of the algorithms and the accuracy of the analysis are greatly improved. The theory and method of behavior analysis of video objects in real scenes put forward by this project have broad application prospects and important practical significance in security, counter-terrorism, military and many other fields.
Rosselli, Federica B.; Alemi, Alireza; Ansuini, Alessio; Zoccolan, Davide
2015-01-01
In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning. PMID:25814936
The role of colour in implicit and explicit memory performance.
Vernon, David; Lloyd-Jones, Toby J
2003-07-01
We present two experiments that examine the effects of colour transformation between study and test (from black and white to colour and vice versa, or from incorrectly coloured to correctly coloured and vice versa) on implicit and explicit measures of memory for diagnostically coloured natural objects (e.g., a yellow banana). For naming and coloured-object decision (i.e., deciding whether an object is correctly coloured), there were shorter response times to correctly coloured objects than to black-and-white and incorrectly coloured objects. Repetition priming was equivalent for the different stimulus types. Colour transformation did not influence priming of picture naming, but for coloured-object decision, priming was evident only for objects remaining the same from study to test. This was the case for both naming and coloured-object decision as study tasks. When participants were asked to consciously recognize objects that they had previously named or made coloured-object decisions about, whilst ignoring their colour, colour transformation reduced recognition efficiency. We discuss these results in terms of the flexibility of the object representations that mediate priming and recognition.
Two Speed Factors of Visual Recognition Independently Correlated with Fluid Intelligence
Tachibana, Ryosuke; Namba, Yuri; Noguchi, Yasuki
2014-01-01
Growing evidence indicates a moderate but significant relationship between processing speed in visuo-cognitive tasks and general intelligence. On the other hand, findings from neuroscience propose that the primate visual system consists of two major pathways: the ventral pathway for object recognition and the dorsal pathway for spatial processing and attentive analysis. Previous studies seeking visuo-cognitive factors of human intelligence indicated a significant correlation between fluid intelligence and inspection time (IT), an index of the speed of object recognition performed in the ventral pathway. We thus examined the possibility that neural processing speed in the dorsal pathway also represents a factor of intelligence. Specifically, we used the mental rotation (MR) task, a popular psychometric measure of the mental speed of spatial processing in the dorsal pathway. We found that the speed of MR was significantly correlated with intelligence scores, while it had no correlation with one's IT (recognition speed of visual objects). Our results support the new possibility that intelligence could be explained by two types of mental speed, one related to object recognition (IT) and another to the manipulation of mental images (MR). PMID:24825574
Nilakantan, Aneesha S; Voss, Joel L; Weintraub, Sandra; Mesulam, M-Marsel; Rogalski, Emily J
2017-06-01
Primary progressive aphasia (PPA) is clinically defined by an initial loss of language function and preservation of other cognitive abilities, including episodic memory. While PPA primarily affects the left-lateralized perisylvian language network, some clinical neuropsychological tests suggest concurrent initial memory loss. The goal of this study was to test recognition memory of objects and words in the visual and auditory modalities to separate language-processing impairments from retentive memory in PPA. Individuals with non-semantic PPA had longer reaction times and higher false-alarm rates for auditory word stimuli compared to visual object stimuli. Moreover, false alarms in auditory word recognition memory were related to cortical thickness within the left inferior frontal gyrus and left temporal pole, while false alarms in visual object recognition memory were related to cortical thickness within the right temporal pole. This pattern of results suggests that a specific vulnerability in processing verbal stimuli can hinder episodic memory in PPA, and provides evidence for differential contributions of the left and right temporal poles to word and object recognition memory. Copyright © 2017 Elsevier Ltd. All rights reserved.
Speckle-learning-based object recognition through scattering media.
Ando, Takamasa; Horisaki, Ryoichi; Tanida, Jun
2015-12-28
We experimentally demonstrated object recognition through scattering media based on direct machine learning of a number of speckle intensity images. In the experiments, speckle intensity images of amplitude or phase objects on a spatial light modulator between scattering plates were captured by a camera. We used the support vector machine for binary classification of the captured speckle intensity images of face and non-face data. The experimental results showed that speckles are sufficient for machine learning.
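The classification step can be sketched in numpy. The "speckle images" below are synthetic random-intensity vectors with invented class statistics, and the SVM is a plain linear one trained by Pegasos-style subgradient descent rather than the solver the authors used; the point is only that a linear max-margin classifier on raw speckle intensities can separate two classes:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Pegasos-style subgradient training of a linear SVM (no bias term)."""
    n, d = X.shape
    w, t = np.zeros(d), 0
    order = np.random.default_rng(1)
    for _ in range(epochs):
        for i in order.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w) < 1:          # margin violated: hinge subgradient
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                              # margin satisfied: shrink only
                w = (1 - eta * lam) * w
    return w

rng = np.random.default_rng(0)
d = 100                                        # flattened speckle "image"
face = rng.normal(0.2, 1.0, size=(40, d))      # class +1 speckle vectors
nonface = rng.normal(-0.2, 1.0, size=(40, d))  # class -1 speckle vectors
X = np.vstack([face, nonface])
y = np.concatenate([np.ones(40), -np.ones(40)])
w = train_linear_svm(X, y)
train_acc = (np.sign(X @ w) == y).mean()
```

Classification of a new speckle image is just `np.sign(x @ w)`; no descattering or image reconstruction is attempted, mirroring the abstract's claim that raw speckles suffice for machine learning.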
Cultural Diversity and Civic Education: Two Versions of the Fragmentation Objection
ERIC Educational Resources Information Center
Shorten, Andrew
2010-01-01
According to the "fragmentation objection" to multiculturalism, practices of cultural recognition undermine political stability, and this counts as a reason to be sceptical about the public recognition of minority cultures, as well as about multiculturalism construed more broadly as a public policy. Civic education programmes, designed to promote…
Multimedia Security System for Security and Medical Applications
ERIC Educational Resources Information Center
Zhou, Yicong
2010-01-01
This dissertation introduces a new multimedia security system for the performance of object recognition and multimedia encryption in security and medical applications. The system embeds an enhancement and multimedia encryption process into the traditional recognition system in order to improve the efficiency and accuracy of object detection and…
Poth, Christian H.; Schneider, Werner X.
2016-01-01
Human vision is organized in discrete processing episodes (e.g., eye fixations or task-steps). Object information must be transmitted across episodes to enable episodic short-term recognition: recognizing whether a current object has been seen in a previous episode. We ask whether episodic short-term recognition presupposes that objects have been encoded into capacity-limited visual working memory (VWM), which retains visual information for report. Alternatively, it could rely on the activation of visual features or categories that occurs before encoding into VWM. We assessed the dependence of episodic short-term recognition on VWM by a new paradigm combining letter report and probe recognition. Participants viewed displays of 10 letters and reported as many as possible after a retention interval (whole report). Next, participants viewed a probe letter and indicated whether it had been one of the 10 letters (probe recognition). In Experiment 1, probe recognition was more accurate for letters that had been encoded into VWM (reported letters) compared with non-encoded letters (non-reported letters). Interestingly, those letters that participants reported in their whole report had been near to one another within the letter displays. This suggests that the encoding into VWM proceeded in a spatially clustered manner. In Experiment 2, participants reported only one of 10 letters (partial report) and probes either referred to this letter, to letters that had been near to it, or far from it. Probe recognition was more accurate for near than for far letters, although none of these letters had to be reported. These findings indicate that episodic short-term recognition is constrained to a small number of simultaneously presented objects that have been encoded into VWM. PMID:27713722
Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex.
Malach, R; Reppas, J B; Benson, R R; Kwong, K K; Jiang, H; Kennedy, W A; Ledden, P J; Brady, T J; Rosen, B R; Tootell, R B
1995-01-01
The stages of integration leading from local feature analysis to object recognition were explored in human visual cortex by using the technique of functional magnetic resonance imaging. Here we report evidence for object-related activation. Such activation was located at the lateral-posterior aspect of the occipital lobe, just abutting the posterior aspect of the motion-sensitive area MT/V5, in a region termed the lateral occipital complex (LO). LO showed preferential activation to images of objects, compared to a wide range of texture patterns. This activation was not caused by a global difference in the Fourier spatial frequency content of objects versus texture images, since object images produced enhanced LO activation compared to textures matched in power spectra but randomized in phase. The preferential activation to objects also could not be explained by different patterns of eye movements: similar levels of activation were observed when subjects fixated on the objects and when they scanned the objects with their eyes. Additional manipulations such as spatial frequency filtering and a 4-fold change in visual size did not affect LO activation. These results suggest that the enhanced responses to objects were not a manifestation of low-level visual processing. A striking demonstration that activity in LO is uniquely correlated to object detectability was produced by the "Lincoln" illusion, in which blurring of objects digitized into large blocks paradoxically increases their recognizability. Such blurring led to significant enhancement of LO activation. Despite the preferential activation to objects, LO did not seem to be involved in the final, "semantic," stages of the recognition process. Thus, objects varying widely in their recognizability (e.g., famous faces, common objects, and unfamiliar three-dimensional abstract sculptures) activated it to a similar degree. 
These results are thus evidence for an intermediate link in the chain of processing stages leading to object recognition in human visual cortex. PMID:7667258
Formal implementation of a performance evaluation model for the face recognition system.
Shin, Yong-Nyuo; Kim, Jason; Lee, Yong-Jun; Shin, Woochang; Choi, Jin-Young
2008-01-01
Due to its usability, practical applicability, and lack of intrusiveness, face recognition technology, based on information derived from individuals' facial features, has recently been attracting considerable attention. Reported recognition rates of commercialized face recognition systems cannot be accepted as official recognition rates, as they are based on assumptions that are beneficial to the specific system and face database. Therefore, performance evaluation methods and tools are necessary to objectively measure the accuracy and performance of any face recognition system. In this paper, we propose and formalize a performance evaluation model for the biometric recognition system, implementing an evaluation tool for face recognition systems based on the proposed model. Furthermore, we performed evaluations objectively by providing guidelines for the design and implementation of a performance evaluation system, formalizing the performance test process.
3-Dimensional Scene Perception during Active Electrolocation in a Weakly Electric Pulse Fish
von der Emde, Gerhard; Behr, Katharina; Bouton, Béatrice; Engelmann, Jacob; Fetz, Steffen; Folde, Caroline
2010-01-01
Weakly electric fish use active electrolocation for object detection and orientation in their environment even in complete darkness. The African mormyrid Gnathonemus petersii can detect object parameters, such as material, size, shape, and distance. Here, we tested whether individuals of this species can learn to identify 3-dimensional objects independently of the training conditions and independently of the object's position in space (rotation-invariance; size-constancy). Individual G. petersii were trained in a two-alternative forced-choice procedure to electrically discriminate between a 3-dimensional object (S+) and several alternative objects (S−). Fish were then tested whether they could identify the S+ among novel objects and whether single components of S+ were sufficient for recognition. Size-constancy was investigated by presenting the S+ together with a larger version at different distances. Rotation-invariance was tested by rotating S+ and/or S− in 3D. Our results show that electrolocating G. petersii could (1) recognize an object independently of the S− used during training. When only single components of a complex S+ were offered, recognition of S+ was more or less affected depending on which part was used. (2) Object-size was detected independently of object distance, i.e. fish showed size-constancy. (3) The majority of the fishes tested recognized their S+ even if it was rotated in space, i.e. these fishes showed rotation-invariance. (4) Object recognition was restricted to the near field around the fish and failed when objects were moved more than about 4 cm away from the animals. Our results indicate that even in complete darkness our G. petersii were capable of complex 3-dimensional scene perception using active electrolocation. PMID:20577635
GPU Accelerated Symmetry Transform for Object Saliency
…significantly reduced execution time for VGA-sized images. The symmetry transform has potential for use in finding salient regions containing objects, and may also be applicable to stable object keypoints for recognition.
ERIC Educational Resources Information Center
Wood, Justin N.; Wood, Samantha M. W.
2018-01-01
How do newborns learn to recognize objects? According to temporal learning models in computational neuroscience, the brain constructs object representations by extracting smoothly changing features from the environment. To date, however, it is unknown whether newborns depend on smoothly changing features to build invariant object representations.…
A Neural-Dynamic Architecture for Concurrent Estimation of Object Pose and Identity
Lomp, Oliver; Faubel, Christian; Schöner, Gregor
2017-01-01
Handling objects or interacting with a human user about objects on a shared tabletop requires that objects be identified after learning from a small number of views and that object pose be estimated. We present a neurally inspired architecture that learns object instances by storing features extracted from a single view of each object. Input features are color and edge histograms from a localized area that is updated during processing. The system finds the best-matching view for the object in a novel input image while concurrently estimating the object’s pose, aligning the learned view with current input. The system is based on neural dynamics, computationally operating in real time, and can handle dynamic scenes directly off live video input. In a scenario with 30 everyday objects, the system achieves recognition rates of 87.2% from a single training view for each object, while also estimating pose quite precisely. We further demonstrate that the system can track moving objects, and that it can segment the visual array, selecting and recognizing one object while suppressing input from another known object in the immediate vicinity. Evaluation on the COIL-100 dataset, in which objects are depicted from different viewing angles, revealed recognition rates of 91.1% on the first 30 objects, each learned from four training views. PMID:28503145
Tactile Recognition and Localization Using Object Models: The Case of Polyhedra on a Plane.
1983-03-01
…poor force resolution, but high spatial resolution. We feel that the viability of this recognition approach has important implications for the design of… of the touched object: 1. Surface point – On the basis of sensor readings, some points on the sensor can be identified as being in contact with… the sensor's shape and location in space are known, one can determine the position of some point on the touched object, to within some uncertainty.
Stephenson, Jennifer
2009-03-01
Communication symbols for students with severe intellectual disabilities often take the form of computer-generated line drawings. This study investigated the effects of the match between color and shape of line drawings and the objects they represented on drawing recognition and use. The match or non-match between color and shape of the objects and drawings did not have an effect on participants' ability to match drawings to objects, or to use drawings to make choices.
NASA Technical Reports Server (NTRS)
Juday, Richard D. (Editor)
1988-01-01
The present conference discusses topics in pattern-recognition correlator architectures, digital stereo systems, geometric image transformations and their applications, topics in pattern recognition, filter algorithms, object detection and classification, shape representation techniques, and model-based object recognition methods. Attention is given to edge-enhancement preprocessing using liquid crystal TVs, massively-parallel optical data base management, three-dimensional sensing with polar exponential sensor arrays, the optical processing of imaging spectrometer data, hybrid associative memories and metric data models, the representation of shape primitives in neural networks, and the Monte Carlo estimation of moment invariants for pattern recognition.
Ursino, Mauro; Magosso, Elisa; Cuppini, Cristiano
2009-02-01
Synchronization of neural activity in the gamma band is assumed to play a significant role not only in perceptual processing, but also in higher cognitive functions. Here, we propose a neural network of Wilson-Cowan oscillators to simulate recognition of abstract objects, each represented as a collection of four features. Features are ordered in topological maps of oscillators connected via excitatory lateral synapses, to implement a similarity principle. Experience on previous objects is stored in long-range synapses connecting the different topological maps, and trained via timing-dependent Hebbian learning (previous knowledge principle). Finally, a downstream decision network detects the presence of a reliable object representation, when all features are oscillating in synchrony. Simulations performed by giving the network between one and four simultaneous objects, some with missing and/or modified properties, suggest that the network can reconstruct objects and segment them from the other simultaneously present objects, even in the case of deteriorated information, noise, and moderate correlation among the inputs (one common feature). The balance between sensitivity and specificity depends on the strength of the Hebbian learning. Achieving a correct reconstruction in all cases, however, requires ad hoc selection of the oscillation frequency. The model represents an attempt to investigate the interactions among topological maps, autoassociative memory, and gamma-band synchronization, for recognition of abstract objects.
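A single Wilson-Cowan excitatory/inhibitory unit of the kind used as the building block of such networks can be integrated with plain Euler steps; a minimal sketch with illustrative coupling constants (not the parameters used in the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def wilson_cowan(steps=2000, dt=0.01, c_ee=16, c_ei=12, c_ie=15, c_ii=3, P=1.25):
    """Euler integration of one Wilson-Cowan excitatory (E) / inhibitory (I)
    pair. E excites itself and I; I inhibits both. P is external drive.
    All coupling values here are illustrative."""
    E, I = 0.1, 0.1
    trace = []
    for _ in range(steps):
        dE = -E + sigmoid(c_ee * E - c_ei * I + P)
        dI = -I + sigmoid(c_ie * E - c_ii * I)
        E += dt * dE
        I += dt * dI
        trace.append(E)
    return np.array(trace)

trace = wilson_cowan()
# Activities are firing-rate fractions, so the trajectory stays in [0, 1].
assert 0.0 <= trace.min() and trace.max() <= 1.0
```

Coupling many such units with lateral and long-range synapses, and reading out synchrony across them, is the essence of the architecture described above.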
Cortical Thickness in Fusiform Face Area Predicts Face and Object Recognition Performance
McGugin, Rankin W.; Van Gulick, Ana E.; Gauthier, Isabel
2016-01-01
The fusiform face area (FFA) is defined by its selectivity for faces. Several studies have shown that the response of FFA to non-face objects can predict behavioral performance for these objects. However, one possible account is that experts pay more attention to objects in their domain of expertise, driving signals up. Here we show an effect of expertise with non-face objects in FFA that cannot be explained by differential attention to objects of expertise. We explore the relationship between cortical thickness of FFA and face and object recognition using the Cambridge Face Memory Test and Vanderbilt Expertise Test, respectively. We measured cortical thickness in functionally-defined regions in a group of men who evidenced functional expertise effects for cars in FFA. Performance with faces and objects together accounted for approximately 40% of the variance in cortical thickness of several FFA patches. While subjects with a thicker FFA cortex performed better with vehicles, those with a thinner FFA cortex performed better with faces and living objects. The results point to a domain-general role of FFA in object perception and reveal an interesting double dissociation that does not contrast faces and objects, but rather living and non-living objects. PMID:26439272
Advances in the behavioural testing and network imaging of rodent recognition memory
Kinnavane, Lisa; Albasser, Mathieu M.; Aggleton, John P.
2015-01-01
Research into object recognition memory has been galvanised by the introduction of spontaneous preference tests for rodents. The standard task, however, contains a number of inherent shortcomings that reduce its power. Particular issues include the problem that individual trials are time consuming, so limiting the total number of trials in any condition. In addition, the spontaneous nature of the behaviour and the variability between test objects add unwanted noise. To combat these issues, the ‘bow-tie maze’ was introduced. Although still based on the spontaneous preference of novel over familiar stimuli, the ability to give multiple trials within a session without handling the rodents, as well as using the same objects as both novel and familiar samples on different trials, overcomes key limitations in the standard task. Giving multiple trials within a single session also creates new opportunities for functional imaging of object recognition memory. A series of studies are described that examine the expression of the immediate-early gene, c-fos. Object recognition memory is associated with increases in perirhinal cortex and area Te2 c-fos activity. When rats explore novel objects the pathway from the perirhinal cortex to lateral entorhinal cortex, and then to the dentate gyrus and CA3, is engaged. In contrast, when familiar objects are explored the pathway from the perirhinal cortex to lateral entorhinal cortex, and then to CA1, takes precedence. The switch to the perforant pathway (novel stimuli) from the temporoammonic pathway (familiar stimuli) may assist the enhanced associative learning promoted by novel stimuli. PMID:25106740
Body-wide anatomy recognition in PET/CT images
NASA Astrophysics Data System (ADS)
Wang, Huiqian; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Zhao, Liming; Torigian, Drew A.
2015-03-01
With the rapid growth of positron emission tomography/computed tomography (PET/CT)-based medical applications, body-wide anatomy recognition on whole-body PET/CT images becomes crucial for quantifying body-wide disease burden. This, however, is a challenging problem and seldom studied due to unclear anatomy reference frame and low spatial resolution of PET images as well as low contrast and spatial resolution of the associated low-dose CT images. We previously developed an automatic anatomy recognition (AAR) system [15] whose applicability was demonstrated on diagnostic computed tomography (CT) and magnetic resonance (MR) images in different body regions on 35 objects. The aim of the present work is to investigate strategies for adapting the previous AAR system to low-dose CT and PET images toward automated body-wide disease quantification. Our adaptation of the previous AAR methodology to PET/CT images in this paper focuses on 16 objects in three body regions - thorax, abdomen, and pelvis - and consists of the following steps: collecting whole-body PET/CT images from existing patient image databases, delineating all objects in these images, modifying the previous hierarchical models built from diagnostic CT images to account for differences in appearance in low-dose CT and PET images, automatically locating objects in these images following object hierarchy, and evaluating performance. Our preliminary evaluations indicate that the performance of the AAR approach on low-dose CT images achieves object localization accuracy within about 2 voxels, which is comparable to the accuracies achieved on diagnostic contrast-enhanced CT images. Object recognition on low-dose CT images from PET/CT examinations without requiring diagnostic contrast-enhanced CT seems feasible.
Object recognition with hierarchical discriminant saliency networks.
Han, Sunhyoung; Vasconcelos, Nuno
2014-01-01
The benefits of integrating attention and object recognition are investigated. While attention is frequently modeled as a pre-processor for recognition, we investigate the hypothesis that attention is an intrinsic component of recognition and vice-versa. This hypothesis is tested with a recognition model, the hierarchical discriminant saliency network (HDSN), whose layers are top-down saliency detectors, tuned for a visual class according to the principles of discriminant saliency. As a model of neural computation, the HDSN has two possible implementations. In a biologically plausible implementation, all layers comply with the standard neurophysiological model of visual cortex, with sub-layers of simple and complex units that implement a combination of filtering, divisive normalization, pooling, and non-linearities. In a convolutional neural network implementation, all layers are convolutional and implement a combination of filtering, rectification, and pooling. The rectification is performed with a parametric extension of the now popular rectified linear units (ReLUs), whose parameters can be tuned for the detection of target object classes. This enables a number of functional enhancements over neural network models that lack a connection to saliency, including optimal feature denoising mechanisms for recognition, modulation of saliency responses by the discriminant power of the underlying features, and the ability to detect both feature presence and absence. In either implementation, each layer has a precise statistical interpretation, and all parameters are tuned by statistical learning. Each saliency detection layer learns more discriminant saliency templates than its predecessors and higher layers have larger pooling fields. This enables the HDSN to simultaneously achieve high selectivity to target object classes and invariance. 
The performance of the network in saliency and object recognition tasks is compared to those of models from the biological and computer vision literatures. This demonstrates benefits for all the functional enhancements of the HDSN, the class tuning inherent to discriminant saliency, and saliency layers based on templates of increasing target selectivity and invariance. Altogether, these experiments suggest that there are non-trivial benefits in integrating attention and recognition.
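The parametric extension of the ReLU mentioned above can be illustrated with a tunable threshold plus a sign flip that lets a unit respond to feature absence as well as presence; the exact parameterization below is a hypothetical sketch, not the HDSN's:

```python
import numpy as np

def parametric_relu(x, theta, sign=1.0):
    """Hypothetical parametric rectifier: `theta` shifts the activation
    threshold, and sign=-1.0 makes the unit respond to feature *absence*
    (strongly negative filter responses) instead of presence."""
    return np.maximum(sign * x - theta, 0.0)

x = np.linspace(-2, 2, 5)              # filter responses: [-2, -1, 0, 1, 2]
presence = parametric_relu(x, theta=0.5)           # fires when x > 0.5
absence = parametric_relu(x, theta=0.5, sign=-1.0) # fires when x < -0.5
```

Tuning `theta` per class is one simple way such a rectifier could be adapted to target object detection, in the spirit of the class-tuned units the paper describes.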
Graded effects in hierarchical figure-ground organization: reply to Peterson (1999).
Vecera, S P; O'Reilly, R C
2000-06-01
An important issue in vision research concerns the order of visual processing. S. P. Vecera and R. C. O'Reilly (1998) presented an interactive, hierarchical model that placed figure-ground segregation prior to object recognition. M. A. Peterson (1999) critiqued this model, arguing that because it used ambiguous stimulus displays, figure-ground processing did not precede object processing. In the current article, the authors respond to Peterson's (1999) interpretation of ambiguity in the model and her interpretation of what it means for figure-ground processing to come before object recognition. The authors argue that complete stimulus ambiguity is not critical to the model and that figure-ground precedes object recognition architecturally in the model. The arguments are supported with additional simulation results and an experiment, demonstrating that top-down inputs can influence figure-ground organization in displays that contain stimulus cues.
Prefrontal Engagement during Source Memory Retrieval Depends on the Prior Encoding Task
Kuo, Trudy Y.; Van Petten, Cyma
2008-01-01
The prefrontal cortex is strongly engaged by some, but not all, episodic memory tests. Prior work has shown that source recognition tests—those that require memory for conjunctions of studied attributes—yield deficient performance in patients with prefrontal damage and greater prefrontal activity in healthy subjects, as compared to simple recognition tests. Here, we tested the hypothesis that there is no intrinsic relationship between the prefrontal cortex and source memory, but that the prefrontal cortex is engaged by the demand to retrieve weakly encoded relationships. Subjects attempted to remember object/color conjunctions after an encoding task that focused on object identity alone, and an integrative encoding task that encouraged attention to object/color relationships. After the integrative encoding task, the late prefrontal brain electrical activity that typically occurs in source memory tests was eliminated. Earlier brain electrical activity related to successful recognition of the objects was unaffected by the nature of prior encoding. PMID:16839287
SEMI-SUPERVISED OBJECT RECOGNITION USING STRUCTURE KERNEL
Wang, Botao; Xiong, Hongkai; Jiang, Xiaoqian; Ling, Fan
2013-01-01
Object recognition is a fundamental problem in computer vision. Part-based models offer a sparse, flexible representation of objects, but suffer from difficulties in training and often use standard kernels. In this paper, we propose a positive definite kernel called the “structure kernel”, which measures the similarity of two part-based represented objects. The structure kernel has three terms: 1) the global term that measures the global visual similarity of two objects; 2) the part term that measures the visual similarity of corresponding parts; 3) the spatial term that measures the spatial similarity of the geometric configuration of parts. The contribution of this paper is to generalize the discriminant capability of local kernels to complex part-based object models. Experimental results show that the proposed kernel exhibits higher accuracy than state-of-the-art approaches using standard kernels. PMID:23666108
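The three-term structure can be sketched as a weighted sum of a global term, per-part terms, and a spatial-layout term; the field names, descriptors, and RBF base kernel below are illustrative choices, not the paper's:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian RBF base kernel (an illustrative choice of local kernel)."""
    return np.exp(-gamma * np.sum((np.asarray(a) - np.asarray(b)) ** 2))

def structure_kernel(obj1, obj2, weights=(1.0, 1.0, 1.0)):
    """Toy three-term part-based kernel: global appearance + per-part
    appearance + spatial configuration. Objects are dicts with a 'global'
    descriptor, a list of part descriptors, and part positions."""
    w_g, w_p, w_s = weights
    k_global = rbf(obj1['global'], obj2['global'])
    k_parts = sum(rbf(p, q) for p, q in zip(obj1['parts'], obj2['parts']))
    k_spatial = sum(rbf(p, q) for p, q in zip(obj1['positions'], obj2['positions']))
    return w_g * k_global + w_p * k_parts + w_s * k_spatial

o = {'global': [1, 0], 'parts': [[0.2], [0.8]], 'positions': [[0, 0], [1, 1]]}
print(structure_kernel(o, o))  # self-similarity: every RBF term equals 1 -> 5.0
```

Since each term is a (weighted) sum of positive definite RBF kernels, the combined kernel remains positive definite, which is what makes it usable in standard kernel machines such as SVMs.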
Social enrichment improves social recognition memory in male rats.
Toyoshima, Michimasa; Yamada, Kazuo; Sugita, Manami; Ichitani, Yukio
2018-05-01
The social environment is thought to have a strong impact on cognitive functions. In the present study, we investigated whether social enrichment could affect rats' memory ability using the "Different Objects Task (DOT)," in which the levels of memory load could be modulated by changing the number of objects to be remembered. In addition, we applied the DOT to a social discrimination task using unfamiliar conspecific juveniles instead of objects. Animals were housed in one of the three different housing conditions after weaning [postnatal day (PND) 21]: social-separated (1 per cage), standard (3 per cage), or social-enriched (10 per cage) conditions. The object and social recognition tasks were conducted on PND 60. In the sample phase, the rats were allowed to explore a field in which 3, 4, or 5 different, unfamiliar stimuli (conspecific juveniles through a mesh or objects) were presented. In the test phase conducted after a 5-min delay, social-separated rats were able to discriminate the novel conspecific from the familiar ones only under the condition in which three different conspecifics were presented; social-enriched rats managed to recognize the novel conspecific even under the condition of five different conspecifics. On the other hand, in the object recognition task, both social-separated and social-enriched rats were able to discriminate the novel object from the familiar ones under the condition of five different objects. These results suggest that social enrichment can enhance social, but not object, memory span.
Saneyoshi, Ayako; Michimata, Chikashi
2009-12-01
Participants performed two object-matching tasks for novel, non-nameable objects consisting of geons. For each original stimulus, two transformations were applied to create comparison stimuli. In the categorical transformation, a geon connected to geon A was moved to geon B. In the coordinate transformation, a geon connected to geon A was moved to a different position on geon A. The Categorical task consisted of the original and the categorically transformed objects. The Coordinate task consisted of the original and the coordinately transformed objects. The original object was presented to the central visual field, followed by a comparison object presented to the right or left visual half-fields (RVF and LVF). The results showed an RVF advantage for the Categorical task and an LVF advantage for the Coordinate task. The possibility that categorical and coordinate spatial processing subsystems would be basic computational elements for between- and within-category object recognition was discussed.
Mechanisms and Neural Basis of Object and Pattern Recognition: A Study with Chess Experts
ERIC Educational Resources Information Center
Bilalic, Merim; Langner, Robert; Erb, Michael; Grodd, Wolfgang
2010-01-01
Comparing experts with novices offers unique insights into the functioning of cognition, based on the maximization of individual differences. Here we used this expertise approach to disentangle the mechanisms and neural basis behind two processes that contribute to everyday expertise: object and pattern recognition. We compared chess experts and…
Ultra-Fast Object Recognition from Few Spikes
2005-07-01
Ultra-fast Object Recognition from Few Spikes. Chou Hung, Gabriel Kreiman, Tomaso Poggio & James J. DiCarlo. AI Memo 2005-022, July 2005 (CBCL Memo 253). The first two authors, Chou Hung and Gabriel Kreiman, contributed equally to this work. Supplementary material is available at http://ramonycajal.mit.edu/kreiman/resources/ultrafast/. This report describes research done at…
Orientation-Invariant Object Recognition: Evidence from Repetition Blindness
ERIC Educational Resources Information Center
Harris, Irina M.; Dux, Paul E.
2005-01-01
The question of whether object recognition is orientation-invariant or orientation-dependent was investigated using a repetition blindness (RB) paradigm. In RB, the second occurrence of a repeated stimulus is less likely to be reported, compared to the occurrence of a different stimulus, if it occurs within a short time of the first presentation.…
Computing with Connections in Visual Recognition of Origami Objects.
ERIC Educational Resources Information Center
Sabbah, Daniel
1985-01-01
Summarizes an initial foray in tackling artificial intelligence problems using a connectionist approach. The task chosen is visual recognition of Origami objects, and the questions answered are how to construct a connectionist network to represent and recognize projected Origami line drawings and the advantages such an approach would have. (30…
Developmental Trajectories of Part-Based and Configural Object Recognition in Adolescence
ERIC Educational Resources Information Center
Juttner, Martin; Wakui, Elley; Petters, Dean; Kaur, Surinder; Davidoff, Jules
2013-01-01
Three experiments assessed the development of children's part and configural (part-relational) processing in object recognition during adolescence. In total, 312 school children aged 7-16 years and 80 adults were tested in 3-alternative forced choice (3-AFC) tasks. They judged the correct appearance of upright and inverted presented familiar…
View Combination: A Generalization Mechanism for Visual Recognition
ERIC Educational Resources Information Center
Friedman, Alinda; Waller, David; Thrash, Tyler; Greenauer, Nathan; Hodgson, Eric
2011-01-01
We examined whether view combination mechanisms shown to underlie object and scene recognition can integrate visual information across views that have little or no three-dimensional information at either the object or scene level. In three experiments, people learned four "views" of a two dimensional visual array derived from a three-dimensional…
Developmental Changes in Visual Object Recognition between 18 and 24 Months of Age
ERIC Educational Resources Information Center
Pereira, Alfredo F.; Smith, Linda B.
2009-01-01
Two experiments examined developmental changes in children's visual recognition of common objects during the period of 18 to 24 months. Experiment 1 examined children's ability to recognize common category instances that presented three different kinds of information: (1) richly detailed and prototypical instances that presented both local and…
Word-to-picture recognition is a function of motor components mappings at the stage of retrieval.
Brouillet, Denis; Brouillet, Thibaut; Milhau, Audrey; Heurley, Loïc; Vagnot, Caroline; Brunel, Lionel
2016-10-01
Embodied approaches to cognition argue that retrieval involves the re-enactment of both sensory and motor components of the desired remembering. In this study, we investigated the effect of the motor action performed to produce the response in a recognition task when this action is compatible with the affordance of the objects that have to be recognised. In our experiment, participants were first asked to learn a list of words referring to graspable objects, and then told to make recognition judgements on pictures. The pictures represented objects whose graspable part pointed either to the same or to the opposite side as the "Yes" response key. Results show a robust effect of compatibility between object affordance and response hand. Moreover, this compatibility improves participants' discrimination ability, suggesting that motor components are a relevant cue for memory judgement at the stage of retrieval in a recognition task. More broadly, our data highlight that memory judgements are a function of motor component mappings at the stage of retrieval. © 2015 International Union of Psychological Science.
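Discrimination ability in recognition tasks like this one is conventionally quantified with the sensitivity index d′ from signal detection theory, computed from hit and false-alarm rates; a minimal sketch with illustrative rates:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate), the usual
    measure of discrimination ability in recognition experiments."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Illustrative rates: 85% hits on old items, 20% false alarms on new items.
print(round(d_prime(0.85, 0.20), 3))
```

Higher d′ means old and new items are better separated; a compatibility manipulation that raises d′ (rather than merely shifting response bias) indicates a genuine improvement in discrimination.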
Soulé, Jonathan; Penke, Zsuzsa; Kanhema, Tambudzai; Alme, Maria Nordheim; Laroche, Serge; Bramham, Clive R.
2008-01-01
Long-term recognition memory requires protein synthesis, but little is known about the coordinate regulation of specific genes. Here, we examined expression of the plasticity-associated immediate early genes (Arc, Zif268, and Narp) in the dentate gyrus following long-term object-place recognition learning in rats. RT-PCR analysis from dentate gyrus tissue collected shortly after training did not reveal learning-specific changes in Arc mRNA expression. In situ hybridization and immunohistochemistry were therefore used to assess possible sparse effects on gene expression. Learning about objects increased the density of granule cells expressing Arc, and to a lesser extent Narp, specifically in the dorsal blade of the dentate gyrus, while Zif268 expression was elevated across both blades. Thus, object-place recognition triggers rapid, blade-specific upregulation of plasticity-associated immediate early genes. Furthermore, Western blot analysis of dentate gyrus homogenates demonstrated concomitant upregulation of three postsynaptic density proteins (Arc, PSD-95, and α-CaMKII) with key roles in long-term synaptic plasticity and long-term memory. PMID:19190776
NASA Technical Reports Server (NTRS)
Hung, Stephen H. Y.
1989-01-01
A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm will speed up the processing by eliminating the low level processing whenever possible. It may identify the object, reject a set of bad data in the early stage, or create a better environment for a more powerful algorithm to carry the work further.
Optimization of Visual Information Presentation for Visual Prosthesis.
Guo, Fei; Yang, Yuan; Gao, Yong
2018-01-01
Visual prostheses applying electrical stimulation to restore visual function for the blind have promising prospects. However, due to the low resolution, limited visual field, and low dynamic range of the resulting visual perception, a huge loss of information occurs when presenting daily scenes. The ability to recognize objects in real-life scenarios is severely restricted for prosthetic users. To overcome these limitations, optimizing the visual information in simulated prosthetic vision has been a focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two strategies enable prosthetic implants to focus on the object of interest and suppress background clutter. Psychophysical experiments show that foreground zooming with background clutter removal and foreground edge detection with background reduction both have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming techniques, the two processing strategies significantly improve object recognition accuracy. We conclude that a visual prosthesis using our proposed strategies can help the blind improve their ability to recognize objects. The results will provide effective solutions for the further development of visual prostheses. PMID:29731769
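A "foreground edge detection with background reduction" strategy can be sketched as gradient-magnitude edges masked by a saliency map; in this toy version the mask is supplied by hand, whereas a real system would derive it from a salient-object detector:

```python
import numpy as np

def foreground_edges(image, mask):
    """Sketch of foreground edge detection with background reduction:
    compute gradient-magnitude edges, then keep them only inside the
    saliency mask so background clutter is zeroed out."""
    gy, gx = np.gradient(image.astype(float))
    edges = np.hypot(gx, gy)
    return edges * mask

img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0    # bright square as the "object"
mask = np.zeros((8, 8)); mask[1:7, 1:7] = 1.0  # saliency mask around it
out = foreground_edges(img, mask)
assert out[0, 0] == 0.0            # background clutter suppressed
assert out[2:6, 2:6].max() > 0.0   # object edges retained
```

Rendering only these masked edges onto the sparse electrode grid is one way to spend the prosthesis's very limited information budget on the object of interest.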
Moreno, Hayarelis C; de Brugada, Isabel; Carias, Diamela; Gallo, Milagros
2013-11-01
Choline is an essential nutrient required for early development. Previous studies have shown that prenatal choline availability influences adult memory abilities depending on medial temporal lobe integrity. The relevance of prenatal choline availability for object recognition memory was assessed in adult Wistar rats. Three groups of pregnant Wistar rats were fed from E12 to E18 with choline-deficient (0 g/kg choline chloride), standard (1.1 g/kg choline chloride), or choline-supplemented (5 g/kg choline chloride) diets. The offspring were cross-fostered to rat dams fed a standard diet during pregnancy and tested at the age of 3 months in an object recognition memory task, with retention tests 24 and 48 hours after acquisition. Although no significant differences were found in the performance of the three groups during the first retention test, the supplemented group exhibited improved memory compared with both the standard and the deficient groups in the second retention test, 48 hours after acquisition. In addition, at the second retention test the deficient group did not differ from chance. Taken together, the results support the notion of a long-lasting beneficial effect of prenatal choline supplementation on object recognition memory which becomes evident when the rats reach adulthood. The results are discussed in terms of their relevance for improving the understanding of cholinergic involvement in object recognition memory and the implications of maternal diet for lifelong cognitive abilities.
Three-dimensional model-based object recognition and segmentation in cluttered scenes.
Mian, Ajmal S; Bennamoun, Mohammed; Owens, Robyn
2006-10-01
Viewpoint independent recognition of free-form objects and their segmentation in the presence of clutter and occlusions is a challenging task. We present a novel 3D model-based algorithm which performs this task automatically and efficiently. A 3D model of an object is automatically constructed offline from its multiple unordered range images (views). These views are converted into multidimensional table representations (which we refer to as tensors). Correspondences are automatically established between these views by simultaneously matching the tensors of a view with those of the remaining views using a hash table-based voting scheme. This results in a graph of relative transformations used to register the views before they are integrated into a seamless 3D model. These models and their tensor representations constitute the model library. During online recognition, a tensor from the scene is simultaneously matched with those in the library by casting votes. Similarity measures are calculated for the model tensors which receive the most votes. The model with the highest similarity is transformed to the scene and, if it aligns accurately with an object in the scene, that object is declared as recognized and is segmented. This process is repeated until the scene is completely segmented. Experiments were performed on real and synthetic data comprising 55 models and 610 scenes, and an overall recognition rate of 95 percent was achieved. Comparison with spin images revealed that our algorithm is superior in terms of recognition rate and efficiency.
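The hash table-based voting scheme can be illustrated schematically: each descriptor (here collapsed to a short vector rather than a full multidimensional tensor) is quantized into a hash key, model descriptors are indexed by key, and a scene descriptor casts votes for every model sharing its bin. This is an illustrative sketch with hypothetical names, not the paper's tensor representation.

```python
from collections import defaultdict
import numpy as np

def build_hash_table(model_tensors, bin_width=0.25):
    """Index every model descriptor by a quantized key; each key maps
    to the (model_id, tensor_id) pairs that produced it."""
    table = defaultdict(list)
    for model_id, tensors in model_tensors.items():
        for t_id, tensor in enumerate(tensors):
            key = tuple(np.floor(tensor / bin_width).astype(int))
            table[key].append((model_id, t_id))
    return table

def vote(scene_tensor, table, bin_width=0.25):
    """Cast votes for all models whose descriptors fall in the same
    bin as the scene descriptor; return models sorted by vote count."""
    votes = defaultdict(int)
    key = tuple(np.floor(scene_tensor / bin_width).astype(int))
    for model_id, _ in table.get(key, []):
        votes[model_id] += 1
    return sorted(votes.items(), key=lambda kv: -kv[1])
```

The top-voted models would then go on to the similarity-and-alignment verification stage described in the abstract.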
Cadieu, Charles F.; Hong, Ha; Yamins, Daniel L. K.; Pinto, Nicolas; Ardila, Diego; Solomon, Ethan A.; Majaj, Najib J.; DiCarlo, James J.
2014-01-01
The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. A major difficulty in producing such a comparison accurately has been the lack of a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of “kernel analysis” that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds. PMID:25521294
Constraints in distortion-invariant target recognition system simulation
NASA Astrophysics Data System (ADS)
Iftekharuddin, Khan M.; Razzaque, Md A.
2000-11-01
Automatic target recognition (ATR) is a mature but active research area. In an earlier paper, we proposed a novel ATR approach for recognition of targets varying in fine details, rotation, and translation using a Learning Vector Quantization (LVQ) Neural Network (NN). The proposed approach performed segmentation of multiple objects and identification of the objects using the LVQ NN. In the current paper, we extend the previous approach to recognition of targets varying in rotation, translation, scale, and combinations of all three distortions. We obtain analytical results for the system-level design to show that the approach performs well under some constraints. The first constraint determines the size of the input images and input filters. The second constraint sets limits on the amount of rotation, translation, and scale of input objects. We present simulation verification of the constraints using DARPA's Moving and Stationary Target Acquisition and Recognition (MSTAR) images with different depression and pose angles. The simulation results using MSTAR images verify the analytical constraints of the system-level design.
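The classifier at the core of the approach is an LVQ network. A minimal sketch of the textbook LVQ1 rule (not the paper's full segmentation-plus-recognition pipeline; function names are hypothetical): the nearest prototype is pulled toward a sample of its own class and pushed away otherwise.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """LVQ1: move the nearest prototype toward same-class samples
    and away from different-class samples."""
    P = prototypes.copy().astype(float)
    for _ in range(epochs):
        for x, label in zip(X, y):
            i = np.argmin(((P - x) ** 2).sum(axis=1))  # nearest prototype
            step = lr * (x - P[i])
            P[i] += step if proto_labels[i] == label else -step
    return P

def lvq1_predict(X, P, proto_labels):
    """Label each sample with the class of its nearest prototype."""
    d = ((X[:, None, :] - P[None, :, :]) ** 2).sum(axis=2)
    return np.array([proto_labels[i] for i in d.argmin(axis=1)])
```

In an ATR setting, X would hold feature vectors extracted from segmented target chips rather than raw pixels.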
Nava-Mesa, Mauricio O; Lamprea, Marisol R; Múnera, Alejandro
2013-11-01
Acute stress induces short-term object recognition memory impairment and elicits endogenous opioid system activation. The aim of this study was thus to evaluate whether opiate system activation mediates the acute stress-induced object recognition memory changes. Adult male Wistar rats were trained in an object recognition task designed to test both short- and long-term memory. Subjects were randomly assigned to receive an intraperitoneal injection of saline, 1 mg/kg naltrexone or 3 mg/kg naltrexone, four and a half hours before the sample trial. Five minutes after the injection, half the subjects were submitted to movement restraint during four hours while the other half remained in their home cages. Non-stressed subjects receiving saline (control) performed adequately during the short-term memory test, while stressed subjects receiving saline displayed impaired performance. Naltrexone prevented such deleterious effect, in spite of the fact that it had no intrinsic effect on short-term object recognition memory. Stressed subjects receiving saline and non-stressed subjects receiving naltrexone performed adequately during the long-term memory test; however, control subjects as well as stressed subjects receiving a high dose of naltrexone performed poorly. Control subjects' dissociated performance during both memory tests suggests that the short-term memory test induced a retroactive interference effect mediated through light opioid system activation; such effect was prevented either by low dose naltrexone administration or by strongly activating the opioid system through acute stress. Both short-term memory retrieval impairment and long-term memory improvement observed in stressed subjects may have been mediated through strong opioid system activation, since they were prevented by high dose naltrexone administration. Therefore, the activation of the opioid system plays a dual modulating role in object recognition memory. Copyright © 2013 Elsevier Inc. 
All rights reserved.
A biologically inspired neural network model to transformation invariant object recognition
NASA Astrophysics Data System (ADS)
Iftekharuddin, Khan M.; Li, Yaqin; Siddiqui, Faraz
2007-09-01
Transformation invariant image recognition has been an active research area due to its widespread applications in a variety of fields such as military operations, robotics, medical practices, geographic scene analysis, and many others. The primary goal of this research is detection of objects in the presence of image transformations such as changes in resolution, rotation, translation, scale, and occlusion. We investigate a biologically inspired neural network (NN) model for such transformation-invariant object recognition. In a classical training-testing setup for a NN, performance is largely dependent on the range of transformation or orientation involved in training. An even more serious dilemma, however, is that there may not be enough training data available for successful learning, or even no training data at all. To alleviate this problem, a biologically inspired reinforcement learning (RL) approach is proposed. In this paper, the RL approach is explored for object recognition with different types of transformations such as changes in scale, size, resolution, and rotation. The RL is implemented in an adaptive critic design (ACD) framework, which approximates neuro-dynamic programming with an action network and a critic network. Two ACD algorithms, Heuristic Dynamic Programming (HDP) and Dual Heuristic Programming (DHP), are investigated to obtain transformation invariant object recognition. The two learning algorithms are evaluated statistically using simulated transformations in images as well as with a large-scale UMIST face database with pose variations. In the face database authentication case, the 90° out-of-plane rotation of faces from 20 different subjects in the UMIST database is used. Our simulations show promising results for both designs for transformation-invariant object recognition and authentication of faces.
Comparing the two algorithms, DHP outperforms HDP in learning capability, as DHP generally takes fewer steps to perform a successful recognition task. Further, the residual critic error in DHP is generally smaller than that of HDP, and DHP achieves a 100% success rate more frequently than HDP for individual objects/subjects. On the other hand, HDP is more robust than DHP in terms of success rate across the database when applied in a stochastic and uncertain environment, and DHP involves more computational time.
Young Children's Self-Generated Object Views and Object Recognition
ERIC Educational Resources Information Center
James, Karin H.; Jones, Susan S.; Smith, Linda B.; Swain, Shelley N.
2014-01-01
Two important and related developments in children between 18 and 24 months of age are the rapid expansion of object name vocabularies and the emergence of an ability to recognize objects from sparse representations of their geometric shapes. In the same period, children also begin to show a preference for planar views (i.e., views of objects held…
Han, Ren-Wen; Zhang, Rui-San; Xu, Hong-Jiao; Chang, Min; Peng, Ya-Li; Wang, Rui
2013-07-01
Neuropeptide S (NPS), the endogenous ligand of NPSR, has been shown to promote arousal and anxiolytic-like effects. Consistent with the predominant distribution of NPSR in brain tissues associated with learning and memory, NPS has been reported to modulate cognitive function in rodents. Here, we investigated the role of NPS in memory formation, and determined whether NPS could mitigate memory impairment induced by the selective N-methyl-D-aspartate receptor antagonist MK801, the muscarinic cholinergic receptor antagonist scopolamine, or Aβ₁₋₄₂ in mice, using novel object and object location recognition tasks. Intracerebroventricular (i.c.v.) injection of 1 nmol NPS 5 min after training not only facilitated object recognition memory formation, but also prolonged memory retention in both tasks. The improvement of object recognition memory induced by NPS could be blocked by the selective NPSR antagonist SHA 68, indicating pharmacological specificity. We then found that i.c.v. injection of NPS reversed memory disruption induced by MK801, scopolamine, or Aβ₁₋₄₂ in both tasks. In summary, our results indicate that NPS facilitates memory formation and prolongs the retention of memory through activation of the NPSR, and mitigates amnesia induced by blockade of the glutamatergic or cholinergic systems or by Aβ₁₋₄₂, suggesting that the NPS/NPSR system may be a new target for enhancing memory and treating amnesia. Copyright © 2013 Elsevier Ltd. All rights reserved.
Fleming, Stephen A; Dilger, Ryan N
2017-03-15
Novelty preference paradigms have been widely used to study recognition memory and its neural substrates. The piglet model continues to advance the study of neurodevelopment, and tasks that use novelty preference will be especially useful given their translatable nature to humans. However, there has been little use of this behavioral paradigm in the pig, and previous studies using the novel object recognition paradigm in piglets have yielded inconsistent results. The current study was conducted to determine whether piglets are capable of displaying a novelty preference. Here, a series of experiments was conducted using novel object or location recognition in 3- and 4-week-old piglets. In the novel object recognition task, piglets were able to discriminate between novel and sample objects after delays of 2 min, 1 h, 1 day, and 2 days (all P<0.039) at both ages. Performance was sex-dependent, as females could perform both 1- and 2-day delays (P<0.036) and males could perform the 2-day delay (P=0.008) but not the 1-day delay (P=0.347). Furthermore, 4-week-old piglets and females tended to exhibit greater exploratory behavior compared with males. Such performance did not extend to novel location recognition tasks, as piglets were only able to discriminate between novel and sample locations after a short delay (P>0.046). In conclusion, this study determined that piglets are able to perform the novel object and location recognition tasks at 3 to 4 weeks of age; however, performance was dependent on sex, age, and delay. Copyright © 2016 Elsevier B.V. All rights reserved.
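Discrimination in novelty preference tasks is conventionally quantified from exploration times with a discrimination index. The abstract does not give the formula, so the common form below is an assumption for illustration:

```python
def discrimination_index(t_novel, t_familiar):
    """Standard d2-style index for novel object recognition:
    (novel - familiar) / total exploration time, bounded in [-1, 1].
    Positive values indicate a novelty preference; 0 means chance."""
    total = t_novel + t_familiar
    if total == 0:
        raise ValueError("no exploration recorded")
    return (t_novel - t_familiar) / total
```

A one-sample test of the index against 0 (chance) across animals then yields P-values like those reported in the abstract.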
Remembering the snake in the grass: Threat enhances recognition but not source memory.
Meyer, Miriam Magdalena; Bell, Raoul; Buchner, Axel
2015-12-01
Research on the influence of emotion on source memory has yielded inconsistent findings. The object-based framework (Mather, 2007) predicts that negatively arousing stimuli attract attention, resulting in enhanced within-object binding, and, thereby, enhanced source memory for intrinsic context features of emotional stimuli. To test this prediction, we presented pictures of threatening and harmless animals, the color of which had been experimentally manipulated. In a memory test, old-new recognition for the animals and source memory for their color was assessed. In all 3 experiments, old-new recognition was better for the more threatening material, which supports previous reports of an emotional memory enhancement. This recognition advantage was due to the emotional properties of the stimulus material, and not specific to snake stimuli. However, inconsistent with the prediction of the object-based framework, intrinsic source memory was not affected by emotion. (c) 2015 APA, all rights reserved.
Change blindness and visual memory: visual representations get rich and act poor.
Varakin, D Alexander; Levin, Daniel T
2006-02-01
Change blindness is often taken as evidence that visual representations are impoverished, while successful recognition of specific objects is taken as evidence that they are richly detailed. In the current experiments, participants performed cover tasks that required each object in a display to be attended. Change detection trials were unexpectedly introduced and surprise recognition tests were given for nonchanging displays. For both change detection and recognition, participants had to distinguish objects from the same basic-level category, making it likely that specific visual information had to be used for successful performance. Although recognition was above chance, incidental change detection usually remained at floor. These results help reconcile demonstrations of poor change detection with demonstrations of good memory because they suggest that the capability to store visual information in memory is not reflected by the visual system's tendency to utilize these representations for purposes of detecting unexpected changes.
Identification and location of catenary insulator in complex background based on machine vision
NASA Astrophysics Data System (ADS)
Yao, Xiaotong; Pan, Yingli; Liu, Li; Cheng, Xiao
2018-04-01
Precisely locating the insulator is an important prerequisite for fault detection. Because current localization algorithms for insulators in catenary inspection images are not accurate, a target recognition and localization method based on binocular vision combined with SURF features is proposed. First, because the insulator sits in a complex environment, SURF features are used to achieve coarse positioning of the target. Then, the binocular vision principle is used to calculate the 3D coordinates of the coarsely located object, achieving recognition and fine localization of the target. Finally, the 3D coordinate of the object's center of mass is preserved and transferred to the inspection robot to control its detection position. Experimental results demonstrate that the proposed method offers better recognition efficiency and accuracy, successfully identifies the target, and has definite application value.
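The fine-localization step rests on standard binocular triangulation: once SURF matching pairs up the same point in the left and right images, depth follows from the disparity. A minimal sketch assuming a rectified pinhole stereo pair (all parameter names are hypothetical):

```python
def triangulate(xl, xr, y, f, baseline, cx, cy):
    """Rectified-stereo triangulation.
    xl, xr : column of the same point in the left/right image (pixels)
    y      : row of the point (same in both images when rectified)
    f      : focal length in pixels; baseline in metres
    cx, cy : principal point. Returns (X, Y, Z) in metres."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    Z = f * baseline / disparity      # depth from disparity
    X = (xl - cx) * Z / f             # lateral offset
    Y = (y - cy) * Z / f              # vertical offset
    return X, Y, Z
```

Averaging the triangulated SURF matches on the insulator gives the center-of-mass coordinate handed to the inspection robot.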
Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude
2016-01-01
The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain. PMID:27282108
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly; Reid, Max B.
1993-01-01
A higher-order neural network (HONN) can be designed to be invariant to changes in scale, translation, and inplane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Consequently, fewer training passes and a smaller training set are required to learn to distinguish between objects. The size of the input field is limited, however, because of the memory required for the large number of interconnections in a fully connected HONN. By coarse coding the input image, the input field size can be increased to allow the larger input scenes required for practical object recognition problems. We describe a coarse coding technique and present simulation results illustrating its usefulness and its limitations. Our simulations show that a third-order neural network can be trained to distinguish between two objects in a 4096 x 4096 pixel input field independent of transformations in translation, in-plane rotation, and scale in less than ten passes through the training set. Furthermore, we empirically determine the limits of the coarse coding technique in the object recognition domain.
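Coarse coding can be sketched as follows: the fine input is summarized by several overlapping coarse grids, each offset by one fine pixel, so that each field is far smaller than the input while the combination of coarse codes still localizes a feature. This is an illustrative reconstruction, not the authors' exact scheme:

```python
import numpy as np

def coarse_code(img, block, n_fields):
    """Summarize a binary image with n_fields overlapping coarse grids.
    Each grid pools block x block fine pixels (a coarse cell is active
    if any fine pixel inside it is set), and successive grids are
    offset by one fine pixel, so fine position is recoverable from the
    combination of coarse codes."""
    h, w = img.shape
    fields = []
    for off in range(n_fields):
        shifted = np.zeros((h + off, w + off), dtype=img.dtype)
        shifted[off:, off:] = img            # offset grid by `off` pixels
        ch = shifted.shape[0] // block
        cw = shifted.shape[1] // block
        trimmed = shifted[:ch * block, :cw * block]
        coarse = trimmed.reshape(ch, block, cw, block).max(axis=(1, 3))
        fields.append(coarse)
    return fields
```

Feeding the small coarse fields, rather than the full image, into the fully connected HONN is what makes 4096 x 4096 inputs tractable.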
Visual Object Pattern Separation Varies in Older Adults
ERIC Educational Resources Information Center
Holden, Heather M.; Toner, Chelsea; Pirogovsky, Eva; Kirwan, C. Brock; Gilbert, Paul E.
2013-01-01
Young and nondemented older adults completed a visual object continuous recognition memory task in which some stimuli (lures) were similar but not identical to previously presented objects. The lures were hypothesized to result in increased interference and increased pattern separation demand. To examine variability in object pattern separation…
ERIC Educational Resources Information Center
Bartko, Susan J.; Winters, Boyer D.; Cowell, Rosemary A.; Saksida, Lisa M.; Bussey, Timothy J.
2007-01-01
The perirhinal cortex (PRh) has a well-established role in object recognition memory. More recent studies suggest that PRh is also important for two-choice visual discrimination tasks. Specifically, it has been suggested that PRh contains conjunctive representations that help resolve feature ambiguity, which occurs when a task cannot easily be…
The Fundamentals of Thermal Imaging Systems.
1979-05-10
Detection, recognition, or identification of real scene objects are discussed. It is hoped that the material in the text will be useful to FLIR designers and evaluators.
ERIC Educational Resources Information Center
Pezze, Marie A.; Marshall, Hayley J.; Fone, Kevin C. F.; Cassaday, Helen J.
2017-01-01
Previous in vivo electrophysiological studies suggest that the anterior cingulate cortex (ACgx) is an important substrate of novel object recognition (NOR) memory. However, intervention studies are needed to confirm this conclusion and permanent lesion studies cannot distinguish effects on encoding and retrieval. The interval between encoding and…
AMPA Receptor Endocytosis in Rat Perirhinal Cortex Underlies Retrieval of Object Memory
ERIC Educational Resources Information Center
Cazakoff, Brittany N.; Howland, John G.
2011-01-01
Mechanisms consistent with long-term depression in the perirhinal cortex (PRh) play a fundamental role in object recognition memory; however, whether AMPA receptor endocytosis is involved in distinct phases of recognition memory is not known. To address this question, we used local PRh infusions of the cell membrane-permeable Tat-GluA2[subscript…
The role of line junctions in object recognition: The case of reading musical notation.
Wong, Yetta Kwailing; Wong, Alan C-N
2018-04-30
Previous work has shown that line junctions are informative features for visual perception of objects, letters, and words. However, the sources of such sensitivity and their generalizability to other object categories are largely unclear. We addressed these questions by studying perceptual expertise in reading musical notation, a domain in which individuals with different levels of expertise are readily available. We observed that removing line junctions created by the contact between musical notes and staff lines selectively impaired recognition performance in experts and intermediate readers, but not in novices. The degree of performance impairment was predicted by individual fluency in reading musical notation. Our findings suggest that line junctions provide diagnostic information about object identity across various categories, including musical notation. However, human sensitivity to line junctions does not readily transfer from familiar to unfamiliar object categories, and has to be acquired through perceptual experience with the specific objects.
The effect of implied orientation derived from verbal context on picture recognition.
Stanfield, R A; Zwaan, R A
2001-03-01
Perceptual symbol systems assume an analogue relationship between a symbol and its referent, whereas amodal symbol systems assume an arbitrary relationship between a symbol and its referent. According to perceptual symbol theories, the complete representation of an object, called a simulation, should reflect physical characteristics of the object. Amodal theories, in contrast, do not make this prediction. We tested the hypothesis, derived from perceptual symbol theories, that people mentally represent the orientation of an object implied by a verbal description. Orientation (vertical-horizontal) was manipulated by having participants read a sentence that implicitly suggested a particular orientation for an object. Then recognition latencies to pictures of the object in each of the two orientations were measured. Pictures matching the orientation of the object implied by the sentence were responded to faster than pictures that did not match the orientation. This finding is interpreted as offering support for theories positing perceptual symbol systems.
Foley, Nicholas C.; Grossberg, Stephen; Mingolla, Ennio
2015-01-01
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how “attentional shrouds” are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, through learning or momentary changes in volition, by the basal ganglia.
A new explanation of crowding shows how the cortical magnification factor, among other variables, can cause multiple object surfaces to share a single surface-shroud resonance, thereby preventing recognition of the individual objects. PMID:22425615
Pohit, M; Sharma, J
2015-05-10
Image recognition in the presence of both rotation and translation is a longstanding problem in correlation pattern recognition. Use of the log-polar transform gives a solution to this problem, but at the cost of losing the vital phase information in the image. The main objective of this paper is to develop an algorithm based on the Fourier slice theorem for measuring the simultaneous rotation and translation of an object in a 2D plane. The algorithm is applicable to arbitrary object shifts and to rotations over the full 180° range.
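For the translation component alone, the standard Fourier-domain tool is phase correlation: the normalized cross-power spectrum of two shifted images is a pure phase ramp whose inverse transform peaks at the shift. The paper's algorithm additionally recovers rotation via the Fourier slice theorem; this sketch covers only the translation part, and the function name is hypothetical.

```python
import numpy as np

def phase_correlation(a, b):
    """Return (dy, dx) such that b equals a cyclically shifted by
    (dy, dx). Works because the cross-power spectrum conj(A)*B,
    normalized to unit magnitude, inverse-transforms to a delta
    located at the shift."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = np.conj(A) * B
    R /= np.abs(R) + 1e-12            # keep phase, discard magnitude
    corr = np.fft.ifft2(R).real
    return np.unravel_index(corr.argmax(), corr.shape)
```

Crucially, this uses exactly the phase information that a log-polar magnitude representation throws away.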
Visual object recognition and tracking
NASA Technical Reports Server (NTRS)
Chang, Chu-Yin (Inventor); English, James D. (Inventor); Tardella, Neil M. (Inventor)
2010-01-01
This invention describes a method for identifying and tracking an object from two-dimensional data that pictorially represent the object. An object-tracking system processes the two-dimensional data using at least one tracker-identifier to provide an output signal containing: a) the type of the object, and/or b) the position or orientation of the object in three dimensions, and/or c) an articulation or shape change of the object in those three dimensions.
Comparing visual representations across human fMRI and computational vision
Leeds, Daniel D.; Seibert, Darren A.; Pyles, John A.; Tarr, Michael J.
2013-01-01
Feedforward visual object perception recruits a cortical network that is assumed to be hierarchical, progressing from basic visual features to complete object representations. However, the nature of the intermediate features related to this transformation remains poorly understood. Here, we explore how well different computer vision recognition models account for neural object encoding across the human cortical visual pathway as measured using fMRI. These neural data, collected during the viewing of 60 images of real-world objects, were analyzed with a searchlight procedure as in Kriegeskorte, Goebel, and Bandettini (2006): Within each searchlight sphere, the obtained patterns of neural activity for all 60 objects were compared to model responses for each computer recognition algorithm using representational dissimilarity analysis (Kriegeskorte et al., 2008). Although each of the computer vision methods significantly accounted for some of the neural data, among the different models, the scale invariant feature transform (Lowe, 2004), encoding local visual properties gathered from “interest points,” was best able to accurately and consistently account for stimulus representations within the ventral pathway. More generally, when present, significance was observed in regions of the ventral-temporal cortex associated with intermediate-level object perception. Differences in model effectiveness and the neural location of significant matches may be attributable to the fact that each model implements a different featural basis for representing objects (e.g., more holistic or more parts-based). Overall, we conclude that well-known computer vision recognition systems may serve as viable proxies for theories of intermediate visual object representation. PMID:24273227
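Representational dissimilarity analysis itself is compact to sketch: build a representational dissimilarity matrix (RDM) per representation, then compare two RDMs by rank correlation of their upper triangles. A minimal version (tie-averaging omitted from the rank step; function names are hypothetical):

```python
import numpy as np

def rdm(responses):
    """RDM: 1 - Pearson correlation between the response patterns for
    every pair of stimuli. responses: (n_stimuli, n_features) array."""
    return 1.0 - np.corrcoef(responses)

def _ranks(x):
    """Simple ranks (no tie averaging), sufficient for a sketch."""
    order = x.argsort()
    r = np.empty(len(x), dtype=float)
    r[order] = np.arange(len(x))
    return r

def rdm_similarity(rdm_a, rdm_b):
    """Second-order comparison of two representations: Spearman-style
    rank correlation of the RDMs' upper triangles."""
    iu = np.triu_indices_from(rdm_a, k=1)
    ra, rb = _ranks(rdm_a[iu]), _ranks(rdm_b[iu])
    return np.corrcoef(ra, rb)[0, 1]
```

In the searchlight procedure, `rdm_similarity` would be evaluated between each model's RDM and the RDM of voxel patterns within each sphere.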
The Color “Fruit”: Object Memories Defined by Color
Lewis, David E.; Pearson, Joel; Khuu, Sieu K.
2013-01-01
Most fruits and other highly color-diagnostic objects have color as a central aspect of their identity, which can facilitate detection and visual recognition. It has been theorized that there may be a large amount of overlap between the neural representations of these objects and processing involved in color perception. In accordance with this theory we sought to determine if the recognition of highly color-diagnostic fruit objects could be facilitated by the visual presentation of their known color associates. In two experiments we show that color-associate priming is possible, but contingent upon multiple factors. Color priming was found to be maximally effective for the most highly color-diagnostic fruits, when low spatial-frequency information was present in the image, and when determination of the object's specific identity, not merely its category, was required. These data illustrate the importance of color for determining the identity of certain objects, and support the theory that object knowledge involves sensory-specific systems. PMID:23717677
Data-centric method for object observation through scattering media
NASA Astrophysics Data System (ADS)
Tanida, Jun; Horisaki, Ryoichi
2018-03-01
A data-centric method is introduced for object observation through scattering media. A large number of training pairs are used to characterize the relation between the object and the observed signals based on machine learning. Using this method, object information can be retrieved even from strongly disturbed signals. As potential applications, object recognition, imaging, and focusing through scattering media are demonstrated.
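The idea of learning the object-to-observation relation from training pairs can be sketched with a toy linear scattering model and ridge regression; the abstract does not specify the learning method, so the medium model, sizes, and regularization below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: a fixed random scattering medium mixes a 64-pixel object
# into a 64-dimensional disturbed observation signal
n_pix, n_train = 64, 500
medium = rng.normal(size=(n_pix, n_pix))            # unknown scattering operator
objects = rng.uniform(size=(n_train, n_pix))        # training objects (flattened)
speckle = objects @ medium.T + 0.01 * rng.normal(size=(n_train, n_pix))

# Data-centric inversion: learn object <- observation by ridge regression
lam = 1e-3
A = speckle
W = np.linalg.solve(A.T @ A + lam * np.eye(n_pix), A.T @ objects)

# Recover an unseen object from its disturbed observation
test_obj = rng.uniform(size=n_pix)
test_speckle = test_obj @ medium.T + 0.01 * rng.normal(size=n_pix)
recovered = test_speckle @ W
err = np.linalg.norm(recovered - test_obj) / np.linalg.norm(test_obj)
print(f"relative reconstruction error: {err:.3f}")
```

The point is that the mapping is learned purely from data pairs, with no explicit model of the medium.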
Event-related potentials during word mapping to object shape predict toddlers' vocabulary size
Borgström, Kristina; Torkildsen, Janne von Koss; Lindgren, Magnus
2015-01-01
What role does attention to different object properties play in early vocabulary development? This longitudinal study using event-related potentials in combination with behavioral measures investigated 20- and 24-month-olds' (n = 38; n = 34; overlapping n = 24) ability to use object shape and object part information in word-object mapping. The N400 component was used to measure semantic priming by images containing shape or detail information. At 20 months, the N400 to words primed by object shape varied in topography and amplitude depending on vocabulary size, and these differences predicted productive vocabulary size at 24 months. At 24 months, when most of the children had vocabularies of several hundred words, the relation between vocabulary size and the N400 effect in a shape context was weaker. Detached object parts did not function as word primes regardless of age or vocabulary size, although the part-objects were identified behaviorally. The behavioral measure, however, also showed relatively poor recognition of the part-objects compared to the shape-objects. These three findings provide new support for the link between shape recognition and early vocabulary development. PMID:25762957
Chen, Yibing; Ogata, Taiki; Ueyama, Tsuyoshi; Takada, Toshiyuki; Ota, Jun
2018-01-01
Machine vision is playing an increasingly important role in industrial applications, and the automated design of image recognition systems has been a subject of intense research. This study proposes a system for automatically designing the field-of-view (FOV) of a camera, the illumination strength and the parameters of a recognition algorithm. We formulated the design problem as an optimisation problem and solved it with an experiment-based hierarchical algorithm. Evaluation experiments using translucent plastic objects showed that the proposed system found an effective solution with a wide FOV, recognition of all objects and maximal positional and angular errors of 0.32 mm and 0.4° when all three RGB (red, green and blue) channels were used for illumination and the R-channel image was used for recognition. Although full RGB illumination with grey-scale images also allowed recognition of all the objects, only a narrow FOV was selected. Moreover, full recognition was not achieved using only G illumination and a grey-scale image. The results showed that the proposed method can automatically design the FOV, illumination and recognition-algorithm parameters, and that tuning all three RGB illumination channels is desirable even when single-channel or grey-scale images are used for recognition. PMID:29786665
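The hierarchical design search (camera FOV and illumination first, then the recognition-algorithm parameters) can be sketched as a nested grid search over a hypothetical evaluation function; in the paper this evaluation is a physical recognition experiment, and every number below is illustrative:

```python
import itertools

# Hypothetical evaluation of one candidate design. In the paper this step is
# an actual recognition experiment; here a toy surrogate returns the number
# of objects recognized (out of 10) for a given illumination and threshold.
def evaluate(fov, illum_rgb, threshold):
    r, g, b = illum_rgb
    brightness = 0.5 * r + 0.3 * g + 0.2 * b      # toy imaging model
    if abs(brightness - threshold) < 0.2:         # well-tuned algorithm
        recognized = 10
    else:
        recognized = min(int(8 * brightness), 10)
    return recognized, fov

best = None
# Hierarchical search: camera FOV and illumination first, then the threshold
for fov, illum in itertools.product([20, 40, 60], [(1, 1, 1), (1, 0, 0), (0, 1, 0)]):
    for threshold in [0.2, 0.4, 0.6, 0.8]:
        n_rec, width = evaluate(fov, illum, threshold)
        score = (n_rec, width)      # prefer full recognition, then a wide FOV
        if best is None or score > best[0]:
            best = (score, fov, illum, threshold)

print("best design (fov, illumination, threshold):", best[1:])
```

The lexicographic score mirrors the paper's preference: recognize all objects first, then maximize the FOV.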
NASA Astrophysics Data System (ADS)
Millán, María S.
2012-10-01
On the verge of the 50th anniversary of Vander Lugt’s formulation for pattern matching based on matched filtering and optical correlation, we acknowledge the very intense research activity developed in the field of correlation-based pattern recognition during this period of time. The paper reviews some domains that appeared as emerging fields in the last years of the 20th century and have been developed later on in the 21st century. Such is the case of three-dimensional (3D) object recognition, biometric pattern matching, optical security and hybrid optical-digital processors. 3D object recognition is a challenging case of multidimensional image recognition because of its implications in the recognition of real-world objects independent of their perspective. Biometric recognition is essentially pattern recognition for which the personal identification is based on the authentication of a specific physiological characteristic possessed by the subject (e.g. fingerprint, face, iris, retina, and multifactor combinations). Biometric recognition often appears combined with encryption-decryption processes to secure information. The optical implementations of correlation-based pattern recognition processes still rely on the 4f-correlator, the joint transform correlator, or some of their variants. But the many applications developed in the field have been pushing the systems for a continuous improvement of their architectures and algorithms, thus leading towards merged optical-digital solutions.
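The matched-filter correlation at the heart of the 4f and joint transform correlators has a standard digital analogue: multiply the scene spectrum by the conjugate of the template spectrum and look for a correlation peak. A small sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Scene: a noisy image containing the reference pattern at a known offset
scene = 0.1 * rng.normal(size=(64, 64))
ref = rng.uniform(size=(8, 8))        # reference object (matched-filter template)
scene[30:38, 20:28] += ref            # embed the target at row 30, col 20

# Digital analogue of the 4f matched-filter correlator:
# correlation = IFFT( FFT(scene) * conj(FFT(template)) )
F_scene = np.fft.fft2(scene)
F_ref = np.fft.fft2(ref - ref.mean(), s=scene.shape)  # zero-mean, zero-padded
corr = np.fft.ifft2(F_scene * np.conj(F_ref)).real

peak = np.unravel_index(np.argmax(corr), corr.shape)
print("correlation peak at", peak)    # coincides with the embedded target
```

Subtracting the template mean suppresses the DC response, the same role the high-pass character of an optical matched filter plays.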
Soh, Harold; Demiris, Yiannis
2014-01-01
Human beings not only possess the remarkable ability to distinguish objects through tactile feedback but are further able to improve their recognition competence through experience. In this work, we explore tactile-based object recognition with learners capable of incremental learning. Using the sparse online infinite Echo-State Gaussian process (OIESGP), we propose and compare two novel discriminative and generative tactile learners that produce probability distributions over objects during object grasping/palpation. To enable iterative improvement, our online methods incorporate training samples as they become available. We also describe incremental unsupervised learning mechanisms, based on novelty scores and extreme value theory, for when teacher labels are not available. We present experimental results for both supervised and unsupervised learning tasks using the iCub humanoid, with tactile sensors on its five-fingered anthropomorphic hand, and 10 different object classes. Our classifiers perform comparably to state-of-the-art methods (C4.5 and SVM classifiers), and our findings indicate that tactile signals are highly relevant for making accurate object classifications. We also show that accurate "early" classifications are possible using only 20-30 percent of the grasp sequence. For unsupervised learning, our methods generate high-quality clusterings relative to the widely used sequential k-means and self-organising map (SOM), and we present analyses of the differences between the approaches.
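The incremental-learning idea (incorporate labelled samples as they arrive, and flag novel objects when no known class is close) can be sketched with a running-mean classifier; this is a deliberately simple stand-in, not the paper's OIESGP, and the features and novelty threshold are illustrative:

```python
import numpy as np

class IncrementalNearestMean:
    """Simple stand-in for an incremental tactile learner: per-class running
    means with a novelty score. Not the paper's OIESGP, just the idea of
    updating from samples as they become available."""

    def __init__(self, novelty_threshold=2.0):
        self.means = {}       # label -> running mean feature vector
        self.counts = {}
        self.threshold = novelty_threshold

    def update(self, x, label):
        # Incorporate a new labelled sample incrementally (no retraining)
        if label not in self.means:
            self.means[label] = np.array(x, dtype=float)
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            self.means[label] += (x - self.means[label]) / self.counts[label]

    def predict(self, x):
        # Nearest class, or None ("novel object") if every class is far away
        if not self.means:
            return None
        dists = {lbl: np.linalg.norm(x - m) for lbl, m in self.means.items()}
        best = min(dists, key=dists.get)
        return best if dists[best] < self.threshold else None

clf = IncrementalNearestMean()
rng = np.random.default_rng(3)
for _ in range(20):                    # streaming samples from two object classes
    clf.update(rng.normal([0, 0], 0.3), "sponge")
    clf.update(rng.normal([5, 5], 0.3), "can")
print(clf.predict(np.array([4.9, 5.1])))    # near the "can" cluster
print(clf.predict(np.array([20.0, -9.0])))  # far from both: treated as novel
```

The novelty branch plays the role the paper assigns to novelty scores when teacher labels are absent.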
Yuan, Tao; Zheng, Xinqi; Hu, Xuan; Zhou, Wei; Wang, Wei
2014-01-01
Objective and effective image quality assessment (IQA) is directly related to the application of optical remote sensing images (ORSI). In this study, a new IQA method is presented that uses the target object recognition rate (ORR) as a standardized measure of quality. First, several quality-degradation treatments are applied to high-resolution ORSIs to model ORSIs obtained under different imaging conditions; then, a machine learning algorithm is used in recognition experiments on a chosen target object to obtain ORRs; finally, a comparison with commonly used IQA indicators is performed to reveal their applicability and limitations. The ORR of the original ORSI was up to 81.95%, whereas the ORR ratios of the quality-degraded images to the original images were 65.52%, 64.58%, 71.21%, and 73.11%. These data reflect the advantages and disadvantages of different images for object identification and information extraction more accurately than conventional digital image assessment indexes. By assessing image quality from the perspective of application effect, using a machine learning algorithm to extract regional grey-scale features of typical objects in the image, and quantitatively scoring ORSI quality according to the resulting differences, this method provides a new approach for objective ORSI assessment.
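The quality index implied by the abstract, the ratio of a degraded image's ORR to the reference image's ORR, can be written directly; the degraded-image rates below are hypothetical, chosen only to reproduce one of the quoted ratios:

```python
def orr_quality_index(orr_degraded, orr_reference):
    """Image quality as the ratio of target-object recognition rates:
    degraded-image ORR relative to the reference-image ORR."""
    return orr_degraded / orr_reference

orr_ref = 0.8195                    # recognition rate on the original ORSI (abstract)
degraded = {"blur": 0.537, "noise": 0.529}   # hypothetical degraded-image ORRs
ratios = {k: orr_quality_index(v, orr_ref) for k, v in degraded.items()}
print({k: round(v, 4) for k, v in ratios.items()})
```

A ratio near 1.0 means degradation barely affects the application task, which is exactly the application-centric notion of quality the study proposes.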
Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images.
Udupa, Jayaram K; Odhner, Dewey; Zhao, Liming; Tong, Yubing; Matsumoto, Monica M S; Ciesielski, Krzysztof C; Falcao, Alexandre X; Vaideeswaran, Pavithra; Ciesielski, Victoria; Saboury, Babak; Mohammadianrasanani, Syedmehrdad; Sin, Sanghun; Arens, Raanan; Torigian, Drew A
2014-07-01
To make Quantitative Radiology (QR) a reality in radiological practice, computerized body-wide Automatic Anatomy Recognition (AAR) becomes essential. With the goal of building a general AAR system that is not tied to any specific organ system, body region, or image modality, this paper presents an AAR methodology for localizing and delineating all major organs in different body regions based on fuzzy modeling ideas and a tight integration of fuzzy models with an Iterative Relative Fuzzy Connectedness (IRFC) delineation algorithm. The methodology consists of five main steps: (a) gathering image data for both building models and testing the AAR algorithms from patient image sets existing in our health system; (b) formulating precise definitions of each body region and organ and delineating them following these definitions; (c) building hierarchical fuzzy anatomy models of organs for each body region; (d) recognizing and locating organs in given images by employing the hierarchical models; and (e) delineating the organs following the hierarchy. In Step (c), we explicitly encode object size and positional relationships into the hierarchy and subsequently exploit this information in object recognition in Step (d) and delineation in Step (e). Modality-independent and dependent aspects are carefully separated in model encoding. At the model building stage, a learning process is carried out for rehearsing an optimal threshold-based object recognition method. The recognition process in Step (d) starts from large, well-defined objects and proceeds down the hierarchy in a global to local manner. A fuzzy model-based version of the IRFC algorithm is created by naturally integrating the fuzzy model constraints into the delineation algorithm. The AAR system is tested on three body regions - thorax (on CT), abdomen (on CT and MRI), and neck (on MRI and CT) - involving a total of over 35 organs and 130 data sets (the total used for model building and testing). 
The training and testing data sets are of equal size in all cases except for the neck. Overall the AAR method achieves a mean accuracy of about 2 voxels in localizing non-sparse blob-like objects and most sparse tubular objects. The delineation accuracy in terms of mean false positive and negative volume fractions is 2% and 8%, respectively, for non-sparse objects, and 5% and 15%, respectively, for sparse objects. The two object groups achieve mean boundary distances relative to ground truth of 0.9 and 1.5 voxels, respectively. Some sparse objects - the venous system (in the thorax on CT), inferior vena cava (in the abdomen on CT), and mandible and naso-pharynx (in the neck on MRI, but not on CT) - pose challenges at all levels, leading to poor recognition and/or delineation results. The AAR method fares quite favorably when compared with methods from the recent literature for liver, kidneys, and spleen on CT images. We conclude that separation of modality-independent from modality-dependent aspects, organization of objects in a hierarchy, explicit encoding of object relationship information into the hierarchy, optimal threshold-based recognition learning, and fuzzy model-based IRFC are effective concepts that allowed us to demonstrate the feasibility of a general AAR system that works in different body regions, on a variety of organs, and on different modalities. Copyright © 2014 Elsevier B.V. All rights reserved.
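The global-to-local recognition order in Step (d), in which each object is located relative to its already-recognized parent in the hierarchy, can be sketched as a tree traversal; the organ names and offsets below are illustrative, and the fuzzy-model matching is reduced to a fixed offset from the parent:

```python
# Toy body-region hierarchy: large, well-defined objects first, children
# searched relative to their parent's estimated position (names illustrative).
hierarchy = {
    "thorax": ["lungs", "heart"],
    "lungs": ["airways"],
    "heart": [],
    "airways": [],
}
# Stand-in for the encoded positional relationships in the fuzzy models
offsets = {"thorax": (0, 0, 0), "lungs": (1, 0, 0),
           "heart": (0, 1, 0), "airways": (0, 0, 1)}

def recognize(node, parent_pos=(0, 0, 0), found=None):
    """Top-down recognition: locate a node from its parent, then recurse."""
    if found is None:
        found = []
    pos = tuple(p + o for p, o in zip(parent_pos, offsets[node]))
    found.append((node, pos))
    for child in hierarchy[node]:
        recognize(child, pos, found)
    return found

order = recognize("thorax")
for name, pos in order:
    print(f"{name} located at {pos}")
```

The traversal guarantees the parent's location is available before any child is searched, which is the point of the hierarchy.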
Pereira, Luciana M; Bastos, Cristiane P; de Souza, Jéssica M; Ribeiro, Fabíola M; Pereira, Grace S
2014-10-01
In rodents, 17β-estradiol (E2) enhances hippocampal function and improves performance in several memory tasks. Regarding the object recognition paradigm, E2 commonly acts as a cognitive enhancer. However, the types of estrogen receptor (ER) involved, as well as the underlying molecular mechanisms, are still under investigation. In the present study, we asked whether E2 enhances object recognition memory by activating ERα and/or ERβ in the hippocampus of Swiss female mice. First, we showed that immediately post-training intraperitoneal (i.p.) injection of E2 (0.2 mg/kg) allowed object recognition memory to persist 48 h in ovariectomized (OVX) Swiss female mice. This result indicates that Swiss female mice are sensitive to the promnesic effects of E2 and is in accordance with other studies, which used C57/BL6 female mice. To verify if the activation of hippocampal ERα or ERβ would be sufficient to improve object memory, we used PPT and DPN, which are selective ERα and ERβ agonists, respectively. We found that PPT, but not DPN, improved object memory in Swiss female mice. However, DPN was able to improve memory in C57/BL6 female mice, which is in accordance with other studies. Next, we tested if the E2 effect on improving object memory depends on ER activation in the hippocampus. Thus, we tested whether intra-hippocampal infusion of TPBM and PHTPP, selective antagonists of ERα and ERβ, respectively, would block the memory enhancement effect of E2. Our results showed that TPBM, but not PHTPP, blunted the promnesic effect of E2, strongly suggesting that in Swiss female mice, the ERα and not the ERβ is the receptor involved in the promnesic effect of E2. It has already been demonstrated that E2, as well as PPT and DPN, increases the phospho-ERK2 level in the dorsal hippocampus of C57/BL6 mice. Here we observed that PPT increased phospho-ERK1, while DPN decreased phospho-ERK2 in the dorsal hippocampus of Swiss female mice subjected to the object recognition sample phase.
Taken together, our results suggest that the type of receptor as well as the molecular mechanism used by E2 to improve object memory may differ in Swiss female mice. Copyright © 2014 Elsevier Inc. All rights reserved.
Transient increase in Zn2+ in hippocampal CA1 pyramidal neurons causes reversible memory deficit.
Takeda, Atsushi; Takada, Shunsuke; Nakamura, Masatoshi; Suzuki, Miki; Tamano, Haruna; Ando, Masaki; Oku, Naoto
2011-01-01
The translocation of synaptic Zn2+ to the cytosolic compartment has been studied to understand Zn2+ neurotoxicity in neurological diseases. However, it is unknown whether a moderate increase in Zn2+ in the cytosolic compartment affects memory processing in the hippocampus. In the present study, a moderate increase in cytosolic Zn2+ in the hippocampus was induced with clioquinol (CQ), a zinc ionophore. Zn2+ delivery by Zn-CQ transiently attenuated CA1 long-term potentiation (LTP) in hippocampal slices prepared 2 h after i.p. injection of Zn-CQ into rats, when intracellular Zn2+ levels were transiently increased in the CA1 pyramidal cell layer, followed by an object recognition memory deficit. Object recognition memory was transiently impaired 30 min after injection of ZnCl2 into the CA1, but not after injection into the dentate gyrus, which did not significantly increase intracellular Zn2+ in the granule cell layer of the dentate gyrus. The object recognition memory deficit may be linked to the preferential increase in Zn2+ and/or the preferential vulnerability to Zn2+ in CA1 pyramidal neurons. Furthermore, when the cytosolic increase in endogenous Zn2+ in the CA1 was induced by 100 mM KCl, object recognition memory was also transiently impaired, an impairment ameliorated by co-injection of CaEDTA to block the increase in cytosolic Zn2+. The present study indicates that a transient increase in cytosolic Zn2+ in CA1 pyramidal neurons reversibly impairs object recognition memory.
Christie, Lori-Ann; Saunders, Richard C.; Kowalska, Danuta M.; MacKay, William A.; Head, Elizabeth; Cotman, Carl W.; Milgram, Norton W.
2014-01-01
To examine the effects of rhinal and dorsolateral prefrontal cortex lesions on object and spatial recognition memory in canines, we used a protocol in which both an object (delayed non-matching to sample, or DNMS) and a spatial (delayed non-matching to position, or DNMP) recognition task were administered daily. The tasks used similar procedures such that only the type of stimulus information to be remembered differed. Rhinal cortex (RC) lesions produced a selective deficit on the DNMS task, both in retention of the task rules at short delays and in object recognition memory. By contrast, performance on the DNMP task remained intact at both short and long delay intervals in RC animals. Subjects who received dorsolateral prefrontal cortex (dlPFC) lesions were impaired on the spatial task at a short, 5-sec delay, suggesting disrupted retention of the general task rules; however, this impairment was transient, and long-term spatial memory performance was unaffected in dlPFC subjects. The present results provide support for the involvement of the RC in object, but not visuospatial, processing and recognition memory, whereas the dlPFC appears to mediate retention of a non-matching rule. These findings support theories of functional specialization within the medial temporal lobe and frontal cortex and suggest that rhinal and dorsolateral prefrontal cortices in canines are functionally similar to analogous regions in other mammals. PMID:18792072
Ballesteros, Soledad; Mayas, Julia
2014-01-01
In the present study, we investigated the effects of selective attention at encoding on conceptual object priming (Experiment 1) and old-new recognition memory (Experiment 2) tasks in young and older adults. The procedures of both experiments included encoding and memory test phases separated by a short delay. At encoding, the picture outlines of two familiar objects, one in blue and the other in green, were presented to the left and to the right of fixation. In Experiment 1, participants were instructed to attend to the picture outline of a certain color and to classify the object as natural or artificial. After a short delay, participants performed a natural/artificial speeded conceptual classification task with repeated attended, repeated unattended, and new pictures. In Experiment 2, participants at encoding memorized the attended pictures and classified them as natural or artificial. After the encoding phase, they performed an old-new recognition memory task. Consistent with previous findings with perceptual priming tasks, we found that conceptual object priming, like explicit memory, required attention at encoding. Significant priming was obtained in both age groups, but only for those pictures that were attended at encoding. Although older adults were slower than young adults, both groups showed facilitation for attended pictures. In line with previous studies, young adults had better recognition memory than older adults.
Symbolic Play Connects to Language through Visual Object Recognition
ERIC Educational Resources Information Center
Smith, Linda B.; Jones, Susan S.
2011-01-01
Object substitutions in play (e.g. using a box as a car) are strongly linked to language learning and their absence is a diagnostic marker of language delay. Classic accounts posit a symbolic function that underlies both words and object substitutions. Here we show that object substitutions depend on developmental changes in visual object…
How Does Using Object Names Influence Visual Recognition Memory?
ERIC Educational Resources Information Center
Richler, Jennifer J.; Palmeri, Thomas J.; Gauthier, Isabel
2013-01-01
Two recent lines of research suggest that explicitly naming objects at study influences subsequent memory for those objects at test. Lupyan (2008) suggested that naming "impairs" memory by a representational shift of stored representations of named objects toward the prototype (labeling effect). MacLeod, Gopie, Hourihan, Neary, and Ozubko (2010)…
Objects predict fixations better than early saliency.
Einhäuser, Wolfgang; Spain, Merrielle; Perona, Pietro
2008-11-20
Humans move their eyes while looking at scenes and pictures. Eye movements correlate with shifts in attention and are thought to be a consequence of optimal resource allocation for high-level tasks such as visual recognition. Models of attention, such as "saliency maps," are often built on the assumption that "early" features (color, contrast, orientation, motion, and so forth) drive attention directly. We explore an alternative hypothesis: Observers attend to "interesting" objects. To test this hypothesis, we measure the eye position of human observers while they inspect photographs of common natural scenes. Our observers perform different tasks: artistic evaluation, analysis of content, and search. Immediately after each presentation, our observers are asked to name objects they saw. Weighted with recall frequency, these objects predict fixations in individual images better than early saliency, irrespective of task. Also, saliency combined with object positions predicts which objects are frequently named. This suggests that early saliency has only an indirect effect on attention, acting through recognized objects. Consequently, rather than treating attention as a mere preprocessing step for object recognition, models of both need to be integrated.
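Scoring how well a saliency map (or an object-based map) predicts fixations is commonly done with ROC AUC over fixated versus non-fixated pixels; the sketch below shows the metric on synthetic maps, not the paper's exact analysis:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy maps over a 32x32 image: a saliency map and a binary fixation map
saliency = rng.uniform(size=(32, 32))
fixated = np.zeros((32, 32), dtype=bool)
fixated[rng.integers(0, 32, 15), rng.integers(0, 32, 15)] = True
saliency[fixated] += 0.5              # make fixated locations somewhat salient

def fixation_auc(sal_map, fix_map):
    """ROC AUC: probability that a random fixated pixel outscores a random
    non-fixated pixel under the candidate map (chance level is 0.5)."""
    pos, neg = sal_map[fix_map], sal_map[~fix_map]
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return wins + 0.5 * ties

auc = fixation_auc(saliency, fixated)
print(f"AUC: {auc:.3f}")
```

Comparing the AUC of an early-saliency map against that of a recalled-object map is the kind of contrast the study reports.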
Detailed 3D representations for object recognition and modeling.
Zia, M Zeeshan; Stark, Michael; Schiele, Bernt; Schindler, Konrad
2013-11-01
Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching.
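The kind of output such a representation supports, a continuous pose plus projected part positions rather than a box, can be sketched with a pinhole projection of hypothetical 3D part landmarks under an assumed pose; the landmarks, pose, and focal length are all illustrative:

```python
import numpy as np

def rotation_y(theta):
    """Rotation about the vertical (y) axis, e.g. a car seen from the side."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

# Hypothetical part landmarks (x, y, z) in object coordinates, e.g. wheel centers
parts = np.array([[1.0, 0.0, 2.0], [-1.0, 0.0, 2.0],
                  [1.0, 0.0, -2.0], [-1.0, 0.0, -2.0]])

# Continuous pose estimate: 30 degrees of yaw, 10 units in front of the camera
pose_R, pose_t = rotation_y(np.deg2rad(30)), np.array([0.0, 0.0, 10.0])
focal = 500.0

cam = parts @ pose_R.T + pose_t           # object -> camera coordinates
img = focal * cam[:, :2] / cam[:, 2:3]    # pinhole projection to pixel offsets
print(np.round(img, 1))
```

Projected part positions like these give the "3D wireframe" hypotheses the paper contrasts with plain bounding boxes.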
Automatic target recognition apparatus and method
Baumgart, Chris W.; Ciarcia, Christopher A.
2000-01-01
An automatic target recognition apparatus (10) is provided, having a video camera/digitizer (12) for producing a digitized image signal (20) representing an image containing objects, which objects are to be recognized if they meet predefined criteria. The digitized image signal (20) is processed within a video analysis subroutine (22) residing in a computer (14) in a plurality of parallel analysis chains, such that the objects are presumed to be lighter in shading than the background in three of the chains and darker than the background in the other three chains. In two of the chains the objects are defined by surface texture analysis using texture filter operations. In another two of the chains the objects are defined by background subtraction operations. In yet another two of the chains the objects are defined by edge enhancement processes. In each of the analysis chains a calculation operation independently determines an error factor relating to the probability that the objects are of the type which should be recognized, and a probability calculation operation combines the results of the analysis chains.
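One way the six parallel chains' error factors might be fused into a single recognition probability is sketched below; the patent does not specify the combination rule, so the geometric-mean rule and the numbers are assumptions:

```python
# Sketch of fusing the parallel analysis chains: each chain (texture,
# background subtraction, edge enhancement, each in light/dark polarity)
# yields an error factor in [0, 1]; lower means more confident.
def combine_chains(error_factors):
    """Combine per-chain error factors into one recognition confidence via
    the geometric mean of per-chain confidences (an assumed rule)."""
    factors = list(error_factors)
    p = 1.0
    for err in factors:
        p *= max(0.0, 1.0 - err)
    return p ** (1.0 / len(factors))

chains = {
    "texture/light": 0.10, "texture/dark": 0.80,
    "background/light": 0.15, "background/dark": 0.75,
    "edges/light": 0.20, "edges/dark": 0.70,
}
p = combine_chains(chains.values())
print(f"combined recognition confidence: {p:.2f}")
```

Here the light-polarity chains agree (the object is lighter than the background), while the dark-polarity chains report high error, as the patent's opposed presumptions would produce.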
A rodent model for the study of invariant visual object recognition
Zoccolan, Davide; Oertelt, Nadja; DiCarlo, James J.; Cox, David D.
2009-01-01
The human visual system is able to recognize objects despite tremendous variation in their appearance on the retina resulting from variation in view, size, lighting, etc. This ability—known as “invariant” object recognition—is central to visual perception, yet its computational underpinnings are poorly understood. Traditionally, nonhuman primates have been the animal model-of-choice for investigating the neuronal substrates of invariant recognition, because their visual systems closely mirror our own. Meanwhile, simpler and more accessible animal models such as rodents have been largely overlooked as possible models of higher-level visual functions, because their brains are often assumed to lack advanced visual processing machinery. As a result, little is known about rodents' ability to process complex visual stimuli in the face of real-world image variation. In the present work, we show that rats possess more advanced visual abilities than previously appreciated. Specifically, we trained pigmented rats to perform a visual task that required them to recognize objects despite substantial variation in their appearance, due to changes in size, view, and lighting. Critically, rats were able to spontaneously generalize to previously unseen transformations of learned objects. These results provide the first systematic evidence for invariant object recognition in rats and argue for an increased focus on rodents as models for studying high-level visual processing. PMID:19429704
ERIC Educational Resources Information Center
Bukach, Cindy M.; Bub, Daniel N.; Masson, Michael E. J.; Lindsay, D. Stephen
2004-01-01
Studies of patients with category-specific agnosia (CSA) have given rise to multiple theories of object recognition, most of which assume the existence of a stable, abstract semantic memory system. We applied an episodic view of memory to questions raised by CSA in a series of studies examining normal observers' recall of newly learned attributes…
ERIC Educational Resources Information Center
Brown, Malcolm Watson; Warburton, Elizabeth Clea; Barker, Gareth Robert Isaac; Bashir, Zafar Iqbal
2006-01-01
Recognition memory, involving the ability to discriminate between a novel and familiar object, depends on the integrity of the perirhinal cortex (PRH). Glutamate, the main excitatory neurotransmitter in the cortex, is essential for many types of memory processes. Of the subtypes of glutamate receptor, metabotropic receptors (mGluRs) have received…
The Development of Adaptive Decision Making: Recognition-Based Inference in Children and Adolescents
ERIC Educational Resources Information Center
Horn, Sebastian S.; Ruggeri, Azzurra; Pachur, Thorsten
2016-01-01
Judgments about objects in the world are often based on probabilistic information (or cues). A frugal judgment strategy that utilizes memory (i.e., the ability to discriminate between known and unknown objects) as a cue for inference is the recognition heuristic (RH). The usefulness of the RH depends on the structure of the environment,…
An Approach to Object Recognition: Aligning Pictorial Descriptions.
1986-12-01
Shimon Ullman, "An Approach to Object Recognition: Aligning Pictorial Descriptions," A.I. Memo No. 931, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, December 1986.
ERIC Educational Resources Information Center
Chapman, Michael
1987-01-01
Explores development of cognitive representation in 20 infants 12 to 24 months of age with regard to (1) their understanding of agency in symbolic play (agent use), (2) recognition of their own mirror image, and (3) object permanence. Results were generally consistent with developmental sequences predicted by Fischer's Skill Theory for agent use…
ERIC Educational Resources Information Center
Fortress, Ashley M.; Fan, Lu; Orr, Patrick T.; Zhao, Zaorui; Frick, Karyn M.
2013-01-01
The mammalian target of rapamycin (mTOR) signaling pathway is an important regulator of protein synthesis and is essential for various forms of hippocampal memory. Here, we asked whether the enhancement of object recognition memory consolidation produced by dorsal hippocampal infusion of 17β-estradiol (E2) is dependent on mTOR…
NASA Astrophysics Data System (ADS)
Cyganek, Boguslaw; Smolka, Bogdan
2015-02-01
In this paper a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is done by pattern projection into the tensor subspaces obtained from the factorization of the signal tensors representing the input signal. However, instead of taking only the intensity signal, the novelty of this paper is to first build an Extended Structural Tensor representation from the intensity signal that conveys information on signal intensities, as well as on higher-order statistics of the input signals. In this way, higher-order input pattern tensors are built from the training samples. Then, the tensor subspaces are built based on the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measuring the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Due to the high dimensionality of the input data, tensor-based methods require substantial memory and computational resources. However, recent advances in multi-core microprocessors and graphics cards allow real-time operation of these multidimensional methods, as is shown and analyzed in this paper on real examples of object detection in digital images.
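The projection-distance idea can be illustrated with a deliberately simplified sketch: instead of a full HOSVD on higher-order tensors, each class subspace here is spanned by the leading left singular vectors of that class's vectorized training patterns, and a test pattern is assigned to the class whose subspace reconstructs it with the smallest residual. The function names and toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def class_basis(samples, rank):
    """Orthonormal basis for one class: leading left singular vectors
    of the matrix whose columns are the vectorized training patterns."""
    X = np.stack([s.ravel() for s in samples], axis=1)   # (d, n_samples)
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :rank]                                   # (d, rank)

def residual(pattern, basis):
    """Distance from a test pattern to its projection onto the subspace."""
    x = pattern.ravel()
    proj = basis @ (basis.T @ x)
    return np.linalg.norm(x - proj)

def classify(pattern, bases):
    """Assign the class whose subspace reconstructs the pattern best."""
    return min(bases, key=lambda label: residual(pattern, bases[label]))
```

A pattern drawn near one class's training examples lies almost inside that class's subspace, so its residual there is small while remaining large against the other classes' subspaces.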
Online Feature Transformation Learning for Cross-Domain Object Category Recognition.
Zhang, Xuesong; Zhuang, Yan; Wang, Wei; Pedrycz, Witold
2017-06-09
In this paper, we introduce a new research problem termed online feature transformation learning in the context of multiclass object category recognition. The learning of a feature transformation is viewed as learning a global similarity metric function in an online manner. We first consider the problem of online learning a feature transformation matrix expressed in the original feature space and propose an online passive-aggressive feature transformation algorithm. Then these original features are mapped to kernel space and an online single kernel feature transformation (OSKFT) algorithm is developed to learn a nonlinear feature transformation. Based on the OSKFT and the existing Hedge algorithm, a novel online multiple kernel feature transformation algorithm is also proposed, which can further improve the performance of online feature transformation learning in large-scale applications. The classifier is trained with the k nearest neighbor algorithm together with the learned similarity metric function. Finally, we experimentally examined the effect of setting different parameter values in the proposed algorithms and evaluated the model performance on several multiclass object recognition data sets. The experimental results demonstrate the validity and good performance of our methods on cross-domain and multiclass object recognition applications.
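The passive-aggressive flavor of such an update can be sketched as follows. This is a minimal, hypothetical bilinear-similarity version in the spirit of the PA-I update, not the authors' algorithm: on each labeled pair, the similarity matrix W is left unchanged ("passive") when the margin is satisfied, and otherwise moved just far enough ("aggressive") to satisfy it.

```python
import numpy as np

def pa_similarity_update(W, x, z, y, aggressiveness=1.0):
    """One passive-aggressive update of a bilinear similarity s(x,z) = x.T @ W @ z.
    y = +1 for a same-class pair, -1 for a different-class pair."""
    score = x @ W @ z
    loss = max(0.0, 1.0 - y * score)             # hinge loss with unit margin
    if loss > 0.0:
        norm2 = (x @ x) * (z @ z)                # ||x z^T||_F^2
        tau = min(aggressiveness, loss / norm2)  # PA-I step size, capped by C
        W = W + tau * y * np.outer(x, z)         # smallest change fixing the margin
    return W
```

With the cap inactive, the update drives the hinge loss on the current pair exactly to zero while changing W as little as possible in Frobenius norm.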
Automatic image database generation from CAD for 3D object recognition
NASA Astrophysics Data System (ADS)
Sardana, Harish K.; Daemi, Mohammad F.; Ibrahim, Mohammad K.
1993-06-01
The development and evaluation of multiple-view 3-D object recognition systems is based on a large set of model images. Due to the various advantages of using CAD, it is becoming more and more practical to use existing CAD data in computer vision systems. Current PC-level CAD systems are capable of providing physical image modelling and rendering involving positional variations in cameras, light sources, etc. We have formulated a modular scheme for automatic generation of various aspects (views) of the objects in a model-based 3-D object recognition system. These views are generated at desired orientations on the unit Gaussian sphere. With a suitable network file sharing system (NFS), the images can directly be stored on a database located on a file server. This paper presents the image modelling solutions using CAD in relation to the multiple-view approach. Our modular scheme for data conversion and automatic image database storage for such a system is discussed. We have used this approach in 3-D polyhedron recognition. An overview of the results, the advantages and limitations of using CAD data, and conclusions drawn from using such a scheme are also presented.
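Generating views at desired orientations on the unit Gaussian sphere can be sketched by sampling roughly uniform viewing directions; the Fibonacci-spiral construction below is one common choice and is an assumption for illustration, not the paper's actual sampling scheme.

```python
import math

def gaussian_sphere_views(n):
    """Roughly uniform viewing directions on the unit sphere
    (Fibonacci spiral): one camera direction per desired aspect."""
    golden = math.pi * (3.0 - math.sqrt(5.0))   # golden-angle increment
    views = []
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n           # latitude term in (-1, 1)
        r = math.sqrt(1.0 - y * y)              # radius of that latitude circle
        theta = golden * i                       # longitude spirals around
        views.append((r * math.cos(theta), y, r * math.sin(theta)))
    return views
```

Each returned triple is a unit vector; a renderer would place the CAD camera along each direction to produce one model image for the database.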
Kim, Min-Beom; Chung, Won-Ho; Choi, Jeesun; Hong, Sung Hwa; Cho, Yang-Sun; Park, Gyuseok; Lee, Sangmin
2014-06-01
The objective was to evaluate speech perception improvement through Bluetooth-implemented hearing aids in hearing-impaired adults. Thirty subjects with bilateral symmetric moderate sensorineural hearing loss participated in this study. A Bluetooth-implemented hearing aid was fitted unilaterally in all study subjects. Objective speech recognition score and subjective satisfaction were measured with a Bluetooth-implemented hearing aid to replace the acoustic connection from either a cellular phone or a loudspeaker system. In each system, participants were assigned to 4 conditions: wireless speech signal transmission into the hearing aid (wireless mode) in a quiet or noisy environment, and conventional speech signal transmission using the external microphone of the hearing aid (conventional mode) in a quiet or noisy environment. Participants also completed questionnaires to assess subjective satisfaction. In both the cellular phone and loudspeaker system situations, participants showed improvements in sentence and word recognition scores with wireless mode compared to conventional mode in both quiet and noise conditions (P < .001). Participants also reported subjective improvements, including better sound quality, less noise interference, and better accuracy and naturalness, when using the wireless mode (P < .001). Bluetooth-implemented hearing aids helped to improve subjective and objective speech recognition performance in quiet and noisy environments during the use of electronic audio devices.
Comparing object recognition from binary and bipolar edge images for visual prostheses.
Jung, Jae-Hyun; Pu, Tian; Peli, Eli
2016-11-01
Visual prostheses require an effective representation method due to their limited display conditions, which offer only 2 or 3 levels of grayscale at low resolution. Edges derived from abrupt luminance changes in images carry essential information for object recognition. Typical binary (black and white) edge images have been used to represent features to convey essential information. However, in scenes with a complex cluttered background, the recognition rate of the binary edge images by human observers is limited and additional information is required. The polarity of edges and cusps (black or white features on a gray background) carries important additional information; the polarity may provide shape-from-shading information missing in the binary edge image. This depth information may be restored by using bipolar edges. We compared object recognition rates from 16 binary edge images and bipolar edge images by 26 subjects to determine the possible impact of bipolar filtering in visual prostheses with 3 or more levels of grayscale. Recognition rates were higher with bipolar edge images and the improvement was significant in scenes with complex backgrounds. The results also suggest that erroneous shape-from-shading interpretation of bipolar edges resulting from pigment rather than boundaries of shape may confound recognition.
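The distinction between the two representations can be sketched with a toy edge filter. Here a simple 4-neighbour Laplacian stands in for whatever filter the authors actually used (an assumption): binary edges keep only edge magnitude at 2 levels, while bipolar edges keep the sign at 3 levels, preserving the polarity information the abstract describes.

```python
import numpy as np

def edge_response(img):
    """Simple 4-neighbour Laplacian as a stand-in for the edge filter."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * img)

def binary_edges(img, t=10.0):
    """2-level representation: edge (1) vs background (0)."""
    return (np.abs(edge_response(img)) > t).astype(int)

def bipolar_edges(img, t=10.0):
    """3-level representation: white cusp (+1), gray background (0),
    black cusp (-1); keeps the polarity lost by binary thresholding."""
    r = edge_response(img)
    return np.where(r > t, 1, np.where(r < -t, -1, 0))
```

On a step edge, the binary image marks both sides identically, while the bipolar image marks the dark side +1 and the bright side -1, which is the extra cue available on a 3-grayscale-level display.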
Exploiting core knowledge for visual object recognition.
Schurgin, Mark W; Flombaum, Jonathan I
2017-03-01
Humans recognize thousands of objects, and with relative tolerance to variable retinal inputs. The acquisition of this ability is not fully understood, and it remains an area in which artificial systems have yet to surpass people. We sought to investigate the memory process that supports object recognition. Specifically, we investigated the association of inputs that co-occur over short periods of time. We tested the hypothesis that human perception exploits expectations about object kinematics to limit the scope of association to inputs that are likely to have the same token as a source. In several experiments we exposed participants to images of objects, and we then tested recognition sensitivity. Using motion, we manipulated whether successive encounters with an image took place through kinematics that implied the same or a different token as the source of those encounters. Images were injected with noise, or shown at varying orientations, and we included 2 manipulations of motion kinematics. Across all experiments, memory performance was better for images that had been previously encountered with kinematics that implied a single token. A model-based analysis similarly showed greater memory strength when images were shown via kinematics that implied a single token. These results suggest that constraints from physics are built into the mechanisms that support memory about objects. Such constraints, often characterized as "Core Knowledge," are known to support perception and cognition broadly, even in young infants. But they have never been considered as a mechanism for memory with respect to recognition.
Object and event recognition for stroke rehabilitation
NASA Astrophysics Data System (ADS)
Ghali, Ahmed; Cunningham, Andrew S.; Pridmore, Tony P.
2003-06-01
Stroke is a major cause of disability and health care expenditure around the world. Existing stroke rehabilitation methods can be effective but are costly and need to be improved. Even modest improvements in the effectiveness of rehabilitation techniques could produce large benefits in terms of quality of life. The work reported here is part of an ongoing effort to integrate virtual reality and machine vision technologies to produce innovative stroke rehabilitation methods. We describe a combined object recognition and event detection system that provides real time feedback to stroke patients performing everyday kitchen tasks necessary for independent living, e.g. making a cup of coffee. The image plane position of each object, including the patient's hand, is monitored using histogram-based recognition methods. The relative positions of hand and objects are then reported to a task monitor that compares the patient's actions against a model of the target task. A prototype system has been constructed and is currently undergoing technical and clinical evaluation.
ERIC Educational Resources Information Center
Fazl, Arash; Grossberg, Stephen; Mingolla, Ennio
2009-01-01
How does the brain learn to recognize an object from multiple viewpoints while scanning a scene with eye movements? How does the brain avoid the problem of erroneously classifying parts of different objects together? How are attention and eye movements intelligently coordinated to facilitate object learning? A neural model provides a unified…
Intrinsic and contextual features in object recognition.
Schlangen, Derrick; Barenholtz, Elan
2015-01-28
The context in which an object is found can facilitate its recognition. Yet, it is not known how effective this contextual information is relative to the object's intrinsic visual features, such as color and shape. To address this, we performed four experiments using rendered scenes with novel objects. In each experiment, participants first performed a visual search task, searching for a uniquely shaped target object whose color and location within the scene was experimentally manipulated. We then tested participants' tendency to use their knowledge of the location and color information in an identification task when the objects' images were degraded due to blurring, thus eliminating the shape information. In Experiment 1, we found that, in the absence of any diagnostic intrinsic features, participants identified objects based purely on their locations within the scene. In Experiment 2, we found that participants combined an intrinsic feature, color, with contextual location in order to uniquely specify an object. In Experiment 3, we found that when an object's color and location information were in conflict, participants identified the object using both sources of information equally. Finally, in Experiment 4, we found that participants used whichever source of information, either color or location, was more statistically reliable in order to identify the target object. Overall, these experiments show that the context in which objects are found can play as important a role as intrinsic features in identifying the objects.
The roles of scene priming and location priming in object-scene consistency effects
Heise, Nils; Ansorge, Ulrich
2014-01-01
Presenting consistent objects in scenes facilitates object recognition as compared to inconsistent objects. Yet the mechanisms by which scenes influence object recognition are still not understood. According to one theory, consistent scenes facilitate visual search for objects at expected places. Here, we investigated two predictions following from this theory: if visual search is responsible for consistency effects, consistency effects could be weaker (1) with better-primed than less-primed object locations, and (2) with less-primed than better-primed scenes. In Experiments 1 and 2, locations of objects were varied within a scene to a different degree (one, two, or four possible locations). In addition, object-scene consistency was studied as a function of progressive numbers of repetitions of the backgrounds. Because repeating locations and backgrounds could facilitate visual search for objects, these repetitions might alter the object-scene consistency effect by lowering location uncertainty. Although we find evidence for a significant consistency effect, we find no clear support for impacts of scene priming or location priming on the size of the consistency effect. Additionally, we find evidence that the consistency effect is dependent on the eccentricity of the target objects. These results point to only small influences of priming on object-scene consistency effects, but, all in all, the findings can be reconciled with a visual-search explanation of the consistency effect. PMID:24910628
Object recognition in images via a factor graph model
NASA Astrophysics Data System (ADS)
He, Yong; Wang, Long; Wu, Zhaolin; Zhang, Haisu
2018-04-01
Object recognition in images suffers from a huge search space and uncertain object profiles. Recently, Bag-of-Words methods have been utilized to solve these problems, especially the 2-dimensional CRF (Conditional Random Field) model. In this paper we propose a method based on a general and flexible factor graph model, which can capture long-range correlations among Bag-of-Words features by constructing a network learning framework, in contrast to the lattice structure of a CRF. Furthermore, we explore a parameter learning algorithm based on gradient descent and the Loopy Sum-Product algorithm for the factor graph model. Experimental results on the Graz 02 dataset show that the recognition performance of our method in precision and recall is better than a state-of-the-art method and the original CRF model, demonstrating the effectiveness of the proposed method.
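The sum-product computation that underlies such factor graph models can be sketched on a minimal two-variable graph, where a single message pass reproduces the exact marginal; the numbers below are made up for illustration only.

```python
import numpy as np

# Minimal sum-product sketch on a two-variable factor graph:
# unary factors f1(x1), f2(x2) and one pairwise factor g(x1, x2).
f1 = np.array([0.6, 0.4])          # unary evidence for binary variable x1
f2 = np.array([0.3, 0.7])          # unary evidence for binary variable x2
g = np.array([[0.9, 0.1],          # compatibility table g[x1, x2]
              [0.2, 0.8]])

# Message from factor g to variable x2: sum over x1 of
# g(x1, x2) times the incoming message from x1 (its unary factor).
m_g_to_x2 = g.T @ f1

# Marginal of x2 = its own unary factor times the incoming message,
# normalized to sum to 1.
p_x2 = f2 * m_g_to_x2
p_x2 /= p_x2.sum()
```

On a tree like this, one pass gives exact marginals; the "loopy" variant referenced in the abstract simply iterates the same message equations on graphs with cycles, where the result is approximate.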
Hierarchical Context Modeling for Video Event Recognition.
Wang, Xiaoyang; Ji, Qiang
2016-10-11
Current video event recognition research remains largely target-centered. For real-world surveillance videos, target-centered event recognition faces great challenges due to large intra-class target variation, limited image resolution, and poor detection and tracking results. To mitigate these challenges, we introduce a context-augmented video event recognition approach. Specifically, we explicitly capture different types of contexts from three levels including image level, semantic level, and prior level. At the image level, we introduce two types of contextual features including the appearance context features and interaction context features to capture the appearance of context objects and their interactions with the target objects. At the semantic level, we propose a deep model based on the deep Boltzmann machine to learn event object representations and their interactions. At the prior level, we utilize two types of prior-level contexts including scene priming and dynamic cueing. Finally, we introduce a hierarchical context model that systematically integrates the contextual information at different levels. Through the hierarchical context model, contexts at different levels jointly contribute to the event recognition. We evaluate the hierarchical context model for event recognition on benchmark surveillance video datasets. Results show that incorporating contexts in each level can improve event recognition performance, and jointly integrating three levels of contexts through our hierarchical model achieves the best performance.
NASA Astrophysics Data System (ADS)
Marra, Kyle; Graham, Brett; Carouso, Samantha; Cox, David
2012-02-01
While the application of local cortical cooling has recently become a focus of neurological research, extended localized deactivation deep within brain structures is still unexplored. Using a wirelessly controlled thermoelectric (Peltier) device and water-based heat sink, we have achieved inactivating temperatures (<20 °C) at greater depths (>8 mm) than previously reported. After implanting the device into Long Evans rats' basolateral amygdala (BLA), an inhibitory brain center that controls anxiety and fear, we ran an open field test during which anxiety-driven behavioral tendencies were observed to decrease during cooling, thus confirming the device's effect on behavior. Our device will next be implanted in the rats' temporal association cortex (TeA) and recordings from our signal-tracing multichannel microelectrodes will measure and compare activated and deactivated neuronal activity so as to isolate and study the TeA signals responsible for object recognition. Having already achieved a top performing computational face-recognition system, the lab will utilize this TeA activity data to generalize its computational efforts of face recognition to achieve general object recognition.
Shi, Hai-Shui; Yin, Xi; Song, Li; Guo, Qing-Jun; Luo, Xiang-Heng
2012-02-01
Accumulating evidence has implicated neuropeptides in modulating recognition, learning and memory. However, to date, no study has investigated the effects of the neuropeptide trefoil factor 3 (TFF3) on learning and memory. In the present study, we evaluated the acute effects of TFF3 administration (0.1 and 0.5 mg/kg, i.p.) on the acquisition and retention of object recognition memory in mice. We found that TFF3 administration significantly enhanced both short-term and long-term memory during the retention tests, conducted 90 min and 24 h after training, respectively. Remarkably, acute TFF3 administration transformed a learning event that would not normally result in long-term memory into one retained for a long-term period, and produced no effect on locomotor activity in mice. In conclusion, the present results indicate an important role for TFF3 in improving object recognition memory and preserving it for a longer time, which suggests a potential therapeutic application for diseases with recognition and memory impairment.
Image-based automatic recognition of larvae
NASA Astrophysics Data System (ADS)
Sang, Ru; Yu, Guiying; Fan, Weijun; Guo, Tiantai
2010-08-01
To date, quarantine pest recognition research has focused on imagoes (adult insects) as its main objects. However, pests in their larval stage are latent, and larvae spread abroad easily with the circulation of agricultural and forest products. In this paper, larvae are taken as new research objects and recognized by means of machine vision, image processing and pattern recognition. More visual information is preserved and the recognition rate is improved when color image segmentation is applied to images of larvae. Owing to its affine invariance, perspective invariance and brightness invariance, the scale-invariant feature transform (SIFT) is adopted for feature extraction. A neural network algorithm is utilized for pattern recognition, and automatic identification of larvae images is successfully achieved with satisfactory results.
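The final classification stage can be sketched with a minimal single-layer softmax network trained by gradient descent on precomputed descriptor vectors. The feature vectors below are made-up stand-ins, since actual SIFT extraction (and the authors' specific network) is beyond the scope of a sketch.

```python
import numpy as np

def train_softmax(X, y, classes, lr=0.5, epochs=200):
    """Minimal single-layer softmax net trained by gradient descent,
    standing in for the paper's neural-network recognizer.
    X holds one descriptor vector per larva image (hypothetical features)."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(X.shape[1], classes))
    Y = np.eye(classes)[y]                          # one-hot targets
    for _ in range(epochs):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)           # softmax probabilities
        W -= lr * X.T @ (p - Y) / len(X)            # cross-entropy gradient step
    return W

def predict(W, X):
    """Class label = index of the largest logit per row."""
    return (X @ W).argmax(axis=1)
```

A real pipeline would feed SIFT descriptors (or histograms of them) in place of the toy vectors, and typically add hidden layers; the training loop is otherwise the same idea.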
NASA Astrophysics Data System (ADS)
Syryamkim, V. I.; Kuznetsov, D. N.; Kuznetsova, A. S.
2018-05-01
Image recognition is an information process implemented by some information converter (an intelligent information channel, or recognition system) having an input and an output. The input of the system is fed with information about the characteristics of the objects being presented. The output of the system reports which classes (generalized images) the recognized objects are assigned to. When an automated pattern recognition system is created and operated, a number of problems must be solved; different authors formulate these tasks differently, and even the set of tasks varies, since it depends to some extent on the specific mathematical model on which a given recognition system is based. These tasks include formalizing the domain, forming a training sample, training the recognition system, and reducing the dimensionality of the feature space.
Semantic memory in object use.
Silveri, Maria Caterina; Ciccarelli, Nicoletta
2009-10-01
We studied five patients with semantic memory disorders, four with semantic dementia and one with herpes simplex virus encephalitis, to investigate the involvement of semantic conceptual knowledge in object use. Comparisons between patients who had semantic deficits of different severity, as well as the follow-up, showed that the ability to use objects was largely preserved when the deficit was mild but progressively decayed as the deficit became more severe. Naming was generally more impaired than object use. Production tasks (pantomime execution and actual object use) and comprehension tasks (pantomime recognition and action recognition) as well as functional knowledge about objects were impaired when the semantic deficit was severe. Semantic and unrelated errors were produced during object use, but actions were always fluent and patients performed normally on a novel tools task in which the semantic demand was minimal. Patients with severe semantic deficits scored borderline on ideational apraxia tasks. Our data indicate that functional semantic knowledge is crucial for using objects in a conventional way and suggest that non-semantic factors, mainly non-declarative components of memory, might compensate to some extent for semantic disorders and guarantee some residual ability to use very common objects independently of semantic knowledge.
Mathematics Education: Student Terminal Goals, Program Goals, and Behavioral Objectives.
ERIC Educational Resources Information Center
Mesa Public Schools, AZ.
Behavioral objectives are listed for the primary, intermediate and junior high mathematics curriculum in the Mesa Public Schools (Arizona). Lists of specific objectives are given by level for sets, symbol recognition, number operations, mathematical structures, measurement and problem solving skills. (JP)
An Intelligent Systems Approach to Automated Object Recognition: A Preliminary Study
Maddox, Brian G.; Swadley, Casey L.
2002-01-01
Attempts at fully automated object recognition systems have met with varying levels of success over the years. However, none of the systems have achieved high enough accuracy rates to be run unattended. One of the reasons for this may be that they are designed from the computer's point of view and rely mainly on image-processing methods. A better solution to this problem may be to make use of modern advances in computational intelligence and distributed processing to try to mimic how the human brain is thought to recognize objects. As humans combine cognitive processes with detection techniques, such a system would combine traditional image-processing techniques with computer-based intelligence to determine the identity of various objects in a scene.
Learning to distinguish similar objects
NASA Astrophysics Data System (ADS)
Seibert, Michael; Waxman, Allen M.; Gove, Alan N.
1995-04-01
This paper describes how the similarities and differences among similar objects can be discovered during learning to facilitate recognition. The application domain is single views of flying model aircraft captured in silhouette by a CCD camera. The approach was motivated by human psychovisual and monkey neurophysiological data. The implementation uses neural net processing mechanisms to build a hierarchy that relates similar objects to superordinate classes, while simultaneously discovering the salient differences between objects within a class. Learning and recognition experiments both with and without the class similarity and difference learning show the effectiveness of the approach on this visual data. To test the approach, the hierarchical approach was compared to a non-hierarchical approach, and was found to improve the average percentage of correctly classified views from 77% to 84%.
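The coarse-to-fine use of a class hierarchy can be sketched as follows. This is a hypothetical nearest-prototype / nearest-exemplar scheme, not the paper's neural implementation: superordinate classes (shared similarities) are matched first, and the within-class exemplar differences are consulted only afterwards.

```python
import numpy as np

def build_hierarchy(exemplars):
    """exemplars: {class_name: {object_name: feature_vector}}.
    The superordinate level stores one mean ('prototype') per class;
    the subordinate level keeps the within-class exemplars, whose
    deviations from the prototype carry the discriminative detail."""
    prototypes = {c: np.mean(list(objs.values()), axis=0)
                  for c, objs in exemplars.items()}
    return prototypes, exemplars

def recognize(x, prototypes, exemplars):
    """Coarse-to-fine: pick the nearest class prototype first, then
    the nearest exemplar within that class only."""
    c = min(prototypes, key=lambda k: np.linalg.norm(x - prototypes[k]))
    o = min(exemplars[c], key=lambda k: np.linalg.norm(x - exemplars[c][k]))
    return c, o
```

The payoff of the hierarchy is that the fine (within-class) comparison only runs against a handful of similar exemplars, mirroring the paper's finding that separating class similarities from within-class differences helps recognition.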
Data-driven indexing mechanism for the recognition of polyhedral objects
NASA Astrophysics Data System (ADS)
McLean, Stewart; Horan, Peter; Caelli, Terry M.
1992-02-01
This paper is concerned with the problem of searching large model databases. To date, most object recognition systems have concentrated on the problem of matching using simple searching algorithms. This is quite acceptable when the number of object models is small. However, in the future, general purpose computer vision systems will be required to recognize hundreds or perhaps thousands of objects and, in such circumstances, efficient searching algorithms will be needed. The problem of searching a large model database is one which must be addressed if future computer vision systems are to be at all effective. In this paper we present a method we call data-driven feature-indexed hypothesis generation as one solution to the problem of searching large model databases.
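The feature-indexed hypothesis generation idea can be sketched with an inverted table from features to models, so that features extracted from a scene retrieve and rank only plausible candidates instead of scanning the whole database. Function names and the toy features are illustrative, not from the paper.

```python
def build_index(models):
    """models: {model_name: set_of_features}. Invert into a
    feature -> candidate-models table so that a scene feature
    retrieves only plausible hypotheses, not every model."""
    index = {}
    for name, features in models.items():
        for f in features:
            index.setdefault(f, set()).add(name)
    return index

def hypotheses(index, scene_features):
    """Rank candidate models by how many scene features vote for them."""
    votes = {}
    for f in scene_features:
        for name in index.get(f, ()):
            votes[name] = votes.get(name, 0) + 1
    return sorted(votes, key=votes.get, reverse=True)
```

Lookup cost now scales with the number of features observed in the scene rather than with the size of the model database, which is the point of indexing when the database holds hundreds or thousands of models.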
Reading K-3. Instructional Objectives Exchange. A Project of the Center for the Study of Evaluation.
ERIC Educational Resources Information Center
California Univ., Los Angeles. Center for the Study of Evaluation.
Three hundred and ninety-seven objectives and related evaluation items for reading in grades kindergarten through three are presented for the teacher and administrator in this collection developed by the Instructional Objectives Exchange (IOX). The objectives are organized into the categories of word recognition, comprehension, and study skills,…
Function Follows Form: Activation of Shape and Function Features during Object Identification
ERIC Educational Resources Information Center
Yee, Eiling; Huffstetler, Stacy; Thompson-Schill, Sharon L.
2011-01-01
Most theories of semantic memory characterize knowledge of a given object as comprising a set of semantic features. But how does conceptual activation of these features proceed during object identification? We present the results of a pair of experiments that demonstrate that object recognition is a dynamically unfolding process in which function…
Infants' Recognition of Objects Using Canonical Color
ERIC Educational Resources Information Center
Kimura, Atsushi; Wada, Yuji; Yang, Jiale; Otsuka, Yumiko; Dan, Ippeita; Masuda, Tomohiro; Kanazawa, So; Yamaguchi, Masami K.
2010-01-01
We explored infants' ability to recognize the canonical colors of daily objects, including two color-specific objects (human face and fruit) and a non-color-specific object (flower), by using a preferential looking technique. A total of 58 infants between 5 and 8 months of age were tested with a stimulus composed of two color pictures of an object…
Genetic specificity of face recognition.
Shakeshaft, Nicholas G; Plomin, Robert
2015-10-13
Specific cognitive abilities in diverse domains are typically found to be highly heritable and substantially correlated with general cognitive ability (g), both phenotypically and genetically. Recent twin studies have found the ability to memorize and recognize faces to be an exception, being similarly heritable but phenotypically substantially uncorrelated both with g and with general object recognition. However, the genetic relationships between face recognition and other abilities (the extent to which they share a common genetic etiology) cannot be determined from phenotypic associations. In this, to our knowledge, first study of the genetic associations between face recognition and other domains, 2,000 18- and 19-year-old United Kingdom twins completed tests assessing their face recognition, object recognition, and general cognitive abilities. Results confirmed the substantial heritability of face recognition (61%), and multivariate genetic analyses found that most of this genetic influence is unique and not shared with other cognitive abilities.
Evidence for perceptual deficits in associative visual (prosop)agnosia: a single-case study.
Delvenne, Jean François; Seron, Xavier; Coyette, Françoise; Rossion, Bruno
2004-01-01
Associative visual agnosia is classically defined as normal visual perception stripped of its meaning [Archiv für Psychiatrie und Nervenkrankheiten 21 (1890) 22/English translation: Cognitive Neuropsychol. 5 (1988) 155]: these patients cannot access their stored visual memories to categorize objects that they nonetheless perceive correctly. However, according to an influential theory of visual agnosia [Farah, Visual Agnosia: Disorders of Object Recognition and What They Tell Us about Normal Vision, MIT Press, Cambridge, MA, 1990], visual associative agnosics necessarily present perceptual deficits that are the cause of their impairment in object recognition. Here we report a detailed investigation of a patient, NS, with bilateral occipito-temporal lesions who is strongly impaired in object and face recognition. NS shows normal drawing copy and normal performance on object and face matching tasks as used in classical neuropsychological tests. However, when tested with several computer tasks using carefully controlled visual stimuli, and taking both his accuracy and response times into account, NS was found to perform abnormally on high-level visual processing of objects and faces. Albeit presenting a different pattern of deficits than previously described in integrative agnosic patients such as HJA and LH, his deficits were characterized by an inability to integrate individual parts into a whole percept, as suggested by his failure at processing structurally impossible three-dimensional (3D) objects, an absence of face inversion effects, and an advantage at detecting and matching single parts. Taken together, these observations question the idea of separate visual representations for object/face perception and object/face knowledge derived from investigations of visual associative (prosop)agnosia, and they raise some methodological issues in the analysis of single-case studies of (prosop)agnosic patients.
Malin, David H; Lee, David R; Goyarzu, Pilar; Chang, Yu-Hsuan; Ennis, Lalanya J; Beckett, Elizabeth; Shukitt-Hale, Barbara; Joseph, James A
2011-03-01
Previously, 4 mo of a blueberry-enriched (BB) antioxidant diet prevented impaired object recognition memory in aging rats. Experiment 1 determined whether 1- and 2-mo BB diets would have a similar effect and whether the benefits would disappear promptly after terminating the diets. Experiment 2 determined whether a 1-mo BB diet could subsequently reverse existing object memory impairment in aging rats. In experiment 1, Fischer-344 rats were maintained on an appropriate control diet or on 1 or 2 mo of the BB diet before testing object memory at 19 mo postnatally. In experiment 2, rats were tested for object recognition memory at 19 mo and again at 20 mo after 1 mo of maintenance on a 2% BB or control diet. In experiment 1, the control group performed no better than chance, whereas the 1- and 2-mo BB diet groups performed similarly and significantly better than controls. The 2-mo BB-diet group, but not the 1-mo group, maintained its performance over a subsequent month on a standard laboratory diet. In experiment 2, the 19-mo-old rats performed near chance. At 20 mo of age, the rats subsequently maintained on the BB diet significantly increased their object memory scores, whereas the control diet group exhibited a non-significant decline. The change in object memory scores differed significantly between the two diet groups. These results suggest that a considerable degree of age-related object memory decline can be prevented and reversed by brief maintenance on BB diets. Copyright © 2011 Elsevier Inc. All rights reserved.
Study on cognitive impairment in diabetic rats by different behavioral experiments
NASA Astrophysics Data System (ADS)
Yu-bin, Ji; Zeng-yi, Li; Guo-song, Xin; Chi, Wei; Hong-jian, Zhu
2017-12-01
The object recognition test and the Y maze test are widely used methods for evaluating learning and memory behavior. In the novel object recognition experiment, diabetic rats were slower than normal rats to discriminate the novel object from the familiar one, indicating that learning and memory were impaired in the diabetic rats. In the Y maze test, the retention-time ratio and the number of errors were much higher in diabetic rats than in the blank control group. Both methods can reflect the cognitive impairment present in diabetic rats.
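Novelty preference in the object recognition test is commonly summarized as a discrimination index over exploration times. The function and the exploration times below are an illustrative sketch, not values from the study.

```python
def discrimination_index(t_novel, t_familiar):
    """Novel object recognition: DI = (novel - familiar) / total exploration.

    DI > 0 indicates a preference for the novel object (intact recognition
    memory); DI near 0 suggests impairment, as reported for diabetic rats.
    """
    total = t_novel + t_familiar
    if total == 0:
        raise ValueError("no exploration recorded")
    return (t_novel - t_familiar) / total

# Hypothetical exploration times in seconds (illustrative only):
print(discrimination_index(30.0, 10.0))  # 0.5  (clear novelty preference)
print(discrimination_index(21.0, 19.0))  # 0.05 (near chance)
```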
Sountsov, Pavel; Santucci, David M.; Lisman, John E.
2011-01-01
Visual object recognition occurs easily despite differences in position, size, and rotation of the object, but the neural mechanisms responsible for this invariance are not known. We have found a set of transforms that achieve invariance in a neurally plausible way. We find that a transform based on local spatial frequency analysis of oriented segments and on logarithmic mapping, when applied twice in an iterative fashion, produces an output image that is unique to the object and that remains constant as the input image is shifted, scaled, or rotated. PMID:22125522
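The logarithmic-mapping step underlying this transform can be illustrated in a few lines: under log-polar coordinates, scaling and rotation of the input become simple translations, which is the property the iterated transform exploits to produce a shift-, scale-, and rotation-invariant output. The point coordinates below are arbitrary.

```python
import math

def to_log_polar(x, y):
    """Map Cartesian (x, y) to log-polar coordinates (log r, theta)."""
    r = math.hypot(x, y)
    return math.log(r), math.atan2(y, x)

# Take a point, then rotate it by phi and scale it by s:
lr1, th1 = to_log_polar(3.0, 4.0)
s, phi = 2.0, 0.5
x2 = s * (3.0 * math.cos(phi) - 4.0 * math.sin(phi))
y2 = s * (3.0 * math.sin(phi) + 4.0 * math.cos(phi))
lr2, th2 = to_log_polar(x2, y2)

# Scaling by s shifted log r by log s; rotating by phi shifted theta by phi:
print(lr2 - lr1, th2 - th1)  # approximately log(2) = 0.6931..., 0.5
```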
Automated Target Acquisition, Recognition and Tracking (ATTRACT). Phase 1
NASA Technical Reports Server (NTRS)
Abdallah, Mahmoud A.
1995-01-01
The primary objective of phase 1 of this research project is to conduct multidisciplinary research that will contribute to fundamental scientific knowledge in several of the USAF critical technology areas. Specifically, neural networks, signal processing techniques, and electro-optic capabilities are utilized to solve problems associated with automated target acquisition, recognition, and tracking. To accomplish the stated objective, several tasks have been identified and were executed.
Bennetts, Rachel J; Mole, Joseph; Bate, Sarah
2017-09-01
Face recognition abilities vary widely. While face recognition deficits have been reported in children, it is unclear whether superior face recognition skills can be encountered during development. This paper presents O.B., a 14-year-old female with extraordinary face recognition skills: a "super-recognizer" (SR). O.B. demonstrated exceptional face-processing skills across multiple tasks, with a level of performance that is comparable to adult SRs. Her superior abilities appear to be specific to face identity: She showed an exaggerated face inversion effect and her superior abilities did not extend to object processing or non-identity aspects of face recognition. Finally, an eye-movement task demonstrated that O.B. spent more time than controls examining the nose - a pattern previously reported in adult SRs. O.B. is therefore particularly skilled at extracting and using identity-specific facial cues, indicating that face and object recognition are dissociable during development, and that super recognition can be detected in adolescence.
Meuwese, Julia D I; van Loon, Anouk M; Lamme, Victor A F; Fahrenfort, Johannes J
2014-05-01
Perceptual decisions seem to be made automatically and almost instantly. Constructing a unitary subjective conscious experience takes more time. For example, when trying to avoid a collision with a car on a foggy road you brake or steer away in a reflex, before realizing you were in a near accident. This subjective aspect of object recognition has been given little attention. We used metacognition (assessed with confidence ratings) to measure subjective experience during object detection and object categorization for degraded and masked objects, while objective performance was matched. Metacognition was equal for degraded and masked objects, but categorization led to higher metacognition than did detection. This effect turned out to be driven by a difference in metacognition for correct rejection trials, which seemed to be caused by an asymmetry of the distractor stimulus: It does not contain object-related information in the detection task, whereas it does contain such information in the categorization task. Strikingly, this asymmetry selectively impacted metacognitive ability when objective performance was matched. This finding reveals a fundamental difference in how humans reflect versus act on information: When matching the amount of information required to perform two tasks at some objective level of accuracy (acting), metacognitive ability (reflecting) is still better in tasks that rely on positive evidence (categorization) than in tasks that rely more strongly on an absence of evidence (detection).
Fan, Lu; Zhao, Zaorui; Orr, Patrick T.; Chambers, Cassie H.; Lewis, Michael C.; Frick, Karyn M.
2010-01-01
We previously demonstrated that dorsal hippocampal extracellular signal-regulated kinase (ERK) activation is necessary for 17β-estradiol (E2) to enhance novel object recognition in young ovariectomized mice (Fernandez et al., 2008). Here, we asked whether E2 has similar memory-enhancing effects in middle-aged and aged ovariectomized mice, and whether these effects depend on ERK and phosphatidylinositol 3-kinase (PI3K)/Akt activation. We first demonstrated that intracerebroventricular (ICV) E2 or intrahippocampal (IH) E2 infusion immediately after object recognition training enhanced memory consolidation in middle-aged, but not aged, females. The E2-induced enhancement in middle-aged females was blocked by IH inhibition of ERK or PI3K activation. IH or ICV E2 infusion in middle-aged females increased phosphorylation of p42 ERK in the dorsal hippocampus 15, but not 5, min after infusion, an effect that was blocked by IH inhibition of ERK or PI3K activation. Dorsal hippocampal PI3K and Akt phosphorylation was increased 5 min after IH or ICV E2 infusion in middle-aged, but not aged, females. ICV E2 infusion also increased PI3K phosphorylation after 15 min, and this effect was blocked by IH PI3K, but not ERK, inhibition. These data demonstrate for the first time that activation of dorsal hippocampal PI3K/Akt and ERK signaling pathways is necessary for E2 to enhance object recognition memory in middle-aged females. They also reveal that similar dorsal hippocampal signaling pathways mediate E2-induced object recognition memory enhancement in young and middle-aged females, and that the inability of E2 to activate these pathways may underlie its failure to enhance object recognition in aged females. PMID:20335475
Recognition of upper airway and surrounding structures at MRI in pediatric PCOS and OSAS
NASA Astrophysics Data System (ADS)
Tong, Yubing; Udupa, J. K.; Odhner, D.; Sin, Sanghun; Arens, Raanan
2013-03-01
Obstructive Sleep Apnea Syndrome (OSAS) is common in obese children, with a 4.5-fold risk compared to normal control subjects. Polycystic Ovary Syndrome (PCOS) has recently been shown to be associated with OSAS, which may further lead to significant cardiovascular and neuro-cognitive deficits. We are investigating image-based biomarkers to understand the architectural and dynamic changes in the upper airway and the surrounding hard and soft tissue structures via MRI in obese teenage children to study OSAS. At previous SPIE conferences, we presented methods underlying Fuzzy Object Models (FOMs) for Automatic Anatomy Recognition (AAR) based on CT images of the thorax and the abdomen. The purpose of this paper is to demonstrate that the AAR approach is applicable to a different body region and image modality combination, namely the study of upper airway structures via MRI. FOMs were built hierarchically, with smaller sub-objects forming the offspring of larger parent objects. FOMs encode the uncertainty and variability present in the form and relationships among the objects over a study population. In total, 11 basic objects (17 including composite objects) were modeled. Automatic recognition of the best pose of FOMs in a given image was implemented using four methods: a one-shot method that does not require search, and three search-based methods, namely Fisher Linear Discriminant (FLD) analysis, a b-scale energy optimization strategy, and an optimum-threshold recognition method. In all, 30 multi-fold cross-validation experiments based on 15 patient MRI data sets were carried out to assess recognition accuracy. The results indicate that the objects can be recognized with an average location error of less than 5 mm, or 2-3 voxels. The iterative relative fuzzy connectedness (IRFC) algorithm was then adopted for delineation of the target organs based on the recognition results. The delineation results showed overall false-positive (FP) and true-positive (TP) volume fractions of 0.02 and 0.93, respectively.
Forms Of Memory For Representation Of Visual Objects
1991-02-14
…description system that functions independently of the episodic memory system that is damaged in amnesia and supports explicit remembering. … well as semantic and functional information about an object, are preserved in the episodic system. 4. Priming and recognition of depth-cued, 3D objects. A … requirement should serve to enhance an object's distinctiveness in episodic memory. We also predicted robust priming for symmetric objects; this is because …
Dynamic information processing states revealed through neurocognitive models of object semantics
Clarke, Alex
2015-01-01
Recognising objects relies on highly dynamic, interactive brain networks to process multiple aspects of object information. To fully understand how different forms of information about objects are represented and processed in the brain requires a neurocognitive account of visual object recognition that combines a detailed cognitive model of semantic knowledge with a neurobiological model of visual object processing. Here we ask how specific cognitive factors are instantiated in our mental processes and how they dynamically evolve over time. We suggest that coarse semantic information, based on generic shared semantic knowledge, is rapidly extracted from visual inputs and is sufficient to drive rapid category decisions. Subsequent recurrent neural activity between the anterior temporal lobe and posterior fusiform supports the formation of object-specific semantic representations – a conjunctive process primarily driven by the perirhinal cortex. These object-specific representations require the integration of shared and distinguishing object properties and support the unique recognition of objects. We conclude that a valuable way of understanding the cognitive activity of the brain is through testing the relationship between specific cognitive measures and dynamic neural activity. This kind of approach allows us to move towards uncovering the information processing states of the brain and how they evolve over time. PMID:25745632
Fragility of haptic memory in human full-term newborns.
Lejeune, Fleur; Borradori Tolsa, Cristina; Gentaz, Edouard; Barisnikov, Koviljka
2018-05-31
Numerous studies have established that newborns can memorize tactile information about the specific features of an object with their hands and detect differences with another object. However, while the robustness of haptic memory abilities has been examined in preterm newborns and in full-term infants, it has not yet been examined in full-term newborns. This research aimed to better understand the robustness of haptic memory abilities at birth by examining the effects of a change in the objects' temperature and of haptic interference. Sixty-eight full-term newborns (mean postnatal age: 2.5 days) were included. The two experiments were conducted in three phases: habituation (repeated presentation of the same object, a prism or cylinder, in the newborn's hand), discrimination (presentation of a novel object), and recognition (presentation of the familiar object). In Experiment 1, the change in the objects' temperature was controlled during the three phases. Results reveal that newborns can memorize specific features that differentiate prism and cylinder shapes by touch, and discriminate between them, but surprisingly they did not show evidence of recognizing them after interference. As no significant effect of the temperature condition was observed on habituation, discrimination, or recognition abilities, these findings suggest that discrimination abilities in newborns may be determined by the detection of shape differences. Overall, it seems that the ontogenesis of haptic recognition memory is not linear. The developmental schedule is likely crucial for haptic development between 34 and 40 gestational weeks (GW). Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.
2009-08-01
Motivated by the non-linear interpolation and generalization abilities of the hybrid optical neural network filter between the reference and non-reference images of the true-class object, we designed the modified-hybrid optical neural network filter. We applied an optical mask to the hybrid optical neural network filter's input. The mask was built with the constant weight connections of a randomly chosen image included in the training set. The resulting design of the modified-hybrid optical neural network filter is optimized to perform best in cluttered scenes of the true-class object. Due to the shift-invariance properties inherited from its correlator unit, the filter can accommodate multiple objects of the same class to be detected within an input cluttered image. Additionally, the architecture of the neural network unit of the general hybrid optical neural network filter allows the recognition of multiple objects of different classes within the input cluttered image by modifying the output layer of the unit. We test the modified-hybrid optical neural network filter on the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The filter is shown to exhibit, in a single pass over the input data, simultaneous out-of-plane rotation and shift invariance and good clutter tolerance. It is able to successfully detect and correctly classify the true-class objects within background clutter for which there has been no previous training.
Automated Recognition of 3D Features in GPIR Images
NASA Technical Reports Server (NTRS)
Park, Han; Stough, Timothy; Fijany, Amir
2007-01-01
A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. 
In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
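A minimal sketch of the object-linking step described above, assuming features have been reduced to 2D centroids per slice (the actual algorithm's data structures and threshold are not specified in this description):

```python
from math import hypot

def link_features(slices, radius):
    """Link features in successive 2D slices into a directed-graph edge list.

    `slices` is a list of lists of (x, y) feature centroids, one list per
    slice. A feature is linked to any feature in the adjacent slice that
    lies within `radius` of it. Each edge is a pair
    ((slice_i, feature_j), (slice_i+1, feature_k)).
    """
    edges = []
    for i in range(len(slices) - 1):
        for j, (x1, y1) in enumerate(slices[i]):
            for k, (x2, y2) in enumerate(slices[i + 1]):
                if hypot(x2 - x1, y2 - y1) <= radius:
                    edges.append(((i, j), (i + 1, k)))
    return edges

# Three slices of a hypothetical pipe drifting slowly to the right:
slices = [[(10.0, 20.0)], [(11.0, 20.5)], [(12.5, 21.0)]]
print(link_features(slices, radius=2.0))
# [((0, 0), (1, 0)), ((1, 0), (2, 0))]
```

Chains of linked edges in this graph are what a downstream pass would reconstruct as a single 3D object, such as a pipe traversing the slices.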
Tafazoli, Sina; Safaai, Houman; De Franceschi, Gioia; Rosselli, Federica Bianca; Vanzella, Walter; Riggi, Margherita; Buffolo, Federica; Panzeri, Stefano; Zoccolan, Davide
2017-01-01
Rodents are emerging as increasingly popular models of visual functions. Yet, evidence that rodent visual cortex is capable of advanced visual processing, such as object recognition, is limited. Here we investigate how neurons located along the progression of extrastriate areas that, in the rat brain, run laterally to primary visual cortex, encode object information. We found a progressive functional specialization of neural responses along these areas, with: (1) a sharp reduction of the amount of low-level, energy-related visual information encoded by neuronal firing; and (2) a substantial increase in the ability of both single neurons and neuronal populations to support discrimination of visual objects under identity-preserving transformations (e.g., position and size changes). These findings strongly argue for the existence of a rat object-processing pathway, and point to the rodents as promising models to dissect the neuronal circuitry underlying transformation-tolerant recognition of visual objects. DOI: http://dx.doi.org/10.7554/eLife.22794.001 PMID:28395730
EPO improved neurologic outcome in rat pups late after traumatic brain injury.
Schober, Michelle E; Requena, Daniela F; Rodesch, Christopher K
2018-05-01
In adult rats, erythropoietin improved outcomes early and late after traumatic brain injury, associated with increased levels of Brain Derived Neurotrophic Factor. Using our model of pediatric traumatic brain injury, controlled cortical impact in 17-day old rats, we previously showed that erythropoietin increased hippocampal neuronal fraction in the first two days after injury. Erythropoietin also decreased activation of caspase3, an apoptotic enzyme modulated by Brain Derived Neurotrophic Factor, and improved Novel Object Recognition testing 14 days after injury. Data on long-term effects of erythropoietin on Brain Derived Neurotrophic Factor expression, histology and cognitive function after developmental traumatic brain injury are lacking. We hypothesized that erythropoietin would increase Brain Derived Neurotrophic Factor and improve long-term object recognition in rat pups after controlled cortical impact, associated with increased neuronal fraction in the hippocampus. Rat pups received erythropoietin or vehicle at 1, 24, and 48 h and 7 days after injury or sham surgery followed by histology at 35 days, Novel Object Recognition testing at adulthood, and Brain Derived Neurotrophic Factor measurements early and late after injury. Erythropoietin improved Novel Object Recognition performance and preserved hippocampal volume, but not neuronal fraction, late after injury. Improved object recognition in erythropoietin treated rats was associated with preserved hippocampal volume late after traumatic brain injury. Erythropoietin is approved to treat various pediatric conditions. Coupled with exciting experimental and clinical studies suggesting it is beneficial after neonatal hypoxic ischemic brain injury, our preliminary findings support further study of erythropoietin use after developmental traumatic brain injury. Copyright © 2018 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Kinnavane, L; Amin, E; Horne, M; Aggleton, J P
2014-01-01
The present study examined immediate-early gene expression in the perirhinal cortex of rats with hippocampal lesions. The goal was to test those models of recognition memory which assume that the perirhinal cortex can function independently of the hippocampus. The c-fos gene was targeted, as its expression in the perirhinal cortex is strongly associated with recognition memory. Four groups of rats were examined. Rats with hippocampal lesions and their surgical controls were given either a recognition memory task (novel vs. familiar objects) or a relative recency task (objects with differing degrees of familiarity). Perirhinal Fos expression in the hippocampal-lesioned groups correlated with both recognition and recency performance. The hippocampal lesions, however, had no apparent effect on overall levels of perirhinal or entorhinal cortex c-fos expression in response to novel objects, with only restricted effects being seen in the recency condition. Network analyses showed that whereas the patterns of parahippocampal interactions were differentially affected by novel or familiar objects, these correlated networks were not altered by hippocampal lesions. Additional analyses in control rats revealed two modes of correlated medial temporal activation. Novel stimuli recruited the pathway from the lateral entorhinal cortex (cortical layer II or III) to hippocampal field CA3, and thence to CA1. Familiar stimuli recruited the direct pathway from the lateral entorhinal cortex (principally layer III) to CA1. The present findings not only reveal the independence from the hippocampus of some perirhinal systems associated with recognition memory, but also show how novel stimuli engage hippocampal subfields in qualitatively different ways from familiar stimuli. PMID:25264133
NASA Astrophysics Data System (ADS)
Yu, Francis T. S.; Jutamulia, Suganda
2008-10-01
Contributors; Preface; 1. Pattern recognition with optics Francis T. S. Yu and Don A. Gregory; 2. Hybrid neural networks for nonlinear pattern recognition Taiwei Lu; 3. Wavelets, optics, and pattern recognition Yao Li and Yunglong Sheng; 4. Applications of the fractional Fourier transform to optical pattern recognition David Mendlovic, Zeev Zalevsky and Haldun M. Ozaktas; 5. Optical implementation of mathematical morphology Tien-Hsin Chao; 6. Nonlinear optical correlators with improved discrimination capability for object location and recognition Leonid P. Yaroslavsky; 7. Distortion-invariant quadratic filters Gregory Gheen; 8. Composite filter synthesis as applied to pattern recognition Shizhuo Yin and Guowen Lu; 9. Iterative procedures in electro-optical pattern recognition Joseph Shamir; 10. Optoelectronic hybrid system for three-dimensional object pattern recognition Guoguang Mu, Mingzhe Lu and Ying Sun; 11. Applications of photorefractive devices in optical pattern recognition Xiangyang Yang; 12. Optical pattern recognition with microlasers Eung-Gi Paek; 13. Optical properties and applications of bacteriorhodopsin Q. Wang Song and Yu-He Zhang; 14. Liquid-crystal spatial light modulators Aris Tanone and Suganda Jutamulia; 15. Representations of fully complex functions on real-time spatial light modulators Robert W. Cohn and Laurence G. Hassbrook; Index.
Havranek, Tomas; Zatkova, Martina; Lestanova, Zuzana; Bacova, Zuzana; Mravec, Boris; Hodosy, Julius; Strbak, Vladimir; Bakos, Jan
2015-06-01
Brain oxytocin regulates a variety of social and affiliative behaviors and also affects learning and memory. However, the mechanisms of its action at the level of neuronal circuits are not fully understood. The present study tests the hypothesis that molecular factors required for memory formation and synaptic plasticity, including brain-derived neurotrophic factor, neural growth factor, nestin, microtubule-associated protein 2 (MAP2), and synapsin I, are enhanced by central administration of oxytocin. We also investigated whether oxytocin enhances object recognition and acts as an anxiolytic agent. Male Wistar rats were infused continuously with oxytocin (20 ng/µl) via an osmotic minipump into the lateral cerebral ventricle for 7 days; controls were infused with vehicle. The object recognition test, open field test, and elevated plus maze test were performed on the sixth, seventh, and eighth days from the start of the infusion. No significant effects of oxytocin on anxiety-like behavior were observed. The object recognition test showed that oxytocin-treated rats significantly preferred unknown objects. Oxytocin treatment significantly increased gene expression and protein levels of neurotrophins, MAP2, and synapsin I in the hippocampus. No changes were observed in nestin expression. Our results provide the first direct evidence implicating oxytocin as a regulator of brain plasticity at the level of changes in neuronal growth factors, cytoskeletal proteins, and behavior. The data support the assumption that oxytocin is important for short-term hippocampus-dependent memory. © 2015 Wiley Periodicals, Inc.
Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.
Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus
2017-01-01
Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.
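A toy 1-D illustration of the BL (bottom-up plus lateral) idea, with hypothetical weights: the actual models in the paper are 2-D convolutional networks, so this only sketches how a fixed feedforward drive is repeatedly refined by lateral recurrence over time steps.

```python
def conv1d(x, w):
    """'Same'-padded 1-D cross-correlation (no kernel flipping)."""
    k = len(w) // 2
    padded = [0.0] * k + x + [0.0] * k
    return [sum(w[m] * padded[i + m] for m in range(len(w)))
            for i in range(len(x))]

def relu(x):
    return [max(0.0, v) for v in x]

def bl_layer(x, w_bottom_up, w_lateral, steps=3):
    """Bottom-up + lateral (BL) recurrent layer sketch: the feedforward
    drive is computed once, then lateral recurrence refines the
    activation over several time steps."""
    drive = conv1d(x, w_bottom_up)
    h = relu(drive)
    for _ in range(steps):
        h = relu([d + l for d, l in zip(drive, conv1d(h, w_lateral))])
    return h

x = [0.0, 1.0, 2.0, 1.0, 0.0]  # toy input with a central bump
h = bl_layer(x, w_bottom_up=[0.25, 0.5, 0.25], w_lateral=[0.1, 0.0, 0.1])
print([round(v, 3) for v in h])  # central unit remains the most active
```

The BT and BLT variants in the paper additionally feed activity back from higher layers; the same pattern applies, with the top-down signal added to the drive at each step.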
Deficits in long-term recognition memory reveal dissociated subtypes in congenital prosopagnosia.
Stollhoff, Rainer; Jost, Jürgen; Elze, Tobias; Kennerknecht, Ingo
2011-01-25
The study investigates long-term recognition memory in congenital prosopagnosia (CP), a lifelong impairment in face identification that is present from birth. Previous investigations of processing deficits in CP have mostly relied on short-term recognition tests to estimate the scope and severity of individual deficits. We first report on a controlled test of long-term (one year) recognition memory for faces and objects conducted with a large group of participants with CP. Long-term recognition memory is significantly impaired in eight CP participants (CPs). In all but one case, this deficit was selective to faces and did not extend to intra-class recognition of object stimuli. In a test of famous face recognition, long-term recognition deficits were less pronounced, even after accounting for differences in media consumption between controls and CPs. Second, we combined test results on long-term and short-term recognition of faces and objects, and found large heterogeneity in the severity and scope of individual deficits. Analysis of the observed heterogeneity revealed a dissociation of CP into subtypes with a homogeneous phenotypical profile. Third, we found that among CPs self-assessment of real-life difficulties, based on a standardized questionnaire, and experimentally assessed face recognition deficits are strongly correlated. Our results demonstrate that controlled tests of long-term recognition memory are needed to fully assess face recognition deficits in CP. Based on controlled and comprehensive experimental testing, CP can be dissociated into subtypes with a homogeneous phenotypical profile. The CP subtypes identified align with those found in prosopagnosia caused by cortical lesions; they can be interpreted with respect to a hierarchical neural system for face perception.
Drane, Daniel L.; Loring, David W.; Voets, Natalie L.; Price, Michele; Ojemann, Jeffrey G.; Willie, Jon T.; Saindane, Amit M.; Phatak, Vaishali; Ivanisevic, Mirjana; Millis, Scott; Helmers, Sandra L.; Miller, John W.; Meador, Kimford J.; Gross, Robert E.
2015-01-01
OBJECTIVES: Temporal lobe epilepsy (TLE) patients experience significant deficits in category-related object recognition and naming following standard surgical approaches. These deficits may result from a decoupling of core processing modules (e.g., language, visual processing, semantic memory), due to “collateral damage” to temporal regions outside the hippocampus following open surgical approaches. We predicted stereotactic laser amygdalohippocampotomy (SLAH) would minimize such deficits because it preserves white matter pathways and neocortical regions critical for these cognitive processes. METHODS: Tests of naming and recognition of common nouns (Boston Naming Test) and famous persons were compared with nonparametric analyses using exact tests between a group of nineteen patients with medically intractable mesial TLE undergoing SLAH (10 dominant, 9 nondominant) and a comparable series of TLE patients undergoing standard surgical approaches (n=39), using a prospective, non-randomized, non-blinded, parallel-group design. RESULTS: Performance declines were significantly greater for the dominant TLE patients undergoing open resection versus SLAH for naming famous faces and common nouns (F=24.3, p<.0001, η2=.57, and F=11.2, p<.001, η2=.39, respectively), and for the nondominant TLE patients undergoing open resection versus SLAH for recognizing famous faces (F=3.9, p<.02, η2=.19). When examined on an individual-subject basis, no SLAH patients experienced any performance declines on these measures. In contrast, 32 of the 39 undergoing standard surgical approaches declined on one or more measures for both object types (p<.001, Fisher’s exact test). Twenty-one of 22 left (dominant) TLE patients declined on one or both naming tasks after open resection, while 11 of 17 right (non-dominant) TLE patients declined on face recognition.
SIGNIFICANCE: Preliminary results suggest that (1) naming and recognition functions can be spared in TLE patients undergoing SLAH, and (2) the hippocampus does not appear to be an essential component of the neural networks underlying name retrieval or recognition of common objects or famous faces. PMID:25489630
Tian, Yingli; Yang, Xiaodong; Yi, Chucai; Arditi, Aries
2013-04-01
Independent travel is a well known challenge for blind and visually impaired persons. In this paper, we propose a proof-of-concept computer vision-based wayfinding aid for blind people to independently access unfamiliar indoor environments. In order to find different rooms (e.g. an office, a lab, or a bathroom) and other building amenities (e.g. an exit or an elevator), we incorporate object detection with text recognition. First we develop a robust and efficient algorithm to detect doors, elevators, and cabinets based on their general geometric shape, by combining edges and corners. The algorithm is general enough to handle large intra-class variations of objects with different appearances among different indoor environments, as well as small inter-class differences between different objects such as doors and door-like cabinets. Next, in order to distinguish intra-class objects (e.g. an office door from a bathroom door), we extract and recognize text information associated with the detected objects. For text recognition, we first extract text regions from signs with multiple colors and possibly complex backgrounds, and then apply character localization and topological analysis to filter out background interference. The extracted text is recognized using off-the-shelf optical character recognition (OCR) software products. The object type, orientation, location, and text information are presented to the blind traveler as speech.
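The geometric stage of such a detector can be illustrated with a simple shape test on a candidate quadrilateral. The sketch below uses invented thresholds and only checks the candidate's aspect ratio; the paper's actual algorithm combines edge and corner evidence, which is not reproduced here.

```python
# Illustrative door-candidate test: accept a detected quadrilateral as
# door-like if it is taller than wide within a plausible ratio range.
# Thresholds are assumptions for illustration, not taken from the paper.

def bounding_box(corners):
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    return min(xs), min(ys), max(xs), max(ys)

def is_door_candidate(corners, min_ratio=1.5, max_ratio=3.0):
    """corners: four (x, y) image points of a detected quadrilateral."""
    x0, y0, x1, y1 = bounding_box(corners)
    w, h = x1 - x0, y1 - y0
    if w <= 0 or h <= 0:
        return False
    ratio = h / w           # doors are taller than wide
    return min_ratio <= ratio <= max_ratio
```

A rule of this kind handles large intra-class appearance variation cheaply, and the text-recognition stage then disambiguates the remaining door-like candidates (e.g., office versus bathroom doors).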
Binding neutral information to emotional contexts: Brain dynamics of long-term recognition memory.
Ventura-Bort, Carlos; Löw, Andreas; Wendt, Julia; Moltó, Javier; Poy, Rosario; Dolcos, Florin; Hamm, Alfons O; Weymar, Mathias
2016-04-01
There is abundant evidence in memory research that emotional stimuli are better remembered than neutral stimuli. However, the effects of an emotionally charged context on memory for associated neutral elements are also important, particularly in trauma- and stress-related disorders, where strong memories are often activated by neutral cues due to their emotional associations. In the present study, we used event-related potentials (ERPs) to investigate long-term recognition memory (1-week delay) for neutral objects that had been paired with emotionally arousing or neutral scenes during encoding. Context effects were clearly evident in the ERPs: an early frontal ERP old/new difference (300-500 ms) was enhanced for objects encoded in unpleasant compared to pleasant and neutral contexts, and a late central-parietal old/new difference (400-700 ms) was observed for objects paired with both pleasant and unpleasant contexts but not for items paired with neutral backgrounds. Interestingly, objects encoded in emotional contexts (and novel objects) also prompted an enhanced early frontal positivity (180-220 ms) compared to objects paired with neutral scenes, indicating early perceptual significance. The present data suggest that emotional, particularly unpleasant, backgrounds strengthen memory for items encountered within these contexts and engage automatic and explicit recognition processes. These results could help in understanding binding mechanisms involved in the activation of trauma-related memories by neutral cues.
Wu, Lin; Wang, Yang; Pan, Shirui
2017-12-01
It is now well established that sparse representation models work effectively for many visual recognition tasks, and have pushed forward the success of dictionary learning therein. Recent studies of dictionary learning focus on learning discriminative atoms instead of purely reconstructive ones. However, the existence of intraclass diversities (i.e., data objects within the same category that exhibit large visual dissimilarities) and interclass similarities (i.e., data objects from distinct classes that share much visual similarity) makes it challenging to learn effective recognition models. To this end, a large number of labeled data objects is required to learn models which can effectively characterize these subtle differences. However, access to labeled data objects is always limited, making it difficult to learn a monolithic dictionary that can be discriminative enough. To address the above limitations, in this paper we propose a weakly supervised dictionary learning method to automatically learn a discriminative dictionary by fully exploiting visual attribute correlations rather than label priors. In particular, the intrinsic attribute correlations are deployed as a critical cue to guide the process of object categorization, and then a set of subdictionaries is jointly learned with respect to each category. The resulting dictionary is highly discriminative and leads to intraclass-diversity-aware sparse representations. Extensive experiments on image classification and object recognition are conducted to show the effectiveness of our approach.
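The sparse-representation setting this work builds on can be illustrated with a toy greedy coder: a signal is expressed as a sparse combination of dictionary atoms. The dictionary, signal, and `matching_pursuit` solver below are illustrative stand-ins; the paper's contribution is the weakly supervised, attribute-guided learning of the dictionary itself, which is not shown here.

```python
# Toy sparse coding by greedy matching pursuit over a fixed dictionary
# of unit-norm atoms. All data here are invented for illustration.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, dictionary, n_atoms=2):
    """Greedily pick the atoms that best explain the residual;
    return a sparse code {atom_index: coefficient} and the residual."""
    residual = list(signal)
    code = {}
    for _ in range(n_atoms):
        # choose the atom most correlated with the unexplained residual
        best = max(range(len(dictionary)),
                   key=lambda j: abs(dot(residual, dictionary[j])))
        coef = dot(residual, dictionary[best])
        code[best] = code.get(best, 0.0) + coef
        residual = [r - coef * d
                    for r, d in zip(residual, dictionary[best])]
    return code, residual
```

In a discriminative dictionary, atoms (or subdictionaries) are associated with categories, so the pattern of nonzero coefficients in `code` itself carries class information.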
2017-01-01
The role of stereo disparity in the recognition of 3-dimensional (3D) object shape remains an unresolved issue for theoretical models of the human visual system. We examined this issue using high-density (128 channel) recordings of event-related potentials (ERPs). A recognition memory task was used in which observers were trained to recognize a subset of complex, multipart, 3D novel objects under conditions of either (bi-) monocular or stereo viewing. In a subsequent test phase they discriminated previously trained targets from untrained distractor objects that shared either local parts, 3D spatial configuration, or neither dimension, across both previously seen and novel viewpoints. The behavioral data showed a stereo advantage for target recognition at untrained viewpoints. ERPs showed early differential amplitude modulations to shape similarity defined by local part structure and global 3D spatial configuration. This occurred initially during an N1 component around 145–190 ms poststimulus onset, and then subsequently during an N2/P3 component around 260–385 ms poststimulus onset. For mono viewing, amplitude modulation during the N1 was greatest between targets and distractors with different local parts for trained views only. For stereo viewing, amplitude modulation during the N2/P3 was greatest between targets and distractors with different global 3D spatial configurations and generalized across trained and untrained views. The results show that image classification is modulated by stereo information about the local part, and global 3D spatial configuration of object shape. The findings challenge current theoretical models that do not attribute functional significance to stereo input during the computation of 3D object shape. PMID:29022728
Scene recognition following locomotion around a scene.
Motes, Michael A; Finlay, Cory A; Kozhevnikov, Maria
2006-01-01
Effects of locomotion on scene-recognition reaction time (RT) and accuracy were studied. In experiment 1, observers memorized an 11-object scene and made scene-recognition judgments on subsequently presented scenes from the encoded view or different views (i.e., scenes were rotated or observers moved around the scene, both from 40 degrees to 360 degrees). In experiment 2, observers viewed different 5-object scenes on each trial and made scene-recognition judgments from the encoded view or after moving around the scene, from 36 degrees to 180 degrees. Across experiments, scene-recognition RT increased (and in experiment 2 accuracy decreased) with angular distance between encoded and judged views, regardless of how the viewpoint changes occurred. The findings raise questions about conditions in which locomotion produces spatially updated representations of scenes.
Object classification for obstacle avoidance
NASA Astrophysics Data System (ADS)
Regensburger, Uwe; Graefe, Volker
1991-03-01
Object recognition is necessary for any mobile robot operating autonomously in the real world. This paper discusses an object classifier based on a 2-D object model. Obstacle candidates are tracked and analyzed; false alarms generated by the object detector are recognized and rejected. The methods have been implemented on a multi-processor system and tested in real-world experiments. They work reliably under favorable conditions, but problems sometimes occur, e.g., when objects contain many features (edges) or move in front of a structured background.
Category-Specificity in Visual Object Recognition
ERIC Educational Resources Information Center
Gerlach, Christian
2009-01-01
Are all categories of objects recognized in the same manner visually? Evidence from neuropsychology suggests they are not: some brain damaged patients are more impaired in recognizing natural objects than artefacts whereas others show the opposite impairment. Category-effects have also been demonstrated in neurologically intact subjects, but the…
When a Picasso is a "Picasso": the entry point in the identification of visual art.
Belke, B; Leder, H; Harsanyi, G; Carbon, C C
2010-02-01
We investigated whether art is distinguished from other real-world objects in human cognition, in that art allows for a special memorial representation and identification based on artists' specific stylistic appearances. In tests with art-experienced viewers, converging empirical evidence from three experiments, each sensitive to the question of initial object recognition, suggests that identification of visual art occurs at the subordinate level of the producing artist. Specifically, in a free naming task it was found that art-objects, as opposed to non-art-objects, were most frequently named with subordinate-level categories, with the artist's name as the most frequent category (Experiment 1). In a category-verification task (Experiment 2), art-objects were recognized faster than non-art-objects at the subordinate level with the artist's name. In a conceptual priming task, subordinate primes of artists' names facilitated matching responses to art-objects but did not facilitate responses to non-art-objects (Experiment 3). Collectively, these results suggest that the artist's name has a special status in the memorial representation of visual art and serves as a predominant entry point in recognition in art perception. Copyright 2009 Elsevier B.V. All rights reserved.
Towards building a team of intelligent robots
NASA Technical Reports Server (NTRS)
Varanasi, Murali R.; Mehrotra, R.
1987-01-01
Topics addressed include: collision-free motion planning of multiple robot arms; two-dimensional object recognition; and pictorial databases (storage and sharing of the representations of three-dimensional objects).
The Initial Development of Object Knowledge by a Learning Robot
Modayil, Joseph; Kuipers, Benjamin
2008-01-01
We describe how a robot can develop knowledge of the objects in its environment directly from unsupervised sensorimotor experience. The object knowledge consists of multiple integrated representations: trackers that form spatio-temporal clusters of sensory experience, percepts that represent properties for the tracked objects, classes that support efficient generalization from past experience, and actions that reliably change object percepts. We evaluate how well this intrinsically acquired object knowledge can be used to solve externally specified tasks including object recognition and achieving goals that require both planning and continuous control. PMID:19953188
McMenamin, Brenton W.; Deason, Rebecca G.; Steele, Vaughn R.; Koutstaal, Wilma; Marsolek, Chad J.
2014-01-01
Previous research indicates that dissociable neural subsystems underlie abstract-category (AC) recognition and priming of objects (e.g., cat, piano) and specific-exemplar (SE) recognition and priming of objects (e.g., a calico cat, a different calico cat, a grand piano, etc.). However, the degree of separability between these subsystems is not known, despite the importance of this issue for assessing relevant theories. Visual object representations are widely distributed in visual cortex, thus a multivariate pattern analysis (MVPA) approach to analyzing functional magnetic resonance imaging (fMRI) data may be critical for assessing the separability of different kinds of visual object processing. Here we examined the neural representations of visual object categories and visual object exemplars using multi-voxel pattern analyses of brain activity elicited in visual object processing areas during a repetition-priming task. In the encoding phase, participants viewed visual objects and the printed names of other objects. In the subsequent test phase, participants identified objects that were either same-exemplar primed, different-exemplar primed, word-primed, or unprimed. In visual object processing areas, classifiers were trained to distinguish same-exemplar primed objects from word-primed objects. Then, the abilities of these classifiers to discriminate different-exemplar primed objects and word-primed objects (reflecting AC priming) and to discriminate same-exemplar primed objects and different-exemplar primed objects (reflecting SE priming) was assessed. Results indicated that (a) repetition priming in occipital-temporal regions is organized asymmetrically, such that AC priming is more prevalent in the left hemisphere and SE priming is more prevalent in the right hemisphere, and (b) AC and SE subsystems are weakly modular, not strongly modular or unified. PMID:25528436
ERIC Educational Resources Information Center
Siakaluk, Paul D.; Pexman, Penny M.; Aguilera, Laura; Owen, William J.; Sears, Christopher R.
2008-01-01
We examined the effects of sensorimotor experience in two visual word recognition tasks. Body-object interaction (BOI) ratings were collected for a large set of words. These ratings assess perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g., "mask") and a set of low BOI…
The Effect of Inversion on 3- to 5-Year-Olds' Recognition of Face and Nonface Visual Objects
ERIC Educational Resources Information Center
Picozzi, Marta; Cassia, Viola Macchi; Turati, Chiara; Vescovo, Elena
2009-01-01
This study compared the effect of stimulus inversion on 3- to 5-year-olds' recognition of faces and two nonface object categories matched with faces for a number of attributes: shoes (Experiment 1) and frontal images of cars (Experiments 2 and 3). The inversion effect was present for faces but not shoes at 3 years of age (Experiment 1). Analogous…
Leveraging Cognitive Context for Object Recognition
2014-06-01
Context is most often viewed as a static concept, learned from large image databases. We build upon this concept by exploring cognitive context, demonstrating how rich dynamic context provided by … context that people rely upon as they perceive the world. Context in ACT-R/E takes the form of associations between related concepts that are learned … and accuracy of object recognition.
Pitsikas, Nikolaos; Sakellaridis, Nikolaos
2007-10-01
The effects of the non-competitive N-methyl-D-aspartate (NMDA) receptor antagonist memantine on recognition memory were investigated in the rat using the object recognition task. In addition, a possible interaction between memantine and the nitric oxide (NO) donor molsidomine in antagonizing extinction of recognition memory was evaluated using the same behavioral procedure. In a first dose-response study, post-training administration of memantine (10 and 20, but not 3 mg/kg) antagonized recognition memory deficits in the rat, suggesting that memantine modulates storage and/or retrieval of information. In a subsequent study, a combination of sub-threshold doses of memantine (3 mg/kg) and the NO donor molsidomine (1 mg/kg) counteracted delay-dependent impairments in the same task. Neither memantine (3 mg/kg) nor molsidomine (1 mg/kg) alone reduced object recognition performance deficits. The present findings indicate (a) that memantine is involved in recognition memory, and (b) that there is a functional interaction between memantine and molsidomine in recognition memory mechanisms.
Glucocorticoid effects on object recognition memory require training-associated emotional arousal.
Okuda, Shoki; Roozendaal, Benno; McGaugh, James L
2004-01-20
Considerable evidence implicates glucocorticoid hormones in the regulation of memory consolidation and memory retrieval. The present experiments investigated whether the influence of these hormones on memory depends on the level of emotional arousal induced by the training experience. We investigated this issue in male Sprague-Dawley rats by examining the effects of immediate posttraining systemic injections of the glucocorticoid corticosterone on object recognition memory under two conditions that differed in their training-associated emotional arousal. In rats that were not previously habituated to the experimental context, corticosterone (0.3, 1.0, or 3.0 mg/kg, s.c.) administered immediately after a 3-min training trial enhanced 24-hr retention performance in an inverted-U shaped dose-response relationship. In contrast, corticosterone did not affect 24-hr retention of rats that received extensive prior habituation to the experimental context and, thus, had decreased novelty-induced emotional arousal during training. Additionally, immediate posttraining administration of corticosterone to nonhabituated rats, in doses that enhanced 24-hr retention, impaired object recognition performance at a 1-hr retention interval whereas corticosterone administered after training to well-habituated rats did not impair 1-hr retention. Thus, the present findings suggest that training-induced emotional arousal may be essential for glucocorticoid effects on object recognition memory.
Are face representations depth cue invariant?
Dehmoobadsharifabadi, Armita; Farivar, Reza
2016-06-01
The visual system can process three-dimensional depth cues defining surfaces of objects, but it is unclear whether such information contributes to complex object recognition, including face recognition. The processing of different depth cues involves both dorsal and ventral visual pathways. We investigated whether facial surfaces defined by individual depth cues resulted in meaningful face representations: representations that maintain the relationship between the population of faces as defined in a multidimensional face space. We measured face identity aftereffects for facial surfaces defined by individual depth cues (Experiments 1 and 2) and tested whether the aftereffect transfers across depth cues (Experiments 3 and 4). Facial surfaces and their morphs to the average face were defined purely by one of shading, texture, motion, or binocular disparity. We obtained identification thresholds for matched (matched identity between adapting and test stimuli), non-matched (non-matched identity between adapting and test stimuli), and no-adaptation (showing only the test stimuli) conditions for each cue and across different depth cues. We found robust face identity aftereffects across experiments. Our results suggest that depth cues do contribute to forming meaningful face representations that are depth cue invariant. Depth cue invariance would require integration of information across different areas and different pathways for object recognition, and this in turn has important implications for cortical models of visual object recognition.
[Visual Texture Agnosia in Humans].
Suzuki, Kyoko
2015-06-01
Visual object recognition requires the processing of both geometric and surface properties. Patients with occipital lesions may have visual agnosia, which is impairment in the recognition and identification of visually presented objects primarily through their geometric features. An analogous condition involving the failure to recognize an object by its texture may exist, which can be called visual texture agnosia. Here we present two cases with visual texture agnosia. Case 1 had left homonymous hemianopia and right upper quadrantanopia, along with achromatopsia, prosopagnosia, and texture agnosia, because of damage to his left ventromedial occipitotemporal cortex and right lateral occipito-temporo-parietal cortex due to multiple cerebral embolisms. Although he showed difficulty matching and naming textures of real materials, he could readily name visually presented objects by their contours. Case 2 had right lower quadrantanopia, along with impairment in stereopsis and recognition of texture in 2D images, because of subcortical hemorrhage in the left occipitotemporal region. He failed to recognize shapes based on texture information, whereas shape recognition based on contours was well preserved. Our findings, along with those of three reported cases with texture agnosia, indicate that there are separate channels for processing texture, color, and geometric features, and that the regions around the left collateral sulcus are crucial for texture processing.
Hargreaves, Ian S; Pexman, Penny M
2014-05-01
According to several current frameworks, semantic processing involves an early influence of language-based information followed by later influences of object-based information (e.g., situated simulations; Santos, Chaigneau, Simmons, & Barsalou, 2011). In the present study we examined whether these predictions extend to the influence of semantic variables in visual word recognition. We investigated the time course of semantic richness effects in visual word recognition using a signal-to-respond (STR) paradigm fitted to a lexical decision task (LDT) and a semantic categorization task (SCT). We used linear mixed-effects models to examine the relative contributions of language-based (number of senses, ARC) and object-based (imageability, number of features, body-object interaction ratings) descriptions of semantic richness at four STR durations (75, 100, 200, and 400 ms). Results showed an early influence of number of senses and ARC in the SCT. In both LDT and SCT, object-based effects were the last to influence participants' decision latencies. We interpret our results within a framework in which semantic processes are available to influence word recognition as a function of their availability over time, and of their relevance to task-specific demands. Copyright © 2014 Elsevier B.V. All rights reserved.
Spoken Word Recognition in Toddlers Who Use Cochlear Implants
Grieco-Calub, Tina M.; Saffran, Jenny R.; Litovsky, Ruth Y.
2010-01-01
Purpose The purpose of this study was to assess the time course of spoken word recognition in 2-year-old children who use cochlear implants (CIs) in quiet and in the presence of speech competitors. Method Children who use CIs and age-matched peers with normal acoustic hearing listened to familiar auditory labels, in quiet or in the presence of speech competitors, while their eye movements to target objects were digitally recorded. Word recognition performance was quantified by measuring each child’s reaction time (i.e., the latency between the spoken auditory label and the first look at the target object) and accuracy (i.e., the amount of time that children looked at target objects within 367 ms to 2,000 ms after the label onset). Results Children with CIs were less accurate and took longer to fixate target objects than did age-matched children without hearing loss. Both groups of children showed reduced performance in the presence of the speech competitors, although many children continued to recognize labels at above-chance levels. Conclusion The results suggest that the unique auditory experience of young CI users slows the time course of spoken word recognition abilities. In addition, real-world listening environments may slow language processing in young language learners, regardless of their hearing status. PMID:19951921
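The two looking-while-listening measures described in this abstract (reaction time and accuracy) can be written out explicitly. A minimal sketch, assuming simplified gaze data (onset timestamps and fixation intervals in milliseconds relative to label onset); the function names and sample values are illustrative, not the authors' analysis code:

```python
# Hypothetical sketch of the two measures: reaction time (latency from
# label onset to the first look at the target) and accuracy (proportion
# of the 367-2000 ms analysis window spent fixating the target).

def reaction_time_ms(label_onset_ms, target_look_onsets_ms):
    """Latency from the spoken label onset to the first look at the target."""
    later = [t for t in target_look_onsets_ms if t >= label_onset_ms]
    return min(later) - label_onset_ms if later else None

def accuracy(on_target_ms, window=(367, 2000)):
    """Fraction of the analysis window spent fixating the target.

    on_target_ms: list of (start, end) fixation intervals on the target,
    in ms relative to label onset.
    """
    lo, hi = window
    looked = sum(max(0, min(e, hi) - max(s, lo)) for s, e in on_target_ms)
    return looked / (hi - lo)

rt = reaction_time_ms(0, [450, 900])          # first target look at 450 ms
acc = accuracy([(450, 1200), (1500, 1900)])   # two fixation intervals
```

Clipping each fixation interval to the analysis window before summing is what keeps looks that start before 367 ms or extend past 2,000 ms from inflating the accuracy score.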
Kopp, Franziska; Lindenberger, Ulman
2011-07-01
Joint attention develops during the first year of life but little is known about its effects on long-term memory. We investigated whether joint attention modulates long-term memory in 9-month-old infants. Infants were familiarized with visually presented objects in either of two conditions that differed in the degree of joint attention (high versus low). EEG indicators in response to old and novel objects were probed directly after the familiarization phase (immediate recognition), and following a 1-week delay (delayed recognition). In immediate recognition, the amplitude of positive slow-wave activity was modulated by joint attention. In the delayed recognition, the amplitude of the Pb component differentiated between high and low joint attention. In addition, the positive slow-wave amplitude during immediate and delayed recognition correlated with the frequency of infants' looks to the experimenter during familiarization. Under both high- and low-joint-attention conditions, the processing of unfamiliar objects was associated with an enhanced Nc component. Our results show that the degree of joint attention modulates EEG during immediate and delayed recognition. We conclude that joint attention affects long-term memory processing in 9-month-old infants by enhancing the relevance of attended items. © 2010 Blackwell Publishing Ltd.
Comparing object recognition from binary and bipolar edge images for visual prostheses
Jung, Jae-Hyun; Pu, Tian; Peli, Eli
2017-01-01
Visual prostheses require an effective representation method because of their limited display conditions: low resolution with only two or three levels of grayscale. Edges derived from abrupt luminance changes in images carry essential information for object recognition, and binary (black and white) edge images have typically been used to convey it. However, in scenes with complex, cluttered backgrounds, the recognition rate of binary edge images by human observers is limited and additional information is required. The polarity of edges and cusps (black or white features on a gray background) carries important additional information; polarity may provide shape-from-shading information that is missing in the binary edge image, and this depth information may be restored by using bipolar edges. We compared the object recognition rates of 26 subjects viewing 16 binary edge images and the corresponding bipolar edge images to determine the possible impact of bipolar filtering in visual prostheses with 3 or more levels of grayscale. Recognition rates were higher with bipolar edge images, and the improvement was significant in scenes with complex backgrounds. The results also suggest that erroneous shape-from-shading interpretation of bipolar edges arising from pigment rather than shape boundaries may confound recognition. PMID:28458481
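The binary-versus-bipolar distinction above can be made concrete in one dimension. A minimal sketch, assuming a discrete second difference as a stand-in for a real bandpass edge filter; the signal and threshold are toy values:

```python
# Binary coding keeps only edge presence; bipolar coding also keeps the
# sign (black or white feature on a gray background), preserving which
# side of a luminance step is brighter.

def second_difference(signal):
    return [signal[i - 1] - 2 * signal[i] + signal[i + 1]
            for i in range(1, len(signal) - 1)]

def binary_edges(signal, thr=10):
    # edge present (1) or absent (0); polarity is discarded
    return [1 if abs(d) > thr else 0 for d in second_difference(signal)]

def bipolar_edges(signal, thr=10):
    # black (-1) or white (+1) features on a gray (0) background
    return [0 if abs(d) <= thr else (1 if d > 0 else -1)
            for d in second_difference(signal)]

step = [0, 0, 0, 100, 100, 100]   # a luminance step edge
```

On the step, the binary code marks two edge samples as identical, while the bipolar code marks one positive and one negative sample, which is the extra polarity cue the abstract argues supports shape from shading.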
Nguyen, Dat Tien; Park, Kang Ryoung
2016-07-21
With higher demand from users, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the gender of an observed human can easily be recognized by human perception, it remains a difficult task for computer vision systems. In this paper, we propose a new human gender recognition method that can be applied to surveillance systems, based on quality assessment of human areas in visible light and thermal camera images. Our research is novel in the following two ways: First, we utilize the combination of visible light and thermal images of the human body for the recognition task, based on quality assessment. We propose a quality measurement method to assess the quality of image regions so as to remove the effects of background regions in the recognition system. Second, by combining the features extracted using the histogram of oriented gradients (HOG) method with the measured qualities of image regions, we form a new image feature, called the weighted HOG (wHOG), which is used for efficient gender recognition. Experimental results show that our method produces more accurate results than the state-of-the-art recognition method that uses human body images.
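The weighting idea above, scaling per-region gradient histograms by a quality score before concatenation, can be sketched minimally. The toy one-cell histogram and the quality weights are illustrative assumptions, not the authors' exact wHOG formulation:

```python
# Per-region HOG histograms are scaled by a quality weight before
# concatenation, so low-quality (e.g. background-dominated) regions
# contribute less to the final feature vector.
import math

def orientation_histogram(gx, gy, bins=4):
    """Unsigned-orientation histogram of gradient samples (toy HOG cell)."""
    hist = [0.0] * bins
    for x, y in zip(gx, gy):
        angle = math.atan2(y, x) % math.pi          # unsigned, in [0, pi)
        hist[min(int(angle / math.pi * bins), bins - 1)] += math.hypot(x, y)
    return hist

def weighted_hog(regions, qualities):
    """Concatenate per-region histograms, each scaled by its quality weight.

    regions: list of (gx, gy) gradient-sample lists, one pair per region.
    """
    feature = []
    for (gx, gy), q in zip(regions, qualities):
        feature.extend(q * v for v in orientation_histogram(gx, gy))
    return feature

feature = weighted_hog([([1, 0], [0, 1])], [2.0])  # one cell, weight 2.0
```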
Stephenson, Jennifer
2007-03-01
Line drawings are commonly used as communication symbols for individuals with severe intellectual disabilities. This study investigated the effect of color on the recognition and use of line drawings by young children with severe intellectual disabilities and poor verbal comprehension who were beginning picture users. Drawings where the color of the picture matched the object, drawings where the color did not match the object, and black-and-white line drawings were used. Tentative findings suggest that some students with intellectual disabilities may find it more difficult to recognize and use line drawings where the color does not match the object than line drawings where the color of the drawing does match the color of the object.
Keijser, Jan N; van Heuvelen, Marieke J G; Nyakas, Csaba; Tóth, Kata; Schoemaker, Regien G; Zeinstra, Edzard; van der Zee, Eddy A
2017-01-01
Whole body vibration (WBV) is a form of physical stimulation via mechanical vibrations transmitted to a subject. It is assumed that WBV induces sensory stimulation in cortical brain regions through the activation of skin and muscle receptors responding to the vibration. The effects of WBV on muscle strength are well described; however, little is known about its impact on the brain. Recently, it was shown in humans that WBV improves attention in an acute WBV protocol. Preclinical research is needed to unravel the underlying brain mechanism. As a first step, we examined whether chronic WBV improves attention in mice. A custom-made vibrating platform for mice with low-intensity vibrations was used. Male CD1 mice (3 months of age) received five weeks of WBV (30 Hz; 1.9 G), five days a week, in sessions of five (n=12) or 30 (n=10) minutes. Control mice (pseudo-WBV; n=12 and 10 for the five- and 30-minute sessions, respectively) were treated in a similar way but did not receive the actual vibration. Object recognition tasks were used as an attention test (novel and spatial object recognition - the primary outcome measure). A balance beam was used for motor performance, serving as a secondary outcome measure. WBV sessions of five minutes (but not of 30 minutes) improved balance beam performance (mice gained 28% in time needed to cross the beam) and novel object recognition (mice paid significantly more attention to the novel object) as compared to pseudo-WBV, but no change was found in spatial object performance (mice did not notice the relocation). Although the 30-minute WBV sessions were not beneficial, they impaired neither attention nor motor performance. These results show that brief sessions of WBV improve, next to motor performance, attention for object recognition, but not for the spatial cues of the objects. The selective improvement of attention in mice opens the avenue to unravel the underlying brain mechanisms.
Advanced miniature processing hardware for ATR applications
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Daud, Taher (Inventor); Thakoor, Anikumar (Inventor)
2003-01-01
A Hybrid Optoelectronic Neural Object Recognition System (HONORS) is disclosed, comprising two major building blocks: (1) an advanced grayscale optical correlator (OC) and (2) a massively parallel three-dimensional neural processor. The optical correlator, with its inherent advantages in parallel processing and shift invariance, is used for target-of-interest (TOI) detection and segmentation. The three-dimensional neural processor, with its robust neural learning capability, is used for target classification and identification. The hybrid optoelectronic neural object recognition system, with its powerful combination of optical processing and neural networks, enables real-time, large-frame, automatic target recognition (ATR).
Target recognition based on convolutional neural network
NASA Astrophysics Data System (ADS)
Wang, Liqiang; Wang, Xin; Xi, Fubiao; Dong, Jian
2017-11-01
An important part of object target recognition is feature extraction, which can be divided into manual feature extraction and automatic feature extraction. The traditional neural network is one of the automatic feature extraction methods, but its global connectivity carries a high risk of over-fitting. The deep learning algorithm used in this paper is a hierarchical automatic feature extraction method, trained layer by layer as a convolutional neural network (CNN), which extracts features from lower layers to higher layers. The resulting features are more discriminative, which benefits object target recognition.
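The layer-by-layer extraction described above can be sketched in one dimension: each layer convolves its input with a local kernel, applies a ReLU, and pools, so higher layers respond to increasingly abstract patterns. The kernels and signal here are toy assumptions, not the paper's trained network:

```python
# Minimal hierarchical feature extractor: conv -> ReLU -> max-pool,
# repeated once per layer, each layer consuming the previous layer's output.

def conv1d(x, kernel):
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def relu(x):
    return [max(0.0, v) for v in x]

def max_pool(x, size=2):
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, size)]

def extract_features(signal, kernels):
    """Stack conv -> ReLU -> pool, one kernel per layer."""
    out = signal
    for kernel in kernels:
        out = max_pool(relu(conv1d(out, kernel)))
    return out

# layer 1 detects local changes, layer 2 aggregates them
feats = extract_features([0, 1, 0, 0, 1, 1, 0, 0], [[1, -1], [1, 1]])
```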
Learning object-to-class kernels for scene classification.
Zhang, Lei; Zhen, Xiantong; Shao, Ling
2014-08-01
High-level image representations have drawn increasing attention in visual recognition, e.g., scene classification, since the invention of the object bank. The object bank represents an image as a response map of a large number of pretrained object detectors and has achieved superior performance for visual recognition. In this paper, based on the object bank representation, we propose object-to-class (O2C) distances to model scene images. In particular, four variants of the O2C distance are presented, and with the O2C distances we can represent images from the object bank in lower-dimensional but more discriminative spaces, called distance spaces, which are spanned by the O2C distances. Because the O2C distances are computed explicitly from the object bank, the obtained representations carry more semantic meaning. To combine the discriminative ability of the O2C distances across all scene classes, we further propose to kernelize the distance representation for the final classification. We have conducted extensive experiments on four benchmark data sets, UIUC-Sports, Scene-15, MIT Indoor, and Caltech-101, which demonstrate that the proposed approaches significantly improve the original object bank approach and achieve state-of-the-art performance.
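One plausible reading of an object-to-class distance is sketched below: each image is an object-bank response vector, and its distance to a class is the minimum distance to that class's training vectors. This nearest-exemplar variant and the toy vectors are assumptions for illustration; the paper presents four variants:

```python
# Re-represent an image as its vector of distances to every scene class,
# which is the lower-dimensional "distance space" described above.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def o2c_distance(image_vec, class_vecs):
    """Distance from one image to one class (nearest-exemplar variant)."""
    return min(euclidean(image_vec, v) for v in class_vecs)

def o2c_representation(image_vec, classes):
    """The image's distance-space representation: one distance per class."""
    return [o2c_distance(image_vec, vecs) for vecs in classes]

# toy object-bank responses: [person-detector score, car-detector score]
street = [[0.1, 0.9], [0.2, 0.8]]   # "street" training images
office = [[0.9, 0.1], [0.8, 0.0]]   # "office" training images
rep = o2c_representation([0.15, 0.85], [street, office])
```

The dimensionality of `rep` equals the number of classes rather than the number of object detectors, which is how the distance space stays compact regardless of object-bank size.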
[Influence of object material and inter-trial interval on novel object recognition test in mice].
Li, Sheng-jian; Huang, Zhu-yan; Ye, Yi-lu; Yu, Yue-ping; Zhang, Wei-ping; Wei, Er-qing; Zhang, Qi
2014-05-01
To investigate the efficacy of the novel object recognition (NOR) test for assessing learning and memory ability in ICR mice under different experimental conditions. One hundred and thirty male ICR mice were randomly divided into 10 groups: 4 groups for different inter-trial intervals (ITI: 10 min, 90 min, 4 h, 24 h), 4 groups for different object materials (wood-wood, plastic-plastic, plastic-wood, wood-plastic) and 2 groups for repeated testing (measured once a day or every 3 days, three times in total in each group). The locomotor tracks in the open field were recorded. The amount of time spent exploring the novel and familiar objects, the discrimination ratio (DR) and the discrimination index (DI) were analyzed. Compared with the familiar object, DR and DI for the novel object were both increased at ITIs of 10 min and 90 min (P<0.01). Exploring time, DR and DI were greatly influenced by the object materials; DR and DI remained stable when identical object materials were used, and the NOR test could be repeated in the same batch of mice. The NOR test can be used to assess learning and memory ability in mice at shorter ITIs and with identical materials, and it can be performed repeatedly.
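The two NOR measures analyzed above can be written out explicitly. These are the formulas commonly used in the novel-object-recognition literature (exact conventions vary across labs, so treat them as an assumption): DR is the fraction of exploration spent on the novel object and DI is the normalized difference, so DR = 0.5 and DI = 0 indicate chance performance:

```python
# Discrimination ratio (DR) and discrimination index (DI) from raw
# exploration times, in seconds, for the novel and familiar objects.

def discrimination_ratio(t_novel, t_familiar):
    return t_novel / (t_novel + t_familiar)

def discrimination_index(t_novel, t_familiar):
    return (t_novel - t_familiar) / (t_novel + t_familiar)

dr = discrimination_ratio(30.0, 10.0)   # 30 s on novel, 10 s on familiar
di = discrimination_index(30.0, 10.0)
```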
Grossberg, Stephen; Markowitz, Jeffrey; Cao, Yongqiang
2011-12-01
Visual object recognition is an essential accomplishment of advanced brains. Object recognition needs to be tolerant, or invariant, with respect to changes in object position, size, and view. In monkeys and humans, a key area for recognition is the anterior inferotemporal cortex (ITa). Recent neurophysiological data show that ITa cells with high object selectivity often have low position tolerance. We propose a neural model whose cells learn to simulate this tradeoff, as well as ITa responses to image morphs, while explaining how invariant recognition properties may arise in stages due to processes across multiple cortical areas. These processes include the cortical magnification factor, multiple receptive field sizes, and top-down attentive matching and learning properties that may be tuned by task requirements to attend to either concrete or abstract visual features with different levels of vigilance. The model predicts that data from the tradeoff and image morph tasks emerge from different levels of vigilance in the animals performing them. This result illustrates how different vigilance requirements of a task may change the course of category learning, notably the critical features that are attended and incorporated into learned category prototypes. The model outlines a path for developing an animal model of how defective vigilance control can lead to symptoms of various mental disorders, such as autism and amnesia. Copyright © 2011 Elsevier Ltd. All rights reserved.
Learning and Forgetting New Names and Objects in MCI and AD
ERIC Educational Resources Information Center
Gronholm-Nyman, Petra; Rinne, Juha O.; Laine, Matti
2010-01-01
We studied how subjects with mild cognitive impairment (MCI), early Alzheimer's disease (AD) and age-matched controls learned and maintained the names of unfamiliar objects that were trained with or without semantic support (object definitions). Naming performance, phonological cueing, incidental learning of the definitions and recognition of the…
Neural network application for thermal image recognition of low-resolution objects
NASA Astrophysics Data System (ADS)
Fang, Yi-Chin; Wu, Bo-Wen
2007-02-01
In the ever-changing situation on a battle field, accurate recognition of a distant object is critical to a commander's decision-making and the general public's safety. Efficiently distinguishing between an enemy's armoured vehicles and ordinary civilian houses under all weather conditions has become an important research topic. This study presents a system for recognizing an armoured vehicle by distinguishing marks and contours. The characteristics of 12 different shapes and 12 characters are used to explore thermal image recognition under the circumstance of long distance and low resolution. Although the recognition capability of human eyes is superior to that of artificial intelligence under normal conditions, it tends to deteriorate substantially under long-distance and low-resolution scenarios. This study presents an effective method for choosing features and processing images. The artificial neural network technique is applied to further improve the probability of accurate recognition well beyond the limit of the recognition capability of human eyes.
Warmth of familiarity and chill of error: affective consequences of recognition decisions.
Chetverikov, Andrey
2014-04-01
The present research aimed to assess the effect of recognition decision on subsequent affective evaluations of recognised and non-recognised objects. Consistent with the proposed account of post-decisional preferences, results showed that the effect of recognition on preferences depends upon objective familiarity. If stimuli are recognised, liking ratings are positively associated with exposure frequency; if stimuli are not recognised, this link is either absent (Experiment 1) or negative (Experiments 2 and 3). This interaction between familiarity and recognition exists even when recognition accuracy is at chance level and the "mere exposure" effect is absent. Finally, data obtained from repeated measurements of preferences and using manipulations of task order confirm that recognition decisions have a causal influence on preferences. The findings suggest that affective evaluation can provide fine-grained access to the efficacy of cognitive processing even in simple cognitive tasks.
Object recognition and pose estimation of planar objects from range data
NASA Technical Reports Server (NTRS)
Pendleton, Thomas W.; Chien, Chiun Hong; Littlefield, Mark L.; Magee, Michael
1994-01-01
The Extravehicular Activity Helper/Retriever (EVAHR) is a robotic device currently under development at the NASA Johnson Space Center that is designed to fetch objects or to assist in retrieving an astronaut who may have become inadvertently de-tethered. The EVAHR will be required to exhibit a high degree of intelligent autonomous operation and will base much of its reasoning upon information obtained from one or more three-dimensional sensors that it will carry and control. At the highest level of visual cognition and reasoning, the EVAHR will be required to detect objects, recognize them, and estimate their spatial orientation and location. The recognition phase and estimation of spatial pose will depend on the ability of the vision system to reliably extract geometric features of the objects such as whether the surface topologies observed are planar or curved and the spatial relationships between the component surfaces. In order to achieve these tasks, three-dimensional sensing of the operational environment and objects in the environment will therefore be essential. One of the sensors being considered to provide image data for object recognition and pose estimation is a phase-shift laser scanner. The characteristics of the data provided by this scanner have been studied and algorithms have been developed for segmenting range images into planar surfaces, extracting basic features such as surface area, and recognizing the object based on the characteristics of extracted features. Also, an approach has been developed for estimating the spatial orientation and location of the recognized object based on orientations of extracted planes and their intersection points. 
This paper presents some of the algorithms that have been developed for the purpose of recognizing and estimating the pose of objects as viewed by the laser scanner, and characterizes the desirability and utility of these algorithms within the context of the scanner itself, considering data quality and noise.
Deep learning based hand gesture recognition in complex scenes
NASA Astrophysics Data System (ADS)
Ni, Zihan; Sang, Nong; Tan, Cheng
2018-03-01
Recently, region-based convolutional neural networks (R-CNNs) have achieved significant success in the field of object detection, but their accuracy is not high for small and similar objects, such as gestures. To solve this problem, we present an online hard example testing (OHET) technique that evaluates the confidence of the R-CNN's outputs and regards outputs with low confidence as hard examples. In this paper, we propose a cascaded network to recognize gestures. First, we use the region-based fully convolutional network (R-FCN), which is capable of detecting small objects, to detect gestures, and then use OHET to select the hard examples. To enhance the accuracy of gesture recognition, we re-classify the hard examples with a VGG-19 classification network to obtain the final output of the gesture recognition system. In comparative experiments with other methods, the cascaded network combined with OHET reached a state-of-the-art result of 99.3% mAP on small and similar gestures in complex scenes.
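The control flow of the cascade above is simple to sketch: confident first-stage outputs pass through, while low-confidence outputs are routed to a second, stronger classifier. The threshold, labels, and both stand-in "models" are assumptions for illustration:

```python
# Route low-confidence detections (hard examples) to a second-stage
# classifier; keep confident first-stage outputs as-is.

def cascade(detections, reclassify, threshold=0.8):
    """detections: list of (label, confidence) from the first stage."""
    results = []
    for label, confidence in detections:
        if confidence >= threshold:
            results.append(label)              # trust the first stage
        else:
            results.append(reclassify(label))  # hard example: second stage
    return results

# stand-in second stage: a lookup correcting known confusions
fixes = {"thumb": "fist"}
out = cascade([("palm", 0.95), ("thumb", 0.40)], lambda l: fixes.get(l, l))
```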
Multi-channel feature dictionaries for RGB-D object recognition
NASA Astrophysics Data System (ADS)
Lan, Xiaodong; Li, Qiming; Chong, Mina; Song, Jian; Li, Jun
2018-04-01
Hierarchical matching pursuit (HMP) is a popular feature learning method for RGB-D object recognition. However, the feature representation in HMP, with only one dictionary for the RGB channels, does not capture sufficient visual information. In this paper, we propose a multi-channel feature dictionary based feature learning method for RGB-D object recognition. Feature extraction in the proposed method consists of two layers, and the K-SVD algorithm is used to learn the dictionaries for sparse coding in both. In the first layer, we obtain features by max pooling the sparse codes of the pixels in each cell, and the features of the cells in a patch are concatenated to form a joint patch feature. The first-layer joint patch features are then used to learn the dictionary and sparse codes of the second layer. Finally, spatial pyramid pooling can be applied to the joint patch features of either layer to generate the final object features. Experimental results show that our method, with first- or second-layer features, obtains comparable or better performance than several published state-of-the-art methods.
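The pooling-and-concatenation step of the first layer can be sketched directly. The "sparse codes" below are toy vectors standing in for codes that real HMP would obtain from K-SVD-learned dictionaries; the cell/patch layout is an illustrative assumption:

```python
# First-layer aggregation: max-pool the sparse codes of the pixels in each
# cell, then concatenate the cell features of a patch into one joint
# patch feature.

def max_pool_codes(codes):
    """Element-wise max over the sparse codes of the pixels in one cell."""
    return [max(c[i] for c in codes) for i in range(len(codes[0]))]

def patch_feature(cells):
    """Concatenate pooled cell features into one joint patch feature."""
    feature = []
    for cell in cells:
        feature.extend(max_pool_codes(cell))
    return feature

cell_a = [[0.0, 0.7], [0.3, 0.1]]   # two pixels, 2-atom sparse codes
cell_b = [[0.5, 0.0], [0.2, 0.9]]
patch = patch_feature([cell_a, cell_b])
```

In the paper's scheme these joint patch features would themselves be sparse-coded against a second-layer dictionary, repeating the same pool-and-concatenate aggregation one level up.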
Bidirectional Modulation of Recognition Memory
Ho, Jonathan W.; Poeta, Devon L.; Jacobson, Tara K.; Zolnik, Timothy A.; Neske, Garrett T.; Connors, Barry W.
2015-01-01
Perirhinal cortex (PER) has a well established role in the familiarity-based recognition of individual items and objects. For example, animals and humans with perirhinal damage are unable to distinguish familiar from novel objects in recognition memory tasks. In the normal brain, perirhinal neurons respond to novelty and familiarity by increasing or decreasing firing rates. Recent work also implicates oscillatory activity in the low-beta and low-gamma frequency bands in sensory detection, perception, and recognition. Using optogenetic methods in a spontaneous object exploration (SOR) task, we altered recognition memory performance in rats. In the SOR task, normal rats preferentially explore novel images over familiar ones. We modulated exploratory behavior in this task by optically stimulating channelrhodopsin-expressing perirhinal neurons at various frequencies while rats looked at novel or familiar 2D images. Stimulation at 30–40 Hz during looking caused rats to treat a familiar image as if it were novel by increasing time looking at the image. Stimulation at 30–40 Hz was not effective in increasing exploration of novel images. Stimulation at 10–15 Hz caused animals to treat a novel image as familiar by decreasing time looking at the image, but did not affect looking times for images that were already familiar. We conclude that optical stimulation of PER at different frequencies can alter visual recognition memory bidirectionally. SIGNIFICANCE STATEMENT Recognition of novelty and familiarity are important for learning, memory, and decision making. Perirhinal cortex (PER) has a well established role in the familiarity-based recognition of individual items and objects, but how novelty and familiarity are encoded and transmitted in the brain is not known. Perirhinal neurons respond to novelty and familiarity by changing firing rates, but recent work suggests that brain oscillations may also be important for recognition. 
In this study, we showed that stimulation of the PER could increase or decrease exploration of novel and familiar images depending on the frequency of stimulation. Our findings suggest that optical stimulation of PER at specific frequencies can predictably alter recognition memory. PMID:26424881
Joint object and action recognition via fusion of partially observable surveillance imagery data
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir; Chan, Alex L.
2017-05-01
Partially observable group activities (POGA) occurring in confined spaces are epitomized by the limited observability of the objects and actions involved. In many POGA scenarios, different objects are used by human operators in the conduct of various operations. In this paper, we describe the ontology of such POGA in the context of In-Vehicle Group Activity (IVGA) recognition. We first describe the virtue of ontology modeling in the context of IVGA and show how such an ontology and a priori knowledge about classes of in-vehicle activities can be fused to infer human actions, which consequentially leads to understanding of human activity inside the confined space of a vehicle. We treat the "action-object" problem as a duality: we postulate a correlation between observed human actions and the object being utilized in those actions, and conversely, if a handled object is recognized, we may be able to anticipate a number of actions that are likely to be performed on it. In this study, we use partially observable human postural sequences to recognize actions. Inspired by the learning capability of convolutional neural networks (CNNs), we present an architecture design with a new CNN model to learn "action-object" perception from surveillance videos, and we apply a sequential Deep Hidden Markov Model (DHMM) as a post-processor to the CNN to decode realized observations into recognized actions and activities. To generate the imagery data sets needed for training and testing these new methods, we use the IRIS virtual simulation software to generate high-fidelity, dynamic animated scenarios that depict in-vehicle group activities under different operational contexts. The results of our comparative investigation are discussed and presented in detail.
How does the brain solve visual object recognition?
Zoccolan, Davide; Rust, Nicole C.
2012-01-01
Mounting evidence suggests that “core object recognition,” the ability to rapidly recognize objects despite substantial appearance variation, is solved in the brain via a cascade of reflexive, largely feedforward computations that culminate in a powerful neuronal representation in the inferior temporal cortex. However, the algorithm that produces this solution remains little-understood. Here we review evidence ranging from individual neurons, to neuronal populations, to behavior, to computational models. We propose that understanding this algorithm will require using neuronal and psychophysical data to sift through many computational models, each based on building blocks of small, canonical sub-networks with a common functional goal. PMID:22325196
Extraction of edge-based and region-based features for object recognition
NASA Astrophysics Data System (ADS)
Coutts, Benjamin; Ravi, Srinivas; Hu, Gongzhu; Shrikhande, Neelima
1993-08-01
One of the central problems of computer vision is object recognition. A catalogue of model objects is described as a set of features such as edges and surfaces. The same features are extracted from the scene and matched against the models for object recognition. Edges and surfaces extracted from the scenes are often noisy and imperfect. In this paper algorithms are described for improving low level edge and surface features. Existing edge extraction algorithms are applied to the intensity image to obtain edge features. Initial edges are traced by following directions of the current contour. These are improved by using corresponding depth and intensity information for decision making at branch points. Surface fitting routines are applied to the range image to obtain planar surface patches. An algorithm of region growing is developed that starts with a coarse segmentation and uses quadric surface fitting to iteratively merge adjacent regions into quadric surfaces based on approximate orthogonal distance regression. Surface information obtained is returned to the edge extraction routine to detect and remove fake edges. This process repeats until no more merging or edge improvement can take place. Both synthetic (with Gaussian noise) and real images containing multiple object scenes have been tested using the merging criteria. Results appeared quite encouraging.
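The iterative merge step above can be reduced to one dimension for illustration: adjacent segments are merged whenever a single fit explains both within a tolerance, analogous to merging adjacent image regions into one quadric surface. The linear (rather than quadric) fit, the error measure, and the threshold are illustrative assumptions:

```python
# Region growing by merging: repeatedly join adjacent regions while a
# single least-squares line still fits the union within tolerance.

def fit_error(points):
    """Sum of squared residuals of the best line through (x, y) points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    slope = sum((x - mx) * (y - my) for x, y in points) / sxx if sxx else 0.0
    return sum((y - (my + slope * (x - mx))) ** 2 for x, y in points)

def merge_regions(regions, tol=1e-6):
    """Merge neighbors while the joint fit stays good; else move on."""
    regions = [list(r) for r in regions]
    i = 0
    while i < len(regions) - 1:
        joined = regions[i] + regions[i + 1]
        if fit_error(joined) <= tol:
            regions[i:i + 2] = [joined]   # accept the merge, retry here
        else:
            i += 1
    return regions

# two collinear segments followed by a segment with a different slope
segs = [[(0, 0), (1, 1)], [(2, 2), (3, 3)], [(4, 0), (5, -1)]]
merged = merge_regions(segs)
```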
The memory state heuristic: A formal model based on repeated recognition judgments.
Castela, Marta; Erdfelder, Edgar
2017-02-01
The recognition heuristic (RH) theory predicts that, in comparative judgment tasks, if one object is recognized and the other is not, the recognized one is chosen. The memory-state heuristic (MSH) extends the RH by assuming that choices are not affected by recognition judgments per se, but by the memory states underlying these judgments (i.e., recognition certainty, uncertainty, or rejection certainty). Specifically, the larger the discrepancy between memory states, the larger the probability of choosing the object in the higher state. The typical RH paradigm does not allow estimation of the underlying memory states because it is unknown whether the objects were previously experienced or not. Therefore, we extended the paradigm by repeating the recognition task twice. In line with high threshold models of recognition, we assumed that inconsistent recognition judgments result from uncertainty whereas consistent judgments most likely result from memory certainty. In Experiment 1, we fitted 2 nested multinomial models to the data: an MSH model that formalizes the relation between memory states and binary choices explicitly and an approximate model that ignores the (unlikely) possibility of consistent guesses. Both models provided converging results. As predicted, reliance on recognition increased with the discrepancy in the underlying memory states. In Experiment 2, we replicated these results and found support for choice consistency predictions of the MSH. Additionally, recognition and choice latencies were in agreement with the MSH in both experiments. Finally, we validated critical parameters of our MSH model through a cross-validation method and a third experiment. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
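The memory-state logic above can be made explicit: two recognition judgments per object map to a memory state (consistent "yes" to recognition certainty, consistent "no" to rejection certainty, inconsistent judgments to uncertainty), and the predicted probability of choosing the object in the higher state grows with the state discrepancy. The numeric choice probabilities below are illustrative assumptions, not estimates from the authors' multinomial model:

```python
# Map repeated recognition judgments to memory states and predict choice.

STATE = {(True, True): 2,    # recognition certainty
         (True, False): 1,   # uncertainty (inconsistent judgments)
         (False, True): 1,
         (False, False): 0}  # rejection certainty

# assumed choice probability for each absolute state discrepancy
P_BY_GAP = {0: 0.5, 1: 0.7, 2: 0.9}

def p_choose_first(judgments_a, judgments_b):
    """Predicted probability of choosing object A over object B."""
    gap = STATE[judgments_a] - STATE[judgments_b]
    p = P_BY_GAP[abs(gap)]
    return p if gap >= 0 else 1.0 - p

# recognition certainty vs. rejection certainty: the largest discrepancy
p = p_choose_first((True, True), (False, False))
```

Note the monotonicity: the model predicts stronger reliance on recognition as the discrepancy between memory states grows, which is the MSH's core claim.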
Characterizing age-related decline of recognition memory and brain activation profile in mice.
Belblidia, Hassina; Leger, Marianne; Abdelmalek, Abdelouadoud; Quiedeville, Anne; Calocer, Floriane; Boulouard, Michel; Jozet-Alves, Christelle; Freret, Thomas; Schumann-Bard, Pascale
2018-06-01
Episodic memory decline is one of the earlier deficits occurring during normal aging in humans. The question of spatial versus non-spatial sensitivity to age-related memory decline is important for a full understanding of these changes. Here, we characterized the effect of normal aging on both non-spatial (object) and spatial (object location) memory performance, as well as on the associated neuronal activation, in mice. Novel-object recognition (NOR) and object-location recognition (OLR) tests, respectively assessing the identity and spatial features of object memory, were examined at different ages. We show that memory performance in both tests was altered by aging as early as 15 months of age: NOR memory was partially impaired, whereas OLR memory was fully disrupted. Brain activation profiles were assessed for both tests using immunohistochemical detection of c-Fos (a neuronal activation marker) in 3- and 15-month-old mice. Normal performance in the NOR task by 3-month-old mice was associated with activation of the hippocampus and a trend towards activation in the perirhinal cortex, a pattern that differed significantly from that of 15-month-old mice. During the OLR task, brain activation took place in the hippocampus in 3-month-old but not significantly in 15-month-old mice, which were fully impaired at this task. These differential alterations of object and object-location recognition memory may be linked to differential alteration of the neuronal networks supporting these tasks. Copyright © 2018 Elsevier Inc. All rights reserved.
Automatic target recognition and detection in infrared imagery under cluttered background
NASA Astrophysics Data System (ADS)
Gundogdu, Erhan; Koç, Aykut; Alatan, A. Aydın.
2017-10-01
Visual object classification has long been studied in the visible spectrum using conventional cameras. As the number of labeled images has grown, it has become possible to train deep Convolutional Neural Networks (CNNs) with significant numbers of parameters. As infrared (IR) sensor technology has improved over the last two decades, labeled images extracted from IR sensors have begun to be used for object detection and recognition tasks. We address the problem of infrared object recognition and detection by exploiting 15K real-field images captured with long-wave and mid-wave IR sensors. For feature learning, a stacked denoising autoencoder is trained on this IR dataset. To recognize the objects, the trained stacked denoising autoencoder is fine-tuned according to the binary classification loss of the target object. Once training is completed, the test samples are propagated through the network, and the probability of the test sample belonging to a class is computed. Moreover, the trained classifier is utilized in a detect-by-classification method, where classification is performed on a set of candidate object boxes and the maximum confidence score in a particular location is accepted as the score of the detected object. To decrease the computational complexity, the detection step is not run at every frame; instead, an efficient correlation filter based tracker is used, and detection is performed only when the tracker confidence falls below a pre-defined threshold. Experiments conducted on the real-field images demonstrate that the proposed detection and tracking framework produces satisfactory results for detecting tanks under cluttered background.
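The detect-by-classification step with a tracker fallback amounts to simple control flow. A toy sketch, in which a mean-intensity scorer stands in for the fine-tuned autoencoder classifier; all names, the box format, and the sliding-window candidate generator are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def candidate_boxes(height, width, size=8, stride=8):
    """Sliding-window candidate boxes as (top, left, size) tuples."""
    for top in range(0, height - size + 1, stride):
        for left in range(0, width - size + 1, stride):
            yield (top, left, size)

def patch_score(frame, box):
    """Stand-in for the fine-tuned classifier: score = mean patch intensity."""
    top, left, size = box
    return float(frame[top:top + size, left:left + size].mean())

def detect_or_track(frame, tracker_box, tracker_conf, conf_threshold=0.5):
    """Trust the cheap tracker when it is confident; otherwise detect by classification."""
    if tracker_conf >= conf_threshold:
        return tracker_box
    height, width = frame.shape
    return max(candidate_boxes(height, width), key=lambda b: patch_score(frame, b))
```

The design point is the cost asymmetry: the tracker runs every frame, while the expensive classification over all candidate boxes runs only on tracker failure.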
NASA Astrophysics Data System (ADS)
Li, Heng; Zeng, Yajie; Lu, Zhuofan; Cao, Xiaofei; Su, Xiaofan; Sui, Xiaohong; Wang, Jing; Chai, Xinyu
2018-04-01
Objective. Retinal prosthesis devices have shown great value in restoring some sight to individuals with profoundly impaired vision, but the visual acuity and visual field provided by prostheses greatly limit recipients' visual experience. In this paper, we employ computer vision approaches to expand the perceptible visual field in patients potentially implanted with a high-density retinal prosthesis while maintaining visual acuity as much as possible. Approach. We propose an optimized content-aware image retargeting method that introduces salient object detection based on color and intensity-difference contrast, aiming to remap the important information of a scene into a small visual field while preserving its original scale as much as possible. It may improve prosthetic recipients' perceived visual field and aid in performing visual tasks (e.g. object detection and object recognition). To verify our method, psychophysical experiments, detecting the number of objects and recognizing objects, were conducted under simulated prosthetic vision. As controls, we used three other image retargeting techniques: Cropping, Scaling, and seam-assisted shrinkability. Main results. Results show that our method preserves more key features and yields significantly higher recognition accuracy than the other three image retargeting methods under conditions of small visual field and low resolution. Significance. The proposed method can expand the perceived visual field of prosthesis recipients and improve their object detection and recognition performance. It suggests that our method may provide an effective option for the image processing module in future high-density retinal implants.
Deep Neural Networks as a Computational Model for Human Shape Sensitivity
Op de Beeck, Hans P.
2016-01-01
Theories of object recognition agree that shape is of primordial importance, but there is no consensus about how shape might be represented, and so far attempts to implement a model of shape perception that would work with realistic stimuli have largely failed. Recent studies suggest that state-of-the-art convolutional 'deep' neural networks (DNNs) capture important aspects of human object perception. We hypothesized that these successes might be partially related to a human-like representation of object shape. Here we demonstrate that sensitivity to shape features, characteristic of human and primate vision, emerges in DNNs when they are trained for generic object recognition on natural photographs. We show that these models explain human shape judgments for several benchmark behavioral and neural stimulus sets on which earlier models mostly failed. In particular, although never explicitly trained on such stimuli, DNNs develop acute sensitivity to minute variations in shape and to non-accidental properties that have long been thought to form the basis of object recognition. Even more strikingly, when tested with a challenging stimulus set in which shape and category membership are dissociated, the most complex model architectures capture human shape sensitivity as well as some aspects of the category structure that emerges from human judgments. As a whole, these results indicate that convolutional neural networks not only learn physically correct representations of object categories but also develop perceptually accurate representational spaces of shapes. An even more complete model of human object representations might be in sight by training deep architectures for multiple tasks, as is so characteristic of human development. PMID:27124699
Fusion of Multiple Sensing Modalities for Machine Vision
1994-05-31
Related publications include: Nair, Dinesh, and J. K. Aggarwal, "Modeling of Non-Homogeneous 3-D Objects for Thermal and Visual Image Synthesis," Pattern Recognition, in press; and Nair, Dinesh, and J. K. Aggarwal, "An Object Recognition...," 20th AIPR Workshop: Computer Vision - Meeting the Challenges, McLean, Virginia, October 1991. Personnel: Sunil Gupta (Ph.D. student); Mohan Kumar (M.S. student); Sandeep Kumar (M.S. student); Xavier Lebegue (Ph.D., Computer Engineering, August 1992).
Studying the Sky/Planets Can Drown You in Images: Machine Learning Solutions at JPL/Caltech
NASA Technical Reports Server (NTRS)
Fayyad, U. M.
1995-01-01
JPL is working to develop a domain-independent system capable of small-scale object recognition in large image databases for science analysis. Two applications discussed are the cataloging of three billion sky objects in the Sky Image Cataloging and Analysis Tool (SKICAT) and the detection of possibly one million small volcanoes visible in the Magellan synthetic aperture radar images of Venus (JPL Adaptive Recognition Tool, JARTool).
NASA Astrophysics Data System (ADS)
Graham, James; Ternovskiy, Igor V.
2013-06-01
We applied a two stage unsupervised hierarchical learning system to model complex dynamic surveillance and cyber space monitoring systems using a non-commercial version of the NeoAxis visualization software. The hierarchical scene learning and recognition approach is based on hierarchical expectation maximization, and was linked to a 3D graphics engine for validation of learning and classification results and understanding the human - autonomous system relationship. Scene recognition is performed by taking synthetically generated data and feeding it to a dynamic logic algorithm. The algorithm performs hierarchical recognition of the scene by first examining the features of the objects to determine which objects are present, and then determines the scene based on the objects present. This paper presents a framework within which low level data linked to higher-level visualization can provide support to a human operator and be evaluated in a detailed and systematic way.
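The two-stage recognition idea, first decide which objects are present from their features, then classify the scene from the set of objects, can be sketched with a crude rule-based stand-in. This deliberately replaces the paper's hierarchical expectation maximization and dynamic logic with set membership tests, and the feature/object/scene inventories are invented for illustration:

```python
# Toy two-stage hierarchy: features -> objects, then objects -> scene.
OBJECT_SIGNATURES = {
    "tank": {"tracks", "turret"},
    "truck": {"wheels", "cab"},
    "server": {"rack", "blinking_lights"},
}
SCENES = {
    frozenset({"tank", "truck"}): "military_convoy",
    frozenset({"server"}): "data_center",
}

def recognize_objects(observed_features):
    """First stage: an object is 'present' if all of its signature features were observed."""
    return {name for name, signature in OBJECT_SIGNATURES.items()
            if signature <= observed_features}

def recognize_scene(observed_features):
    """Second stage: classify the scene from the set of recognized objects."""
    return SCENES.get(frozenset(recognize_objects(observed_features)), "unknown")
```

In the paper's probabilistic version, both stages maximize model likelihoods over noisy synthetic data rather than testing exact feature sets.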
Bizot, Jean-Charles; Herpin, Alexandre; Pothion, Stéphanie; Pirot, Sylvain; Trovero, Fabrice; Ollat, Hélène
2005-07-01
The effect of chronic sulbutiamine treatment on memory was studied in rats using a spatial delayed-non-match-to-sample (DNMTS) task in a radial maze and a two-trial object recognition task. After completion of training in the DNMTS task, animals were subjected for 9 weeks to daily injections of either saline or sulbutiamine (12.5 or 25 mg/kg). Sulbutiamine did not modify memory in the DNMTS task but improved it in the object recognition task. Dizocilpine impaired both acquisition and retention of the DNMTS task in the saline-treated group, but not in the two sulbutiamine-treated groups, suggesting that sulbutiamine may counteract the amnesia induced by blockade of N-methyl-D-aspartate glutamate receptors. Taken together, these results favor a beneficial effect of sulbutiamine on working and episodic memory.
Exploring the association between visual perception abilities and reading of musical notation.
Lee, Horng-Yih
2012-06-01
In the reading of music, the acquisition of pitch information depends primarily upon the spatial position of notes, as well as upon an individual's spatial processing ability. This study investigated the relationship between the ability to read single notes and visual-spatial ability. Participants with high and low single-note reading abilities were differentiated based upon differences in musical notation-reading ability, and their spatial processing and object recognition abilities were then assessed. The group with lower note-reading ability made more errors in the mental rotation task than did the group with higher note-reading ability. In contrast, there was no significant difference between the two groups in the object recognition task. These results suggest that note-reading may be related to visual-spatial processing abilities, and not to an individual's object recognition ability.
Creating objects and object categories for studying perception and perceptual learning.
Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay
2012-11-02
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties. Many innovative and useful methods currently exist for creating novel objects and object categories (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection. 
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
Kitada, Ryo; Johnsrude, Ingrid S; Kochiyama, Takanori; Lederman, Susan J
2009-10-01
Humans can recognize common objects by touch extremely well whenever vision is unavailable. Despite its importance to a thorough understanding of human object recognition, the neuroscientific study of this topic has been relatively neglected. To date, the few published studies have addressed the haptic recognition of nonbiological objects. We now focus on haptic recognition of the human body, a particularly salient object category for touch. Neuroimaging studies demonstrate that regions of the occipito-temporal cortex are specialized for visual perception of faces (fusiform face area, FFA) and other body parts (extrastriate body area, EBA). Are the same category-sensitive regions activated when these components of the body are recognized haptically? Here, we use fMRI to compare brain organization for haptic and visual recognition of human body parts. Sixteen subjects identified exemplars of faces, hands, feet, and nonbiological control objects using vision and haptics separately. We identified two discrete regions within the fusiform gyrus (FFA and the haptic face region) that were each sensitive to both haptically and visually presented faces; however, these two regions differed significantly in their response patterns. Similarly, two regions within the lateral occipito-temporal area (EBA and the haptic body region) were each sensitive to body parts in both modalities, although the response patterns differed. Thus, although the fusiform gyrus and the lateral occipito-temporal cortex appear to exhibit modality-independent, category-sensitive activity, our results also indicate a degree of functional specialization related to sensory modality within these structures.
Dennett, Hugh W; McKone, Elinor; Tavashmi, Raka; Hall, Ashleigh; Pidcock, Madeleine; Edwards, Mark; Duchaine, Bradley
2012-06-01
Many research questions require a within-class object recognition task matched for general cognitive requirements with a face recognition task. If the object task also has high internal reliability, it can improve accuracy and power in group analyses (e.g., mean inversion effects for faces vs. objects), individual-difference studies (e.g., correlations between certain perceptual abilities and face/object recognition), and case studies in neuropsychology (e.g., whether a prosopagnosic shows a face-specific or object-general deficit). Here, we present such a task. Our Cambridge Car Memory Test (CCMT) was matched in format to the established Cambridge Face Memory Test, requiring recognition of exemplars across view and lighting change. We tested 153 young adults (93 female). Results showed high reliability (Cronbach's alpha = .84) and a range of scores suitable both for normal-range individual-difference studies and, potentially, for diagnosis of impairment. The mean for males was much higher than the mean for females. We demonstrate independence between face memory and car memory (dissociation based on sex, plus a modest correlation between the two), including where participants have high relative expertise with cars. We also show that expertise with real car makes and models of the era used in the test significantly predicts CCMT performance. Surprisingly, however, regression analyses imply that there is an effect of sex per se on the CCMT that is not attributable to a stereotypical male advantage in car expertise.
Social cues at encoding affect memory in 4-month-old infants.
Kopp, Franziska; Lindenberger, Ulman
2012-01-01
Available evidence suggests that infants use adults' social cues for learning by the second half of the first year of life. However, little is known about the short-term or long-term effects of joint attention interactions on learning and memory in younger infants. In the present study, 4-month-old infants were familiarized with visually presented objects in either of two conditions that differed in the degree of joint attention (high vs. low). Brain activity in response to familiar and novel objects was assessed immediately after the familiarization phase (immediate recognition), and following a 1-week delay (delayed recognition). The latency of the Nc component differentiated between recognition of old versus new objects. Pb amplitude and latency were affected by joint attention in delayed recognition. Moreover, the frequency of infant gaze to the experimenter during familiarization differed between the two experimental groups and modulated the Pb response. Results show that joint attention affects the mechanisms of long-term retention in 4-month-old infants. We conclude that joint attention helps children at this young age to recognize the relevance of learned items.
Emotion and Object Processing in Parkinson's Disease
ERIC Educational Resources Information Center
Cohen, Henri; Gagne, Marie-Helene; Hess, Ursula; Pourcher, Emmanuelle
2010-01-01
The neuropsychological literature on the processing of emotions in Parkinson's disease (PD) reveals conflicting evidence about the role of the basal ganglia in the recognition of facial emotions. Hence, the present study had two objectives. One was to determine the extent to which the visual processing of emotions and objects differs in PD. The…
Recognition of 3-D symmetric objects from range images in automated assembly tasks
NASA Technical Reports Server (NTRS)
Alvertos, Nicolas; Dcunha, Ivan
1990-01-01
A new technique is presented for the three-dimensional recognition of symmetric objects from range images. Beginning from the implicit representation of quadrics, a set of ten coefficients is determined for symmetric objects such as spheres, cones, cylinders, ellipsoids, and parallelepipeds. Instead of fitting these ten coefficients to smooth surface patches in the traditional curvature-based way, a new approach based on two-dimensional geometry is used. For each symmetric object, a unique set of two-dimensional curves is obtained from the various angles at which the object is intersected with a plane. Using the same ten coefficients obtained earlier and based on the discriminant method, each of these curves is classified as a parabola, circle, ellipse, or hyperbola. Each symmetric object is found to possess a unique set of these two-dimensional curves by which it can be differentiated from the others. It is shown that instead of using the three-dimensional discriminant, which requires evaluating the rank of its matrix, it is sufficient to use the two-dimensional discriminant, which requires only three arithmetic operations.
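The two-dimensional discriminant test mentioned at the end, costing exactly three arithmetic operations (one squaring, one multiplication, one subtraction), can be sketched as follows; the tolerance parameter is an assumption added for floating-point robustness:

```python
def classify_conic(A, B, C, tol=1e-9):
    """Classify the conic A*x^2 + B*xy + C*y^2 + D*x + E*y + F = 0 by its discriminant."""
    disc = B * B - 4.0 * A * C  # one squaring, one multiplication, one subtraction
    if abs(disc) < tol:
        return "parabola"
    if disc > 0:
        return "hyperbola"
    # disc < 0: an ellipse; a circle in the special case A == C and B == 0
    if abs(A - C) < tol and abs(B) < tol:
        return "circle"
    return "ellipse"
```

Note that only the three quadratic coefficients enter the test; the linear and constant terms affect position and size, not the curve type.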
Age-related impairments in active learning and strategic visual exploration.
Brandstatt, Kelly L; Voss, Joel L
2014-01-01
Old age could impair memory by disrupting learning strategies used by younger individuals. We tested this possibility by manipulating the ability to use visual-exploration strategies during learning. Subjects controlled visual exploration during active learning, thus permitting the use of strategies, whereas strategies were limited during passive learning via predetermined exploration patterns. Performance on tests of object recognition and object-location recall was matched for younger and older subjects for objects studied passively, when learning strategies were restricted. Active learning improved object recognition similarly for younger and older subjects. However, active learning improved object-location recall for younger subjects, but not older subjects. Exploration patterns were used to identify a learning strategy involving repeat viewing. Older subjects used this strategy less frequently and it provided less memory benefit compared to younger subjects. In previous experiments, we linked hippocampal-prefrontal co-activation to improvements in object-location recall from active learning and to the exploration strategy. Collectively, these findings suggest that age-related memory problems result partly from impaired strategies during learning, potentially due to reduced hippocampal-prefrontal co-engagement.
ROBOSIGHT: Robotic Vision System For Inspection And Manipulation
NASA Astrophysics Data System (ADS)
Trivedi, Mohan M.; Chen, ChuXin; Marapane, Suresh
1989-02-01
Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are utilized for developing a higher-level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance, and navigation. In this paper we describe efforts directed towards developing a vision system useful for performing various robotic inspection and manipulation tasks. The system utilizes gray-scale images and can be viewed as a model-based system. It includes general-purpose image analysis modules as well as special-purpose, task-dependent object status recognition modules. Experiments are described to verify the robust performance of the integrated system using a robotic testbed.
Salient man-made structure detection in infrared images
NASA Astrophysics Data System (ADS)
Li, Dong-jie; Zhou, Fu-gen; Jin, Ting
2013-09-01
Target detection, segmentation, and recognition is an active research topic in image processing and pattern recognition, and salient area or object detection is one of the core technologies of precision-guided weapons. In this paper, we detect salient objects in a series of input infrared images using the classical feature integration theory and Itti's visual attention system. To find the salient object in an image accurately, we present a new method that solves the edge blur problem by calculating and using an edge mask. We also greatly improve the computing speed by improving the center-surround differences method: unlike the traditional algorithm, we calculate the center-surround differences through rows and columns separately. Experimental results show that our method detects salient objects accurately and rapidly.
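Computing center-surround differences "through rows and columns separately" exploits the separability of box filters: a 2-D average costs two 1-D passes instead of one k×k pass. A minimal sketch of that idea (the specific radii and the use of plain box filters are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np

def box_blur_separable(img, radius):
    """Box filter applied separably: average along rows first, then along columns."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, kernel, mode="same")

def center_surround(img, center_radius=1, surround_radius=4):
    """Center-surround difference: |fine-scale blur - coarse-scale blur|."""
    return np.abs(box_blur_separable(img, center_radius)
                  - box_blur_separable(img, surround_radius))
```

For a k×k window, the separable form does O(2k) work per pixel instead of O(k²), which is where the speed-up comes from.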
Marked Object Recognition Multitouch Screen Printed Touchpad for Interactive Applications.
Nunes, Jivago Serrado; Castro, Nelson; Gonçalves, Sergio; Pereira, Nélson; Correia, Vitor; Lanceros-Mendez, Senentxu
2017-12-01
The market for interactive platforms is rapidly growing, and touchscreens have been incorporated in an increasing number of devices. Thus, the area of smart objects and devices is strongly increasing by adding interactive touch and multimedia content, leading to new uses and capabilities. In this work, a flexible screen printed sensor matrix is fabricated based on silver ink in a polyethylene terephthalate (PET) substrate. Diamond shaped capacitive electrodes coupled with conventional capacitive reading electronics enables fabrication of a highly functional capacitive touchpad, and also allows for the identification of marked objects. For the latter, the capacitive signatures are identified by intersecting points and distances between them. Thus, this work demonstrates the applicability of a low cost method using royalty-free geometries and technologies for the development of flexible multitouch touchpads for the implementation of interactive and object recognition applications.
Environmental modeling and recognition for an autonomous land vehicle
NASA Technical Reports Server (NTRS)
Lawton, D. T.; Levitt, T. S.; Mcconnell, C. C.; Nelson, P. C.
1987-01-01
An architecture for object modeling and recognition for an autonomous land vehicle is presented. Examples of objects of interest include terrain features, fields, roads, horizon features, and trees. The architecture is organized around a set of databases for generic object models and perceptual structures, a temporary memory for the instantiation of object and relational hypotheses, and a long-term memory for storing stable hypotheses that are affixed to the terrain representation. Multiple inference processes operate over these databases. The particular components described are: the perceptual structure database, the grouping processes that operate over it, schemas, and the long-term terrain database. A processing example is given that matches predictions from the long-term terrain model to imagery, extracts significant perceptual structures for consideration as potential landmarks, and extracts a relational structure to update the long-term terrain database.
Helweg, D A; Roitblat, H L; Nachtigall, P E; Hautus, M J
1996-01-01
We examined the ability of a bottlenose dolphin (Tursiops truncatus) to recognize aspect-dependent objects using echolocation. An aspect-dependent object such as a cube produces acoustically different echoes at different angles relative to the echolocation signal. The dolphin recognized the objects even though the objects were free to rotate and sway. A linear discriminant analysis and nearest centroid classifier could classify the objects using average amplitude, center frequency, and bandwidth of object echoes. The results show that dolphins can use varying acoustic properties to recognize constant objects and suggest that aspect-independent representations may be formed by combining information gleaned from multiple echoes.
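The nearest-centroid classifier applied to the three echo features (average amplitude, center frequency, bandwidth) is straightforward to sketch. The feature values and class labels below are invented for illustration; only the classifier structure follows the abstract:

```python
import numpy as np

def fit_centroids(X, y):
    """One centroid (mean feature vector) per object class."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def classify_echo(centroids, features):
    """Assign an echo to the class whose centroid is nearest (Euclidean distance)."""
    features = np.asarray(features, dtype=float)
    return min(centroids, key=lambda label: np.linalg.norm(features - centroids[label]))
```

In the study's setting, such a classifier succeeding on echoes from freely rotating objects suggests the averaged features stay diagnostic across aspect changes.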
Volumetric segmentation of range images for printed circuit board inspection
NASA Astrophysics Data System (ADS)
Van Dop, Erik R.; Regtien, Paul P. L.
1996-10-01
Conventional computer vision approaches to object recognition and pose estimation employ 2D grey-value or color imaging. As a consequence, these images contain information only about projections of a 3D scene. The subsequent image processing is then difficult, because object coordinates are represented with just image coordinates. Only complicated low-level vision modules such as depth from stereo or depth from shading can recover some of the surface geometry of the scene. Recent advances in fast range imaging have, however, paved the way towards 3D computer vision, since range data of the scene can now be obtained with sufficient accuracy and speed for object recognition and pose estimation purposes. This article proposes the coded-light range-imaging method together with superquadric segmentation to approach this task. Superquadric segments are volumetric primitives that describe global object properties with five parameters, which provide the main features for object recognition. In addition, the principal axes of a superquadric segment determine the pose of an object in the scene. The volumetric segmentation of a range image can be used to detect missing, false, or badly placed components on assembled printed circuit boards. Furthermore, this approach will be useful for recognizing and extracting valuable or toxic electronic components from printed circuit board scrap that currently burdens the environment during electronic waste processing. Results on synthetic range images, with errors constructed according to a verified noise model, illustrate the capabilities of this approach.