Science.gov

Sample records for facilitates object recognition

  1. Real object use facilitates object recognition in semantic agnosia.

    PubMed

    Morady, Kamelia; Humphreys, Glyn W

    2009-01-01

    In the present paper we show that, in patients with poor semantic representations, the naming of real objects can improve when naming takes place after the patients have been asked to use the objects, compared with when they name the objects from vision alone, from touch alone, or from vision and touch together. In addition, the patients were strongly affected by action when required to name objects that the examiner had used either correctly or incorrectly. The data suggest that actions can be cued directly from sensory-motor associations, and that patients can then name objects on the basis of the evoked action.

  2. Top-down facilitation of visual object recognition: object-based and context-based contributions.

    PubMed

    Fenske, Mark J; Aminoff, Elissa; Gronau, Nurit; Bar, Moshe

    2006-01-01

    The neural mechanisms subserving visual recognition are traditionally described in terms of bottom-up analysis, whereby increasingly complex aspects of the visual input are processed along a hierarchical progression of cortical regions. However, the importance of top-down facilitation in successful recognition has been emphasized in recent models and research findings. Here we consider evidence for top-down facilitation of recognition that is triggered by early information about an object, as well as by contextual associations between an object and other objects with which it typically appears. The object-based mechanism is proposed to trigger top-down facilitation of visual recognition rapidly, using a partially analyzed version of the input image (i.e., a blurred image) that is projected from early visual areas directly to the prefrontal cortex (PFC). This coarse representation activates in the PFC information that is back-projected as "initial guesses" to the temporal cortex where it presensitizes the most likely interpretations of the input object. In addition to this object-based facilitation, a context-based mechanism is proposed to trigger top-down facilitation through contextual associations between objects in scenes. These contextual associations activate predictive information about which objects are likely to appear together, and can influence the "initial guesses" about an object's identity. We have shown that contextual associations are analyzed by a network that includes the parahippocampal cortex and the retrosplenial complex. The integrated proposal described here is that object- and context-based top-down influences operate together, promoting efficient recognition by framing early information about an object within the constraints provided by a lifetime of experience with contextual associations.
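
    A toy computational analogue of the object-based mechanism described above can make the proposal concrete: a heavily blurred (low-spatial-frequency) version of the input is matched quickly against all stored categories to produce a few "initial guesses", and detailed matching is then restricted to those candidates. The sketch below is only an illustration of that coarse-to-fine idea; the templates, blur level, and matching rule are assumptions, not part of the model in the paper.

```python
# Toy analogue of the object-based "initial guess" mechanism described above:
# a blurred (low spatial frequency) version of the image quickly narrows the
# candidate set, and detailed matching is then run only on those candidates.
# Templates, blur level, and matching rule are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Toy "object templates": random 32x32 grayscale patterns for 20 categories.
templates = rng.random((20, 32, 32))
target_id = 7
image = templates[target_id] + rng.normal(0, 0.1, (32, 32))   # noisy view

def match_scores(img, temps):
    """Negative mean squared error against each template (higher = better)."""
    return -((temps - img) ** 2).mean(axis=(1, 2))

# 1) Coarse, fast pass on heavily blurred input -> a handful of initial guesses.
coarse = match_scores(gaussian_filter(image, 3.0),
                      gaussian_filter(templates, (0, 3.0, 3.0)))
initial_guesses = np.argsort(coarse)[-3:]                      # top-3 candidates

# 2) Detailed pass restricted to those candidates only.
fine = match_scores(image, templates[initial_guesses])
recognized = initial_guesses[int(np.argmax(fine))]
print("initial guesses:", initial_guesses.tolist(), "-> recognized:", int(recognized))
```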

  3. Neuropeptide S interacts with the basolateral amygdala noradrenergic system in facilitating object recognition memory consolidation.

    PubMed

    Han, Ren-Wen; Xu, Hong-Jiao; Zhang, Rui-San; Wang, Pei; Chang, Min; Peng, Ya-Li; Deng, Ke-Yu; Wang, Rui

    2014-01-01

    The noradrenergic activity in the basolateral amygdala (BLA) was reported to be involved in the regulation of object recognition memory. As the BLA expresses a high density of receptors for Neuropeptide S (NPS), we investigated whether the BLA is involved in mediating the effects of NPS on object recognition memory consolidation and whether such effects require noradrenergic activity. Intracerebroventricular infusion of NPS (1 nmol) post-training facilitated 24-h memory in a mouse novel object recognition task. The memory-enhancing effect of NPS could be blocked by the β-adrenoceptor antagonist propranolol. Furthermore, post-training intra-BLA infusions of NPS (0.5 nmol/side) improved 24-h memory for objects, an improvement that was impaired by co-administration of propranolol (0.5 μg/side). Taken together, these results indicate that NPS interacts with the BLA noradrenergic system in improving object recognition memory during consolidation.

  4. Stereo disparity facilitates view generalization during shape recognition for solid multipart objects.

    PubMed

    Cristino, Filipe; Davitt, Lina; Hayward, William G; Leek, E Charles

    2015-01-01

    Current theories of object recognition in human vision make different predictions about whether the recognition of complex, multipart objects should be influenced by shape information about surface depth orientation and curvature derived from stereo disparity. We examined this issue in five experiments using a recognition memory paradigm in which observers (N = 134) memorized and then discriminated sets of 3D novel objects at trained and untrained viewpoints under either mono or stereo viewing conditions. In order to explore the conditions under which stereo-defined shape information contributes to object recognition we systematically varied the difficulty of view generalization by increasing the angular disparity between trained and untrained views. In one series of experiments, objects were presented from either previously trained views or untrained views rotated (15°, 30°, or 60°) along the same plane. In separate experiments we examined whether view generalization effects interacted with the vertical or horizontal plane of object rotation across 40° viewpoint changes. The results showed robust viewpoint-dependent performance costs: Observers were more efficient in recognizing learned objects from trained than from untrained views, and recognition was worse for extrapolated than for interpolated untrained views. We also found that performance was enhanced by stereo viewing but only at larger angular disparities between trained and untrained views. These findings show that object recognition is not based solely on 2D image information but that it can be facilitated by shape information derived from stereo disparity.

  5. Paradoxical facilitation of object recognition memory after infusion of scopolamine into perirhinal cortex: implications for cholinergic system function.

    PubMed

    Winters, Boyer D; Saksida, Lisa M; Bussey, Timothy J

    2006-09-13

    The cholinergic system has long been implicated in learning and memory, yet its specific function remains unclear. In the present study, we investigated the role of cortical acetylcholine in a rodent model of declarative memory by infusing the cholinergic muscarinic receptor antagonist scopolamine into the rat perirhinal cortex during different stages (encoding, storage/consolidation, and retrieval) of the spontaneous object recognition task. Presample infusions of scopolamine significantly impaired object recognition compared with performance of the same group of rats on saline trials; this result is consistent with previous reports supporting a role for perirhinal acetylcholine in object information acquisition. Scopolamine infusions directly before the retrieval stage had no discernible effect on object recognition. However, postsample infusions of scopolamine with sample-to-infusion delays of up to 20 h significantly facilitated performance relative to postsample saline infusion trials. Additional analysis suggested that the infusion episode could cause retroactive or proactive interference with the sample object trace and that scopolamine blocked the acquisition of this interfering information, thereby facilitating recognition memory. This is, to our knowledge, the first example of improved recognition memory after administration of scopolamine. The overall pattern of results is inconsistent with a direct role for cortical acetylcholine in declarative memory consolidation or retrieval. Rather, the cholinergic input to the perirhinal cortex may facilitate acquisition by enhancing the cortical processing of incoming stimulus information.

  6. A Genetic-Algorithm-Based Explicit Description of Object Contour and its Ability to Facilitate Recognition.

    PubMed

    Wei, Hui; Tang, Xue-Song

    2015-11-01

    Shape representation is an extremely important and longstanding problem in the field of pattern recognition. The closed contour, i.e., the shape contour, plays a crucial role in the comparison of shapes. Because the shape contour is the most stable, distinguishable, and invariant feature of an object, it is useful to incorporate it into the recognition process. This paper proposes a method based on genetic algorithms that identifies the most common contour fragments, which can then be used to represent the contours of a shape category. These common fragments make explicit the structural regularities shared by the contours of that category. The paper shows that such an explicit representation of the shape contour contributes significantly to shape representation and object recognition.
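
    The abstract describes the genetic-algorithm component only at a high level. As a rough illustration of how a genetic algorithm can search for a small set of contour fragments that jointly cover the contours of a shape category, here is a minimal sketch; the fragment representation, the coverage-based fitness, and all parameter values are assumptions made for the example, not the implementation of the cited paper.

```python
# Minimal sketch of a GA that selects a small set of contour fragments to
# represent a shape category.  Illustrative only: the fragment matching rule,
# the coverage-based fitness, and all parameters are assumptions, not the
# method of the cited paper.
import numpy as np

rng = np.random.default_rng(0)

def match_cost(fragment, contour):
    """Best mean point-to-point distance of the fragment against any
    equally long window of the contour (a crude alignment score)."""
    k, n = len(fragment), len(contour)
    costs = [np.mean(np.linalg.norm(contour[i:i + k] - fragment, axis=1))
             for i in range(n - k + 1)]
    return min(costs)

def fitness(mask, fragments, contours, tol=0.5):
    """Reward covering many category contours with few fragments."""
    chosen = [f for f, m in zip(fragments, mask) if m]
    if not chosen:
        return -1.0
    covered = sum(any(match_cost(f, c) < tol for f in chosen) for c in contours)
    return covered - 0.1 * len(chosen)          # sparsity penalty

def evolve(fragments, contours, pop=30, gens=20, p_mut=0.05):
    n = len(fragments)
    population = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        scores = np.array([fitness(ind, fragments, contours) for ind in population])
        # tournament selection of parents
        parents = population[[max(rng.choice(pop, 2), key=lambda i: scores[i])
                              for _ in range(pop)]]
        # one-point crossover between consecutive parent pairs
        cut = rng.integers(1, n, size=pop // 2)
        children = parents.copy()
        for j, c in enumerate(cut):
            children[2 * j, c:] = parents[2 * j + 1, c:]
            children[2 * j + 1, c:] = parents[2 * j, c:]
        # bit-flip mutation
        flips = rng.random(children.shape) < p_mut
        population = np.where(flips, 1 - children, children)
    best = max(population, key=lambda ind: fitness(ind, fragments, contours))
    return [f for f, m in zip(fragments, best) if m]

# Toy data: contours are noisy circles, fragments are arcs cut from them.
contours = [np.c_[np.cos(t), np.sin(t)] * r + rng.normal(0, 0.02, (60, 2))
            for r in (1.0, 1.1, 0.9)
            for t in [np.linspace(0, 2 * np.pi, 60)]]
fragments = [c[i:i + 12] for c in contours for i in (0, 20, 40)]
print(len(evolve(fragments, contours)), "fragments selected")
```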

  7. When Action Observation Facilitates Visual Perception: Activation in Visuo-Motor Areas Contributes to Object Recognition.

    PubMed

    Sim, Eun-Jin; Helbig, Hannah B; Graf, Markus; Kiefer, Markus

    2015-09-01

    Recent evidence suggests an interaction between the ventral visual-perceptual and dorsal visuo-motor brain systems during the course of object recognition. However, the precise function of the dorsal stream for perception remains to be determined. The present study specified the functional contribution of the visuo-motor system to visual object recognition using functional magnetic resonance imaging and event-related potential (ERP) during action priming. Primes were movies showing hands performing an action with an object with the object being erased, followed by a manipulable target object, which either afforded a similar or a dissimilar action (congruent vs. incongruent condition). Participants had to recognize the target object within a picture-word matching task. Priming-related reductions of brain activity were found in frontal and parietal visuo-motor areas as well as in ventral regions including inferior and anterior temporal areas. Effective connectivity analyses suggested functional influences of parietal areas on anterior temporal areas. ERPs revealed priming-related source activity in visuo-motor regions at about 120 ms and later activity in the ventral stream at about 380 ms. Hence, rapidly initiated visuo-motor processes within the dorsal stream functionally contribute to visual object recognition in interaction with ventral stream processes dedicated to visual analysis and semantic integration.

  8. CB1 receptor antagonism in the granular insular cortex or somatosensory area facilitates consolidation of object recognition memory.

    PubMed

    O'Brien, Lesley D; Sticht, Martin A; Mitchnick, Krista A; Limebeer, Cheryl L; Parker, Linda A; Winters, Boyer D

    2014-08-22

    Cannabinoid agonists typically impair memory, whereas CB1 receptor antagonists enhance memory performance under specific conditions. The insular cortex has been implicated in object memory consolidation. Here we show that infusions of the CB1 receptor antagonist SR141716 enhances long-term object recognition memory in rats in a dose-dependent manner (facilitation with 1.5, but not 0.75 or 3 μg/μL) when administered into the granular insular cortex; the SR141716 facilitation was seen with a memory delay of 72 h, but not when the delay was shorter (1 h), consistent with enhancement of memory consolidation. Moreover, a sub-group of rats with cannulas placed in the somatosensory area were also facilitated. These results highlight the robust potential of cannabinoid antagonists to facilitate object memory consolidation, as well as the capacity for insular and somatosensory cortices to contribute to object processing, perhaps through enhancement of tactile representation.

  9. Facilitated neurogenesis in the developing hippocampus after intake of theanine, an amino acid in tea leaves, and object recognition memory.

    PubMed

    Takeda, Atsushi; Sakamoto, Kazuhiro; Tamano, Haruna; Fukura, Kotaro; Inui, Naoto; Suh, Sang Won; Won, Seok-Joon; Yokogoshi, Hidehiko

    2011-10-01

    Theanine, γ-glutamylethylamide, is one of the major amino acid components in green tea. In this study, cognitive function and the related mechanism were examined in theanine-administered young rats. Newborn rats were fed theanine through dams, which were fed water containing 0.3% theanine, and then fed water containing 0.3% theanine after weaning. Theanine level in the brain was under the detectable limit 6 weeks after the start of theanine administration. Theanine administration did not influence locomotor activity in the open-field test. However, rearing behavior was significantly increased in theanine-administered rats, suggesting that exploratory activity is increased by theanine intake. Furthermore, object recognition memory was enhanced in theanine-administered rats. The increase in exploratory activity in the open-field test seems to be associated with the enhanced object recognition memory after theanine administration. On the other hand, long-term potentiation (LTP) induction at the perforant path-granule cell synapse was not changed by theanine administration. To check hippocampal neurogenesis, BrdU was injected into rats 3 weeks after the start of theanine administration, and brain-derived neurotropic factor (BDNF) level was significantly increased at this time. Theanine intake significantly increased the number of BrdU-, Ki67-, and DCX-labeled cells in the granule cell layer 6 weeks after the start of theanine administration. This study indicates that 0.3% theanine administration facilitates neurogenesis in the developing hippocampus followed by enhanced recognition memory. Theanine intake may be of benefit to the postnatal development of hippocampal function.

  10. Visual object recognition.

    PubMed

    Logothetis, N K; Sheinberg, D L

    1996-01-01

    Visual object recognition is of fundamental importance to most animals. The diversity of tasks that any biological recognition system must solve suggests that object recognition is not a single, general purpose process. In this review, we consider evidence from the fields of psychology, neuropsychology, and neurophysiology, all of which supports the idea that there are multiple systems for recognition. Data from normal adults, infants, animals, and brain damaged patients reveal a major distinction between the classification of objects at a basic category level and the identification of individual objects from a homogeneous object class. An additional distinction between object representations used for visual perception and those used for visually guided movements provides further support for a multiplicity of visual recognition systems. Recent evidence from psychophysical and neurophysiological studies indicates that one system may represent objects by combinations of multiple views, or aspects, and another may represent objects by structural primitives and their spatial interrelationships.

  11. Automatic object recognition

    NASA Technical Reports Server (NTRS)

    Ranganath, H. S.; Mcingvale, Pat; Sage, Heinz

    1988-01-01

    Geometric and intensity features are very useful in object recognition. An intensity feature is a measure of contrast between object pixels and background pixels. Geometric features provide shape and size information. A model based approach is presented for computing geometric features. Knowledge about objects and imaging system is used to estimate orientation of objects with respect to the line of sight.
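
    As a concrete illustration of the two feature families mentioned above, the sketch below computes a contrast-based intensity feature and two simple geometric (size and shape) features from a binary object mask. The specific definitions used here (normalized contrast, pixel area, bounding-box aspect ratio) are assumptions chosen for the example, not the features of the NASA report.

```python
# Illustrative intensity and geometric features for a segmented object.
# The particular definitions are assumptions for this sketch, not the
# feature set of the cited report.
import numpy as np

def intensity_contrast(image, mask):
    """Contrast between object pixels and background pixels."""
    obj, bg = image[mask], image[~mask]
    return (obj.mean() - bg.mean()) / (obj.mean() + bg.mean() + 1e-9)

def geometric_features(mask):
    """Size (area in pixels) and bounding-box aspect ratio."""
    ys, xs = np.nonzero(mask)
    area = int(mask.sum())
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return area, width / height

# Toy example: a bright 10x20 rectangle on a dark background.
image = np.full((64, 64), 0.2)
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 10:30] = True
image[mask] = 0.8

print("contrast:", round(intensity_contrast(image, mask), 3))
print("area, aspect ratio:", geometric_features(mask))
```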

  12. Top-down facilitation of visual recognition

    PubMed Central

    Bar, M.; Kassam, K. S.; Ghuman, A. S.; Boshyan, J.; Schmid, A. M.; Dale, A. M.; Hämäläinen, M. S.; Marinkovic, K.; Schacter, D. L.; Rosen, B. R.; Halgren, E.

    2006-01-01

    Cortical analysis related to visual object recognition is traditionally thought to propagate serially along a bottom-up hierarchy of ventral areas. Recent proposals gradually promote the role of top-down processing in recognition, but how such facilitation is triggered remains a puzzle. We tested a specific model, proposing that low spatial frequencies facilitate visual object recognition by initiating top-down processes projected from orbitofrontal to visual cortex. The present study combined magnetoencephalography, which has superior temporal resolution, functional magnetic resonance imaging, and a behavioral task that yields successful recognition with stimulus repetitions. Object recognition elicited differential activity that developed in the left orbitofrontal cortex 50 ms earlier than it did in recognition-related areas in the temporal cortex. This early orbitofrontal activity was directly modulated by the presence of low spatial frequencies in the image. Taken together, the dynamics we revealed provide strong support for the proposal of how top-down facilitation of object recognition is initiated, and our observations are used to derive predictions for future research. PMID:16407167

  13. Voice Congruency Facilitates Word Recognition

    PubMed Central

    Campeanu, Sandra; Craik, Fergus I. M.; Alain, Claude

    2013-01-01

    Behavioral studies of spoken word memory have shown that context congruency facilitates both word and source recognition, though the level at which context exerts its influence remains equivocal. We measured event-related potentials (ERPs) while participants performed both types of recognition task with words spoken in four voices. Two voice parameters (i.e., gender and accent) varied between speakers, with the possibility that none, one or two of these parameters was congruent between study and test. Results indicated that reinstating the study voice at test facilitated both word and source recognition, compared to similar or no context congruency at test. Behavioral effects were paralleled by two ERP modulations. First, in the word recognition test, the left parietal old/new effect showed a positive deflection reflective of context congruency between study and test words. Namely, the same speaker condition provided the most positive deflection of all correctly identified old words. In the source recognition test, a right frontal positivity was found for the same speaker condition compared to the different speaker conditions, regardless of response success. Taken together, the results of this study suggest that the benefit of context congruency is reflected behaviorally and in ERP modulations traditionally associated with recognition memory. PMID:23527021

  14. Voice congruency facilitates word recognition.

    PubMed

    Campeanu, Sandra; Craik, Fergus I M; Alain, Claude

    2013-01-01

    Behavioral studies of spoken word memory have shown that context congruency facilitates both word and source recognition, though the level at which context exerts its influence remains equivocal. We measured event-related potentials (ERPs) while participants performed both types of recognition task with words spoken in four voices. Two voice parameters (i.e., gender and accent) varied between speakers, with the possibility that none, one or two of these parameters was congruent between study and test. Results indicated that reinstating the study voice at test facilitated both word and source recognition, compared to similar or no context congruency at test. Behavioral effects were paralleled by two ERP modulations. First, in the word recognition test, the left parietal old/new effect showed a positive deflection reflective of context congruency between study and test words. Namely, the same speaker condition provided the most positive deflection of all correctly identified old words. In the source recognition test, a right frontal positivity was found for the same speaker condition compared to the different speaker conditions, regardless of response success. Taken together, the results of this study suggest that the benefit of context congruency is reflected behaviorally and in ERP modulations traditionally associated with recognition memory.

  15. Coordinate Transformations in Object Recognition

    ERIC Educational Resources Information Center

    Graf, Markus

    2006-01-01

    A basic problem of visual perception is how human beings recognize objects after spatial transformations. Three central classes of findings have to be accounted for: (a) Recognition performance varies systematically with orientation, size, and position; (b) recognition latencies are sequentially additive, suggesting analogue transformation…

  16. Cognitive object recognition system (CORS)

    NASA Astrophysics Data System (ADS)

    Raju, Chaitanya; Varadarajan, Karthik Mahesh; Krishnamurthi, Niyant; Xu, Shuli; Biederman, Irving; Kelley, Troy

    2010-04-01

    We have developed a framework, Cognitive Object Recognition System (CORS), inspired by current neurocomputational models and psychophysical research in which multiple recognition algorithms (shape based geometric primitives, 'geons,' and non-geometric feature-based algorithms) are integrated to provide a comprehensive solution to object recognition and landmarking. Objects are defined as a combination of geons, corresponding to their simple parts, and the relations among the parts. However, those objects that are not easily decomposable into geons, such as bushes and trees, are recognized by CORS using "feature-based" algorithms. The unique interaction between these algorithms is a novel approach that combines the effectiveness of both algorithms and takes us closer to a generalized approach to object recognition. CORS allows recognition of objects through a larger range of poses using geometric primitives and performs well under heavy occlusion - about 35% of object surface is sufficient. Furthermore, geon composition of an object allows image understanding and reasoning even with novel objects. With reliable landmarking capability, the system improves vision-based robot navigation in GPS-denied environments. Feasibility of the CORS system was demonstrated with real stereo images captured from a Pioneer robot. The system can currently identify doors, door handles, staircases, trashcans and other relevant landmarks in the indoor environment.
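
    The abstract only summarizes how the two recognizers interact. The sketch below shows one simple way such a combination could work: trust the parts-based (geon-like) interpretation when enough of a model's parts are found, and otherwise fall back on the feature-based classifier. The decision rule, confidence values, and part models are assumptions for illustration, not the actual CORS logic.

```python
# Minimal sketch of combining a parts-based (geon-like) recognizer with a
# feature-based recognizer, in the spirit of the hybrid described above.
# The fallback rule, thresholds, and part models are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    label: str
    confidence: float   # in [0, 1]

def parts_based(detected_parts, part_models):
    """Score each model by the fraction of its required parts that were found."""
    best = Hypothesis("unknown", 0.0)
    for label, required in part_models.items():
        score = len(set(detected_parts) & set(required)) / len(required)
        if score > best.confidence:
            best = Hypothesis(label, score)
    return best

def combine(detected_parts, feature_hypothesis, part_models, min_parts_conf=0.6):
    """Prefer the decomposable (parts-based) interpretation when it is
    well supported; otherwise fall back to the feature-based result."""
    parts_hyp = parts_based(detected_parts, part_models)
    return parts_hyp if parts_hyp.confidence >= min_parts_conf else feature_hypothesis

# Toy models: a door is a large slab plus a small cylinder (handle).
part_models = {"door": ["slab", "cylinder"], "trashcan": ["cylinder", "disk"]}

print(combine(["slab", "cylinder"], Hypothesis("bush", 0.7), part_models))  # door
print(combine(["blob"], Hypothesis("bush", 0.7), part_models))              # bush
```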

  17. Object recognition memory in zebrafish.

    PubMed

    May, Zacnicte; Morrill, Adam; Holcombe, Adam; Johnston, Travis; Gallup, Joshua; Fouad, Karim; Schalomon, Melike; Hamilton, Trevor James

    2016-01-01

    The novel object recognition, or novel-object preference (NOP), test is employed to assess recognition memory in a variety of organisms. The subject is exposed to two identical objects, then after a delay, it is placed back in the original environment containing one of the original objects and a novel object. If the subject spends more time exploring one object, this can be interpreted as memory retention. To date, this test has not been fully explored in zebrafish (Danio rerio). Zebrafish possess recognition memory for simple 2- and 3-dimensional geometrical shapes, yet it is unknown whether this translates to complex 3-dimensional objects. In this study we evaluated recognition memory in zebrafish using complex objects of different sizes. In contrast to rodents, zebrafish preferentially explored familiar over novel objects. The familiarity preference disappeared after delays of 5 min. Leopard danios, another strain of D. rerio, also preferred the familiar object after a 1-min delay. Object preference could be re-established in zebra danios by administration of nicotine tartrate salt (50 mg/L) prior to stimulus presentation, suggesting a memory-enhancing effect of nicotine. Additionally, exploration biases were present only when the objects were of intermediate size (2 × 5 cm). Our results demonstrate that zebra and leopard danios have recognition memory, and that low nicotine doses can improve this memory type in zebra danios. However, the exploration biases from which memory is inferred depend on object size. These findings suggest that zebrafish ecology might influence object preference, as zebrafish neophobia could reflect natural anti-predatory behaviour. Copyright © 2015 Elsevier B.V. All rights reserved.
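
    Exploration biases of the kind reported here are usually quantified with a preference (discrimination) index. The abstract does not give the formula used, so the sketch below assumes the conventional definition: time spent exploring the novel object divided by total exploration time, with 0.5 indicating no preference.

```python
# Preference (discrimination) index commonly used in novel-object tasks.
# The exact formula used in the cited study is not stated in the abstract;
# this is the conventional definition, assumed here for illustration.
def preference_index(time_novel_s, time_familiar_s):
    """Fraction of exploration time spent at the novel object.
    0.5 = no preference, > 0.5 = novelty preference, < 0.5 = familiarity
    preference (the bias reported for zebrafish above)."""
    total = time_novel_s + time_familiar_s
    if total == 0:
        raise ValueError("no exploration recorded")
    return time_novel_s / total

# Example: a fish that explores the familiar object twice as long.
print(round(preference_index(time_novel_s=20.0, time_familiar_s=40.0), 2))  # 0.33
```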

  18. An introduction to object recognition.

    PubMed

    Liter, J C; Bülthoff, H H

    1998-01-01

    In this report we present a general introduction to object recognition. We begin with brief discussions of the terminology used in the object recognition literature and the psychophysical tasks that are used to investigate object recognition. We then discuss models of shape representation. We dispense with the idea that shape representations are like the 3-D models used in computer aided design and explore instead models of shape representation that are based on future descriptions. As these descriptions encode only the features that are visible from a particular viewpoint, they are generally viewpoint-specific. We discuss various means of achieving viewpoint-invariant recognition using such descriptions, including reliance on diagnostic features visible from a wide range of viewpoints, storage of multiple descriptions for each object, and the use of transformation mechanisms. Finally, we discuss how differences in viewpoint dependence that are often observed for within-category and between-category recognition tasks could be due to differences in the types of features that are naturally available to distinguish among different objects in these tasks.

  19. Object recognition using metric shape.

    PubMed

    Lee, Young-Lim; Lind, Mats; Bingham, Ned; Bingham, Geoffrey P

    2012-09-15

    Most previous studies of 3D shape perception have shown a general inability to visually perceive metric shape. In line with this, studies of object recognition have shown that only qualitative differences, not quantitative or metric ones, can be used effectively for object recognition. Recently, Bingham and Lind (2008) found that large perspective changes (≥ 45°) allow perception of metric shape, and Lee and Bingham (2010) found that this, in turn, allowed accurate feedforward reaches to grasp objects varying in metric shape. We now investigated whether this information would allow accurate and effective recognition of objects that vary in metric shape. Both judgment accuracies (d') and reaction times confirmed that, with the availability of visual information in large perspective changes, recognition of objects using quantitative as compared to qualitative properties was equivalent in accuracy and speed of judgments. The ability to recognize objects based on their metric shape is, therefore, a function of the availability or unavailability of the requisite visual information. These issues and results are discussed in the context of the Two Visual System hypothesis of Milner and Goodale (1995, 2006). Copyright © 2012 Elsevier Ltd. All rights reserved.
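
    For readers unfamiliar with the d' measure reported above, the sketch below computes it from hit and false-alarm counts in the standard signal-detection way (difference of z-transformed rates). The log-linear correction for extreme rates is an assumed convention, not necessarily the one used by the authors.

```python
# Signal-detection sensitivity d' from hit and false-alarm counts.
# The log-linear correction for extreme rates is an assumed convention.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    z = NormalDist().inv_cdf
    # log-linear correction keeps rates away from 0 and 1
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Example: 45/50 old objects recognized, 10/50 new objects falsely accepted.
print(round(d_prime(45, 5, 10, 40), 2))   # about 2.06
```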

  20. Method and System for Object Recognition Search

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor); Duong, Vu A. (Inventor); Stubberud, Allen R. (Inventor)

    2012-01-01

    A method for object recognition using shape and color features of the object to be recognized. An adaptive architecture is used to recognize and adapt the shape and color features for moving objects to enable object recognition.

  1. Recognition of movement object collision

    NASA Astrophysics Data System (ADS)

    Chang, Hsiao Tsu; Sun, Geng-tian; Zhang, Yan

    1991-03-01

    The paper explores the collision recognition of two objects in both crisscross and revolution motions. A mathematical model has been established based on continuation theory. Objects of any shape may be regarded as being built of many 3-simplexes or their convex hulls. Therefore, the collision problem of two objects in motion can be reduced to the collision of two corresponding 3-simplexes on the two respective objects. Thus an optimized algorithm is developed for collision avoidance which is suitable for computer control and eliminates the need for vision aid. With this algorithm, computation time has been reduced significantly. The algorithm is applicable to the path planning of mobile robots, and also to collision avoidance for anthropomorphic arms grasping two objects of complicated shape. The algorithm is realized in the LISP language on a VAX 8350 minicomputer.
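
    The abstract reduces collision recognition to intersection tests between 3-simplexes (tetrahedra) or their convex hulls, but does not spell out the optimized algorithm. As a generic illustration, the sketch below tests whether two convex vertex sets intersect by solving a small linear-programming feasibility problem: the hulls overlap exactly when some point is simultaneously a convex combination of both vertex sets. This is a textbook formulation, not the continuation-theory method of the paper.

```python
# Illustrative convex-hull (e.g., 3-simplex) intersection test via an LP
# feasibility problem: the hulls of A and B intersect iff some point can be
# written as a convex combination of A's vertices and of B's vertices.
# This is a generic test, not the optimized algorithm of the cited paper.
import numpy as np
from scipy.optimize import linprog

def hulls_intersect(A, B):
    """A, B: (n, 3) and (m, 3) arrays of vertices (one vertex per row)."""
    n, m = len(A), len(B)
    # Unknowns: convex weights lam (n) and mu (m), all >= 0 by default bounds.
    # Equalities: A^T lam - B^T mu = 0,  sum(lam) = 1,  sum(mu) = 1.
    A_eq = np.vstack([
        np.hstack([A.T, -B.T]),                       # 3 rows: same point in space
        np.hstack([np.ones(n), np.zeros(m)]),         # weights of A sum to 1
        np.hstack([np.zeros(n), np.ones(m)]),         # weights of B sum to 1
    ])
    b_eq = np.array([0.0, 0.0, 0.0, 1.0, 1.0])
    res = linprog(c=np.zeros(n + m), A_eq=A_eq, b_eq=b_eq, method="highs")
    return res.success

# Two tetrahedra that overlap, and a third one far away.
T1 = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
T2 = T1 + 0.2          # shifted slightly: overlaps T1
T3 = T1 + 5.0          # far away: disjoint

print(hulls_intersect(T1, T2))   # True
print(hulls_intersect(T1, T3))   # False
```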

  2. Anticipatory coarticulation facilitates word recognition in toddlers.

    PubMed

    Mahr, Tristan; McMillan, Brianna T M; Saffran, Jenny R; Ellis Weismer, Susan; Edwards, Jan

    2015-09-01

    Children learn from their environments and their caregivers. To capitalize on learning opportunities, young children have to recognize familiar words efficiently by integrating contextual cues across word boundaries. Previous research has shown that adults can use phonetic cues from anticipatory coarticulation during word recognition. We asked whether 18- to 24-month-olds (n = 29) used coarticulatory cues on the word "the" when recognizing the following noun. We performed a looking-while-listening eyetracking experiment to examine word recognition in neutral vs. facilitating coarticulatory conditions. Participants looked to the target image significantly sooner when the determiner contained facilitating coarticulatory cues. These results provide the first evidence that novice word-learners can take advantage of anticipatory sub-phonemic cues during word recognition.

  3. Disruptive camouflage impairs object recognition

    PubMed Central

    Webster, Richard J.; Hassall, Christopher; Herdman, Chris M.; Godin, Jean-Guy J.; Sherratt, Thomas N.

    2013-01-01

    Whether hiding from predators, or avoiding battlefield casualties, camouflage is widely employed to prevent detection. Disruptive coloration is a seemingly well-known camouflage mechanism proposed to function by breaking up an object's salient features (for example their characteristic outline), rendering objects more difficult to recognize. However, while a wide range of animals are thought to evade detection using disruptive patterns, there is no direct experimental evidence that disruptive coloration impairs recognition. Using humans searching for computer-generated moth targets, we demonstrate that the number of edge-intersecting patches on a target reduces the likelihood of it being detected, even at the expense of reduced background matching. Crucially, eye-tracking data show that targets with more edge-intersecting patches were looked at for longer periods prior to attack, and passed-over more frequently during search tasks. We therefore show directly that edge patches enhance survivorship by impairing recognition, confirming that disruptive coloration is a distinct camouflage strategy, not simply an artefact of background matching. PMID:24152693

  4. Recurrent Processing during Object Recognition

    PubMed Central

    O’Reilly, Randall C.; Wyatte, Dean; Herd, Seth; Mingus, Brian; Jilk, David J.

    2013-01-01

    How does the brain learn to recognize objects visually, and perform this difficult feat robustly in the face of many sources of ambiguity and variability? We present a computational model based on the biology of the relevant visual pathways that learns to reliably recognize 100 different object categories in the face of naturally occurring variability in location, rotation, size, and lighting. The model exhibits robustness to highly ambiguous, partially occluded inputs. Both the unified, biologically plausible learning mechanism and the robustness to occlusion derive from the role that recurrent connectivity and recurrent processing mechanisms play in the model. Furthermore, this interaction of recurrent connectivity and learning predicts that high-level visual representations should be shaped by error signals from nearby, associated brain areas over the course of visual learning. Consistent with this prediction, we show how semantic knowledge about object categories changes the nature of their learned visual representations, as well as how this representational shift supports the mapping between perceptual and conceptual knowledge. Altogether, these findings support the potential importance of ongoing recurrent processing throughout the brain’s visual system and suggest ways in which object recognition can be understood in terms of interactions within and between processes over time. PMID:23554596
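
    The role the abstract assigns to recurrent processing, iteratively cleaning up ambiguous or occluded inputs, can be illustrated with a far simpler classical mechanism. The sketch below is a toy Hopfield-style network that completes a partially occluded binary pattern through repeated recurrent updates; it is a stand-in for the general idea only, not the biologically detailed model described in the paper.

```python
# Toy Hopfield-style recurrent network: iteratively completes a partially
# occluded binary pattern.  A much simpler stand-in for the recurrent
# processing described above, not the model from the cited paper.
import numpy as np

def train_hopfield(patterns):
    """Hebbian weights for +/-1 patterns (no self-connections)."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def settle(W, state, steps=20):
    """Synchronous recurrent updates until the state stops changing."""
    for _ in range(steps):
        new_state = np.sign(W @ state)
        new_state[new_state == 0] = 1
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

rng = np.random.default_rng(1)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))     # three stored "objects"
W = train_hopfield(patterns)

occluded = patterns[0].copy()
occluded[:24] = 1.0                                   # clamp 24 of 64 units (simulated occlusion)
recovered = settle(W, occluded)
print("bits matching the stored pattern:", int((recovered == patterns[0]).sum()), "/ 64")
```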

  5. Relations among Early Object Recognition Skills: Objects and Letters

    ERIC Educational Resources Information Center

    Augustine, Elaine; Jones, Susan S.; Smith, Linda B.; Longfield, Erica

    2015-01-01

    Human visual object recognition is multifaceted and comprised of several domains of expertise. Developmental relations between young children's letter recognition and their 3-dimensional object recognition abilities are implicated on several grounds but have received little research attention. Here, we ask how preschoolers' success in recognizing…

  6. Infant Visual Attention and Object Recognition

    PubMed Central

    Reynolds, Greg D.

    2015-01-01

    This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. PMID:25596333

  7. Infant visual attention and object recognition.

    PubMed

    Reynolds, Greg D

    2015-05-15

    This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy.

  8. Relations among early object recognition skills: Objects and letters.

    PubMed

    Augustine, Elaine; Jones, Susan S; Smith, Linda B; Longfield, Erica

    2015-04-01

    Human visual object recognition is multifaceted, with several domains of expertise. Developmental relations between young children's letter recognition and their 3-dimensional object recognition abilities are implicated on several grounds but have received little research attention. Here, we ask how preschoolers' success in recognizing letters relates to their ability to recognize 3-dimensional objects from sparse shape information alone. A relation is predicted because perception of the spatial relations is critical in both domains. Seventy-three 2 ½- to 4-year-old children completed a Letter Recognition task, measuring the ability to identify a named letter among 3 letters with similar shapes, and a "Shape Caricature Recognition" task, measuring recognition of familiar objects from sparse, abstract information about their part shapes and the spatial relations among those parts. Children also completed a control "Shape Bias" task, in which success depends on recognition of overall object shape but not of relational structure. Children's success in letter recognition was positively related to their shape caricature recognition scores, but not to their shape bias scores. The results suggest that letter recognition builds upon developing skills in attending to and representing the relational structure of object shape, and that these skills are common to both 2-dimensional and 3-dimensional object perception.

  9. Breaking Object Correspondence Across Saccadic Eye Movements Deteriorates Object Recognition.

    PubMed

    Poth, Christian H; Herwig, Arvid; Schneider, Werner X

    2015-01-01

    Visual perception is based on information processing during periods of eye fixations that are interrupted by fast saccadic eye movements. The ability to sample and relate information on task-relevant objects across fixations implies that correspondence between presaccadic and postsaccadic objects is established. Postsaccadic object information usually updates and overwrites information on the corresponding presaccadic object. The presaccadic object representation is then lost. In contrast, the presaccadic object is conserved when object correspondence is broken. This helps transsaccadic memory but it may impose attentional costs on object recognition. Therefore, we investigated how breaking object correspondence across the saccade affects postsaccadic object recognition. In Experiment 1, object correspondence was broken by a brief postsaccadic blank screen. Observers made a saccade to a peripheral object which was displaced during the saccade. This object reappeared either immediately after the saccade or after the blank screen. Within the postsaccadic object, a letter was briefly presented (terminated by a mask). Observers reported displacement direction and letter identity in different blocks. Breaking object correspondence by blanking improved displacement identification but deteriorated postsaccadic letter recognition. In Experiment 2, object correspondence was broken by changing the object's contrast-polarity. There were no object displacements and observers only reported letter identity. Again, breaking object correspondence deteriorated postsaccadic letter recognition. These findings identify transsaccadic object correspondence as a key determinant of object recognition across the saccade. This is in line with the recent hypothesis that breaking object correspondence results in separate representations of presaccadic and postsaccadic objects which then compete for limited attentional processing resources (Schneider, 2013). Postsaccadic object recognition is

  10. Breaking Object Correspondence Across Saccadic Eye Movements Deteriorates Object Recognition

    PubMed Central

    Poth, Christian H.; Herwig, Arvid; Schneider, Werner X.

    2015-01-01

    Visual perception is based on information processing during periods of eye fixations that are interrupted by fast saccadic eye movements. The ability to sample and relate information on task-relevant objects across fixations implies that correspondence between presaccadic and postsaccadic objects is established. Postsaccadic object information usually updates and overwrites information on the corresponding presaccadic object. The presaccadic object representation is then lost. In contrast, the presaccadic object is conserved when object correspondence is broken. This helps transsaccadic memory but it may impose attentional costs on object recognition. Therefore, we investigated how breaking object correspondence across the saccade affects postsaccadic object recognition. In Experiment 1, object correspondence was broken by a brief postsaccadic blank screen. Observers made a saccade to a peripheral object which was displaced during the saccade. This object reappeared either immediately after the saccade or after the blank screen. Within the postsaccadic object, a letter was briefly presented (terminated by a mask). Observers reported displacement direction and letter identity in different blocks. Breaking object correspondence by blanking improved displacement identification but deteriorated postsaccadic letter recognition. In Experiment 2, object correspondence was broken by changing the object’s contrast-polarity. There were no object displacements and observers only reported letter identity. Again, breaking object correspondence deteriorated postsaccadic letter recognition. These findings identify transsaccadic object correspondence as a key determinant of object recognition across the saccade. This is in line with the recent hypothesis that breaking object correspondence results in separate representations of presaccadic and postsaccadic objects which then compete for limited attentional processing resources (Schneider, 2013). Postsaccadic object recognition

  11. Visual object recognition and tracking

    NASA Technical Reports Server (NTRS)

    Chang, Chu-Yin (Inventor); English, James D. (Inventor); Tardella, Neil M. (Inventor)

    2010-01-01

    This invention describes a method for identifying and tracking an object from two-dimensional data pictorially representing said object by an object-tracking system through processing said two-dimensional data using at least one tracker-identifier belonging to the object-tracking system for providing an output signal containing: a) a type of the object, and/or b) a position or an orientation of the object in three-dimensions, and/or c) an articulation or a shape change of said object in said three dimensions.

  12. Recognition memory impairments caused by false recognition of novel objects.

    PubMed

    Yeung, Lok-Kin; Ryan, Jennifer D; Cowell, Rosemary A; Barense, Morgan D

    2013-11-01

    A fundamental assumption underlying most current theories of amnesia is that memory impairments arise because previously studied information either is lost rapidly or is made inaccessible (i.e., the old information appears to be new). Recent studies in rodents have challenged this view, suggesting instead that under conditions of high interference, recognition memory impairments following medial temporal lobe damage arise because novel information appears as though it has been previously seen. Here, we developed a new object recognition memory paradigm that distinguished whether object recognition memory impairments were driven by previously viewed objects being treated as if they were novel or by novel objects falsely recognized as though they were previously seen. In this indirect, eyetracking-based passive viewing task, older adults at risk for mild cognitive impairment showed false recognition to high-interference novel items (with a significant degree of feature overlap with previously studied items) but normal novelty responses to low-interference novel items (with a lower degree of feature overlap). The indirect nature of the task minimized the effects of response bias and other memory-based decision processes, suggesting that these factors cannot solely account for false recognition. These findings support the counterintuitive notion that recognition memory impairments in this memory-impaired population are not characterized by forgetting but rather are driven by the failure to differentiate perceptually similar objects, leading to the false recognition of novel objects as having been seen before.

  13. Unposed Object Recognition using an Active Approach

    DTIC Science & Technology

    2013-02-01

    viewpoints when it is necessary to gain confidence in the classification decision. We demonstrate the effect of unposed objects on a state-of-the-art approach to...object recognition, then show how an active approach can increase accuracy. The active approach works by attaching confidence to recognition...prompting further inspection when confidence is low. We demonstrate a performance increase on a wide variety of objects from the RGB-D database, showing a
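
    The active strategy summarized above, attaching a confidence to each recognition attempt and acquiring another viewpoint when confidence is low, amounts to a very small control loop. The sketch below is a generic illustration with a made-up classifier interface and threshold, not the system evaluated in the report.

```python
# Generic active-recognition loop: keep acquiring new viewpoints until the
# classifier is confident.  The classifier interface and threshold are
# illustrative assumptions, not the system described in the report.
from typing import Callable, Iterable, Tuple

def active_recognize(
    views: Iterable[object],
    classify: Callable[[object], Tuple[str, float]],
    confidence_threshold: float = 0.8,
) -> Tuple[str, float, int]:
    """Classify from successive viewpoints; stop early once confident."""
    best_label, best_conf, n_views = "unknown", 0.0, 0
    for view in views:
        label, conf = classify(view)          # (predicted label, confidence)
        n_views += 1
        if conf > best_conf:
            best_label, best_conf = label, conf
        if best_conf >= confidence_threshold:
            break                             # confident enough: stop moving
    return best_label, best_conf, n_views

# Toy classifier: confidence improves as more of the object becomes visible.
fake_views = [0.4, 0.55, 0.9]                  # e.g., visible-surface fractions
result = active_recognize(fake_views, lambda v: ("mug", v))
print(result)                                  # ('mug', 0.9, 3)
```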

  14. The Role of Object Recognition in Young Infants' Object Segregation.

    ERIC Educational Resources Information Center

    Carey, Susan; Williams, Travis

    2001-01-01

    Discusses Needham's findings by asserting that they extend understanding of infant perception by showing that the memory representations infants draw upon have bound together information about shape, color, and pattern. Considers the distinction between two senses of "recognition" and asks in which sense object recognition contributes to object…

  15. Object Recognition Memory and the Rodent Hippocampus

    ERIC Educational Resources Information Center

    Broadbent, Nicola J.; Gaskin, Stephane; Squire, Larry R.; Clark, Robert E.

    2010-01-01

    In rodents, the novel object recognition task (NOR) has become a benchmark task for assessing recognition memory. Yet, despite its widespread use, a consensus has not developed about which brain structures are important for task performance. We assessed both the anterograde and retrograde effects of hippocampal lesions on performance in the NOR…

  16. BDNF controls object recognition memory reconsolidation.

    PubMed

    Radiske, Andressa; Rossato, Janine I; Gonzalez, Maria Carolina; Köhler, Cristiano A; Bevilaqua, Lia R; Cammarota, Martín

    2017-03-06

    Reconsolidation restabilizes memory after reactivation. Previously, we reported that the hippocampus is engaged in object recognition memory reconsolidation to allow incorporation of new information into the original engram. Here we show that BDNF is sufficient for this process, and that blockade of BDNF function in dorsal CA1 impairs updating of the reactivated recognition memory trace.

  17. Prosopagnosia and object agnosia without covert recognition.

    PubMed

    Newcombe, F; Young, A W; De Haan, E H

    1989-01-01

    Investigations of the visual recognition abilities of the patient M.S. are reported. M.S. is unable to achieve overt recognition of any familiar faces, and many everyday objects. In Task 1 he showed semantic priming from name primes but not from face primes in a name recognition task. In Task 2 he showed no advantage in learning true (face + correct name) rather than untrue (face + someone else's name) pairings of faces and names. In Task 3 semantic priming of lexical decision was only found for object picture primes that M.S. was able to recognize overtly. In Task 4 faster matching of photographs of familiar than unfamiliar objects was only found for objects that M.S. was able to recognize overtly. These findings demonstrate an absence of covert recognition effects for M.S., consistent with the view that his impairment is primarily "perceptual" in nature.

  1. Neural-Network Object-Recognition Program

    NASA Technical Reports Server (NTRS)

    Spirkovska, L.; Reid, M. B.

    1993-01-01

    HONTIOR computer program implements third-order neural network exhibiting invariance under translation, change of scale, and in-plane rotation. Invariance incorporated directly into architecture of network. Only one view of each object needed to train network for two-dimensional-translation-invariant recognition of object. Also used for three-dimensional-transformation-invariant recognition by training network on only set of out-of-plane rotated views. Written in C language.
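
    Third-order networks of this kind obtain their invariance by connecting outputs to triples of input pixels and sharing weights across all triples whose pixels form triangles with the same interior angles, since those angles are unchanged by translation, scaling, and in-plane rotation. The sketch below illustrates only that triple-angle idea on a small set of 2-D points; the angle quantization and matching rule are assumptions, not the HONTIOR network itself.

```python
# Toy illustration of the invariance idea behind higher-order (third-order)
# networks: features are built from triples of points, and only the interior
# angles of the triangle each triple forms are used, so the description is
# unchanged by translation, scaling, and in-plane rotation.  The quantization
# and matching rule are assumptions, not the HONTIOR network itself.
from itertools import combinations
from collections import Counter
import math

def triangle_angles(p, q, r):
    """Interior angles (degrees, rounded) of triangle pqr, sorted."""
    def ang(a, b, c):                     # angle at vertex a
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - a[0], c[1] - a[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
    return tuple(sorted(round(a) for a in (ang(p, q, r), ang(q, p, r), ang(r, p, q))))

def descriptor(points):
    """Bag of quantized angle-triples over all point triples."""
    return Counter(triangle_angles(*t) for t in combinations(points, 3))

def transform(points, scale, theta, shift):
    c, s = math.cos(theta), math.sin(theta)
    return [(scale * (c * x - s * y) + shift[0],
             scale * (s * x + c * y) + shift[1]) for x, y in points]

# A small "object" given by a handful of salient points.
obj = [(0, 0), (4, 0), (4, 2), (1, 3), (2, 1)]
moved = transform(obj, scale=2.5, theta=0.7, shift=(10, -3))

print(descriptor(obj) == descriptor(moved))   # True: the description is invariant
```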

  2. Object recognition using coding schemes

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz A.

    1992-12-01

    A new technique for recognizing two-dimensional objects independent of scale and orientation is presented. This technique's performance on real imagery of tactical military targets, both occluded and nonoccluded, is evaluated. The robustness of this method with respect to partial occlusion is shown. The relatively small storage requirements and fast search time make it an attractive candidate for real-time applications.
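
    The abstract does not describe the coding scheme itself. As a generic example of a 2-D shape description that is independent of scale and orientation, the sketch below computes Hu's seven moment invariants for a polygonal shape with OpenCV and shows that they are essentially unchanged when the shape is rescaled, rotated, and translated. This is a standard textbook invariant shown for illustration, not the coding scheme evaluated in the paper.

```python
# Generic example of a 2-D description independent of scale and orientation:
# Hu's seven moment invariants (a textbook invariant shown for illustration;
# not the coding scheme evaluated in the cited paper).
import cv2
import numpy as np

def hu_descriptor(polygon):
    """Log-scaled Hu moments of a polygon given as an (n, 2) float32 array."""
    m = cv2.moments(polygon.reshape(-1, 1, 2))
    hu = cv2.HuMoments(m).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # compress dynamic range

def similarity_transform(polygon, scale, theta, shift):
    """Rotate by theta, scale, and translate the polygon vertices."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return (scale * polygon @ R.T + np.asarray(shift)).astype(np.float32)

# An asymmetric quadrilateral, then the same shape rescaled, rotated, shifted.
shape = np.array([[10, 10], [120, 30], [140, 90], [40, 70]], dtype=np.float32)
moved = similarity_transform(shape, scale=2.3, theta=0.9, shift=(300, -50))

print(np.round(hu_descriptor(shape) - hu_descriptor(moved), 3))  # all entries ~ 0
```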

  3. Object recognition using coding schemes

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz A.

    1991-08-01

    A new technique for recognizing two-dimensional objects independent of scale and orientation is presented. This technique's performance on real imagery of tactical military targets, both occluded and nonoccluded, is evaluated. This study shows that this method is robust with respect to partial occlusion. The relatively small storage requirements and fast search time make it an attractive candidate for real-time applications.

  4. Object Recognition Using Range Images.

    DTIC Science & Technology

    1985-12-01

    Contents excerpt: Modeling the Dropouts in Range Images; Repairing the Pixel Dropouts; Recognizing Objects from Range Scenes; Using Range Geometry for Scene... well as possible methods of correcting for these effects. Other factors affecting the correlation coefficient that were considered were pixel dropouts and the beam spot size of the laser. Pixel dropouts were shown to be detrimental to a range image's correlation coefficient, but could be corrected

  5. Object recognition approach based on feature fusion

    NASA Astrophysics Data System (ADS)

    Wang, Runsheng

    2001-09-01

    Multi-sensor information fusion plays an important role in object recognition and many other application fields. Fusion performance depends strongly on the fusion level selected and the approach used. Of the three main fusion levels, feature-level fusion is promising but difficult. This paper develops two schemes for key issues in feature-level fusion. For feature selection, a method is developed that analyzes the mutual relationships among the available features and uses them to order (rank) the features. For object recognition, a multi-level recognition scheme is developed whose procedure can be controlled and updated by analyzing the intermediate decision results, in order to reach a final reliable result. The new approach is applied to recognizing work-piece objects of twelve classes in optical images and open-country objects of four classes based on infrared image sequences and MMW radar. The experimental results are satisfactory.
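
    One common way to "analyze the mutual relationship" between candidate features and use it to order them, as in the feature-selection scheme above, is to rank features by their estimated mutual information with the class label. The sketch below does this with scikit-learn on synthetic two-sensor data; the criterion, feature names, and data are assumptions for illustration, since the abstract does not give the paper's exact formulation.

```python
# Illustrative feature-level fusion step: concatenate features from two
# sensors and rank them by estimated mutual information with the class label.
# The criterion and synthetic data are assumptions; the abstract does not
# give the paper's exact formulation.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n = 400
labels = rng.integers(0, 4, size=n)               # four object classes

# "Optical" features: two informative, one pure noise.
optical = np.c_[labels + rng.normal(0, 0.5, n),
                (labels % 2) + rng.normal(0, 0.5, n),
                rng.normal(0, 1.0, n)]
# "Radar" features: one informative, one pure noise.
radar = np.c_[(labels > 1).astype(float) + rng.normal(0, 0.5, n),
              rng.normal(0, 1.0, n)]

fused = np.hstack([optical, radar])               # feature-level fusion
names = ["opt_1", "opt_2", "opt_noise", "radar_1", "radar_noise"]

mi = mutual_info_classif(fused, labels, random_state=0)
for name, score in sorted(zip(names, mi), key=lambda t: -t[1]):
    print(f"{name:12s} mutual information = {score:.2f}")
```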

  6. A neuromorphic system for video object recognition.

    PubMed

    Khosla, Deepak; Chen, Yang; Kim, Kyungnam

    2014-01-01

    Automated video object recognition is a topic of emerging importance in both defense and civilian applications. This work describes an accurate and low-power neuromorphic architecture and system for real-time automated video object recognition. Our system, Neuromorphic Visual Understanding of Scenes (NEOVUS), is inspired by computational neuroscience models of feed-forward object detection and classification pipelines for processing visual data. The NEOVUS architecture is inspired by the ventral (what) and dorsal (where) streams of the mammalian visual pathway and integrates retinal processing, object detection based on form and motion modeling, and object classification based on convolutional neural networks. The object recognition performance and energy use of the NEOVUS was evaluated by the Defense Advanced Research Projects Agency (DARPA) under the Neovision2 program using three urban area video datasets collected from a mix of stationary and moving platforms. These datasets are challenging and include a large number of objects of different types in cluttered scenes, with varying illumination and occlusion conditions. In a systematic evaluation of five different teams by DARPA on these datasets, the NEOVUS demonstrated the best performance with high object recognition accuracy and the lowest energy consumption. Its energy use was three orders of magnitude lower than two independent state of the art baseline computer vision systems. The dynamic power requirement for the complete system mapped to commercial off-the-shelf (COTS) hardware that includes a 5.6 Megapixel color camera processed by object detection and classification algorithms at 30 frames per second was measured at 21.7 Watts (W), for an effective energy consumption of 5.45 nanoJoules (nJ) per bit of incoming video. These unprecedented results show that the NEOVUS has the potential to revolutionize automated video object recognition toward enabling practical low-power and mobile video processing
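
    The reported energy figure can be sanity-checked from the other numbers in the abstract. Assuming roughly 24 bits per pixel of color video (an assumption, since the bit depth is not stated), 21.7 W spread over 5.6 megapixels at 30 frames per second comes out close to the quoted 5.45 nJ per bit:

```python
# Sanity check of the reported energy per bit, assuming ~24 bits per pixel
# (the bit depth is not stated in the abstract).
power_w = 21.7
pixels_per_frame = 5.6e6
frames_per_s = 30
bits_per_pixel = 24                     # assumption: 8-bit RGB

bits_per_s = pixels_per_frame * frames_per_s * bits_per_pixel
print(f"{power_w / bits_per_s * 1e9:.2f} nJ/bit")   # ~5.38, close to the quoted 5.45
```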

  7. A neuromorphic system for video object recognition

    PubMed Central

    Khosla, Deepak; Chen, Yang; Kim, Kyungnam

    2014-01-01

    Automated video object recognition is a topic of emerging importance in both defense and civilian applications. This work describes an accurate and low-power neuromorphic architecture and system for real-time automated video object recognition. Our system, Neuromorphic Visual Understanding of Scenes (NEOVUS), is inspired by computational neuroscience models of feed-forward object detection and classification pipelines for processing visual data. The NEOVUS architecture is inspired by the ventral (what) and dorsal (where) streams of the mammalian visual pathway and integrates retinal processing, object detection based on form and motion modeling, and object classification based on convolutional neural networks. The object recognition performance and energy use of the NEOVUS was evaluated by the Defense Advanced Research Projects Agency (DARPA) under the Neovision2 program using three urban area video datasets collected from a mix of stationary and moving platforms. These datasets are challenging and include a large number of objects of different types in cluttered scenes, with varying illumination and occlusion conditions. In a systematic evaluation of five different teams by DARPA on these datasets, the NEOVUS demonstrated the best performance with high object recognition accuracy and the lowest energy consumption. Its energy use was three orders of magnitude lower than two independent state of the art baseline computer vision systems. The dynamic power requirement for the complete system mapped to commercial off-the-shelf (COTS) hardware that includes a 5.6 Megapixel color camera processed by object detection and classification algorithms at 30 frames per second was measured at 21.7 Watts (W), for an effective energy consumption of 5.45 nanoJoules (nJ) per bit of incoming video. These unprecedented results show that the NEOVUS has the potential to revolutionize automated video object recognition toward enabling practical low-power and mobile video processing

  8. Probabilistic view clustering in object recognition

    NASA Astrophysics Data System (ADS)

    Camps, Octavia I.; Christoffel, Douglas W.; Pathak, Anjali

    1992-11-01

    To recognize objects and to determine their poses in a scene we need to find correspondences between the features extracted from the image and those of the object models. Models are commonly represented by describing a few characteristic views of the object representing groups of views with similar properties. Most feature-based matching schemes assume that all the features that are potentially visible in a view will appear with equal probability, and the resulting matching algorithms have to allow for 'errors' without really understanding what they mean. PREMIO is an object recognition system that uses CAD models of 3D objects and knowledge of surface reflectance properties, light sources, sensor characteristics, and feature detector algorithms to estimate the probability of the features being detectable and correctly matched. The purpose of this paper is to describe the predictions generated by PREMIO, how they are combined into a single probabilistic model, and illustrative examples showing its use in object recognition.

  9. Recognition of object domain by color distribution

    NASA Technical Reports Server (NTRS)

    Mugitani, Takako; Mifune, Mitsuru; Nagata, Shigeki

    1988-01-01

    For the image processing of an object in its natural image, it is necessary to extract in advance the object to be processed from its image. To accomplish this, the outer shape of the object is extracted through human instruction, which requires a great deal of time and patience. A method is described that sets up a model of the color distribution on the surface of an object and thereby automatically acquires color recognition, a piece of knowledge that represents the properties of the object, from its natural image. A method for recognizing and extracting the object in the image according to the recognized color is also described.
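
    The abstract does not specify the form of the color-distribution model, so the sketch below uses a hypothetical stand-in: a Gaussian model of surface color fitted from pixels sampled on the object, then used to extract matching pixels from an image.

        import numpy as np

        def fit_color_model(sample_pixels):
            """Fit a simple Gaussian model to RGB pixels sampled from the object surface.
            A hypothetical stand-in for the paper's color-distribution model, whose exact
            form the abstract does not specify."""
            mean = sample_pixels.mean(axis=0)
            cov = np.cov(sample_pixels.T) + 1e-3 * np.eye(3)
            return mean, np.linalg.inv(cov)

        def extract_object(image, mean, inv_cov, threshold=9.0):
            """Mark pixels whose squared Mahalanobis distance to the model color is small."""
            diff = image.reshape(-1, 3) - mean
            d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
            return (d2 < threshold).reshape(image.shape[:2])

        # Toy usage: pixels from a reddish object patch build the model,
        # which is then applied to a new image to segment matching regions.
        rng = np.random.default_rng(0)
        object_samples = rng.normal([200.0, 40.0, 40.0], 10.0, size=(500, 3))
        image = rng.integers(0, 255, size=(64, 64, 3)).astype(float)
        mask = extract_object(image, *fit_color_model(object_samples))
        print(mask.shape, int(mask.sum()))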

  10. Object recognition by artificial cortical maps.

    PubMed

    Plebe, Alessio; Domenella, Rosaria Grazia

    2007-09-01

    Object recognition is one of the most important functions of the human visual system, yet one of the least understood, despite the fact that vision is certainly the most studied function of the brain. We understand relatively well how several processes in the cortical visual areas that support recognition take place, such as orientation discrimination and color constancy. This paper proposes a model of the development of object recognition capability, based on two main theoretical principles. The first is that recognition does not imply any sort of geometrical reconstruction; it is instead fully driven by the two-dimensional view captured by the retina. The second is that the processing functions involved in recognition are not genetically determined or hardwired in neural circuits, but are the result of interactions between epigenetic influences and basic neural plasticity mechanisms. The model is organized in modules roughly related to the main biological visual areas and is implemented mainly using the LISSOM architecture, a recent neural self-organizing map model that simulates the effects of intercortical lateral connections. This paper shows how recognition capabilities similar to those found in ventral visual areas of the brain can develop spontaneously through exposure to natural images in an artificial cortical model.

  11. A Neural Network Object Recognition System

    DTIC Science & Technology

    1990-07-01

    useful for exploring different neural network configurations. There are three main computation phases of a model-based object recognition system...segmentation, feature extraction, and object classification. This report focuses on the object classification stage. For segmentation, a neural network-based...are available with the current system. Neural network-based feature extraction may be added at a later date. The classification stage consists of a

  12. HFirst: A Temporal Approach to Object Recognition.

    PubMed

    Orchard, Garrick; Meyer, Cedric; Etienne-Cummings, Ralph; Posch, Christoph; Thakor, Nitish; Benosman, Ryad

    2015-10-01

    This paper introduces a spiking hierarchical model for object recognition which utilizes the precise timing information inherently present in the output of biologically inspired asynchronous address event representation (AER) vision sensors. The asynchronous nature of these systems frees computation and communication from the rigid predetermined timing enforced by system clocks in conventional systems. Freedom from rigid timing constraints opens the possibility of using true timing to our advantage in computation. We show not only how timing can be used in object recognition, but also how it can in fact simplify computation. Specifically, we rely on a simple temporal winner-take-all rather than the more computationally intensive synchronous operations typically used in biologically inspired neural networks for object recognition. This approach to visual computation represents a major paradigm shift from conventional clocked systems and can find application in other sensory modalities and computational tasks. We showcase the effectiveness of the approach by achieving the highest reported accuracy to date (97.5% ± 3.5%) for a previously published four-class card pip recognition task and an accuracy of 84.9% ± 1.9% for a new, more difficult 36-class character recognition task.
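
    A minimal sketch of the temporal winner-take-all idea described above: within each pooling group, only the earliest spike is kept, standing in for the synchronous max-style pooling of clocked models. The data layout and function are illustrative and are not taken from the HFirst implementation.

        import numpy as np

        def temporal_winner_take_all(spike_times):
            """Index of the unit that spikes first in each pooling group.

            spike_times : array of shape (groups, units); np.inf marks units that never
            spiked. The earliest spike carries the evidence and later spikes in the
            group are ignored. Illustrative layout only, not HFirst's implementation."""
            spike_times = np.asarray(spike_times, float)
            winners = spike_times.argmin(axis=1)
            winners[np.isinf(spike_times.min(axis=1))] = -1   # groups with no spike at all
            return winners

        print(temporal_winner_take_all([[3.2, 1.7, np.inf],
                                        [np.inf, np.inf, np.inf]]))   # -> [ 1 -1]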

  13. Integration trumps selection in object recognition.

    PubMed

    Saarela, Toni P; Landy, Michael S

    2015-03-30

    Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several "cues" (color, luminance, texture, etc.), and humans can integrate sensory cues to improve detection and recognition [1-3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue invariance by responding to a given shape independent of the visual cue defining it [5-8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10, 11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11, 12], imaging [13-16], and single-cell and neural population recordings [17, 18]. Besides single features, attention can select whole objects [19-21]. Objects are among the suggested "units" of attention because attention to a single feature of an object causes the selection of all of its features [19-21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection.
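
    The phrase "integrate information near optimally" is usually read against the standard reliability-weighted (inverse-variance) combination rule. The sketch below shows that rule only as a point of reference; it is not claimed to be the authors' analysis, and the numbers are hypothetical.

        import numpy as np

        def integrate_cues(estimates, variances):
            """Reliability-weighted (inverse-variance) cue combination, the standard
            ideal-observer benchmark against which 'near-optimal integration' is
            usually judged; shown only as a reference point, not the authors' analysis."""
            estimates = np.asarray(estimates, float)
            weights = 1.0 / np.asarray(variances, float)
            combined = np.sum(weights * estimates) / np.sum(weights)
            combined_variance = 1.0 / np.sum(weights)
            return combined, combined_variance

        # Hypothetical color, texture, and luminance estimates of the same shape dimension:
        estimate, variance = integrate_cues([1.2, 0.9, 1.1], [0.04, 0.09, 0.16])
        print(estimate, variance)   # the combined variance is below the best single cue (0.04)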

  14. Integration trumps selection in object recognition

    PubMed Central

    Saarela, Toni P.; Landy, Michael S.

    2015-01-01

    Summary Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several “cues” (color, luminance, texture etc.), and humans can integrate sensory cues to improve detection and recognition [1–3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue-invariance by responding to a given shape independent of the visual cue defining it [5–8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10,11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11,12], imaging [13–16], and single-cell and neural population recordings [17,18]. Besides single features, attention can select whole objects [19–21]. Objects are among the suggested “units” of attention because attention to a single feature of an object causes the selection of all of its features [19–21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near-optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection. PMID:25802154

  15. The uncrowded window of object recognition

    PubMed Central

    Pelli, Denis G; Tillman, Katharine A

    2009-01-01

    It is now emerging that vision is usually limited by object spacing rather than size. The visual system recognizes an object by detecting and then combining its features. ‘Crowding’ occurs when objects are too close together and features from several objects are combined into a jumbled percept. Here, we review the explosion of studies on crowding—in grating discrimination, letter and face recognition, visual search, selective attention, and reading—and find a universal principle, the Bouma law. The critical spacing required to prevent crowding is equal for all objects, although the effect is weaker between dissimilar objects. Furthermore, critical spacing at the cortex is independent of object position, and critical spacing at the visual field is proportional to object distance from fixation. The region where object spacing exceeds critical spacing is the ‘uncrowded window’. Observers cannot recognize objects outside of this window and its size limits the speed of reading and search. PMID:18828191
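
    The Bouma law summarized above can be written as a one-line rule: critical spacing grows in proportion to eccentricity. The proportionality constant of roughly 0.5 is the commonly cited value and is used here only as an illustrative assumption.

        def critical_spacing(eccentricity_deg, bouma=0.5):
            """Bouma law: the object spacing needed to escape crowding grows in
            proportion to eccentricity. The constant of roughly 0.5 is the commonly
            cited value and is an illustrative assumption here."""
            return bouma * eccentricity_deg

        def is_crowded(object_spacing_deg, eccentricity_deg, bouma=0.5):
            """True when neighbors fall inside the critical spacing, i.e. outside
            the 'uncrowded window'."""
            return object_spacing_deg < critical_spacing(eccentricity_deg, bouma)

        # Letters 1 degree apart are resolvable at 1 degree eccentricity but crowded at 10 degrees.
        print(is_crowded(1.0, 1.0))     # False: inside the uncrowded window
        print(is_crowded(1.0, 10.0))    # True: crowded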

  16. Neurocomputational bases of object and face recognition.

    PubMed Central

    Biederman, I; Kalocsai, P

    1997-01-01

    A number of behavioural phenomena distinguish the recognition of faces and objects, even when members of a set of objects are highly similar. Because faces have the same parts in approximately the same relations, individuation of faces typically requires specification of the metric variation in a holistic and integral representation of the facial surface. The direct mapping of a hypercolumn-like pattern of activation onto a representation layer that preserves relative spatial filter values in a two-dimensional (2D) coordinate space, as proposed by C. von der Malsburg and his associates, may account for many of the phenomena associated with face recognition. An additional refinement, in which each column of filters (termed a 'jet') is centred on a particular facial feature (or fiducial point), allows selectivity of the input into the holistic representation to avoid incorporation of occluding or nearby surfaces. The initial hypercolumn representation also characterizes the first stage of object perception, but the image variation for objects at a given location in a 2D coordinate space may be too great to yield sufficient predictability directly from the output of spatial kernels. Consequently, objects can be represented by a structural description specifying qualitative (typically, non-accidental) characterizations of an object's parts, the attributes of the parts, and the relations among the parts, largely based on orientation and depth discontinuities (as shown by Hummel & Biederman). A series of experiments on the name priming or physical matching of complementary images (in the Fourier domain) of objects and faces documents that whereas face recognition is strongly dependent on the original spatial filter values, evidence from object recognition indicates strong invariance to these values, even when distinguishing among objects that are as similar as faces. PMID:9304687

  17. Invariant object recognition based on extended fragments.

    PubMed

    Bart, Evgeniy; Hegdé, Jay

    2012-01-01

    Visual appearance of natural objects is profoundly affected by viewing conditions such as viewpoint and illumination. Human subjects can nevertheless compensate well for variations in these viewing conditions. The strategies that the visual system uses to accomplish this are largely unclear. Previous computational studies have suggested that in principle, certain types of object fragments (rather than whole objects) can be used for invariant recognition. However, whether the human visual system is actually capable of using this strategy remains unknown. Here, we show that human observers can achieve illumination invariance by using object fragments that carry the relevant information. To determine this, we have used novel, but naturalistic, 3-D visual objects called "digital embryos." Using novel instances of whole embryos, not fragments, we trained subjects to recognize individual embryos across illuminations. We then tested the illumination-invariant object recognition performance of subjects using fragments. We found that the performance was strongly correlated with the mutual information (MI) of the fragments, provided that the MI value took variations in illumination into account. This correlation was not attributable to any systematic differences in task difficulty between different fragments. These results reveal two important principles of invariant object recognition. First, the subjects can achieve invariance at least in part by compensating for the changes in the appearance of small local features, rather than of whole objects. Second, the subjects do not always rely on generic or pre-existing invariance of features (i.e., features whose appearance remains largely unchanged by variations in illumination), and are capable of using learning to compensate for appearance changes when necessary. These psychophysical results closely fit the predictions of earlier computational studies of fragment-based invariant object recognition.
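
    A simplified sketch of the fragment-informativeness measure: mutual information between a binary fragment-presence variable and object identity, estimated from co-occurrence counts. The study additionally required the estimate to hold up across illumination changes, which this toy version does not model.

        import numpy as np

        def mutual_information_bits(presence, labels):
            """Mutual information (in bits) between a binary fragment-presence variable
            and object identity, estimated from co-occurrence counts. A toy version of
            the fragment-informativeness measure; illumination changes are not modeled."""
            presence = np.asarray(presence, int)
            labels = np.asarray(labels, int)
            mi = 0.0
            for p in np.unique(presence):
                for c in np.unique(labels):
                    joint = np.mean((presence == p) & (labels == c))
                    if joint > 0:
                        mi += joint * np.log2(joint / (np.mean(presence == p) * np.mean(labels == c)))
            return mi

        # A fragment seen only on one object class is maximally informative about identity.
        print(mutual_information_bits([1, 1, 0, 0], [1, 1, 0, 0]))   # 1.0 bit
        print(mutual_information_bits([1, 0, 1, 0], [1, 1, 0, 0]))   # 0.0 bits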

  18. Interactive object recognition assistance: an approach to recognition starting from target objects

    NASA Astrophysics Data System (ADS)

    Geisler, Juergen; Littfass, Michael

    1999-07-01

    Recognition of target objects in remotely sensed imagery requires detailed knowledge about the target object domain as well as about the mapping properties of the sensing system. The art of object recognition is to combine both worlds appropriately and to provide models of target appearance with respect to sensor characteristics. Common approaches to supporting interactive object recognition are either driven from the sensor point of view, addressing the problem of displaying images in a manner adequate to the sensing system, or they focus on target objects and provide exhaustive encyclopedic information about that domain. Our paper discusses an approach to assisting interactive object recognition that is based on knowledge about target objects and takes into account the significance of object features with respect to characteristics of the sensed imagery, e.g. spatial and spectral resolution. An `interactive recognition assistant' takes the image analyst through the interpretation process by indicating step by step the most significant features of the objects in the current set of candidates. The significance of object features is expressed by pregenerated trees of significance and by the dynamic computation of decision relevance for every feature at each step of the recognition process. In the context of this approach we discuss the question of modeling and storing the multisensorial/multispectral appearances of target objects and object classes, as well as the problem of an adequate dynamic human-machine interface that takes into account the various mental models of human image interpretation.

  19. Object recognition difficulty in visual apperceptive agnosia.

    PubMed

    Grossman, M; Galetta, S; D'Esposito, M

    1997-04-01

    Two patients with visual apperceptive agnosia were examined on tasks assessing the appreciation of visual material. Elementary visual functioning was relatively preserved, but they had profound difficulty recognizing and naming line drawings. More detailed evaluation revealed accurate recognition of regular geometric shapes and colors, but performance deteriorated when the shapes were made more complex visually, when multiple-choice arrays contained larger numbers of simple targets and foils, and when a mental manipulation such as a rotation was required. The recognition of letters and words was similarly compromised. Naming, recognition, and anomaly judgments of colored pictures and real objects were more accurate than similar decisions involving black-and-white line drawings. Visual imagery for shapes, letters, and objects appeared to be more accurate than visual perception of the same materials. We hypothesize that object recognition difficulty in visual apperceptive agnosia is due to two related factors: the impaired appreciation of the visual perceptual features that constitute objects, and a limitation in the cognitive resources that are available for processing demanding material within the visual modality.

  20. The Functional Architecture of Visual Object Recognition

    DTIC Science & Technology

    1991-07-01

    different forms of agnosia can provide clues to the representations underlying normal object recognition (Farah, 1990). For example, the pair-wise...patterns of deficit and sparing occur. In a review of 99 published cases of agnosia, the observed patterns of co-occurrence implicated two underlying

  1. Exploiting core knowledge for visual object recognition.

    PubMed

    Schurgin, Mark W; Flombaum, Jonathan I

    2017-03-01

    Humans recognize thousands of objects, and with relative tolerance to variable retinal inputs. The acquisition of this ability is not fully understood, and it remains an area in which artificial systems have yet to surpass people. We sought to investigate the memory process that supports object recognition. Specifically, we investigated the association of inputs that co-occur over short periods of time. We tested the hypothesis that human perception exploits expectations about object kinematics to limit the scope of association to inputs that are likely to have the same token as a source. In several experiments we exposed participants to images of objects, and we then tested recognition sensitivity. Using motion, we manipulated whether successive encounters with an image took place through kinematics that implied the same or a different token as the source of those encounters. Images were injected with noise, or shown at varying orientations, and we included 2 manipulations of motion kinematics. Across all experiments, memory performance was better for images that had been previously encountered with kinematics that implied a single token. A model-based analysis similarly showed greater memory strength when images were shown via kinematics that implied a single token. These results suggest that constraints from physics are built into the mechanisms that support memory about objects. Such constraints, often characterized as 'Core Knowledge', are known to support perception and cognition broadly, even in young infants. But they have never been considered as a mechanism for memory with respect to recognition.

  2. Object recognition with hierarchical discriminant saliency networks

    PubMed Central

    Han, Sunhyoung; Vasconcelos, Nuno

    2014-01-01

    The benefits of integrating attention and object recognition are investigated. While attention is frequently modeled as a pre-processor for recognition, we investigate the hypothesis that attention is an intrinsic component of recognition and vice-versa. This hypothesis is tested with a recognition model, the hierarchical discriminant saliency network (HDSN), whose layers are top-down saliency detectors, tuned for a visual class according to the principles of discriminant saliency. As a model of neural computation, the HDSN has two possible implementations. In a biologically plausible implementation, all layers comply with the standard neurophysiological model of visual cortex, with sub-layers of simple and complex units that implement a combination of filtering, divisive normalization, pooling, and non-linearities. In a convolutional neural network implementation, all layers are convolutional and implement a combination of filtering, rectification, and pooling. The rectification is performed with a parametric extension of the now popular rectified linear units (ReLUs), whose parameters can be tuned for the detection of target object classes. This enables a number of functional enhancements over neural network models that lack a connection to saliency, including optimal feature denoising mechanisms for recognition, modulation of saliency responses by the discriminant power of the underlying features, and the ability to detect both feature presence and absence. In either implementation, each layer has a precise statistical interpretation, and all parameters are tuned by statistical learning. Each saliency detection layer learns more discriminant saliency templates than its predecessors and higher layers have larger pooling fields. This enables the HDSN to simultaneously achieve high selectivity to target object classes and invariance. The performance of the network in saliency and object recognition tasks is compared to those of models from the biological and
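
    The parametric extension of the ReLU mentioned in the abstract can be illustrated with a generic two-parameter rectifier. The HDSN's exact parameterization is not given in the abstract, so the form below is only an assumption used for illustration.

        import numpy as np

        def parametric_relu(x, threshold=0.0, negative_slope=0.0):
            """A generic two-parameter rectifier: shift the rectification point and allow
            a tunable response below it. The HDSN's exact parameterization is not given
            in the abstract, so this form (and the idea of tuning both parameters per
            target class) is an assumption used for illustration."""
            x = np.asarray(x, float)
            return np.where(x > threshold, x - threshold, negative_slope * (x - threshold))

        print(parametric_relu([-1.0, 0.5, 2.0], threshold=0.5, negative_slope=0.1))
        # -> [-0.15  0.    1.5 ]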

  3. L2 Gender Facilitation and Inhibition in Spoken Word Recognition

    ERIC Educational Resources Information Center

    Behney, Jennifer N.

    2011-01-01

    This dissertation investigates the role of grammatical gender facilitation and inhibition in second language (L2) learners' spoken word recognition. Native speakers of languages that have grammatical gender are sensitive to gender marking when hearing and recognizing a word. Gender facilitation refers to when a given noun that is preceded by an…

  5. Automatic recognition of partially occluded objects

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz A.

    1992-09-01

    Machine recognition of partially occluded objects is essential for any realistic automatic object recognition (AOR) system. This important problem, however, has been mostly ignored by researchers and developers of AOR systems. Owing to this lack of attention, little work has been done even on formulating the problem, let alone solving it. In this paper I analyze the occlusion problem, define its various categories, and present an approach to its solution in some of these categories. I also present empirical results from implementing the approach on real imagery. These results have been encouraging so far, but more work remains to be done to resolve this problem.

  6. [Neural mechanisms for object and color recognition].

    PubMed

    Koyama, Shinichi; Kawamura, Mitsuru

    2007-01-01

    We reported a double dissociation between the visual processing of the edges and the surfaces of objects. Patients with lateral occipital damage showed selective impairment in the perception of edges, whereas those with medial ventral occipital damage showed selective impairment in the perception of the 3D structure of the surface. Patients with medial ventral occipital damage also exhibited impaired perception of color, which is also a surface property. These results were consistent with those from neuroimaging studies. Taken together, these studies suggest that objects may be processed in two separate pathways in the ventral occipital cortex: the edges of objects are processed in the lateral pathway and the surfaces of objects in the medial pathway. Both edges and surfaces play important roles in object recognition, and both types of perception should be evaluated in patients with visual agnosia.

  7. The influence of color information on the recognition of color diagnostic and noncolor diagnostic objects.

    PubMed

    Bramão, Inês; Inácio, Filomena; Faísca, Luís; Reis, Alexandra; Petersson, Karl Magnus

    2011-01-01

    In the present study, the authors explore in detail the level of visual object recognition at which perceptual color information improves the recognition of color diagnostic and noncolor diagnostic objects. To address this issue, 3 object recognition tasks with different cognitive demands were designed: (a) an object verification task; (b) a category verification task; and (c) a name verification task. The authors found that perceptual color information improved color diagnostic object recognition mainly in tasks for which access to the semantic knowledge about the object was necessary to perform the task; that is, in category and name verification. In contrast, the authors found that perceptual color information facilitated noncolor diagnostic object recognition when access to the object's structural description from long-term memory was necessary--that is, in object verification. In summary, the present study shows that the role of perceptual color information in object recognition is dependent on color diagnosticity.

  8. Perceptual Plasticity for Auditory Object Recognition

    PubMed Central

    Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.

    2017-01-01

    In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples

  9. Infants' recognition of objects using canonical color.

    PubMed

    Kimura, Atsushi; Wada, Yuji; Yang, Jiale; Otsuka, Yumiko; Dan, Ippeita; Masuda, Tomohiro; Kanazawa, So; Yamaguchi, Masami K

    2010-03-01

    We explored infants' ability to recognize the canonical colors of daily objects, including two color-specific objects (human face and fruit) and a non-color-specific object (flower), by using a preferential looking technique. A total of 58 infants between 5 and 8 months of age were tested with a stimulus composed of two color pictures of an object placed side by side: a correctly colored picture (e.g., red strawberry) and an inappropriately colored picture (e.g., green-blue strawberry). The results showed that, overall, the 6- to 8-month-olds showed preference for the correctly colored pictures for color-specific objects, whereas they did not show preference for the correctly colored pictures for the non-color-specific object. The 5-month-olds showed no significant preference for the correctly colored pictures for all object conditions. These findings imply that the recognition of canonical color for objects emerges at 6 months of age.

  10. Automatic anatomy recognition of sparse objects

    NASA Astrophysics Data System (ADS)

    Zhao, Liming; Udupa, Jayaram K.; Odhner, Dewey; Wang, Huiqian; Tong, Yubing; Torigian, Drew A.

    2015-03-01

    A general body-wide automatic anatomy recognition (AAR) methodology was proposed in our previous work based on hierarchical fuzzy models of multitudes of objects which was not tied to any specific organ system, body region, or image modality. That work revealed the challenges encountered in modeling, recognizing, and delineating sparse objects throughout the body (compared to their non-sparse counterparts) if the models are based on the object's exact geometric representations. The challenges stem mainly from the variation in sparse objects in their shape, topology, geographic layout, and relationship to other objects. That led to the idea of modeling sparse objects not from the precise geometric representations of their samples but by using a properly designed optimal super form. This paper presents the underlying improved methodology which includes 5 steps: (a) Collecting image data from a specific population group G and body region Β and delineating in these images the objects in Β to be modeled; (b) Building a super form, S-form, for each object O in Β; (c) Refining the S-form of O to construct an optimal (minimal) super form, S*-form, which constitutes the (fuzzy) model of O; (d) Recognizing objects in Β using the S*-form; (e) Defining confounding and background objects in each S*-form for each object and performing optimal delineation. Our evaluations based on 50 3D computed tomography (CT) image sets in the thorax on four sparse objects indicate that substantially improved performance (FPVF~2%, FNVF~10%, and success where the previous approach failed) can be achieved using the new approach.

  11. Object and event recognition for stroke rehabilitation

    NASA Astrophysics Data System (ADS)

    Ghali, Ahmed; Cunningham, Andrew S.; Pridmore, Tony P.

    2003-06-01

    Stroke is a major cause of disability and health care expenditure around the world. Existing stroke rehabilitation methods can be effective but are costly and need to be improved. Even modest improvements in the effectiveness of rehabilitation techniques could produce large benefits in terms of quality of life. The work reported here is part of an ongoing effort to integrate virtual reality and machine vision technologies to produce innovative stroke rehabilitation methods. We describe a combined object recognition and event detection system that provides real time feedback to stroke patients performing everyday kitchen tasks necessary for independent living, e.g. making a cup of coffee. The image plane position of each object, including the patient's hand, is monitored using histogram-based recognition methods. The relative positions of hand and objects are then reported to a task monitor that compares the patient's actions against a model of the target task. A prototype system has been constructed and is currently undergoing technical and clinical evaluation.

  12. The resilience of object predictions: Early recognition across viewpoints and exemplars

    PubMed Central

    Cheung, Olivia S.; Bar, Moshe

    2013-01-01

    Recognition of everyday objects can be facilitated by top-down predictions. We have proposed that these predictions are derived from rudimentary shape information, or gist, extracted rapidly from low spatial frequencies (LSFs) in the image (Bar, 2003). Because of the coarse nature of LSF representations, we hypothesize here that such predictions can accommodate changes in viewpoint as well as facilitate the recognition of visually similar objects. In a repetition-priming task, we indeed observed significant facilitation of target recognition that was primed by LSF objects across moderate viewpoint changes, as well as across visually similar exemplars. These results suggest that the LSF representations are specific enough to activate accurate predictions, yet flexible enough to overcome small changes in visual appearance. Such gist representations facilitate object recognition by accommodating changes in visual appearance due to viewing conditions and help to generalize from familiar to novel exemplars. PMID:24234168
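
    A minimal sketch of what an LSF "gist" representation could look like computationally: keep only the lowest spatial frequencies of the input image. The cutoff value is an illustrative choice, not a parameter from the paper.

        import numpy as np

        def low_spatial_frequency_gist(image, cutoff=8):
            """Keep only the lowest spatial frequencies of a grayscale image, a rough
            computational stand-in for the LSF 'gist' proposed to drive top-down
            predictions. The cutoff (number of frequency components kept per axis)
            is an illustrative choice, not a value from the paper."""
            spectrum = np.fft.fftshift(np.fft.fft2(np.asarray(image, float)))
            rows, cols = spectrum.shape
            mask = np.zeros_like(spectrum)
            r0, c0 = rows // 2, cols // 2
            mask[r0 - cutoff:r0 + cutoff, c0 - cutoff:c0 + cutoff] = 1
            return np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real

        gist = low_spatial_frequency_gist(np.random.rand(128, 128))
        print(gist.shape)   # same size as the input, but containing only coarse structure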

  13. Mechanisms of top-down facilitation in perception of visual objects studied by fMRI

    PubMed Central

    Eger, E.; Henson, R. N.; Driver, J.; Dolan, R. J.

    2008-01-01

    Summary Prior knowledge regarding the possible identity of an object facilitates its recognition from a degraded visual input, though the underlying mechanisms are unclear. Previous work implicated ventral visual cortex, but did not disambiguate whether activity-changes in these regions are causal to or merely reflect an effect of facilitated recognition. We used fMRI to study top-down influences on processing of gradually-revealed objects, by preceding each object with a name that was congruent or incongruent with the object. Congruently primed objects were recognised earlier than incongruently primed, and this was paralleled by shifts in activation profiles for ventral visual, parietal and prefrontal cortices. Prior to recognition, defined on a trial-by-trial basis, activity in ventral visual cortex rose gradually, but equivalently for congruently and incongruently primed objects. In contrast, pre-recognition activity was greater with congruent priming in lateral parietal, retrosplenial, and lateral prefrontal cortices, while functional coupling between parietal and ventral visual (and also left lateral prefrontal and parietal) cortices was enhanced in the same context. Thus, when controlling for recognition point and stimulus information, activity in ventral visual cortex mirrors recognition success, independent of condition. Facilitation by top-down cues involves lateral parietal cortex interacting with ventral visual areas, potentially explaining why parietal lesions can lead to deficits in recognising degraded objects even in the context of top-down knowledge. PMID:17101690

  14. Automatic anatomy recognition via fuzzy object models

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.; Odhner, Dewey; Falcão, Alexandre X.; Ciesielski, Krzysztof C.; Miranda, Paulo A. V.; Matsumoto, Monica; Grevera, George J.; Saboury, Babak; Torigian, Drew A.

    2012-02-01

    To make Quantitative Radiology a reality in routine radiological practice, computerized automatic anatomy recognition (AAR) during radiological image reading becomes essential. As part of this larger goal, last year at this conference we presented a fuzzy strategy for building body-wide group-wise anatomic models. In the present paper, we describe the further advances made in fuzzy modeling and the algorithms and results achieved for AAR by using the fuzzy models. The proposed AAR approach consists of three distinct steps: (a) building fuzzy object models (FOMs) for each population group G; (b) using the FOMs to recognize the individual objects in any given patient image I under group G; (c) delineating the recognized objects in I. This paper will focus mostly on (b). FOMs are built hierarchically, the smaller sub-objects forming the offspring of larger parent objects. The hierarchical pose relationships from the parent to offspring are codified in the FOMs. Several approaches are being explored currently, grouped under two strategies, both being hierarchical: (ra1) those using search strategies; (ra2) those strategizing a one-shot approach by which the model pose is directly estimated without searching. Based on 32 patient CT data sets each from the thorax and abdomen and 25 objects modeled, our analysis indicates that objects do not all scale uniformly with patient size. Even the simplest among the (ra2) strategies of recognizing the root object and then placing all other descendants as per the learned parent-to-offspring pose relationship brings the models on average to within about 18 mm of the true locations.

  15. Activation of Supraoptic Oxytocin Neurons by Secretin Facilitates Social Recognition.

    PubMed

    Takayanagi, Yuki; Yoshida, Masahide; Takashima, Akihide; Takanami, Keiko; Yoshida, Shoma; Nishimori, Katsuhiko; Nishijima, Ichiko; Sakamoto, Hirotaka; Yamagata, Takanori; Onaka, Tatsushi

    2017-02-01

    Social recognition underlies social behavior in animals, and patients with psychiatric disorders associated with social deficits show abnormalities in social recognition. Oxytocin is implicated in social behavior and has received attention as an effective treatment for sociobehavioral deficits. Secretin receptor-deficient mice show deficits in social behavior. The relationship between oxytocin and secretin concerning social behavior remains to be determined. Expression of c-Fos in oxytocin neurons and release of oxytocin from their dendrites after secretin application were investigated. Social recognition was examined after intracerebroventricular or local injection of secretin, oxytocin, or an oxytocin receptor antagonist in rats, oxytocin receptor-deficient mice, and secretin receptor-deficient mice. Electron and light microscopic immunohistochemical analysis was also performed to determine whether oxytocin neurons extend their dendrites into the medial amygdala. Supraoptic oxytocin neurons expressed the secretin receptor. Secretin activated supraoptic oxytocin neurons and facilitated oxytocin release from dendrites. Secretin increased acquisition of social recognition in an oxytocin receptor-dependent manner. Local application of secretin into the supraoptic nucleus facilitated social recognition, and this facilitation was blocked by an oxytocin receptor antagonist injected into, but not outside of, the medial amygdala. In the medial amygdala, dendrite-like thick oxytocin processes were found to extend from the supraoptic nucleus. Furthermore, oxytocin treatment restored deficits of social recognition in secretin receptor-deficient mice. The results of our study demonstrate that secretin-induced dendritic oxytocin release from supraoptic neurons enhances social recognition. The newly defined secretin-oxytocin system may lead to a possible treatment for social deficits.

  16. Field of attention for instantaneous object recognition.

    PubMed

    Yao, Jian-Gao; Gao, Xin; Yan, Hong-Mei; Li, Chao-Yi

    2011-01-21

    Instantaneous object discrimination and categorization are fundamental cognitive capacities performed with the guidance of visual attention. Visual attention enables selection of a salient object within a limited area of the visual field, which we refer to as the "field of attention" (FA). Though there is some evidence concerning the spatial extent of object recognition, the following questions remain open: (a) how large is the FA for rapid object categorization, (b) how is the accuracy of attention distributed over the FA, and (c) how fast can complex objects be categorized when presented against backgrounds formed by natural scenes. To answer these questions, we used a visual perceptual task in which subjects were asked to focus their attention on a point while being required to categorize briefly flashed (20 ms) photographs of natural scenes by indicating whether or not these contained an animal. By measuring the accuracy of categorization at different eccentricities from the fixation point, we were able to determine the spatial extent and the distribution of accuracy over the FA, as well as the speed of categorizing objects using stimulus onset asynchrony (SOA). Our results revealed that subjects are able to rapidly categorize complex natural images within about 0.1 s without eye movement, and showed that the FA for instantaneous image categorization covers a visual field extending 20° × 24°; accuracy was highest (>90%) at the center of the FA and declined with increasing eccentricity. In conclusion, human beings are able to categorize complex natural images at a glance over a large extent of the visual field without eye movement.

  17. Multisensory interactions between auditory and haptic object recognition.

    PubMed

    Kassuba, Tanja; Menz, Mareike M; Röder, Brigitte; Siebner, Hartwig R

    2013-05-01

    Object manipulation produces characteristic sounds and causes specific haptic sensations that facilitate the recognition of the manipulated object. To identify the neural correlates of audio-haptic binding of object features, healthy volunteers underwent functional magnetic resonance imaging while they matched a target object to a sample object within and across audition and touch. By introducing a delay between the presentation of sample and target stimuli, it was possible to dissociate haptic-to-auditory and auditory-to-haptic matching. We hypothesized that only semantically coherent auditory and haptic object features activate cortical regions that host unified conceptual object representations. The left fusiform gyrus (FG) and posterior superior temporal sulcus (pSTS) showed increased activation during crossmodal matching of semantically congruent but not incongruent object stimuli. In the FG, this effect was found for haptic-to-auditory and auditory-to-haptic matching, whereas the pSTS only displayed a crossmodal matching effect for congruent auditory targets. Auditory and somatosensory association cortices showed increased activity during crossmodal object matching which was, however, independent of semantic congruency. Together, the results show multisensory interactions at different hierarchical stages of auditory and haptic object processing. Object-specific crossmodal interactions culminate in the left FG, which may provide a higher order convergence zone for conceptual object knowledge.

  18. Neural Representations that Support Invariant Object Recognition

    PubMed Central

    Goris, Robbe L. T.; Op de Beeck, Hans P.

    2008-01-01

    Neural mechanisms underlying invariant behaviour such as object recognition are not well understood. For brain regions critical for object recognition, such as inferior temporal cortex (ITC), there is now ample evidence indicating that single cells code for many stimulus aspects, implying that only a moderate degree of invariance is present. However, recent theoretical and empirical work seems to suggest that integrating responses of multiple non-invariant units may produce invariant representations at population level. We provide an explicit test for the hypothesis that a linear read-out mechanism of a pool of units resembling ITC neurons may achieve invariant performance in an identification task. A linear classifier was trained to decode a particular value in a 2-D stimulus space using as input the response pattern across a population of units. Only one dimension was relevant for the task, and the stimulus location on the irrelevant dimension (ID) was kept constant during training. In a series of identification tests, the stimulus location on the relevant dimension (RD) and ID was manipulated, yielding estimates for both the level of sensitivity and tolerance reached by the network. We studied the effects of several single-cell characteristics as well as population characteristics typically considered in the literature, but found little support for the hypothesis. While the classifier averages out effects of idiosyncratic tuning properties and inter-unit variability, its invariance is very much determined by the (hypothetical) ‘average’ neuron. Consequently, even at population level there exists a fundamental trade-off between selectivity and tolerance, and invariant behaviour does not emerge spontaneously. PMID:19242556
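
    The read-out scheme described above can be sketched with Gaussian-tuned model units and a least-squares linear decoder trained at one value of the irrelevant dimension and tested at another. All tuning parameters and population sizes below are illustrative assumptions, not the units or analysis used in the study.

        import numpy as np

        rng = np.random.default_rng(1)

        def unit_responses(rd, irrelevant, centers_rd, centers_id, width=1.0):
            """Gaussian tuning of model units over a 2-D stimulus space (relevant
            dimension RD, irrelevant dimension ID); an illustrative choice only."""
            return np.exp(-((rd - centers_rd) ** 2 + (irrelevant - centers_id) ** 2) / (2 * width ** 2))

        # A population of non-invariant units tuned to random points in the space.
        centers_rd, centers_id = rng.uniform(-3, 3, 50), rng.uniform(-3, 3, 50)

        # Train a least-squares linear read-out to report RD with ID fixed at 0.
        train_rd = rng.uniform(-3, 3, 200)
        X_train = np.stack([unit_responses(r, 0.0, centers_rd, centers_id) for r in train_rd])
        w, *_ = np.linalg.lstsq(np.c_[X_train, np.ones(len(X_train))], train_rd, rcond=None)

        # Test at a new value of the irrelevant dimension to probe tolerance.
        test_rd = rng.uniform(-3, 3, 200)
        for test_id in (0.0, 2.0):
            X_test = np.stack([unit_responses(r, test_id, centers_rd, centers_id) for r in test_rd])
            pred = np.c_[X_test, np.ones(len(X_test))] @ w
            print(test_id, np.mean((pred - test_rd) ** 2))   # error typically rises when ID shifts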

  19. Examining Object Location and Object Recognition Memory in Mice

    PubMed Central

    Vogel-Ciernia, Annie; Wood, Marcelo A.

    2014-01-01

    Unit Introduction The ability to store and recall our life experiences defines a person's identity. Consequently, the loss of long-term memory is a particularly devastating part of a variety of cognitive disorders, diseases and injuries. There is a great need to develop therapeutics to treat memory disorders, and thus a variety of animal models and memory paradigms have been developed. Mouse models have been widely used both to study basic disease mechanisms and to evaluate potential drug targets for therapeutic development. The relative ease of genetic manipulation of Mus musculus has led to a wide variety of genetically altered mice that model cognitive disorders ranging from Alzheimer's disease to autism. Rodents, including mice, are particularly adept at encoding and remembering spatial relationships, and these long-term spatial memories are dependent on the medial temporal lobe of the brain. These brain regions are also some of the first and most heavily impacted in disorders of human memory including Alzheimer's disease. Consequently, some of the simplest and most commonly used tests of long-term memory in mice are those that examine memory for objects and spatial relationships. However, many of these tasks, such as Morris water maze and contextual fear conditioning, are dependent upon the encoding and retrieval of emotionally aversive and inherently stressful training events. While these types of memories are important, they do not reflect the typical day-to-day experiences or memories most commonly affected in human disease. In addition, stress hormone release alone can modulate memory and thus obscure or artificially enhance these types of tasks. To avoid these sorts of confounds, we and many others have utilized tasks testing animals’ memory for object location and novel object recognition. These tasks involve exploiting rodents’ innate preference for novelty, and are inherently not stressful. In this protocol we detail how memory for object location

  20. Tactile Feedback of Object Slip Facilitates Virtual Object Manipulation.

    PubMed

    Walker, Julie M; Blank, Amy A; Shewokis, Patricia A; O'Malley, Marcia K

    2015-01-01

    Recent advances in myoelectric prosthetic technology have enabled more complex movements and interactions with objects, but the lack of natural haptic feedback makes object manipulation difficult to perform. Our research effort aims to develop haptic feedback systems for improving user performance in object manipulation. Specifically, in this work, we explore the effectiveness of vibratory tactile feedback of slip information for grasping objects without slipping. A user interacts with a virtual environment to complete a virtual grasp and hold task using a Sensable Phantom. Force feedback simulates contact with objects, and vibratory tactile feedback alerts the user when a virtual object is slipping from the grasp. Using this task, we found that tactile feedback significantly improved a user's ability to detect and respond to slip and to recover the slipping object when visual feedback was not available. This advantage of tactile feedback is especially important in conjunction with force feedback, which tends to reduce a subject's grasping forces and therefore encourage more slips. Our results demonstrate the potential of slip feedback to improve a prosthesis user's ability to interact with objects with less visual attention, aiding in performance of everyday manipulation tasks.
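
    A minimal sketch of the kind of slip-to-vibration mapping the abstract describes: once detected slip exceeds a threshold, vibration amplitude scales with slip speed. The threshold and scaling are assumptions made for illustration; the paper states only that vibration alerts the user when the virtual object slips from the grasp.

        def vibration_amplitude(slip_speed_mm_s, threshold=1.0, max_speed=20.0):
            """Map detected object slip speed to a vibrotactile amplitude in [0, 1].
            The threshold and scaling are assumptions made for illustration; the paper
            states only that vibration alerts the user when the virtual object slips."""
            if slip_speed_mm_s < threshold:
                return 0.0
            return min(1.0, (slip_speed_mm_s - threshold) / (max_speed - threshold))

        for speed in (0.5, 5.0, 25.0):
            print(speed, vibration_amplitude(speed))   # 0.0, then graded, then saturated at 1.0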

  1. Visual appearance interacts with conceptual knowledge in object recognition

    PubMed Central

    Cheung, Olivia S.; Gauthier, Isabel

    2014-01-01

    Objects contain rich visual and conceptual information, but do these two types of information interact? Here, we examine whether visual and conceptual information interact when observers see novel objects for the first time. We then address how this interaction influences the acquisition of perceptual expertise. We used two types of novel objects (Greebles), designed to resemble either animals or tools, and two lists of words, which described non-visual attributes of people or man-made objects. Participants first judged whether a word was more suitable for describing people or objects while ignoring a task-irrelevant image, and showed faster responses if the words and the unfamiliar objects were congruent in terms of animacy (e.g., animal-like objects paired with words that described humans). Participants then learned to associate objects and words that were either congruent or not in animacy, before receiving expertise training to rapidly individuate the objects. Congruent pairing of visual and conceptual information facilitated observers' ability to become a perceptual expert, as revealed in a matching task that required visual identification at the basic or subordinate levels. Taken together, these findings show that visual and conceptual information interact at multiple levels in object recognition. PMID:25120509

  2. Touching and Hearing Unseen Objects: Multisensory Effects on Scene Recognition

    PubMed Central

    van Lier, Rob

    2016-01-01

    In three experiments, we investigated the influence of object-specific sounds on haptic scene recognition without vision. Blindfolded participants had to recognize, through touch, spatial scenes comprising six objects that were placed on a round platform. Critically, in half of the trials, object-specific sounds were played when objects were touched (bimodal condition), while sounds were turned off in the other half of the trials (unimodal condition). After the participants had first explored the scene, two objects were swapped, and the task was to report which of the objects had swapped positions. In Experiment 1, geometrical objects and simple sounds were used, while in Experiment 2, the objects comprised toy animals that were matched with semantically compatible animal sounds. In Experiment 3, we replicated Experiment 1, but now a tactile-auditory object identification task preceded the experiment, in which the participants learned to identify the objects based on tactile and auditory input. For each experiment, the results revealed a significant performance increase only after the switch from bimodal to unimodal. Thus, it appears that the release from bimodal identification, from audio-tactile to tactile-only, produces a benefit that is not achieved with the reversed order, in which sound was added after experience with haptic-only exploration. We conclude that task-related factors other than mere bimodal identification cause the facilitation when switching from bimodal to unimodal conditions. PMID:27698985

  3. The Role of Perceptual Load in Object Recognition

    ERIC Educational Resources Information Center

    Lavie, Nilli; Lin, Zhicheng; Zokaei, Nahid; Thoma, Volker

    2009-01-01

    Predictions from perceptual load theory (Lavie, 1995, 2005) regarding object recognition across the same or different viewpoints were tested. Results showed that high perceptual load reduces distracter recognition levels despite always presenting distracter objects from the same view. They also showed that the levels of distracter recognition were…

  4. Hippocampal histone acetylation regulates object recognition and the estradiol-induced enhancement of object recognition.

    PubMed

    Zhao, Zaorui; Fan, Lu; Fortress, Ashley M; Boulware, Marissa I; Frick, Karyn M

    2012-02-15

    Histone acetylation has recently been implicated in learning and memory processes, yet the necessity of histone acetylation for such processes has not been demonstrated using pharmacological inhibitors of histone acetyltransferases (HATs). As such, the present study tested whether garcinol, a potent HAT inhibitor in vitro, could impair hippocampal memory consolidation and block the memory-enhancing effects of the modulatory hormone 17β-estradiol (E2). We first showed that bilateral infusion of garcinol (0.1, 1, or 10 μg/side) into the dorsal hippocampus (DH) immediately after training impaired object recognition memory consolidation in ovariectomized female mice. A behaviorally effective dose of garcinol (10 μg/side) also significantly decreased DH HAT activity. We next examined whether DH infusion of a behaviorally subeffective dose of garcinol (1 ng/side) could block the effects of DH E2 infusion on object recognition and epigenetic processes. Immediately after training, ovariectomized female mice received bilateral DH infusions of vehicle, E2 (5 μg/side), garcinol (1 ng/side), or E2 plus garcinol. Forty-eight hours later, garcinol blocked the memory-enhancing effects of E2. Garcinol also reversed the E2-induced increase in DH histone H3 acetylation, HAT activity, and levels of the de novo methyltransferase DNMT3B, as well as the E2-induced decrease in levels of the memory repressor protein histone deacetylase 2. Collectively, these findings suggest that histone acetylation is critical for object recognition memory consolidation and for the beneficial effects of E2 on object recognition. Importantly, this work demonstrates that the role of histone acetylation in memory processes can be studied using a HAT inhibitor.

  5. A new selective developmental deficit: Impaired object recognition with normal face recognition.

    PubMed

    Germine, Laura; Cashdollar, Nathan; Düzel, Emrah; Duchaine, Bradley

    2011-05-01

    Studies of developmental deficits in face recognition, or developmental prosopagnosia, have shown that individuals who have not suffered brain damage can show face recognition impairments coupled with normal object recognition (Duchaine and Nakayama, 2005; Duchaine et al., 2006; Nunn et al., 2001). However, no developmental cases with the opposite dissociation - normal face recognition with impaired object recognition - have been reported. The existence of a case of non-face developmental visual agnosia would indicate that the development of normal face recognition mechanisms does not rely on the development of normal object recognition mechanisms. To see whether a developmental variant of non-face visual object agnosia exists, we conducted a series of web-based object and face recognition tests to screen for individuals showing object recognition memory impairments but not face recognition impairments. Through this screening process, we identified AW, an otherwise normal 19-year-old female, who was then tested in the lab on face and object recognition tests. AW's performance was impaired in within-class visual recognition memory across six different visual categories (guns, horses, scenes, tools, doors, and cars). In contrast, she scored normally on seven tests of face recognition, tests of memory for two other object categories (houses and glasses), and tests of recall memory for visual shapes. Testing confirmed that her impairment was not related to a general deficit in lower-level perception, object perception, basic-level recognition, or memory. AW's results provide the first neuropsychological evidence that recognition memory for non-face visual object categories can be selectively impaired in individuals without brain damage or other memory impairment. These results indicate that the development of recognition memory for faces does not depend on intact object recognition memory and provide further evidence for category-specific dissociations in visual

  6. Regularity Detection As A Strategy In Object Modelling And Recognition

    NASA Astrophysics Data System (ADS)

    van Gool, Luc J.; Wagemans, Johan; Oosterlinck, Andre J.

    1989-03-01

    Human subjects easily perceive and extensively use shape regularities such as symmetry or periodicity when they are confronted with the task of object description and recognition. A computer vision algorithm is presented which emulates such behaviour in that it similarly makes use of shape redundancies for the concise description and meaningful segmentation of object contours. This can be compared with the way in which designers proceed in using CAD/CAM. In order to make the problem more accessible to computer programming, the contours are analyzed in so-called 'arc length space'. This novel mapping facilitates the detection and elimination of regularities under a broad range of viewing conditions and yields a natural basis for the formulation of the corresponding model compression rules. Several of the regularities which have traditionally been treated separately, are given a unified substrate.

  7. Object recognition memory: neurobiological mechanisms of encoding, consolidation and retrieval.

    PubMed

    Winters, Boyer D; Saksida, Lisa M; Bussey, Timothy J

    2008-07-01

    Tests of object recognition memory, or the judgment of the prior occurrence of an object, have made substantial contributions to our understanding of the nature and neurobiological underpinnings of mammalian memory. Only in recent years, however, have researchers begun to elucidate the specific brain areas and neural processes involved in object recognition memory. The present review considers some of this recent research, with an emphasis on studies addressing the neural bases of perirhinal cortex-dependent object recognition memory processes. We first briefly discuss operational definitions of object recognition and the common behavioural tests used to measure it in non-human primates and rodents. We then consider research from the non-human primate and rat literature examining the anatomical basis of object recognition memory in the delayed nonmatching-to-sample (DNMS) and spontaneous object recognition (SOR) tasks, respectively. The results of these studies overwhelmingly favor the view that perirhinal cortex (PRh) is a critical region for object recognition memory. We then discuss the involvement of PRh in the different stages--encoding, consolidation, and retrieval--of object recognition memory. Specifically, recent work in rats has indicated that neural activity in PRh contributes to object memory encoding, consolidation, and retrieval processes. Finally, we consider the pharmacological, cellular, and molecular factors that might play a part in PRh-mediated object recognition memory. Recent studies in rodents have begun to indicate the remarkable complexity of the neural substrates underlying this seemingly simple aspect of declarative memory.

  8. The role of perceptual load in object recognition.

    PubMed

    Lavie, Nilli; Lin, Zhicheng; Zokaei, Nahid; Thoma, Volker

    2009-10-01

    Predictions from perceptual load theory (Lavie, 1995, 2005) regarding object recognition across the same or different viewpoints were tested. Results showed that high perceptual load reduces distracter recognition levels despite always presenting distracter objects from the same view. They also showed that the levels of distracter recognition were unaffected by a change in the distracter object view under conditions of low perceptual load. These results were found both with repetition priming measures of distracter recognition and with performance on a surprise recognition memory test. The results support load theory proposals that distracter recognition critically depends on the level of perceptual load. The implications for the role of attention in object recognition theories are discussed.

  9. Sleep deprivation impairs spontaneous object-place but not novel-object recognition in rats.

    PubMed

    Ishikawa, Hiroko; Yamada, Kazuo; Pavlides, Constantine; Ichitani, Yukio

    2014-09-19

    Effects of sleep deprivation (SD) on one-trial recognition memory were investigated in rats using either a spontaneous novel-object or object-place recognition test. Rats were allowed to explore a field in which two identical objects were presented. After a delay period, they were placed again in the same field in which either: (1) one of the two objects was replaced by another object (novel-object recognition); or (2) one of the sample objects was moved to a different place (object-place recognition), and their exploration behavior to these objects was analyzed. Four hours of SD immediately after the sample phase (early SD group) disrupted object-place recognition but not novel-object recognition, while SD 4-8 h after the sample phase (delayed SD group) did not affect either paradigm. The results suggest that sleep selectively promotes the consolidation of hippocampus-dependent memory, and that this effect is limited to within 4 h after learning.

  10. Object-based attentional facilitation and inhibition are neuropsychologically dissociated.

    PubMed

    Smith, Daniel T; Ball, Keira; Swalwell, Robert; Schenk, Thomas

    2016-01-08

    Salient peripheral cues produce a transient shift of attention which is superseded by a sustained inhibitory effect. Cueing part of an object produces an inhibitory cueing effect (ICE) that spreads throughout the object. In dynamic scenes the ICE stays with objects as they move. We examined object-centred attentional facilitation and inhibition in a patient with visual form agnosia. There was no evidence of object-centred attentional facilitation. In contrast, object-centred ICE was observed in 3 out of 4 tasks. These inhibitory effects were strongest where cues to objecthood were highly salient. These data are evidence of a neuropsychological dissociation between the facilitatory and inhibitory effects of attentional cueing. From a theoretical perspective the findings suggest that 'grouped arrays' are sufficient for object-based inhibition, but insufficient to generate object-centred attentional facilitation.

  11. Dissociation of rapid response learning and facilitation in perceptual and conceptual networks of person recognition.

    PubMed

    Valt, Christian; Klein, Christoph; Boehm, Stephan G

    2015-08-01

    Repetition priming is a prominent example of non-declarative memory, and it increases the accuracy and speed of responses to repeatedly processed stimuli. Major long-held memory theories posit that repetition priming results from facilitation within perceptual and conceptual networks for stimulus recognition and categorization. Stimuli can also be bound to particular responses, and it has recently been suggested that this rapid response learning, not network facilitation, provides a sound theory of priming of object recognition. Here, we addressed the relevance of network facilitation and rapid response learning for priming of person recognition with a view to advancing general theories of priming. In four experiments, participants performed conceptual decisions like occupation or nationality judgments for famous faces. The magnitude of rapid response learning varied across experiments, and rapid response learning co-occurred and interacted with facilitation in perceptual and conceptual networks. These findings indicate that rapid response learning and facilitation in perceptual and conceptual networks are complementary rather than competing theories of priming. Thus, future memory theories need to incorporate both rapid response learning and network facilitation as individual facets of priming.

  12. Dissociating viewpoint costs in mental rotation and object recognition.

    PubMed

    Hayward, William G; Zhou, Guomei; Gauthier, Isabel; Harris, Irina M

    2006-10-01

    In a mental rotation task, participants must determine whether two stimuli match when one undergoes a rotation in 3-D space relative to the other. The key evidence for mental rotation is the finding of a linear increase in response times as objects are rotated farther apart. This signature increase in response times is also found in recognition of rotated objects, which has led many theorists to postulate mental rotation as a key transformational procedure in object recognition. We compared mental rotation and object recognition in tasks that used the same stimuli and presentation conditions and found that, whereas mental rotation costs increased relatively linearly with rotation, object recognition costs increased only over small rotations. Taken in conjunction with a recent brain imaging study, this dissociation in behavioral performance suggests that object recognition is based on matching of image features rather than on 3-D mental transformations.

  13. Object recognition may distort size perception.

    PubMed

    Wesp, R; Peckyno, A; McCall, S; Peters, S

    2000-06-01

    Size estimation may be influenced by characteristics recalled about the object viewed. This study evaluated the influence of object familiarity on estimation of size. We compared size estimates of several familiar objects with size estimates of undefined objects matched for dimensions of pattern and color. Those estimating the size of the familiar objects made significantly larger errors than those estimating the size of the undefined objects. In a second study, size estimation errors made from memory were larger than those made when objects were directly viewed. Experience with the objects appears to decrease the accuracy of size estimates, but errors may be reduced by directly observing the object.

  14. Fast neuromimetic object recognition using FPGA outperforms GPU implementations.

    PubMed

    Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph

    2013-08-01

    Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial field-programmable gate array (FPGA), specifically the Xilinx Virtex 6 ML605 evaluation board with XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with a less than 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks.
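
    For readers unfamiliar with HMAX, the following minimal Python/NumPy sketch shows the first two stages of an HMAX-style model (S1 Gabor filtering and C1 local max pooling). The filter parameters, pooling size, and single scale band are illustrative simplifications and do not reproduce the FPGA implementation described in this record.

      # Sketch of the first two HMAX-style stages (S1: Gabor filtering, C1: max pooling).
      # Parameters are illustrative; this is not the FPGA implementation from the paper.
      import numpy as np
      from scipy.signal import convolve2d

      def gabor_kernel(size=11, wavelength=5.6, sigma=4.5, theta=0.0, gamma=0.3):
          """Real Gabor filter, the standard S1 unit in HMAX-style models."""
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          xr = x * np.cos(theta) + y * np.sin(theta)
          yr = -x * np.sin(theta) + y * np.cos(theta)
          g = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)
          return g - g.mean()

      def s1_c1(image, n_orientations=4, pool=8):
          """S1: convolve with oriented Gabors; C1: local max over non-overlapping neighbourhoods."""
          image = np.asarray(image, dtype=float)
          c1_maps = []
          for i in range(n_orientations):
              s1 = np.abs(convolve2d(image, gabor_kernel(theta=i * np.pi / n_orientations), mode='same'))
              h, w = s1.shape
              s1 = s1[:h - h % pool, :w - w % pool]
              c1 = s1.reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))   # max pooling
              c1_maps.append(c1)
          return np.stack(c1_maps)            # feature maps that later S2/C2 stages would consume

      features = s1_c1(np.random.rand(128, 128))   # e.g. a 128 x 128 test image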

  15. Automatic object recognition: critical issues and current approaches

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz A.

    1991-08-01

    Automatic object recognition, with its diverse applications in numerous fields of science and technology, is permeating many aspects of military and civilian industries. This paper gives an overview of the issues confronting the automatic object recognition field and the approaches being used to address these issues.

  16. Does knowing speaker sex facilitate vowel recognition at short durations?

    PubMed

    Smith, David R R

    2014-05-01

    A man, woman or child saying the same vowel does so with very different voices. The auditory system solves the complex problem of extracting what the man, woman or child has said despite substantial differences in the acoustic properties of their voices. Much of the acoustic variation between the voices of men and women is due to changes in the underlying anatomical mechanisms for producing speech. If the auditory system knew the sex of the speaker then it could potentially correct for speaker-sex-related acoustic variation, thus facilitating vowel recognition. This study measured the minimum stimulus duration necessary to accurately discriminate whether a brief vowel segment was spoken by a man or woman, and the minimum stimulus duration necessary to accurately recognise what vowel was spoken. Results showed that reliable vowel recognition precedes reliable speaker sex discrimination, thus questioning the use of speaker sex information in compensating for speaker-sex-related acoustic variation in the voice. Furthermore, the pattern of performance across experiments, where the fundamental frequency and formant frequency information of speakers' voices were systematically varied, was markedly different depending on whether the task was speaker-sex discrimination or vowel recognition. This argues for there being little relationship between perception of speaker sex (indexical information) and perception of what has been said (linguistic information) at short durations.

  17. Object recognition and localization: the role of tactile sensors.

    PubMed

    Aggarwal, Achint; Kirchner, Frank

    2014-02-18

    Tactile sensors, because of their intrinsic insensitivity to lighting conditions and water turbidity, provide promising opportunities for augmenting the capabilities of vision sensors in applications involving object recognition and localization. This paper presents two approaches for haptic object recognition and localization for ground and underwater environments. The first approach called Batch Ransac and Iterative Closest Point augmented Particle Filter (BRICPPF) is based on an innovative combination of particle filters, Iterative-Closest-Point algorithm, and a feature-based Random Sampling and Consensus (RANSAC) algorithm for database matching. It can handle a large database of 3D-objects of complex shapes and performs a complete six-degree-of-freedom localization of static objects. The algorithms are validated by experimentation in ground and underwater environments using real hardware. To our knowledge this is the first instance of haptic object recognition and localization in underwater environments. The second approach is biologically inspired, and provides a close integration between exploration and recognition. An edge following exploration strategy is developed that receives feedback from the current state of recognition. A recognition by parts approach is developed which uses the BRICPPF for object sub-part recognition. Object exploration is either directed to explore a part until it is successfully recognized, or is directed towards new parts to endorse the current recognition belief. This approach is validated by simulation experiments.

  18. Object Recognition and Localization: The Role of Tactile Sensors

    PubMed Central

    Aggarwal, Achint; Kirchner, Frank

    2014-01-01

    Tactile sensors, because of their intrinsic insensitivity to lighting conditions and water turbidity, provide promising opportunities for augmenting the capabilities of vision sensors in applications involving object recognition and localization. This paper presents two approaches for haptic object recognition and localization for ground and underwater environments. The first approach called Batch Ransac and Iterative Closest Point augmented Particle Filter (BRICPPF) is based on an innovative combination of particle filters, Iterative-Closest-Point algorithm, and a feature-based Random Sampling and Consensus (RANSAC) algorithm for database matching. It can handle a large database of 3D-objects of complex shapes and performs a complete six-degree-of-freedom localization of static objects. The algorithms are validated by experimentation in ground and underwater environments using real hardware. To our knowledge this is the first instance of haptic object recognition and localization in underwater environments. The second approach is biologically inspired, and provides a close integration between exploration and recognition. An edge following exploration strategy is developed that receives feedback from the current state of recognition. A recognition by parts approach is developed which uses the BRICPPF for object sub-part recognition. Object exploration is either directed to explore a part until it is successfully recognized, or is directed towards new parts to endorse the current recognition belief. This approach is validated by simulation experiments. PMID:24553087
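
    The particle-filter core of approaches like BRICPPF can be sketched as follows: candidate object poses are weighted by how well the tactile contact points fit the stored object model, then resampled and perturbed. The Python sketch below is a hypothetical, planar (3-DOF) simplification; the ICP refinement, RANSAC database matching, and full six-degree-of-freedom localization of the actual method are omitted.

      # Hypothetical sketch of the particle-filter core of haptic pose estimation:
      # candidate poses are weighted by how well tactile contact points fit the model.
      # The ICP refinement and RANSAC database matching of BRICPPF are not shown.
      import numpy as np

      def fit_error(pose, contacts, model_points):
          """Mean distance from transformed contact points to their nearest model points."""
          tx, ty, theta = pose                                  # planar pose for brevity (the full method is 6-DOF)
          R = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
          transformed = contacts @ R.T + np.array([tx, ty])
          d = np.linalg.norm(transformed[:, None, :] - model_points[None, :, :], axis=2)
          return d.min(axis=1).mean()

      def particle_filter_pose(contacts, model_points, n_particles=500, n_iters=20, noise=0.05):
          particles = np.random.uniform([-1, -1, -np.pi], [1, 1, np.pi], size=(n_particles, 3))
          for _ in range(n_iters):
              errors = np.array([fit_error(p, contacts, model_points) for p in particles])
              weights = np.exp(-errors / errors.mean())          # better fit -> larger weight
              weights /= weights.sum()
              idx = np.random.choice(n_particles, n_particles, p=weights)                  # resample
              particles = particles[idx] + np.random.normal(0, noise, particles.shape)     # diffuse
          return particles[np.argmin([fit_error(p, contacts, model_points) for p in particles])]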

  19. Spinal Anesthesia Facilitates the Early Recognition of TUR Syndrome

    PubMed Central

    McGowan-Smyth, Sam; Vasdev, Nikhil; Gowrie-Mohan, Shan

    2016-01-01

    Conclusion: The features most associated with the early presentation of TUR syndrome require the patient to be conscious for detection. The use of spinal anaesthesia is therefore desirable to facilitate its early recognition. PMID:27390576

  20. Plastic modifications induced by object recognition memory processing

    PubMed Central

    Clarke, Julia Rosauro; Cammarota, Martín; Gruart, Agnès; Izquierdo, Iván; Delgado-García, José María

    2010-01-01

    The long-term potentiation (LTP) phenomenon is widely accepted as a cellular model of memory consolidation. Object recognition (OR) is a particularly useful way of studying declarative memory in rodents because it makes use of their innate preference for novel over familiar objects. In this study, mice had electrodes implanted in the hippocampal Schaffer collaterals–pyramidal CA1 pathway and were trained for OR. Field EPSPs evoked at the CA3-CA1 synapse were recorded at the moment of training and at different times thereafter. LTP-like synaptic enhancement was found 6 h posttraining. A testing session was conducted 24 h after training, in the presence of one familiar and one novel object. Hippocampal synaptic facilitation was observed during exploration of familiar and novel objects. A short depotentiation period was observed early after the test and was followed by a later phase of synaptic efficacy enhancement. Here, we show that OR memory consolidation is accompanied by transient potentiation in the hippocampal CA3-CA1 synapses, while reconsolidation of this memory requires a short-lasting phase of depotentiation that could account for its well-described vulnerability. The late synaptic enhancement phase, on the other hand, would be a consequence of memory restabilization. PMID:20133798

  1. Affordance-based 3D feature for generic object recognition

    NASA Astrophysics Data System (ADS)

    Iizuka, M.; Akizuki, S.; Hashimoto, M.

    2017-03-01

    Techniques for generic object recognition, which targets everyday objects such as cups and spoons, and techniques for approach vector estimation (e.g. estimating grasp position), which are needed for carrying out tasks involving everyday objects, are considered necessary for the perceptual system of service robots. In this research, we design features for generic object recognition so that they can also be applied to approach vector estimation. To carry out tasks involving everyday objects, estimating the function of the target object is critical. Moreover, just as the function of holding liquid is found in all cups, a function is shared within each type (class) of everyday objects. We thus propose a generic object recognition method that can estimate the approach vector by expressing an object's function as a feature. In a test of generic object recognition on everyday objects, we confirmed that our proposed method had a 92% recognition rate, 11% higher than that of the mainstream generic object recognition technique using a convolutional neural network (CNN).

  2. An ERP Study on Self-Relevant Object Recognition

    ERIC Educational Resources Information Center

    Miyakoshi, Makoto; Nomura, Michio; Ohira, Hideki

    2007-01-01

    We performed an event-related potential study to investigate the self-relevance effect in object recognition. Three stimulus categories were prepared: SELF (participant's own objects), FAMILIAR (disposable and public objects, defined as objects with less-self-relevant familiarity), and UNFAMILIAR (others' objects). The participants' task was to…

  3. Contrast- and illumination-invariant object recognition from active sensation.

    PubMed

    Rentschler, Ingo; Osman, Erol; Jüttner, Martin

    2009-01-01

    It has been suggested that the deleterious effect of contrast reversal on visual recognition is unique to faces, not objects. Here we show from priming, supervised category learning, and generalization that there is no such thing as general invariance of recognition of non-face objects against contrast reversal and, likewise, changes in direction of illumination. However, when recognition varies with rendering conditions, invariance may be restored and effects of continuous learning may be reduced by providing prior object knowledge from active sensation. Our findings suggest that the degree of contrast invariance achieved reflects functional characteristics of object representations learned in a task-dependent fashion.

  4. Young Children's Self-Generated Object Views and Object Recognition

    ERIC Educational Resources Information Center

    James, Karin H.; Jones, Susan S.; Smith, Linda B.; Swain, Shelley N.

    2014-01-01

    Two important and related developments in children between 18 and 24 months of age are the rapid expansion of object name vocabularies and the emergence of an ability to recognize objects from sparse representations of their geometric shapes. In the same period, children also begin to show a preference for planar views (i.e., views of objects held…

  5. Optical Recognition And Tracking Of Objects

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Liu, Hua-Kuang

    1988-01-01

    Separate objects moving independently tracked simultaneously. System uses coherent optical techniques to obtain correlation between each object and reference image. Moving objects monitored by charge-coupled-device television camera, output fed to liquid-crystal television (LCTV) display. Acting as spatial light modulator, LCTV impresses images of moving objects on collimated laser beam. Beam spatially low-pass filtered to remove high-spatial-frequency television grid pattern.

  6. Infants' Recognition of Objects Using Canonical Color

    ERIC Educational Resources Information Center

    Kimura, Atsushi; Wada, Yuji; Yang, Jiale; Otsuka, Yumiko; Dan, Ippeita; Masuda, Tomohiro; Kanazawa, So; Yamaguchi, Masami K.

    2010-01-01

    We explored infants' ability to recognize the canonical colors of daily objects, including two color-specific objects (human face and fruit) and a non-color-specific object (flower), by using a preferential looking technique. A total of 58 infants between 5 and 8 months of age were tested with a stimulus composed of two color pictures of an object…

  8. The role of nitric oxide in the object recognition memory.

    PubMed

    Pitsikas, Nikolaos

    2015-05-15

    The novel object recognition task (NORT) assesses recognition memory in animals. It is a non-rewarded paradigm that is based on spontaneous exploratory behavior in rodents. This procedure is widely used for testing the effects of compounds on recognition memory. Recognition memory is a type of memory severely compromised in schizophrenic and Alzheimer's disease patients. Nitric oxide (NO) is thought to be an intra- and inter-cellular messenger in the central nervous system, and its implication in learning and memory is well documented. Here I intended to critically review the role of NO-related compounds on different aspects of recognition memory. Current analysis shows that both NO donors and NO synthase (NOS) inhibitors are involved in object recognition memory and suggests that NO might be a promising target for cognition impairments. However, the potential neurotoxicity of NO would add a note of caution in this context.

  9. Possibility of object recognition using Altera's model based design approach

    NASA Astrophysics Data System (ADS)

    Tickle, A. J.; Wu, F.; Harvey, P. K.; Smith, J. S.

    2009-07-01

    Object recognition is the image processing task of finding a given object in a selected image or video sequence. Object recognition can be divided into two areas: one of these is decision-theoretic and deals with patterns described by quantitative descriptors, for example length, area, shape and texture. Because the Graphical User Interface Circuitry (GUIC) methodology employed here is relatively new for object recognition systems, the aim of this work is to identify whether the developed circuitry can detect certain shapes or strings within the target image. A much smaller reference image supplies the preset data for identification; tests are conducted for both binary and greyscale images, and the additional mathematical morphology used to highlight the area within the target image where the object(s) are located is also presented. This provides proof that basic recognition methods are valid and allows progression towards decision-theoretic and learning-based approaches using GUICs in multidisciplinary tasks.
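
    A software analogue of the matching step described here is normalized cross-correlation of a small reference image against the target, followed by morphology to highlight the matched region. The OpenCV-based sketch below is only an illustration of that idea; the threshold and kernel size are assumptions, and it does not represent the Altera/FPGA circuitry.

      # Software analogue (not the FPGA circuitry) of matching a small reference image
      # against a target and highlighting the detected region with morphology.
      import cv2
      import numpy as np

      def locate_reference(target_gray, reference_gray, threshold=0.8):
          """Return a mask highlighting where the reference pattern occurs in the target."""
          scores = cv2.matchTemplate(target_gray, reference_gray, cv2.TM_CCOEFF_NORMED)
          hits = (scores >= threshold).astype(np.uint8)           # peaks of normalised correlation
          mask = np.zeros(target_gray.shape, dtype=np.uint8)
          h, w = reference_gray.shape
          ys, xs = np.nonzero(hits)
          for y, x in zip(ys, xs):
              mask[y:y + h, x:x + w] = 255                        # mark each matched window
          kernel = np.ones((5, 5), np.uint8)
          return cv2.dilate(mask, kernel)                         # morphology to emphasise the area

      # Usage: works on greyscale images; binarise both inputs first (e.g. cv2.threshold) for the binary case.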

  10. Mechanisms of object recognition: what we have learned from pigeons

    PubMed Central

    Soto, Fabian A.; Wasserman, Edward A.

    2014-01-01

    Behavioral studies of object recognition in pigeons have been conducted for 50 years, yielding a large body of data. Recent work has been directed toward synthesizing this evidence and understanding the visual, associative, and cognitive mechanisms that are involved. The outcome is that pigeons are likely to be the non-primate species for which the computational mechanisms of object recognition are best understood. Here, we review this research and suggest that a core set of mechanisms for object recognition might be present in all vertebrates, including pigeons and people, making pigeons an excellent candidate model to study the neural mechanisms of object recognition. Behavioral and computational evidence suggests that error-driven learning participates in object category learning by pigeons and people, and recent neuroscientific research suggests that the basal ganglia, which are homologous in these species, may implement error-driven learning of stimulus-response associations. Furthermore, learning of abstract category representations can be observed in pigeons and other vertebrates. Finally, there is evidence that feedforward visual processing, a central mechanism in models of object recognition in the primate ventral stream, plays a role in object recognition by pigeons. We also highlight differences between pigeons and people in object recognition abilities, and propose candidate adaptive specializations which may explain them, such as holistic face processing and rule-based category learning in primates. From a modern comparative perspective, such specializations are to be expected regardless of the model species under study. The fact that we have a good idea of which aspects of object recognition differ in people and pigeons should be seen as an advantage over other animal models. From this perspective, we suggest that there is much to learn about human object recognition from studying the “simple” brains of pigeons. PMID:25352784

  11. Mechanisms of object recognition: what we have learned from pigeons.

    PubMed

    Soto, Fabian A; Wasserman, Edward A

    2014-01-01

    Behavioral studies of object recognition in pigeons have been conducted for 50 years, yielding a large body of data. Recent work has been directed toward synthesizing this evidence and understanding the visual, associative, and cognitive mechanisms that are involved. The outcome is that pigeons are likely to be the non-primate species for which the computational mechanisms of object recognition are best understood. Here, we review this research and suggest that a core set of mechanisms for object recognition might be present in all vertebrates, including pigeons and people, making pigeons an excellent candidate model to study the neural mechanisms of object recognition. Behavioral and computational evidence suggests that error-driven learning participates in object category learning by pigeons and people, and recent neuroscientific research suggests that the basal ganglia, which are homologous in these species, may implement error-driven learning of stimulus-response associations. Furthermore, learning of abstract category representations can be observed in pigeons and other vertebrates. Finally, there is evidence that feedforward visual processing, a central mechanism in models of object recognition in the primate ventral stream, plays a role in object recognition by pigeons. We also highlight differences between pigeons and people in object recognition abilities, and propose candidate adaptive specializations which may explain them, such as holistic face processing and rule-based category learning in primates. From a modern comparative perspective, such specializations are to be expected regardless of the model species under study. The fact that we have a good idea of which aspects of object recognition differ in people and pigeons should be seen as an advantage over other animal models. From this perspective, we suggest that there is much to learn about human object recognition from studying the "simple" brains of pigeons.

  12. Reader error, object recognition, and visual search

    NASA Astrophysics Data System (ADS)

    Kundel, Harold L.

    2004-05-01

    Small abnormalities such as hairline fractures, lung nodules and breast tumors are missed by competent radiologists with sufficient frequency to make them a matter of concern to the medical community; not only because they lead to litigation but also because they delay patient care. It is very easy to attribute misses to incompetence or inattention. To do so may be placing an unjustified stigma on the radiologists involved and may allow other radiologists to continue a false optimism that it can never happen to them. This review presents some of the fundamentals of visual system function that are relevant to understanding the search for and the recognition of small targets embedded in complicated but meaningful backgrounds like chests and mammograms. It presents a model for visual search that postulates a pre-attentive global analysis of the retinal image followed by foveal checking fixations and eventually discovery scanning. The model will be used to differentiate errors of search, recognition and decision making. The implications for computer aided diagnosis and for functional workstation design are discussed.

  13. Eye movements during object recognition in visual agnosia.

    PubMed

    Charles Leek, E; Patterson, Candy; Paul, Matthew A; Rafal, Robert; Cristino, Filipe

    2012-07-01

    This paper reports the first detailed study of eye movement patterns during single object recognition in visual agnosia. Eye movements were recorded in a patient with an integrative agnosic deficit during two recognition tasks: common object naming and novel object recognition memory. The patient showed normal directional biases in saccades and fixation dwell times in both tasks and was as likely as controls to fixate within the object bounding contour regardless of recognition accuracy. In contrast, following initial saccades of similar amplitude to controls, the patient showed a bias for short saccades. In object naming, but not in recognition memory, the similarity of the spatial distributions of patient and control fixations was modulated by recognition accuracy. The study provides new evidence about how eye movements can be used to elucidate the functional impairments underlying object recognition deficits. We argue that the results reflect a breakdown in normal functional processes involved in the integration of shape information across object structure during the visual perception of shape.

  14. Does scene context always facilitate retrieval of visual object representations?

    PubMed

    Nakashima, Ryoichi; Yokosawa, Kazuhiko

    2011-04-01

    An object-to-scene binding hypothesis maintains that visual object representations are stored as part of a larger scene representation or scene context, and that scene context facilitates retrieval of object representations (see, e.g., Hollingworth, Journal of Experimental Psychology: Learning, Memory and Cognition, 32, 58-69, 2006). Support for this hypothesis comes from data using an intentional memory task. In the present study, we examined whether scene context always facilitates retrieval of visual object representations. In two experiments, we investigated whether the scene context facilitates retrieval of object representations, using a new paradigm in which a memory task is appended to a repeated-flicker change detection task. Results indicated that in normal scene viewing, in which many simultaneous objects appear, scene context facilitation of the retrieval of object representations (henceforth termed object-to-scene binding) occurred only when the observer was required to retain much information for a task (i.e., an intentional memory task).

  15. 'Breaking' position-invariant object recognition.

    PubMed

    Cox, David D; Meier, Philip; Oertelt, Nadja; DiCarlo, James J

    2005-09-01

    While it is often assumed that objects can be recognized irrespective of where they fall on the retina, little is known about the mechanisms underlying this ability. By exposing human subjects to an altered world where some objects systematically changed identity during the transient blindness that accompanies eye movements, we induced predictable object confusions across retinal positions, effectively 'breaking' position invariance. Thus, position invariance is not a rigid property of vision but is constantly adapting to the statistics of the environment.

  16. Object recognition of ladar with support vector machine

    NASA Astrophysics Data System (ADS)

    Sun, Jian-Feng; Li, Qi; Wang, Qi

    2005-01-01

    Intensity, range and Doppler images can be obtained by using laser radar. Laser radar can detect much more object information than other sensors, such as passive infrared imaging and synthetic aperture radar (SAR), so it is well suited as a sensor for object recognition. The traditional method of laser radar object recognition is to extract target features, which can be influenced by noise. In this paper, a laser radar recognition method based on the Support Vector Machine is introduced. The Support Vector Machine (SVM) is a more recent focus of recognition research than neural networks and has performed well on handwritten digit and face recognition. Two series of experiments with SVMs, designed for preprocessed and non-preprocessed samples, are performed on real laser radar images, and the experimental results are compared.
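
    As an illustration of the experimental design described (SVM classification of laser radar images with and without preprocessing), the following scikit-learn sketch trains an RBF-kernel SVM on raw and on median-filtered, standardised image pixels. The data here are random stand-ins and the preprocessing choice is an assumption, not the authors' pipeline.

      # Illustrative only: an SVM classifier for ladar image patches, run on raw pixels
      # and on preprocessed (smoothed, standardised) pixels, echoing the two test series.
      import numpy as np
      from scipy.ndimage import median_filter
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def flatten(images):
          return np.asarray(images, dtype=float).reshape(len(images), -1)

      def evaluate(images, labels, preprocess=False):
          X = np.asarray(images, dtype=float)
          if preprocess:
              X = np.array([median_filter(img, size=3) for img in X])   # suppress speckle-like noise
              X = (X - X.mean()) / (X.std() + 1e-8)
          clf = SVC(kernel='rbf', C=10.0, gamma='scale')
          return cross_val_score(clf, flatten(X), labels, cv=5).mean()

      # Example with synthetic stand-ins for range/intensity images:
      images = np.random.rand(100, 32, 32)
      labels = np.random.randint(0, 2, 100)
      print(evaluate(images, labels, preprocess=False), evaluate(images, labels, preprocess=True))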

  17. Automatic Recognition of Object Names in Literature

    NASA Astrophysics Data System (ADS)

    Bonnin, C.; Lesteven, S.; Derriere, S.; Oberto, A.

    2008-08-01

    SIMBAD is a database of astronomical objects that provides (among other things) their bibliographic references in a large number of journals. Currently, these references have to be entered manually by librarians who read each paper. To cope with the increasing number of papers, CDS is developing a tool to assist the librarians in their work, taking advantage of the Dictionary of Nomenclature of Celestial Objects, which keeps track of object acronyms and of their origin. The program searches for object names directly in PDF documents by comparing the words with all the formats stored in the Dictionary of Nomenclature. It also searches for variable star names based on constellation names and for a large list of usual names such as Aldebaran or the Crab. Object names found in the documents often correspond to several astronomical objects. The system retrieves all possible matches, displays them with their object type given by SIMBAD, and lets the librarian make the final choice. The bibliographic reference can then be automatically added to the object identifiers in the database. Besides, the systematic usage of the Dictionary of Nomenclature, which is updated manually, made it possible to check it automatically and to detect errors and inconsistencies. Last but not least, the program collects some additional information such as the position of the object names in the document (in the title, subtitle, abstract, table, figure caption...) and their number of occurrences. In the future, this will make it possible to calculate the 'weight' of an object in a reference and to provide SIMBAD users with important new information, which will help them to find the most relevant papers in the object reference list.
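
    The dictionary-driven matching idea can be sketched as follows: each acronym format from the nomenclature dictionary is expanded into a regular expression, the extracted document text is scanned for matches, and the candidates are offered to the librarian for confirmation. In the toy Python sketch below, the format patterns and the usual-name list are invented examples, not the contents of the CDS Dictionary of Nomenclature.

      # Toy sketch of dictionary-driven object-name detection in text extracted from a paper.
      # The format patterns below are invented examples, not the CDS Dictionary of Nomenclature.
      import re

      # acronym -> regular expression describing how identifiers of that acronym are written
      FORMATS = {
          "NGC": r"NGC\s?\d{1,4}",          # e.g. NGC 4321
          "HD":  r"HD\s?\d{1,6}",           # e.g. HD 209458
          "M":   r"\bM\s?\d{1,3}\b",        # e.g. M 31
      }
      USUAL_NAMES = ["Aldebaran", "Crab", "Betelgeuse"]

      def find_object_names(text):
          """Return candidate object names with their positions, for a librarian to confirm."""
          candidates = []
          for acronym, pattern in FORMATS.items():
              for m in re.finditer(pattern, text):
                  candidates.append((m.group(0), acronym, m.start()))
          for name in USUAL_NAMES:
              for m in re.finditer(re.escape(name), text):
                  candidates.append((m.group(0), "usual name", m.start()))
          return sorted(candidates, key=lambda c: c[2])

      print(find_object_names("We observed NGC 4321 and Aldebaran; HD209458 was used as a reference."))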

  18. Category-Specificity in Visual Object Recognition

    ERIC Educational Resources Information Center

    Gerlach, Christian

    2009-01-01

    Are all categories of objects recognized in the same manner visually? Evidence from neuropsychology suggests they are not: some brain damaged patients are more impaired in recognizing natural objects than artefacts whereas others show the opposite impairment. Category-effects have also been demonstrated in neurologically intact subjects, but the…

  20. The subjective experience of object recognition: comparing metacognition for object detection and object categorization.

    PubMed

    Meuwese, Julia D I; van Loon, Anouk M; Lamme, Victor A F; Fahrenfort, Johannes J

    2014-05-01

    Perceptual decisions seem to be made automatically and almost instantly. Constructing a unitary subjective conscious experience takes more time. For example, when trying to avoid a collision with a car on a foggy road you brake or steer away in a reflex, before realizing you were in a near accident. This subjective aspect of object recognition has been given little attention. We used metacognition (assessed with confidence ratings) to measure subjective experience during object detection and object categorization for degraded and masked objects, while objective performance was matched. Metacognition was equal for degraded and masked objects, but categorization led to higher metacognition than did detection. This effect turned out to be driven by a difference in metacognition for correct rejection trials, which seemed to be caused by an asymmetry of the distractor stimulus: It does not contain object-related information in the detection task, whereas it does contain such information in the categorization task. Strikingly, this asymmetry selectively impacted metacognitive ability when objective performance was matched. This finding reveals a fundamental difference in how humans reflect versus act on information: When matching the amount of information required to perform two tasks at some objective level of accuracy (acting), metacognitive ability (reflecting) is still better in tasks that rely on positive evidence (categorization) than in tasks that rely more strongly on an absence of evidence (detection).

  1. A Taxonomy of 3D Occluded Objects Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Soleimanizadeh, Shiva; Mohamad, Dzulkifli; Saba, Tanzila; Al-ghamdi, Jarallah Saleh

    2016-03-01

    The overall performance of object recognition techniques under different conditions (e.g., occlusion, viewpoint, and illumination) has improved significantly in recent years. New applications and hardware have shifted towards digital photography and digital media, and increasing Internet usage requires object recognition for certain applications, particularly for occluded objects. However, occlusion is still an unhandled issue that complicates the relations between feature points extracted from an image, and research is ongoing to develop efficient techniques and easy-to-use algorithms that would help users to source images despite occlusion. The aim of this research is to review algorithms for recognizing occluded objects and to identify their pros and cons in solving the occlusion problem: the features extracted from an occluded object must distinguish it from other co-existing objects, and new techniques are needed that can differentiate the occluded fragments and sections inside an image.

  2. Visuo-haptic multisensory object recognition, categorization, and representation

    PubMed Central

    Lacey, Simon; Sathian, K.

    2014-01-01

    Visual and haptic unisensory object processing show many similarities in terms of categorization, recognition, and representation. In this review, we discuss how these similarities contribute to multisensory object processing. In particular, we show that similar unisensory visual and haptic representations lead to a shared multisensory representation underlying both cross-modal object recognition and view-independence. This shared representation suggests a common neural substrate and we review several candidate brain regions, previously thought to be specialized for aspects of visual processing, that are now known also to be involved in analogous haptic tasks. Finally, we lay out the evidence for a model of multisensory object recognition in which top-down and bottom-up pathways to the object-selective lateral occipital complex are modulated by object familiarity and individual differences in object and spatial imagery. PMID:25101014

  3. Pattern recognition of multiple objects using adaptive correlation filters

    NASA Astrophysics Data System (ADS)

    Pinedo-García, Marco I.; Kober, Vitaly

    2007-09-01

    A new method for reliable pattern recognition of multiple distorted objects in a cluttered background and consequent classification of the detected objects is proposed. The method is based on a bank of composite correlation filters. The filters are designed with the help of an iterative algorithm exploiting a modified version of synthetic discriminant functions. The bank consists of a minimal quantity of the filters required for a given input scene to guarantee a prespecified value of discrimination capability for pattern recognition and classification of all objects. Statistical analysis of the number of required correlations versus the recognition performance is provided and discussed. Computer simulation results obtained with the proposed method are compared with those of known techniques in terms of performance criteria for recognition and classification of objects.
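
    For context, composite correlation filters of this family build on the classical synthetic discriminant function (SDF), in which a filter is constrained to produce prescribed correlation peaks for a set of training views. The NumPy sketch below shows that basic construction and a frequency-domain correlation; the iterative modification and the filter-bank design procedure of this record are not reproduced.

      # Minimal sketch of a classical synthetic discriminant function (SDF) composite filter.
      # The paper's iterative, bank-of-filters design is not reproduced here.
      import numpy as np

      def sdf_filter(training_images, desired_peaks):
          """h = X (X^+ X)^{-1} u : the filter whose correlation peak with each training
          image equals the prescribed value in `desired_peaks`."""
          X = np.column_stack([img.ravel() for img in training_images])   # one column per training view
          u = np.asarray(desired_peaks, dtype=float)
          h = X @ np.linalg.solve(X.conj().T @ X, u)
          return h.reshape(training_images[0].shape)

      def correlate(scene, h):
          """Correlation via the frequency domain, as an optical correlator would compute it."""
          H = np.fft.fft2(h, s=scene.shape)
          S = np.fft.fft2(scene)
          return np.real(np.fft.ifft2(S * np.conj(H)))

      # Filter that responds with peak 1 to class-A training views and 0 to class-B views:
      # h = sdf_filter(views_a + views_b, [1] * len(views_a) + [0] * len(views_b))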

  4. Leveraging Cognitive Context for Object Recognition

    DTIC Science & Technology

    2014-06-01

    looking at a desk so I may see my computer next), as well as internal (e.g., I was told to look for a banana so I may be likely to see one soon). We...itself); raisins and banana are tied for the second-most likely object to be seen next. When raisins are seen, context again suggests that raisins are...most likely to be seen next, with banana the second- most likely to be seen next. This pattern repeats itself for the second set of objects

  5. High speed optical object recognition processor with massive holographic memory

    NASA Technical Reports Server (NTRS)

    Chao, T.; Zhou, H.; Reyes, G.

    2002-01-01

    Real-time object recognition using a compact grayscale optical correlator will be introduced. A holographic memory module for storing a large bank of optimum correlation filters, to accommodate the large data throughput rate needed for many real-world applications, has also been developed. System architecture of the optical processor and the holographic memory will be presented. Application examples of this object recognition technology will also be demonstrated.

  6. Crowding: a cortical constraint on object recognition.

    PubMed

    Pelli, Denis G

    2008-08-01

    The external world is mapped retinotopically onto the primary visual cortex (V1). We show here that objects in the world, unless they are very dissimilar, can be recognized only if they are sufficiently separated in visual cortex: specifically, in V1, at least 6mm apart in the radial direction (increasing eccentricity) or 1mm apart in the circumferential direction (equal eccentricity). Objects closer together than this critical spacing are perceived as an unidentifiable jumble. This is called 'crowding'. It severely limits visual processing, including speed of reading and searching. The conclusion about visual cortex rests on three findings. First, psychophysically, the necessary 'critical' spacing, in the visual field, is proportional to (roughly half) the eccentricity of the objects. Second, the critical spacing is independent of the size and kind of object. Third, anatomically, the representation of the visual field on the cortical surface is such that the position in V1 (and several other areas) is the logarithm of eccentricity in the visual field. Furthermore, we show that much of this can be accounted for by supposing that each 'combining field', defined by the critical spacing measurements, is implemented by a fixed number of cortical neurons.
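
    The quantitative claim can be illustrated numerically: if the critical spacing in the visual field is roughly half the eccentricity and V1 position grows logarithmically with eccentricity, then the critical spacing corresponds to a roughly fixed distance on the cortical surface. The mapping constants in the Python sketch below are commonly cited approximations used here as assumptions, not values taken from this record.

      # Numerical illustration: a critical spacing of ~0.5 x eccentricity maps onto a roughly
      # constant distance on a logarithmically mapped V1. Mapping constants are assumptions.
      import numpy as np

      LAMBDA_MM = 17.3   # assumed cortical magnification scale factor (mm)
      E2_DEG = 0.75      # assumed eccentricity at which foveal magnification halves (deg)

      def cortical_position_mm(eccentricity_deg):
          """Approximate V1 position of an eccentricity under a logarithmic mapping."""
          return LAMBDA_MM * np.log(1.0 + eccentricity_deg / E2_DEG)

      for ecc in [2.0, 5.0, 10.0, 20.0]:
          spacing_deg = 0.5 * ecc                            # Bouma-style critical spacing in the visual field
          d_mm = cortical_position_mm(ecc + spacing_deg) - cortical_position_mm(ecc)
          print(f"eccentricity {ecc:5.1f} deg: critical spacing {spacing_deg:4.1f} deg = {d_mm:.1f} mm of cortex")
      # The printed cortical distances stay within a few millimetres of each other, approaching
      # LAMBDA_MM * ln(1.5) (about 7 mm) at larger eccentricities, which is the sense in which
      # crowding behaves as a fixed distance-on-cortex constraint.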

  7. Object Recognition and Random Image Structure Evolution

    ERIC Educational Resources Information Center

    Sadr, Jvid; Sinha, Pawan

    2004-01-01

    We present a technique called Random Image Structure Evolution (RISE) for use in experimental investigations of high-level visual perception. Potential applications of RISE include the quantitative measurement of perceptual hysteresis and priming, the study of the neural substrates of object perception, and the assessment and detection of subtle…

  8. Object Recognition and Random Image Structure Evolution

    ERIC Educational Resources Information Center

    Sadr, Jvid; Sinha, Pawan

    2004-01-01

    We present a technique called Random Image Structure Evolution (RISE) for use in experimental investigations of high-level visual perception. Potential applications of RISE include the quantitative measurement of perceptual hysteresis and priming, the study of the neural substrates of object perception, and the assessment and detection of subtle…

  9. Comparing object recognition from binary and bipolar edge features

    PubMed Central

    Jung, Jae-Hyun; Pu, Tian; Peli, Eli

    2017-01-01

    Edges derived from abrupt luminance changes in images carry essential information for object recognition. Typical binary edge images (black edges on white background or white edges on black background) have been used to represent features (edges and cusps) in scenes. However, the polarity of cusps and edges may contain important depth information (depth from shading) which is lost in the binary edge representation. This depth information may be restored, to some degree, using bipolar edges. We compared recognition rates for 16 images rendered with binary edges or with bipolar features across 26 subjects. Object recognition rates were higher with bipolar edges and the improvement was significant in scenes with complex backgrounds.
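
    The two representations being compared can be produced, for example, from a Laplacian-of-Gaussian response: binary edges keep only the locations of strong responses, while bipolar edges also keep the sign of the luminance change. The filter choice and threshold in the sketch below are assumptions for illustration, not the stimulus-generation procedure of this record.

      # Sketch of the two edge representations: binary edges keep only edge locations,
      # bipolar edges also keep the sign (polarity) of the luminance change.
      # The Laplacian-of-Gaussian filter and threshold are assumed choices for illustration.
      import numpy as np
      from scipy.ndimage import gaussian_laplace

      def edge_images(image, sigma=2.0, threshold=0.02):
          response = gaussian_laplace(np.asarray(image, dtype=float), sigma=sigma)
          strong = np.abs(response) > threshold * np.abs(response).max()
          binary = strong.astype(np.int8)                 # 1 where an edge/cusp is present
          bipolar = np.sign(response) * strong            # +1 / -1 keeps polarity (the depth-from-shading cue)
          return binary, bipolar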

  10. Parallel and distributed computation for fault-tolerant object recognition

    NASA Technical Reports Server (NTRS)

    Wechsler, Harry

    1988-01-01

    The distributed associative memory (DAM) model is suggested for distributed and fault-tolerant computation as it relates to object recognition tasks. The fault-tolerance is with respect to geometrical distortions (scale and rotation), noisy inputs, occlusion/overlap, and memory faults. An experimental system was developed for fault-tolerant structure recognition which shows the feasibility of such an approach. The approach is further extended to the problem of multisensory data integration and applied successfully to the recognition of colored polyhedral objects.
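
    The fault tolerance discussed here can be illustrated with a minimal linear associative memory (a correlation-matrix memory): recall typically survives noisy probes and randomly knocked-out weights. The NumPy sketch below is a generic illustration, not the specific DAM system of this report.

      # Minimal linear associative memory (correlation-matrix memory) illustrating graceful
      # degradation under input noise and memory faults; not the specific DAM of the report.
      import numpy as np

      rng = np.random.default_rng(0)
      keys = rng.standard_normal((20, 256))                      # 20 stored patterns (keys)
      keys /= np.linalg.norm(keys, axis=1, keepdims=True)
      labels = np.eye(20)                                        # one-hot recollections

      M = labels.T @ keys                                        # memory = sum of outer products

      def recall(memory, probe):
          return np.argmax(memory @ probe)                       # strongest association wins

      probe = keys[7] + 0.3 * rng.standard_normal(256)           # noisy / partial input
      damaged = M * (rng.random(M.shape) > 0.2)                  # 20% of memory entries knocked out
      print(recall(M, probe), recall(damaged, probe))            # both typically still recall pattern 7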

  11. Changes in functional connectivity support conscious object recognition.

    PubMed

    Imamoglu, Fatma; Kahnt, Thorsten; Koch, Christof; Haynes, John-Dylan

    2012-12-01

    What are the brain mechanisms that mediate conscious object recognition? To investigate this question, it is essential to distinguish between brain processes that cause conscious recognition of a stimulus from other correlates of its sensory processing. Previous fMRI studies have identified large-scale brain activity ranging from striate to high-level sensory and prefrontal regions associated with conscious visual perception or recognition. However, the possible role of changes in connectivity during conscious perception between these regions has only rarely been studied. Here, we used fMRI and connectivity analyses, together with 120 custom-generated, two-tone, Mooney images to directly assess whether conscious recognition of an object is accompanied by a dynamical change in the functional coupling between extrastriate cortex and prefrontal areas. We compared recognizing an object versus not recognizing it in 19 naïve subjects using two different response modalities. We find that connectivity between the extrastriate cortex and the dorsolateral prefrontal cortex (DLPFC) increases when objects are consciously recognized. This interaction was independent of the response modality used to report conscious recognition. Furthermore, computing the difference in Granger causality between recognized and not recognized conditions reveals stronger feedforward connectivity than feedback connectivity when subjects recognized the objects. We suggest that frontal and visual brain regions are part of a functional network that supports conscious object recognition by changes in functional connectivity.
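
    The directional comparison described (stronger feedforward than feedback Granger causality) can be sketched with statsmodels on synthetic time series, as below. The signal model, lag choice, and region labels are invented for illustration and do not correspond to the study's fMRI preprocessing or connectivity pipeline.

      # Sketch of the kind of directionality test described: Granger causality computed in both
      # directions between two regional time series (synthetic here, not the study's fMRI data).
      import numpy as np
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(1)
      n = 400
      visual = rng.standard_normal(n)
      prefrontal = np.zeros(n)
      for t in range(2, n):                                   # prefrontal lags visual -> feedforward coupling
          prefrontal[t] = 0.6 * visual[t - 1] + 0.2 * prefrontal[t - 1] + rng.standard_normal()

      # Does `visual` Granger-cause `prefrontal`? (the second column is tested as a predictor of the first)
      feedforward = grangercausalitytests(np.column_stack([prefrontal, visual]), maxlag=2, verbose=False)
      # And the reverse (feedback) direction:
      feedback = grangercausalitytests(np.column_stack([visual, prefrontal]), maxlag=2, verbose=False)

      print(feedforward[1][0]['ssr_ftest'][1], feedback[1][0]['ssr_ftest'][1])   # p-values at lag 1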

  12. Planning Multiple Observations for Object Recognition

    DTIC Science & Technology

    1992-12-09

    choosing the branch with the highest weight at each level, and backtracking when necessary. The PREMIO system of Camps, et al [51 predicts object...appearances under various conditions of lighting, viewpoint, sensor, and image processing operators. Unlike other systems, PREMIO also evaluates the utility...1988). [51 Camps, 0. 1., Shapiro, L. G., and Haralick, R. M. PREMIO : an overview. Proc. IEEE Workshop on Directions in Automated CAD-Based Vision, pp

  13. Visual Object Recognition and Tracking of Tools

    NASA Technical Reports Server (NTRS)

    English, James; Chang, Chu-Yin; Tardella, Neil

    2011-01-01

    A method has been created to automatically build an algorithm off-line, using computer-aided design (CAD) models, and to apply this at runtime. The object type is discriminated, and the position and orientation are identified. This system can work with a single image and can provide improved performance using multiple images provided from videos. The spatial processing unit uses three stages: (1) segmentation; (2) initial type, pose, and geometry (ITPG) estimation; and (3) refined type, pose, and geometry (RTPG) calculation. The image segmentation module finds all the tools in an image and isolates them from the background. For this, the system uses edge-detection and thresholding to find the pixels that are part of a tool. After the pixels are identified, nearby pixels are grouped into blobs. These blobs represent the potential tools in the image and are the product of the segmentation algorithm. The second module uses matched filtering (or template matching). This approach is used for condensing synthetic images using an image subspace that captures key information. Three degrees of orientation, three degrees of position, and any number of degrees of freedom in geometry change are included. To do this, a template-matching framework is applied. This framework uses an off-line system for calculating template images, measurement images, and the measurements of the template images. These results are used online to match segmented tools against the templates. The final module is the RTPG processor. Its role is to find the exact states of the tools given initial conditions provided by the ITPG module. The requirement that the initial conditions exist allows this module to make use of a local search (whereas the ITPG module had global scope). To perform the local search, 3D model matching is used, where a synthetic image of the object is created and compared to the sensed data. The availability of low-cost PC graphics hardware allows rapid creation of synthetic images
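
    The segmentation stage described above (edge detection plus thresholding, then grouping nearby pixels into blobs) can be sketched with OpenCV as follows. The thresholds and minimum blob area are assumptions, and the ITPG template-matching and RTPG model-matching stages are not shown.

      # Sketch of the described segmentation stage: edge detection plus thresholding to find
      # tool pixels, then grouping nearby pixels into candidate blobs. Thresholds are illustrative.
      import cv2
      import numpy as np

      def segment_tools(gray, edge_lo=50, edge_hi=150, min_area=200):
          edges = cv2.Canny(gray, edge_lo, edge_hi)                       # pixels with strong gradients
          _, bright = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          candidate = cv2.bitwise_or(edges, bright)                       # pixels that may belong to a tool
          candidate = cv2.dilate(candidate, np.ones((5, 5), np.uint8))    # merge nearby pixels into blobs
          n, labels, stats, _ = cv2.connectedComponentsWithStats(candidate)
          blobs = []
          for i in range(1, n):                                           # label 0 is the background
              if stats[i, cv2.CC_STAT_AREA] >= min_area:
                  x, y, w, h = stats[i, :4]
                  blobs.append((x, y, w, h))                              # bounding box of a candidate tool
          return blobs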

  14. Multiple Kernel Learning for Visual Object Recognition: A Review.

    PubMed

    Bucak, Serhat S; Rong Jin; Jain, Anil K

    2014-07-01

    Multiple kernel learning (MKL) is a principled approach for selecting and combining kernels for a given recognition task. A number of studies have shown that MKL is a useful tool for object recognition, where each image is represented by multiple sets of features and MKL is applied to combine different feature sets. We review the state-of-the-art for MKL, including different formulations and algorithms for solving the related optimization problems, with the focus on their applications to object recognition. One dilemma faced by practitioners interested in using MKL for object recognition is that different studies often provide conflicting results about the effectiveness and efficiency of MKL. To resolve this, we conduct extensive experiments on standard datasets to evaluate various approaches to MKL for object recognition. We argue that the seemingly contradictory conclusions offered by studies are due to different experimental setups. The conclusions of our study are: (i) given a sufficient number of training examples and feature/kernel types, MKL is more effective for object recognition than simple kernel combination (e.g., choosing the best performing kernel or average of kernels); and (ii) among the various approaches proposed for MKL, the sequential minimal optimization, semi-infinite programming, and level method based ones are computationally most efficient.
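
    The two simple baselines against which MKL is compared in this review (choosing the best performing kernel, and averaging kernels) can be written compactly with precomputed Gram matrices in scikit-learn, as in the sketch below. The data are random stand-ins, and no actual MKL solver (SMO-, SILP-, or level-method-based) is implemented here.

      # Sketch of the simple kernel-combination baselines discussed in the review: the best
      # single kernel and the unweighted average of kernels, using precomputed Gram matrices.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.metrics.pairwise import rbf_kernel, linear_kernel, polynomial_kernel
      from sklearn.model_selection import train_test_split

      X = np.random.rand(200, 64)                       # stand-in for image feature vectors
      y = np.random.randint(0, 2, 200)
      train, test = train_test_split(np.arange(200), test_size=0.3, random_state=0)

      kernels = [rbf_kernel(X, X, gamma=0.5), linear_kernel(X, X), polynomial_kernel(X, X, degree=2)]

      def accuracy(K):
          clf = SVC(kernel='precomputed').fit(K[np.ix_(train, train)], y[train])
          return clf.score(K[np.ix_(test, train)], y[test])

      best_single = max(accuracy(K) for K in kernels)           # "choose the best performing kernel"
      averaged = accuracy(sum(kernels) / len(kernels))          # "average of kernels"
      print(best_single, averaged)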

  15. Facilitation of face recognition through the retino-tectal pathway.

    PubMed

    Nakano, Tamami; Higashida, Noriko; Kitazawa, Shigeru

    2013-08-01

    Humans can shift their gazes faster to human faces than to non-face targets during a task in which they are required to choose between face and non-face targets. However, it remains unclear whether a direct projection from the retina to the superior colliculus is specifically involved in this facilitated recognition of faces. To address this question, we presented a pair of face and non-face pictures to participants modulated in greyscale (luminance-defined stimuli) in one condition and modulated in a blue-yellow scale (S-cone-isolating stimuli) in another. The information of the S-cone-isolating stimuli is conveyed through the retino-geniculate pathway rather than the retino-tectal pathway. For the luminance stimuli, the reaction time was shorter towards a face than towards a non-face target. The facilitatory effect while choosing a face disappeared with the S-cone stimuli. Moreover, fearful faces elicited a significantly larger facilitatory effect relative to neutral faces, when the face (with or without emotion) and non-face stimuli were presented in greyscale. The effect of emotional expressions disappeared with the S-cone stimuli. In contrast to the S-cone stimuli, the face facilitatory effect was still observed with negated stimuli that were prepared by reversing the polarity of the original colour pictures and looked as unusual as the S-cone stimuli but still contained luminance information. These results demonstrate that the face facilitatory effect requires the facial and emotional information defined by luminance, suggesting that the luminance information conveyed through the retino-tectal pathway is responsible for the faster recognition of human faces.

  16. Shape and Color Features for Object Recognition Search

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Duong, Vu A.; Stubberud, Allen R.

    2012-01-01

    A bio-inspired shape feature of an object of interest emulates the integration of the saccadic eye movement and horizontal layer in vertebrate retina for object recognition search where a single object can be used one at a time. The optimal computational model for shape-extraction-based principal component analysis (PCA) was also developed to reduce processing time and enable the real-time adaptive system capability. A color feature of the object is employed as color segmentation to empower the shape feature recognition to solve the object recognition in the heterogeneous environment where a single technique - shape or color - may expose its difficulties. To enable the effective system, an adaptive architecture and autonomous mechanism were developed to recognize and adapt the shape and color feature of the moving object. The bio-inspired object recognition based on bio-inspired shape and color can be effective to recognize a person of interest in the heterogeneous environment where the single technique exposed its difficulties to perform effective recognition. Moreover, this work also demonstrates the mechanism and architecture of the autonomous adaptive system to enable the realistic system for the practical use in the future.

  17. Induced gamma band responses predict recognition delays during object identification.

    PubMed

    Martinovic, Jasna; Gruber, Thomas; Müller, Matthias M

    2007-06-01

    Neural mechanisms of object recognition seem to rely on activity of distributed neural assemblies coordinated by synchronous firing in the gamma-band range (>20 Hz). In the present electroencephalogram (EEG) study, we investigated induced gamma band activity during the naming of line drawings of upright objects and objects rotated in the image plane. Such plane-rotation paradigms elicit view-dependent processing, leading to delays in recognition of disoriented objects. Our behavioral results showed reaction time delays for rotated, as opposed to upright, images. These delays were accompanied by delays in the peak latency of induced gamma band responses (GBRs), in the absence of any effects on other measures of EEG activity. The latency of the induced GBRs has thus, for the first time, been selectively modulated by an experimental manipulation that delayed recognition. This finding indicates that induced GBRs have a genuine role as neural markers of late representational processes during object recognition. In concordance with the view that object recognition is achieved through dynamic learning processes, we propose that induced gamma band activity could be one of the possible cortical markers of such dynamic object coding.

  18. Homotopic image pseudo-invariants for openset object recognition and image retrieval.

    PubMed

    Shinagawa, Yoshihisa

    2008-11-01

    This paper presents novel homotopic image pseudo-invariants for face recognition based on pixelwise analysis. An exemplar face and test images are matched, and the most similar image is determined first. The homotopic image pseudo-invariants are then calculated to judge whether the most similar image is of the same person as the exemplar. The proposed method can be applied to openset recognition. The recognition task can be performed with or without a face database, although the recognition rate is higher when a database is available. This fact facilitates the recognition of faces and various other objects on the Internet. We benchmark the method using FERET as well as images downloaded from the Internet.

  19. Stochastic Process Underlying Emergent Recognition of Visual Objects Hidden in Degraded Images

    PubMed Central

    Murata, Tsutomu; Hamada, Takashi; Shimokawa, Tetsuya; Tanifuji, Manabu; Yanagida, Toshio

    2014-01-01

    When a degraded two-tone image such as a “Mooney” image is seen for the first time, it is unrecognizable in the initial seconds. The recognition of such an image is facilitated by giving prior information on the object, which is known as top-down facilitation and has been intensively studied. Even in the absence of any prior information, however, we experience sudden perception of the emergence of a salient object after continued observation of the image, whose processes remain poorly understood. This emergent recognition is characterized by a comparatively long reaction time ranging from seconds to tens of seconds. In this study, to explore this time-consuming process of emergent recognition, we investigated the properties of the reaction times for recognition of degraded images of various objects. The results show that the time-consuming component of the reaction times follows a specific exponential function related to levels of image degradation and subject's capability. Because generally an exponential time is required for multiple stochastic events to co-occur, we constructed a descriptive mathematical model inspired by the neurophysiological idea of combination coding of visual objects. Our model assumed that the coincidence of stochastic events complement the information loss of a degraded image leading to the recognition of its hidden object, which could successfully explain the experimental results. Furthermore, to see whether the present results are specific to the task of emergent recognition, we also conducted a comparison experiment with the task of perceptual decision making of degraded images, which is well known to be modeled by the stochastic diffusion process. The results indicate that the exponential dependence on the level of image degradation is specific to emergent recognition. The present study suggests that emergent recognition is caused by the underlying stochastic process which is based on the coincidence of multiple stochastic events

  20. Stochastic process underlying emergent recognition of visual objects hidden in degraded images.

    PubMed

    Murata, Tsutomu; Hamada, Takashi; Shimokawa, Tetsuya; Tanifuji, Manabu; Yanagida, Toshio

    2014-01-01

    When a degraded two-tone image such as a "Mooney" image is seen for the first time, it is unrecognizable in the initial seconds. The recognition of such an image is facilitated by giving prior information on the object, which is known as top-down facilitation and has been intensively studied. Even in the absence of any prior information, however, we experience sudden perception of the emergence of a salient object after continued observation of the image, a process that remains poorly understood. This emergent recognition is characterized by a comparatively long reaction time ranging from seconds to tens of seconds. In this study, to explore this time-consuming process of emergent recognition, we investigated the properties of the reaction times for recognition of degraded images of various objects. The results show that the time-consuming component of the reaction times follows a specific exponential function related to the level of image degradation and the subject's capability. Because an exponential time is generally required for multiple stochastic events to co-occur, we constructed a descriptive mathematical model inspired by the neurophysiological idea of combination coding of visual objects. Our model assumed that the coincidence of stochastic events complements the information loss of a degraded image, leading to the recognition of its hidden object, and it could successfully explain the experimental results. Furthermore, to see whether the present results are specific to the task of emergent recognition, we also conducted a comparison experiment with the task of perceptual decision making of degraded images, which is well known to be modeled by the stochastic diffusion process. The results indicate that the exponential dependence on the level of image degradation is specific to emergent recognition. The present study suggests that emergent recognition is caused by an underlying stochastic process based on the coincidence of multiple stochastic events.
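
    The step in the argument that co-occurrence of several independent stochastic events takes an exponentially long time can be illustrated with a toy simulation (not the authors' model; the per-step event probability p and the number of required events k are arbitrary illustrative values). If each event is active in a time step with probability p, all k events coincide with probability p**k, so the mean waiting time grows as (1/p)**k, exponentially in k:

        import numpy as np

        rng = np.random.default_rng(0)

        def waiting_time(k, p, max_steps=10_000_000):
            # Time steps until k independent events all occur within the same step.
            for t in range(1, max_steps + 1):
                if np.all(rng.random(k) < p):
                    return t
            return max_steps

        p = 0.3
        for k in range(1, 6):
            mean_t = np.mean([waiting_time(k, p) for _ in range(200)])
            print(f"k={k}: simulated mean wait {mean_t:8.1f}, theory {(1 / p) ** k:8.1f}")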

  1. Robust object recognition under partial occlusions using NMF.

    PubMed

    Soukup, Daniel; Bajla, Ivan

    2008-01-01

    In recent years, nonnegative matrix factorization (NMF) methods for reduced image data representation have attracted the attention of the computer vision community. These methods provide a convenient part-based representation of image data for recognition tasks involving occluded objects. A novel modification of NMF for recognition tasks is proposed which utilizes the matrix sparseness control introduced by Hoyer. We have analyzed the influence of sparseness on recognition rates (RRs) for various subspace dimensions generated for two image databases: the ORL face database and the USPS handwritten digit database. We have studied the behavior of four types of distances between a projected unknown image object and the feature vectors in NMF subspaces generated for the training data; one of these metrics is itself a novel proposal of this work. In the recognition phase, partial occlusions in the test images were modeled by placing two randomly sized, randomly positioned black rectangles into each test image.
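
    A minimal sketch of this kind of NMF pipeline, assuming training images are flattened into the columns of a nonnegative matrix: the factorization uses the standard Lee-Seung multiplicative updates, the Hoyer sparseness projection is omitted for brevity, and plain Euclidean distance stands in for the four metrics studied in the paper.

        import numpy as np

        def nmf(V, r, n_iter=200, eps=1e-9):
            # Lee-Seung multiplicative updates for V (d x n) ~ W (d x r) @ H (r x n).
            rng = np.random.default_rng(0)
            d, n = V.shape
            W = rng.random((d, r)) + eps
            H = rng.random((r, n)) + eps
            for _ in range(n_iter):
                H *= (W.T @ V) / (W.T @ W @ H + eps)
                W *= (V @ H.T) / (W @ H @ H.T + eps)
            return W, H

        def project(W, v, n_iter=200, eps=1e-9):
            # Encode a (possibly occluded) test image in the learned NMF subspace.
            rng = np.random.default_rng(1)
            h = rng.random((W.shape[1], 1)) + eps
            v = v.reshape(-1, 1)
            for _ in range(n_iter):
                h *= (W.T @ v) / (W.T @ W @ h + eps)
            return h.ravel()

        def classify(W, H_train, train_labels, v_test):
            # Nearest training coefficient vector in the subspace (Euclidean metric).
            h = project(W, v_test)
            dists = np.linalg.norm(H_train.T - h, axis=1)
            return train_labels[int(np.argmin(dists))]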

  2. Ontogeny of Rat Recognition Memory Measured by the Novel Object Recognition Task

    PubMed Central

    Reger, Maxine L.; Hovda, David A.; Giza, Christopher C.

    2010-01-01

    Detection of novelty is an essential component of recognition memory, which develops throughout cerebral maturation. To better understand the developmental aspects of this memory system, the novel object recognition task (NOR) was used with the immature rat and ontogenetically profiled. It was hypothesized that object recognition would vary across development and be inferior to adult performance. The NOR design was made age-appropriate by downsizing the testing objects and arena. Weanling (P20-23), juvenile (P29-40) and adult (P50+) rats were tested after 0.25 hr, 1 hr, 24 hr and 48 hr delays. Weanlings exhibited novel object recognition at 0.25 and 1 hrs, while older animals showed a preference for the novel object out to 24 hrs. These findings are consistent with previous research performed in humans and monkeys, as well as with studies using the NOR after medial temporal lobe damage in adult rats. PMID:19739136

  3. Semantic information can facilitate covert face recognition in congenital prosopagnosia.

    PubMed

    Rivolta, Davide; Schmalzl, Laura; Coltheart, Max; Palermo, Romina

    2010-11-01

    People with congenital prosopagnosia have never developed the ability to accurately recognize faces. This single-case study systematically investigates covert and overt face recognition in "C.," a 69-year-old woman with congenital prosopagnosia. Specifically, we: (a) describe the first assessment of covert face recognition in congenital prosopagnosia using multiple tasks; (b) show that semantic information can contribute to covert recognition; and (c) provide a theoretical explanation for the mechanisms underlying covert face recognition.

  4. Invariance in Visual Object Recognition Requires Training: A Computational Argument

    PubMed Central

    Goris, Robbe L. T.; de Beeck, Hans P. Op

    2009-01-01

    Visual object recognition is remarkably accurate and robust, yet its neurophysiological underpinnings are poorly understood. Single cells in brain regions thought to underlie object recognition code for many stimulus aspects, which poses a limit on their invariance. Combining the responses of multiple non-invariant neurons via weighted linear summation offers an optimal decoding strategy, which may be able to achieve invariant object recognition. However, because object identification is essentially parameter optimization in this model, the characteristics of the identification task the model is trained to perform are critically important. If this task does not require invariance, a neural population code is inherently more selective but less tolerant than the single neurons constituting the population. Nevertheless, tolerance can be learned, provided that it is trained for, at the cost of selectivity. We argue that this model is an interesting null hypothesis against which to compare behavioral results, and conclude that it may explain several experimental findings. PMID:20589239

  5. Generalized Sparselet Models for Real-Time Multiclass Object Recognition.

    PubMed

    Song, Hyun Oh; Girshick, Ross; Zickler, Stefan; Geyer, Christopher; Felzenszwalb, Pedro; Darrell, Trevor

    2015-05-01

    Real-time multiclass object recognition is a problem of great practical importance. In this paper, we describe a framework that simultaneously utilizes shared representation, reconstruction sparsity, and parallelism to enable real-time multiclass object detection with deformable part models at 5 Hz on a laptop computer with almost no decrease in task performance. Our framework is trained in the standard structured output prediction formulation and is generically applicable for speeding up object recognition systems where the computational bottleneck is in multiclass, multi-convolutional inference. We experimentally demonstrate the efficiency and task performance of our method on PASCAL VOC, a subset of ImageNet, and the Caltech101 and Caltech256 datasets.
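
    The representation-sharing idea behind sparselets can be sketched as follows, assuming part filters have already been trained and flattened into the rows of a hypothetical `filters` array: each filter is approximated as a sparse combination of shared dictionary elements, and per-filter response maps are then reconstructed from precomputed dictionary responses. This is a rough illustration, not the paper's full detection pipeline.

        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        def learn_sparselets(filters, n_sparselets=128, nonzeros=8):
            # filters: (n_filters, filter_dim) array of flattened part filters.
            dl = DictionaryLearning(n_components=n_sparselets,
                                    transform_algorithm="omp",
                                    transform_n_nonzero_coefs=nonzeros,
                                    random_state=0)
            codes = dl.fit_transform(filters)   # (n_filters, n_sparselets), sparse rows
            dictionary = dl.components_         # (n_sparselets, filter_dim)
            return codes, dictionary

        def reconstruct_responses(sparselet_responses, codes):
            # The image is convolved once with every sparselet; each filter's response
            # map is then rebuilt as the same sparse combination of those responses.
            # sparselet_responses: (n_sparselets, H, W); codes: (n_filters, n_sparselets).
            return np.tensordot(codes, sparselet_responses, axes=([1], [0]))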

  6. A hippocampal signature of perceptual learning in object recognition.

    PubMed

    Guggenmos, Matthias; Rothkirch, Marcus; Obermayer, Klaus; Haynes, John-Dylan; Sterzer, Philipp

    2015-04-01

    Perceptual learning is the improvement in perceptual performance through training or exposure. Here, we used fMRI before and after extensive behavioral training to investigate the effects of perceptual learning on the recognition of objects under challenging viewing conditions. Objects belonged either to trained or untrained categories. Trained categories were further subdivided into trained and untrained exemplars and were coupled with high or low monetary rewards during training. After a 3-day training, object recognition was markedly improved. Although there was a considerable transfer of learning to untrained exemplars within categories, an enhancing effect of reward reinforcement was specific to trained exemplars. fMRI showed that hippocampus responses to both trained and untrained exemplars of trained categories were enhanced by perceptual learning and correlated with the effect of reward reinforcement. Our results suggest a key role of hippocampus in object recognition after perceptual learning.

  7. Where you look can influence haptic object recognition.

    PubMed

    Lawson, Rebecca; Boylan, Amy; Edwards, Lauren

    2014-02-01

    We investigated whether the relative position of objects and the body would influence haptic recognition. People felt objects on the right or left side of their body midline, using their right hand. Their head was turned towards or away from the object, and they could not see their hands or the object. People were better at naming 2-D raised line drawings and 3-D small-scale models of objects and also real, everyday objects when they looked towards them. However, this head-towards benefit was reliable only when their right hand crossed their body midline to feel objects on their left side. Thus, haptic object recognition was influenced by people's head position, although vision of their hand and the object was blocked. This benefit of turning the head towards the object being explored suggests that proprioceptive and haptic inputs are remapped into an external coordinate system and that this remapping is harder when the body is in an unusual position (with the hand crossing the body midline and the head turned away from the hand). The results indicate that haptic processes align sensory inputs from the hand and head even though either hand-centered or object-centered coordinate systems should suffice for haptic object recognition.

  8. The Neural Regions Sustaining Episodic Encoding and Recognition of Objects

    ERIC Educational Resources Information Center

    Hofer, Alex; Siedentopf, Christian M.; Ischebeck, Anja; Rettenbacher, Maria A.; Widschwendter, Christian G.; Verius, Michael; Golaszewski, Stefan M.; Koppelstaetter, Florian; Felber, Stephan; Wolfgang Fleischhacker, W.

    2007-01-01

    In this functional MRI experiment, encoding of objects was associated with activation in left ventrolateral prefrontal/insular and right dorsolateral prefrontal and fusiform regions as well as in the left putamen. By contrast, correct recognition of previously learned objects (R judgments) produced activation in left superior frontal, bilateral…

  9. Spontaneous Object Recognition Memory in Aged Rats: Complexity versus Similarity

    ERIC Educational Resources Information Center

    Gamiz, Fernando; Gallo, Milagros

    2012-01-01

    Previous work on the effect of aging on spontaneous object recognition (SOR) memory tasks in rats has yielded controversial results. Although the results at long-retention intervals are consistent, conflicting results have been reported at shorter delays. We have assessed the potential relevance of the type of object used in the performance of…

  12. Picture object recognition in an American black bear (Ursus americanus).

    PubMed

    Johnson-Ulrich, Zoe; Vonk, Jennifer; Humbyrd, Mary; Crowley, Marilyn; Wojtkowski, Ela; Yates, Florence; Allard, Stephanie

    2016-11-01

    Many animals have been tested for conceptual discriminations using two-dimensional images as stimuli, and many of these species appear to transfer knowledge from 2D images to analogous real life objects. We tested an American black bear for picture-object recognition using a two alternative forced choice task. She was presented with four unique sets of objects and corresponding pictures. The bear showed generalization from both objects to pictures and pictures to objects; however, her transfer was superior when transferring from real objects to pictures, suggesting that bears can recognize visual features from real objects within photographic images during discriminations.

  13. 3D object recognition based on local descriptors

    NASA Astrophysics Data System (ADS)

    Jakab, Marek; Benesova, Wanda; Racev, Marek

    2015-01-01

    In this paper, we propose an enhanced method of 3D object description and recognition based on local descriptors using an RGB image and depth information (D) acquired by a Kinect sensor. Our main contribution is an extension of the SIFT feature vector with 3D information derived from the depth map (SIFT-D). We also propose a novel local depth descriptor (DD) that includes a 3D description of the key point neighborhood. The 3D descriptor thus defined can then enter the decision-making process. Two different approaches have been proposed, tested and evaluated in this paper. The first approach uses the original SIFT descriptor in combination with our proposed 3D descriptor, where the 3D descriptor is responsible for pre-selection of the objects. The second approach demonstrates object recognition using an extension of the SIFT feature vector with the local depth description. We present the results of two experiments for the evaluation of the proposed depth descriptors. The results show an improvement in the accuracy of the recognition system that includes the 3D local description compared with the same system without it. Our experimental object recognition system operates in near real-time.
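
    A simplified illustration of augmenting a SIFT descriptor with depth information from the keypoint neighbourhood, assuming OpenCV with SIFT available and a depth map aligned to the grey image; the particular depth statistics appended here (mean, standard deviation, and mean absolute gradients of the patch) are an illustrative choice, not the exact SIFT-D or DD formulation of the paper.

        import cv2
        import numpy as np

        def sift_d(gray, depth, patch=16):
            # gray: uint8 image; depth: float32 depth map aligned with gray.
            sift = cv2.SIFT_create()
            keypoints, desc = sift.detectAndCompute(gray, None)
            if desc is None:
                return [], np.empty((0, 132))
            extended, half = [], patch // 2
            for kp, d in zip(keypoints, desc):
                x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
                win = depth[max(0, y - half):y + half, max(0, x - half):x + half]
                if win.size == 0:
                    continue
                gy, gx = np.gradient(win.astype(np.float32))
                depth_feat = [win.mean(), win.std(), np.abs(gx).mean(), np.abs(gy).mean()]
                extended.append(np.concatenate([d, depth_feat]))  # 128-D SIFT + 4-D depth
            return keypoints, np.array(extended)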

  14. The viewpoint complexity of an object-recognition task.

    PubMed

    Tjan, B S; Legge, G E

    1998-08-01

    There is an ongoing debate about the nature of perceptual representation in human object recognition. Resolution of this debate has been hampered by the lack of a metric for assessing the representational requirements of a recognition task. To recognize a member of a given set of 3-D objects, how much detail must the objects' representations contain in order to achieve a specific accuracy criterion? From the performance of an ideal observer, we derived a quantity called the view complexity (VX) to measure the required granularity of representation. VX is an intrinsic property of the object-recognition task, taking into account both the object ensemble and the type of decision required of an observer. It does not depend on the visual representation or processing used by the observer. VX can be interpreted as the number of randomly selected 2-D images needed to represent the decision boundaries in the image space of a 3-D object-recognition task. A low VX means the task is inherently more viewpoint invariant and a high VX means it is inherently more viewpoint dependent. By measuring the VX of recognition tasks with different object sets, we show that the current confusion about the nature of human perceptual representation is partly due to a failure in distinguishing between human visual processing and the properties of a task and its stimuli. We find general correspondence between the VX of a recognition task and the published human data on viewpoint dependence. Exceptions in this relationship motivated us to propose the view-rate hypothesis: human visual performance is limited by the equivalent number of 2-D image views that can be processed per unit time.

  15. Sensor agnostic object recognition using a map seeking circuit

    NASA Astrophysics Data System (ADS)

    Overman, Timothy L.; Hart, Michael

    2012-05-01

    Automatic object recognition capabilities are traditionally tuned to exploit the specific sensing modality they were designed for. Their successes (and shortcomings) are tied to object segmentation from the background, they typically require highly skilled personnel to train them, and they become cumbersome with the introduction of new objects. In this paper we describe a sensor-independent algorithm based on the biologically inspired technology of map seeking circuits (MSC) which overcomes many of these obstacles. In particular, the MSC concept offers transparency in object recognition from a common interface to all sensor types, analogous to a USB device. It also provides a common core framework that is independent of the sensor and expandable to support high-dimensionality decision spaces. Ease of training is assured by using commercially available 3D models from the video game community. The search time remains linear no matter how many objects are introduced, ensuring rapid object recognition. Here, we report results of an MSC algorithm applied to object recognition and pose estimation from high range resolution radar (1D), electro-optical imagery (2D), and LIDAR point clouds (3D) separately. By abstracting the sensor phenomenology from the underlying a priori knowledge base, MSC shows promise as an easily adaptable tool for incorporating additional sensor inputs.

  16. A Proposed Biologically Inspired Model for Object Recognition

    NASA Astrophysics Data System (ADS)

    Al-Absi, Hamada R. H.; Abdullah, Azween B.

    Object recognition has attracted the attention of many researchers, as it is considered one of the most important problems in computer vision. Two main approaches have been utilized to develop object recognition solutions: machine vision and biological vision. Many algorithms have been developed in machine vision. Recently, biology has inspired computer scientists to map features of the human and primate visual systems into computational models. Some of these models are based on the feed-forward mechanism of information processing in the cortex; however, their performance degrades as clutter in the scene increases. Another mechanism of information processing in the cortex is feedback. This mechanism has also been mapped into computational models, but the results were likewise not satisfactory. In this paper, an object recognition model based on the integration of the feed-forward and feedback functions of the visual cortex is proposed.

  17. Hypnotizability and haptics: visual recognition of unimanually explored 'nonmeaningful' objects.

    PubMed

    Castellani, E; Carli, G; Santarcangelo, E L

    2012-08-01

    The cognitive trait of hypnotizability modulates sensorimotor integration and mental imagery. In particular, earlier results show that visual recognition of 'nonmeaningful', unfamiliar objects explored bimanually is faster and more accurate in subjects with high (Highs) than with low hypnotizability (Lows). The present study was aimed at investigating whether Highs exhibit a similar advantage after unimanual exploration. Recognition frequency (RF) and recognition time (RT) for correct recognitions of the explored objects were recorded. The results showed the absence of any hypnotizability-related difference in recognition frequencies. In addition, RF for the right and left hands was comparable in Highs and in Lows, while slight differences were found in RT. We suggest that hemispheric co-operation played a key role in the better performance of Highs in the bimanual task previously studied. In the unimanual exploration, the task's characteristics (favoring the left hand), hypnotizability-related cerebral asymmetry (favoring the right hand in Highs) and the possible preferential verbal style of recognition in Lows (favoring the right hand in this group) antagonize each other and prevent the occurrence of major differences between the performance of Highs and Lows.

  18. Aerial Object Recognition Algorithm Based on Contour Descriptor

    NASA Astrophysics Data System (ADS)

    Strotov, V. V.; Babyan, P. V.; Smirnov, S. A.

    2017-05-01

    This paper describes an image recognition algorithm for on-board and stationary vision systems. The suggested algorithm is intended to recognize aerial objects of specific kinds using a set of reference objects defined by 3D models. The proposed algorithm is based on building an outer-contour descriptor and consists of two stages: learning and recognition. The learning stage is devoted to exploring the reference objects: using the 3D models, a set of training images is gathered by rendering each model from viewpoints evenly distributed on a sphere, with the points distributed according to the geosphere principle. The gathered training image set is used to calculate descriptors, which are then used in the recognition stage. The recognition stage focuses on estimating the similarity of the captured object to the reference objects by matching an observed image descriptor against the reference object descriptors. The experimental research was performed using a set of models of aircraft of different types. The proposed orientation estimation algorithm showed good accuracy in all case studies.
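
    The abstract does not spell out the exact contour descriptor; the sketch below uses a normalized Fourier descriptor of the outer contour as a stand-in, assuming binary silhouettes (rendered from the 3D models or segmented from the image) and OpenCV, to illustrate the learn-then-match structure: descriptors computed from rendered reference views, nearest-descriptor matching at recognition time.

        import cv2
        import numpy as np

        def contour_descriptor(silhouette, n_coeffs=16):
            # silhouette: binary uint8 image of the rendered (or segmented) object.
            contours, _ = cv2.findContours(silhouette, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_NONE)
            outer = max(contours, key=cv2.contourArea).squeeze(1)  # (N, 2) outer contour
            z = outer[:, 0] + 1j * outer[:, 1]                     # complex boundary signal
            f = np.fft.fft(z - z.mean())                           # translation invariance
            mag = np.abs(f[1:n_coeffs + 1])
            return mag / (mag[0] + 1e-9)                           # scale invariance

        def match(observed_desc, reference_descs):
            # reference_descs: dict mapping (object_id, viewpoint) -> descriptor.
            keys = list(reference_descs)
            dists = [np.linalg.norm(observed_desc - reference_descs[k]) for k in keys]
            return keys[int(np.argmin(dists))]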

  19. Object Recognition Method of Space Debris Tracking Image Sequence

    NASA Astrophysics Data System (ADS)

    Chen, Zhang; Yi-ding, Ping

    2016-07-01

    In order to strengthen the capability of space debris detection, automated optical observation is becoming increasingly popular, so fully unattended automatic object recognition urgently needs to be studied. Open-loop tracking, which guides the telescope using only historical orbital elements, is a simple and robust way to track space debris. Based on an analysis of the point distribution characteristics of an object's open-loop tracking image sequence in pixel space, this paper proposes a cluster identification method for automatic space debris recognition and compares three different algorithms.
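
    The clustering algorithm itself is not specified in the abstract; as a hedged illustration of the general idea (in open-loop tracking the debris stays nearly fixed in pixel coordinates across frames while stars drift), the sketch below pools detected point positions from all frames and uses DBSCAN to find the dense pixel-space cluster corresponding to the object. Parameter values are arbitrary.

        import numpy as np
        from sklearn.cluster import DBSCAN

        def find_object(detections, eps=3.0, min_frames=10):
            # detections: list over frames, each an (n_i, 2) array of (x, y) positions.
            pts = np.vstack(detections)
            labels = DBSCAN(eps=eps, min_samples=min_frames).fit_predict(pts)
            valid = labels[labels >= 0]
            if valid.size == 0:
                return None                       # no dense cluster: object not found
            target = np.bincount(valid).argmax()  # densest cluster across the sequence
            return pts[labels == target].mean(axis=0)  # estimated object pixel position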

  20. A new method of edge detection for object recognition

    USGS Publications Warehouse

    Maddox, Brian G.; Rhew, Benjamin

    2004-01-01

    Traditional edge detection systems function by returning every edge in an input image. This can result in a large amount of clutter and make certain vectorization algorithms less accurate. Accuracy problems can then have a large impact on automated object recognition systems that depend on edge information. A new method of directed edge detection can be used to limit the number of edges returned based on a particular feature. This results in a cleaner image that is easier to vectorize. Vectorized edges from this process could then feed an object recognition system, where the edge data would also contain information as to what type of feature it bordered.

  1. Rule-Based Orientation Recognition Of A Moving Object

    NASA Astrophysics Data System (ADS)

    Gove, Robert J.

    1989-03-01

    This paper presents a detailed description and a comparative analysis of the algorithms used to determine the position and orientation of an object in real-time. The exemplary object, a freely moving goldfish in an aquarium, provides "real-world" motion, with definable characteristics of motion (the fish never swims upside-down) and the complexities of a non-rigid body. For simplicity of implementation, and since a restricted and stationary viewing domain exists (fish-tank), we reduced the problem of obtaining 3D correspondence information to trivial alignment calculations by using two cameras orthogonally viewing the object. We applied symbolic processing techniques to recognize the 3-D orientation of a moving object of known identity in real-time. Assuming motion, each new frame (sensed by the two cameras) provides images of the object's profile, which has most likely undergone translation, rotation, scaling and/or bending of the non-rigid object since the previous frame. We developed an expert system which uses heuristics of the object's motion behavior in the form of rules and information obtained via low-level image processing (like numerical inertial axis calculations) to dynamically estimate the object's orientation. An inference engine provides these estimates at frame rates of up to 10 per second (which is essentially real-time). The advantages of the rule-based approach to orientation recognition will be compared with other pattern recognition techniques. Our results of an investigation of statistical pattern recognition, neural networks, and procedural techniques for orientation recognition will be included. We implemented the algorithms in a rapid-prototyping environment, the TI-Explorer, equipped with an Odyssey and custom imaging hardware. A brief overview of the workstation is included to clarify one motivation for our choice of algorithms. These algorithms exploit two facets of the prototype image processing and understanding workstation - both low

  2. Utilization-based object recognition in confined spaces

    NASA Astrophysics Data System (ADS)

    Shirkhodaie, Amir; Telagamsetti, Durga; Chan, Alex L.

    2017-05-01

    Recognizing substantially occluded objects in confined spaces is a very challenging problem for ground-based persistent surveillance systems. In this paper, we discuss the ontology inference of occluded object recognition in the context of in-vehicle group activities (IVGA) and describe an approach that we refer to as the utilization-based object recognition method. We examine the performance of three types of classifiers tailored for the recognition of objects with partial visibility, namely, (1) Hausdorff Distance classifier, (2) Hamming Network classifier, and (3) Recurrent Neural Network classifier. In order to train these classifiers, we have generated multiple imagery datasets containing a mixture of common objects appearing inside a vehicle with full or partial visibility and occultation. To generate dynamic interactions between multiple people, we model the IVGA scenarios using a virtual simulation environment, in which a number of simulated actors perform a variety of IVGA tasks independently or jointly. This virtual simulation engine produces the much-needed imagery datasets for the verification and validation of the efficiency and effectiveness of the selected object recognizers. Finally, we improve the performance of these object recognizers by incorporating human gestural information that differentiates various object utilization or handling methods through the analyses of dynamic human-object interactions (HOI), human-human interactions (HHI), and human-vehicle interactions (HVI) in the context of IVGA.
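
    A minimal sketch of the first of the three classifiers named above, using SciPy's directed Hausdorff distance; the edge-point extraction and the template set are assumed to be given, and this illustrates only one classifier type, not the full IVGA pipeline.

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def hausdorff(a, b):
            # Symmetric Hausdorff distance between two (n, 2) point sets.
            return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

        def classify(observed_points, templates):
            # templates: dict mapping object label -> (n, 2) array of template edge points.
            dists = {label: hausdorff(observed_points, pts)
                     for label, pts in templates.items()}
            return min(dists, key=dists.get)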

  3. Visual Exploration and Object Recognition by Lattice Deformation

    PubMed Central

    Melloni, Lucia; Mureşan, Raul C.

    2011-01-01

    Mechanisms of explicit object recognition are often difficult to investigate and require stimuli with controlled features whose expression can be manipulated in a precise quantitative fashion. Here, we developed a novel method (called “Dots”), for generating visual stimuli, which is based on the progressive deformation of a regular lattice of dots, driven by local contour information from images of objects. By applying progressively larger deformation to the lattice, the latter conveys progressively more information about the target object. Stimuli generated with the presented method enable a precise control of object-related information content while preserving low-level image statistics, globally, and affecting them only little, locally. We show that such stimuli are useful for investigating object recognition under a naturalistic setting – free visual exploration – enabling a clear dissociation between object detection and explicit recognition. Using the introduced stimuli, we show that top-down modulation induced by previous exposure to target objects can greatly influence perceptual decisions, lowering perceptual thresholds not only for object recognition but also for object detection (visual hysteresis). Visual hysteresis is target-specific, its expression and magnitude depending on the identity of individual objects. Relying on the particular features of dot stimuli and on eye-tracking measurements, we further demonstrate that top-down processes guide visual exploration, controlling how visual information is integrated by successive fixations. Prior knowledge about objects can guide saccades/fixations to sample locations that are supposed to be highly informative, even when the actual information is missing from those locations in the stimulus. The duration of individual fixations is modulated by the novelty and difficulty of the stimulus, likely reflecting cognitive demand. PMID:21818397
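
    A rough sketch of the stimulus-generation idea described above: a regular dot lattice whose dots are displaced toward nearby object contours by an amount set by a deformation parameter. The Canny contour extraction and the nearest-point displacement rule are simplifying assumptions, not the authors' exact procedure.

        import cv2
        import numpy as np

        def dots_stimulus(image_gray, spacing=12, deformation=0.5):
            # deformation in [0, 1]: 0 = regular lattice, 1 = dots moved onto the contour.
            edges = cv2.Canny(image_gray, 100, 200)
            contour_pts = np.column_stack(np.nonzero(edges)[::-1]).astype(np.float32)
            h, w = image_gray.shape
            ys, xs = np.mgrid[spacing // 2:h:spacing, spacing // 2:w:spacing]
            lattice = np.column_stack([xs.ravel(), ys.ravel()]).astype(np.float32)
            stimulus = np.zeros((h, w), dtype=np.uint8)
            for p in lattice:
                nearest = contour_pts[np.argmin(np.linalg.norm(contour_pts - p, axis=1))]
                q = (1 - deformation) * p + deformation * nearest  # move dot toward contour
                cv2.circle(stimulus, (int(q[0]), int(q[1])), 1, 255, -1)
            return stimulus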

  4. Image-based object recognition in man, monkey and machine.

    PubMed

    Tarr, M J; Bülthoff, H H

    1998-07-01

    Theories of visual object recognition must solve the problem of recognizing 3D objects given that perceivers only receive 2D patterns of light on their retinae. Recent findings from human psychophysics, neurophysiology and machine vision provide converging evidence for 'image-based' models in which objects are represented as collections of viewpoint-specific local features. This approach is contrasted with 'structural-description' models in which objects are represented as configurations of 3D volumes or parts. We then review recent behavioral results that address the biological plausibility of both approaches, as well as some of their computational advantages and limitations. We conclude that, although the image-based approach holds great promise, it has potential pitfalls that may be best overcome by including structural information. Thus, the most viable model of object recognition may be one that incorporates the most appealing aspects of both image-based and structural-description theories.

  5. A Hierarchical Control Strategy For 2-D Object Recognition

    NASA Astrophysics Data System (ADS)

    Cullen, Mark F.; Kuszmaul, Christopher L.; Ramsey, Timothy S.

    1988-02-01

    A control strategy for 2-D object recognition has been implemented on a hardware configuration which includes a Symbolics Lisp Machine (TM) as a front-end processor to a 16,384-processor Connection Machine (TM). The goal of this ongoing research program is to develop an image analysis system as an aid to human image interpretation experts. Our efforts have concentrated on 2-D object recognition in aerial imagery; specifically, the detection and identification of aircraft near the Danbury, CT airport. Image processing functions to label and extract image features are implemented on the Connection Machine for robust computation. A model matching function was also designed and implemented on the CM for object recognition. In this paper we report on the integration of these algorithms on the CM, with a hierarchical control strategy to focus and guide the object recognition task to particular objects and regions of interest in imagery. It will be shown that these techniques may be used to manipulate imagery on the order of 2k x 2k pixels in near-real-time.

  6. Individual differences in involvement of the visual object recognition system during visual word recognition.

    PubMed

    Laszlo, Sarah; Sacchi, Elizabeth

    2015-01-01

    Individuals with dyslexia often evince reduced activation during reading in left hemisphere (LH) language regions. This can be observed along with increased activation in the right hemisphere (RH), especially in areas associated with object recognition - a pattern referred to as RH compensation. The mechanisms of RH compensation are relatively unclear. We hypothesize that RH compensation occurs when the RH object recognition system is called upon to supplement an underperforming LH visual word form recognition system. We tested this by collecting ERPs while participants with a range of reading abilities viewed words, objects, and word/object ambiguous items (e.g., "SMILE" shaped like a smile). Less experienced readers differentiate words, objects, and ambiguous items less strongly, especially over the RH. We suggest that this lack of differentiation may have negative consequences for dyslexic individuals demonstrating RH compensation.

  7. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    ERIC Educational Resources Information Center

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  9. 3-D object recognition using 2-D views.

    PubMed

    Li, Wenjing; Bebis, George; Bourbakis, Nikolaos G

    2008-11-01

    We consider the problem of recognizing 3-D objects from 2-D images using geometric models and assuming different viewing angles and positions. Our goal is to recognize and localize instances of specific objects (i.e., model-based) in a scene. This is in contrast to category-based object recognition methods where the goal is to search for instances of objects that belong to a certain visual category (e.g., faces or cars). The key contribution of our work is improving 3-D object recognition by integrating Algebraic Functions of Views (AFoVs), a powerful framework for predicting the geometric appearance of an object due to viewpoint changes, with indexing and learning. During training, we compute the space of views that groups of object features can produce under the assumption of 3-D linear transformations, by combining a small number of reference views that contain the object features using AFoVs. Unrealistic views (e.g., due to the assumption of 3-D linear transformations) are eliminated by imposing a pair of rigidity constraints based on knowledge of the transformation between the reference views of the object. To represent the space of views that an object can produce compactly while allowing efficient hypothesis generation during recognition, we propose combining indexing with learning in two stages. In the first stage, we sample the space of views of an object sparsely and represent information about the samples using indexing. In the second stage, we build probabilistic models of shape appearance by sampling the space of views of the object densely and learning the manifold formed by the samples. Learning employs the Expectation-Maximization (EM) algorithm and takes place in a "universal," lower-dimensional, space computed through Random Projection (RP). During recognition, we extract groups of point features from the scene and we use indexing to retrieve the most feasible model groups that might have produced them (i.e., hypothesis generation). The likelihood
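
    For the affine (3-D linear transformation) case underlying AFoVs, the image coordinates of object points in a novel view can be written as linear combinations of their coordinates in two reference views. The sketch below, assuming corresponding point coordinates are available as NumPy arrays, fits those coefficients by least squares and uses them to predict a novel view; a small residual between predicted and observed coordinates then supports the hypothesis that the observed features come from the model object.

        import numpy as np

        def _design_matrix(ref1, ref2):
            # Coordinates of the same points in two reference views, each (n, 2).
            n = ref1.shape[0]
            return np.column_stack([np.ones(n), ref1[:, 0], ref1[:, 1],
                                    ref2[:, 0], ref2[:, 1]])

        def fit_afov(ref1, ref2, novel):
            # Under 3-D linear (affine) viewing, each novel-view coordinate is a linear
            # combination of the reference-view coordinates; fit it by least squares.
            A = _design_matrix(ref1, ref2)
            coeffs_x, *_ = np.linalg.lstsq(A, novel[:, 0], rcond=None)
            coeffs_y, *_ = np.linalg.lstsq(A, novel[:, 1], rcond=None)
            return coeffs_x, coeffs_y

        def predict_view(ref1, ref2, coeffs_x, coeffs_y):
            A = _design_matrix(ref1, ref2)
            return np.column_stack([A @ coeffs_x, A @ coeffs_y])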

  10. Vision holds a greater share in visuo-haptic object recognition than touch.

    PubMed

    Kassuba, Tanja; Klinge, Corinna; Hölig, Cordula; Röder, Brigitte; Siebner, Hartwig R

    2013-01-15

    The integration of visual and haptic input can facilitate object recognition. Yet, vision might dominate visuo-haptic interactions as it is more effective than haptics in processing several object features in parallel and recognizing objects outside of reaching space. The maximum likelihood approach of multisensory integration would predict that haptics as the less efficient sense for object recognition gains more from integrating additional visual information than vice versa. To test for asymmetries between vision and touch in visuo-haptic interactions, we measured regional changes in brain activity using functional magnetic resonance imaging while healthy individuals performed a delayed-match-to-sample task. We manipulated identity matching of sample and target objects: We hypothesized that only coherent visual and haptic object features would activate unified object representations. The bilateral object-specific lateral occipital cortex, fusiform gyrus, and intraparietal sulcus showed increased activation to crossmodal compared to unimodal matching but only for congruent object pairs. Critically, the visuo-haptic interaction effects in these regions depended on the sensory modality which processed the target object, being more pronounced for haptic than visual targets. This preferential response of visuo-haptic regions indicates a modality-specific asymmetry in crossmodal matching of visual and haptic object features, suggesting a functional primacy of vision over touch in visuo-haptic object recognition.

  11. A novel multi-view object recognition in complex background

    NASA Astrophysics Data System (ADS)

    Chang, Yongxin; Yu, Huapeng; Xu, Zhiyong; Fu, Chengyu; Gao, Chunming

    2015-02-01

    Recognizing objects from arbitrary aspects is a highly challenging problem in computer vision, and most existing algorithms focus on a specific viewpoint. Hence, in this paper we present a novel recognition framework based on hierarchical representation, a part-based method, and learning, in order to recognize objects from different viewpoints. The learning evaluates the model's mistakes and feeds them back to the detector to avoid the same mistakes in the future. The principal idea is to extract intrinsic viewpoint-invariant features from unseen poses of an object, and then to take advantage of these shared appearance features to support recognition, in combination with the improved multiple-view model. Compared with other recognition models, the proposed approach can efficiently tackle the multi-view problem and improve the recognition versatility of the system. For a quantitative evaluation, the algorithm has been tested on several benchmark datasets such as Caltech 101 and PASCAL VOC 2010. The experimental results show that our approach recognizes objects more precisely and outperforms other single-view recognition methods.

  12. Orientation-Invariant Object Recognition: Evidence from Repetition Blindness

    ERIC Educational Resources Information Center

    Harris, Irina M.; Dux, Paul E.

    2005-01-01

    The question of whether object recognition is orientation-invariant or orientation-dependent was investigated using a repetition blindness (RB) paradigm. In RB, the second occurrence of a repeated stimulus is less likely to be reported, compared to the occurrence of a different stimulus, if it occurs within a short time of the first presentation.…

  13. Computing with Connections in Visual Recognition of Origami Objects.

    ERIC Educational Resources Information Center

    Sabbah, Daniel

    1985-01-01

    Summarizes an initial foray in tackling artificial intelligence problems using a connectionist approach. The task chosen is visual recognition of Origami objects, and the questions answered are how to construct a connectionist network to represent and recognize projected Origami line drawings and the advantages such an approach would have. (30…

  15. Type of object motion facilitates word mapping by preverbal infants.

    PubMed

    Matatyaho-Bullaro, Dalit J; Gogate, Lakshmi; Mason, Zachary; Cadavid, Steven; Abdel-Mottaleb, Mohammed

    2014-02-01

    This study assessed whether specific types of object motion, which predominate in maternal naming to preverbal infants, facilitate word mapping by infants. A total of 60 full-term 8-month-old infants were habituated to two spoken words, /bæf/ and /wem/, synchronous with the handheld motions of a toy dragonfly and a fish or a lamb chop and a squiggly. They were presented in one of four experimental motion conditions-shaking, looming, upward, and sideways-and one all-motion control condition. Infants were then given a test that consisted of two mismatch (change) and two control (no-change) trials, counterbalanced for order. Results revealed that infants learned the word-object relations (i.e., looked longer on the mismatch trials relative to the control trials) in the shaking and looming motion conditions but not in the upward, sideways, and all-motion conditions. Infants learned the word-object relations in the looming and shaking conditions likely because these motions foreground the object for the infants. Thus, the type of gesture an adult uses matters during naming when preverbal infants are beginning to map words onto objects. The results suggest that preverbal infants learn word-object relations within an embodied system involving matches between infants' perception of motion and specific motion properties of caregivers' naming.

  16. Evidence That the Rat Hippocampus Has Contrasting Roles in Object Recognition Memory and Object Recency Memory

    PubMed Central

    Albasser, Mathieu M.; Amin, Eman; Lin, Tzu-Ching E.; Iordanova, Mihaela D.; Aggleton, John P.

    2012-01-01

    Adult rats with extensive, bilateral neurotoxic lesions of the hippocampus showed normal forgetting curves for object recognition memory, yet were impaired on closely related tests of object recency memory. The present findings point to specific mechanisms for temporal order information (recency) that are dependent on the hippocampus and do not involve object recognition memory. The object recognition tests measured rats exploring simultaneously presented objects, one novel and the other familiar. Task difficulty was varied by altering the retention delays after presentation of the familiar object, so creating a forgetting curve. Hippocampal lesions had no apparent effect, despite using an apparatus (bow-tie maze) where it was possible to give lists of objects that might be expected to increase stimulus interference. In contrast, the same hippocampal lesions impaired the normal preference for an older (less recent) familiar object over a more recent, familiar object. A correlation was found between the loss of septal hippocampal tissue and this impairment in recency memory. The dissociation in the present study between recognition memory (spared) and recency memory (impaired) was unusually compelling, because it was possible to test the same objects for both forms of memory within the same session and within the same apparatus. The object recency deficit is of additional interest as it provides an example of a nonspatial memory deficit following hippocampal damage. PMID:23025831

  17. Evidence that the rat hippocampus has contrasting roles in object recognition memory and object recency memory.

    PubMed

    Albasser, Mathieu M; Amin, Eman; Lin, Tzu-Ching E; Iordanova, Mihaela D; Aggleton, John P

    2012-10-01

    Adult rats with extensive, bilateral neurotoxic lesions of the hippocampus showed normal forgetting curves for object recognition memory, yet were impaired on closely related tests of object recency memory. The present findings point to specific mechanisms for temporal order information (recency) that are dependent on the hippocampus and do not involve object recognition memory. The object recognition tests measured rats exploring simultaneously presented objects, one novel and the other familiar. Task difficulty was varied by altering the retention delays after presentation of the familiar object, so creating a forgetting curve. Hippocampal lesions had no apparent effect, despite using an apparatus (bow-tie maze) where it was possible to give lists of objects that might be expected to increase stimulus interference. In contrast, the same hippocampal lesions impaired the normal preference for an older (less recent) familiar object over a more recent, familiar object. A correlation was found between the loss of septal hippocampal tissue and this impairment in recency memory. The dissociation in the present study between recognition memory (spared) and recency memory (impaired) was unusually compelling, because it was possible to test the same objects for both forms of memory within the same session and within the same apparatus. The object recency deficit is of additional interest as it provides an example of a nonspatial memory deficit following hippocampal damage. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  18. Speckle-learning-based object recognition through scattering media.

    PubMed

    Ando, Takamasa; Horisaki, Ryoichi; Tanida, Jun

    2015-12-28

    We experimentally demonstrated object recognition through scattering media based on direct machine learning of a number of speckle intensity images. In the experiments, speckle intensity images of amplitude or phase objects on a spatial light modulator between scattering plates were captured by a camera. We used the support vector machine for binary classification of the captured speckle intensity images of face and non-face data. The experimental results showed that speckles are sufficient for machine learning.
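
    A minimal sketch of the classification step described above, assuming the captured speckle intensity images are already loaded into a NumPy array (the variables `speckle` and `labels` are hypothetical placeholders); flattened speckle images are fed to a support vector machine for binary face/non-face classification. Preprocessing and hyperparameters are illustrative choices.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        def train_speckle_classifier(speckle, labels):
            # speckle: (n_images, H, W) captured speckle intensity images;
            # labels: (n_images,) array, 1 = face object, 0 = non-face object.
            X = speckle.reshape(len(speckle), -1).astype(np.float64)  # flatten each image
            X_tr, X_te, y_tr, y_te = train_test_split(X, labels,
                                                      test_size=0.25, random_state=0)
            clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
            clf.fit(X_tr, y_tr)
            print("held-out accuracy:", clf.score(X_te, y_te))
            return clf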

  19. Nicotine Administration Attenuates Methamphetamine-Induced Novel Object Recognition Deficits

    PubMed Central

    Vieira-Brock, Paula L.; McFadden, Lisa M.; Nielsen, Shannon M.; Smith, Misty D.; Hanson, Glen R.

    2015-01-01

    Background: Previous studies have demonstrated that methamphetamine abuse leads to memory deficits and these are associated with relapse. Furthermore, extensive evidence indicates that nicotine prevents and/or improves memory deficits in different models of cognitive dysfunction and these nicotinic effects might be mediated by hippocampal or cortical nicotinic acetylcholine receptors. The present study investigated whether nicotine attenuates methamphetamine-induced novel object recognition deficits in rats and explored potential underlying mechanisms. Methods: Adolescent or adult male Sprague-Dawley rats received either nicotine water (10–75 μg/mL) or tap water for several weeks. Methamphetamine (4×7.5mg/kg/injection) or saline was administered either before or after chronic nicotine exposure. Novel object recognition was evaluated 6 days after methamphetamine or saline. Serotonin transporter function and density and α4β2 nicotinic acetylcholine receptor density were assessed on the following day. Results: Chronic nicotine intake via drinking water beginning during either adolescence or adulthood attenuated the novel object recognition deficits caused by a high-dose methamphetamine administration. Similarly, nicotine attenuated methamphetamine-induced deficits in novel object recognition when administered after methamphetamine treatment. However, nicotine did not attenuate the serotonergic deficits caused by methamphetamine in adults. Conversely, nicotine attenuated methamphetamine-induced deficits in α4β2 nicotinic acetylcholine receptor density in the hippocampal CA1 region. Furthermore, nicotine increased α4β2 nicotinic acetylcholine receptor density in the hippocampal CA3, dentate gyrus and perirhinal cortex in both saline- and methamphetamine-treated rats. Conclusions: Overall, these findings suggest that nicotine-induced increases in α4β2 nicotinic acetylcholine receptors in the hippocampus and perirhinal cortex might be one mechanism by which

  20. Nicotine Administration Attenuates Methamphetamine-Induced Novel Object Recognition Deficits.

    PubMed

    Vieira-Brock, Paula L; McFadden, Lisa M; Nielsen, Shannon M; Smith, Misty D; Hanson, Glen R; Fleckenstein, Annette E

    2015-07-11

    Previous studies have demonstrated that methamphetamine abuse leads to memory deficits and these are associated with relapse. Furthermore, extensive evidence indicates that nicotine prevents and/or improves memory deficits in different models of cognitive dysfunction and these nicotinic effects might be mediated by hippocampal or cortical nicotinic acetylcholine receptors. The present study investigated whether nicotine attenuates methamphetamine-induced novel object recognition deficits in rats and explored potential underlying mechanisms. Adolescent or adult male Sprague-Dawley rats received either nicotine water (10-75 μg/mL) or tap water for several weeks. Methamphetamine (4 × 7.5mg/kg/injection) or saline was administered either before or after chronic nicotine exposure. Novel object recognition was evaluated 6 days after methamphetamine or saline. Serotonin transporter function and density and α4β2 nicotinic acetylcholine receptor density were assessed on the following day. Chronic nicotine intake via drinking water beginning during either adolescence or adulthood attenuated the novel object recognition deficits caused by a high-dose methamphetamine administration. Similarly, nicotine attenuated methamphetamine-induced deficits in novel object recognition when administered after methamphetamine treatment. However, nicotine did not attenuate the serotonergic deficits caused by methamphetamine in adults. Conversely, nicotine attenuated methamphetamine-induced deficits in α4β2 nicotinic acetylcholine receptor density in the hippocampal CA1 region. Furthermore, nicotine increased α4β2 nicotinic acetylcholine receptor density in the hippocampal CA3, dentate gyrus and perirhinal cortex in both saline- and methamphetamine-treated rats. Overall, these findings suggest that nicotine-induced increases in α4β2 nicotinic acetylcholine receptors in the hippocampus and perirhinal cortex might be one mechanism by which novel object recognition deficits are

  1. Priming for novel object associations: Neural differences from object item priming and equivalent forms of recognition.

    PubMed

    Gomes, Carlos Alexandre; Figueiredo, Patrícia; Mayes, Andrew

    2016-04-01

    The neural substrates of associative and item priming and recognition were investigated in a functional magnetic resonance imaging study over two separate sessions. In the priming session, participants decided which object of a pair was bigger during both study and test phases. In the recognition session, participants saw different object pairs and performed the same size-judgement task followed by an associative recognition memory task. Associative priming was accompanied by reduced activity in the right middle occipital gyrus as well as in bilateral hippocampus. Object item priming was accompanied by reduced activity in extensive priming-related areas in the bilateral occipitotemporofrontal cortex, as well as in the perirhinal cortex, but not in the hippocampus. Associative recognition was characterized by activity increases in regions linked to recollection, such as the hippocampus, posterior cingulate cortex, anterior medial frontal gyrus and posterior parahippocampal cortex. Item object priming and recognition recruited broadly overlapping regions (e.g., bilateral middle occipital and prefrontal cortices, left fusiform gyrus), even though the BOLD response was in opposite directions. These regions along with the precuneus, where both item priming and recognition were accompanied by activation, have been found to respond to object familiarity. The minimal structural overlap between object associative priming and recollection-based associative recognition suggests that they depend on largely different stimulus-related information and that the different directions of the effects indicate distinct retrieval mechanisms. In contrast, item priming and familiarity-based recognition seemed mainly based on common memory information, although the extent of common processing between priming and familiarity remains unclear. Further implications of these findings are discussed.

  2. Invariant visual object recognition and shape processing in rats

    PubMed Central

    Zoccolan, Davide

    2015-01-01

    Invariant visual object recognition is the ability to recognize visual objects despite the vastly different images that each object can project onto the retina during natural vision, depending on its position and size within the visual field, its orientation relative to the viewer, etc. Achieving invariant recognition represents such a formidable computational challenge that it is often assumed to be a unique hallmark of primate vision. Historically, this has limited the invasive investigation of its neuronal underpinnings to monkey studies, in spite of the narrow range of experimental approaches that these animal models allow. Meanwhile, rodents have been largely neglected as models of object vision, because of the widespread belief that they are incapable of advanced visual processing. However, the powerful array of experimental tools that have been developed to dissect neuronal circuits in rodents has made these species very attractive to vision scientists too, promoting a new tide of studies that have started to systematically explore visual functions in rats and mice. Rats, in particular, have been the subjects of several behavioral studies, aimed at assessing how advanced object recognition and shape processing are in this species. Here, I review these recent investigations, as well as earlier studies of rat pattern vision, to provide an historical overview and a critical summary of the status of the knowledge about rat object vision. The picture emerging from this survey is very encouraging with regard to the possibility of using rats as complementary models to monkeys in the study of higher-level vision. PMID:25561421

  3. Object Manipulation Facilitates Kind-Based Object Individuation of Shape-Similar Objects

    ERIC Educational Resources Information Center

    Kingo, Osman S.; Krojgaard, Peter

    2011-01-01

    Five experiments investigated the importance of shape and object manipulation when 12-month-olds were given the task of individuating objects representing exemplars of kinds in an event-mapping design. In Experiments 1 and 2, results of the study from Xu, Carey, and Quint (2004, Experiment 4) were partially replicated, showing that infants were…

  5. Object Recognition using Feature- and Color-Based Methods

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Duong, Vu; Stubberud, Allen

    2008-01-01

    An improved adaptive method of processing image data in an artificial neural network has been developed to enable automated, real-time recognition of possibly moving objects under changing (including suddenly changing) conditions of illumination and perspective. The method involves a combination of two prior object-recognition methods, one based on adaptive detection of shape features and one based on adaptive color segmentation, to enable recognition in situations in which either prior method by itself may be inadequate. The chosen prior feature-based method is known as adaptive principal-component analysis (APCA); the chosen prior color-based method is known as adaptive color segmentation (ACOSE). These methods are made to interact with each other in a closed-loop system to obtain an optimal solution of the object-recognition problem in a dynamic environment. One of the results of the interaction is to increase, beyond what would otherwise be possible, the accuracy of the determination of a region of interest (containing an object that one seeks to recognize) within an image. Another result is to provide a minimized adaptive step that can be used to update the results obtained by the two component methods when changes of color and apparent shape occur. The net effect is to enable the neural network to update its recognition output and improve its recognition capability via an adaptive learning sequence. In principle, the improved method could readily be implemented in integrated circuitry to make a compact, low-power, real-time object-recognition system. It has been proposed to demonstrate the feasibility of such a system by integrating a 256-by-256 active-pixel sensor with APCA, ACOSE, and neural processing circuitry on a single chip. It has been estimated that such a system on a chip would have a volume no larger than a few cubic centimeters, could operate at a rate as high as 1,000 frames per second, and would consume on the order of milliwatts of power.
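
    The record above describes the APCA/ACOSE combination only at a high level; the original closed-loop implementation is not given. As a loose, illustrative sketch of the general idea, the Python fragment below pairs a crude color-segmentation stage (standing in for ACOSE) with PCA-based shape features (standing in for APCA). The function names, thresholds, and the fixed (non-adaptive) PCA basis are assumptions, and the adaptive feedback between the two stages is only indicated in a comment.

      import numpy as np

      def color_segment(image_rgb, target_rgb, tol=40.0):
          """Crude color segmentation: pixels within `tol` of a target color.
          A stand-in for the adaptive color-segmentation (ACOSE) stage."""
          dist = np.linalg.norm(image_rgb.astype(float) - np.asarray(target_rgb, float), axis=-1)
          return dist < tol

      def roi_from_mask(mask):
          """Bounding box of the segmented region -> region of interest."""
          ys, xs = np.nonzero(mask)
          return None if xs.size == 0 else (xs.min(), xs.max(), ys.min(), ys.max())

      def pca_features(patches, n_components=8):
          """Shape features as projections onto the leading principal components of
          vectorized patches ((N, h, w) array); a fixed-basis stand-in for APCA."""
          X = patches.reshape(len(patches), -1).astype(float)
          X -= X.mean(axis=0)
          _, _, Vt = np.linalg.svd(X, full_matrices=False)
          return X @ Vt[:n_components].T

      # Closed-loop idea: the color stage proposes an ROI, the feature stage
      # classifies it, and a weak classification score triggers re-segmentation
      # with an updated target color (the adaptive feedback step, omitted here).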

  6. Semantic interference from visual object recognition on visual imagery.

    PubMed

    Lloyd-Jones, Toby J; Vernon, David

    2003-07-01

    A new technique for examining the interaction between visual object recognition and visual imagery is reported. The "image-picture interference" paradigm requires participants to generate and make a response to a mental image of a previously memorized object, while ignoring a simultaneously presented picture distractor. Responses in 2 imagery tasks (making left-right higher spatial judgments and making taller-wider judgments) were longer when the simultaneous picture distractor was categorically related to the target, relative to unrelated and neutral target-distractor combinations. In contrast, performance was not influenced in this way when the distractor was a related word, when a semantic categorization decision was made to the target, or when distractor and target were visually but not categorically related to one another. The authors discuss these findings in terms of the semantic representations shared by visual object recognition and visual imagery that mediate performance.

  7. Comparison of Object Recognition Behavior in Human and Monkey.

    PubMed

    Rajalingham, Rishi; Schmidt, Kailyn; DiCarlo, James J

    2015-09-02

    Although the rhesus monkey is used widely as an animal model of human visual processing, it is not known whether invariant visual object recognition behavior is quantitatively comparable across monkeys and humans. To address this question, we systematically compared the core object recognition behavior of two monkeys with that of human subjects. To test true object recognition behavior (rather than image matching), we generated several thousand naturalistic synthetic images of 24 basic-level objects with high variation in viewing parameters and image background. Monkeys were trained to perform binary object recognition tasks on a match-to-sample paradigm. Data from 605 human subjects performing the same tasks on Mechanical Turk were aggregated to characterize "pooled human" object recognition behavior, as well as 33 separate Mechanical Turk subjects to characterize individual human subject behavior. Our results show that monkeys learn each new object in a few days, after which they not only match mean human performance but show a pattern of object confusion that is highly correlated with pooled human confusion patterns and is statistically indistinguishable from individual human subjects. Importantly, this shared human and monkey pattern of 3D object confusion is not shared with low-level visual representations (pixels, V1+; models of the retina and primary visual cortex) but is shared with a state-of-the-art computer vision feature representation. Together, these results are consistent with the hypothesis that rhesus monkeys and humans share a common neural shape representation that directly supports object perception. To date, several mammalian species have shown promise as animal models for studying the neural mechanisms underlying high-level visual processing in humans. In light of this diversity, making tight comparisons between nonhuman and human primates is particularly critical in determining the best use of nonhuman primates to further the goal of the

  8. Comparison of Object Recognition Behavior in Human and Monkey

    PubMed Central

    Rajalingham, Rishi; Schmidt, Kailyn

    2015-01-01

    Although the rhesus monkey is used widely as an animal model of human visual processing, it is not known whether invariant visual object recognition behavior is quantitatively comparable across monkeys and humans. To address this question, we systematically compared the core object recognition behavior of two monkeys with that of human subjects. To test true object recognition behavior (rather than image matching), we generated several thousand naturalistic synthetic images of 24 basic-level objects with high variation in viewing parameters and image background. Monkeys were trained to perform binary object recognition tasks on a match-to-sample paradigm. Data from 605 human subjects performing the same tasks on Mechanical Turk were aggregated to characterize “pooled human” object recognition behavior, as well as 33 separate Mechanical Turk subjects to characterize individual human subject behavior. Our results show that monkeys learn each new object in a few days, after which they not only match mean human performance but show a pattern of object confusion that is highly correlated with pooled human confusion patterns and is statistically indistinguishable from individual human subjects. Importantly, this shared human and monkey pattern of 3D object confusion is not shared with low-level visual representations (pixels, V1+; models of the retina and primary visual cortex) but is shared with a state-of-the-art computer vision feature representation. Together, these results are consistent with the hypothesis that rhesus monkeys and humans share a common neural shape representation that directly supports object perception. SIGNIFICANCE STATEMENT To date, several mammalian species have shown promise as animal models for studying the neural mechanisms underlying high-level visual processing in humans. In light of this diversity, making tight comparisons between nonhuman and human primates is particularly critical in determining the best use of nonhuman primates to

  9. Object discrimination through active electrolocation: Shape recognition and the influence of electrical noise.

    PubMed

    Schumacher, Sarah; Burt de Perera, Theresa; von der Emde, Gerhard

    2016-12-12

    The weakly electric fish Gnathonemus petersii can recognise objects using active electrolocation. Here, we tested two aspects of object recognition; first whether shape recognition might be influenced by movement of the fish, and second whether object discrimination is affected by the presence of electrical noise from conspecifics. (i) Unlike other object features, such as size or volume, no parameter within a single electrical image has been found that encodes object shape. We investigated whether shape recognition might be facilitated by movement-induced modulations (MIM) of the set of electrical images that are created as a fish swims past an object. Fish were trained to discriminate between pairs of objects that either created similar or dissimilar levels of MIM of the electrical images. As predicted, the fish were able to discriminate between objects up to a longer distance if there was a large difference in MIM between the objects than if there was a small difference. This supports an involvement of MIMs in shape recognition but the use of other cues cannot be excluded. (ii) Electrical noise might impair object recognition if the noise signals overlap with the EODs of an electrolocating fish. To avoid jamming, we predicted that fish might employ pulsing strategies to prevent overlaps. To investigate the influence of electrical noise on discrimination performance, two fish were tested either in the presence of a conspecific or of playback signals and the electric signals were recorded during the experiments. The fish were surprisingly immune to jamming by conspecifics: While the discrimination performance of one fish dropped to chance level when more than 22% of its EODs overlapped with the noise signals, the performance of the other fish was not impaired even when all its EODs overlapped. Neither of the fish changed their pulsing behaviour, suggesting that they did not use any kind of jamming avoidance strategy.

  10. Distortion-invariant kernel correlation filters for general object recognition

    NASA Astrophysics Data System (ADS)

    Patnaik, Rohit

    General object recognition is a specific application of pattern recognition, in which an object in a background must be classified in the presence of several distortions such as aspect-view differences, scale differences, and depression-angle differences. Since the object can be present at different locations in the test input, a classification algorithm must be applied to all possible object locations in the test input. We emphasize one type of classifier, the distortion-invariant filter (DIF), for fast object recognition, since it can be applied to all possible object locations using a fast Fourier transform (FFT) correlation. We refer to distortion-invariant correlation filters simply as DIFs. DIFs all use a combination of training-set images that are representative of the expected distortions in the test set. In this dissertation, we consider a new approach that combines DIFs and the higher-order kernel technique; these form what we refer to as "kernel DIFs." Our objective is to develop higher-order classifiers that can be applied (efficiently and fast) to all possible locations of the object in the test input. All prior kernel DIFs ignored the issue of efficient filter shifts. We detail which kernel DIF formulations are computationally realistic to use and why. We discuss the proper way to synthesize DIFs and kernel DIFs for the wide area search case (i.e., when a small filter must be applied to a much larger test input) and the preferable way to perform wide area search with these filters; this is new. We use computer-aided design (CAD) simulated infrared (IR) object imagery and real IR clutter imagery to obtain test results. Our test results on IR data show that a particular kernel DIF, the kernel SDF filter and its new "preprocessed" version, is promising, in terms of both test-set performance and on-line calculations, and is emphasized in this dissertation. We examine the recognition of object variants. We also quantify the effect of different constant
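
    The kernel DIF formulations themselves are not spelled out in the abstract. As a hedged illustration of the linear case that the kernel filters build on, the sketch below synthesizes a classic equal-correlation-peak SDF filter from a set of training views (assumed linearly independent) and applies it to a larger scene with an FFT correlation; it is a generic textbook construction, not the preprocessed kernel SDF filter developed in the dissertation.

      import numpy as np

      def sdf_filter(train_imgs, peaks=None):
          """Equal-correlation-peak SDF filter from a distortion set of training
          images. train_imgs: (N, h, w) array; peaks: desired correlation per image."""
          N, h, w = train_imgs.shape
          X = train_imgs.reshape(N, -1).astype(float).T      # one column per training image
          u = np.ones(N) if peaks is None else np.asarray(peaks, float)
          # h = X (X^T X)^{-1} u : the linear combination of training images whose
          # inner product with each training image equals the prescribed peak value.
          coeffs = np.linalg.solve(X.T @ X, u)
          return (X @ coeffs).reshape(h, w)

      def correlate_fft(scene, filt):
          """Cross-correlate the filter with every (circular) shift of the scene."""
          H, W = scene.shape
          F_scene = np.fft.fft2(scene)
          F_filt = np.fft.fft2(filt, s=(H, W))               # zero-pad filter to scene size
          return np.real(np.fft.ifft2(F_scene * np.conj(F_filt)))

      # Usage idea: build the filter from shifted/rotated object views, then take
      # np.argmax(correlate_fft(test_scene, filt)) as the candidate object location.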

  11. Object Recognition in Mental Representations: Directions for Exploring Diagnostic Features through Visual Mental Imagery.

    PubMed

    Roldan, Stephanie M

    2017-01-01

    One of the fundamental goals of object recognition research is to understand how a cognitive representation produced from the output of filtered and transformed sensory information facilitates efficient viewer behavior. Given that mental imagery strongly resembles perceptual processes in both cortical regions and subjective visual qualities, it is reasonable to question whether mental imagery facilitates cognition in a manner similar to that of perceptual viewing: via the detection and recognition of distinguishing features. Categorizing the feature content of mental imagery holds potential as a reverse pathway by which to identify the components of a visual stimulus which are most critical for the creation and retrieval of a visual representation. This review will examine the likelihood that the information represented in visual mental imagery reflects distinctive object features thought to facilitate efficient object categorization and recognition during perceptual viewing. If it is the case that these representational features resemble their sensory counterparts in both spatial and semantic qualities, they may well be accessible through mental imagery as evaluated through current investigative techniques. In this review, methods applied to mental imagery research and their findings are reviewed and evaluated for their efficiency in accessing internal representations, and implications for identifying diagnostic features are discussed. An argument is made for the benefits of combining mental imagery assessment methods with diagnostic feature research to advance the understanding of visual perceptive processes, with suggestions for avenues of future investigation.

  12. Object Recognition in Mental Representations: Directions for Exploring Diagnostic Features through Visual Mental Imagery

    PubMed Central

    Roldan, Stephanie M.

    2017-01-01

    One of the fundamental goals of object recognition research is to understand how a cognitive representation produced from the output of filtered and transformed sensory information facilitates efficient viewer behavior. Given that mental imagery strongly resembles perceptual processes in both cortical regions and subjective visual qualities, it is reasonable to question whether mental imagery facilitates cognition in a manner similar to that of perceptual viewing: via the detection and recognition of distinguishing features. Categorizing the feature content of mental imagery holds potential as a reverse pathway by which to identify the components of a visual stimulus which are most critical for the creation and retrieval of a visual representation. This review will examine the likelihood that the information represented in visual mental imagery reflects distinctive object features thought to facilitate efficient object categorization and recognition during perceptual viewing. If it is the case that these representational features resemble their sensory counterparts in both spatial and semantic qualities, they may well be accessible through mental imagery as evaluated through current investigative techniques. In this review, methods applied to mental imagery research and their findings are reviewed and evaluated for their efficiency in accessing internal representations, and implications for identifying diagnostic features are discussed. An argument is made for the benefits of combining mental imagery assessment methods with diagnostic feature research to advance the understanding of visual perceptive processes, with suggestions for avenues of future investigation. PMID:28588538

  13. Shape Recognition Of Complex Objects By Syntactical Primitives

    NASA Astrophysics Data System (ADS)

    Lenger, D.; Cipovic, H.

    1985-04-01

    The paper describes a pattern recognition method based on syntactic image analysis applicable in autonomous systems of robot vision for the purpose of pattern detection or classification. The discrimination of syntactic elements is realized by polygonal approximation of contours employing a very fast algorithm based upon coding, local pixel logic and methods of choice instead of numerical methods. Semantic information is derived from attributes calculated from the filtered shape vector. No a priori information on image objects is required, and the choice of starting point is determined by finding the significant directions on the shape vector. The radius of the recognition sphere is the minimum Euclidean distance, i.e., the maximum similarity between the unknown model and each individual grammar created in the learning phase. By keeping information on the derivations of individual syntactic elements, an alternative parsing-based route to recognition is retained. The analysis is very flexible, and permits the recognition of highly distorted or even partially visible objects. The output from the syntactic analyzer is a measure of irregularity, and the method is thus applicable in any application where sample deformation is being examined.
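
    The fast coding-based approximation algorithm itself is not described in the abstract. As a stand-in, the sketch below uses the standard Ramer-Douglas-Peucker procedure to reduce a contour to a polygon and then derives a simple attribute (turning angles) of the kind that could serve as syntactic primitives; it is illustrative only and does not reproduce the paper's method.

      import numpy as np

      def rdp(points, eps):
          """Ramer-Douglas-Peucker polygonal approximation of an open contour.
          points: (N, 2) array of coordinates; eps: perpendicular-distance tolerance."""
          points = np.asarray(points, float)
          if len(points) < 3:
              return points
          start, end = points[0], points[-1]
          seg = end - start
          seg_len = np.hypot(seg[0], seg[1])
          diff = points - start
          if seg_len == 0:
              dists = np.linalg.norm(diff, axis=1)
          else:
              # Perpendicular distance of every point to the chord start-end.
              dists = np.abs(seg[0] * diff[:, 1] - seg[1] * diff[:, 0]) / seg_len
          idx = int(np.argmax(dists))
          if dists[idx] > eps:
              left = rdp(points[: idx + 1], eps)
              right = rdp(points[idx:], eps)
              return np.vstack([left[:-1], right])
          return np.vstack([start, end])

      def turning_angles(poly):
          """Turning angle (degrees) at each interior vertex of the polygon,
          one candidate attribute for syntactic shape primitives."""
          v1, v2 = poly[1:-1] - poly[:-2], poly[2:] - poly[1:-1]
          cross = v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0]
          dot = (v1 * v2).sum(axis=1)
          return np.degrees(np.arctan2(cross, dot))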

  14. Trajectory Recognition as the Basis for Object Individuation: A Functional Model of Object File Instantiation and Object-Token Encoding

    PubMed Central

    Fields, Chris

    2011-01-01

    The perception of persisting visual objects is mediated by transient intermediate representations, object files, that are instantiated in response to some, but not all, visual trajectories. The standard object file concept does not, however, provide a mechanism sufficient to account for all experimental data on visual object persistence, object tracking, and the ability to perceive spatially disconnected stimuli as continuously existing objects. Based on relevant anatomical, functional, and developmental data, a functional model is constructed that bases visual object individuation on the recognition of temporal sequences of apparent center-of-mass positions that are specifically identified as trajectories by dedicated “trajectory recognition networks” downstream of the medial–temporal motion-detection area. This model is shown to account for a wide range of data, and to generate a variety of testable predictions. Individual differences in the recognition, abstraction, and encoding of trajectory information are expected to generate distinct object persistence judgments and object recognition abilities. Dominance of trajectory information over feature information in stored object tokens during early infancy, in particular, is expected to disrupt the ability to re-identify human and other individuals across perceptual episodes, and lead to developmental outcomes with characteristics of autism spectrum disorders. PMID:21716599

  15. Communicative Signals Promote Object Recognition Memory and Modulate the Right Posterior STS.

    PubMed

    Redcay, Elizabeth; Ludlum, Ruth S; Velnoskey, Kayla R; Kanwal, Simren

    2016-01-01

    Detection of communicative signals is thought to facilitate knowledge acquisition early in life, but less is known about the role these signals play in adult learning or about the brain systems supporting sensitivity to communicative intent. The current study examined how ostensive gaze cues and communicative actions affect adult recognition memory and modulate neural activity as measured by fMRI. For both the behavioral and fMRI experiments, participants viewed a series of videos of an actress acting on one of two objects in front of her. Communicative context in the videos was manipulated in a 2 × 2 design in which the actress either had direct gaze (Gaze) or wore a visor (NoGaze) and either pointed at (Point) or reached for (Reach) one of the objects (target) in front of her. Participants then completed a recognition memory task with old (target and nontarget) objects and novel objects. Recognition memory for target objects in the Gaze conditions was greater than NoGaze, but no effects of gesture type were seen. Similarly, the fMRI video-viewing task revealed a significant effect of Gaze within right posterior STS (pSTS), but no significant effects of Gesture. Furthermore, pSTS sensitivity to Gaze conditions was related to greater memory for objects viewed in Gaze, as compared with NoGaze, conditions. Taken together, these results demonstrate that the ostensive, communicative signal of direct gaze preceding an object-directed action enhances recognition memory for attended items and modulates the pSTS response to object-directed actions. Thus, establishment of a communicative context through ostensive signals remains an important component of learning and memory into adulthood, and the pSTS may play a role in facilitating this type of social learning.

  16. An Approach to Object Recognition: Aligning Pictorial Descriptions.

    DTIC Science & Technology

    1986-12-01

    A.I. Memo No. 931, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, December 1986: "An Approach to Object Recognition: Aligning Pictorial Descriptions," by Shimon Ullman.

  17. Neuronal substrates characterizing two stages in visual object recognition.

    PubMed

    Taminato, Tomoya; Miura, Naoki; Sugiura, Motoaki; Kawashima, Ryuta

    2014-12-01

    Visual object recognition is classically believed to involve two stages: a perception stage in which perceptual information is integrated, and a memory stage in which perceptual information is matched with an object's representation. The transition from the perception to the memory stage can be slowed to allow for neuroanatomical segregation using a degraded visual stimuli (DVS) task in which images are first presented at low spatial resolution and then gradually sharpened. In this functional magnetic resonance imaging study, we characterized these two stages using a DVS task based on the classic model. To separate periods that are assumed to dominate the perception, memory, and post-recognition stages, subjects responded once when they could guess the identity of the object in the image and a second time when they were certain of the identity. Activation of the right medial occipitotemporal region and the posterior part of the rostral medial frontal cortex was found to be characteristic of the perception and memory stages, respectively. Although the known role of the former region in perceptual integration was consistent with the classic model, a likely role of the latter region in monitoring for confirmation of recognition suggests the advantage of recently proposed interactive models.
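
    The exact degradation procedure is not given in the abstract; as an illustrative assumption, the sketch below generates a degraded-then-gradually-sharpened stimulus sequence by applying progressively weaker Gaussian low-pass filtering, which is one simple way to realize a DVS-style presentation.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def dvs_sequence(image, n_steps=8, sigma_max=16.0):
          """Degraded-visual-stimuli sequence: start heavily blurred (low spatial
          resolution) and sharpen gradually toward the original image."""
          image = np.asarray(image, float)
          sigmas = np.linspace(sigma_max, 0.0, n_steps)
          return [gaussian_filter(image, sigma=s) if s > 0 else image.copy()
                  for s in sigmas]

      # In a run, the frames would be shown in order until the subject first
      # guesses the object's identity and, later, reports being certain of it.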

  18. Invariant visual object recognition and shape processing in rats.

    PubMed

    Zoccolan, Davide

    2015-05-15

    Invariant visual object recognition is the ability to recognize visual objects despite the vastly different images that each object can project onto the retina during natural vision, depending on its position and size within the visual field, its orientation relative to the viewer, etc. Achieving invariant recognition represents such a formidable computational challenge that it is often assumed to be a unique hallmark of primate vision. Historically, this has limited the invasive investigation of its neuronal underpinnings to monkey studies, in spite of the narrow range of experimental approaches that these animal models allow. Meanwhile, rodents have been largely neglected as models of object vision, because of the widespread belief that they are incapable of advanced visual processing. However, the powerful array of experimental tools that have been developed to dissect neuronal circuits in rodents has made these species very attractive to vision scientists too, promoting a new tide of studies that have started to systematically explore visual functions in rats and mice. Rats, in particular, have been the subjects of several behavioral studies, aimed at assessing how advanced object recognition and shape processing are in this species. Here, I review these recent investigations, as well as earlier studies of rat pattern vision, to provide an historical overview and a critical summary of the status of the knowledge about rat object vision. The picture emerging from this survey is very encouraging with regard to the possibility of using rats as complementary models to monkeys in the study of higher-level vision. Copyright © 2015 The Author. Published by Elsevier B.V. All rights reserved.

  19. Object Recognition and Object Segregation in Infancy: Historical Perspective, Theoretical Significance, "Kinds" of Knowledge, and Relation to Object Categorization.

    ERIC Educational Resources Information Center

    Quinn, Paul C.; Bhatt, Ramesh S.

    2001-01-01

    Reflects on Needham's findings on infants' object recognition and segregation. Examines the role for perceptual bias in explaining infant performance, places Needham's studies in historical perspective, and assesses their theoretical significance. Discusses the merits of positing different kinds of information sources for object segregation, and…

  20. How does the brain solve visual object recognition?

    PubMed Central

    Zoccolan, Davide; Rust, Nicole C.

    2012-01-01

    Mounting evidence suggests that “core object recognition,” the ability to rapidly recognize objects despite substantial appearance variation, is solved in the brain via a cascade of reflexive, largely feedforward computations that culminate in a powerful neuronal representation in the inferior temporal cortex. However, the algorithm that produces this solution remains little-understood. Here we review evidence ranging from individual neurons, to neuronal populations, to behavior, to computational models. We propose that understanding this algorithm will require using neuronal and psychophysical data to sift through many computational models, each based on building blocks of small, canonical sub-networks with a common functional goal. PMID:22325196

  1. How does the brain solve visual object recognition?

    PubMed

    DiCarlo, James J; Zoccolan, Davide; Rust, Nicole C

    2012-02-09

    Mounting evidence suggests that 'core object recognition,' the ability to rapidly recognize objects despite substantial appearance variation, is solved in the brain via a cascade of reflexive, largely feedforward computations that culminate in a powerful neuronal representation in the inferior temporal cortex. However, the algorithm that produces this solution remains poorly understood. Here we review evidence ranging from individual neurons and neuronal populations to behavior and computational models. We propose that understanding this algorithm will require using neuronal and psychophysical data to sift through many computational models, each based on building blocks of small, canonical subnetworks with a common functional goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Methylphenidate restores novel object recognition in DARPP-32 knockout mice.

    PubMed

    Heyser, Charles J; McNaughton, Caitlyn H; Vishnevetsky, Donna; Fienberg, Allen A

    2013-09-15

    Previously, we have shown that Dopamine- and cAMP-regulated phosphoprotein of 32kDa (DARPP-32) knockout mice required significantly more trials to reach criterion than wild-type mice in an operant reversal-learning task. The present study was conducted to examine adult male and female DARPP-32 knockout mice and wild-type controls in a novel object recognition test. Wild-type and knockout mice exhibited comparable behavior during the initial exploration trials. As expected, wild-type mice exhibited preferential exploration of the novel object during the substitution test, demonstrating recognition memory. In contrast, knockout mice did not show preferential exploration of the novel object, instead exhibiting an increase in exploration of all objects during the test trial. Given that the removal of DARPP-32 is an intracellular manipulation, it seemed possible to pharmacologically restore some cellular activity and behavior by stimulating dopamine receptors. Therefore, a second experiment was conducted examining the effect of methylphenidate. The results show that methylphenidate increased horizontal activity in both wild-type and knockout mice, though this increase was blunted in knockout mice. Pretreatment with methylphenidate significantly impaired novel object recognition in wild-type mice. In contrast, pretreatment with methylphenidate restored the behavior of DARPP-32 knockout mice to that observed in wild-type mice given saline. These results provide additional evidence for a functional role of DARPP-32 in the mediation of processes underlying learning and memory. These results also indicate that the behavioral deficits in DARPP-32 knockout mice may be restored by the administration of methylphenidate.

  3. Selective visual attention in object recognition and scene analysis

    NASA Astrophysics Data System (ADS)

    Gonzaga, Adilson; de Almeida Neves, Evelina M.; Frere, Annie F.

    1998-10-01

    An important feature of the human visual system is the ability of selective visual attention. The stimulus that reaches the primate retina is processed in two different cortical pathways: one is specialized for object vision ('What') and the other for spatial vision ('Where'). By this, the visual system is able to recognize objects independently of where they appear in the visual field. There are two major theories to explain human visual attention. According to the Object-Based theory there is a limit on the number of isolated objects that can be perceived simultaneously, and according to the Space-Based theory there is a limit on the spatial areas from which information can be taken up. This paper deals with the Object-Based theory, which holds that analysis of the visual world occurs in two stages. In the pre-attentive stage, the scene is segmented into isolated objects by region-growing techniques. Invariant features (moments) are extracted and used as input to an Artificial Neural Network giving the probable object location ('Where'). In the focal stage, particular objects are analyzed in detail by another neural network that performs the object recognition ('What'). The number of analyzed objects is determined by a top-down process aimed at a consistent scene interpretation. Visual attention thus makes possible the development of more efficient and flexible interfaces between low-level sensory information and high-level processes.
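
    The abstract names invariant moments as the features passed to the location ('Where') network but does not list them. A common choice is the Hu moment set; the sketch below computes the first four Hu invariants of a binary segmented region and is offered only as a plausible example of such features, not as the features used in the paper.

      import numpy as np

      def central_moment(mask, p, q):
          """Central image moment mu_pq of a (non-empty) binary region."""
          ys, xs = np.nonzero(mask)
          xbar, ybar = xs.mean(), ys.mean()
          return np.sum((xs - xbar) ** p * (ys - ybar) ** q)

      def hu_invariants(mask):
          """First four Hu moments of a binary region: invariant to translation,
          scale and rotation; the kind of feature a 'Where' network could use."""
          m00 = float(mask.sum())
          eta = lambda p, q: central_moment(mask, p, q) / m00 ** ((p + q) / 2 + 1)
          n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
          n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
          return np.array([
              n20 + n02,
              (n20 - n02) ** 2 + 4 * n11 ** 2,
              (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
              (n30 + n12) ** 2 + (n21 + n03) ** 2,
          ])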

  4. The role of the dorsal dentate gyrus in object and object-context recognition.

    PubMed

    Dees, Richard L; Kesner, Raymond P

    2013-11-01

    The aim of this study was to determine the role of the dorsal dentate gyrus (dDG) in object recognition memory using a black box and object-context recognition memory using a clear box with available cues that define a spatial context. Based on a 10 min retention interval between the study phase and the test phase, the results indicated that dDG lesioned rats are impaired when compared to controls in the object-context recognition test in the clear box. However, there were no reliable differences between the dDG lesioned rats and the control group for the object recognition test in the black box. Even though the dDG lesioned rats were more active in object exploration, the habituation gradients did not differ. These results suggest that the dentate gyrus lesioned rats are clearly impaired when there is an important contribution of context. Furthermore, based on a 24 h retention interval in the black box the dDG lesioned rats were impaired compared to controls.

  5. Picture-object recognition in the tortoise Chelonoidis carbonaria.

    PubMed

    Wilkinson, Anna; Mueller-Paul, Julia; Huber, Ludwig

    2013-01-01

    To recognize that a picture is a representation of a real-life object is a cognitively demanding task. It requires an organism to mentally represent the concrete object (the picture) and abstract its relation to the item that it represents. This form of representational insight has been shown in a small number of mammal and bird species. However, it has not previously been studied in reptiles. This study examined picture-object recognition in the red-footed tortoise (Chelonoidis carbonaria). In Experiment 1, five red-footed tortoises were trained to distinguish between food and non-food objects using a two-alternative forced choice procedure. After reaching criterion, they were presented with test trials in which the real objects were replaced with color photographs of those objects. There was no difference in performance between training and test trials, suggesting that the tortoises did see some correspondence between the real object and its photographic representation. Experiment 2 examined the nature of this correspondence by presenting the tortoises with a choice between the real food object and a photograph of it. The findings revealed that the tortoises confused the photograph with the real-life object. This suggests that they process real items and photographic representations of these items in the same way and, in this context, do not exhibit representational insight.

  6. Multiple-View Object Recognition in Smart Camera Networks

    NASA Astrophysics Data System (ADS)

    Yang, Allen Y.; Maji, Subhransu; Christoudias, C. Mario; Darrell, Trevor; Malik, Jitendra; Sastry, S. Shankar

    We study object recognition in low-power, low-bandwidth smart camera networks. The ability to perform robust object recognition is crucial for applications such as visual surveillance to track and identify objects of interest, and overcome visual nuisances such as occlusion and pose variations between multiple camera views. To accommodate limited bandwidth between the cameras and the base-station computer, the method utilizes the available computational power on the smart sensors to locally extract SIFT-type image features to represent individual camera views. We show that between a network of cameras, high-dimensional SIFT histograms exhibit a joint sparse pattern corresponding to a set of shared features in 3-D. Such joint sparse patterns can be explicitly exploited to encode the distributed signal via random projections. At the network station, multiple decoding schemes are studied to simultaneously recover the multiple-view object features based on a distributed compressive sensing theory. The system has been implemented on the Berkeley CITRIC smart camera platform. The efficacy of the algorithm is validated through extensive simulation and experiment.
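
    The abstract sketches the pipeline (sparse SIFT-type histograms at the cameras, random projections for transmission, sparse recovery at the base station) without the specific decoders. The fragment below illustrates the general recipe with a Gaussian random projection and a generic orthogonal-matching-pursuit solver; it is not the distributed decoding scheme or the CITRIC implementation from the paper, and all dimensions are made up for the demo.

      import numpy as np

      def omp(A, y, k):
          """Orthogonal matching pursuit: greedy recovery of a k-sparse x from y = A x.
          A generic stand-in for the decoders studied in the paper."""
          residual, support = y.astype(float).copy(), []
          coeffs = np.zeros(0)
          for _ in range(k):
              idx = int(np.argmax(np.abs(A.T @ residual)))
              if idx not in support:
                  support.append(idx)
              coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coeffs
          x_hat = np.zeros(A.shape[1])
          x_hat[support] = coeffs
          return x_hat

      rng = np.random.default_rng(0)
      d, m, k = 512, 96, 10                      # histogram bins, measurements, sparsity
      x = np.zeros(d)                            # a sparse "SIFT-type" histogram
      x[rng.choice(d, k, replace=False)] = rng.random(k)
      A = rng.standard_normal((m, d)) / np.sqrt(m)
      y = A @ x                                  # camera side: compressed measurement
      x_rec = omp(A, y, k)                       # base-station side: sparse recovery
      print(np.max(np.abs(x - x_rec)))           # typically near machine precision here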

  7. Long-term visual object recognition memory in aged rats.

    PubMed

    Platano, Daniela; Fattoretti, Patrizia; Balietti, Marta; Bertoni-Freddari, Carlo; Aicardi, Giorgio

    2008-04-01

    Aging is associated with memory impairments, but the neural bases of this process need to be clarified. To this end, behavioral protocols for memory testing may be applied to aged animals to compare memory performances with functional and structural characteristics of specific brain regions. Visual object recognition memory can be investigated in the rat using a behavioral task based on its spontaneous preference for exploring novel rather than familiar objects. We found that a behavioral task able to elicit long-term visual object recognition memory in adult Long-Evans rats failed in aged (25-27 months old) Wistar rats. Since no tasks effective in aged rats are reported in the literature, we changed the experimental conditions to improve consolidation processes to assess whether this form of memory can still be maintained for long term at this age: the learning trials were performed in a smaller box, identical to the home cage, and the inter-trial delays were shortened. We observed a reduction in anxiety in this box (as indicated by the lower number of fecal boli produced during habituation), and we developed a learning protocol able to elicit a visual object recognition memory that was maintained after 24 h in these aged rats. When we applied the same protocol to adult rats, we obtained similar results. This experimental approach can be useful to study functional and structural changes associated with age-related memory impairments, and may help to identify new behavioral strategies and molecular targets that can be addressed to ameliorate memory performances during aging.

  8. Category selectivity in human visual cortex: Beyond visual object recognition.

    PubMed

    Peelen, Marius V; Downing, Paul E

    2017-04-02

    Human ventral temporal cortex shows a categorical organization, with regions responding selectively to faces, bodies, tools, scenes, words, and other categories. Why is this? Traditional accounts explain category selectivity as arising within a hierarchical system dedicated to visual object recognition. For example, it has been proposed that category selectivity reflects the clustering of category-associated visual feature representations, or that it reflects category-specific computational algorithms needed to achieve view invariance. This visual object recognition framework has gained renewed interest with the success of deep neural network models trained to "recognize" objects: these hierarchical feed-forward networks show similarities to human visual cortex, including categorical separability. We argue that the object recognition framework is unlikely to fully account for category selectivity in visual cortex. Instead, we consider category selectivity in the context of other functions such as navigation, social cognition, tool use, and reading. Category-selective regions are activated during such tasks even in the absence of visual input and even in individuals with no prior visual experience. Further, they are engaged in close connections with broader domain-specific networks. Considering the diverse functions of these networks, category-selective regions likely encode their preferred stimuli in highly idiosyncratic formats; representations that are useful for navigation, social cognition, or reading are unlikely to be meaningfully similar to each other and to varying degrees may not be entirely visual. The demand for specific types of representations to support category-associated tasks may best account for category selectivity in visual cortex. This broader view invites new experimental and computational approaches. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. The role of surface information in object recognition: studies of a visual form agnosic and normal subjects.

    PubMed

    Humphrey, G K; Goodale, M A; Jakobson, L S; Servos, P

    1994-01-01

    Three experiments were conducted to explore the role of colour and other surface properties in object recognition. The effects of manipulating the availability of surface-based information on object naming in a patient with visual form agnosia and in two age-matched control subjects were examined in experiment 1. The objects were presented under seven different viewing conditions ranging from a full view of the actual objects to line drawings of those same objects. The presence of colour and other surface properties aided the recognition of natural objects such as fruits and vegetables in both the patient and the control subjects. Experiment 2 was focused on four of the critical viewing conditions used in experiment 1 but with a large sample of normal subjects. As in experiment 1, it was found that surface properties, particularly colour, aided the naming of natural objects. The presence of colour did not facilitate the naming of manufactured objects. Experiment 3 was focused on possible ways by which colour could assist in the recognition of natural objects and it was found that object naming was facilitated only if the objects were presented in their usual colour. The results of the experiments show that colour does improve recognition for some types of objects and that the improvement occurs at a high level of visual analysis.

  10. Determinants of novel object and location recognition during development.

    PubMed

    Jablonski, S A; Schreiber, W B; Westbrook, S R; Brennan, L E; Stanton, M E

    2013-11-01

    In the novel object recognition (OR) paradigm, rats are placed in an arena where they encounter two sample objects during a familiarization phase. A few minutes later, they are returned to the same arena and are presented with a familiar object and a novel object. The object location recognition (OL) variant involves the same familiarization procedure but during testing one of the familiar objects is placed in a novel location. Normal adult rats are able to perform both the OR and OL tasks, as indicated by enhanced exploration of the novel vs. the familiar test item. Rats with hippocampal lesions perform the OR but not OL task indicating a role of spatial memory in OL. Recently, these tasks have been used to study the ontogeny of spatial memory but the literature has yielded conflicting results. The current experiments add to this literature by: (1) behaviorally characterizing these paradigms in postnatal day (PD) 21, 26 and 31-day-old rats; (2) examining the role of NMDA systems in OR vs. OL; and (3) investigating the effects of neonatal alcohol exposure on both tasks. Results indicate that normal-developing rats are able to perform OR and OL by PD21, with greater novelty exploration in the OR task at each age. Second, memory acquisition in the OL but not OR task requires NMDA receptor function in juvenile rats [corrected]. Lastly, neonatal alcohol exposure does not disrupt performance in either task. Implications for the ontogeny of incidental spatial learning and its disruption by developmental alcohol exposure are discussed.
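
    Novelty preference in these tasks is usually summarized with a simple discrimination index; the exact scoring used in the study above is not stated, so the snippet below shows only the common (novel - familiar) / (novel + familiar) formulation as an illustration.

      def discrimination_index(t_novel, t_familiar):
          """(novel - familiar) / (novel + familiar) exploration time:
          0 = no preference, positive values = novelty preference."""
          total = t_novel + t_familiar
          return 0.0 if total == 0 else (t_novel - t_familiar) / total

      # Example: 32 s exploring the novel item and 18 s the familiar one -> 0.28
      print(round(discrimination_index(32.0, 18.0), 2))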

  11. Canonical Wnt signaling is necessary for object recognition memory consolidation.

    PubMed

    Fortress, Ashley M; Schram, Sarah L; Tuscher, Jennifer J; Frick, Karyn M

    2013-07-31

    Wnt signaling has emerged as a potent regulator of hippocampal synaptic function, although no evidence yet supports a critical role for Wnt signaling in hippocampal memory. Here, we sought to determine whether canonical β-catenin-dependent Wnt signaling is necessary for hippocampal memory consolidation. Immediately after training in a hippocampal-dependent object recognition task, mice received a dorsal hippocampal (DH) infusion of vehicle or the canonical Wnt antagonist Dickkopf-1 (Dkk-1; 50, 100, or 200 ng/hemisphere). Twenty-four hours later, mice receiving vehicle remembered the familiar object explored during training. However, mice receiving Dkk-1 exhibited no memory for the training object, indicating that object recognition memory consolidation is dependent on canonical Wnt signaling. To determine how Dkk-1 affects canonical Wnt signaling, mice were infused with vehicle or 50 ng/hemisphere Dkk-1 and protein levels of Wnt-related proteins (Dkk-1, GSK3β, β-catenin, TCF1, LEF1, Cyclin D1, c-myc, Wnt7a, Wnt1, and PSD95) were measured in the dorsal hippocampus 5 min or 4 h later. Dkk-1 produced a rapid increase in Dkk-1 protein levels and a decrease in phosphorylated GSK3β levels, followed by a decrease in β-catenin, TCF1, LEF1, Cyclin D1, c-myc, Wnt7a, and PSD95 protein levels 4 h later. These data suggest that alterations in Wnt/GSK3β/β-catenin signaling may underlie the memory impairments induced by Dkk-1. In a subsequent experiment, object training alone rapidly increased DH GSK3β phosphorylation and levels of β-catenin and Cyclin D1. These data suggest that canonical Wnt signaling is regulated by object learning and is necessary for hippocampal memory consolidation.

  12. Are Face and Object Recognition Independent? A Neurocomputational Modeling Exploration.

    PubMed

    Wang, Panqu; Gauthier, Isabel; Cottrell, Garrison

    2016-04-01

    Are face and object recognition abilities independent? Although it is commonly believed that they are, Gauthier et al. [Gauthier, I., McGugin, R. W., Richler, J. J., Herzmann, G., Speegle, M., & VanGulick, A. E. Experience moderates overlap between object and face recognition, suggesting a common ability. Journal of Vision, 14, 7, 2014] recently showed that these abilities become more correlated as experience with nonface categories increases. They argued that there is a single underlying visual ability, v, that is expressed in performance with both face and nonface categories as experience grows. Using the Cambridge Face Memory Test and the Vanderbilt Expertise Test, they showed that the shared variance between Cambridge Face Memory Test and Vanderbilt Expertise Test performance increases monotonically as experience increases. Here, we address why a shared resource across different visual domains does not lead to competition and to an inverse correlation in abilities. We explain this conundrum using our neurocomputational model of face and object processing ["The Model", TM, Cottrell, G. W., & Hsiao, J. H. Neurocomputational models of face processing. In A. J. Calder, G. Rhodes, M. Johnson, & J. Haxby (Eds.), The Oxford handbook of face perception. Oxford, UK: Oxford University Press, 2011]. We model the domain-general ability v as the available computational resources (number of hidden units) in the mapping from input to label and experience as the frequency of individual exemplars in an object category appearing during network training. Our results show that, as in the behavioral data, the correlation between subordinate level face and object recognition accuracy increases as experience grows. We suggest that different domains do not compete for resources because the relevant features are shared between faces and objects. The essential power of experience is to generate a "spreading transform" for faces (separating them in representational space) that

  13. Detailed 3D representations for object recognition and modeling.

    PubMed

    Zia, M Zeeshan; Stark, Michael; Schiele, Bernt; Schindler, Konrad

    2013-11-01

    Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching.
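
    To make the idea of "continuous pose plus 3D wireframe" concrete, the sketch below projects the vertices of a coarse wireframe into the image plane under an arbitrary pose with a pinhole camera model; it does not reproduce the detectors, shape models, or inference machinery of the paper, and the focal length, principal point, and cube geometry are arbitrary placeholders.

      import numpy as np

      def rotation(yaw, pitch, roll):
          """Rotation matrix from Euler angles (radians), Z-Y-X convention."""
          cz, sz = np.cos(yaw), np.sin(yaw)
          cy, sy = np.cos(pitch), np.sin(pitch)
          cx, sx = np.cos(roll), np.sin(roll)
          Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
          Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
          Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
          return Rz @ Ry @ Rx

      def project_wireframe(points_3d, pose, focal=800.0, center=(320.0, 240.0)):
          """Project 3D wireframe vertices (object parts) into the image under a
          continuous pose = (yaw, pitch, roll, tx, ty, tz) with a pinhole camera."""
          R = rotation(*pose[:3])
          t = np.asarray(pose[3:], float)
          cam = points_3d @ R.T + t                  # object -> camera coordinates
          u = focal * cam[:, 0] / cam[:, 2] + center[0]
          v = focal * cam[:, 1] / cam[:, 2] + center[1]
          return np.stack([u, v], axis=1)

      # A unit cube as a toy wireframe, viewed with 30-degree yaw, 4 units ahead:
      cube = np.array([[x, y, z] for x in (-0.5, 0.5)
                       for y in (-0.5, 0.5) for z in (-0.5, 0.5)], float)
      print(project_wireframe(cube, (np.radians(30), 0.0, 0.0, 0.0, 0.0, 4.0)))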

  14. Orientation-invariant object recognition: evidence from repetition blindness.

    PubMed

    Harris, Irina M; Dux, Paul E

    2005-02-01

    The question of whether object recognition is orientation-invariant or orientation-dependent was investigated using a repetition blindness (RB) paradigm. In RB, the second occurrence of a repeated stimulus is less likely to be reported, compared to the occurrence of a different stimulus, if it occurs within a short time of the first presentation. This failure is usually interpreted as a difficulty in assigning two separate episodic tokens to the same visual type. Thus, RB can provide useful information about which representations are treated as the same by the visual system. Two experiments tested whether RB occurs for repeated objects that were either in identical orientations, or differed by 30, 60, 90, or 180 degrees. Significant RB was found for all orientation differences, consistent with the existence of orientation-invariant object representations. However, under some circumstances, RB was reduced or even eliminated when the repeated object was rotated by 180 degrees, suggesting easier individuation of the repeated objects in this case. A third experiment confirmed that the upside-down orientation is processed more easily than other rotated orientations. The results indicate that, although object identity can be determined independently of orientation, orientation plays an important role in establishing distinct episodic representations of a repeated object, thus enabling one to report them as separate events.

  15. Recognition memory for object form and object location: an event-related potential study.

    PubMed

    Mecklinger, A; Meinshausen, R M

    1998-09-01

    In this study, the processes associated with retrieving object forms and object locations from working memory were examined with the use of simultaneously recorded event-related potential (ERP) activity. Subjects memorized object forms and their spatial locations and made either object-based or location-based recognition judgments. In Experiment 1, recognition performance was higher for object locations than for object forms. Old responses evoked more positive-going ERP activity between 0.3 and 1.8 sec poststimulus than did new responses. The topographic distribution of these old/new effects in the P300 time interval was task specific, with object-based recognition judgments being associated with anteriorly focused effects and location-based judgments with posteriorly focused effects. Late old/new effects were dominant at right frontal recordings. Using an interference paradigm, it was shown in Experiment 2 that visual representations were used to rehearse both object forms and object locations in working memory. The results of Experiment 3 indicated that the observed differential topographic distributions of the old/new effects in the P300 time interval are unlikely to reflect differences between easy and difficult recognition judgments. More specific effects were obtained for a subgroup of subjects for which the processing characteristics during location-based judgments presumably were similar to those in Experiment 1. These data, together with those from Experiment 1, indicate that different brain areas are engaged in retrieving object forms and object locations from working memory. Further analyses support the view that retrieval of object forms relies on conceptual semantic representation, whereas retrieving object locations is based on structural representations of spatial information. The effects in the later time intervals may play a functional role in post-retrieval processing, such as recollecting information from the study episode or other processes

  16. Enriched environment effects on remote object recognition memory.

    PubMed

    Melani, Riccardo; Chelini, Gabriele; Cenni, Maria Cristina; Berardi, Nicoletta

    2017-04-12

    Since Ebbinghaus' classical work on oblivion and savings effects, we know that declarative memories may become at first spontaneously irretrievable and only subsequently completely extinguished. Recently, this time-dependent path towards memory-trace loss has been shown to correlate with different patterns of brain activation. Environmental enrichment (EE) enhances learning and memory and affects system memory consolidation. However, there is no evidence on whether and how EE could affect the time-dependent path towards oblivion. We used the Object Recognition Test (ORT) to assess memory retrieval of familiar objects in adult mice housed in EE for 40 days (EE mice) or left in standard conditions (SC mice), 9 and 21 days after learning, with or without a brief retraining performed the day before. We found that SC mice show preferential exploration of the novel object at day 9 only with retraining, while EE mice do so even without it. At day 21, SC mice do not show preferential exploration of the novel object, irrespective of retraining, while EE mice are still able to benefit from retraining, even though they were not able to spontaneously recover the trace. Analysis of c-fos expression 20 days after learning shows a different pattern of active brain areas in response to the retraining session in EE and SC mice, with SC mice recruiting the same brain network as naïve SC or EE mice following de novo learning. This suggests that EE promotes the formation of longer-lasting object recognition memory, allowing a longer time window during which savings are present.

  17. Neural Substrates of View-Invariant Object Recognition Developed without Experiencing Rotations of the Objects

    PubMed Central

    Okamura, Jun-ya; Yamaguchi, Reona; Honda, Kazunari; Tanaka, Keiji

    2014-01-01

    One fails to recognize an unfamiliar object across changes in viewing angle when it must be discriminated from similar distractor objects. View-invariant recognition gradually develops as the viewer repeatedly sees the objects in rotation. It is assumed that different views of each object are associated with one another while their successive appearance is experienced in rotation. However, natural experience of objects also contains ample opportunities to discriminate among objects at each of the multiple viewing angles. Our previous behavioral experiments showed that after experiencing a new set of object stimuli during a task that required only discrimination at each of four viewing angles at 30° intervals, monkeys could recognize the objects across changes in viewing angle up to 60°. By recording activities of neurons from the inferotemporal cortex after various types of preparatory experience, we here found a possible neural substrate for the monkeys' performance. For object sets that the monkeys had experienced during the task that required only discrimination at each of four viewing angles, many inferotemporal neurons showed object selectivity covering multiple views. The degree of view generalization found for these object sets was similar to that found for stimulus sets with which the monkeys had been trained to conduct view-invariant recognition. These results suggest that the experience of discriminating new objects in each of several viewing angles develops the partially view-generalized object selectivity distributed over many neurons in the inferotemporal cortex, which in turn bases the monkeys' emergent capability to discriminate the objects across changes in viewing angle. PMID:25378169

  18. Neural substrates of view-invariant object recognition developed without experiencing rotations of the objects.

    PubMed

    Okamura, Jun-Ya; Yamaguchi, Reona; Honda, Kazunari; Wang, Gang; Tanaka, Keiji

    2014-11-05

    One fails to recognize an unfamiliar object across changes in viewing angle when it must be discriminated from similar distractor objects. View-invariant recognition gradually develops as the viewer repeatedly sees the objects in rotation. It is assumed that different views of each object are associated with one another while their successive appearance is experienced in rotation. However, natural experience of objects also contains ample opportunities to discriminate among objects at each of the multiple viewing angles. Our previous behavioral experiments showed that after experiencing a new set of object stimuli during a task that required only discrimination at each of four viewing angles at 30° intervals, monkeys could recognize the objects across changes in viewing angle up to 60°. By recording activities of neurons from the inferotemporal cortex after various types of preparatory experience, we here found a possible neural substrate for the monkeys' performance. For object sets that the monkeys had experienced during the task that required only discrimination at each of four viewing angles, many inferotemporal neurons showed object selectivity covering multiple views. The degree of view generalization found for these object sets was similar to that found for stimulus sets with which the monkeys had been trained to conduct view-invariant recognition. These results suggest that the experience of discriminating new objects in each of several viewing angles develops the partially view-generalized object selectivity distributed over many neurons in the inferotemporal cortex, which in turn bases the monkeys' emergent capability to discriminate the objects across changes in viewing angle.

  19. Recognition of similar objects using simulated prosthetic vision.

    PubMed

    Hu, Jie; Xia, Peng; Gu, Chaochen; Qi, Jin; Li, Sheng; Peng, Yinghong

    2014-02-01

    Due to the limitations of existing techniques, even the most advanced visual prostheses, using several hundred electrodes to transmit signals to the visual pathway, restrict sensory function and visual information. To identify the bottlenecks and guide prosthesis design, psychophysical simulations of a visual prosthesis in normally sighted individuals are desirable. In this study, psychophysical experiments on discriminating objects with similar profiles were used to test the effects of phosphene array parameters (spatial resolution, gray scale, distortion, and dropout rate) on visual information using simulated prosthetic vision. The results showed that increasing the spatial resolution and the number of gray levels and decreasing phosphene distortion and dropout rate improved recognition performance, with accuracy reaching 78.5% under the optimum condition (resolution: 32 × 32, gray levels: 8, distortion: k = 0, dropout: 0%). In combined parameter tests, significant facial recognition accuracy was achieved for all the images with k = 0.1 distortion and 10% dropout. Compared with other experiments, we find that different objects do not show specific sensitivity to changes in the parameters and that the visual information is far from sufficient even under the optimum condition. The results suggest that higher spatial resolution and more gray levels are required for visual prosthetic devices and that further research on image processing strategies to improve prosthetic vision is necessary, especially when the wearers have to accomplish more than simple visual tasks.
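
    The phosphene-array parameters manipulated above (grid resolution, gray levels, dropout) can be illustrated with a minimal rendering sketch. This is an assumption-laden illustration, not the authors' stimulus-generation code: the function name, the block-averaging scheme, and the example image are all hypothetical.

        import numpy as np

        def simulate_phosphenes(image, grid=(32, 32), gray_levels=8, dropout=0.0, seed=0):
            """Crude phosphene-array simulation: block-average the image onto a
            low-resolution grid, quantize to a few gray levels, and randomly drop
            a fraction of phosphenes (set them to black)."""
            h, w = image.shape
            gh, gw = grid
            # Block-average ("pixelize") the image onto the phosphene grid.
            ys = np.linspace(0, h, gh + 1, dtype=int)
            xs = np.linspace(0, w, gw + 1, dtype=int)
            low = np.array([[image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                             for j in range(gw)] for i in range(gh)])
            # Quantize to the requested number of gray levels.
            levels = np.linspace(low.min(), low.max(), gray_levels)
            quantized = levels[np.argmin(np.abs(low[..., None] - levels), axis=-1)]
            # Randomly drop phosphenes.
            rng = np.random.default_rng(seed)
            mask = rng.random(grid) >= dropout
            return quantized * mask

        # Example: a synthetic 256x256 gradient rendered at 32x32, 8 levels, 10% dropout.
        img = np.tile(np.linspace(0.0, 1.0, 256), (256, 1))
        rendering = simulate_phosphenes(img, grid=(32, 32), gray_levels=8, dropout=0.1)
        print(rendering.shape, np.unique(rendering).size)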

  20. Spatially rearranged object parts can facilitate perception of intact whole objects

    PubMed Central

    Cacciamani, Laura; Ayars, Alisabeth A.; Peterson, Mary A.

    2014-01-01

    The familiarity of an object depends on the spatial arrangement of its parts; when the parts are spatially rearranged, they form a novel, unrecognizable configuration. Yet the same collection of parts comprises both the familiar and novel configuration. Is it possible that the collection of familiar parts activates a representation of the intact familiar configuration even when they are spatially rearranged? We presented novel configurations as primes before test displays that assayed effects on figure-ground perception from memories of intact familiar objects. In our test displays, two equal-area regions shared a central border; one region depicted a portion of a familiar object. Previous research with such displays has shown that participants are more likely to perceive the region depicting a familiar object as the figure and the abutting region as its ground when the familiar object is depicted in its upright orientation rather than upside down. The novel primes comprised either the same or a different collection of parts as the familiar object in the test display (part-rearranged and control primes, respectively). We found that participants were more likely to perceive the familiar region as figure in upright vs. inverted displays following part-rearranged primes but not control primes. Thus, priming with a novel configuration comprising the same familiar parts as the upcoming figure-ground display facilitated orientation-dependent effects of object memories on figure assignment. Similar results were obtained when the spatially rearranged collection of parts was suggested on the groundside of the prime's border, suggesting that familiar parts in novel configurations access the representation of their corresponding intact whole object before figure assignment. These data demonstrate that familiar parts access memories of familiar objects even when they are arranged in a novel configuration. PMID:24904495

  1. 3D object recognition in TOF data sets

    NASA Astrophysics Data System (ADS)

    Hess, Holger; Albrecht, Martin; Grothof, Markus; Hussmann, Stephan; Oikonomidis, Nikolaos; Schwarte, Rudolf

    2003-08-01

    In recent years, 3D-vision systems based on the Time-Of-Flight (TOF) principle have gained more importance than Stereo Vision (SV). TOF offers direct depth-data acquisition, whereas SV requires a great amount of computational power for a comparable 3D data set. Due to the enormous progress in TOF techniques, 3D cameras can now be manufactured and used in many practical applications. Hence there is a great demand for new, accurate algorithms for 3D object recognition and classification. This paper presents a new strategy and algorithm designed for fast and solid object classification. A challenging example, the accurate classification of a (half-)sphere, demonstrates the performance of the developed algorithm. Finally, the transition from a general model of the system to specific applications such as Intelligent Airbag Control and Robot Assistance in Surgery is introduced. The paper concludes with the current research results in the above-mentioned fields.
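
    The abstract does not detail the classification algorithm itself. As a hedged illustration of how a TOF depth patch might be tested for the (half-)sphere case, the sketch below fits a sphere by linear least squares and thresholds the radial residuals; the function names, tolerance, and synthetic data are assumptions, not taken from the paper.

        import numpy as np

        def fit_sphere(points):
            """Linear least-squares sphere fit: solve
            x^2 + y^2 + z^2 = 2*a*x + 2*b*y + 2*c*z + d for center (a, b, c) and d."""
            A = np.column_stack([2 * points, np.ones(len(points))])
            b = (points ** 2).sum(axis=1)
            sol, *_ = np.linalg.lstsq(A, b, rcond=None)
            center, d = sol[:3], sol[3]
            radius = np.sqrt(d + center @ center)
            return center, radius

        def looks_spherical(points, rel_tol=0.02):
            """Classify a surface patch as (half-)spherical if the RMS radial
            residual is small relative to the fitted radius."""
            center, radius = fit_sphere(points)
            residuals = np.linalg.norm(points - center, axis=1) - radius
            return np.sqrt(np.mean(residuals ** 2)) < rel_tol * radius

        # Example: noisy points on a hemisphere of radius 0.5 m centered at the origin.
        rng = np.random.default_rng(1)
        theta = rng.uniform(0, np.pi / 2, 2000)      # polar angle (upper half only)
        phi = rng.uniform(0, 2 * np.pi, 2000)
        pts = 0.5 * np.column_stack([np.sin(theta) * np.cos(phi),
                                     np.sin(theta) * np.sin(phi),
                                     np.cos(theta)])
        pts += rng.normal(scale=0.002, size=pts.shape)   # simulated TOF range noise
        print(looks_spherical(pts))                      # expected: True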

  2. Inflatable bladder to facilitate handling of heavy objects - A concept

    NASA Technical Reports Server (NTRS)

    Mc Goldrick, G. J.

    1969-01-01

    Inflatable bladder facilitates the removal of heavy, highly finished metal parts from tote boxes or shipping containers. The proposed concept permits removal without danger of damage to the parts or injury to handling personnel.

  3. I feel your fear: shared touch between faces facilitates recognition of fearful facial expressions.

    PubMed

    Maister, Lara; Tsiakkas, Eleni; Tsakiris, Manos

    2013-02-01

    Embodied simulation accounts of emotion recognition claim that we vicariously activate somatosensory representations to simulate, and eventually understand, how others feel. Interestingly, mirror-touch synesthetes, who experience touch when observing others being touched, show both enhanced somatosensory simulation and superior recognition of emotional facial expressions. We employed synchronous visuotactile stimulation to experimentally induce a similar experience of "mirror touch" in nonsynesthetic participants. Seeing someone else's face being touched at the same time as one's own face results in the "enfacement illusion," which has been previously shown to blur self-other boundaries. We demonstrate that the enfacement illusion also facilitates emotion recognition, and, importantly, this facilitatory effect is specific to fearful facial expressions. Shared synchronous multisensory experiences may experimentally facilitate somatosensory simulation mechanisms involved in the recognition of fearful emotional expressions.

  4. Modeling 4D Human-Object Interactions for Joint Event Segmentation, Recognition, and Object Localization.

    PubMed

    Wei, Ping; Zhao, Yibiao; Zheng, Nanning; Zhu, Song-Chun

    2016-06-01

    In this paper, we present a 4D human-object interaction (4DHOI) model for solving three vision tasks jointly: i) event segmentation from a video sequence, ii) event recognition and parsing, and iii) contextual object localization. The 4DHOI model represents the geometric, temporal, and semantic relations in daily events involving human-object interactions. In 3D space, the interactions of human poses and contextual objects are modeled by semantic co-occurrence and geometric compatibility. On the time axis, the interactions are represented as a sequence of atomic event transitions with coherent objects. The 4DHOI model is a hierarchical spatial-temporal graph representation which can be used for inferring scene functionality and object affordance. The graph structures and parameters are learned using an ordered expectation maximization algorithm which mines the spatial-temporal structures of events from RGB-D video samples. Given an input RGB-D video, the inference is performed by a dynamic programming beam search algorithm which simultaneously carries out event segmentation, recognition, and object localization. We collected and released a large multiview RGB-D event dataset which contains 3,815 video sequences and 383,036 RGB-D frames captured by three RGB-D cameras. The experimental results on three challenging datasets demonstrate the strength of the proposed method.

  5. Selective attention affects conceptual object priming and recognition: a study with young and older adults.

    PubMed

    Ballesteros, Soledad; Mayas, Julia

    2014-01-01

    In the present study, we investigated the effects of selective attention at encoding on conceptual object priming (Experiment 1) and old-new recognition memory (Experiment 2) tasks in young and older adults. The procedures of both experiments included encoding and memory test phases separated by a short delay. At encoding, the picture outlines of two familiar objects, one in blue and the other in green, were presented to the left and to the right of fixation. In Experiment 1, participants were instructed to attend to the picture outline of a certain color and to classify the object as natural or artificial. After a short delay, participants performed a natural/artificial speeded conceptual classification task with repeated attended, repeated unattended, and new pictures. In Experiment 2, participants at encoding memorized the attended pictures and classified them as natural or artificial. After the encoding phase, they performed an old-new recognition memory task. Consistent with previous findings with perceptual priming tasks, we found that conceptual object priming, like explicit memory, required attention at encoding. Significant priming was obtained in both age groups, but only for those pictures that were attended at encoding. Although older adults were slower than young adults, both groups showed facilitation for attended pictures. In line with previous studies, young adults had better recognition memory than older adults.

  6. Selective attention affects conceptual object priming and recognition: a study with young and older adults

    PubMed Central

    Ballesteros, Soledad; Mayas, Julia

    2015-01-01

    In the present study, we investigated the effects of selective attention at encoding on conceptual object priming (Experiment 1) and old–new recognition memory (Experiment 2) tasks in young and older adults. The procedures of both experiments included encoding and memory test phases separated by a short delay. At encoding, the picture outlines of two familiar objects, one in blue and the other in green, were presented to the left and to the right of fixation. In Experiment 1, participants were instructed to attend to the picture outline of a certain color and to classify the object as natural or artificial. After a short delay, participants performed a natural/artificial speeded conceptual classification task with repeated attended, repeated unattended, and new pictures. In Experiment 2, participants at encoding memorized the attended pictures and classified them as natural or artificial. After the encoding phase, they performed an old–new recognition memory task. Consistent with previous findings with perceptual priming tasks, we found that conceptual object priming, like explicit memory, required attention at encoding. Significant priming was obtained in both age groups, but only for those pictures that were attended at encoding. Although older adults were slower than young adults, both groups showed facilitation for attended pictures. In line with previous studies, young adults had better recognition memory than older adults. PMID:25628588

  7. Short-term plasticity of visuo-haptic object recognition.

    PubMed

    Kassuba, Tanja; Klinge, Corinna; Hölig, Cordula; Röder, Brigitte; Siebner, Hartwig R

    2014-01-01

    Functional magnetic resonance imaging (fMRI) studies have provided ample evidence for the involvement of the lateral occipital cortex (LO), fusiform gyrus (FG), and intraparietal sulcus (IPS) in visuo-haptic object integration. Here we applied 30 min of sham (non-effective) or real offline 1 Hz repetitive transcranial magnetic stimulation (rTMS) to perturb neural processing in left LO immediately before subjects performed a visuo-haptic delayed-match-to-sample task during fMRI. In this task, subjects had to match sample (S1) and target (S2) objects presented sequentially within or across vision and/or haptics in both directions (visual-haptic or haptic-visual) and decide whether or not S1 and S2 were the same objects. Real rTMS transiently decreased activity at the site of stimulation and remote regions such as the right LO and bilateral FG during haptic S1 processing. Without affecting behavior, the same stimulation gave rise to relative increases in activation during S2 processing in the right LO, left FG, bilateral IPS, and other regions previously associated with object recognition. Critically, the modality of S2 determined which regions were recruited after rTMS. Relative to sham rTMS, real rTMS induced increased activations during crossmodal congruent matching in the left FG for haptic S2 and the temporal pole for visual S2. In addition, we found stronger activations for incongruent than congruent matching in the right anterior parahippocampus and middle frontal gyrus for crossmodal matching of haptic S2 and in the left FG and bilateral IPS for unimodal matching of visual S2, only after real but not sham rTMS. The results imply that a focal perturbation of the left LO triggers modality-specific interactions between the stimulated left LO and other key regions of object processing possibly to maintain unimpaired object recognition. This suggests that visual and haptic processing engage partially distinct brain networks during visuo-haptic object matching.

  8. Robust feature detection for 3D object recognition and matching

    NASA Astrophysics Data System (ADS)

    Pankanti, Sharath; Dorai, Chitra; Jain, Anil K.

    1993-06-01

    Salient surface features play a central role in tasks related to 3-D object recognition and matching. There is a large body of psychophysical evidence demonstrating the perceptual significance of surface features such as local minima of principal curvatures in the decomposition of objects into a hierarchy of parts. Many recognition strategies employed in machine vision also directly use features derived from surface properties for matching. Hence, it is important to develop techniques that detect surface features reliably. Our proposed scheme consists of (1) a preprocessing stage, (2) a feature detection stage, and (3) a feature integration stage. The preprocessing step selectively smoothes out noise in the depth data without degrading salient surface details and permits reliable local estimation of the surface features. The feature detection stage detects both edge-based and region-based features, of which many are derived from curvature estimates. The third stage is responsible for integrating the information provided by the individual feature detectors. This stage also completes the partial boundaries provided by the individual feature detectors, using proximity and continuity principles of Gestalt. All our algorithms use local support and, therefore, are inherently parallelizable. We demonstrate the efficacy and robustness of our approach by applying it to two diverse domains of applications: (1) segmentation of objects into volumetric primitives and (2) detection of salient contours on free-form surfaces. We have tested our algorithms on a number of real range images with varying degrees of noise and missing data due to self-occlusion. The preliminary results are very encouraging.
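
    The curvature-estimation step named in this abstract can be made concrete with a small sketch. This is not the authors' pipeline: it assumes a dense range image z(x, y), uses a plain Gaussian in place of their edge-preserving smoothing, and applies the standard Monge-patch formulas for mean and Gaussian curvature; all thresholds and the synthetic surface are illustrative.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def principal_curvatures(depth, sigma=2.0):
            """Estimate principal curvatures of a range image z(x, y).
            Gaussian smoothing stands in for the preprocessing stage (an assumption)."""
            z = gaussian_filter(depth, sigma)
            zy, zx = np.gradient(z)
            zyy, zyx = np.gradient(zy)
            zxy, zxx = np.gradient(zx)
            denom = 1.0 + zx ** 2 + zy ** 2
            K = (zxx * zyy - zxy ** 2) / denom ** 2                  # Gaussian curvature
            H = ((1 + zx ** 2) * zyy - 2 * zx * zy * zxy
                 + (1 + zy ** 2) * zxx) / (2 * denom ** 1.5)         # mean curvature
            disc = np.sqrt(np.maximum(H ** 2 - K, 0.0))
            return H + disc, H - disc                                # k_max, k_min

        # Candidate part boundaries: strongly negative values of the smaller
        # principal curvature, following the minima-rule intuition.
        depth = np.fromfunction(lambda y, x: np.sin(x / 20.0) + 0.5 * np.cos(y / 15.0),
                                (200, 200))
        k_max, k_min = principal_curvatures(depth)
        boundary_mask = k_min < np.percentile(k_min, 5)
        print(boundary_mask.mean())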

  9. Object similarity affects the perceptual strategy underlying invariant visual object recognition in rats

    PubMed Central

    Rosselli, Federica B.; Alemi, Alireza; Ansuini, Alessio; Zoccolan, Davide

    2015-01-01

    In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning. PMID:25814936

  10. Object similarity affects the perceptual strategy underlying invariant visual object recognition in rats.

    PubMed

    Rosselli, Federica B; Alemi, Alireza; Ansuini, Alessio; Zoccolan, Davide

    2015-01-01

    In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning.

  11. Object recognition testing: methodological considerations on exploration and discrimination measures.

    PubMed

    Akkerman, Sven; Blokland, Arjan; Reneerkens, Olga; van Goethem, Nick P; Bollen, Eva; Gijselaers, Hieronymus J M; Lieben, Cindy K J; Steinbusch, Harry W M; Prickaerts, Jos

    2012-07-01

    The object recognition task (ORT) is a popular one-trial learning test for animals. In the current study, we investigated several methodological issues concerning the task. Data were pooled from 28 ORT studies, containing 731 male Wistar rats. We investigated the relationship between three common absolute and relative discrimination measures, as well as their relation to exploratory activity. In this context, the effects of pre-experimental habituation, object familiarity, trial duration, retention interval and the amnesic drugs MK-801 and scopolamine were investigated. Our analyses showed that the ORT is very sensitive, capable of detecting subtle differences in memory (discrimination) and exploratory performance. As a consequence, it is susceptible to potential biases due to (injection) stress and side effects of drugs. Our data indicated that a minimum amount of exploration is required in the sample and test trials for stable, significant discrimination performance. However, there was no relationship between the level of exploration in the sample trial and discrimination performance. In addition, the level of exploration in the test trial was positively related to the absolute discrimination measure, whereas this was not the case for relative discrimination measures, which correct for exploratory differences, making them more resistant to exploration biases. Animals appeared to remember object information over multiple test sessions. Therefore, when animals have encountered both objects in prior test sessions, the object preference observed in the test trial after a 1-h retention interval is probably due to a relative difference in familiarity between the objects in the test trial, rather than true novelty per se. Taken together, our findings suggest taking into consideration pre-experimental exposure (familiarization) to objects, habituation to treatment procedures, and the use of relative discrimination measures when using the ORT.
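
    The distinction between absolute and relative discrimination measures can be illustrated with the conventional ORT indices computed from test-trial exploration times. The exact definitions used across the pooled studies may differ; the forms below (often labeled d1, d2 and d3 in the ORT literature) are a commonly used convention, and the function name and example numbers are illustrative.

        def discrimination_indices(t_novel, t_familiar):
            """Common ORT discrimination measures from test-trial exploration times (s).
            d1 is absolute; d2 and d3 normalize by total exploration and are therefore
            less sensitive to differences in overall exploratory activity."""
            total = t_novel + t_familiar
            d1 = t_novel - t_familiar                 # absolute discrimination
            d2 = d1 / total if total else 0.0         # relative: -1 to 1, 0 = chance
            d3 = t_novel / total if total else 0.5    # relative: 0 to 1, 0.5 = chance
            return {"e_total": total, "d1": d1, "d2": d2, "d3": d3}

        # Example: a rat explores the novel object for 18 s and the familiar one for 12 s.
        print(discrimination_indices(18.0, 12.0))
        # {'e_total': 30.0, 'd1': 6.0, 'd2': 0.2, 'd3': 0.6}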

  12. Visual object recognition for mobile tourist information systems

    NASA Astrophysics Data System (ADS)

    Paletta, Lucas; Fritz, Gerald; Seifert, Christin; Luley, Patrick; Almer, Alexander

    2005-03-01

    We describe a mobile vision system that is capable of automated object identification using images captured from a PDA or a camera phone. We present a solution for the enabling technology of outdoor vision-based object recognition that will extend state-of-the-art location- and context-aware services towards object-based awareness in urban environments. In the proposed application scenario, tourist pedestrians are equipped with GPS, W-LAN and a camera attached to a PDA or a camera phone. They are interested in whether their field of view contains tourist sights that would point to more detailed information. Multimedia-type data about related history, the architecture, or other cultural context of historic or artistic relevance might be explored by a mobile user who intends to learn within the urban environment. Learning from ambient cues is in this way achieved by pointing the device towards the urban sight, capturing an image, and consequently getting information about the object on site and within the focus of attention, i.e., the user's current field of view.

  13. New neural-networks-based 3D object recognition system

    NASA Astrophysics Data System (ADS)

    Abolmaesumi, Purang; Jahed, M.

    1997-09-01

    Three-dimensional object recognition has always been one of the challenging fields in computer vision. Ullman and Basri (1991) proposed that this task can be accomplished by using a database of 2-D views of the objects. The main problem with their proposed system is that the corresponding points must be known in order to interpolate the views. In addition, their system requires a supervisor to decide which class the presented view belongs to. In this paper, we propose a new momentum-Fourier descriptor that is invariant to scale, translation, and rotation. This descriptor provides the input feature vectors to our proposed system. Using the Dystal network, we show that the objects can be classified with over 95% precision. We have used this system to classify objects such as cube, cone, sphere, torus, and cylinder. Because of the nature of the Dystal network, the system reaches its stable point with a single presentation of a view. The system can also group similar views into a single class (e.g., for the cube, the system generated 9 different classes for 50 different input views), which can be used to select an optimum database of training views. The system is also robust to noise and deformed views.
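
    The abstract does not specify how the momentum-Fourier descriptor is computed, but the generic Fourier-descriptor route to the stated invariances is standard and can be sketched briefly. The function below is an illustration under that assumption, not the paper's descriptor: dropping the DC term removes translation, normalizing by the first harmonic removes scale, and taking magnitudes removes rotation and starting point.

        import numpy as np

        def fourier_descriptor(boundary_xy, n_coeffs=16):
            """Translation-, scale- and rotation-invariant shape descriptor from a
            closed 2-D boundary (an N x 2 array of contour points)."""
            z = boundary_xy[:, 0] + 1j * boundary_xy[:, 1]   # complex contour signal
            c = np.fft.fft(z)
            c[0] = 0.0                                       # translation invariance
            mags = np.abs(c)
            mags = mags / mags[1]                            # scale invariance
            return mags[1:n_coeffs + 1]                      # rotation/start-point invariant

        # Example: the descriptor of a circle is unchanged by shifting, scaling, rotating.
        t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
        circle = np.column_stack([np.cos(t), np.sin(t)])
        R = np.array([[np.cos(0.7), -np.sin(0.7)],
                      [np.sin(0.7),  np.cos(0.7)]])
        transformed = 3.0 * circle @ R + np.array([5.0, -2.0])
        print(np.allclose(fourier_descriptor(circle),
                          fourier_descriptor(transformed), atol=1e-6))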

  14. Exploring local regularities for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Tian, Huaiwen; Qin, Shengfeng

    2016-11-01

    In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that when two local regularities, L-MSDA and L-MSDSM, are combined, they produce better performance. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined usage of L-MSDA and L-MSDSM with the identified weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.

  15. Covariation of Color and Luminance Facilitate Object Individuation in Infancy

    ERIC Educational Resources Information Center

    Woods, Rebecca J.; Wilcox, Teresa

    2010-01-01

    The ability to individuate objects is one of our most fundamental cognitive capacities. Recent research has revealed that when objects vary in color or luminance alone, infants fail to individuate those objects until 11.5 months. However, color and luminance frequently covary in the natural environment, thus providing a more salient and reliable…

  16. Covariation of Color and Luminance Facilitate Object Individuation in Infancy

    ERIC Educational Resources Information Center

    Woods, Rebecca J.; Wilcox, Teresa

    2010-01-01

    The ability to individuate objects is one of our most fundamental cognitive capacities. Recent research has revealed that when objects vary in color or luminance alone, infants fail to individuate those objects until 11.5 months. However, color and luminance frequently covary in the natural environment, thus providing a more salient and reliable…

  17. Anthropomorphic robot for recognition and drawing generalized object images

    NASA Astrophysics Data System (ADS)

    Ginzburg, Vera M.

    1998-10-01

    The process of recognition, for instance understanding text written in different fonts, consists in stripping away the individual attributes of the letters of the particular font. It is shown that such a process, in nature and in technology, can be achieved by narrowing the spatial-frequency content of the object's image through defocusing. In a defocused image only certain areas remain, the so-called Informative Fragments (IFs), which together form the generalized (stylized) image of many identical objects. It is shown that the variety of shapes of IFs is restricted and can be represented by a 'geometrical alphabet'. The 'letters' of this alphabet can be created using two basic 'genetic' figures: a stripe and a round spot. It is known from physiology that special cells of the visual cortex respond to these particular figures. A prototype of such a 'genetic' alphabet has been made using Boolean algebra (Venn diagrams). The algorithm for drawing a letter's ('genlet's') shape in this alphabet, and generalized images of objects (for example, a 'sleeping cat'), are given. A scheme of an anthropomorphic robot is shown, together with the results of a model computer experiment of the robot's action: 'drawing' the generalized image.

  18. Infrared detection, recognition and identification of handheld objects

    NASA Astrophysics Data System (ADS)

    Adomeit, Uwe

    2012-10-01

    A main criterion for the comparison and selection of thermal imagers for military applications is their nominal range performance. This nominal range performance is calculated for a defined task and standardized target and environmental conditions. The only standardization available to date is STANAG 4347. The target defined there is based on a main battle tank in front view. Because of modified military requirements, this target is no longer up-to-date. Today, different tasks are of interest, especially differentiation between friend and foe and identification of humans. There is no direct way to differentiate between friend and foe in asymmetric scenarios, but one clue can be that someone is carrying a weapon. This clue can be transformed into the observer tasks of detection: a person is carrying or is not carrying an object; recognition: the object is a long / medium / short range weapon or civil equipment; and identification: the object can be named (e. g. AK-47, M-4, G36, RPG7, axe, shovel etc.). These tasks can be assessed experimentally, and from the results of such an assessment a standard target for handheld objects may be derived. For a first assessment, a human carrying 13 different handheld objects in front of his chest was recorded at four different ranges with an IR dual-band camera. From the recorded data, a perception experiment was prepared. It was conducted with 17 observers in a 13-alternative forced-choice, unlimited-observation-time arrangement. The results of the test, together with Minimum Temperature Difference Perceived measurements of the camera and the temperature difference and critical dimension derived from the recorded imagery, allowed defining a first standard target according to the above tasks. This standard target consists of 2.5 / 3.5 / 5 DRI line pairs on target, 0.24 m critical size and 1 K temperature difference. The values are preliminary and have to be refined in the future. Necessary are different aspect angles, different

  19. Covariation of Color and Luminance Facilitate Object Individuation in Infancy

    PubMed Central

    Woods, Rebecca J.; Wilcox, Teresa

    2013-01-01

    The ability to individuate objects is one of our most fundamental cognitive capacities. Recent research has revealed that when objects vary in color or luminance alone, infants fail to individuate those objects until 11.5 months. However, color and luminance frequently covary in the natural environment, thus providing a more salient and reliable indicator of distinct objects. For this reason, we propose that infants may be more likely to individuate when objects vary in both color and luminance. Using the narrow-screen task of Wilcox and Baillargeon (1998a), in Experiment 1 we assessed 7.5-month-old infants' ability to individuate uniformly colored objects that varied in both color and luminance or luminance alone. Experiment 2 further explored the link between color and luminance by assessing infants' ability to use pattern differences that included luminance or color to individuate objects. Results indicated that infants individuated objects only when covariations in color and luminance were used. These studies add to a growing body of literature investigating the interaction of color and luminance in object processing in infants and have implications for developmental changes in the nature and content of infants' object representations. PMID:20438179

  20. Object recognition and pose estimation of planar objects from range data

    NASA Technical Reports Server (NTRS)

    Pendleton, Thomas W.; Chien, Chiun Hong; Littlefield, Mark L.; Magee, Michael

    1994-01-01

    The Extravehicular Activity Helper/Retriever (EVAHR) is a robotic device currently under development at the NASA Johnson Space Center that is designed to fetch objects or to assist in retrieving an astronaut who may have become inadvertently de-tethered. The EVAHR will be required to exhibit a high degree of intelligent autonomous operation and will base much of its reasoning upon information obtained from one or more three-dimensional sensors that it will carry and control. At the highest level of visual cognition and reasoning, the EVAHR will be required to detect objects, recognize them, and estimate their spatial orientation and location. The recognition phase and estimation of spatial pose will depend on the ability of the vision system to reliably extract geometric features of the objects, such as whether the surface topologies observed are planar or curved and the spatial relationships between the component surfaces. In order to achieve these tasks, three-dimensional sensing of the operational environment and of objects in the environment will therefore be essential. One of the sensors being considered to provide image data for object recognition and pose estimation is a phase-shift laser scanner. The characteristics of the data provided by this scanner have been studied and algorithms have been developed for segmenting range images into planar surfaces, extracting basic features such as surface area, and recognizing the object based on the characteristics of extracted features. Also, an approach has been developed for estimating the spatial orientation and location of the recognized object based on the orientations of the extracted planes and their intersection points. This paper presents some of the algorithms that have been developed for the purpose of recognizing and estimating the pose of objects as viewed by the laser scanner, and characterizes the desirability and utility of these algorithms within the context of the scanner itself, considering data quality and
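
    The plane-based pose idea mentioned above (orientations of extracted planes and their intersection points) can be sketched in a few lines. This is not the EVAHR pipeline: the range-image segmentation step is omitted, the intersection point is used only as a crude pose anchor, and all names and the synthetic cube-corner data are assumptions.

        import numpy as np

        def fit_plane(points):
            """Least-squares plane through a 3-D point set: (unit normal, centroid)."""
            centroid = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - centroid)
            return vt[-1], centroid                  # normal = direction of least variance

        def plane_intersection_point(planes):
            """Point minimizing squared distance to three or more planes,
            each given as (normal, point_on_plane)."""
            A = np.array([n for n, _ in planes])
            b = np.array([n @ p for n, p in planes])
            x, *_ = np.linalg.lstsq(A, b, rcond=None)
            return x

        # Example: three noisy faces of a unit-cube corner meeting at (1, 1, 1).
        rng = np.random.default_rng(0)
        def face(axis):
            pts = rng.uniform(0, 1, (300, 3))
            pts[:, axis] = 1.0 + rng.normal(0, 0.002, 300)   # simulated range noise
            return pts

        planes = [fit_plane(face(a)) for a in range(3)]
        print(np.round(plane_intersection_point(planes), 3))   # ~ [1. 1. 1.]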

  1. 3D video analysis of the novel object recognition test in rats.

    PubMed

    Matsumoto, Jumpei; Uehara, Takashi; Urakawa, Susumu; Takamura, Yusaku; Sumiyoshi, Tomiki; Suzuki, Michio; Ono, Taketoshi; Nishijo, Hisao

    2014-10-01

    The novel object recognition (NOR) test has been widely used to test memory function. We developed a 3D computerized video analysis system that estimates nose contact with an object in Long Evans rats to analyze object exploration during NOR tests. The results indicate that the 3D system reproducibly and accurately scores the NOR test. Furthermore, the 3D system captures a 3D trajectory of the nose during object exploration, enabling detailed analyses of spatiotemporal patterns of object exploration. The 3D trajectory analysis revealed a specific pattern of object exploration in the sample phase of the NOR test: normal rats first explored the lower parts of objects and then gradually explored the upper parts. Systemic injection of MK-801 suppressed changes in these exploration patterns. The results, along with those of previous studies, suggest that the changes in the exploration patterns reflect neophobia to a novel object and/or changes from spatial learning to object learning. These results demonstrate that the 3D tracking system is useful not only for detailed scoring of animal behaviors but also for investigation of characteristic spatiotemporal patterns of object exploration. The system has the potential to facilitate future investigation of neural mechanisms underlying object exploration that result from dynamic and complex brain activity.

  2. 3-D Object Recognition from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case
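
    The region-grouping step described above (object points identified from the point cloud, buildings separated from trees) can be illustrated with a minimal normalized-DSM sketch. This is not the authors' software: the height threshold, the roughness-based building-vs-tree cue, and the synthetic grid are all assumptions, and the boundary tracing, regularization and roof construction steps are omitted.

        import numpy as np
        from scipy import ndimage

        def extract_object_regions(dsm, dem, min_height=2.5, min_cells=25):
            """Group above-ground cells of a gridded surface into candidate object
            regions using the normalized DSM (DSM minus DEM), then use surface
            roughness as a crude building-vs-tree cue (roofs are smoother than canopy)."""
            ndsm = dsm - dem
            above = ndsm > min_height                  # step 1: above-ground mask
            labels, n = ndimage.label(above)           # step 2: group into regions
            regions = []
            for idx in range(1, n + 1):
                mask = labels == idx
                if mask.sum() < min_cells:             # drop tiny clutter regions
                    continue
                roughness = ndsm[mask].std()           # within-region height variation
                regions.append({"id": idx,
                                "cells": int(mask.sum()),
                                "mean_height": float(ndsm[mask].mean()),
                                "guess": "building" if roughness < 0.5 else "tree"})
            return regions

        # Example with a synthetic 100 x 100 grid: a flat-roofed block and a noisy canopy.
        dem = np.zeros((100, 100))
        dsm = dem.copy()
        dsm[10:30, 10:40] = 8.0                                                   # building
        dsm[60:90, 50:80] = 6.0 + np.random.default_rng(0).normal(0, 1.2, (30, 30))  # tree
        print(extract_object_regions(dsm, dem))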

  3. Use of 3D faces facilitates facial expression recognition in children

    PubMed Central

    Wang, Lamei; Chen, Wenfeng; Li, Hong

    2017-01-01

    This study assessed whether presenting 3D face stimuli could facilitate children’s facial expression recognition. Seventy-one children aged between 3 and 6 participated in the study. Their task was to judge whether a face presented in each trial showed a happy or fearful expression. Half of the face stimuli were shown with 3D representations, whereas the other half of the images were shown as 2D pictures. We compared expression recognition under these conditions. The results showed that the use of 3D faces improved the speed of facial expression recognition in both boys and girls. Moreover, 3D faces improved boys’ recognition accuracy for fearful expressions. Since fear is the most difficult facial expression for children to recognize, the facilitation effect of 3D faces has important practical implications for children with difficulties in facial expression recognition. The potential benefits of 3D representation for other expressions also have implications for developing more realistic assessments of children’s expression recognition. PMID:28368008

  4. Facilitating Use of Speech Recognition Software for People with Disabilities: A Comparison of Three Treatments

    ERIC Educational Resources Information Center

    Hird, Kathryn; Hennessey, Neville W.

    2007-01-01

    This study examined the relative benefit of three interventions (i.e. physiological, behavioural, and pragmatic) designed to facilitate speech recognition software use. Participants were 15 adults with dysarthria associated with a variety of aetiological conditions, including cerebral palsy, Parkinson's disease, and motor neuron disease. Results…

  5. Positive, but Not Negative, Facial Expressions Facilitate 3-Month-Olds' Recognition of an Individual Face

    ERIC Educational Resources Information Center

    Brenna, Viola; Proietti, Valentina; Montirosso, Rosario; Turati, Chiara

    2013-01-01

    The current study examined whether and how the presence of a positive or a negative emotional expression may affect the face recognition process at 3 months of age. Using a familiarization procedure, Experiment 1 demonstrated that positive (i.e., happiness), but not negative (i.e., fear and anger) facial expressions facilitate infants' ability to…

  6. Facilitating Use of Speech Recognition Software for People with Disabilities: A Comparison of Three Treatments

    ERIC Educational Resources Information Center

    Hird, Kathryn; Hennessey, Neville W.

    2007-01-01

    This study examined the relative benefit of three interventions (i.e. physiological, behavioural, and pragmatic) designed to facilitate speech recognition software use. Participants were 15 adults with dysarthria associated with a variety of aetiological conditions, including cerebral palsy, Parkinson's disease, and motor neuron disease. Results…

  7. Positive, but Not Negative, Facial Expressions Facilitate 3-Month-Olds' Recognition of an Individual Face

    ERIC Educational Resources Information Center

    Brenna, Viola; Proietti, Valentina; Montirosso, Rosario; Turati, Chiara

    2013-01-01

    The current study examined whether and how the presence of a positive or a negative emotional expression may affect the face recognition process at 3 months of age. Using a familiarization procedure, Experiment 1 demonstrated that positive (i.e., happiness), but not negative (i.e., fear and anger) facial expressions facilitate infants' ability to…

  8. Improving human object recognition performance using video enhancement techniques

    NASA Astrophysics Data System (ADS)

    Whitman, Lucy S.; Lewis, Colin; Oakley, John P.

    2004-12-01

    Atmospheric scattering causes significant degradation in the quality of video images, particularly when imaging over long distances. The principal problem is the reduction in contrast due to scattered light. It is known that when the scattering particles are not too large compared with the imaging wavelength (i.e. Mie scattering), high spatial resolution information may still be contained within a low-contrast image. Unfortunately this information is not easily perceived by a human observer, particularly when using a standard video monitor. A secondary problem is the difficulty of achieving a sharp focus, since automatic focus techniques tend to fail in such conditions. Recently several commercial colour video processing systems have become available. These systems use various techniques to improve image quality in low-contrast conditions whilst retaining colour content. These systems produce improvements in subjective image quality in some situations, particularly in conditions of haze and light fog. There is also some evidence that video enhancement leads to improved ATR performance when used as a pre-processing stage. Psychological literature indicates that low contrast levels generally lead to a reduction in the performance of human observers in carrying out simple visual tasks. The aim of this paper is to present the results of an empirical study on object recognition in adverse viewing conditions. The chosen visual task was vehicle number plate recognition at long ranges (500 m and beyond). Two different commercial video enhancement systems are evaluated using the same protocol. The results show an increase in effective range with some differences between the different enhancement systems.

  9. Multispectral image analysis for object recognition and classification

    NASA Astrophysics Data System (ADS)

    Viau, C. R.; Payeur, P.; Cretu, A.-M.

    2016-05-01

    Computer and machine vision applications are used in numerous fields to analyze static and dynamic imagery in order to assist or automate decision-making processes. Advancements in sensor technologies now make it possible to capture and visualize imagery at various wavelengths (or bands) of the electromagnetic spectrum. Multispectral imaging has countless applications in various fields including (but not limited to) security, defense, space, medical, manufacturing and archeology. The development of advanced algorithms to process and extract salient information from the imagery is a critical component of the overall system performance. The fundamental objective of this research project was to investigate the benefits of combining imagery from the visual and thermal bands of the electromagnetic spectrum to improve the recognition rates and accuracy of commonly found objects in an office setting. A multispectral dataset (visual and thermal) was captured and features from the visual and thermal images were extracted and used to train support vector machine (SVM) classifiers. The SVM's class prediction ability was evaluated separately on the visual, thermal and multispectral testing datasets.
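
    The experimental design above (SVM classifiers evaluated on visual-only, thermal-only and multispectral feature sets) can be sketched with scikit-learn. The sketch assumes feature-level fusion by simple concatenation and uses random stand-in features; the feature extraction method, class count and all names are illustrative, not taken from the paper.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)

        # Stand-ins for per-image feature vectors extracted from each band
        # (e.g. histograms or texture statistics); 200 samples, 4 object classes.
        n, n_vis, n_thermal = 200, 64, 32
        y = rng.integers(0, 4, n)
        vis_feats = rng.normal(size=(n, n_vis)) + y[:, None] * 0.4         # visual band
        thermal_feats = rng.normal(size=(n, n_thermal)) + y[:, None] * 0.3  # thermal band

        def svm_accuracy(X, y):
            """5-fold cross-validated accuracy of an RBF SVM on one feature set."""
            clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
            return cross_val_score(clf, X, y, cv=5).mean()

        # Compare single-band classifiers against a simple feature-level fusion.
        print("visual only   :", svm_accuracy(vis_feats, y))
        print("thermal only  :", svm_accuracy(thermal_feats, y))
        print("multispectral :", svm_accuracy(np.hstack([vis_feats, thermal_feats]), y))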

  10. Emerging technologies with potential for objectively evaluating speech recognition skills.

    PubMed

    Rawool, Vishakha Waman

    2016-01-01

    Work-related exposure to noise and other ototoxins can cause damage to the cochlea, synapses between the inner hair cells, the auditory nerve fibers, and higher auditory pathways, leading to difficulties in recognizing speech. Procedures designed to determine speech recognition scores (SRS) in an objective manner can be helpful in disability compensation cases where the worker claims to have poor speech perception due to exposure to noise or ototoxins. Such measures can also be helpful in determining SRS in individuals who cannot provide reliable responses to speech stimuli, including patients with Alzheimer's disease, traumatic brain injuries, and infants with and without hearing loss. Cost-effective neural monitoring hardware and software is being rapidly refined due to the high demand for neurogaming (games involving the use of brain-computer interfaces), health, and other applications. More specifically, two related advances in neuro-technology include relative ease in recording neural activity and availability of sophisticated analysing techniques. These techniques are reviewed in the current article and their applications for developing objective SRS procedures are proposed. Issues related to neuroaudioethics (ethics related to collection of neural data evoked by auditory stimuli including speech) and neurosecurity (preservation of a person's neural mechanisms and free will) are also discussed.

  11. Object recognition in Williams syndrome: uneven ventral stream activation.

    PubMed

    O'Hearn, Kirsten; Roth, Jennifer K; Courtney, Susan M; Luna, Beatriz; Street, Whitney; Terwillinger, Robert; Landau, Barbara

    2011-05-01

    Williams syndrome (WS) is a genetic disorder associated with severe visuospatial deficits, relatively strong language skills, heightened social interest, and increased attention to faces. On the basis of the visuospatial deficits, this disorder has been characterized primarily as a deficit of the dorsal stream, the occipitoparietal brain regions that subserve visuospatial processing. However, some evidence indicates that this disorder may also affect the development of the ventral stream, the occipitotemporal cortical regions that subserve face and object recognition. The present studies examined ventral stream function in WS, with the hypothesis that faces would produce a relatively more mature pattern of ventral occipitotemporal activation, relative to other objects that are also represented across these visual areas. Using functional magnetic imaging, we compared activation patterns during viewing of human faces, cat faces, houses and shoes in individuals with WS (age 14-27), typically developing 6-9-year-olds (matched approximately on mental age), and typically developing 14-26-year-olds (matched on chronological age). Typically developing individuals exhibited changes in the pattern of activation over age, consistent with previous reports. The ventral stream topography of individuals with WS differed from both control groups, however, reflecting the same level of activation to face stimuli as chronological age matches, but less activation to house stimuli than either mental age or chronological age matches. We discuss the possible causes of this unusual topography and implications for understanding the behavioral profile of people with WS.

  12. Biological object recognition in μ-radiography images

    NASA Astrophysics Data System (ADS)

    Prochazka, A.; Dammer, J.; Weyda, F.; Sopko, V.; Benes, J.; Zeman, J.; Jandejsek, I.

    2015-03-01

    This study presents the applicability of real-time microradiography to biological objects, namely the horse chestnut leafminer, Cameraria ohridella (Insecta: Lepidoptera, Gracillariidae), and the subsequent image processing, focusing on image segmentation and object recognition. The microradiography of insects (such as the horse chestnut leafminer) provides non-invasive imaging that leaves the organisms alive. The imaging requires a radiographic system with high spatial resolution (micrometer scale). Our radiographic system consists of a micro-focus X-ray tube and two types of detectors. The first is a charge-integrating detector (Hamamatsu flat panel); the second is a pixel semiconductor detector (Medipix2 detector). The latter allows detection of single photon quanta of ionizing radiation. We obtained microradiography images of numerous horse chestnut leafminer pupae that are easily recognizable in automatic mode using image processing methods. We implemented an algorithm that is able to count the number of dead and alive pupae in images. The algorithm was based on two methods: 1) noise reduction using mathematical morphology filters, 2) Canny edge detection. The accuracy of the algorithm is higher for the Medipix2 (average recall for detection of alive pupae = 0.99, average recall for detection of dead pupae = 0.83) than for the flat panel (average recall for detection of alive pupae = 0.99, average recall for detection of dead pupae = 0.77). Therefore, we conclude that Medipix2 has lower noise and displays the contours (edges) of biological objects better. Our method allows automatic detection and counting of dead and alive chestnut leafminer pupae. It leads to faster monitoring of the population of one of the world's important insect pests.
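
    The two processing steps named in the abstract (morphological noise reduction and Canny edge detection) can be sketched with OpenCV. The sketch only segments and counts pupa-sized regions; the dead-vs-alive criterion is not described in the abstract and is therefore not implemented, and all kernel sizes, thresholds and the file name are assumptions.

        import cv2
        import numpy as np

        def count_pupae(gray_image, min_area=150):
            """Segment candidate pupae in a microradiography image: morphological
            noise reduction followed by Canny edge detection and contour filtering."""
            # 1) Noise reduction with mathematical morphology (opening then closing).
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
            cleaned = cv2.morphologyEx(gray_image, cv2.MORPH_OPEN, kernel)
            cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)
            # 2) Canny edge detection, then close the edges into object outlines.
            edges = cv2.Canny(cleaned, 50, 150)
            edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
            contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            # Keep contours large enough to be pupae rather than residual noise.
            pupae = [c for c in contours if cv2.contourArea(c) > min_area]
            return len(pupae), pupae

        # Usage (hypothetical file name):
        # img = cv2.imread("radiograph.png", cv2.IMREAD_GRAYSCALE)
        # n, contours = count_pupae(img)
        # print(n)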

  13. Associative recognition and the hippocampus: differential effects of hippocampal lesions on object-place, object-context and object-place-context memory.

    PubMed

    Langston, Rosamund F; Wood, Emma R

    2010-10-01

    The hippocampus is thought to be required for the associative recognition of objects together with the spatial or temporal contexts in which they occur. However, recent data showing that rats with fornix lesions perform as well as controls in an object-place task, while being impaired on an object-place-context task (Eacott and Norman (2004) J Neurosci 24:1948-1953), suggest that not all forms of context-dependent associative recognition depend on the integrity of the hippocampus. To examine the role of the hippocampus in context-dependent recognition directly, the present study tested the effects of large, selective, bilateral hippocampus lesions in rats on performance of a series of spontaneous recognition memory tasks: object recognition, object-place recognition, object-context recognition and object-place-context recognition. Consistent with the effects of fornix lesions, animals with hippocampus lesions were impaired only on the object-place-context task. These data confirm that not all forms of context-dependent associative recognition are mediated by the hippocampus. Subsequent experiments suggested that the object-place task does not require an allocentric representation of space, which could account for the lack of impairment following hippocampus lesions. Importantly, as the object-place-context task has similar spatial requirements, the selective deficit in object-place-context recognition suggests that this task requires hippocampus-dependent neural processes distinct from those required for allocentric spatial memory, or for object memory, object-place memory or object-context memory. Two possibilities are that object, place, and context information converge only in the hippocampus, or that recognition of integrated object-place-context information requires a hippocampus-dependent mode of retrieval, such as recollection.

  14. Object Familiarity Facilitates Foreign Word Learning in Preschoolers

    ERIC Educational Resources Information Center

    Sera, Maria D.; Cole, Caitlin A.; Oromendia, Mercedes; Koenig, Melissa A.

    2014-01-01

    Studying how children learn words in a foreign language can shed light on how language learning changes with development. In one experiment, we examined whether three-, four-, and five-year-olds could learn and remember words for familiar and unfamiliar objects in their native English and a foreign language. All age groups could learn and remember…

  15. Object Familiarity Facilitates Foreign Word Learning in Preschoolers

    ERIC Educational Resources Information Center

    Sera, Maria D.; Cole, Caitlin A.; Oromendia, Mercedes; Koenig, Melissa A.

    2014-01-01

    Studying how children learn words in a foreign language can shed light on how language learning changes with development. In one experiment, we examined whether three-, four-, and five-year-olds could learn and remember words for familiar and unfamiliar objects in their native English and a foreign language. All age groups could learn and remember…

  16. Generalization between canonical and non-canonical views in object recognition

    PubMed Central

    Ghose, Tandra; Liu, Zili

    2013-01-01

    Viewpoint generalization in object recognition is the process that allows recognition of a given 3D object from many different viewpoints despite variations in its 2D projections. We used the canonical view effects as a foundation to empirically test the validity of a major theory in object recognition, the view-approximation model (Poggio & Edelman, 1990). This model predicts that generalization should be better when an object is first seen from a non-canonical view and then a canonical view than when seen in the reversed order. We also manipulated object similarity to study the degree to which this view generalization was constrained by shape details and task instructions (object vs. image recognition). Old-new recognition performance for basic and subordinate level objects was measured in separate blocks. We found that for object recognition, view generalization between canonical and non-canonical views was comparable for basic level objects. For subordinate level objects, recognition performance was more accurate from non-canonical to canonical views than the other way around. When the task was changed from object recognition to image recognition, the pattern of the results reversed. Interestingly, participants responded “old” to “new” images of “old” objects with a substantially higher rate than to “new” objects, despite instructions to the contrary, thereby indicating involuntary view generalization. Our empirical findings are incompatible with the prediction of the view-approximation theory, and argue against the hypothesis that views are stored independently. PMID:23283692

  17. Measuring the Speed of Newborn Object Recognition in Controlled Visual Worlds

    ERIC Educational Resources Information Center

    Wood, Justin N.; Wood, Samantha M. W.

    2017-01-01

    How long does it take for a newborn to recognize an object? Adults can recognize objects rapidly, but measuring object recognition speed in newborns has not previously been possible. Here we introduce an automated controlled-rearing method for measuring the speed of newborn object recognition in controlled visual worlds. We raised newborn chicks…

  18. Atypical Time Course of Object Recognition in Autism Spectrum Disorder

    PubMed Central

    Caplette, Laurent; Wicker, Bruno; Gosselin, Frédéric

    2016-01-01

    In neurotypical observers, it is widely believed that the visual system samples the world in a coarse-to-fine fashion. Past studies on Autism Spectrum Disorder (ASD) have identified atypical responses to fine visual information but did not investigate the time course of the sampling of information at different levels of granularity (i.e. Spatial Frequencies, SF). Here, we examined this question during an object recognition task in ASD and neurotypical observers using a novel experimental paradigm. Our results confirm and characterize with unprecedented precision a coarse-to-fine sampling of SF information in neurotypical observers. In ASD observers, we discovered a different pattern of SF sampling across time: in the first 80 ms, high SFs lead ASD observers to a higher accuracy than neurotypical observers, and these SFs are sampled differently across time in the two subject groups. Our results might be related to the absence of a mandatory precedence of global information, and to top-down processing abnormalities in ASD. PMID:27752088

  19. Crowding, grouping, and object recognition: A matter of appearance

    PubMed Central

    Herzog, Michael H.; Sayim, Bilge; Chicherov, Vitaly; Manassi, Mauro

    2015-01-01

    In crowding, the perception of a target strongly deteriorates when neighboring elements are presented. Crowding is usually assumed to have the following characteristics. (a) Crowding is determined only by nearby elements within a restricted region around the target (Bouma's law). (b) Increasing the number of flankers can only deteriorate performance. (c) Target-flanker interference is feature-specific. These characteristics are usually explained by pooling models, which are well in the spirit of classic models of object recognition. In this review, we summarize recent findings showing that crowding is not determined by the above characteristics, thus, challenging most models of crowding. We propose that the spatial configuration across the entire visual field determines crowding. Only when one understands how all elements of a visual scene group with each other, can one determine crowding strength. We put forward the hypothesis that appearance (i.e., how stimuli look) is a good predictor for crowding, because both crowding and appearance reflect the output of recurrent processing rather than interactions during the initial phase of visual processing. PMID:26024452

  20. Poka Yoke system based on image analysis and object recognition

    NASA Astrophysics Data System (ADS)

    Belu, N.; Ionescu, L. M.; Misztal, A.; Mazăre, A.

    2015-11-01

    Poka Yoke is a quality management method aimed at preventing faults from arising during production processes. It deals with “fail-safing” or “mistake-proofing”. The Poka Yoke concept was developed by Shigeo Shingo for the Toyota Production System. Poka Yoke is used in many fields, especially in monitoring production processes. In many cases, identifying faults in a production process involves a higher cost than the cost of disposal. Usually, Poka Yoke solutions are based on multiple sensors that identify nonconformities, which means placing additional mechanical and electronic equipment on the production line. Because the method is also invasive and affects the production process, this increases the cost of diagnosis, and the machines by which a Poka Yoke system is implemented become bulkier and more sophisticated. In this paper we propose a Poka Yoke solution based on image analysis and fault identification. The solution consists of a module for image acquisition, a mid-level processing module and an object recognition module using associative memory (Hopfield network type). All are integrated into an embedded system with an AD (analog-to-digital) converter and a Zynq 7000 (22 nm technology).
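
    The entry above identifies its recognition module as a Hopfield-type associative memory. The sketch below is a minimal, hypothetical illustration of that idea in Python/NumPy, not the published system: the pattern length, the number of stored patterns, and the noise level are assumptions chosen only to make the demo self-contained.

      import numpy as np

      def train_hopfield(patterns):
          """Hebbian weight matrix for bipolar (+1/-1) patterns, zero diagonal."""
          n = patterns.shape[1]
          w = np.zeros((n, n))
          for p in patterns:
              w += np.outer(p, p)
          np.fill_diagonal(w, 0.0)
          return w / len(patterns)

      def recall(w, probe, steps=20):
          """Synchronous updates until the state is stable (or `steps` runs out)."""
          s = probe.copy()
          for _ in range(steps):
              s_new = np.where(w @ s >= 0, 1, -1)
              if np.array_equal(s_new, s):
                  break
              s = s_new
          return s

      rng = np.random.default_rng(0)
      stored = rng.choice([-1, 1], size=(2, 64))      # two stored binary "templates"
      w = train_hopfield(stored)
      noisy = stored[0].copy()
      flips = rng.choice(64, size=10, replace=False)  # corrupt 10 of 64 entries
      noisy[flips] *= -1
      print(np.array_equal(recall(w, noisy), stored[0]))  # expected: True

    In a vision pipeline of the kind described above, the probe vector would come from the binarised output of the image-acquisition and mid-level processing stages, and a stable recall matching a stored template would count as recognition of a conforming part.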

  1. Crowding, grouping, and object recognition: A matter of appearance.

    PubMed

    Herzog, Michael H; Sayim, Bilge; Chicherov, Vitaly; Manassi, Mauro

    2015-01-01

    In crowding, the perception of a target strongly deteriorates when neighboring elements are presented. Crowding is usually assumed to have the following characteristics. (a) Crowding is determined only by nearby elements within a restricted region around the target (Bouma's law). (b) Increasing the number of flankers can only deteriorate performance. (c) Target-flanker interference is feature-specific. These characteristics are usually explained by pooling models, which are well in the spirit of classic models of object recognition. In this review, we summarize recent findings showing that crowding is not determined by the above characteristics, thus, challenging most models of crowding. We propose that the spatial configuration across the entire visual field determines crowding. Only when one understands how all elements of a visual scene group with each other, can one determine crowding strength. We put forward the hypothesis that appearance (i.e., how stimuli look) is a good predictor for crowding, because both crowding and appearance reflect the output of recurrent processing rather than interactions during the initial phase of visual processing.

  2. Attentional facilitation of detection of flicker on moving objects.

    PubMed

    Shioiri, Satoshi; Ogawa, Masayuki; Yaguchi, Hirohisa; Cavanagh, Patrick

    2015-01-01

    We investigated the influence of attention and motion on the sensitivity of flicker detection for a target among distractors. Experiment 1 showed that when the target and distractors were moving, detection performance plummeted compared to when they were not moving, suggesting that the most sensitive detectors were local, temporal frequency-tuned receptive fields. With the stimuli in motion, a qualitatively different strategy was required and this led to much reduced performance. Cueing, which specified the target location with 100% validity, had no effect for targets that had little or no motion, suggesting that the flicker was sufficiently salient in this case to attract attention to the target without requiring any search. For targets with medium to high speeds, however, cueing provided a strong increase in sensitivity over uncued performance. This suggests a significant advantage for localizing and tracking the target and so sampling the luminance changes from only one trajectory. Experiment 2 showed that the effect of attention was to increase the efficiency and duration of signal integration for the moving target. Overall, the results show that flicker sensitivity for a moving target relies on a much less efficient process than detection of static flicker, and that this less efficient process is facilitated when attention can select the relevant trajectory and ignore the others.

  3. Action Properties of Object Images Facilitate Visual Search.

    PubMed

    Gomez, Michael A; Snow, Jacqueline C

    2017-03-06

    There is mounting evidence that constraints from action can influence the early stages of object selection, even in the absence of any explicit preparation for action. Here, we examined whether action properties of images can influence visual search, and whether such effects were modulated by hand preference. Observers searched for an oddball target among 3 distractors. The search arrays consisted either of images of graspable "handles" ("action-related" stimuli), or images that were otherwise identical to the handles but in which the semicircular fulcrum element was reoriented so that the stimuli no longer looked like graspable objects ("non-action-related" stimuli). In Experiment 1, right-handed observers, who have been shown previously to prefer to use the right hand over the left for manual tasks, were faster to detect targets in action-related versus non-action-related arrays, and showed a response time (reaction time [RT]) advantage for rightward- versus leftward-oriented action-related handles. In Experiment 2, left-handed observers, who have been shown to use the left and right hands relatively equally in manual tasks, were also faster to detect targets in the action-related versus non-action-related arrays, but RTs were equally fast for rightward- and leftward-oriented handle targets. Together, our results suggest that action properties in images, and constraints for action imposed by preferences for manual interaction with objects, can influence attentional selection in the context of visual search.

  4. The object pattern separation (OPS) task: a behavioral paradigm derived from the object recognition task.

    PubMed

    van Hagen, B T J; van Goethem, N P; Lagatta, D C; Prickaerts, J

    2015-05-15

    The object recognition task (ORT) is widely used to measure object memory processes in rodents. Recently, the memory process known as pattern separation has received increasing attention, as impaired pattern separation can be one of the cognitive symptoms of multiple neurological and psychiatric disorders. Pattern separation is the formation of distinct representations out of similar inputs. In the search for an easily implemented task for rodents that can be used to measure pattern separation, we developed a task derived from the ORT and the object location task (OLT), which we called the object pattern separation (OPS) task. This task aims to measure spatial pattern separation per se, which utilizes memory processes centered in the dentate gyrus (DG) and CA3 regions of the hippocampus. Adult male C57BL/6 mice and adult male Wistar rats were used to validate different object locations which can be used to measure spatial pattern separation. Furthermore, different inter-trial time intervals were tested with the optimal object location, to further evaluate pattern separation-related memory in mice. We found that specific object locations show gradual effects, which is indicative of pattern separation, and that the OPS task allows the detection of spatial pattern separation bi-directionally at intermediate spatial separations. Thus, object locations and time intervals can be specifically adjusted as needed, in order to investigate an expected improvement or impairment. We conclude that the current spatial OPS task can be best described as a specific version of the ORT, which can be used to investigate pattern separation processes. Copyright © 2014 Elsevier B.V. All rights reserved.
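
    Performance in spontaneous tasks of this family (ORT, OLT, OPS) is typically scored from exploration times with a discrimination index. The snippet below shows one common form of that index as a generic illustration; the function name and the example exploration times are hypothetical and are not data from the study above.

      def discrimination_index(t_novel, t_familiar):
          """(novel - familiar) / (novel + familiar): +1 means only the novel or
          displaced object was explored, 0 means no discrimination."""
          total = t_novel + t_familiar
          return (t_novel - t_familiar) / total if total > 0 else 0.0

      # Hypothetical exploration times (s) at three spatial separations, illustrating
      # the graded effect an object pattern separation task is designed to detect.
      for separation, novel, familiar in [("small", 11.0, 10.0),
                                          ("intermediate", 14.0, 9.0),
                                          ("large", 18.0, 7.0)]:
          print(separation, round(discrimination_index(novel, familiar), 2))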

  5. Disturbances of novel object exploration and recognition in a chronic ketamine mouse model of schizophrenia.

    PubMed

    Hauser, Maria Jelena; Isbrandt, Dirk; Roeper, Jochen

    2017-08-14

    Schizophrenia is a chronic and devastating disease with an overall lifetime risk of 1%. While positive symptoms of schizophrenia such as hallucinations and delusions are reduced by antipsychotic medication based on the inhibition of type 2 dopaminergic receptors (D2R), negative symptoms (e.g. reduced motivation) and cognitive symptoms (e.g. impaired working memory) of schizophrenia are not effectively treated by current medication. This dichotomy might arise in part because of our limited understanding of the pathophysiology of negative and cognitive symptoms in schizophrenia. In addition to genetic approaches, chronic systemic application of NMDA inhibitors such as ketamine has been used to generate rodent models, which displayed several relevant endophenotypes related to negative and cognitive symptoms and might thus facilitate mechanistic studies into the underlying pathophysiology. In this context, previous behavioral testing identified impairments in novel object recognition memory as a key feature in chronic NMDA-inhibitor schizophrenia rodent models. Using a chronic ketamine mouse model, we have, however, identified a more complex behavioral phenotype including deficits in novel space and novel object exploration in combination with deficits in short-term novel object recognition memory. These impairments in novelty discrimination are in line with prefrontal and hippocampal reductions in parvalbumin expression as well as reduced expression of the immediate early gene c-fos after novel-object exploration in hippocampal areas in our model. Our results indicate that adult C57Bl6N mice chronically treated with ketamine display combined impairments in novelty exploration and recognition, which might represent both motivational (negative) and cognitive symptoms of schizophrenia. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Regulation of object recognition and object placement by ovarian sex steroid hormones.

    PubMed

    Tuscher, Jennifer J; Fortress, Ashley M; Kim, Jaekyoon; Frick, Karyn M

    2015-05-15

    The ovarian hormones 17β-estradiol (E2) and progesterone (P4) are potent modulators of hippocampal memory formation. Both hormones have been demonstrated to enhance hippocampal memory by regulating the cellular and molecular mechanisms thought to underlie memory formation. Behavioral neuroendocrinologists have increasingly used the object recognition and object placement (object location) tasks to investigate the role of E2 and P4 in regulating hippocampal memory formation in rodents. These one-trial learning tasks are ideal for studying acute effects of hormone treatments on different phases of memory because they can be administered during acquisition (pre-training), consolidation (post-training), or retrieval (pre-testing). This review synthesizes the rodent literature testing the effects of E2 and P4 on object recognition (OR) and object placement (OP), and the molecular mechanisms in the hippocampus supporting memory formation in these tasks. Some general trends emerge from the data. Among gonadally intact females, object memory tends to be best when E2 and P4 levels are elevated during the estrous cycle, pregnancy, and in middle age. In ovariectomized females, E2 given before or immediately after testing generally enhances OR and OP in young and middle-aged rats and mice, although effects are mixed in aged rodents. Effects of E2 treatment on OR and OP memory consolidation can be mediated by both classical estrogen receptors (ERα and ERβ), and depend on glutamate receptors (NMDA, mGluR1) and activation of numerous cell signaling cascades (e.g., ERK, PI3K/Akt, mTOR) and epigenetic processes (e.g., histone acetylation, DNA methylation). Acute P4 treatment given immediately after training also enhances OR and OP in young and middle-aged ovariectomized females by activating similar cell signaling pathways as E2 (e.g., ERK, mTOR). The few studies that have administered both hormones in combination suggest that treatment can enhance OR and OP, but that effects

  7. Regulation of object recognition and object placement by ovarian sex steroid hormones

    PubMed Central

    Tuscher, Jennifer J.; Fortress, Ashley M.; Kim, Jaekyoon; Frick, Karyn M.

    2014-01-01

    The ovarian hormones 17β-estradiol (E2) and progesterone (P4) are potent modulators of hippocampal memory formation. Both hormones have been demonstrated to enhance hippocampal memory by regulating the cellular and molecular mechanisms thought to underlie memory formation. Behavioral neuroendocrinologists have increasingly used the object recognition and object placement (object location) tasks to investigate the role of E2 and P4 in regulating hippocampal memory formation in rodents. These one-trial learning tasks are ideal for studying acute effects of hormone treatments on different phases of memory because they can be administered during acquisition (pre-training), consolidation (post-training), or retrieval (pre-testing). This review synthesizes the rodent literature testing the effects of E2 and P4 on object recognition (OR) and object placement (OP), and the molecular mechanisms in the hippocampus supporting memory formation in these tasks. Some general trends emerge from the data. Among gonadally intact females, object memory tends to be best when E2 and P4 levels are elevated during the estrous cycle, pregnancy, and in middle age. In ovariectomized females, E2 given before or immediately after testing generally enhances OR and OP in young and middle-aged rats and mice, although effects are mixed in aged rodents. Effects of E2 treatment on OR and OP memory consolidation can be mediated by both classical estrogen receptors (ERα and ERβ), and depend on glutamate receptors (NMDA, mGluR1) and activation of numerous cell signaling cascades (e.g., ERK, PI3K/Akt, mTOR) and epigenetic processes (e.g., histone H3 acetylation, DNA methylation). Acute P4 treatment given immediately after training also enhances OR and OP in young and middle-aged ovariectomized females by activating similar cell signaling pathways as E2 (e.g., ERK, mTOR). The few studies that have administered both hormones in combination suggest that treatment can enhance OR and OP, but that

  8. Eyeblink Conditioning and Novel Object Recognition in the Rabbit: Behavioral Paradigms for Assaying Psychiatric Diseases

    PubMed Central

    Weiss, Craig; Disterhoft, John F.

    2015-01-01

    Analysis of data collected from behavioral paradigms has provided important information for understanding the etiology and progression of diseases that involve neural regions mediating abnormal behavior. The trace eyeblink conditioning (EBC) paradigm is particularly suited to examine cerebro-cerebellar interactions since the paradigm requires the cerebellum, forebrain, and awareness of the stimulus contingencies. Impairments in acquiring EBC have been noted in several neuropsychiatric conditions, including schizophrenia, Alzheimer’s disease (AD), progressive supranuclear palsy, and post-traumatic stress disorder. Although several species have been used to examine EBC, the rabbit is unique in its tolerance for restraint, which facilitates imaging; its relatively large skull, which facilitates chronic neuronal recordings; a genetic sequence for amyloid that is identical to that of humans, which makes it a valuable model to study AD; and, in contrast to rodents, a striatum that is differentiated into a caudate and a putamen, which facilitates analysis of diseases involving the striatum. This review focuses on EBC in schizophrenia and AD since impairments in cerebro-cerebellar connections have been hypothesized to lead to a cognitive dysmetria. We also relate EBC to conditioned avoidance responses that are more often examined for effects of antipsychotic medications, and we propose that an analysis of novel object recognition (NOR) may add to our understanding of how the underlying neural circuitry has changed during disease states. We propose that the EBC and NOR paradigms will help to determine which therapeutics are effective for treating the cognitive aspects of schizophrenia and AD, and that neuroimaging may reveal biomarkers of the diseases and help to evaluate potential therapeutics. The rabbit, thus, provides an important translational system for studying neural mechanisms mediating maladaptive behaviors that underlie some psychiatric diseases, especially

  9. Similarity dependency of the change in ERP component N1 accompanying with the object recognition learning.

    PubMed

    Tokudome, Wataru; Wang, Gang

    2012-01-01

    Performance during object recognition across views is largely dependent on inter-object similarity. The present study was designed to investigate how the changes in ERP component N1 during object recognition learning depend on inter-object similarity. Human subjects were asked to train themselves to recognize novel objects with different inter-object similarity by performing object recognition tasks. During the tasks, images of an object had to be discriminated from the images of other objects irrespective of the viewpoint. When objects had a high inter-object similarity, the ERP component N1 exhibited a significant increase in both the amplitude and the latency variation across objects during the object recognition learning process, and the N1 amplitude and latency variation across the views of the same objects decreased significantly. In contrast, no significant changes were found during the learning process when using objects with low inter-object similarity. The present findings demonstrate that the changes in the variation of N1 that accompany the object recognition learning process are dependent upon the inter-object similarity and imply that there is a difference in the neuronal representation for object recognition when using objects with high and low inter-object similarity.

  10. The importance of visual features in generic vs. specialized object recognition: a computational study

    PubMed Central

    Ghodrati, Masoud; Rajaei, Karim; Ebrahimpour, Reza

    2014-01-01

    It is debated whether the representation of objects in inferior temporal (IT) cortex is distributed over activities of many neurons or whether there are restricted islands of neurons responsive to a specific set of objects. There are lines of evidence demonstrating that the fusiform face area (FFA, in humans) processes information related to specialized object recognition (here we say within category object recognition such as face identification). Physiological studies have also discovered several patches in monkey ventral temporal lobe that are responsible for facial processing. Neuronal recording from these patches shows that neurons are highly selective for face images whereas for other objects we do not see such selectivity in IT. However, it is also well-supported that objects are encoded through distributed patterns of neural activities that are distinctive for each object category. It seems that the visual cortex utilizes different mechanisms for between category object recognition (e.g., face vs. non-face objects) vs. within category object recognition (e.g., two different faces). In this study, we address this question with computational simulations. We use two biologically inspired object recognition models and define two experiments which address these issues. The models have a hierarchical structure of several processing layers that simply simulate visual processing from V1 to aIT. We show, through computational modeling, that the difference between these two mechanisms of recognition can underlie the visual feature extraction mechanism. It is argued that in order to perform generic and specialized object recognition, the visual cortex must separate the mechanisms involved in within category from between categories object recognition. High recognition performance in within category object recognition can be guaranteed when class-specific features with intermediate size and complexity are extracted. However, generic object recognition requires a distributed universal

  11. The importance of visual features in generic vs. specialized object recognition: a computational study.

    PubMed

    Ghodrati, Masoud; Rajaei, Karim; Ebrahimpour, Reza

    2014-01-01

    It is debated whether the representation of objects in inferior temporal (IT) cortex is distributed over activities of many neurons or whether there are restricted islands of neurons responsive to a specific set of objects. There are lines of evidence demonstrating that the fusiform face area (FFA, in humans) processes information related to specialized object recognition (here we say within category object recognition such as face identification). Physiological studies have also discovered several patches in monkey ventral temporal lobe that are responsible for facial processing. Neuronal recording from these patches shows that neurons are highly selective for face images whereas for other objects we do not see such selectivity in IT. However, it is also well-supported that objects are encoded through distributed patterns of neural activities that are distinctive for each object category. It seems that the visual cortex utilizes different mechanisms for between category object recognition (e.g., face vs. non-face objects) vs. within category object recognition (e.g., two different faces). In this study, we address this question with computational simulations. We use two biologically inspired object recognition models and define two experiments which address these issues. The models have a hierarchical structure of several processing layers that simply simulate visual processing from V1 to aIT. We show, through computational modeling, that the difference between these two mechanisms of recognition can underlie the visual feature extraction mechanism. It is argued that in order to perform generic and specialized object recognition, the visual cortex must separate the mechanisms involved in within category from between categories object recognition. High recognition performance in within category object recognition can be guaranteed when class-specific features with intermediate size and complexity are extracted. However, generic object recognition requires a distributed universal
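
    The two records above describe hierarchical, biologically inspired models spanning V1 to aIT. The code below is a rough sketch of only the earliest stage such models typically share (Gabor-like "S1" filtering followed by local max pooling, "C1"); it is not the models used in the study, and the filter parameters, image size, and pooling size are arbitrary choices for illustration.

      import numpy as np
      from scipy.signal import convolve2d
      from scipy.ndimage import maximum_filter

      def gabor_kernel(size, theta, wavelength=4.0, sigma=2.0):
          """A simple odd-phase Gabor patch at orientation `theta` (radians)."""
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          xr = x * np.cos(theta) + y * np.sin(theta)
          yr = -x * np.sin(theta) + y * np.cos(theta)
          return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.sin(2 * np.pi * xr / wavelength)

      def s1_c1(image, n_orientations=4, pool=4):
          """'S1' Gabor responses followed by 'C1' local max pooling per orientation."""
          maps = []
          for i in range(n_orientations):
              theta = i * np.pi / n_orientations
              s1 = np.abs(convolve2d(image, gabor_kernel(7, theta), mode="same"))
              c1 = maximum_filter(s1, size=pool)[::pool, ::pool]   # subsampled max pool
              maps.append(c1)
          return np.stack(maps)

      rng = np.random.default_rng(1)
      img = rng.random((32, 32))
      print(s1_c1(img).shape)   # (4, 8, 8): one pooled map per orientation

    Class-specific intermediate features of the kind discussed in the abstract would then be built on top of such pooled maps, for example by matching learned templates against patches of the C1 responses.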

  12. Multispectral and hyperspectral imaging with AOTF for object recognition

    NASA Astrophysics Data System (ADS)

    Gupta, Neelam; Dahmani, Rachid

    1999-01-01

    Acousto-optic tunable-filter (AOTF) technology has been used in the design of a no-moving-parts, compact, lightweight, field portable, automated, adaptive spectral imaging system when combined with a high sensitivity imaging detector array. Such a system could detect spectral signatures of targets and/or background, which contain polarization information and can be digitally processed by a variety of algorithms. At the Army Research Laboratory, we have developed and used a number of AOTF imaging systems and are also carrying out the development of such imagers at longer wavelengths. We have carried out hyperspectral and multispectral imaging using AOTF systems covering the spectral range from the visible to mid-IR. One of the imagers uses a two-cascaded collinear-architecture AOTF cell in the visible-to-near-IR range with a digital Si charge-coupled device camera as the detector. The images obtained with this system showed no color blurring or image shift due to the angular deviation of different colors as a result of diffraction, and the digital images are stored and processed with great ease. The spatial resolution of the filter was evaluated by means of the lines of a target chart. We have also obtained and processed images from another noncollinear visible-to-near-IR AOTF imager with a digital camera, and used hyperspectral image processing software to enhance object recognition in cluttered backgrounds. We are presently working on a mid-IR AOTF imaging system that uses a high-performance InSb focal plane array and image acquisition and processing software. We describe our hyperspectral imaging program and present results from our imaging experiments.
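
    The record above mentions hyperspectral image processing used to enhance object recognition in cluttered backgrounds. One generic technique for that kind of processing is the spectral angle mapper sketched below; it is offered only as an illustration and is not the ARL software, and the cube size, band count, and detection threshold are invented for the demo.

      import numpy as np

      def spectral_angle(cube, signature):
          """Angle (radians) between each pixel spectrum in `cube` (H, W, B)
          and a reference `signature` (B,); smaller angle = closer spectral match."""
          flat = cube.reshape(-1, cube.shape[-1]).astype(float)
          num = flat @ signature
          den = np.linalg.norm(flat, axis=1) * np.linalg.norm(signature)
          cos = np.clip(num / np.maximum(den, 1e-12), -1.0, 1.0)
          return np.arccos(cos).reshape(cube.shape[:2])

      rng = np.random.default_rng(2)
      cube = rng.random((64, 64, 30))          # hypothetical 30-band image cube
      target = rng.random(30)                  # hypothetical target signature
      cube[10:20, 10:20] = target              # embed the target spectrum in clutter
      mask = spectral_angle(cube, target) < 0.1
      print(mask[12, 12], mask[40, 40])        # expected: True False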

  13. Cognitive impairment in old rats: a comparison of object displacement, object recognition and water maze.

    PubMed

    Luparini, M R; Del Vecchio, A; Barillari, G; Magnani, M; Prosdocimi, M

    2000-08-01

    The behavioral performance of young and aged rats was studied in a repeated-trials test. Young animals reacted to both spatial displacement and novelty, whereas most aged rats lost the ability to react to novelty although maintaining spatial memory. The cluster analysis procedure performed on all the tested subjects enabled the recognition of a consistent group of the aged sample (35%) with a mild degree of spatial and non-spatial memory impairment. Spatial memory impairment of some of the aged animals was also evaluated in the Morris water maze test. On the fifth day of the task, we observed a very low percentage of impaired aged animals, which partially corresponded to the impaired group identified by the object recognition test. In contrast, the subgroup of mildly impaired rats performed similarly to the young animals. We advance that the Morris water maze might represent a stressful experimental condition for aged rats, enhancing the motivational level of animals subjected to this procedure. This condition may alter the cognitive responses. As a consequence, the "mildly impaired" rats, which may be considered an interesting group for investigating memory-enhancing drugs, will infrequently be recognized with the Morris water maze test. Cognitive impairment in aged rats should be studied utilizing a sensitive test in which motivation does not substantially influence the results of the test.

  14. Does Imitation Facilitate Word Recognition in a Non-Native Regional Accent?

    PubMed Central

    Nguyen, Noël; Dufour, Sophie; Brunellière, Angèle

    2012-01-01

    We asked to what extent phonetic convergence across speakers may facilitate later word recognition. Northern-French participants showed both a clear phonetic convergence effect toward Southern French in a word repetition task, and a bias toward the phonemic system of their own variety in the recognition of single words. Perceptual adaptation to a non-native accent may be difficult when the native accent has a phonemic contrast that is associated with a single phonemic category in the non-native accent. Convergence toward a speaker of a non-native accent in production may not prevent each speaker’s native variety from prevailing in word identification. Imitation has been found in previous studies to contribute to predicting upcoming words in sentences in adverse listening conditions, but may play a more limited role in the recognition of single words. PMID:23162514

  15. Virus-mediated suppression of host non-self recognition facilitates horizontal transmission of heterologous viruses.

    PubMed

    Wu, Songsong; Cheng, Jiasen; Fu, Yanping; Chen, Tao; Jiang, Daohong; Ghabrial, Said A; Xie, Jiatao

    2017-03-01

    Non-self recognition is a common phenomenon among organisms; it often leads to innate immunity to prevent the invasion of parasites and maintain the genetic polymorphism of organisms. Fungal vegetative incompatibility is a type of non-self recognition which often induces programmed cell death (PCD) and restricts the spread of molecular parasites. It is not clearly known whether virus infection could attenuate non-self recognition among host individuals to facilitate its spread. Here, we report that a hypovirulence-associated mycoreovirus, named Sclerotinia sclerotiorum mycoreovirus 4 (SsMYRV4), could suppress host non-self recognition and facilitate horizontal transmission of heterologous viruses. We found that cell death in intermingled colony regions between the SsMYRV4-infected Sclerotinia sclerotiorum strain and other tested vegetatively incompatible strains was markedly reduced and inhibition barrage lines were not clearly observed. Vegetative incompatibility, which involves the heterotrimeric guanine nucleotide-binding protein (G protein) signaling pathway, is controlled by specific loci termed het (heterokaryon incompatibility) loci. Reactive oxygen species (ROS) play a key role in vegetative incompatibility-mediated PCD. The expression of G protein subunit genes, het genes, and ROS-related genes was significantly down-regulated, and cellular production of ROS was suppressed in the presence of SsMYRV4. Furthermore, the SsMYRV4-infected strain could easily accept other viruses through hyphal contact, and these viruses could be efficiently transmitted from the SsMYRV4-infected strain to other vegetatively incompatible individuals. Thus, we concluded that SsMYRV4 is capable of suppressing host non-self recognition and facilitating heterologous virus transmission among host individuals. These findings may enhance our understanding of virus ecology, and provide a potential strategy to utilize hypovirulence-associated mycoviruses to control fungal diseases.

  16. Virus-mediated suppression of host non-self recognition facilitates horizontal transmission of heterologous viruses

    PubMed Central

    Wu, Songsong; Cheng, Jiasen; Fu, Yanping; Chen, Tao; Jiang, Daohong; Ghabrial, Said A.

    2017-01-01

    Non-self recognition is a common phenomenon among organisms; it often leads to innate immunity to prevent the invasion of parasites and maintain the genetic polymorphism of organisms. Fungal vegetative incompatibility is a type of non-self recognition which often induces programmed cell death (PCD) and restricts the spread of molecular parasites. It is not clearly known whether virus infection could attenuate non-self recognition among host individuals to facilitate its spread. Here, we report that a hypovirulence-associated mycoreovirus, named Sclerotinia sclerotiorum mycoreovirus 4 (SsMYRV4), could suppress host non-self recognition and facilitate horizontal transmission of heterologous viruses. We found that cell death in intermingled colony regions between the SsMYRV4-infected Sclerotinia sclerotiorum strain and other tested vegetatively incompatible strains was markedly reduced and inhibition barrage lines were not clearly observed. Vegetative incompatibility, which involves the heterotrimeric guanine nucleotide-binding protein (G protein) signaling pathway, is controlled by specific loci termed het (heterokaryon incompatibility) loci. Reactive oxygen species (ROS) play a key role in vegetative incompatibility-mediated PCD. The expression of G protein subunit genes, het genes, and ROS-related genes was significantly down-regulated, and cellular production of ROS was suppressed in the presence of SsMYRV4. Furthermore, the SsMYRV4-infected strain could easily accept other viruses through hyphal contact, and these viruses could be efficiently transmitted from the SsMYRV4-infected strain to other vegetatively incompatible individuals. Thus, we concluded that SsMYRV4 is capable of suppressing host non-self recognition and facilitating heterologous virus transmission among host individuals. These findings may enhance our understanding of virus ecology, and provide a potential strategy to utilize hypovirulence-associated mycoviruses to control fungal diseases. PMID:28334041

  17. Role of the dentate gyrus in mediating object-spatial configuration recognition.

    PubMed

    Kesner, Raymond P; Taylor, James O; Hoge, Jennifer; Andy, Ford

    2015-02-01

    In the present study the effects of dorsal dentate gyrus (dDG) lesions in rats were tested on recognition memory tasks based on the interaction between objects, features of objects, and spatial features. The results indicated that the rats with dDG lesions did not differ from controls in recognition for a change within object feature configuration and object recognition tasks. In contrast, there was a deficit for the dDG lesioned rats relative to controls in recognition for a change within object-spatial feature configuration, complex object-place feature configuration and spatial recognition tasks. It is suggested that the dDG subregion of the hippocampus supports object-place and complex object-place feature information via a conjunctive encoding process.

  18. Self-recognition in corals facilitates deep-sea habitat engineering.

    PubMed

    Hennige, S J; Morrison, C L; Form, A U; Büscher, J; Kamenos, N A; Roberts, J M

    2014-10-27

    The ability of coral reefs to engineer complex three-dimensional habitats is central to their success and the rich biodiversity they support. In tropical reefs, encrusting coralline algae bind together substrates and dead coral framework to make continuous reef structures, but beyond the photic zone, the cold-water coral Lophelia pertusa also forms large biogenic reefs, facilitated by skeletal fusion. Skeletal fusion in tropical corals can occur in closely related or juvenile individuals as a result of non-aggressive skeletal overgrowth or allogeneic tissue fusion, but contact reactions in many species result in mortality if there is no 'self-recognition' on a broad species level. This study reveals that areas of 'flawless' skeletal fusion in Lophelia pertusa, potentially facilitated by allogeneic tissue fusion, are identified as having small aragonitic crystals or low levels of crystal organisation, and strong molecular bonding. Regardless of the mechanism, the recognition of 'self' between adjacent L. pertusa colonies leads to no observable mortality, facilitates ecosystem engineering and reduces aggression-related energetic expenditure in an environment where energy conservation is crucial. The potential for self-recognition at a species level, and subsequent skeletal fusion in framework-forming cold-water corals is an important first step in understanding their significance as ecological engineers in deep seas worldwide.

  19. Research on recognition methods of aphid objects in complex backgrounds

    NASA Astrophysics Data System (ADS)

    Zhao, Hui-Yan; Zhang, Ji-Hong

    2009-07-01

    In order to improve the accuracy of recognizing different kinds of aphids against complex backgrounds, a recognition method based on the Dual-Tree Complex Wavelet Transform (DT-CWT) and a Support Vector Machine (Libsvm) is proposed. First, the image is pretreated; second, texture features of aphid images from three crops are extracted with the DT-CWT to obtain the parameters of the training model; finally, the trained model recognizes aphids on the three kinds of crops. Compared with the Gabor wavelet transform and traditional texture-extraction methods based on the Gray-Level Co-Occurrence Matrix (GLCM), the experimental results show that the method is practical and feasible and provides a basis for recognition among aphids of the same kind.
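
    The method above pairs DT-CWT texture features with a Libsvm classifier. The sketch below illustrates the same feature-plus-SVM pattern with stand-ins: an ordinary discrete wavelet transform (PyWavelets) instead of the dual-tree complex transform, scikit-learn's SVC instead of Libsvm, and synthetic smooth/rough textures instead of aphid images; all names and parameters here are illustrative assumptions.

      import numpy as np
      import pywt
      from scipy.ndimage import uniform_filter
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      def texture_features(image, wavelet="db2", levels=3):
          """Mean absolute value and standard deviation of each detail subband."""
          coeffs = pywt.wavedec2(image, wavelet, level=levels)
          feats = []
          for detail_bands in coeffs[1:]:            # skip the approximation band
              for band in detail_bands:              # horizontal, vertical, diagonal
                  feats += [np.abs(band).mean(), band.std()]
          return feats

      rng = np.random.default_rng(3)
      X, y = [], []
      for label in (0, 1):                           # 0 = smooth, 1 = rough texture
          for _ in range(20):
              img = rng.random((64, 64))
              if label == 0:
                  img = uniform_filter(img, size=4)  # blur to suppress fine texture
              X.append(texture_features(img))
              y.append(label)
      Xtr, Xte, ytr, yte = train_test_split(np.array(X), np.array(y),
                                            test_size=0.25, stratify=y, random_state=0)
      clf = SVC(kernel="rbf").fit(Xtr, ytr)
      print(clf.score(Xte, yte))                     # close to 1.0 on this toy data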

  20. Remembering the Object You Fear: Brain Potentials during Recognition of Spiders in Spider-Fearful Individuals

    PubMed Central

    Michalowski, Jaroslaw M.; Weymar, Mathias; Hamm, Alfons O.

    2014-01-01

    In the present study we investigated long-term memory for unpleasant, neutral and spider pictures in 15 spider-fearful and 15 non-fearful control individuals using behavioral and electrophysiological measures. During the initial (incidental) encoding, pictures were passively viewed in three separate blocks and were subsequently rated for valence and arousal. A recognition memory task was performed one week later in which old and new unpleasant, neutral and spider pictures were presented. Replicating previous results, we found enhanced memory performance and higher confidence ratings for unpleasant when compared to neutral materials in both animal fearful individuals and controls. When compared to controls high animal fearful individuals also showed a tendency towards better memory accuracy and significantly higher confidence during recognition of spider pictures, suggesting that memory of objects prompting specific fear is also facilitated in fearful individuals. In line, spider-fearful but not control participants responded with larger ERP positivity for correctly recognized old when compared to correctly rejected new spider pictures, thus showing the same effects in the neural signature of emotional memory for feared objects that were already discovered for other emotional materials. The increased fear memory for phobic materials observed in the present study in spider-fearful individuals might result in an enhanced fear response and reinforce negative beliefs aggravating anxiety symptomatology and hindering recovery. PMID:25296032

  1. Remembering the object you fear: brain potentials during recognition of spiders in spider-fearful individuals.

    PubMed

    Michalowski, Jaroslaw M; Weymar, Mathias; Hamm, Alfons O

    2014-01-01

    In the present study we investigated long-term memory for unpleasant, neutral and spider pictures in 15 spider-fearful and 15 non-fearful control individuals using behavioral and electrophysiological measures. During the initial (incidental) encoding, pictures were passively viewed in three separate blocks and were subsequently rated for valence and arousal. A recognition memory task was performed one week later in which old and new unpleasant, neutral and spider pictures were presented. Replicating previous results, we found enhanced memory performance and higher confidence ratings for unpleasant when compared to neutral materials in both animal fearful individuals and controls. When compared to controls high animal fearful individuals also showed a tendency towards better memory accuracy and significantly higher confidence during recognition of spider pictures, suggesting that memory of objects prompting specific fear is also facilitated in fearful individuals. In line, spider-fearful but not control participants responded with larger ERP positivity for correctly recognized old when compared to correctly rejected new spider pictures, thus showing the same effects in the neural signature of emotional memory for feared objects that were already discovered for other emotional materials. The increased fear memory for phobic materials observed in the present study in spider-fearful individuals might result in an enhanced fear response and reinforce negative beliefs aggravating anxiety symptomatology and hindering recovery.

  2. Conversion of short-term to long-term memory in the novel object recognition paradigm.

    PubMed

    Moore, Shannon J; Deshpande, Kaivalya; Stinnett, Gwen S; Seasholtz, Audrey F; Murphy, Geoffrey G

    2013-10-01

    It is well-known that stress can significantly impact learning; however, whether this effect facilitates or impairs the resultant memory depends on the characteristics of the stressor. Investigation of these dynamics can be confounded by the role of the stressor in motivating performance in a task. Positing a cohesive model of the effect of stress on learning and memory necessitates elucidating the consequences of stressful stimuli independently from task-specific functions. Therefore, the goal of this study was to examine the effect of manipulating a task-independent stressor (elevated light level) on short-term and long-term memory in the novel object recognition paradigm. Short-term memory was elicited in both low light and high light conditions, but long-term memory specifically required high light conditions during the acquisition phase (familiarization trial) and was independent of the light level during retrieval (test trial). Additionally, long-term memory appeared to be independent of stress-mediated glucocorticoid release, as both low and high light produced similar levels of plasma corticosterone, which further did not correlate with subsequent memory performance. Finally, both short-term and long-term memory showed no savings between repeated experiments suggesting that this novel object recognition paradigm may be useful for longitudinal studies, particularly when investigating treatments to stabilize or enhance weak memories in neurodegenerative diseases or during age-related cognitive decline. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Conversion of short-term to long-term memory in the novel object recognition paradigm

    PubMed Central

    Moore, Shannon J.; Deshpande, Kaivalya; Stinnett, Gwen S.; Seasholtz, Audrey F.; Murphy, Geoffrey G.

    2013-01-01

    It is well-known that stress can significantly impact learning; however, whether this effect facilitates or impairs the resultant memory depends on the characteristics of the stressor. Investigation of these dynamics can be confounded by the role of the stressor in motivating performance in a task. Positing a cohesive model of the effect of stress on learning and memory necessitates elucidating the consequences of stressful stimuli independently from task-specific functions. Therefore, the goal of this study was to examine the effect of manipulating a task-independent stressor (elevated light level) on short-term and long-term memory in the novel object recognition paradigm. Short-term memory was elicited in both low light and high light conditions, but long-term memory specifically required high light conditions during the acquisition phase (familiarization trial) and was independent of the light level during retrieval (test trial). Additionally, long-term memory appeared to be independent of stress-mediated glucocorticoid release, as both low and high light produced similar levels of plasma corticosterone, which further did not correlate with subsequent memory performance. Finally, both short-term and long-term memory showed no savings between repeated experiments suggesting that this novel object recognition paradigm may be useful for longitudinal studies, particularly when investigating treatments to stabilize or enhance weak memories in neurodegenerative diseases or during age-related cognitive decline. PMID:23835143

  4. Object Function Facilitates Infants' Object Individuation in a Manual Search Task

    ERIC Educational Resources Information Center

    Kingo, Osman S.; Krojgaard, Peter

    2012-01-01

    This study investigates the importance of object function (action-object-outcome relations) on object individuation in infancy. Five experiments examined the ability of 9.5- and 12-month-old infants to individuate simple geometric objects in a manual search design. Experiments 1 through 4 (12-month-olds, N = 128) provided several combinations of…

  5. Crossmodal object recognition in rats with and without multimodal object pre-exposure: no effect of hippocampal lesions.

    PubMed

    Reid, James M; Jacklin, Derek L; Winters, Boyer D

    2012-10-01

    The neural mechanisms and brain circuitry involved in the formation, storage, and utilization of multisensory object representations are poorly understood. We have recently introduced a crossmodal object recognition (CMOR) task that enables the study of such questions in rats. Our previous research has indicated that the perirhinal and posterior parietal cortices functionally interact to mediate spontaneous (tactile-to-visual) CMOR performance in rats; however, it remains to be seen whether other brain regions, particularly those receiving polymodal sensory inputs, contribute to this cognitive function. In the current study, we assessed the potential contribution of one such polymodal region, the hippocampus (HPC), to crossmodal object recognition memory. Rats with bilateral excitotoxic HPC lesions were tested in two versions of crossmodal object recognition: (1) the original CMOR task, which requires rats to compare between a stored tactile object representation and visually-presented objects to discriminate the novel and familiar stimuli; and (2) a novel 'multimodal pre-exposure' version of the CMOR task (PE/CMOR), in which simultaneous exploration of the tactile and visual sensory features of an object 24 h prior to the sample phase enhances CMOR performance across longer retention delays. Hippocampus-lesioned rats performed normally on both crossmodal object recognition tasks, but were impaired on a radial arm maze test of spatial memory, demonstrating the functional effectiveness of the lesions. These results strongly suggest that the HPC, despite its polymodal anatomical connections, is not critically involved in tactile-to-visual crossmodal object recognition memory.

  6. Experience moderates overlap between object and face recognition, suggesting a common ability

    PubMed Central

    Gauthier, Isabel; McGugin, Rankin W.; Richler, Jennifer J.; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E.

    2014-01-01

    Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. If a subject has a lot of experience with objects and is found to perform poorly, they also prove to have a low ability with faces. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience. PMID:24993021

  7. Experience moderates overlap between object and face recognition, suggesting a common ability.

    PubMed

    Gauthier, Isabel; McGugin, Rankin W; Richler, Jennifer J; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E

    2014-07-03

    Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. If a subject has a lot of experience with objects and is found to perform poorly, they also prove to have a low ability with faces. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience. © 2014 ARVO.

  8. Effects of S 18986-1, a novel cognitive enhancer, on memory performances in an object recognition task in rats.

    PubMed

    Lebrun, C; Pillière, E; Lestage, P

    2000-08-04

    (S)-2,3-dihydro-[3,4]cyclopentano-1,2,4-benzothiadiazine-1,1-dioxide (S 18986-1) is a new compound that facilitates post-synaptic responses by modulating alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid (AMPA) receptor-mediated synaptic responses and thus promotes long-term potentiation and potentiates (S)-AMPA-induced release of noradrenaline in rat brain slices. In the present study, the effects of S 18986-1 were evaluated on cognitive functions by using a one-trial object-recognition test in the Wistar rat, a test which measures a form of episodic memory in rodents. Recognition was measured by the ability of treated rats to discriminate between a familiar and a new object after a 24-h retention delay. Oral administrations with S 18986-1 (0.3 to 100 mg/kg) 1 h before each session of the test improved object recognition at concentrations as low as 0.3 mg/kg. Under the same conditions, the nootropic drug aniracetam was active at a dose of 10 mg/kg by i.p. route. S 18986-1 was still effective on the object-recognition test when it was administered 4 h before each of the three sessions. Furthermore, subchronic oral pretreatment (7 days) with S 18986-1 (0.3 to 30 mg/kg) also increased the recognition of the familiar object indicating that the animals failed to develop tolerance to repeated administrations with S 18986-1. Finally, the recognition of the familiar object was improved when S 18986-1 was administered before the recognition trial whereas the rats failed to recognise the familiar object when S 18986-1 was administered before the sample presentation trial only. Taken together, the results indicated that S 18986-1 facilitated a form of episodic memory in the rat, by improving the recognition of familiar information (retention). Furthermore, S 18986-1 was long-acting and demonstrated a good oral bioavailability. These data confer on S 18986-1 a potential role in improving episodic memory impaired in neurodegenerative diseases and during aging.

  9. Higher-Order Neural Networks Applied to 2D and 3D Object Recognition

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Reid, Max B.

    1994-01-01

    A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.
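
    The record above describes building translation, scale, and in-plane rotation invariance directly into a higher-order network rather than learning it from transformed training views. The sketch below does not reproduce that architecture; it only illustrates, on assumed toy data, the geometric fact such weight sharing relies on: the interior angles of the triangle formed by any triplet of input points are unchanged by those transformations.

      import numpy as np
      from itertools import combinations

      def interior_angles(a, b, c):
          """Interior angles (radians) of the triangle a-b-c."""
          def ang(p, q, r):                       # angle at vertex p
              u, v = q - p, r - p
              cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
              return np.arccos(np.clip(cos, -1.0, 1.0))
          return [ang(a, b, c), ang(b, a, c), ang(c, a, b)]

      def invariant_descriptor(points):
          """Sorted multiset of interior angles over all triplets of feature points.
          These angles do not change under translation, uniform scaling, or
          in-plane rotation."""
          angles = []
          for a, b, c in combinations(np.asarray(points, dtype=float), 3):
              angles += interior_angles(a, b, c)
          return np.sort(angles)

      pts = [[0, 0], [3, 0], [3, 1], [1, 2], [0, 2]]       # hypothetical "on" pixels
      theta = np.deg2rad(30)
      rot = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
      moved = 2.0 * np.asarray(pts, float) @ rot.T + [5.0, -7.0]  # scale, rotate, translate
      print(np.allclose(invariant_descriptor(pts), invariant_descriptor(moved)))  # True

    Roughly speaking, a third-order network of the kind described above lets every pixel triplet whose triangle has the same coarsely binned interior angles share a single weight, so its response is invariant by construction rather than by training on many transformed examples.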

  10. Insular Cortex Is Involved in Consolidation of Object Recognition Memory

    ERIC Educational Resources Information Center

    Bermudez-Rattoni, Federico; Okuda, Shoki; Roozendaal, Benno; McGaugh, James L.

    2005-01-01

    Extensive evidence indicates that the insular cortex (IC), also termed gustatory cortex, is critically involved in conditioned taste aversion and taste recognition memory. Although most studies of the involvement of the IC in memory have investigated taste, there is some evidence that the IC is involved in memory that is not based on taste. In…

  11. Performance Evaluation of Neuromorphic-Vision Object Recognition Algorithms

    DTIC Science & Technology

    2014-08-01

  12. A Temporal Same-Object Advantage in the Tunnel Effect: Facilitated Change Detection for Persisting Objects

    ERIC Educational Resources Information Center

    Flombaum, Jonathan I.; Scholl, Brian J.

    2006-01-01

    Meaningful visual experience requires computations that identify objects as the same persisting individuals over time, motion, occlusion, and featural change. This article explores these computations in the tunnel effect: When an object moves behind an occluder, and then an object later emerges following a consistent trajectory, observers…

  13. The development of newborn object recognition in fast and slow visual worlds.

    PubMed

    Wood, Justin N; Wood, Samantha M W

    2016-04-27

    Object recognition is central to perception and cognition. Yet relatively little is known about the environmental factors that cause invariant object recognition to emerge in the newborn brain. Is this ability a hardwired property of vision? Or does the development of invariant object recognition require experience with a particular kind of visual environment? Here, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) require visual experience with slowly changing objects to develop invariant object recognition abilities. When newborn chicks were raised with a slowly rotating virtual object, the chicks built invariant object representations that generalized across novel viewpoints and rotation speeds. In contrast, when newborn chicks were raised with a virtual object that rotated more quickly, the chicks built viewpoint-specific object representations that failed to generalize to novel viewpoints and rotation speeds. Moreover, there was a direct relationship between the speed of the object and the amount of invariance in the chick's object representation. Thus, visual experience with slowly changing objects plays a critical role in the development of invariant object recognition. These results indicate that invariant object recognition is not a hardwired property of vision, but is learned rapidly when newborns encounter a slowly changing visual world.

  14. The development of newborn object recognition in fast and slow visual worlds

    PubMed Central

    Wood, Justin N.; Wood, Samantha M. W.

    2016-01-01

    Object recognition is central to perception and cognition. Yet relatively little is known about the environmental factors that cause invariant object recognition to emerge in the newborn brain. Is this ability a hardwired property of vision? Or does the development of invariant object recognition require experience with a particular kind of visual environment? Here, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) require visual experience with slowly changing objects to develop invariant object recognition abilities. When newborn chicks were raised with a slowly rotating virtual object, the chicks built invariant object representations that generalized across novel viewpoints and rotation speeds. In contrast, when newborn chicks were raised with a virtual object that rotated more quickly, the chicks built viewpoint-specific object representations that failed to generalize to novel viewpoints and rotation speeds. Moreover, there was a direct relationship between the speed of the object and the amount of invariance in the chick's object representation. Thus, visual experience with slowly changing objects plays a critical role in the development of invariant object recognition. These results indicate that invariant object recognition is not a hardwired property of vision, but is learned rapidly when newborns encounter a slowly changing visual world. PMID:27097925

  15. Post-Training Reversible Inactivation of the Hippocampus Enhances Novel Object Recognition Memory

    ERIC Educational Resources Information Center

    Oliveira, Ana M. M.; Hawk, Joshua D.; Abel, Ted; Havekes, Robbert

    2010-01-01

    Research on the role of the hippocampus in object recognition memory has produced conflicting results. Previous studies have used permanent hippocampal lesions to assess the requirement for the hippocampus in the object recognition task. However, permanent hippocampal lesions may impact performance through effects on processes besides memory…

  16. The Consolidation of Object and Context Recognition Memory Involve Different Regions of the Temporal Lobe

    ERIC Educational Resources Information Center

    Balderas, Israela; Rodriguez-Ortiz, Carlos J.; Salgado-Tonda, Paloma; Chavez-Hurtado, Julio; McGaugh, James L.; Bermudez-Rattoni, Federico

    2008-01-01

    These experiments investigated the involvement of several temporal lobe regions in consolidation of recognition memory. Anisomycin, a protein synthesis inhibitor, was infused into the hippocampus, perirhinal cortex, insular cortex, or basolateral amygdala of rats immediately after the sample phase of object or object-in-context recognition memory…

  19. An investigation into IgE-facilitated allergen recognition and presentation by human dendritic cells

    PubMed Central

    2013-01-01

    Background Allergen recognition by dendritic cells (DCs) is a key event in the allergic cascade leading to production of IgE antibodies. C-type lectins, such as the mannose receptor and DC-SIGN, were recently shown to play an important role in the uptake of the house dust mite glycoallergen Der p 1 by DCs. In addition to mannose receptor (MR) and DC-SIGN the high and low affinity IgE receptors, namely FcϵRI and FcϵRII (CD23), respectively, have been shown to be involved in allergen uptake and presentation by DCs. Objectives This study aims at understanding the extent to which IgE- and IgG-facilitated Der p 1 uptake by DCs influence T cell polarisation and in particular potential bias in favour of Th2. We have addressed this issue by using two chimaeric monoclonal antibodies produced in our laboratory and directed against a previously defined epitope on Der p 1, namely human IgE 2C7 and IgG1 2C7. Results Flow cytometry was used to establish the expression patterns of IgE (FcϵRI and FcϵRII) and IgG (FcγRI) receptors in relation to MR on DCs. The impact of FcϵRI, FcϵRII, FcγRI and mannose receptor mediated allergen uptake on Th1/Th2 cell differentiation was investigated using DC/T cell co-culture experiments. Myeloid DCs showed high levels of FcϵRI and FcγRI expression, but low levels of CD23 and MR, and this has therefore enabled us to assess the role of IgE and IgG-facilitated allergen presentation in T cell polarisation with minimal interference by CD23 and MR. Our data demonstrate that DCs that have taken up Der p 1 via surface IgE support a Th2 response. However, no such effect was demonstrable via surface IgG. Conclusions IgE bound to its high affinity receptor plays an important role in Der p 1 uptake and processing by peripheral blood DCs and in Th2 polarisation of T cells. PMID:24330349

  20. Haptic Object Recognition is View-Independent in Early Blind but not Sighted People.

    PubMed

    Occelli, Valeria; Lacey, Simon; Stephens, Careese; John, Thomas; Sathian, K

    2016-03-01

    Object recognition, whether visual or haptic, is impaired in sighted people when objects are rotated between learning and test, relative to an unrotated condition, that is, recognition is view-dependent. Loss of vision early in life results in greater reliance on haptic perception for object identification compared with the sighted. Therefore, we hypothesized that early blind people may be more adept at recognizing objects despite spatial transformations. To test this hypothesis, we compared early blind and sighted control participants on a haptic object recognition task. Participants studied pairs of unfamiliar three-dimensional objects and performed a two-alternative forced-choice identification task, with the learned objects presented both unrotated and rotated 180° about the y-axis. Rotation impaired the recognition accuracy of sighted, but not blind, participants. We propose that, consistent with our hypothesis, haptic view-independence in the early blind reflects their greater experience with haptic object perception.

  1. Haptic object recognition is view-independent in early blind but not sighted people

    PubMed Central

    Occelli, Valeria; Lacey, Simon; Stephens, Careese; John, Thomas; Sathian, K.

    2016-01-01

    Object recognition, whether visual or haptic, is impaired in sighted people when objects are rotated between learning and test, relative to an unrotated condition, i.e., recognition is view-dependent. Loss of vision early in life results in greater reliance on haptic perception for object identification compared to the sighted. Therefore, we hypothesized that early blind people may be more adept at recognizing objects despite spatial transformations. To test this hypothesis, we compared early blind and sighted control participants on a haptic object recognition task. Participants studied pairs of unfamiliar 3-D objects and performed a two-alternative forced-choice identification task, with the learned objects presented both unrotated and rotated 180° about the y-axis. Rotation impaired the recognition accuracy of sighted, but not blind, participants. We propose that, consistent with our hypothesis, haptic view-independence in the early blind reflects their greater experience with haptic object perception. PMID:26562881

  2. Reprint of: Object-based attentional facilitation and inhibition are neuropsychologically dissociated.

    PubMed

    Smith, Daniel T; Ball, Keira; Swalwell, Robert; Schenk, Thomas

    2016-11-01

    Salient peripheral cues produce a transient shift of attention which is superseded by a sustained inhibitory effect. Cueing part of an object produces an inhibitory cueing effect (ICE) that spreads throughout the object. In dynamic scenes the ICE stays with objects as they move. We examined object-centred attentional facilitation and inhibition in a patient with visual form agnosia. There was no evidence of object-centred attentional facilitation. In contrast, object-centred ICE was observed in 3 out of 4 tasks. These inhibitory effects were strongest where cues to objecthood were highly salient. These data are evidence of a neuropsychological dissociation between the facilitatory and inhibitory effects of attentional cueing. From a theoretical perspective the findings suggest that 'grouped arrays' are sufficient for object-based inhibition, but insufficient to generate object-centred attentional facilitation.

  3. Distortion-tolerant 3-D object recognition by using single exposure on-axis digital holography.

    PubMed

    Kim, Daesuk; Javidi, Bahram

    2004-11-01

    We present a distortion-tolerant 3-D object recognition system using single exposure on-axis digital holography. In contrast to distortion-tolerant 3-D object recognition employing a conventional phase-shifting scheme, which requires multiple exposures, our proposed method requires only a single digital hologram to be synthesized and used for distortion-tolerant 3-D object recognition. A benefit of the single-exposure on-axis approach is the enhanced practicality of digital holography for distortion-tolerant 3-D object recognition in terms of its simplicity and high tolerance to external scene parameters such as moving targets. This paper shows experimentally that single-exposure on-axis digital holography is capable of providing distortion-tolerant 3-D object recognition.

  4. BOLD activity during mental rotation and viewpoint-dependent object recognition.

    PubMed

    Gauthier, Isabel; Hayward, William G; Tarr, Michael J; Anderson, Adam W; Skudlarski, Pawel; Gore, John C

    2002-03-28

    We measured brain activity during mental rotation and object recognition with objects rotated around three different axes. Activity in the superior parietal lobe (SPL) increased proportionally to viewpoint disparity during mental rotation, but not during object recognition. In contrast, the fusiform gyrus was preferentially recruited in a viewpoint-dependent manner in recognition as compared to mental rotation. In addition, independent of the effect of viewpoint, object recognition was associated with ventral areas and mental rotation with dorsal areas. These results indicate that the similar behavioral effects of viewpoint obtained in these two tasks are based on different neural substrates. Such findings call into question the hypothesis that mental rotation is used to compensate for changes in viewpoint during object recognition.

  5. Differential effects of spaced vs. massed training in long-term object-identity and object-location recognition memory.

    PubMed

    Bello-Medina, Paola C; Sánchez-Carrasco, Livia; González-Ornelas, Nadia R; Jeffery, Kathryn J; Ramírez-Amaya, Víctor

    2013-08-01

    Here we tested whether the well-known superiority of spaced training over massed training is equally evident in both object identity and object location recognition memory. We trained animals with objects placed in a variable or in a fixed location to produce a location-independent object identity memory or a location-dependent object representation. The training consisted of 5 trials that occurred either on one day (Massed) or over the course of 5 consecutive days (Spaced). The memory test was done in independent groups of animals either 24h or 7 days after the last training trial. In each test the animals were exposed to either a novel object, when trained with the objects in variable locations, or to a familiar object in a novel location, when trained with objects in fixed locations. The difference in time spent exploring the changed versus the familiar objects was used as a measure of recognition memory. For the object-identity-trained animals, spaced training produced clear evidence of recognition memory after both 24h and 7 days, but massed-training animals showed it only after 24h. In contrast, for the object-location-trained animals, recognition memory was evident after both retention intervals and with both training procedures. When objects were placed in variable locations for the two types of training and the test was done with a brand-new location, only the spaced-training animals showed recognition at 24h, but surprisingly, after 7 days, animals trained using both procedures were able to recognize the change, suggesting a post-training consolidation process. We suggest that the two training procedures trigger different neural mechanisms that may differ in the two segregated streams that process object information and that may consolidate differently. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. An optical processor for object recognition and tracking

    NASA Technical Reports Server (NTRS)

    Sloan, J.; Udomkesmalee, S.

    1987-01-01

    The design and development of a miniaturized optical processor that performs real time image correlation are described. The optical correlator utilizes the Vander Lugt matched spatial filter technique. The correlation output, a focused beam of light, is imaged onto a CMOS photodetector array. In addition to performing target recognition, the device also tracks the target. The hardware, composed of optical and electro-optical components, occupies only 590 cu cm of volume. A complete correlator system would also include an input imaging lens. This optical processing system is compact, rugged, requires only 3.5 watts of operating power, and weighs less than 3 kg. It represents a major achievement in miniaturizing optical processors. When considered as a special-purpose processing unit, it is an attractive alternative to conventional digital image recognition processing. It is conceivable that the combined technology of both optical and digital processing could result in a very advanced robot vision system.
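
    A digital analogue of the Vander Lugt matched-spatial-filter correlation described above can be sketched in a few lines; this simulates only the underlying principle (conjugate-spectrum filtering and peak detection), not the optical hardware itself, and the toy scene below is invented for illustration.

```python
import numpy as np

def matched_filter_correlate(scene, reference):
    """Frequency-domain correlation with a matched (conjugate) filter.
    Returns the correlation surface; its peak marks the target location."""
    F_scene = np.fft.fft2(scene)
    F_ref = np.fft.fft2(reference, s=scene.shape)   # zero-pad reference to scene size
    corr = np.fft.ifft2(F_scene * np.conj(F_ref))   # matched spatial filter = conjugate spectrum
    return np.abs(corr)

# toy usage: embed a small patch in a larger scene and recover its position
rng = np.random.default_rng(1)
scene = rng.random((128, 128)) * 0.1
patch = rng.random((16, 16))
scene[40:56, 70:86] += patch
peak = np.unravel_index(np.argmax(matched_filter_correlate(scene, patch)), scene.shape)
print(peak)   # close to (40, 70), the top-left corner of the embedded patch
```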

  7. The role of color information on object recognition: a review and meta-analysis.

    PubMed

    Bramão, Inês; Reis, Alexandra; Petersson, Karl Magnus; Faísca, Luís

    2011-09-01

    In this study, we systematically review the scientific literature on the effect of color on object recognition. Thirty-five independent experiments, comprising 1535 participants, were included in a meta-analysis. We found a moderate effect of color on object recognition (d=0.28). Specific effects of moderator variables were analyzed and we found that color diagnosticity is the factor with the greatest moderator effect on the influence of color in object recognition; studies using color diagnostic objects showed a significant color effect (d=0.43), whereas a marginal color effect was found in studies that used non-color diagnostic objects (d=0.18). The present study did not permit the drawing of specific conclusions about the moderator effect of the object recognition task; while the meta-analytic review showed that color information improves object recognition mainly in studies using naming tasks (d=0.36), the literature review revealed a large body of evidence showing positive effects of color information on object recognition in studies using a large variety of visual recognition tasks. We also found that color is important for the ability to recognize artifacts and natural objects, to recognize objects presented as types (line-drawings) or as tokens (photographs), and to recognize objects that are presented without surface details, such as texture or shadow. Taken together, the results of the meta-analysis strongly support the contention that color plays a role in object recognition. This suggests that the role of color should be taken into account in models of visual object recognition.
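
    For readers unfamiliar with how such pooled effect sizes are computed, the sketch below shows a minimal inverse-variance (fixed-effect) pooling of per-study Cohen's d values; the study-level numbers are invented, and the authors' actual meta-analytic model may differ (e.g. a random-effects model).

```python
import numpy as np

def pooled_cohens_d(d, n1, n2):
    """Fixed-effect (inverse-variance) pooled Cohen's d.
    d, n1, n2: per-study effect sizes and group sizes."""
    d, n1, n2 = map(np.asarray, (d, n1, n2))
    # approximate sampling variance of d
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    w = 1.0 / var
    return np.sum(w * d) / np.sum(w)

# hypothetical per-study values, for illustration only
print(round(pooled_cohens_d([0.43, 0.18, 0.30], [40, 55, 30], [40, 55, 30]), 2))   # ~0.29
```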

  8. Face Recognition Is Affected by Similarity in Spatial Frequency Range to a Greater Degree Than Within-Category Object Recognition

    ERIC Educational Resources Information Center

    Collin, Charles A.; Liu, Chang Hong; Troje, Nikolaus F.; McMullen, Patricia A.; Chaudhuri, Avi

    2004-01-01

    Previous studies have suggested that face identification is more sensitive to variations in spatial frequency content than object recognition, but none have compared how sensitive the 2 processes are to variations in spatial frequency overlap (SFO). The authors tested face and object matching accuracy under varying SFO conditions. Their results…

  10. An algorithm for recognition and localization of rotated and scaled objects

    NASA Technical Reports Server (NTRS)

    Peli, T.

    1981-01-01

    An algorithm for recognition and localization of objects, which is invariant to displacement and rotation, is extended to the recognition and localization of differently scaled, rotated, and displaced objects. The proposed algorithm provides an optimum way to find if a match exists between two objects that are scaled, rotated, and displaced, while the number of computations is of the same order as for equally scaled objects.
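
    The abstract does not detail the matching procedure, so purely as an illustration of how displacement-, rotation-, and scale-invariant matching can be obtained, the sketch below uses the common log-polar (Fourier-Mellin-style) trick: rotation and scaling of the input become circular shifts of the log-polar magnitude spectrum, recoverable by a second correlation. This is a standard construction, not a reconstruction of Peli's algorithm.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def logpolar_magnitude(img, n_angles=180, n_radii=128):
    """Log-polar resampling of the centred Fourier magnitude spectrum.
    Rotation and scaling of the input become circular shifts here."""
    mag = np.fft.fftshift(np.abs(np.fft.fft2(img)))
    cy, cx = np.array(mag.shape) / 2.0
    r_max = min(cy, cx)
    thetas = np.linspace(0, np.pi, n_angles, endpoint=False)      # magnitude is symmetric
    radii = np.exp(np.linspace(0, np.log(r_max), n_radii))
    T, R = np.meshgrid(thetas, radii, indexing="ij")
    coords = np.array([cy + R * np.sin(T), cx + R * np.cos(T)])
    return map_coordinates(mag, coords, order=1)

def rotation_scale_offset(img_a, img_b):
    """Phase-correlate the two log-polar maps; the peak's row indexes the rotation
    bin and its column the log-scale bin relating img_b to img_a."""
    A, B = logpolar_magnitude(img_a), logpolar_magnitude(img_b)
    R = np.fft.fft2(A) * np.conj(np.fft.fft2(B))
    corr = np.fft.ifft2(R / (np.abs(R) + 1e-12))
    return np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
```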

  11. Hemi-methylated DNA opens a closed conformation of UHRF1 to facilitate its histone recognition

    NASA Astrophysics Data System (ADS)

    Fang, Jian; Cheng, Jingdong; Wang, Jiaolong; Zhang, Qiao; Liu, Mengjie; Gong, Rui; Wang, Ping; Zhang, Xiaodan; Feng, Yangyang; Lan, Wenxian; Gong, Zhou; Tang, Chun; Wong, Jiemin; Yang, Huirong; Cao, Chunyang; Xu, Yanhui

    2016-04-01

    UHRF1 is an important epigenetic regulator for maintenance DNA methylation. UHRF1 recognizes hemi-methylated DNA (hm-DNA) and trimethylation of histone H3K9 (H3K9me3), but the regulatory mechanism remains unknown. Here we show that UHRF1 adopts a closed conformation, in which a C-terminal region (Spacer) binds to the tandem Tudor domain (TTD) and inhibits H3K9me3 recognition, whereas the SET-and-RING-associated (SRA) domain binds to the plant homeodomain (PHD) and inhibits H3R2 recognition. Hm-DNA impairs the intramolecular interactions and promotes H3K9me3 recognition by TTD-PHD. The Spacer also facilitates UHRF1-DNMT1 interaction and enhances hm-DNA-binding affinity of the SRA. When TTD-PHD binds to H3K9me3, SRA-Spacer may exist in a dynamic equilibrium: either recognizes hm-DNA or recruits DNMT1 to chromatin. Our study reveals the mechanism for regulation of H3K9me3 and hm-DNA recognition by UHRF1.

  12. Model-based object recognition in range imagery

    NASA Astrophysics Data System (ADS)

    Armbruster, Walter

    2009-09-01

    The paper formulates the mathematical foundations of object discrimination and object re-identification in range image sequences using Bayesian decision theory. Object discrimination determines the unique model corresponding to each scene object, while object re-identification finds the unique object in the scene corresponding to a given model. In the first case object identities are independent; in the second case at most one object exists having a given identity. Efficient analytical and numerical techniques for updating and maximizing the posterior distributions are introduced. Experimental results indicate to what extent a single range image of an object can be used for re-identifying this object in arbitrary scenes. Applications including the protection of commercial vessels against piracy are discussed.
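
    For the object-discrimination case, in which object identities are independent, the Bayesian decision rule described above reduces to picking, for each segmented scene object, the model with the largest posterior probability. A toy numerical sketch, with invented likelihoods and a uniform prior rather than the paper's range-image likelihood model:

```python
import numpy as np

def map_model(log_likelihoods, log_priors):
    """Bayesian object discrimination: pick, for each scene object,
    the model m maximising p(m | data) proportional to p(data | m) p(m).
    log_likelihoods: (n_objects, n_models) array of log p(data_i | m)."""
    log_post = np.asarray(log_likelihoods) + np.asarray(log_priors)   # unnormalised log-posterior
    return np.argmax(log_post, axis=1)

# two scene objects scored against three candidate models (made-up numbers)
log_lik = np.log([[0.70, 0.20, 0.10],
                  [0.05, 0.15, 0.80]])
log_prior = np.log([1/3, 1/3, 1/3])
print(map_model(log_lik, log_prior))   # -> [0 2]
```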

  13. Self-recognition in corals facilitates deep-sea habitat engineering

    PubMed Central

    Hennige, S. J.; Morrison, C. L.; Form, A. U.; Büscher, J.; Kamenos, N. A.; Roberts, J. M.

    2014-01-01

    The ability of coral reefs to engineer complex three-dimensional habitats is central to their success and the rich biodiversity they support. In tropical reefs, encrusting coralline algae bind together substrates and dead coral framework to make continuous reef structures, but beyond the photic zone, the cold-water coral Lophelia pertusa also forms large biogenic reefs, facilitated by skeletal fusion. Skeletal fusion in tropical corals can occur in closely related or juvenile individuals as a result of non-aggressive skeletal overgrowth or allogeneic tissue fusion, but contact reactions in many species result in mortality if there is no ‘self-recognition’ on a broad species level. This study reveals that areas of ‘flawless’ skeletal fusion in Lophelia pertusa, potentially facilitated by allogeneic tissue fusion, have small aragonitic crystals or low levels of crystal organisation, and strong molecular bonding. Regardless of the mechanism, the recognition of ‘self’ between adjacent L. pertusa colonies leads to no observable mortality, facilitates ecosystem engineering and reduces aggression-related energetic expenditure in an environment where energy conservation is crucial. The potential for self-recognition at a species level, and subsequent skeletal fusion in framework-forming cold-water corals is an important first step in understanding their significance as ecological engineers in deep-seas worldwide. PMID:25345760

  14. Self-recognition in corals facilitates deep-sea habitat engineering

    USGS Publications Warehouse

    Hennige, Sebastian J; Morrison, Cheryl; Form, Armin U.; Buscher, Janina; Kamenos, Nicholas A.; Roberts, J. Murray

    2014-01-01

    The ability of coral reefs to engineer complex three-dimensional habitats is central to their success and the rich biodiversity they support. In tropical reefs, encrusting coralline algae bind together substrates and dead coral framework to make continuous reef structures, but beyond the photic zone, the cold-water coral Lophelia pertusa also forms large biogenic reefs, facilitated by skeletal fusion. Skeletal fusion in tropical corals can occur in closely related or juvenile individuals as a result of non-aggressive skeletal overgrowth or allogeneic tissue fusion, but contact reactions in many species result in mortality if there is no ‘self-recognition’ on a broad species level. This study reveals that areas of ‘flawless’ skeletal fusion in Lophelia pertusa, potentially facilitated by allogeneic tissue fusion, have small aragonitic crystals or low levels of crystal organisation, and strong molecular bonding. Regardless of the mechanism, the recognition of ‘self’ between adjacent L. pertusa colonies leads to no observable mortality, facilitates ecosystem engineering and reduces aggression-related energetic expenditure in an environment where energy conservation is crucial. The potential for self-recognition at a species level, and subsequent skeletal fusion in framework-forming cold-water corals is an important first step in understanding their significance as ecological engineers in deep-seas worldwide.

  15. Distinct patterns of viewpoint-dependent BOLD activity during common-object recognition and mental rotation.

    PubMed

    Wilson, Kevin D; Farah, Martha J

    2006-01-01

    A fundamental but unanswered question about the human visual system concerns the way in which misoriented objects are recognized. One hypothesis maintains that representations of incoming stimuli are transformed via parietally based spatial normalization mechanisms (eg mental rotation) to match view-specific representations in long-term memory. Using fMRI, we tested this hypothesis by directly comparing patterns of brain activity evoked during classic mental rotation and misoriented object recognition involving everyday objects. BOLD activity increased systematically with stimulus rotation within the ventral visual stream during object recognition and within the dorsal visual stream during mental rotation. More specifically, viewpoint-dependent activity was significantly greater in the right superior parietal lobule during mental rotation than during object recognition. In contrast, viewpoint-dependent activity was significantly greater in the right fusiform gyrus during object recognition than during mental rotation. In addition to these differences in viewpoint-dependent activity, object recognition and mental rotation produced distinct patterns of brain activity, independent of stimulus rotation: object recognition resulted in greater overall activity within ventral stream visual areas and mental rotation resulted in greater overall activity within dorsal stream visual areas. The present results are inconsistent with the hypothesis that misoriented object recognition is mediated by structures within the parietal lobe that are known to be involved in mental rotation.

  16. Characteristics of eye movements in 3-D object learning: comparison between within-modal and cross-modal object recognition.

    PubMed

    Ueda, Yoshiyuki; Saiki, Jun

    2012-01-01

    Recent studies have indicated that the object representation acquired during visual learning depends on the encoding modality during the test phase. However, the nature of the differences between within-modal learning (eg visual learning-visual recognition) and cross-modal learning (eg visual learning-haptic recognition) remains unknown. To address this issue, we utilised eye movement data and investigated object learning strategies during the learning phase of a cross-modal object recognition experiment. Observers informed of the test modality studied an unfamiliar visually presented 3-D object. Quantitative analyses showed that recognition performance was consistent regardless of rotation in the cross-modal condition, but was reduced when objects were rotated in the within-modal condition. In addition, eye movements during learning significantly differed between within-modal and cross-modal learning. Fixations were more diffused for cross-modal learning than in within-modal learning. Moreover, over the course of the trial, fixation durations became longer in cross-modal learning than in within-modal learning. These results suggest that the object learning strategies employed during the learning phase differ according to the modality of the test phase, and that this difference leads to different recognition performances.

  17. Parts and Relations in Young Children's Shape-Based Object Recognition

    ERIC Educational Resources Information Center

    Augustine, Elaine; Smith, Linda B.; Jones, Susan S.

    2011-01-01

    The ability to recognize common objects from sparse information about geometric shape emerges during the same period in which children learn object names and object categories. Hummel and Biederman's (1992) theory of object recognition proposes that the geometric shapes of objects have two components--geometric volumes representing major object…

  18. Object recognition by use of polarimetric phase-shifting digital holography.

    PubMed

    Nomura, Takanori; Javidi, Bahram

    2007-08-01

    Pattern recognition by use of polarimetric phase-shifting digital holography is presented. Using holography, the amplitude distribution and phase difference distribution between two orthogonal polarizations of three-dimensional (3D) or two-dimensional phase objects are obtained. This information contains both complex amplitude and polarimetric characteristics of the object, and it can be used for improving the discrimination capability of object recognition. Experimental results are presented to demonstrate the idea. To the best of our knowledge, this is the first report on 3D polarimetric recognition of objects using digital holography.
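
    The phase-shifting step underlying this kind of recording can be illustrated with the standard four-step formula, which recovers the complex object field from four holograms taken at reference phases 0, π/2, π, and 3π/2. This is a generic textbook reconstruction (its sign convention and scaling depend on the setup), not the specifics of the polarimetric arrangement used in the paper.

```python
import numpy as np

def four_step_field(I0, I90, I180, I270):
    """Textbook four-step phase-shifting reconstruction: complex object field
    (up to a constant reference amplitude and a sign convention) from four
    holograms recorded with reference phases 0, pi/2, pi, 3*pi/2."""
    return (np.asarray(I0) - I180) + 1j * (np.asarray(I90) - I270)

def polarimetric_maps(holos_x, holos_y):
    """holos_x, holos_y: four phase-shifted holograms for each of two orthogonal
    polarizations. Returns the two amplitude maps and the inter-polarization
    phase-difference map, the kinds of quantities the paper combines for recognition."""
    Ux, Uy = four_step_field(*holos_x), four_step_field(*holos_y)
    return np.abs(Ux), np.abs(Uy), np.angle(Uy * np.conj(Ux))
```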

  19. Three-dimensional-object recognition by use of single-exposure on-axis digital holography.

    PubMed

    Javidi, Bahram; Kim, Daesuk

    2005-02-01

    On-axis phase-shifting digital holography requires recording of multiple holograms. We describe a novel real-time three-dimensional- (3-D-) object recognition system that uses single-exposure on-axis digital holography. In contrast to 3-D-object recognition by means of a conventional phase-shifting scheme that requires multiple exposures, our proposed method requires only a single digital hologram to be synthesized and used to recognize 3-D objects. A benefit of the proposed 3-D recognition method is enhanced practicality of digital holography for 3-D recognition in terms of its simplicity and greater robustness to external scene parameters such as moving targets and environmental noise factors. We show experimentally the utility of the single-exposure on-axis digital holography-based 3-D-object recognition method.

  20. Symbolic Play Connects to Language through Visual Object Recognition

    ERIC Educational Resources Information Center

    Smith, Linda B.; Jones, Susan S.

    2011-01-01

    Object substitutions in play (e.g. using a box as a car) are strongly linked to language learning and their absence is a diagnostic marker of language delay. Classic accounts posit a symbolic function that underlies both words and object substitutions. Here we show that object substitutions depend on developmental changes in visual object…

  1. How Does Using Object Names Influence Visual Recognition Memory?

    ERIC Educational Resources Information Center

    Richler, Jennifer J.; Palmeri, Thomas J.; Gauthier, Isabel

    2013-01-01

    Two recent lines of research suggest that explicitly naming objects at study influences subsequent memory for those objects at test. Lupyan (2008) suggested that naming "impairs" memory by a representational shift of stored representations of named objects toward the prototype (labeling effect). MacLeod, Gopie, Hourihan, Neary, and Ozubko (2010)…

  4. Tracking the time course of action priming on object recognition: evidence for fast and slow influences of action on perception.

    PubMed

    Kiefer, Markus; Sim, Eun-Jin; Helbig, Hannah; Graf, Markus

    2011-08-01

    Perception and action are classically thought to be supported by functionally and neuroanatomically distinct mechanisms. However, recent behavioral studies using an action priming paradigm challenged this view and showed that action representations can facilitate object recognition. This study determined whether action representations influence object recognition during early visual processing stages, that is, within the first 150 msec. To this end, the time course of brain activation underlying such action priming effects was examined by recording ERPs. Subjects were sequentially presented with two manipulable objects (e.g., tools), which had to be named. In the congruent condition, both objects afforded similar actions, whereas dissimilar actions were afforded in the incongruent condition. In order to test the influence of the prime modality on action priming, the first object (prime) was presented either as picture or as word. We found an ERP effect of action priming over the central scalp as early as 100 msec after target onset for pictorial, but not for verbal primes. A later action priming effect on the N400 ERP component known to index semantic integration processes was obtained for both picture and word primes. The early effect was generated in a fronto-parietal motor network, whereas the late effect reflected activity in anterior temporal areas. The present results indicate that action priming influences object recognition through both fast and slow pathways: Action priming affects rapid visuomotor processes only when elicited by pictorial prime stimuli. However, it also modulates comparably slow conceptual integration processes independent of the prime modality.

  5. On the delay-dependent involvement of the hippocampus in object recognition memory.

    PubMed

    Hammond, Rebecca S; Tull, Laura E; Stackman, Robert W

    2004-07-01

    The role of the hippocampus in object recognition memory processes is unclear in the current literature. Conflicting results have been found in lesion studies of both primates and rodents. Procedural differences between studies, such as retention interval, may explain these discrepancies. In the present study, acute lidocaine administration was used to temporarily inactivate the hippocampus prior to training in the spontaneous object recognition task. Male C57BL/6J mice were administered bilateral lidocaine (4%, 0.5 microl/side) or aCSF (0.5 microl/side) directly into the CA1 region of the dorsal hippocampus 5 min prior to sample object training, and object recognition memory was tested after a short (5 min) or long (24 h) retention interval. There was no effect of intra-hippocampal lidocaine on the time needed for mice to accumulate sample object exploration, suggesting that inactivation of the hippocampus did not affect sample session activity or the motivation to explore objects. Lidocaine-treated mice exhibited impaired object recognition memory, measured as reduced novel object preference, after a 24 h but not a 5 min retention interval. These data support a delay-dependent role for the hippocampus in object recognition memory, an effect consistent with the results of hippocampal lesion studies conducted in rats. However, these data are also consistent with the view that the hippocampus is involved in object recognition memory regardless of retention interval, and that object recognition processes of parahippocampal structures (e.g., perirhinal cortex) are sufficient to support object recognition memory over short retention intervals.

  6. Shift- and scale-invariant recognition of contour objects with logarithmic radial harmonic filters.

    PubMed

    Moya, A; Esteve-Taboada, J J; García, J; Ferreira, C

    2000-10-10

    The phase-only logarithmic radial harmonic (LRH) filter has been shown to be suitable for scale-invariant block object recognition. However, an important set of objects is the collection of contour functions that results from a digital edge extraction of the original block objects. These contour functions have a constant width that is independent of the scale of the original object. Therefore, since the energy of the contour objects decreases more slowly with the scale factor than does the energy of the block objects, the phase-only LRH filter has difficulties in the recognition tasks when these contour objects are used. We propose a modified LRH filter that permits the realization of a shift- and scale-invariant optical recognition of contour objects. The modified LRH filter is a complex filter that compensates the energy variation resulting from the scaling of contour objects. Optical results validate the theory and show the utility of the newly proposed method.

  7. Representational dynamics of object recognition: Feedforward and feedback information flows.

    PubMed

    Goddard, Erin; Carlson, Thomas A; Dermody, Nadene; Woolgar, Alexandra

    2016-03-01

    Object perception involves a range of visual and cognitive processes, and is known to include both a feedforward flow of information from early visual cortical areas to higher cortical areas and feedback from areas such as prefrontal cortex. Previous studies have found that low and high spatial frequency information regarding object identity may be processed over different timescales. Here we used the high temporal resolution of magnetoencephalography (MEG) combined with multivariate pattern analysis to measure information specifically related to object identity in peri-frontal and peri-occipital areas. Using stimuli closely matched in their low-level visual content, we found that activity in peri-occipital cortex could be used to decode object identity from ~80ms post stimulus onset, and activity in peri-frontal cortex could also be used to decode object identity from a later time (~265ms post stimulus onset). Low spatial frequency information related to object identity was present in the MEG signal at an earlier time than high spatial frequency information for peri-occipital cortex, but not for peri-frontal cortex. We additionally used Granger causality analysis to compare feedforward and feedback influences on representational content, and found evidence of both an early feedforward flow and later feedback flow of information related to object identity. We discuss our findings in relation to existing theories of object processing and propose how the methods we use here could be used to address further questions of the neural substrates underlying object perception.
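
    Time-resolved decoding of the kind reported here is commonly implemented by training a classifier independently at each time point on the sensor pattern across trials; the sketch below uses scikit-learn, and the array shapes and choice of a linear discriminant classifier are our assumptions rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def timecourse_decoding(X, y, cv=5):
    """X: (n_trials, n_sensors, n_times) MEG data; y: object-identity labels.
    Returns cross-validated decoding accuracy at every time point."""
    n_times = X.shape[2]
    acc = np.empty(n_times)
    for t in range(n_times):
        clf = LinearDiscriminantAnalysis()
        acc[t] = cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
    return acc   # above-chance stretches indicate when identity information is present
```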

  8. Human hand descriptions and gesture recognition for object manipulation.

    PubMed

    Cobos, Salvador; Ferre, Manuel; Sánchez-Urán, M Ángel; Ortego, Javier; Aracil, Rafael

    2010-06-01

    This work focuses on obtaining realistic human hand models that are suitable for manipulation tasks. A 24 degrees of freedom (DoF) kinematic model of the human hand is defined. The model reasonably satisfies realism requirements in simulation and movement. To achieve realism, intra- and inter-finger constraints are obtained. The design of the hand model with 24 DoF is based upon a morphological, physiological and anatomical study of the human hand. The model is used to develop a gesture recognition procedure that uses principal components analysis (PCA) and discriminant functions. Two simplified hand descriptions (nine and six DoF) have been developed in accordance with the constraints obtained previously. The accuracy of the simplified models is almost 5% for the nine DoF hand description and 10% for the six DoF hand description. Finally, some criteria are defined by which to select the hand description best suited to the features of the manipulation task.
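
    The PCA-plus-discriminant-function stage can be sketched as a standard scikit-learn pipeline over joint-angle vectors; the placeholder data, the 24-dimensional input, the five gesture classes, and the reduction to nine components (echoing the nine-DoF description) are our assumptions, not the authors' data or exact procedure.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# joint_angles: (n_samples, 24) vectors of hand DoF; labels: gesture classes (placeholders)
rng = np.random.default_rng(0)
joint_angles = rng.random((200, 24))
labels = rng.integers(0, 5, size=200)          # five hypothetical grasp types

gesture_clf = make_pipeline(PCA(n_components=9),            # reduced hand description
                            LinearDiscriminantAnalysis())    # discriminant functions
gesture_clf.fit(joint_angles, labels)
print(gesture_clf.predict(joint_angles[:3]))
```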

  9. The Use of Grouping in Visual Object Recognition

    DTIC Science & Technology

    1988-10-01

    [Only fragmentary OCR of figure captions was recoverable: Figure 3.7, "Three randomly chosen, convex polygons"; Figure 5.4, "the probability distribution which GROPER uses for the expected distance between …".]

  10. Three-dimensional object recognition using similar triangles and decision trees

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly

    1993-01-01

    A system, TRIDEC, that is capable of distinguishing between a set of objects despite changes in the objects' positions in the input field, their size, or their rotational orientation in 3D space is described. TRIDEC combines very simple yet effective features with the classification capabilities of inductive decision tree methods. The feature vector is a list of all similar triangles defined by connecting all combinations of three pixels in a coarse coded 127 x 127 pixel input field. The classification is accomplished by building a decision tree using the information provided from a limited number of translated, scaled, and rotated samples. Simulation results are presented which show that TRIDEC achieves 94 percent recognition accuracy in the 2D invariant object recognition domain and 98 percent recognition accuracy in the 3D invariant object recognition domain after training on only a small sample of transformed views of the objects.
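
    TRIDEC's feature, a catalogue of the similar triangles formed by pixel triples, can be approximated in a few lines (related in spirit to the Higher-Order Neural Networks sketch earlier in this listing): each sampled triangle is summarised by its side-length ratios, which are unchanged by translation, scaling, and in-plane rotation, and the resulting histogram is handed to an inductive decision tree. The binning, the sampling of triples, and the scikit-learn tree are our choices, not the paper's exact procedure.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def triangle_histogram(img, bins=12, max_triples=5000, seed=0):
    """Histogram of pixel-triple shapes. Each triangle is described by its two
    smaller side lengths divided by the largest, a quantity preserved under
    translation, scaling, and in-plane rotation (similar triangles)."""
    rng = np.random.default_rng(seed)
    pts = np.argwhere(img > 0).astype(float)
    hist = np.zeros((bins, bins))
    if len(pts) >= 3:
        for _ in range(max_triples):
            i, j, k = rng.choice(len(pts), 3, replace=False)
            d = sorted([np.linalg.norm(pts[i] - pts[j]),
                        np.linalg.norm(pts[j] - pts[k]),
                        np.linalg.norm(pts[k] - pts[i])])
            hist[min(int(d[0] / d[2] * bins), bins - 1),
                 min(int(d[1] / d[2] * bins), bins - 1)] += 1
    return hist.ravel() / max(hist.sum(), 1)

def train_tridec_like(images, labels):
    """Decision tree over invariant triangle histograms, in the spirit of TRIDEC."""
    return DecisionTreeClassifier().fit([triangle_histogram(im) for im in images], labels)
```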

  12. Resolving human object recognition in space and time

    PubMed Central

    Cichy, Radoslaw Martin; Pantazis, Dimitrios; Oliva, Aude

    2014-01-01

    A comprehensive picture of object processing in the human brain requires combining both spatial and temporal information about brain activity. Here, we acquired human magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) responses to 92 object images. Multivariate pattern classification applied to MEG revealed the time course of object processing: whereas individual images were discriminated by visual representations early, ordinate and superordinate category levels emerged relatively later. Using representational similarity analysis, we combine human fMRI and MEG to show content-specific correspondence between early MEG responses and primary visual cortex (V1), and later MEG responses and inferior temporal (IT) cortex. We identified transient and persistent neural activities during object processing, with sources in V1 and IT. Finally, human MEG signals were correlated to single-unit responses in monkey IT. Together, our findings provide an integrated space- and time-resolved view of human object categorization during the first few hundred milliseconds of vision. PMID:24464044
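
    The MEG-fMRI fusion used here rests on representational similarity analysis: build a representational dissimilarity matrix (RDM) for each modality and correlate them at every time point. A compact sketch; the array shapes, the correlation-distance RDM, and the Spearman comparison are generic choices, not necessarily the authors' exact settings.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condensed representational dissimilarity matrix:
    1 - Pearson correlation between the activity patterns of every image pair."""
    return pdist(patterns, metric="correlation")

def meg_fmri_fusion(meg, fmri_roi_patterns):
    """meg: (n_images, n_sensors, n_times); fmri_roi_patterns: (n_images, n_voxels).
    Returns, per time point, the Spearman correlation between the MEG RDM and the ROI RDM."""
    fmri_rdm = rdm(fmri_roi_patterns)
    return np.array([spearmanr(rdm(meg[:, :, t]), fmri_rdm)[0]
                     for t in range(meg.shape[2])])
```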

  13. On the facilitative effects of face motion on face recognition and its development

    PubMed Central

    Xiao, Naiqi G.; Perrotta, Steve; Quinn, Paul C.; Wang, Zhe; Sun, Yu-Hao P.; Lee, Kang

    2014-01-01

    For the past century, researchers have extensively studied human face processing and its development. These studies have advanced our understanding of not only face processing, but also visual processing in general. However, most of what we know about face processing was investigated using static face images as stimuli. Therefore, an important question arises: to what extent does our understanding of static face processing generalize to face processing in real-life contexts in which faces are mostly moving? The present article addresses this question by examining recent studies on moving face processing to uncover the influence of facial movements on face processing and its development. First, we describe evidence on the facilitative effects of facial movements on face recognition and two related theoretical hypotheses: the supplementary information hypothesis and the representation enhancement hypothesis. We then highlight several recent studies suggesting that facial movements optimize face processing by activating specific face processing strategies that accommodate to task requirements. Lastly, we review the influence of facial movements on the development of face processing in the first year of life. We focus on infants' sensitivity to facial movements and explore the facilitative effects of facial movements on infants' face recognition performance. We conclude by outlining several future directions to investigate moving face processing and emphasize the importance of including dynamic aspects of facial information to further understand face processing in real-life contexts. PMID:25009517

  14. Object Recognition with Severe Spatial Deficits in Williams Syndrome: Sparing and Breakdown

    ERIC Educational Resources Information Center

    Landau, Barbara; Hoffman, James E.; Kurz, Nicole

    2006-01-01

    Williams syndrome (WS) is a rare genetic disorder that results in severe visual-spatial cognitive deficits coupled with relative sparing in language, face recognition, and certain aspects of motion processing. Here, we look for evidence for sparing or impairment in another cognitive system--object recognition. Children with WS, normal mental-age…

  16. Innate visual object recognition in vertebrates: some proposed pathways and mechanisms.

    PubMed

    Sewards, Terence V; Sewards, Mark A

    2002-08-01

    Almost all vertebrates are capable of recognizing biologically relevant stimuli at or shortly after birth, and in some phylogenetically ancient species visual object recognition is exclusively innate. Extensive and detailed studies of the anuran visual system have resulted in the determination of the neural structures and pathways involved in innate prey and predator recognition in these species [Behav. Brain Sci. 10 (1987) 337; Comp. Biochem. Physiol. A 128 (2001) 417]. The structures involved include the optic tectum, pretectal nuclei and an area within the mesencephalic tegmentum. Here we investigate the structures and pathways involved in innate stimulus recognition in avian, rodent and primate species. We discuss innate stimulus preferences in maternal imprinting in chicks and argue that these preferences are due to innate visual recognition of conspecifics, entirely mediated by subtelencephalic structures. In rodent species, brainstem structures largely homologous to the components of the anuran subcortical visual system mediate innate visual object recognition. The primary components of the mammalian subcortical visual system are the superior colliculus, nucleus of the optic tract, anterior and posterior pretectal nuclei, nucleus of the posterior commissure, and an area within the mesopontine reticular formation that includes parts of the cuneiform, subcuneiform and pedunculopontine nuclei. We argue that in rodent species the innate sensory recognition systems function throughout ontogeny, acting in parallel with cortical sensory and recognition systems. In primates the structures involved in innate stimulus recognition are essentially the same as those in rodents, but overt innate recognition is only present in very early ontogeny, and after a transition period gives way to learned object recognition mediated by cortical structures. After the transition period, primate subcortical sensory systems still function to provide implicit innate stimulus

  17. Object recognition through turbulence with a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher

    2015-03-01

    Atmospheric turbulence adds accumulated distortion to images obtained by cameras and surveillance systems. When the turbulence grows stronger or when the object is further away from the observer, increasing the recording device resolution helps little to improve the quality of the image. Many sophisticated methods to correct the distorted images have been invented, such as using a known feature on or near the target object to perform a deconvolution process, or use of adaptive optics. However, most of the methods depend heavily on the object's location, and optical ray propagation through the turbulence is not directly considered. Alternatively, selecting a lucky image over many frames provides a feasible solution, but at the cost of time. In our work, we propose an innovative approach to improving image quality through turbulence by making use of a modified plenoptic camera. This type of camera adds a micro-lens array to a traditional high-resolution camera to form a semi-camera array that records duplicate copies of the object as well as "superimposed" turbulence at slightly different angles. By performing several steps of image reconstruction, turbulence effects will be suppressed to reveal more details of the object independently (without finding references near the object). Meanwhile, the redundant information obtained by the plenoptic camera raises the possibility of performing lucky image algorithmic analysis with fewer frames, which is more efficient. In our work, the details of our modified plenoptic cameras and image processing algorithms will be introduced. The proposed method can be applied to coherently illuminated objects as well as incoherently illuminated objects. Our result shows that the turbulence effect can be effectively suppressed by the plenoptic camera in the hardware layer and a reconstructed "lucky image" can help the viewer identify the object even when a "lucky image" by ordinary cameras is not achievable.
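
    The "lucky image" selection that the plenoptic redundancy is meant to accelerate can be illustrated with a standard sharpness criterion: score each frame (or reconstructed sub-view) by the variance of its Laplacian and keep the best. This sketches only that generic selection step, not the paper's plenoptic reconstruction.

```python
import numpy as np
from scipy.ndimage import laplace

def sharpness(frame):
    """Variance of the Laplacian: a simple focus/sharpness measure that tends
    to be highest for the least turbulence-degraded frame."""
    return laplace(frame.astype(float)).var()

def lucky_image(frames):
    """Pick the sharpest frame from a stack of shape (n_frames, H, W)."""
    return frames[int(np.argmax([sharpness(f) for f in frames]))]
```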

  18. Three-dimensional object rotation-tolerant recognition for integral imaging using synthetic discriminant function

    NASA Astrophysics Data System (ADS)

    Hao, Jinbo; Wang, Xiaorui; Zhang, Jianqi; Xu, Yin

    2013-04-01

    This paper presents a novel approach to three-dimensional object rotation-tolerant recognition that combines the merits of Integral Imaging (II) and the Synthetic Discriminant Function (SDF). The SDF is used to design filters for distortion-tolerant recognition, and here it is applied to three-dimensional (3-D) rotation-tolerant recognition with an II system. Exploiting the strong similarity among the elemental images of II, the approach can not only achieve 3-D rotation-tolerant recognition but also reduce computational complexity. Its correctness has been validated by experimental results.
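
    The synthetic discriminant function itself has a standard closed form: the filter is the linear combination of training views whose correlation values with each view equal prescribed constants (all ones for an equal-correlation-peak, rotation-tolerant filter). A minimal sketch of that textbook construction, not of the paper's integral-imaging pipeline:

```python
import numpy as np

def sdf_filter(training_images, peak_values=None):
    """Equal-correlation-peak SDF: h = X (X^T X)^{-1} c, where the columns of X
    are the vectorised training views and c holds the desired correlation value
    of h with each view (all ones for a single-class, rotation-tolerant filter)."""
    X = np.stack([im.ravel() for im in training_images], axis=1)   # (n_pixels, n_views)
    c = np.ones(X.shape[1]) if peak_values is None else np.asarray(peak_values, float)
    a = np.linalg.solve(X.T @ X, c)            # combination coefficients
    return (X @ a).reshape(training_images[0].shape)

def correlate(scene, h):
    """Recognition step: correlate the scene with the SDF and look for peaks
    near the prescribed value."""
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(h, s=scene.shape))))
```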

  19. Effects of varying stimulus size on object recognition in pigeons.

    PubMed

    Peissig, Jessie J; Kirkpatrick, Kimberly; Young, Michael E; Wasserman, Edward E; Biederman, Irving

    2006-10-01

    The authors investigated the pigeon's ability to generalize object discrimination performance to smaller and larger versions of trained objects. In Experiment 1, they taught pigeons with line drawings of multipart objects and later tested the birds with both larger and smaller drawings. The pigeons exhibited significant generalization to new sizes, although they did show systematic performance decrements as the new size deviated from the original. In Experiment 2, the authors tested both linear and exponential size changes of computer-rendered basic shapes to determine which size transformation produced equivalent performance for size increases and decreases. Performance was more consistent with logarithmic than with linear scaling of size. This finding was supported in Experiment 3. Overall, the experiments suggest that the pigeon encodes size as a feature of objects and that the representation of size is most likely logarithmic.

  20. Dual-Hierarchy Graph Method for Object Indexing and Recognition

    DTIC Science & Technology

    2014-07-01

    The approach handles viewpoint changes (pose, scale, etc.), illumination changes, occlusion, shadows, and sensor noise, as well as variability in the object itself, e.g. articulation or camouflage. One of the intrinsic hierarchies is based on parts (e.g. a truck has a cabin, a trunk, wheels); the other hierarchy is called the "Level of Abstraction" (LOA) …

  1. Mechanisms and neural basis of object and pattern recognition: a study with chess experts.

    PubMed

    Bilalić, Merim; Langner, Robert; Erb, Michael; Grodd, Wolfgang

    2010-11-01

    Comparing experts with novices offers unique insights into the functioning of cognition, based on the maximization of individual differences. Here we used this expertise approach to disentangle the mechanisms and neural basis behind two processes that contribute to everyday expertise: object and pattern recognition. We compared chess experts and novices performing chess-related and -unrelated (visual) search tasks. As expected, the superiority of experts was limited to the chess-specific task, as there were no differences in a control task that used the same chess stimuli but did not require chess-specific recognition. The analysis of eye movements showed that experts immediately and exclusively focused on the relevant aspects in the chess task, whereas novices also examined irrelevant aspects. With random chess positions, when pattern knowledge could not be used to guide perception, experts nevertheless maintained an advantage. Experts' superior domain-specific parafoveal vision, a consequence of their knowledge about individual domain-specific symbols, enabled improved object recognition. Functional magnetic resonance imaging corroborated this differentiation between object and pattern recognition and showed that chess-specific object recognition was accompanied by bilateral activation of the occipitotemporal junction, whereas chess-specific pattern recognition was related to bilateral activations in the middle part of the collateral sulci. Using the expertise approach together with carefully chosen controls and multiple dependent measures, we identified object and pattern recognition as two essential cognitive processes in expert visual cognition, which may also help to explain the mechanisms of everyday perception.

  2. Superior voice recognition in a patient with acquired prosopagnosia and object agnosia.

    PubMed

    Hoover, Adria E N; Démonet, Jean-François; Steeves, Jennifer K E

    2010-11-01

    Anecdotally, it has been reported that individuals with acquired prosopagnosia compensate for their inability to recognize faces by using other person identity cues such as hair, gait or the voice. Are they therefore superior at the use of non-face cues, specifically voices, to person identity? Here, we empirically measure person and object identity recognition in a patient with acquired prosopagnosia and object agnosia. We quantify person identity (face and voice) and object identity (car and horn) recognition for visual, auditory, and bimodal (visual and auditory) stimuli. The patient is unable to recognize faces or cars, consistent with his prosopagnosia and object agnosia, respectively. He is perfectly able to recognize people's voices and car horns and bimodal stimuli. These data show a reverse shift in the typical weighting of visual over auditory information for audiovisual stimuli in a compromised visual recognition system. Moreover, the patient shows selectively superior voice recognition compared to the controls revealing that two different stimulus domains, persons and objects, may not be equally affected by sensory adaptation effects. This also implies that person and object identity recognition are processed in separate pathways. These data demonstrate that an individual with acquired prosopagnosia and object agnosia can compensate for the visual impairment and become quite skilled at using spared aspects of sensory processing. In the case of acquired prosopagnosia it is advantageous to develop a superior use of voices for person identity recognition in everyday life.

  3. Design and implementation of knowledge-based framework for ground objects recognition in remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Shaobin; Ding, Mingyue; Cai, Chao; Fu, Xiaowei; Sun, Yue; Chen, Duo

    2009-10-01

    The advance of image processing makes knowledge-based automatic image interpretation much more realistic than ever. In the domain of remote sensing image processing, the introduction of knowledge enhances the confidence of recognition of typical ground objects. There are two main approaches to employing knowledge: the first scatters knowledge throughout concrete program code, so that the relevant knowledge about ground objects is fixed at programming time; the second stores knowledge systematically in a knowledge base that offers unified guidance to each object recognition procedure. In this paper, a knowledge-based framework for ground object recognition in remote sensing images is proposed. This framework takes the second approach, using a hierarchical architecture. The recognition of a typical airport demonstrated the feasibility of the proposed framework.

  4. Acoustic signature recognition technique for Human-Object Interactions (HOI) in persistent surveillance systems

    NASA Astrophysics Data System (ADS)

    Alkilani, Amjad; Shirkhodaie, Amir

    2013-05-01

    Handling, manipulation, and placement of objects in the environment, hereon called Human-Object Interaction (HOI), generate sounds. Such sounds are readily identifiable by human hearing. However, in the presence of background environmental noise, recognition of minute HOI sounds is challenging, though vital for improving multi-modality sensor data fusion in Persistent Surveillance Systems (PSS). Identification of HOI sound signatures can serve as a precursor to detection of pertinent threats that other sensor modalities may otherwise miss. In this paper, we present a robust method for detection and classification of HOI events via clustering of features extracted from training HOI acoustic sound waves. In this approach, salient sound events are first identified and segmented from the background via a sound energy tracking method. After this segmentation, the frequency spectral pattern of each sound event is modeled and its features are extracted to form a feature vector for training. To reduce the dimensionality of the training feature space, a Principal Component Analysis (PCA) technique is employed. To expedite classification of test feature vectors, kd-tree and Random Forest classifiers are trained on the training sound waves; each classifier employs a different similarity distance matching technique for classification. The performance of the classifiers is compared on a batch of training HOI acoustic signatures. Furthermore, to facilitate semantic annotation of acoustic sound events, a scheme based on the Transducer Markup Language (TML) is proposed. The results demonstrate that the proposed approach is both reliable and effective, and can be extended to future PSS applications.
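
    The feature-reduction and classification stage described above (PCA over spectral feature vectors, followed by kd-tree and Random Forest classifiers) can be sketched roughly as follows. This is a minimal illustration under assumptions, not the authors' implementation: the arrays features and labels are random placeholders for extracted HOI feature vectors, and scikit-learn's kd-tree-backed nearest-neighbour classifier and RandomForestClassifier stand in for the classifiers named in the abstract.

```python
# Minimal sketch: PCA for dimensionality reduction, then a kd-tree backed
# nearest-neighbour classifier and a Random Forest classifier.
# Placeholder data only; not the paper's code.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))     # hypothetical spectral feature vectors
labels = rng.integers(0, 4, size=200)     # hypothetical HOI event classes

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)

# Reduce the dimensionality of the training feature space.
pca = PCA(n_components=10).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

knn = KNeighborsClassifier(n_neighbors=3, algorithm="kd_tree").fit(Z_train, y_train)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(Z_train, y_train)

print("kNN (kd-tree) accuracy:", knn.score(Z_test, y_test))
print("Random Forest accuracy:", forest.score(Z_test, y_test))
```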

  5. Diphenyl diselenide-supplemented diet and swimming exercise enhance novel object recognition memory in old rats.

    PubMed

    Cechella, José L; Leite, Marlon R; Rosario, Alisson R; Sampaio, Tuane B; Zeni, Gilson

    2014-01-01

    The benefits of exercise and of the element selenium for mental health and cognitive performance are well documented. The purpose of the present study was to investigate whether intake of a diet supplemented with diphenyl diselenide [(PhSe)2] and swimming exercise could enhance memory in old Wistar rats. Male Wistar rats (24 months) were fed daily with standard chow or standard chow supplemented with 1 ppm of (PhSe)2 for 4 weeks. Animals were subjected to swimming training with a workload of 3% of body weight, 20 min/day for 4 weeks. After 4 weeks, the object recognition test (ORT) and the object location test (OLT) were performed. The results of this study demonstrated that intake of the (PhSe)2-supplemented diet combined with swimming exercise was effective in improving short-term and long-term memory as well as spatial learning, increasing the hippocampal levels of phosphorylated cAMP-response element-binding protein (CREB) in old rats. This study also provided evidence that the (PhSe)2-supplemented diet facilitated memory of old rats by modulating cAMP levels and stimulating CREB phosphorylation, without altering the levels of Akt.

  6. It’s all connected: Pathways in visual object recognition and early noun learning

    PubMed Central

    Smith, Linda B.

    2013-01-01

    A developmental pathway may be defined as the route, or chain of events, through which a new structure or function forms. For many human behaviors, including object name learning and visual object recognition, these pathways are often complex, multi-causal and include unexpected dependencies. This paper presents three principles of development that suggest the value of a developmental psychology that explicitly seeks to trace these pathways and uses empirical evidence on developmental dependencies between motor development, action on objects, visual object recognition and object name learning in 12 to 24 month old infants to make the case. The paper concludes with a consideration of the theoretical implications of this approach. PMID:24320634

  7. High-speed optical object recognition processor with massive holographic memory

    NASA Astrophysics Data System (ADS)

    Chao, Tien-Hsin; Zhou, Hanying; Reyes, George F.

    2002-09-01

    Real-time object recognition using a compact grayscale optical correlator will be introduced. A holographic memory module for storing a large bank of optimum correlation filters to accommodate large data throughput rate needed for many real-world applications has also been developed. System architecture of the optical processor and the holographic memory will be presented. Application examples of this object recognition technology will also be demonstrated.

  8. The Neural Basis of Nonvisual Object Recognition Memory in the Rat

    PubMed Central

    Albasser, Mathieu M.; Olarte-Sánchez, Cristian M.; Amin, Eman; Horne, Murray R.; Newton, Michael J.; Warburton, E. Clea; Aggleton, John P.

    2012-01-01

    Research into the neural basis of recognition memory has traditionally focused on the remembrance of visual stimuli. The present study examined the neural basis of object recognition memory in the dark, with a view to determining the extent to which it shares common pathways with visual-based object recognition. Experiment 1 assessed the expression of the immediate-early gene c-fos in rats that discriminated novel from familiar objects in the dark (Group Novel). Comparisons made with a control group that explored only familiar objects (Group Familiar) showed that Group Novel had higher c-fos activity in the rostral perirhinal cortex and the lateral entorhinal cortex. Outside the temporal region, Group Novel showed relatively increased c-fos activity in the anterior medial thalamic nucleus and the anterior cingulate cortex. Both the hippocampal CA fields and the granular retrosplenial cortex showed borderline increases in c-fos activity with object novelty. The hippocampal findings prompted Experiment 2. Here, rats with hippocampal lesions were tested in the dark for object recognition memory at different retention delays. Across two replications, no evidence was found that hippocampal lesions impair nonvisual object recognition. The results indicate that in the dark, as in the light, interrelated parahippocampal sites are activated when rats explore novel stimuli. These findings reveal a network of linked c-fos activations that share superficial features with those associated with visual recognition but differ in the fine details; for example, in the locus of the perirhinal cortex activation. While there may also be a relative increase in c-fos activation in the extended-hippocampal system to object recognition in the dark, there was no evidence that this recognition memory problem required an intact hippocampus. PMID:23244291

  9. The neural basis of nonvisual object recognition memory in the rat.

    PubMed

    Albasser, Mathieu M; Olarte-Sánchez, Cristian M; Amin, Eman; Horne, Murray R; Newton, Michael J; Warburton, E Clea; Aggleton, John P

    2013-02-01

    Research into the neural basis of recognition memory has traditionally focused on the remembrance of visual stimuli. The present study examined the neural basis of object recognition memory in the dark, with a view to determining the extent to which it shares common pathways with visual-based object recognition. Experiment 1 assessed the expression of the immediate-early gene c-fos in rats that discriminated novel from familiar objects in the dark (Group Novel). Comparisons made with a control group that explored only familiar objects (Group Familiar) showed that Group Novel had higher c-fos activity in the rostral perirhinal cortex and the lateral entorhinal cortex. Outside the temporal region, Group Novel showed relatively increased c-fos activity in the anterior medial thalamic nucleus and the anterior cingulate cortex. Both the hippocampal CA fields and the granular retrosplenial cortex showed borderline increases in c-fos activity with object novelty. The hippocampal findings prompted Experiment 2. Here, rats with hippocampal lesions were tested in the dark for object recognition memory at different retention delays. Across two replications, no evidence was found that hippocampal lesions impair nonvisual object recognition. The results indicate that in the dark, as in the light, interrelated parahippocampal sites are activated when rats explore novel stimuli. These findings reveal a network of linked c-fos activations that share superficial features with those associated with visual recognition but differ in the fine details; for example, in the locus of the perirhinal cortex activation. While there may also be a relative increase in c-fos activation in the extended-hippocampal system to object recognition in the dark, there was no evidence that this recognition memory problem required an intact hippocampus.

  10. Dissociations in the effect of delay on object recognition: evidence for an associative model of recognition memory.

    PubMed

    Tam, Shu K E; Robinson, Jasper; Jennings, Dómhnall J; Bonardi, Charlotte

    2014-01-01

    Rats were administered 3 versions of an object recognition task: In the spontaneous object recognition task (SOR) animals discriminated between a familiar object and a novel object; in the temporal order task they discriminated between 2 familiar objects, 1 of which had been presented more recently than the other; and, in the object-in-place task, they discriminated among 4 previously presented objects, 2 of which were presented in the same locations as in preexposure and 2 in different but familiar locations. In each task animals were tested at 2 delays (5 min and 2 hr) between the sample and test phases in the SOR and object-in-place task, and between the 2 sample phases in the temporal order task. Performance in the SOR was poorer with the longer delay, whereas in the temporal order task performance improved with delay. There was no effect of delay on object-in-place performance. In addition the performance of animals with neurotoxic lesions of the dorsal hippocampus was selectively impaired in the object-in-place task at the longer delay. These findings are interpreted within the framework of Wagner's (1981) model of memory.

  11. The Role of Fixation and Visual Attention in Object Recognition.

    DTIC Science & Technology

    1995-01-01

    ... stereo by matching features and using trigonometry to convert disparity into depth lies in the matching process (correspondence problem). This is ... avoid obstacles and perform other tasks which require recognizing specific objects in the environment. An active-attentive vision system is more robust ...

  12. A chicken model for studying the emergence of invariant object recognition.

    PubMed

    Wood, Samantha M W; Wood, Justin N

    2015-01-01

    "Invariant object recognition" refers to the ability to recognize objects across variation in their appearance on the retina. This ability is central to visual perception, yet its developmental origins are poorly understood. Traditionally, nonhuman primates, rats, and pigeons have been the most commonly used animal models for studying invariant object recognition. Although these animals have many advantages as model systems, they are not well suited for studying the emergence of invariant object recognition in the newborn brain. Here, we argue that newly hatched chicks (Gallus gallus) are an ideal model system for studying the emergence of invariant object recognition. Using an automated controlled-rearing approach, we show that chicks can build a viewpoint-invariant representation of the first object they see in their life. This invariant representation can be built from highly impoverished visual input (three images of an object separated by 15° azimuth rotations) and cannot be accounted for by low-level retina-like or V1-like neuronal representations. These results indicate that newborn neural circuits begin building invariant object representations at the onset of vision and argue for an increased focus on chicks as an animal model for studying invariant object recognition.

  13. Object oriented image analysis based on multi-agent recognition system

    NASA Astrophysics Data System (ADS)

    Tabib Mahmoudi, Fatemeh; Samadzadegan, Farhad; Reinartz, Peter

    2013-04-01

    In this paper, the capabilities of multi-agent systems are used to solve object recognition difficulties in complex urban areas based on the characteristics of WorldView-2 satellite imagery and a digital surface model (DSM). The proposed methodology has three main steps: pre-processing of the dataset, object based image analysis and multi-agent object recognition. Classified regions obtained from object based image analysis are used as input datasets in the proposed multi-agent system in order to modify and improve the results. In the first operational level of the proposed multi-agent system, various kinds of object recognition agents modify the initial classified regions based on their spectral, textural and 3D structural knowledge. Then, in the second operational level, 2D structural knowledge and contextual relations are used by agents for reasoning and modification. Evaluation of the capabilities of the proposed object recognition methodology is performed on WorldView-2 imagery over Rio de Janeiro (Brazil) collected in January 2010. According to the results of the object based image analysis process, contextual relations and structural descriptors have high potential to mitigate general difficulties of object recognition. Using the knowledge-based reasoning and cooperative capabilities of agents in the proposed multi-agent system, most of the remaining difficulties are reduced and the accuracy of the object based image analysis results is improved by about three percent.

  14. Do Simultaneously Viewed Objects Influence Scene Recognition Individually or as Groups? Two Perceptual Studies

    PubMed Central

    Gagne, Christopher R.; MacEvoy, Sean P.

    2014-01-01

    The ability to quickly categorize visual scenes is critical to daily life, allowing us to identify our whereabouts and to navigate from one place to another. Rapid scene categorization relies heavily on the kinds of objects scenes contain; for instance, studies have shown that recognition is less accurate for scenes to which incongruent objects have been added, an effect usually interpreted as evidence of objects' general capacity to activate semantic networks for scene categories they are statistically associated with. Essentially all real-world scenes contain multiple objects, however, and it is unclear whether scene recognition draws on the scene associations of individual objects or of object groups. To test the hypothesis that scene recognition is steered, at least in part, by associations between object groups and scene categories, we asked observers to categorize briefly-viewed scenes appearing with object pairs that were semantically consistent or inconsistent with the scenes. In line with previous results, scenes were less accurately recognized when viewed with inconsistent versus consistent pairs. To understand whether this reflected individual or group-level object associations, we compared the impact of pairs composed of mutually related versus unrelated objects; i.e., pairs, which, as groups, had clear associations to particular scene categories versus those that did not. Although related and unrelated object pairs equally reduced scene recognition accuracy, unrelated pairs were consistently less capable of drawing erroneous scene judgments towards scene categories associated with their individual objects. This suggests that scene judgments were influenced by the scene associations of object groups, beyond the influence of individual objects. More generally, the fact that unrelated objects were as capable of degrading categorization accuracy as related objects, while less capable of generating specific alternative judgments, indicates that the process

  15. The relationship between protein synthesis and protein degradation in object recognition memory.

    PubMed

    Furini, Cristiane R G; Myskiw, Jociane de C; Schmidt, Bianca E; Zinn, Carolina G; Peixoto, Patricia B; Pereira, Luiza D; Izquierdo, Ivan

    2015-11-01

    For decades there has been a consensus that de novo protein synthesis is necessary for long-term memory. A second round of protein synthesis has been described for both extinction and reconsolidation following an unreinforced test session. Recently, it was shown that consolidation and reconsolidation depend not only on protein synthesis but also on protein degradation by the ubiquitin-proteasome system (UPS), a major mechanism responsible for protein turnover. However, the involvement of the UPS in consolidation and reconsolidation of object recognition memory remains unknown. Here we investigate, in the CA1 region of the dorsal hippocampus, the involvement of UPS-mediated protein degradation in consolidation and reconsolidation of object recognition memory. Animals with infusion cannulae stereotaxically implanted in the CA1 region of the dorsal hippocampus were exposed to an object recognition task. The UPS inhibitor β-Lactacystin did not affect the consolidation or the reconsolidation of object recognition memory at doses known to affect other forms of memory (inhibitory avoidance, spatial learning in a water maze), while the protein synthesis inhibitor anisomycin impaired both the consolidation and the reconsolidation of object recognition memory. However, β-Lactacystin was able to reverse the impairment caused by anisomycin on the reconsolidation process in the CA1 region of the hippocampus. Therefore, it is possible to postulate a direct link between protein degradation and protein synthesis during the reconsolidation of object recognition memory.

  16. Object Recognition and Attention to Object Components by Preschool Children and 4-Month-Old Infants.

    ERIC Educational Resources Information Center

    Haaf, Robert A.

    2003-01-01

    This study investigated attention to and recognition of components in compound stimuli among infants and preschoolers. Oddity tasks with preschoolers and familiarization/novelty-preference tasks with infants demonstrated successful discrimination among stimuli components on basis of edge property information. Matching tasks with preschoolers and…

  17. Dissociating the Effects of Angular Disparity and Image Similarity in Mental Rotation and Object Recognition

    ERIC Educational Resources Information Center

    Cheung, Olivia S.; Hayward, William G.; Gauthier, Isabel

    2009-01-01

    Performance is often impaired linearly with increasing angular disparity between two objects in tasks that measure mental rotation or object recognition. But increased angular disparity is often accompanied by changes in the similarity between views of an object, confounding the impact of the two factors in these tasks. We examined separately the…

  18. Complementary Hemispheric Asymmetries in Object Naming and Recognition: A Voxel-Based Correlational Study

    ERIC Educational Resources Information Center

    Acres, K.; Taylor, K. I.; Moss, H. E.; Stamatakis, E. A.; Tyler, L. K.

    2009-01-01

    Cognitive neuroscientific research proposes complementary hemispheric asymmetries in naming and recognising visual objects, with a left temporal lobe advantage for object naming and a right temporal lobe advantage for object recognition. Specifically, it has been proposed that the left inferior temporal lobe plays a mediational role linking…

  1. From neural-based object recognition toward microelectronic eyes

    NASA Technical Reports Server (NTRS)

    Sheu, Bing J.; Bang, Sa Hyun

    1994-01-01

    Engineering neural network systems are best known for their abilities to adapt to the changing characteristics of the surrounding environment by adjusting system parameter values during the learning process. Rapid advances in analog current-mode design techniques have made possible the implementation of major neural network functions in custom VLSI chips. An electrically programmable analog synapse cell with large dynamic range can be realized in a compact silicon area. New designs of the synapse cells, neurons, and analog processor are presented. A synapse cell based on the Gilbert multiplier structure can perform the linear multiplication for back-propagation networks. A double differential-pair synapse cell can perform the Gaussian function for radial-basis networks. The synapse cells can be biased in the strong inversion region for high-speed operation or biased in the subthreshold region for low-power operation. The voltage gain of the sigmoid-function neurons is externally adjustable, which greatly facilitates the search for optimal solutions in certain networks. Various building blocks can be intelligently connected to form useful industrial applications. Efficient data communication is a key system-level design issue for large-scale networks. We also present analog neural processors based on the perceptron architecture and the Hopfield network for communication applications. Biologically inspired neural networks have played an important role toward the creation of powerful intelligent machines. Accuracy, limitations, and prospects of analog current-mode design of the biologically inspired vision processing chips and cellular neural network chips are key design issues.

  2. Remote object recognition by analysis of surface structure

    NASA Astrophysics Data System (ADS)

    Wurster, J.; Stark, H.; Olsen, E. T.; Kogler, K.

    1995-06-01

    We present a new algorithm for the discrimination of remote objects by their surface structure. Starting from a range-azimuth profile function, we formulate a range-azimuth matrix whose largest eigenvalues are used as discriminating features to separate object classes. A simpler, competing algorithm uses the number of sign changes in the range-azimuth profile function to discriminate among classes. Whereas both algorithms work well on noiseless data, an experiment involving real data shows that the eigenvalue method is far more robust with respect to noise than is the sign-change method. Two well-known surface-structure methods, based on variance and on fractal dimension, were also tested on real data. Neither method furnished the aspect invariance and the discriminability of the eigenvalue method.
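
    As a rough illustration of the eigenvalue idea, the sketch below stacks overlapping windows of a range-azimuth profile into a matrix and uses the largest eigenvalues of that matrix's covariance as a feature vector, alongside the competing sign-change count. The windowed construction and the sign-change definition are assumptions made for this sketch; the paper's exact formulation of the range-azimuth matrix is not reproduced here.

```python
# Sketch of eigenvalue-based surface-structure features (assumed construction).
import numpy as np

def eigen_features(profile, window=32, step=8, k=5):
    """Largest k eigenvalues of the covariance of a windowed profile matrix."""
    rows = [profile[i:i + window] for i in range(0, len(profile) - window + 1, step)]
    M = np.asarray(rows)                  # assumed range-azimuth matrix
    eigvals = np.linalg.eigvalsh(np.cov(M, rowvar=False))
    return eigvals[-k:][::-1]             # k largest, in descending order

def sign_changes(profile):
    """Competing feature: sign changes of the mean-removed profile (one reading)."""
    s = np.sign(profile - profile.mean())
    return int(np.sum(s[:-1] != s[1:]))

azimuth = np.linspace(0.0, 2.0 * np.pi, 256)
profile = 1.0 + 0.2 * np.sin(5.0 * azimuth)   # toy surface-structure profile
print(eigen_features(profile))
print(sign_changes(profile))
```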

  3. Biologically Motivated Novel Localization Paradigm by High-Level Multiple Object Recognition in Panoramic Images

    PubMed Central

    Kim, Sungho; Shim, Min-Sheob

    2015-01-01

    This paper presents a novel paradigm for global localization motivated by human visual systems (HVSs). HVSs actively use object recognition results for localizing their own position and viewing direction. The proposed localization paradigm consisted of three parts: panoramic image acquisition, multiple object recognition, and grid-based localization. Multiple object recognition information from panoramic images is utilized in the localization part. High-level object information was useful not only for global localization, but also for robot-object interactions. The metric global localization (position, viewing direction) was conducted based on the bearing information of recognized objects from just one panoramic image. The feasibility of the novel localization paradigm was validated experimentally. PMID:26457323
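
    The localization step can be illustrated with a brute-force grid search over candidate poses, scored by how well each pose explains the bearings of the recognized objects. The map coordinates, measured bearings, and grid resolution below are invented toy values, not the authors' data or algorithm.

```python
# Sketch of grid-based localization from object bearings (toy values).
import numpy as np

landmarks = np.array([[0.0, 5.0], [4.0, 0.0], [-3.0, -2.0]])   # known object positions
true_pose, true_heading = np.array([1.0, 1.0]), 0.3
measured = np.arctan2(landmarks[:, 1] - true_pose[1],
                      landmarks[:, 0] - true_pose[0]) - true_heading

def residual(x, y, heading):
    pred = np.arctan2(landmarks[:, 1] - y, landmarks[:, 0] - x) - heading
    wrapped = np.angle(np.exp(1j * (pred - measured)))   # wrap angle differences
    return float(np.sum(wrapped ** 2))

grid = [(x, y, h)
        for x in np.linspace(-5, 5, 21)
        for y in np.linspace(-5, 5, 21)
        for h in np.linspace(-np.pi, np.pi, 37)]
best = min(grid, key=lambda p: residual(*p))
print("estimated (x, y, heading):", np.round(best, 2))
```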

  4. The relationship between change detection and recognition of centrally attended objects in motion pictures.

    PubMed

    Angelone, Bonnie L; Levin, Daniel T; Simons, Daniel J

    2003-01-01

    Observers typically detect changes to central objects more readily than changes to marginal objects, but they sometimes miss changes to central, attended objects as well. However, even if observers do not report such changes, they may be able to recognize the changed object. In three experiments we explored change detection and recognition memory for several types of changes to central objects in motion pictures. Observers who failed to detect a change still performed at above chance levels on a recognition task in almost all conditions. In addition, observers who detected the change were no more accurate in their recognition than those who did not detect the change. Despite large differences in the detectability of changes across conditions, those observers who missed the change did not vary in their ability to recognize the changing object.

  5. Category-specific interference of object recognition with biological motion perception.

    PubMed

    Wittinghofer, Karin; de Lussanet, Marc H E; Lappe, Markus

    2010-11-24

    The rapid and detailed recognition of human action from point-light displays is a remarkable ability and very robust against masking by motion signals. However, recognition of biological motion is strongly impaired when the typical point lights are replaced by pictures of complex objects. In a reaction time task and a detection in noise task, we asked subjects to decide if the walking direction is forward or backward. We found that complex objects as local elements impaired performance. When we compared different object categories, we found that human shapes as local objects gave more impairment than any other tested object category. Inverting or scrambling the human shapes restored the performance of walking perception. These results demonstrate an interference between object perception and biological motion recognition caused by shared processing capacities.

  6. Dissociating the effects of angular disparity and image similarity in mental rotation and object recognition.

    PubMed

    Cheung, Olivia S; Hayward, William G; Gauthier, Isabel

    2009-10-01

    Performance is often impaired linearly with increasing angular disparity between two objects in tasks that measure mental rotation or object recognition. But increased angular disparity is often accompanied by changes in the similarity between views of an object, confounding the impact of the two factors in these tasks. We examined separately the effects of angular disparity and image similarity on handedness (to test mental rotation) and identity (to test object recognition) judgments with 3-D novel objects. When similarity was approximately equated, an effect of angular disparity was only found for handedness but not identity judgments. With a fixed angular disparity, performance was better for similar than dissimilar image pairs in both tasks, with a larger effect for identity than handedness judgments. Our results suggest that mental rotation involves mental transformation procedures that depend on angular disparity, but that object recognition is predominately dependent on the similarity of image features.

  7. 3D Object Recognition: Symmetry and Virtual Views

    DTIC Science & Technology

    1992-12-01

    A.I. Memo No. 1409; C.B.C.L. Paper No. 76; December 1992. Artificial Intelligence Laboratory and Center for Biological and Computational Learning, 545 Technology Square, Cambridge. ... research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences, and at the Artificial Intelligence Laboratory.

  8. Informative Feature Selection for Object Recognition via Sparse PCA

    DTIC Science & Technology

    2011-04-07

    ... the BMW database [17] are used for training. For each image pair in SfM, SURF features are deemed informative if the consensus of the corresponding ... observe that the first two sparse PVs are sufficient for selecting informative features that lie on the foreground objects in the BMW database ... BMW database [17]. The database consists of multiple-view images of 20 landmark buildings on the Berkeley campus. For each building, wide-baseline ...

  9. Observations on Cortical Mechanisms for Object Recognition and Learning

    DTIC Science & Technology

    1993-12-01

    ... matching. ...iar objects such as the Eiffel Tower (M. Potter, pers. comm.). At the output of the network the activities of the various units are ... with different localizations and dendritic circuitry (see Poggio and Torre, 1978; ...), one that could occur unsupervised and thus is similar to ...

  10. Graph - Based High Resolution Satellite Image Segmentation for Object Recognition

    NASA Astrophysics Data System (ADS)

    Ravali, K.; Kumar, M. V. Ravi; Venugopala Rao, K.

    2014-11-01

    Object based image processing and analysis is a challenging research area in very high resolution satellite image utilisation. Commonly, either pixel based classification or visual interpretation is used to recognize and delineate land cover categories. Pixel based classification techniques use the rich spectral content of satellite images but fail to utilise spatial relations. To overcome this drawback, traditional, time consuming visual interpretation methods are being used operationally for the preparation of thematic maps. This paper applies computational vision principles to object level image segmentation. In this study, computer vision algorithms are developed to define the boundary between two object regions and to perform segmentation by representing the image as a graph. The image is represented as a graph G (V, E), where nodes (V) correspond to pixels and edges (E) connect nodes belonging to neighbouring pixels. The transformed Mahalanobis distance has been used to define a weight function for partitioning the graph into components such that each component represents the region of a land category. This implies that edges between two vertices in the same component have relatively low weights and edges between vertices in different components have higher weights. The derived segments are categorised into different land cover classes using supervised classification. The paper presents experimental results on real world multi-spectral remote sensing images of different landscapes such as urban, agricultural and mixed land cover. Graph construction was done in a C program, and run times are listed for both graph construction and segmentation on a dual core Intel i7 system with 16 GB RAM running 64-bit Windows 7.
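
    A minimal sketch of the segmentation idea follows, assuming a 4-neighbour pixel graph, Mahalanobis edge weights computed from a single global inverse covariance, and a simple fixed-threshold union-find merge in place of the paper's partition criterion.

```python
# Sketch: pixels as graph nodes, Mahalanobis-weighted 4-neighbour edges,
# union-find merging of low-weight edges. Simplified; not the paper's code.
import numpy as np

def segment(image, threshold=2.0):
    h, w, b = image.shape
    X = image.reshape(-1, b).astype(float)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False) + 1e-6 * np.eye(b))

    def weight(i, j):
        d = X[i] - X[j]
        return float(np.sqrt(d @ inv_cov @ d))   # Mahalanobis distance

    parent = list(range(h * w))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = []
    for r in range(h):
        for c in range(w):
            i = r * w + c
            if c + 1 < w:
                edges.append((weight(i, i + 1), i, i + 1))
            if r + 1 < h:
                edges.append((weight(i, i + w), i, i + w))

    for wgt, i, j in sorted(edges):
        ri, rj = find(i), find(j)
        if ri != rj and wgt < threshold:          # merge only similar neighbours
            parent[ri] = rj

    return np.array([find(i) for i in range(h * w)]).reshape(h, w)

toy = np.random.default_rng(0).normal(size=(16, 16, 4))   # toy 4-band image
print("number of segments:", len(np.unique(segment(toy))))
```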

  11. The impact of interactive manipulation on the recognition of objects

    NASA Astrophysics Data System (ADS)

    Meijer, Frank; van den Broek, Egon L.; Schouten, Theo

    2008-02-01

    A new application for VR has emerged: product development, in which several stakeholders (from engineers to end users) use the same VR environment for development and communication purposes. Various characteristics among these stakeholders vary considerably, which imposes potential constraints on the VR. The current paper discusses the influence of three types of exploration of objects (i.e., none, passive, active) on one of these characteristics: the ability to form mental representations, or visuo-spatial ability (VSA). Through an experiment we found that all users benefit from exploring objects. Moreover, people with low VSA (e.g., end users) benefit from an interactive exploration of objects, as opposed to people with a medium or high VSA (e.g., engineers), who are not sensitive to the type of exploration. Hence, for VR environments in which multiple stakeholders participate (e.g., for product development), differences among their cognitive abilities (e.g., VSA) have to be taken into account to enable efficient usage of VR.

  12. Learning AND-OR templates for object recognition and detection.

    PubMed

    Si, Zhangzhang; Zhu, Song-Chun

    2013-09-01

    This paper presents a framework for unsupervised learning of a hierarchical reconfigurable image template--the AND-OR Template (AOT) for visual objects. The AOT includes: 1) hierarchical composition as "AND" nodes, 2) deformation and articulation of parts as geometric "OR" nodes, and 3) multiple ways of composition as structural "OR" nodes. The terminal nodes are hybrid image templates (HIT) [17] that are fully generative to the pixels. We show that both the structures and parameters of the AOT model can be learned in an unsupervised way from images using an information projection principle. The learning algorithm consists of two steps: 1) a recursive block pursuit procedure to learn the hierarchical dictionary of primitives, parts, and objects, and 2) a graph compression procedure to minimize model structure for better generalizability. We investigate the factors that influence how well the learning algorithm can identify the underlying AOT. And we propose a number of ways to evaluate the performance of the learned AOTs through both synthesized examples and real-world images. Our model advances the state of the art for object detection by improving the accuracy of template matching.

  13. How Can Selection of Biologically Inspired Features Improve the Performance of a Robust Object Recognition Model?

    PubMed Central

    Ghodrati, Masoud; Khaligh-Razavi, Seyed-Mahdi; Ebrahimpour, Reza; Rajaei, Karim; Pooyan, Mohammad

    2012-01-01

    Humans can effectively and swiftly recognize objects in complex natural scenes. This outstanding ability has motivated many computational object recognition models. Most of these models try to emulate the behavior of this remarkable system. The human visual system recognizes objects hierarchically, in several processing stages. Along these stages, a set of features with increasing complexity is extracted by different parts of the visual system. Elementary features like bars and edges are processed in earlier levels of the visual pathway, and the farther one goes up this pathway, the more complex the extracted features become. An important question in the field of visual processing is which features of an object are selected and represented by the visual cortex. To address this issue, we extended a biologically motivated hierarchical model for different object recognition tasks. In this model, a set of object parts, named patches, is extracted in the intermediate stages. These object parts are used in the training procedure of the model and have an important role in object recognition. These patches are selected indiscriminately from different positions of an image, which can lead to the extraction of non-discriminating patches that eventually may reduce performance. In the proposed model we used an evolutionary algorithm approach to select a set of informative patches. Our results indicate that these patches are more informative than the usual random patches. We demonstrate the strength of the proposed model on a range of object recognition tasks, where it outperforms the original model. The experiments show that the selected features are generally particular parts of the target images. Our results suggest that selected features which are parts of target objects provide an efficient set for robust object recognition. PMID:22384229

  14. How can selection of biologically inspired features improve the performance of a robust object recognition model?

    PubMed

    Ghodrati, Masoud; Khaligh-Razavi, Seyed-Mahdi; Ebrahimpour, Reza; Rajaei, Karim; Pooyan, Mohammad

    2012-01-01

    Humans can effectively and swiftly recognize objects in complex natural scenes. This outstanding ability has motivated many computational object recognition models. Most of these models try to emulate the behavior of this remarkable system. The human visual system recognizes objects hierarchically, in several processing stages. Along these stages, a set of features with increasing complexity is extracted by different parts of the visual system. Elementary features like bars and edges are processed in earlier levels of the visual pathway, and the farther one goes up this pathway, the more complex the extracted features become. An important question in the field of visual processing is which features of an object are selected and represented by the visual cortex. To address this issue, we extended a biologically motivated hierarchical model for different object recognition tasks. In this model, a set of object parts, named patches, is extracted in the intermediate stages. These object parts are used in the training procedure of the model and have an important role in object recognition. These patches are selected indiscriminately from different positions of an image, which can lead to the extraction of non-discriminating patches that eventually may reduce performance. In the proposed model we used an evolutionary algorithm approach to select a set of informative patches. Our results indicate that these patches are more informative than the usual random patches. We demonstrate the strength of the proposed model on a range of object recognition tasks, where it outperforms the original model. The experiments show that the selected features are generally particular parts of the target images. Our results suggest that selected features which are parts of target objects provide an efficient set for robust object recognition.

  15. A primitive-based 3D object recognition system

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.

    1988-01-01

    An intermediate-level knowledge-based system for decomposing segmented data into three-dimensional primitives was developed to create an approximate three-dimensional description of the real world scene from a single two-dimensional perspective view. A knowledge-based approach was also developed for high-level primitive-based matching of three-dimensional objects. Both the intermediate-level decomposition and the high-level interpretation are based on the structural and relational matching; moreover, they are implemented in a frame-based environment.

  16. Fast Object Recognition in Noisy Images Using Simulated Annealing.

    DTIC Science & Technology

    1994-12-01

    correlation coefficient is used as a measure of the match between a hypothesized object and an image. Templates are generated on-line during the search by transforming model images. Simulated annealing reduces the search time by orders of magnitude with respect to an exhaustive search. The algorithm is applied to the problem of how landmarks, for example, traffic signs, can be recognized by an autonomous vehicle or a navigating robot. The algorithm works well in noisy, real-world images of complicated scenes for model images with high information
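
    The core loop, proposing a pose change, scoring it with the correlation coefficient, and accepting or rejecting it under a cooling temperature, can be sketched as below. The translation-only pose, linear cooling schedule, and synthetic images are assumptions for illustration; the cited work also transforms model images to handle richer pose changes.

```python
# Sketch of simulated-annealing template search scored by the normalized
# correlation coefficient. Toy data; translation-only pose.
import numpy as np

def ncc(patch, template):
    a, b = patch - patch.mean(), template - template.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

def anneal_match(image, template, iters=2000, t0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    th, tw = template.shape
    H, W = image.shape[0] - th, image.shape[1] - tw
    pos = np.array([rng.integers(0, H + 1), rng.integers(0, W + 1)])
    score = ncc(image[pos[0]:pos[0] + th, pos[1]:pos[1] + tw], template)
    best, best_score = pos.copy(), score
    for k in range(iters):
        T = t0 * (1.0 - k / iters) + 1e-3          # simple linear cooling schedule
        cand = np.clip(pos + rng.integers(-5, 6, size=2), 0, [H, W])
        s = ncc(image[cand[0]:cand[0] + th, cand[1]:cand[1] + tw], template)
        if s > score or rng.random() < np.exp((s - score) / T):
            pos, score = cand, s                   # Metropolis acceptance rule
        if score > best_score:
            best, best_score = pos.copy(), score
    return tuple(best), best_score

rng = np.random.default_rng(1)
img = rng.normal(size=(120, 120))
tpl = img[40:60, 70:90].copy()                     # plant the template in the image
print(anneal_match(img, tpl))
```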

  17. Single prolonged stress impairs social and object novelty recognition in rats.

    PubMed

    Eagle, Andrew L; Fitzpatrick, Chris J; Perrine, Shane A

    2013-11-01

    Posttraumatic stress disorder (PTSD) results from exposure to a traumatic event and manifests as re-experiencing, arousal, avoidance, and negative cognition/mood symptoms. Avoidant symptoms, as well as the newly defined negative cognitions/mood, are a serious complication leading to diminished interest in once important or positive activities, such as social interaction; however, the basis of these symptoms remains poorly understood. PTSD patients also exhibit impaired object and social recognition, which may underlie the avoidance and symptoms of negative cognition, such as social estrangement or diminished interest in activities. Previous studies have demonstrated that single prolonged stress (SPS) models PTSD phenotypes, including impairments in learning and memory. Therefore, it was hypothesized that SPS would impair social and object recognition memory. Male Sprague Dawley rats were exposed to SPS and then tested in the social choice test (SCT) or novel object recognition test (NOR). These tests measure recognition of novelty over familiarity, a natural preference of rodents. Results show that SPS impaired preference for both social and object novelty. In addition, SPS impairment in social recognition may be caused by impaired behavioral flexibility, or an inability to shift behavior during the SCT. These results demonstrate that traumatic stress can impair social and object recognition memory, which may underlie certain avoidant symptoms or negative cognition in PTSD and be related to impaired behavioral flexibility.

  18. A correlation-based algorithm for recognition and tracking of partially occluded objects

    NASA Astrophysics Data System (ADS)

    Ruchay, Alexey; Kober, Vitaly

    2016-09-01

    In this work, a correlation-based algorithm consisting of a set of adaptive filters for recognition of occluded objects in still and dynamic scenes in the presence of additive noise is proposed. The designed algorithm is adaptive to the input scene, which may contain different fragments of the target, false objects, and background to be rejected. The algorithm outputs high correlation peaks corresponding to pieces of the target in the scene. The proposed algorithm uses a bank of composite optimum filters. The performance of the proposed algorithm for recognition of partially occluded objects is compared with that of common algorithms in terms of objective metrics.
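
    A rough sketch of correlating a scene against a bank of filters and keeping the strongest peak is shown below. The composite optimum filter design of the proposed algorithm is not reproduced; plain matched filters built from target fragments stand in for it, and the scene and target are synthetic.

```python
# Sketch: FFT-based correlation of a scene with a small bank of filters,
# reporting the strongest correlation peak. Stand-in filters, toy data.
import numpy as np

def correlate(scene, filt):
    H = np.fft.fft2(scene)
    F = np.fft.fft2(filt, s=scene.shape)
    return np.real(np.fft.ifft2(H * np.conj(F)))   # circular cross-correlation

def best_peak(scene, filter_bank):
    best = None
    for name, filt in filter_bank.items():
        c = correlate(scene, filt)
        peak = np.unravel_index(np.argmax(c), c.shape)
        if best is None or c[peak] > best[2]:
            best = (name, peak, float(c[peak]))
    return best

rng = np.random.default_rng(0)
scene = rng.normal(size=(128, 128))
target = rng.normal(size=(24, 24))
scene[50:74, 30:54] += target                      # partially embedded target
bank = {"fragment_top": target[:12, :], "fragment_bottom": target[12:, :]}
print(best_peak(scene, bank))
```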

  19. An Effective 3D Shape Descriptor for Object Recognition with RGB-D Sensors

    PubMed Central

    Liu, Zhong; Zhao, Changchen; Wu, Xingming; Chen, Weihai

    2017-01-01

    RGB-D sensors have been widely used in various areas of computer vision and graphics. A good descriptor can substantially improve recognition performance. This article further analyzes the recognition performance of shape features extracted from multi-modality source data using RGB-D sensors. A hybrid shape descriptor is proposed as a representation of objects for recognition. We first extracted five 2D shape features from contour-based images and five 3D shape features from point cloud data to capture the global and local shape characteristics of an object. The recognition performance was tested for category recognition and instance recognition. Experimental results show that the proposed shape descriptor outperforms several common global-to-global shape descriptors and is comparable to some partial-to-global shape descriptors that achieved the best accuracies in category and instance recognition. The contribution of partial features and the computational complexity were also analyzed. The results indicate that the proposed shape features are strong cues for object recognition and can be combined with other features to boost accuracy. PMID:28245553
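
    The general idea of concatenating 2D contour statistics with 3D point-cloud statistics into one hybrid descriptor is illustrated below. The specific features chosen here (area, extent, aspect ratio, bounding-box extents, PCA eigenvalue ratios) are placeholders; the abstract does not list the exact five 2D and five 3D features used.

```python
# Sketch of a hybrid 2D + 3D shape descriptor with placeholder features.
import numpy as np

def descriptor_2d(mask):
    ys, xs = np.nonzero(mask)
    area = float(mask.sum())
    width, height = np.ptp(xs) + 1, np.ptp(ys) + 1
    extent = area / float(width * height)          # fill ratio of the bounding box
    aspect = width / float(height)
    return np.array([area, extent, aspect])

def descriptor_3d(points):
    centered = points - points.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered, rowvar=False))[::-1]
    elongation, flatness = eigvals[1] / eigvals[0], eigvals[2] / eigvals[0]
    return np.array([*np.ptp(points, axis=0), elongation, flatness])

mask = np.zeros((32, 32), dtype=int)
mask[8:24, 10:20] = 1                              # toy object silhouette
cloud = np.random.default_rng(0).normal(size=(500, 3)) * [2.0, 1.0, 0.3]
print(np.concatenate([descriptor_2d(mask), descriptor_3d(cloud)]))
```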

  20. Systems in Development: Motor Skill Acquisition Facilitates 3D Object Completion

    PubMed Central

    Soska, Kasey C.; Adolph, Karen E.; Johnson, Scott P.

    2009-01-01

    How do infants learn to perceive the backs of objects that they see only from a limited viewpoint? Infants’ 3D object completion abilities emerge in conjunction with developing motor skills—independent sitting and visual-manual exploration. Twenty-eight 4.5- to 7.5-month-old infants were habituated to a limited-view object and tested with volumetrically complete and incomplete (hollow) versions of the same object. Parents reported infants’ sitting experience, and infants’ visual-manual exploration of objects was observed in a structured play session. Infants’ self-sitting experience and visual-manual exploratory skills predicted looking to the novel, incomplete object on the habituation task. Further analyses revealed that self-sitting facilitated infants’ visual inspection of objects while they manipulated them. The results are framed within a developmental systems approach, wherein infants’ sitting skill, multimodal object exploration, and object knowledge are linked in developmental time. PMID:20053012

  1. Object recognition through a multi-mode fiber

    NASA Astrophysics Data System (ADS)

    Takagi, Ryosuke; Horisaki, Ryoichi; Tanida, Jun

    2017-04-01

    We present a method of recognizing an object through a multi-mode fiber. A number of speckle patterns transmitted through a multi-mode fiber are provided to a classifier based on machine learning. We experimentally demonstrated binary classification of face and non-face targets based on the method. The measurement process of the experimental setup was random and nonlinear because a multi-mode fiber is a typical strongly scattering medium, and no reference light was used in our setup. Comparisons between three supervised learning methods, support vector machine, adaptive boosting, and neural network, are also provided. All of those learning methods achieved high accuracy rates of about 90% for the classification. The approach presented here can realize a compact and smart optical sensor. It is practically useful for medical applications, such as endoscopy. Our study also indicated a promising utilization of artificial intelligence, which has rapidly progressed, for reducing optical and computational costs in optical sensing systems.

  2. Object recognition through a multi-mode fiber

    NASA Astrophysics Data System (ADS)

    Takagi, Ryosuke; Horisaki, Ryoichi; Tanida, Jun

    2017-02-01

    We present a method of recognizing an object through a multi-mode fiber. A number of speckle patterns transmitted through a multi-mode fiber are provided to a classifier based on machine learning. We experimentally demonstrated binary classification of face and non-face targets based on the method. The measurement process of the experimental setup was random and nonlinear because a multi-mode fiber is a typical strongly scattering medium, and no reference light was used in our setup. Comparisons between three supervised learning methods, support vector machine, adaptive boosting, and neural network, are also provided. All of those learning methods achieved high accuracy rates of about 90% for the classification. The approach presented here can realize a compact and smart optical sensor. It is practically useful for medical applications, such as endoscopy. Our study also indicated a promising utilization of artificial intelligence, which has rapidly progressed, for reducing optical and computational costs in optical sensing systems.
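
    The classification stage compared in the abstract (support vector machine, adaptive boosting, neural network over measured speckle patterns) can be sketched with off-the-shelf scikit-learn classifiers. Random arrays stand in for the speckle images and labels, so the printed accuracies are meaningless; this is not the authors' pipeline.

```python
# Sketch: three classifiers over flattened speckle patterns (placeholder data).
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
speckles = rng.random(size=(300, 32 * 32))         # hypothetical speckle images
labels = rng.integers(0, 2, size=300)              # face vs. non-face targets

X_tr, X_te, y_tr, y_te = train_test_split(speckles, labels, test_size=0.3,
                                          random_state=0)
for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("AdaBoost", AdaBoostClassifier(n_estimators=100)),
                  ("MLP", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))
```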

  3. What is the role of motor simulation in action and object recognition? Evidence from apraxia.

    PubMed

    Negri, Gioia A L; Rumiati, Raffaella I; Zadini, Antonietta; Ukmar, Maja; Mahon, Bradford Z; Caramazza, Alfonso

    2007-12-01

    An important issue in contemporary cognitive neuroscience concerns the role of motor production processes in perceptual and conceptual analysis. To address this issue, we studied the performance of a large group of unilateral stroke patients across a range of tasks using the same set of common manipulable objects. All patients (n = 37) were tested for their ability to demonstrate the use of the objects, recognize the objects, recognize the corresponding object-associated pantomimes, and imitate those same pantomimes. At the group level we observed reliable correlations between object use and pantomime recognition, object use and object recognition, and pantomime imitation and pantomime recognition. At the single-case level, we document that the ability to recognize actions and objects dissociates from the ability to use those same objects. These data are problematic for the hypothesis that motor processes are constitutively involved in the recognition of actions and objects and frame new questions about the inferences that are merited by recent findings in cognitive neuroscience.

  4. Systemic and intra-rhinal-cortical 17-β estradiol administration modulate object-recognition memory in ovariectomized female rats.

    PubMed

    Gervais, Nicole J; Jacob, Sofia; Brake, Wayne G; Mumby, Dave G

    2013-09-01

    Previous studies using the novel-object-preference (NOP) test suggest that estrogen (E) replacement in ovariectomized rodents can lead to enhanced novelty preference. The present study aimed to determine: 1) whether the effect of E on NOP performance is the result of enhanced preference for novelty, per se, or facilitated object-recognition memory, and 2) whether E affects NOP performance through actions it has within the perirhinal cortex/entorhinal cortex region (PRh/EC). Ovariectomized rats received either systemic chronic low 17-β estradiol (E2; ~20 pg/ml serum) replacement alone or in combination with systemic acute high administration of estradiol benzoate (EB; 10 μg), or in combination with intracranial infusions of E2 (244.8 pg/μl) or vehicle into the PRh/EC. For one of the intracranial experiments, E2 was infused either immediately before, immediately after, or 2 h following the familiarization (i.e., learning) phase of the NOP test. In light of recent evidence that raises questions about the internal validity of the NOP test as a method of indexing object-recognition memory, we also tested rats on a delayed nonmatch-to-sample (DNMS) task of object recognition following systemic and intra-PRh/EC infusions of E2. Both systemic acute and intra-PRh/EC infusions of E enhanced novelty preference, but only when administered either before or immediately following familiarization. In contrast, high E (both systemic acute and intra-PRh/EC) impaired performance on the DNMS task. The findings suggest that while E2 in the PRh/EC can enhance novelty preference, this effect is probably not due to an improvement in object-recognition abilities.

  5. Foreign object detection via texture recognition and a neural classifier

    NASA Astrophysics Data System (ADS)

    Patel, Devesh; Hannah, I.; Davies, E. R.

    1993-10-01

    It is rare to find pieces of stone, wood, metal, or glass in food packets, but when they occur, these foreign objects (FOs) cause distress to the consumer and concern to the manufacturer. When x-ray imaging is used to detect FOs within food bags, hard contaminants such as stone or metal appear darker, whereas soft contaminants such as wood or rubber appear slightly lighter than the food substrate. In this paper we concentrate on the detection of soft contaminants, such as small pieces of wood, in bags of frozen corn kernels. Convolution masks are used to generate textural features, which are then classified into corresponding homogeneous regions of the image using an artificial neural network (ANN) classifier. The separate ANN outputs are combined using a majority operator, and region discrepancies are removed by a median filter. Comparisons with classical classifiers showed the ANN approach to have the best overall combination of characteristics for our particular problem. The detected boundaries are in good agreement with the visually perceived segmentations.
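
    A minimal sketch in this spirit: a few small convolution masks generate per-pixel texture features, a neural network classifies each pixel, and a median filter removes isolated misclassifications. The Laws-style masks, the single MLP standing in for the separate ANN outputs and majority operator, and the toy two-texture image are all assumptions.

```python
# Sketch: convolution-mask texture features -> MLP pixel classifier -> median filter.
import numpy as np
from scipy.ndimage import convolve, median_filter
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64))
image[:, 32:] = convolve(image[:, 32:], np.ones((3, 3)) / 9.0)   # smoother right half

vectors = ([1, 4, 6, 4, 1], [-1, -2, 0, 2, 1])                   # Laws-style L5 and E5
masks = [np.outer(a, b) for a in vectors for b in vectors]
features = np.stack([np.abs(convolve(image, m / np.abs(m).sum())) for m in masks],
                    axis=-1).reshape(-1, len(masks))

labels = np.zeros((64, 64), dtype=int)
labels[:, 32:] = 1                                               # toy ground truth
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(features, labels.ravel())

predicted = clf.predict(features).reshape(64, 64)
cleaned = median_filter(predicted, size=5)                       # remove isolated errors
print("pixel agreement:", float((cleaned == labels).mean()))
```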

  6. Space-object identification using spatial pattern recognition

    NASA Astrophysics Data System (ADS)

    Silversmith, Paul E.

    The traditional method of determining spacecraft attitude with a star tracker is by comparing the angle measurements between stars within a certain field-of-view (FOV) with that of angle measurements in a catalog. This technique is known as the angle method. A new approach, the planar triangle method (PTM), uses the properties of planar triangles to compare stars in a FOV with stars in a catalog. Specifically, the area and polar moment of planar triangle combinations are the comparison parameters used in the method. The PTM has been shown to provide a more consistent success rate than that of the traditional angle method. The work herein presents a technique of data association through the use of the planar triangle method. Instead of comparing the properties of stars with that of a catalog, a comparison is made between the properties of resident space objects (RSOs) and a catalog comprised of Fengyun 1C debris data and simulated data. It is shown that the planar triangle method is effective in RSO identification and is robust to the presence of measurement and sensor error.
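
    The two triangle invariants at the heart of the planar triangle method, the area and the polar moment, are straightforward to compute from three position vectors: Heron's formula gives the area A from the side lengths, and J = A(a^2 + b^2 + c^2)/36 gives the polar moment about the centroid. The sketch below uses toy unit vectors as placeholders for catalogued or observed objects.

```python
# Sketch: planar-triangle invariants (area and polar moment) for all triples
# of toy line-of-sight unit vectors.
import numpy as np
from itertools import combinations

def triangle_invariants(p1, p2, p3):
    a = np.linalg.norm(p1 - p2)
    b = np.linalg.norm(p2 - p3)
    c = np.linalg.norm(p3 - p1)
    s = 0.5 * (a + b + c)
    area = np.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))    # Heron's formula
    polar_moment = area * (a**2 + b**2 + c**2) / 36.0            # about the centroid
    return area, polar_moment

rng = np.random.default_rng(0)
catalog = rng.normal(size=(6, 3))
catalog /= np.linalg.norm(catalog, axis=1, keepdims=True)        # toy unit vectors

for i, j, k in combinations(range(len(catalog)), 3):
    area, J = triangle_invariants(catalog[i], catalog[j], catalog[k])
    print((i, j, k), round(area, 4), round(J, 6))
```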

  7. Central administration of angiotensin IV rapidly enhances novel object recognition among mice.

    PubMed

    Paris, Jason J; Eans, Shainnel O; Mizrachi, Elisa; Reilley, Kate J; Ganno, Michelle L; McLaughlin, Jay P

    2013-07-01

    Angiotensin IV (Val(1)-Tyr(2)-Ile(3)-His(4)-Pro(5)-Phe(6)) has demonstrated potential cognitive-enhancing effects. The present investigation assessed and characterized: (1) dose-dependency of angiotensin IV's cognitive enhancement in a C57BL/6J mouse model of novel object recognition, (2) the time-course for these effects, (3) the identity of residues in the hexapeptide important to these effects and (4) the necessity of actions at angiotensin IV receptors for procognitive activity. Assessment of C57BL/6J mice in a novel object recognition task demonstrated that prior administration of angiotensin IV (0.1, 1.0, or 10.0, but not 0.01 nmol, i.c.v.) significantly enhanced novel object recognition in a dose-dependent manner. These effects were time dependent, with improved novel object recognition observed when angiotensin IV (0.1 nmol, i.c.v.) was administered 10 or 20, but not 30 min prior to the onset of the novel object recognition testing. An alanine scan of the angiotensin IV peptide revealed that replacement of the Val(1), Ile(3), His(4), or Phe(6) residues with Ala attenuated peptide-induced improvements in novel object recognition, whereas Tyr(2) or Pro(5) replacement did not significantly affect performance. Administration of the angiotensin IV receptor antagonist, divalinal-Ang IV (20 nmol, i.c.v.), reduced (but did not abolish) novel object recognition; however, this antagonist completely blocked the procognitive effects of angiotensin IV (0.1 nmol, i.c.v.) in this task. Rotorod testing demonstrated no locomotor effects with any angiotensin IV or divalinal-Ang IV dose tested. These data demonstrate that angiotensin IV produces a rapid enhancement of associative learning and memory performance in a mouse model that was dependent on the angiotensin IV receptor.

  8. Toward a unified model of face and object recognition in the human visual system

    PubMed Central

    Wallis, Guy

    2013-01-01

    Our understanding of the mechanisms and neural substrates underlying visual recognition has made considerable progress over the past 30 years. During this period, accumulating evidence has led many scientists to conclude that objects and faces are recognised in fundamentally distinct ways, and in fundamentally distinct cortical areas. In the psychological literature, in particular, this dissociation has led to a palpable disconnect between theories of how we process and represent the two classes of object. This paper follows a trend in part of the recognition literature to try to reconcile what we know about these two forms of recognition by considering the effects of learning. Taking a widely accepted, self-organizing model of object recognition, this paper explains how such a system is affected by repeated exposure to specific stimulus classes. In so doing, it explains how many aspects of recognition generally regarded as unusual to faces (holistic processing, configural processing, sensitivity to inversion, the other-race effect, the prototype effect, etc.) are emergent properties of category-specific learning within such a system. Overall, the paper describes how a single model of recognition learning can and does produce the seemingly very different types of representation associated with faces and objects. PMID:23966963

  9. Effects of selective neonatal hippocampal lesions on tests of object and spatial recognition memory in monkeys.

    PubMed

    Heuer, Eric; Bachevalier, Jocelyne

    2011-04-01

    Earlier studies in monkeys have reported mild impairment in recognition memory after nonselective neonatal hippocampal lesions. To assess whether the memory impairment could have resulted from damage to cortical areas adjacent to the hippocampus, we tested adult monkeys with neonatal focal hippocampal lesions and sham-operated controls in three recognition tasks: delayed nonmatching-to-sample, object memory span, and spatial memory span. Further, to rule out that normal performance on these tasks may relate to functional sparing following neonatal hippocampal lesions, we tested adult monkeys that had received the same focal hippocampal lesions in adulthood and their controls in the same three memory tasks. Both early and late onset focal hippocampal damage did not alter performance on any of the three tasks, suggesting that damage to cortical areas adjacent to the hippocampus was likely responsible for the recognition impairment reported by the earlier studies. In addition, given that animals with early and late onset hippocampal lesions showed object and spatial recognition impairment when tested in a visual paired comparison task, the data suggest that not all object and spatial recognition tasks are solved by hippocampal-dependent memory processes. The current data may not only help explain the neural substrate for the partial recognition memory impairment reported in cases of developmental amnesia, but they are also clinically relevant given that the object and spatial memory tasks used in monkeys are often translated to investigate memory functions in several populations of human infants and children in which dysfunction of the hippocampus is suspected.

  10. Expertise modulates the neural basis of context dependent recognition of objects and their relations.

    PubMed

    Bilalić, Merim; Turella, Luca; Campitelli, Guillermo; Erb, Michael; Grodd, Wolfgang

    2012-11-01

    Recognition of objects and their relations is necessary for orienting in real life. We examined cognitive processes related to recognition of objects, their relations, and the patterns they form by using the game of chess. Chess enables us to compare experts with novices and thus gain insight into the nature of the development of recognition skills. Eye movement recordings showed that experts were generally faster than novices on a task that required enumeration of relations between chess objects because their extensive knowledge enabled them to immediately focus on the objects of interest. The advantage was less pronounced on random positions where the location of chess objects, and thus typical relations between them, was randomized. Neuroimaging data related experts' superior performance to areas along the dorsal stream: bilateral posterior temporal areas and the left inferior parietal lobe were related to recognition of objects and their functions. The bilateral collateral sulci, together with the bilateral retrosplenial cortex, were also more sensitive to normal than to random positions among experts, indicating their involvement in pattern recognition. The pattern of activations suggests that experts engage the same regions as novices, but also that they employ novel additional regions. Expert processing, as the final stage of development, is qualitatively different from novice processing, which can be viewed as the starting stage. Since we are all experts in real life, dealing with meaningful stimuli in typical contexts, our results underline the importance of expert-like cognitive processing for the generalization of laboratory results to everyday life.

  11. Dopamine D1 receptor activation leads to object recognition memory in a coral reef fish.

    PubMed

    Hamilton, Trevor J; Tresguerres, Martin; Kline, David I

    2017-07-01

    Object recognition memory is the ability to identify previously seen objects and is an adaptive mechanism that increases survival for many species throughout the animal kingdom. Previously believed to be possessed by only the highest order mammals, it is now becoming clear that fish are also capable of this type of memory formation. Similar to the mammalian hippocampus, the dorsolateral pallium regulates distinct memory processes and is modulated by neurotransmitters such as dopamine. Caribbean bicolour damselfish (Stegastes partitus) live in complex environments dominated by coral reef structures and thus likely possess many types of complex memory abilities including object recognition. This study used a novel object recognition test in which fish were first presented two identical objects, then after a retention interval of 10 min with no objects, the fish were presented with a novel object and one of the objects they had previously encountered in the first trial. We demonstrate that the dopamine D1-receptor agonist (SKF 38393) induces the formation of object recognition memories in these fish. Thus, our results suggest that dopamine-receptor mediated enhancement of spatial memory formation in fish represents an evolutionarily conserved mechanism in vertebrates. © 2017 The Author(s).
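
    The abstract does not state how exploration of the two objects is scored; a common convention is the discrimination index, shown below as a small sketch with hypothetical exploration times (this scoring is our assumption, not necessarily the authors' exact measure).

    def discrimination_index(novel_s, familiar_s):
        # (novel - familiar) / (novel + familiar); values above zero indicate a preference
        # for the novel object, i.e. memory for the familiar one.
        total = novel_s + familiar_s
        return (novel_s - familiar_s) / total if total else 0.0

    # Hypothetical exploration times (seconds) from one test-phase trial.
    print(discrimination_index(novel_s=32.0, familiar_s=18.0))   # 0.28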

  12. A chicken model for studying the emergence of invariant object recognition

    PubMed Central

    Wood, Samantha M. W.; Wood, Justin N.

    2015-01-01

    “Invariant object recognition” refers to the ability to recognize objects across variation in their appearance on the retina. This ability is central to visual perception, yet its developmental origins are poorly understood. Traditionally, nonhuman primates, rats, and pigeons have been the most commonly used animal models for studying invariant object recognition. Although these animals have many advantages as model systems, they are not well suited for studying the emergence of invariant object recognition in the newborn brain. Here, we argue that newly hatched chicks (Gallus gallus) are an ideal model system for studying the emergence of invariant object recognition. Using an automated controlled-rearing approach, we show that chicks can build a viewpoint-invariant representation of the first object they see in their life. This invariant representation can be built from highly impoverished visual input (three images of an object separated by 15° azimuth rotations) and cannot be accounted for by low-level retina-like or V1-like neuronal representations. These results indicate that newborn neural circuits begin building invariant object representations at the onset of vision and argue for an increased focus on chicks as an animal model for studying invariant object recognition. PMID:25767436

  13. Generalized facilitated diffusion model for DNA-binding proteins with search and recognition states.

    PubMed

    Bauer, Maximilian; Metzler, Ralf

    2012-05-16

    Transcription factors (TFs) such as the lac repressor find their target sequence on DNA at remarkably high rates. In the established Berg-von Hippel model for this search process, the TF alternates between three-dimensional diffusion in the bulk solution and one-dimensional sliding along the DNA chain. To overcome the so-called speed-stability paradox, in similar models the TF was considered as being present in two conformations (search state and recognition state) between which it switches stochastically. Combining both the facilitated diffusion model and alternating states, we obtain a generalized model. We explicitly treat bulk excursions for rodlike chains arranged in parallel and consider a simplified model for coiled DNA. Compared to previously considered facilitated diffusion models, corresponding to limiting cases of our generalized model, we surprisingly find a reduced target search rate. Moreover, at optimal conditions there is no longer an equipartition between the time spent by the protein on and off the DNA chain. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
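
    The alternation between 1D sliding and 3D excursions can be illustrated with a toy Monte Carlo search on a ring of sites. This is our own simplified caricature, not the authors' generalized model, but it reproduces the qualitative point that mixing the two transport modes shortens the search.

    import random

    def search_time(L=500, p_off=0.01, tau_1d=1.0, tau_3d=20.0, target=0, seed=1):
        # One searcher on a ring of L sites; returns the time needed to reach the target.
        rng = random.Random(seed)
        pos, t = rng.randrange(L), 0.0
        while pos != target:
            if rng.random() < p_off:
                pos, t = rng.randrange(L), t + tau_3d                  # 3D excursion: rebind anywhere
            else:
                pos, t = (pos + rng.choice((-1, 1))) % L, t + tau_1d   # 1D sliding step
        return t

    def mean_time(p_off, runs=100):
        return sum(search_time(p_off=p_off, seed=i) for i in range(runs)) / runs

    print("sliding only     :", round(mean_time(0.0, runs=20)))
    print("facilitated (1%) :", round(mean_time(0.01)))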

  14. Generalized Facilitated Diffusion Model for DNA-Binding Proteins with Search and Recognition States

    PubMed Central

    Bauer, Maximilian; Metzler, Ralf

    2012-01-01

    Transcription factors (TFs) such as the lac repressor find their target sequence on DNA at remarkably high rates. In the established Berg-von Hippel model for this search process, the TF alternates between three-dimensional diffusion in the bulk solution and one-dimensional sliding along the DNA chain. To overcome the so-called speed-stability paradox, in similar models the TF was considered as being present in two conformations (search state and recognition state) between which it switches stochastically. Combining both the facilitated diffusion model and alternating states, we obtain a generalized model. We explicitly treat bulk excursions for rodlike chains arranged in parallel and consider a simplified model for coiled DNA. Compared to previously considered facilitated diffusion models, corresponding to limiting cases of our generalized model, we surprisingly find a reduced target search rate. Moreover, at optimal conditions there is no longer an equipartition between the time spent by the protein on and off the DNA chain. PMID:22677385

  15. Hippocampal NMDA receptors are involved in rats' spontaneous object recognition only under high memory load condition.

    PubMed

    Sugita, Manami; Yamada, Kazuo; Iguchi, Natsumi; Ichitani, Yukio

    2015-10-22

    The possible involvement of hippocampal N-methyl-D-aspartate (NMDA) receptors in spontaneous object recognition was investigated in rats under different memory load conditions. We first estimated rats' object memory span using 3-5 objects in "Different Objects Task (DOT)" in order to confirm the highest memory load condition in object recognition memory. Rats were allowed to explore a field in which 3 (3-DOT), 4 (4-DOT), or 5 (5-DOT) different objects were presented. After a delay period, they were placed again in the same field in which one of the sample objects was replaced by another object, and their object exploration behavior was analyzed. Rats could differentiate the novel object from the familiar ones in 3-DOT and 4-DOT but not in 5-DOT, suggesting that rats' object memory span was about 4. Then, we examined the effects of hippocampal AP5 infusion on performance in both 2-DOT (2 different objects were used) and 4-DOT. The drug treatment before the sample phase impaired performance only in 4-DOT. These results suggest that hippocampal NMDA receptors play a critical role in spontaneous object recognition only when the memory load is high.

  16. Sub-OBB based object recognition and localization algorithm using range images

    NASA Astrophysics Data System (ADS)

    Hoang, Dinh-Cuong; Chen, Liang-Chia; Nguyen, Thanh-Hung

    2017-02-01

    This paper presents a novel approach to recognizing and estimating the pose of 3D objects in cluttered range images. The key technical breakthrough of the developed approach is that it enables robust object recognition and localization under undesirable conditions such as environmental illumination variation and optical occlusion that leaves the object only partially visible. First, the acquired point clouds are segmented into individual object point clouds based on the developed 3D object segmentation for randomly stacked objects. Second, an efficient shape-matching algorithm, called Sub-OBB based object recognition, using the proposed oriented bounding box (OBB) regional area-based descriptor, is performed to reliably recognize the object. Then, the 3D position and orientation of the object can be roughly estimated by aligning the OBB of the segmented object point cloud with the OBB of the matched point cloud in a database generated from a CAD model and a 3D virtual camera. To estimate the pose of the object accurately, the iterative closest point (ICP) algorithm is used to match the object model with the segmented point clouds. From feasibility tests over several scenarios, the developed approach is verified to be feasible for object pose recognition and localization.
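
    The coarse pose step, aligning the OBB of the segmented cloud with the OBB of the model, can be sketched with a PCA-based bounding-box frame, as below. The function names and the PCA simplification are ours; the paper's regional-area descriptor and ICP refinement are not reimplemented here.

    import numpy as np

    def obb_frame(points):
        # Centroid and principal axes (columns) of a point cloud, via PCA/SVD.
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
        axes = vt.T
        if np.linalg.det(axes) < 0:       # keep a right-handed frame
            axes[:, -1] *= -1.0
        return centroid, axes

    def coarse_pose(scene_pts, model_pts):
        # Rotation R and translation t that map model coordinates into the scene.
        c_s, A_s = obb_frame(scene_pts)
        c_m, A_m = obb_frame(model_pts)
        R = A_s @ A_m.T
        t = c_s - R @ c_m
        return R, t

    # Synthetic usage: a rotated, shifted copy of a random elongated cloud.
    rng = np.random.default_rng(0)
    model = rng.normal(size=(500, 3)) * np.array([3.0, 1.5, 0.5])
    theta = 0.4
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    scene = model @ R_true.T + np.array([5.0, -2.0, 1.0])
    R, t = coarse_pose(scene, model)
    print(np.round(R, 3), np.round(t, 3))
    # OBB alignment is ambiguous up to 180-degree flips about the box axes; the paper's
    # regional-area descriptor and the ICP refinement are what resolve that ambiguity.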

  17. Changes in Visual Object Recognition Precede the Shape Bias in Early Noun Learning

    PubMed Central

    Yee, Meagan; Jones, Susan S.; Smith, Linda B.

    2012-01-01

    Two of the most formidable skills that characterize human beings are language and our prowess in visual object recognition. They may also be developmentally intertwined. Two experiments, a large sample cross-sectional study and a smaller sample 6-month longitudinal study of 18- to 24-month-olds, tested a hypothesized developmental link between changes in visual object representation and noun learning. Previous findings in visual object recognition indicate that children’s ability to recognize common basic level categories from sparse structural shape representations of object shape emerges between the ages of 18 and 24 months, is related to noun vocabulary size, and is lacking in children with language delay. Other research shows in artificial noun learning tasks that during this same developmental period, young children systematically generalize object names by shape, that this shape bias predicts future noun learning, and is lacking in children with language delay. The two experiments examine the developmental relation between visual object recognition and the shape bias for the first time. The results show that developmental changes in visual object recognition systematically precede the emergence of the shape bias. The results suggest a developmental pathway in which early changes in visual object recognition that are themselves linked to category learning enable the discovery of higher-order regularities in category structure and thus the shape bias in novel noun learning tasks. The proposed developmental pathway has implications for understanding the role of specific experience in the development of both visual object recognition and the shape bias in early noun learning. PMID:23227015

  18. Changes in visual object recognition precede the shape bias in early noun learning.

    PubMed

    Yee, Meagan; Jones, Susan S; Smith, Linda B

    2012-01-01

    Two of the most formidable skills that characterize human beings are language and our prowess in visual object recognition. They may also be developmentally intertwined. Two experiments, a large sample cross-sectional study and a smaller sample 6-month longitudinal study of 18- to 24-month-olds, tested a hypothesized developmental link between changes in visual object representation and noun learning. Previous findings in visual object recognition indicate that children's ability to recognize common basic level categories from sparse structural shape representations of object shape emerges between the ages of 18 and 24 months, is related to noun vocabulary size, and is lacking in children with language delay. Other research shows in artificial noun learning tasks that during this same developmental period, young children systematically generalize object names by shape, that this shape bias predicts future noun learning, and is lacking in children with language delay. The two experiments examine the developmental relation between visual object recognition and the shape bias for the first time. The results show that developmental changes in visual object recognition systematically precede the emergence of the shape bias. The results suggest a developmental pathway in which early changes in visual object recognition that are themselves linked to category learning enable the discovery of higher-order regularities in category structure and thus the shape bias in novel noun learning tasks. The proposed developmental pathway has implications for understanding the role of specific experience in the development of both visual object recognition and the shape bias in early noun learning.

  19. [When shape-invariant recognition ('A' = 'a') fails. A case study of pure alexia and kinesthetic facilitation].

    PubMed

    Diesfeldt, H F A

    2011-06-01

    A right-handed patient, aged 72, manifested alexia without agraphia, a right homonymous hemianopia and an impaired ability to identify visually presented objects. He was completely unable to read words aloud and severely deficient in naming visually presented letters. He responded to orthographic familiarity in the lexical decision tasks of the Psycholinguistic Assessments of Language Processing in Aphasia (PALPA) rather than to the lexicality of the letter strings. He was impaired at deciding whether two letters of different case (e.g., A, a) are the same, though he could detect real letters from made-up ones or from their mirror image. Consequently, his core deficit in reading was posited at the level of the abstract letter identifiers. When asked to trace a letter with his right index finger, kinesthetic facilitation enabled him to read letters and words aloud. Though he could use intact motor representations of letters in order to facilitate recognition and reading, the slow, sequential and error-prone process of reading letter by letter made him abandon further training.

  20. Zif268/Egr1 gain of function facilitates hippocampal synaptic plasticity and long-term spatial recognition memory.

    PubMed

    Penke, Zsuzsa; Morice, Elise; Veyrac, Alexandra; Gros, Alexandra; Chagneau, Carine; LeBlanc, Pascale; Samson, Nathalie; Baumgärtel, Karsten; Mansuy, Isabelle M; Davis, Sabrina; Laroche, Serge

    2014-01-05

    It is well established that Zif268/Egr1, a member of the Egr family of transcription factors, is critical for the consolidation of several forms of memory; however, it is as yet uncertain whether increasing expression of Zif268 in neurons can facilitate memory formation. Here, we used an inducible transgenic mouse model to specifically induce Zif268 overexpression in forebrain neurons and examined the effect on recognition memory and hippocampal synaptic transmission and plasticity. We found that Zif268 overexpression during the establishment of memory for objects did not change the ability to form a long-term memory of objects, but enhanced the capacity to form a long-term memory of the spatial location of objects. This enhancement was paralleled by increased long-term potentiation in the dentate gyrus of the hippocampus and by increased activity-dependent expression of Zif268 and selected Zif268 target genes. These results provide novel evidence that transcriptional mechanisms engaging Zif268 contribute to determining the strength of newly encoded memories.

  1. Zif268/Egr1 gain of function facilitates hippocampal synaptic plasticity and long-term spatial recognition memory

    PubMed Central

    Penke, Zsuzsa; Morice, Elise; Veyrac, Alexandra; Gros, Alexandra; Chagneau, Carine; LeBlanc, Pascale; Samson, Nathalie; Baumgärtel, Karsten; Mansuy, Isabelle M.; Davis, Sabrina; Laroche, Serge

    2014-01-01

    It is well established that Zif268/Egr1, a member of the Egr family of transcription factors, is critical for the consolidation of several forms of memory; however, it is as yet uncertain whether increasing expression of Zif268 in neurons can facilitate memory formation. Here, we used an inducible transgenic mouse model to specifically induce Zif268 overexpression in forebrain neurons and examined the effect on recognition memory and hippocampal synaptic transmission and plasticity. We found that Zif268 overexpression during the establishment of memory for objects did not change the ability to form a long-term memory of objects, but enhanced the capacity to form a long-term memory of the spatial location of objects. This enhancement was paralleled by increased long-term potentiation in the dentate gyrus of the hippocampus and by increased activity-dependent expression of Zif268 and selected Zif268 target genes. These results provide novel evidence that transcriptional mechanisms engaging Zif268 contribute to determining the strength of newly encoded memories. PMID:24298160

  2. Representations of Shape in Object Recognition and Long-Term Visual Memory

    DTIC Science & Technology

    1993-02-11

    Pinker (1989) proposed the Multiple-Views-Plus-Transformation theory of object recognition. The foundation of this theory is that objects are represented... Pinker (1990) have shown that such shapes are immediately and consistently recognized independently of their orientation. Consequentially, throughout...along which parts may be located. Tarr and Pinker have shown that such contrasts lead to the use of orientation-dependent recognition mechanisms utilizing

  3. Effect of Metaphoric (Visual/Verbal) Strategies in Facilitating Student Achievement of Different Educational Objectives.

    ERIC Educational Resources Information Center

    Williams, Vicki Sloan; Dwyer, Francis M.

    1999-01-01

    Describes a study of college students that examined the instructional effect of visual and verbal metaphors in facilitating student achievement of different educational objectives. The effect of students' verbal ability and the amount of time they spent interacting with their respective instructional modules were also measured. (Author/LRW)

  4. Crowded and Sparse Domains in Object Recognition: Consequences for Categorization and Naming

    ERIC Educational Resources Information Center

    Gale, Tim M.; Laws, Keith R.; Foley, Kerry

    2006-01-01

    Some models of object recognition propose that items from structurally crowded categories (e.g., living things) permit faster access to superordinate semantic information than structurally dissimilar categories (e.g., nonliving things), but slower access to individual object information when naming items. We present four experiments that utilize…

  6. Modeling guidance and recognition in categorical search: bridging human and computer object detection.

    PubMed

    Zelinsky, Gregory J; Peng, Yifan; Berg, Alexander C; Samaras, Dimitris

    2013-10-08

    Search is commonly described as a repeating cycle of guidance to target-like objects, followed by the recognition of these objects as targets or distractors. Are these indeed separate processes using different visual features? We addressed this question by comparing observer behavior to that of support vector machine (SVM) models trained on guidance and recognition tasks. Observers searched for a categorically defined teddy bear target in four-object arrays. Target-absent trials consisted of random category distractors rated in their visual similarity to teddy bears. Guidance, quantified as first-fixated objects during search, was strongest for targets, followed by target-similar, medium-similarity, and target-dissimilar distractors. False positive errors to first-fixated distractors also decreased with increasing dissimilarity to the target category. To model guidance, nine teddy bear detectors, using features ranging in biological plausibility, were trained on unblurred bears then tested on blurred versions of the same objects appearing in each search display. Guidance estimates were based on target probabilities obtained from these detectors. To model recognition, nine bear/nonbear classifiers, trained and tested on unblurred objects, were used to classify the object that would be fixated first (based on the detector estimates) as a teddy bear or a distractor. Patterns of categorical guidance and recognition accuracy were modeled almost perfectly by an HMAX model in combination with a color histogram feature. We conclude that guidance and recognition in the context of search are not separate processes mediated by different features, and that what the literature knows as guidance is really recognition performed on blurred objects viewed in the visual periphery.
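
    The train-on-sharp, test-on-blurred protocol can be mimicked with off-the-shelf tools. The toy below substitutes scikit-learn's digits for teddy bears and distractors and a single SVM for the nine detectors, so the numbers are illustrative only.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    digits = load_digits()
    X_tr, X_te, y_tr, y_te = train_test_split(digits.images, digits.target, random_state=0)

    def flat(images, sigma=0.0):
        # Optionally blur each image (peripheral degradation), then flatten to feature vectors.
        if sigma > 0:
            images = np.stack([gaussian_filter(im, sigma) for im in images])
        return images.reshape(len(images), -1)

    clf = SVC(kernel="rbf", gamma="scale").fit(flat(X_tr), y_tr)   # trained on unblurred images
    print("recognition (sharp test) :", round(clf.score(flat(X_te), y_te), 3))
    print("guidance (blurred test)  :", round(clf.score(flat(X_te, sigma=1.0), y_te), 3))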

  7. Modeling guidance and recognition in categorical search: Bridging human and computer object detection

    PubMed Central

    Zelinsky, Gregory J.; Peng, Yifan; Berg, Alexander C.; Samaras, Dimitris

    2013-01-01

    Search is commonly described as a repeating cycle of guidance to target-like objects, followed by the recognition of these objects as targets or distractors. Are these indeed separate processes using different visual features? We addressed this question by comparing observer behavior to that of support vector machine (SVM) models trained on guidance and recognition tasks. Observers searched for a categorically defined teddy bear target in four-object arrays. Target-absent trials consisted of random category distractors rated in their visual similarity to teddy bears. Guidance, quantified as first-fixated objects during search, was strongest for targets, followed by target-similar, medium-similarity, and target-dissimilar distractors. False positive errors to first-fixated distractors also decreased with increasing dissimilarity to the target category. To model guidance, nine teddy bear detectors, using features ranging in biological plausibility, were trained on unblurred bears then tested on blurred versions of the same objects appearing in each search display. Guidance estimates were based on target probabilities obtained from these detectors. To model recognition, nine bear/nonbear classifiers, trained and tested on unblurred objects, were used to classify the object that would be fixated first (based on the detector estimates) as a teddy bear or a distractor. Patterns of categorical guidance and recognition accuracy were modeled almost perfectly by an HMAX model in combination with a color histogram feature. We conclude that guidance and recognition in the context of search are not separate processes mediated by different features, and that what the literature knows as guidance is really recognition performed on blurred objects viewed in the visual periphery. PMID:24105460

  8. Perirhinal cortex resolves feature ambiguity in configural object recognition and perceptual oddity tasks.

    PubMed

    Bartko, Susan J; Winters, Boyer D; Cowell, Rosemary A; Saksida, Lisa M; Bussey, Timothy J

    2007-12-01

    The perirhinal cortex (PRh) has a well-established role in object recognition memory. More recent studies suggest that PRh is also important for two-choice visual discrimination tasks. Specifically, it has been suggested that PRh contains conjunctive representations that help resolve feature ambiguity, which occurs when a task cannot easily be solved on the basis of features alone. However, no study has examined whether the ability of PRh to resolve configural feature ambiguity is related to its role in object recognition. Therefore, we examined whether bilateral excitotoxic lesions of PRh or PPRh (perirhinal plus post-rhinal cortices) in the rat would cause deficits in a configural spontaneous object recognition task, and a configural simultaneous oddity discrimination task, in which the task could not be solved on the basis of features, but could only be solved using conjunctive representations. As predicted by simulations using a computational model, rats with PPRh lesions were impaired during a minimal-delay configural object recognition task. These same rats were impaired during a zero-delay configural object recognition task. Furthermore, rats with localized PRh lesions were impaired in a configural simultaneous oddity discrimination task. These findings support the idea that PRh contains conjunctive representations for the resolution of feature ambiguity and that these representations underlie a dual role for PRh in memory and perception.
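
    Why conjunctive representations resolve feature ambiguity can be shown with a toy example: two configurations built from the same parts are indistinguishable by a feature-only code but distinct under a code over part conjunctions. The encoding below is a deliberately minimal stand-in, not the paper's computational model.

    def feature_code(obj):
        # Which parts are present, ignoring how they are arranged.
        return sorted(obj)

    def conjunctive_code(obj):
        # Adjacent part pairings: a crude "configural" representation.
        return sorted(tuple(sorted(pair)) for pair in zip(obj, obj[1:]))

    ab_cd = ("A", "B", "C", "D")
    ad_cb = ("A", "D", "C", "B")   # same parts, different arrangement
    print(feature_code(ab_cd) == feature_code(ad_cb))         # True  -> features alone are ambiguous
    print(conjunctive_code(ab_cd) == conjunctive_code(ad_cb)) # False -> conjunctions disambiguate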

  9. Intraperirhinal cortex administration of the synthetic cannabinoid, HU210, disrupts object recognition memory in rats.

    PubMed

    Sticht, Martin A; Jacklin, Derek L; Mechoulam, Raphael; Parker, Linda A; Winters, Boyer D

    2015-03-25

    Cannabinoids disrupt learning and memory in human and nonhuman participants. Object recognition memory, which is particularly susceptible to the impairing effects of cannabinoids, relies critically on the perirhinal cortex (PRh); however, to date, the effects of cannabinoids within PRh have not been assessed. In the present study, we evaluated the effects of localized administration of the synthetic cannabinoid, HU210 (0.01, 1.0 μg/hemisphere), into PRh on spontaneous object recognition in Long-Evans rats. Animals received intra-PRh infusions of HU210 before the sample phase, and object recognition memory was assessed at various delays in a subsequent retention test. We found that presample intra-PRh HU210 dose dependently (1.0 μg but not 0.01 μg) interfered with spontaneous object recognition performance, exerting an apparently more pronounced effect when memory demands were increased. These novel findings show that cannabinoid agonists in PRh disrupt object recognition memory.

  10. Object recognition in clutter: cortical responses depend on the type of learning

    PubMed Central

    Hegdé, Jay; Thompson, Serena K.; Brady, Mark; Kersten, Daniel

    2012-01-01

    Theoretical studies suggest that the visual system uses prior knowledge of visual objects to recognize them in visual clutter, and posit that the strategies for recognizing objects in clutter may differ depending on whether or not the object was learned in clutter to begin with. We tested this hypothesis using functional magnetic resonance imaging (fMRI) of human subjects. We trained subjects to recognize naturalistic, yet novel objects in strong or weak clutter. We then tested subjects' recognition performance for both sets of objects in strong clutter. We found many brain regions that were differentially responsive to objects during object recognition depending on whether they were learned in strong or weak clutter. In particular, the responses of the left fusiform gyrus (FG) reliably reflected, on a trial-to-trial basis, subjects' object recognition performance for objects learned in the presence of strong clutter. These results indicate that the visual system does not use a single, general-purpose mechanism to cope with clutter. Instead, there are two distinct spatial patterns of activation whose responses are attributable not to the visual context in which the objects were seen, but to the context in which the objects were learned. PMID:22723774

  11. Performance of a neural-network-based 3-D object recognition system

    NASA Astrophysics Data System (ADS)

    Rak, Steven J.; Kolodzy, Paul J.

    1991-08-01

    Object recognition in laser radar sensor imagery is a challenging application of neural networks. The task involves recognition of objects at a variety of distances and aspects with significant levels of sensor noise. These variables are related to sensor parameters such as sensor signal strength and angular resolution, as well as object range and viewing aspect. The effect of these parameters on a fixed recognition system based on log-polar mapped features and an unsupervised neural network classifier is investigated. This work is an attempt to quantify the design parameters of a laser radar measurement system with respect to classifying and/or identifying objects by the shape of their silhouettes. Experiments with vehicle silhouettes rotated through 90 deg-of-view angle from broadside to head-on ('out-of-plane' rotation) have been used to quantify the performance of a log-polar map/neural-network based 3-D object recognition system. These experiments investigated several key issues such as category stability, category memory compression, image fidelity, and viewing aspect. Initial results indicate a compression from 720 possible categories (8 vehicles X 90 out-of-plane rotations) to a classifier memory with approximately 30 stable recognition categories. These results parallel the human experience of studying an object from several viewing angles yet recognizing it through a wide range of viewing angles. Results are presented illustrating category formation for an eight vehicle dataset as a function of several sensor parameters. These include: (1) sensor noise, as a function of carrier-to-noise ratio; (2) pixels on the vehicle, related to angular resolution and target range; and (3) viewing aspect, as related to sensor-to-platform depression angle. This work contributes to the formation of a three-dimensional object recognition system.
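
    A log-polar mapping of a silhouette, the feature stage this record relies on, can be sketched in a few lines of NumPy; the grid sizes, the nearest-neighbour sampling, and the toy silhouette are arbitrary choices of ours.

    import numpy as np

    def log_polar(image, n_rho=32, n_theta=64):
        # Sample the image on a log-polar grid centred on the image midpoint (nearest neighbour).
        h, w = image.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        rho = np.exp(np.linspace(0.0, np.log(np.hypot(cy, cx)), n_rho))
        theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
        rr = np.clip(np.round(cy + rho[:, None] * np.sin(theta)[None, :]).astype(int), 0, h - 1)
        cc = np.clip(np.round(cx + rho[:, None] * np.cos(theta)[None, :]).astype(int), 0, w - 1)
        return image[rr, cc]                     # shape (n_rho, n_theta)

    # Toy silhouette: a filled rectangle standing in for a vehicle outline.
    img = np.zeros((128, 128))
    img[48:80, 30:100] = 1.0
    features = log_polar(img)
    print(features.shape)                        # (32, 64) map fed to the classifier stage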

  12. Grouping in object recognition: the role of a Gestalt law in letter identification.

    PubMed

    Pelli, Denis G; Majaj, Najib J; Raizman, Noah; Christian, Christopher J; Kim, Edward; Palomares, Melanie C

    2009-02-01

    The Gestalt psychologists reported a set of laws describing how vision groups elements to recognize objects. The Gestalt laws "prescribe for us what we are to recognize 'as one thing'" (Kohler, 1920). Were they right? Does object recognition involve grouping? Tests of the laws of grouping have been favourable, but mostly assessed only detection, not identification, of the compound object. The grouping of elements seen in the detection experiments with lattices and "snakes in the grass" is compelling, but falls far short of the vivid everyday experience of recognizing a familiar, meaningful, named thing, which mediates the ordinary identification of an object. Thus, after nearly a century, there is hardly any evidence that grouping plays a role in ordinary object recognition. To assess grouping in object recognition, we made letters out of grating patches and measured threshold contrast for identifying these letters in visual noise as a function of perturbation of grating orientation, phase, and offset. We define a new measure, "wiggle", to characterize the degree to which these various perturbations violate the Gestalt law of good continuation. We find that efficiency for letter identification is inversely proportional to wiggle and is wholly determined by wiggle, independent of how the wiggle was produced. Thus the effects of three different kinds of shape perturbation on letter identifiability are predicted by a single measure of goodness of continuation. This shows that letter identification obeys the Gestalt law of good continuation and may be the first confirmation of the original Gestalt claim that object recognition involves grouping.

  13. Contributions of low and high spatial frequency processing to impaired object recognition circuitry in schizophrenia.

    PubMed

    Calderone, Daniel J; Hoptman, Matthew J; Martínez, Antígona; Nair-Collins, Sangeeta; Mauro, Cristina J; Bar, Moshe; Javitt, Daniel C; Butler, Pamela D

    2013-08-01

    Patients with schizophrenia exhibit cognitive and sensory impairment, and object recognition deficits have been linked to sensory deficits. The "frame and fill" model of object recognition posits that low spatial frequency (LSF) information rapidly reaches the prefrontal cortex (PFC) and creates a general shape of an object that feeds back to the ventral temporal cortex to assist object recognition. Visual dysfunction findings in schizophrenia suggest a preferential loss of LSF information. This study used functional magnetic resonance imaging (fMRI) and resting state functional connectivity (RSFC) to investigate the contribution of visual deficits to impaired object "framing" circuitry in schizophrenia. Participants were shown object stimuli that were intact or contained only LSF or high spatial frequency (HSF) information. For controls, fMRI revealed preferential activation to LSF information in precuneus, superior temporal, and medial and dorsolateral PFC areas, whereas patients showed a preference for HSF information or no preference. RSFC revealed a lack of connectivity between early visual areas and PFC for patients. These results demonstrate impaired processing of LSF information during object recognition in schizophrenia, with patients instead displaying increased processing of HSF information. This is consistent with findings of a preference for local over global visual information in schizophrenia.
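
    Stimuli restricted to low or high spatial frequencies, as described here, are commonly produced with a Gaussian low-pass filter and its residual; the cutoffs below are arbitrary and the synthetic square is only a stand-in for an object photograph.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Stand-in "object image": a bright square on a dark background.
    image = np.zeros((256, 256))
    image[96:160, 96:160] = 1.0

    lsf = gaussian_filter(image, sigma=8.0)            # low spatial frequencies only
    hsf = image - gaussian_filter(image, sigma=2.0)    # high spatial frequencies (residual)
    print(round(float(lsf.std()), 3), round(float(hsf.std()), 3))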

  14. Exploring tiny images: the roles of appearance and contextual information for machine and human object recognition.

    PubMed

    Parikh, Devi; Zitnick, C Lawrence; Chen, Tsuhan

    2012-10-01

    Typically, object recognition is performed based solely on the appearance of the object. However, relevant information also exists in the scene surrounding the object. In this paper, we explore the roles that appearance and contextual information play in object recognition. Through machine experiments and human studies, we show that the importance of contextual information varies with the quality of the appearance information, such as an image's resolution. Our machine experiments explicitly model context between object categories through the use of relative location and relative scale, in addition to co-occurrence. With the use of our context model, our algorithm achieves state-of-the-art performance on the MSRC and Corel data sets. We perform recognition tests for machines and human subjects on low and high resolution images, which vary significantly in the amount of appearance information present, using just the object appearance information, the combination of appearance and context, as well as just context without object appearance information (blind recognition). We also explore the impact of the different sources of context (co-occurrence, relative-location, and relative-scale). We find that the importance of different types of contextual information varies significantly across data sets such as MSRC and PASCAL.
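
    The co-occurrence component of such a context model can be sketched as a simple re-scoring of appearance probabilities; the relative-location and relative-scale terms are omitted, and all numbers below are invented.

    def rescore(appearance, cooccurrence, scene_objects):
        # appearance: class -> p(class | appearance); cooccurrence[a][b]: p(b present | a present).
        scores = {}
        for cls, p_app in appearance.items():
            context = 1.0
            for other in scene_objects:
                context *= cooccurrence[other].get(cls, 1e-3)
            scores[cls] = p_app * context
        z = sum(scores.values())
        return {c: s / z for c, s in scores.items()}

    appearance = {"cow": 0.45, "boat": 0.55}                  # ambiguous low-resolution blob
    cooccurrence = {"grass": {"cow": 0.6, "boat": 0.05}}
    print(rescore(appearance, cooccurrence, scene_objects=["grass"]))   # cow now dominates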

  15. How basic-level objects facilitate question-asking in a categorization task

    PubMed Central

    Ruggeri, Azzurra; Feufel, Markus A.

    2015-01-01

    The ability to categorize information is essential to everyday tasks such as identifying the cause of an event given a set of likely explanations or pinpointing the correct from a set of possible diagnoses by sequentially probing questions. In three studies, we investigated how the level of inclusiveness at which objects are presented (basic-level vs. subordinate-level) influences children's (7- and 10-year-olds) and adults' performance in a sequential binary categorization task. Study 1 found a robust facilitating effect of basic-level objects on the ability to ask effective questions in a computerized version of the Twenty Questions game. Study 2 suggested that this facilitating effect might be due to the kinds of object-differentiating features participants generate when provided with basic-level as compared to subordinate-level objects. Study 3 ruled out the alternative hypothesis that basic-level objects facilitate the selection of the most efficient among a given set of features. PMID:26217262
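
    A standard way to formalize how effective a yes/no question is, and one plausible reading of the efficiency measure used in such tasks, is expected information gain; the sketch below computes it for a toy hypothesis space (this is our assumption, not necessarily the authors' exact scoring).

    import math

    def entropy(p):
        return -sum(q * math.log2(q) for q in p if q > 0)

    def expected_information_gain(prior, yes_set):
        # prior: dict hypothesis -> probability; yes_set: hypotheses for which the answer is "yes".
        p_yes = sum(prior[h] for h in yes_set)
        p_no = 1.0 - p_yes
        post_yes = [prior[h] / p_yes for h in yes_set] if p_yes else []
        post_no = [prior[h] / p_no for h in prior if h not in yes_set] if p_no else []
        return entropy(prior.values()) - (p_yes * entropy(post_yes) + p_no * entropy(post_no))

    # Four equally likely objects; a basic-level question ("is it an animal?") splitting them
    # 2/2 yields a full bit, a subordinate-level question singling out one object yields less.
    prior = {"dog": 0.25, "cat": 0.25, "chair": 0.25, "table": 0.25}
    print(expected_information_gain(prior, {"dog", "cat"}))   # 1.0
    print(expected_information_gain(prior, {"dog"}))          # ~0.81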

  16. Priming of Visual Search Facilitates Attention Shifts: Evidence From Object-Substitution Masking.

    PubMed

    Kristjánsson, Árni

    2016-03-01

    Priming of visual search strongly affects visual function, releasing items from crowding; during free choice, primed targets are chosen over unprimed ones. Two accounts of priming have been proposed: attentional facilitation of primed features and postperceptual episodic memory retrieval that involves mapping responses to visual events. Here, well-known masking effects were used to assess the two accounts. Object-substitution masking has been considered to reflect attentional processing: It does not occur when a target is precued and is strengthened when distractors are present. Conversely, metacontrast masking has been connected to lower level processing where attention exerts little effect. If priming facilitates attention shifts, it should mitigate object-substitution masking, while lower level masking might not be similarly influenced. Observers searched for an odd-colored target among distractors. Unpredictably (on 20% of trials), object-substitution masks or metacontrast masks appeared around the target. Object-substitution masking was strongly mitigated for primed target colors, while metacontrast masking was mostly unaffected. This argues against episodic retrieval accounts of priming, placing the priming locus firmly within the realm of attentional processing. The results suggest that priming of visual search facilitates attention shifts to the target, which allows better spatiotemporal resolution that overcomes object-substitution masking.

  17. Object Recognition and Object Segregation in 4.5-Month-Old Infants.

    ERIC Educational Resources Information Center

    Needham, Amy

    2001-01-01

    Investigated in 6 experiments how 4.5-month-old infants' perception of a display is affected by an immediate prior experience with an object similar to part of the test display. Found that infants' use of a prior experience is disrupted by changes in the features of the object, but not by change in its spatial orientation. (JPB)

  18. Spontaneous object recognition: a promising approach to the comparative study of memory

    PubMed Central

    Blaser, Rachel; Heyser, Charles

    2015-01-01

    Spontaneous recognition of a novel object is a popular measure of exploratory behavior, perception and recognition memory in rodent models. Because of its relative simplicity and speed of testing, the variety of stimuli that can be used, and its ecological validity across species, it is also an attractive task for comparative research. To date, variants of this test have been used with vertebrate and invertebrate species, but the methods have seldom been sufficiently standardized to allow cross-species comparison. Here, we review the methods necessary for the study of novel object recognition in mammalian and non-mammalian models, as well as the results of these experiments. Critical to the use of this test is an understanding of the organism’s initial response to a novel object, the modulation of exploration by context, and species differences in object perception and exploratory behaviors. We argue that with appropriate consideration of species differences in perception, object affordances, and natural exploratory behaviors, the spontaneous object recognition test can be a valid and versatile tool for translational research with non-mammalian models. PMID:26217207

  19. Combining feature- and correspondence-based methods for visual object recognition.

    PubMed

    Westphal, Günter; Würtz, Rolf P

    2009-07-01

    We present an object recognition system built on a combination of feature- and correspondence-based pattern recognizers. The feature-based part, called preselection network, is a single-layer feedforward network weighted with the amount of information contributed by each feature to the decision at hand. For processing arbitrary objects, we employ small, regular graphs whose nodes are attributed with Gabor amplitudes, termed parquet graphs. The preselection network can quickly rule out most irrelevant matches and leaves only the ambiguous cases, so-called model candidates, to be verified by a rudimentary version of elastic graph matching, a standard correspondence-based technique for face and object recognition. According to the model, graphs are constructed that describe the object in the input image well. We report the results of experiments on standard databases for object recognition. The method achieved high recognition rates on identity and pose. Unlike many other models, it can also cope with varying background, multiple objects, and partial occlusion.
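
    The parquet-graph node attributes, Gabor amplitudes sampled at the nodes of a small regular grid, can be approximated as below; the kernel parameters, the 3x3 grid, and the toy image are illustrative choices rather than the paper's settings.

    import numpy as np
    from scipy.signal import convolve2d

    def gabor_amplitude(image, freq, theta, sigma=4.0, size=21):
        # Amplitude of the response to a complex Gabor kernel (real and imaginary parts convolved separately).
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        real = convolve2d(image, env * np.cos(2 * np.pi * freq * xr), mode="same")
        imag = convolve2d(image, env * np.sin(2 * np.pi * freq * xr), mode="same")
        return np.hypot(real, imag)

    def parquet_features(image, grid=3, freqs=(0.1, 0.2), thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
        # Gabor amplitudes at a regular grid of node positions -> one feature vector per node.
        responses = [gabor_amplitude(image, f, t) for f in freqs for t in thetas]
        h, w = image.shape
        rows = np.linspace(h * 0.25, h * 0.75, grid).astype(int)
        cols = np.linspace(w * 0.25, w * 0.75, grid).astype(int)
        return np.array([[r[i, j] for r in responses] for i in rows for j in cols])

    img = np.zeros((64, 64))
    img[16:48, 24:40] = 1.0                      # toy object
    jets = parquet_features(img)
    print(jets.shape)                            # (9 nodes, 8 Gabor amplitudes each)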

  20. Cultural differences in visual object recognition in 3-year-old children

    PubMed Central

    Kuwabara, Megumi; Smith, Linda B.

    2016-01-01

    Recent research indicates that culture penetrates fundamental processes of perception and cognition (e.g. Nisbett & Miyamoto, 2005). Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. The three tasks (n=128) examined the degree to which nonface object recognition by 3 year olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects in which only 3 diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children and likelihood of recognition increased for U.S., but not Japanese children when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural progressing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children’s recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than U.S. children. The pattern suggests early cross-cultural differences in visual processing; findings that raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. PMID:26985576

  1. Orientation estimation of anatomical structures in medical images for object recognition

    NASA Astrophysics Data System (ADS)

    Bağci, Ulaş; Udupa, Jayaram K.; Chen, Xinjian

    2011-03-01

    Recognition of anatomical structures is an important step in model based medical image segmentation. It provides pose estimation of objects and information about roughly "where" the objects are in the image, distinguishing them from other object-like entities. In [1], we presented a general method of model-based multi-object recognition to assist in segmentation (delineation) tasks. It exploits the pose relationship that can be encoded, via the concept of ball scale (b-scale), between the binary training objects and their associated grey images. The goal was to place the model, in a single shot, close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. Unlike position and scale parameters, we observe that orientation parameters require more attention when estimating the pose of the model, as even small differences in orientation parameters can lead to inappropriate recognition. Motivated by the non-Euclidean nature of the pose information, we propose in this paper the use of non-Euclidean metrics to estimate the orientation of the anatomical structures for more accurate recognition and segmentation. We statistically analyze and evaluate the following metrics for orientation estimation: Euclidean, Log-Euclidean, Root-Euclidean, Procrustes Size-and-Shape, and mean Hermitian metrics. The results show that mean Hermitian and Cholesky decomposition metrics provide more accurate orientation estimates than other Euclidean and non-Euclidean metrics.
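
    The contrast between a Euclidean and a non-Euclidean treatment of orientation can be made concrete with two of the simplest distances between rotation matrices; this illustrates only the distinction, not the specific metrics or statistical analysis of the paper.

    import numpy as np

    def rot_z(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def euclidean_distance(R1, R2):
        # Frobenius norm of the difference: treats the matrices as flat vectors.
        return float(np.linalg.norm(R1 - R2))

    def geodesic_distance(R1, R2):
        # Rotation angle of R1^T R2, i.e. length of the shortest path on SO(3).
        cos_angle = (np.trace(R1.T @ R2) - 1.0) / 2.0
        return float(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

    R_a, R_b = rot_z(0.1), rot_z(0.4)
    print(euclidean_distance(R_a, R_b), geodesic_distance(R_a, R_b))  # geodesic is exactly 0.3 rad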

  2. Organization of face and object recognition in modular neural network models.

    PubMed

    Dailey, M. N.; Cottrell, G. W.

    1999-10-01

    There is strong evidence that face processing in the brain is localized. The double dissociation between prosopagnosia, a face recognition deficit occurring after brain damage, and visual object agnosia, difficulty recognizing other kinds of complex objects, indicates that face and non-face object recognition may be served by partially independent neural mechanisms. In this paper, we use computational models to show how the face processing specialization apparently underlying prosopagnosia and visual object agnosia could be attributed to (1) a relatively simple competitive selection mechanism that, during development, devotes neural resources to the tasks they are best at performing, (2) the developing infant's need to perform subordinate classification (identification) of faces early on, and (3) the infant's low visual acuity at birth. Inspired by de Schonen, Mancini and Liegeois' arguments (1998) [de Schonen, S., Mancini, J., Liegeois, F. (1998). About functional cortical specialization: the development of face recognition. In: F. Simon & G. Butterworth, The development of sensory, motor, and cognitive capacities in early infancy (pp. 103-116). Hove, UK: Psychology Press] that factors like these could bias the visual system to develop a processing subsystem particularly useful for face recognition, and Jacobs and Kosslyn's experiments (1994) [Jacobs, R. A., & Kosslyn, S. M. (1994). Encoding shape and spatial relations: the role of receptive field size in coordinating complementary representations. Cognitive Science, 18(3), 361-368] in the mixtures of experts (ME) modeling paradigm, we provide a preliminary computational demonstration of how this theory accounts for the double dissociation between face and object processing. We present two feed-forward computational models of visual processing. In both models, the selection mechanism is a gating network that mediates a competition between modules attempting to classify input stimuli. In Model I, when the modules
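
    The mixtures-of-experts arrangement referred to here, a gating network mediating competition between classification modules, can be sketched as a simple forward pass. The dimensions and random weights below are placeholders of ours, and the competitive learning rule itself is only described in a comment.

    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(z):
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    n_features, n_classes, n_experts = 20, 4, 2
    W_experts = rng.normal(scale=0.1, size=(n_experts, n_features, n_classes))  # one linear expert each
    W_gate = rng.normal(scale=0.1, size=(n_features, n_experts))                # gating network

    def moe_forward(x):
        gate = softmax(x @ W_gate)                                    # (batch, experts): soft allocation
        expert_out = softmax(np.einsum("bf,efc->bec", x, W_experts))  # per-expert class probabilities
        return np.einsum("be,bec->bc", gate, expert_out), gate        # gate-weighted mixture

    x = rng.normal(size=(5, n_features))                              # a batch of stimuli
    probs, gate = moe_forward(x)
    print(probs.shape, gate.round(2))
    # Training (not shown) would strengthen whichever expert/gate pairing performs best on a
    # stimulus, which is the competitive specialization mechanism the record describes.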

  3. Retrieval and reconsolidation of object recognition memory are independent processes in the perirhinal cortex.

    PubMed

    Balderas, I; Rodriguez-Ortiz, C J; Bermudez-Rattoni, F

    2013-12-03

    Reconsolidation refers to the destabilization/re-stabilization process upon memory reactivation. However, the parameters needed to induce reconsolidation remain unclear. Here we evaluated the capacity of memory retrieval to induce reconsolidation of object recognition memory in rats. To assess whether retrieval is indispensable to trigger reconsolidation, we injected muscimol in the perirhinal cortex to block retrieval, and anisomycin (ani) to impede reconsolidation. We observed that ani impaired reconsolidation in the absence of retrieval. Therefore, stored memory underwent reconsolidation even though it was not recalled. These results indicate that retrieval and reconsolidation of object recognition memory are independent processes.

  4. Structural determinants of imidazoacridinones facilitating antitumor activity are crucial for substrate recognition by ABCG2.

    PubMed

    Bram, Eran E; Adar, Yamit; Mesika, Nufar; Sabisz, Michal; Skladanowski, Andrzej; Assaraf, Yehuda G

    2009-05-01

    Symadex is the lead acridine compound of a novel class of imidazoacridinones (IAs) currently undergoing phase II clinical trials for the treatment of various cancers. Recently, we have shown that Symadex is extruded by ABCG2-overexpressing lung cancer A549/K1.5 cells, thereby resulting in a marked resistance to certain IAs. To identify the IA residues essential for substrate recognition by ABCG2, we here explored the ability of ABCG2 to extrude and confer resistance to a series of 23 IAs differing at defined residue(s) surrounding their common 10-azaanthracene structure. Taking advantage of the inherent fluorescent properties of IAs, ABCG2-dependent efflux and drug resistance were determined in A549/K1.5 cells using flow cytometry in the presence or absence of fumitremorgin C, a specific ABCG2 transport inhibitor. We find that a hydroxyl group at one of the R1, R2, or R3 positions in the proximal IA ring was essential for ABCG2-mediated efflux and consequent IA resistance. Moreover, elongation of the common distal aliphatic side chain attenuated ABCG2-dependent efflux, thereby resulting in the retention of parental cell sensitivity. Hence, the current study offers novel molecular insight into the structural determinants that facilitate ABCG2-mediated drug efflux and consequent drug resistance using a unique platform of fluorescent IAs. Moreover, these results establish that the IA determinants mediating cytotoxicity are precisely those that facilitate ABCG2-dependent drug efflux and IA resistance. The possible clinical implications for the future design of novel acridines that overcome ABCG2-dependent multidrug resistance are discussed.

  5. On the three-quarter view advantage of familiar object recognition.

    PubMed

    Nonose, Kohei; Niimi, Ryosuke; Yokosawa, Kazuhiko

    2016-11-01

    A three-quarter view, i.e., an oblique view, of familiar objects often leads to a higher subjective goodness rating when compared with other orientations. What is the source of the high goodness for oblique views? First, we confirmed that object recognition performance was also best for oblique views around 30° view, even when the foreshortening disadvantage of front- and side-views was minimized (Experiments 1 and 2). In Experiment 3, we measured subjective ratings of view goodness and two possible determinants of view goodness: familiarity of view, and subjective impression of three-dimensionality. Three-dimensionality was measured as the subjective saliency of visual depth information. The oblique views were rated best, most familiar, and as approximating greatest three-dimensionality on average; however, the cluster analyses showed that the "best" orientation systematically varied among objects. We found three clusters of objects: front-preferred objects, oblique-preferred objects, and side-preferred objects. Interestingly, recognition performance and the three-dimensionality rating were higher for oblique views irrespective of the clusters. It appears that recognition efficiency is not the major source of the three-quarter view advantage. There are multiple determinants and variability among objects. This study suggests that the classical idea that a canonical view has a unique advantage in object perception requires further discussion.

  6. Object recognition in congruent and incongruent natural scenes: a life-span study.

    PubMed

    Rémy, F; Saint-Aubert, L; Bacon-Macé, N; Vayssière, N; Barbeau, E; Fabre-Thorpe, M

    2013-10-18

    Efficient processing of our complex visual environment is essential and many daily visual tasks rely on accurate and fast object recognition. It is therefore important to evaluate how object recognition performance evolves during the course of adulthood. Surprisingly, this ability has not yet been investigated in the aged population, although several neuroimaging studies have reported altered activity in high-level visual ventral regions when elderly subjects process natural stimuli. In the present study, color photographs of various objects embedded in contextual scenes were used to assess object categorization performance in 97 participants aged from 20 to 91. Objects were either animals or pieces of furniture, embedded in either congruent or incongruent contexts. In every age group, subjects showed reduced categorization performance, both in terms of accuracy and speed, when objects were seen in incongruent vs. congruent contexts. In subjects over 60 years old, object categorization was greatly slowed down when compared to young and middle-aged subjects. Moreover, subjects over 75 years old evidenced a significant decrease in categorization accuracy when objects were seen in incongruent contexts. This indicates that incongruence of the scene may be particularly disturbing in late adulthood, therefore impairing object recognition. Our results suggest that daily visual processing of complex natural environments may be less efficient with age, which might impact performance in everyday visual tasks.

  7. Ontogeny of object versus location recognition in the rat: acquisition and retention effects.

    PubMed

    Westbrook, Sara R; Brennan, Lauren E; Stanton, Mark E

    2014-11-01

    Novel object and location recognition tasks harness the rat's natural tendency to explore novelty (Berlyne, 1950) to study incidental learning. The present study examined the ontogenetic profile of these two tasks and retention of spatial learning between postnatal day (PD) 17 and 31. Experiment 1 showed that rats aged PD17, 21, and 26 recognize novel objects, but only PD21 and PD26 rats recognize a novel location of a familiar object. These results suggest that novel object recognition develops before PD17, while object location recognition emerges between PD17 and PD21. Experiment 2 studied the ontogenetic profile of object location memory retention in PD21, 26, and 31 rats. PD26 and PD31 rats retained the object location memory for both 10-min and 24-hr delays. PD21 rats failed to retain the object location memory for the 24-hr delay, suggesting differential development of short- versus long-term memory in the ontogeny of object location memory.

  8. Joint Segmentation and Recognition of Categorized Objects from Noisy Web Image Collection.

    PubMed

    Wang, Le; Hua, Gang; Xue, Jianru; Gao, Zhanning; Zheng, Nanning

    2014-07-14

    The segmentation of categorized objects addresses the problem of jointly segmenting a single category of object across a collection of images, where categorized objects refers to objects in the same category. Most existing methods for segmentation of categorized objects make the assumption that all images in the given image collection contain the target object; in other words, the given image collection is noise free. Therefore, they may not work well when there are noisy images which are not in the same category, such as image collections gathered by a text query from modern image search engines. To overcome this limitation, we propose a method for automatic segmentation and recognition of categorized objects from noisy Web image collections. This is achieved by co-training an automatic object segmentation algorithm that operates directly on a collection of images, and an object category recognition algorithm that identifies which images contain the target object. The object segmentation algorithm is trained on a subset of images from the given image collection which are recognized to contain the target object with high confidence, while training of the object category recognition model is guided by the intermediate segmentation results obtained from the object segmentation algorithm. In this way, our co-training algorithm automatically identifies the set of true positives in the noisy Web image collection, and simultaneously extracts the target objects from all the identified images. Extensive experiments validated the efficacy of our proposed approach on four datasets: 1) the Weizmann horse dataset, 2) the MSRC object category dataset, 3) the iCoseg dataset, and 4) a new 30-category dataset including 15,634 Web images with both hand-annotated category labels and ground-truth segmentation labels. It is shown that our method compares favorably with the state-of-the-art, and has the ability to deal with noisy image collections.
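
    A minimal sketch of the co-training alternation described above, not the authors' implementation: a recognizer is fit on weak pseudo-labels, its confidence is combined with a crude segmentation score, and the labels are refined over a few rounds. The synthetic images, histogram features, thresholding segmenter, and use of scikit-learn's LogisticRegression are hypothetical stand-ins for the paper's components.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        def make_image(contains_target):
            # Hypothetical 32x32 gray-level image; the "target object" is a bright square.
            img = rng.normal(0.3, 0.05, (32, 32))
            if contains_target:
                img[8:24, 8:24] += 0.5
            return np.clip(img, 0.0, 1.0)

        def global_features(img):
            # Stand-in image descriptor: a 16-bin intensity histogram.
            return np.histogram(img, bins=16, range=(0.0, 1.0))[0] / img.size

        def segment(img):
            # Stand-in segmenter: foreground = pixels brighter than the image mean.
            return img > img.mean()

        # "Noisy web collection": only part of the images actually contain the target.
        truth = rng.random(200) < 0.5
        images = [make_image(t) for t in truth]
        X = np.array([global_features(im) for im in images])

        # Weak initial pseudo-labels: brighter images are guessed to contain the target.
        brightness = np.array([im.mean() for im in images])
        pseudo = brightness > np.median(brightness)

        for _ in range(5):                                    # co-training alternation
            clf = LogisticRegression(max_iter=1000).fit(X, pseudo)
            conf = clf.predict_proba(X)[:, 1]                 # recognizer confidence per image
            # Segmentation feedback: how bright the extracted foreground looks per image.
            fg = np.array([im[segment(im)].mean() for im in images])
            fg = (fg - fg.min()) / (np.ptp(fg) + 1e-9)
            score = 0.5 * conf + 0.5 * fg
            pseudo = score > np.median(score)                 # refined pseudo-labels

        print("agreement of final pseudo-labels with ground truth: %.2f"
              % (pseudo == truth).mean())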

  9. An Event-Based Neurobiological Recognition System with Orientation Detector for Objects in Multiple Orientations.

    PubMed

    Wang, Hanyu; Xu, Jiangtao; Gao, Zhiyuan; Lu, Chengye; Yao, Suying; Ma, Jianguo

    2016-01-01

    A new multiple orientation event-based neurobiological recognition system is proposed in this paper by integrating recognition and tracking functions; it is designed for asynchronous address-event representation (AER) image sensors. The system can recognize objects in multiple orientations using only training samples moving in a single orientation. It extracts multi-scale and multi-orientation line features inspired by models of the primate visual cortex. An orientation detector based on a modified Gaussian blob tracking algorithm is introduced for object tracking and orientation detection. The orientation detector and the feature extraction block work simultaneously, without any increase in categorization time. An address lookup table (address LUT) is also presented to adjust the feature maps by address mapping and reordering, and the adjusted features are categorized in the trained spiking neural network. The recognition system is evaluated on the MNIST dataset, which has played an important role in the development of computer vision, and accuracy is increased owing to the use of both ON and OFF events. AER data acquired by a dynamic vision sensor (DVS), such as moving digits, poker cards, and vehicles, are also tested on the system. The experimental results show that the proposed system can realize event-based multi-orientation recognition. The work presented in this paper makes a number of contributions to event-based vision processing for multi-orientation object recognition. It develops a new tracking-recognition architecture for the feedforward categorization system and an address-reordering approach to classify multi-orientation objects using event-based data. It provides a new way to recognize objects in multiple orientations with only training samples in a single orientation.
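
    The abstract does not give the tracker equations, so the fragment below is only a generic event-driven Gaussian blob tracker of the kind commonly used with AER/DVS data, shown to make concrete how an orientation estimate can be read off the blob's covariance. The synthetic event stream, the update rate alpha, and the eigenvector-based orientation readout are assumptions, not the authors' detector.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic AER events (x, y) drawn from an elongated blob rotated by about 30 degrees.
        theta = np.deg2rad(30.0)
        R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
        events = (R @ rng.normal(0, [8.0, 2.0], (5000, 2)).T).T + np.array([64.0, 64.0])

        mu = np.array([64.0, 64.0])          # blob centre estimate
        cov = np.eye(2) * 25.0               # blob shape estimate (second moments)
        alpha = 0.01                         # per-event update rate

        for e in events:                     # one incremental update per incoming event
            d = e - mu
            mu = mu + alpha * d                               # drift the mean toward the event
            cov = (1 - alpha) * cov + alpha * np.outer(d, d)  # update the second moments

        # Blob orientation = direction of the covariance eigenvector with the largest eigenvalue.
        evals, evecs = np.linalg.eigh(cov)
        major = evecs[:, np.argmax(evals)]
        orientation = np.degrees(np.arctan2(major[1], major[0])) % 180.0
        print(f"estimated centre {mu.round(1)}, orientation {orientation:.1f} deg")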

  11. Conscious intention to speak proactively facilitates lexical access during overt object naming

    PubMed Central

    Strijkers, Kristof; Holcomb, Phillip J.; Costa, Albert

    2013-01-01

    The present study explored when and how the top-down intention to speak influences the language production process. We did so by comparing the brain's electrical response for a variable known to affect lexical access, namely word frequency, during overt object naming and non-verbal object categorization. We found that during naming, the event-related brain potentials elicited for objects with low frequency names started to diverge from those with high frequency names as early as 152 ms after stimulus onset, while during non-verbal categorization the same frequency comparison appeared 200 ms later, eliciting a qualitatively different brain response. Thus, only when participants had the conscious intention to name an object did the brain rapidly engage in lexical access. The data offer evidence that the top-down intention to speak proactively facilitates the activation of words related to perceived objects. PMID:24039339

  12. Altered object-in-place recognition memory, prepulse inhibition, and locomotor activity in the offspring of rats exposed to a viral mimetic during pregnancy.

    PubMed

    Howland, J G; Cazakoff, B N; Zhang, Y

    2012-01-10

    Infection during pregnancy (i.e., prenatal infection) increases the risk of psychiatric illnesses such as schizophrenia and autism in the adult offspring. The present experiments examined the effects of prenatal immune challenge on behavior in three paradigms relevant to these disorders: prepulse inhibition (PPI) of the acoustic startle response, locomotor responses to an unfamiliar environment and the N-methyl-d-aspartate antagonist MK-801, and three forms of recognition memory. Pregnant Long-Evans rats were exposed to the viral mimetic polyinosinic-polycytidylic acid (PolyI:C; 4 mg/kg, i.v.) on gestational day 15. Offspring were tested for PPI and locomotor activity before puberty (postnatal days (PNDs)35 and 36) and during young adulthood (PNDs 56 and 57). Four prepulse-pulse intervals (30, 50, 80, and 140 ms) were employed in the PPI test. Recognition memory testing was performed using three different spontaneous novelty recognition tests (object, object location, and object-in-place recognition) after PND 60. Regardless of sex, offspring of PolyI:C-treated dams showed disrupted PPI at 50-, 80-, and 140-ms prepulse-pulse intervals. In the prepubescent rats, we observed prepulse facilitation for the 30-ms prepulse-pulse interval trials that was selectively retained in the adult PolyI:C-treated offspring. Locomotor responses to MK-801 were significantly reduced before puberty, whereas responses to an unfamiliar environment were increased in young adulthood. Both male and female PolyI:C-treated offspring showed intact object and object location recognition memory, whereas male PolyI:C-treated offspring displayed significantly impaired object-in-place recognition memory. Females were unable to perform the object-in-place test. The present results demonstrate that prenatal immune challenge during mid/late gestation disrupts PPI and locomotor behavior. In addition, the selective impairment of object-in-place recognition memory suggests tasks that depend on prefrontal

  13. Adaptive object recognition model using incremental feature representation and hierarchical classification.

    PubMed

    Jeong, Sungmoon; Lee, Minho

    2012-01-01

    This paper presents an adaptive object recognition model based on incremental feature representation and a hierarchical feature classifier that offers plasticity to accommodate additional input data and reduces the problem of forgetting previously learned information. The incremental feature representation method applies adaptive prototype generation with a cortex-like mechanism to conventional feature representation to enable an incremental reflection of various object characteristics, such as feature dimensions in the learning process. A feature classifier based on using a hierarchical generative model recognizes various objects with variant feature dimensions during the learning process. Experimental results show that the adaptive object recognition model successfully recognizes single and multiple-object classes with enhanced stability and flexibility.

  14. The influence of surface color information and color knowledge information in object recognition.

    PubMed

    Bramão, Inês; Faísca, Luís; Petersson, Karl Magnus; Reis, Alexandra

    2010-01-01

    In order to clarify whether the influence of color knowledge information in object recognition depends on the presence of the appropriate surface color, we designed a name-object verification task. The relationship between color and shape information provided by the name and by the object photo was manipulated in order to assess color interference independently of shape interference. We tested three different versions for each object: typically colored, black and white, and nontypically colored. The response times on the nonmatching trials were used to measure the interference between the name and the photo. We predicted that the more similar the name and the photo are, the longer it would take to respond. Overall, the color similarity effect disappeared in the black-and-white and nontypical color conditions, suggesting that the influence of color knowledge on object recognition depends on the presence of the appropriate surface color information.

  15. Coincident orientation of objects and viewpoint-dependence in scene recognition.

    PubMed

    Li, Jing; Zhang, Kan

    2012-02-01

    Viewpoint-dependence is a well-known phenomenon in which participants' spatial memory is better for previously experienced points of view than for novel ones. In the current study, partial-scene-recognition was used to examine the effect of coincident orientation of all the objects on viewpoint-dependence in spatial memory. When objects in scenes had no clear orientations (e.g., balls), participants' recognition of experienced directions was better than that of novel ones, indicating that there was viewpoint-dependence. However, when the objects in scenes were toy bears with clear orientations, the coincident orientation of objects (315 degrees), which was not experienced, shared the advantage of the experienced direction (0 degrees), and participants were equally likely to choose either direction when reconstructing the spatial representation in memory. These findings suggest that coincident orientation of objects may affect egocentric representations in spatial memory.

  16. Environmental enrichment improves novel object recognition and enhances agonistic behavior in male mice.

    PubMed

    Mesa-Gresa, Patricia; Pérez-Martinez, Asunción; Redolat, Rosa

    2013-01-01

    Environmental enrichment (EE) is an experimental paradigm in which rodents are housed in complex environments containing objects that provide stimulation, the effects of which are expected to improve the welfare of these subjects. EE has been shown to considerably improve learning and memory in rodents. However, knowledge about the effects of EE on social interaction is generally limited and rather controversial. Thus, our aim was to evaluate both novel object recognition and agonistic behavior in NMRI mice receiving EE, hypothesizing enhanced cognition and slightly enhanced agonistic interaction upon EE rearing. During a 4-week period half the mice (n = 16) were exposed to EE and the other half (n = 16) remained in a standard environment (SE). On PND 56-57, animals performed the object recognition test, in which recognition memory was measured using a discrimination index. The social interaction test consisted of an encounter between an experimental animal and a standard opponent. Results indicated that EE mice explored the new object for longer periods than SE animals (P < .05). During social encounters, EE mice devoted more time to sociability and agonistic behavior (P < .05) than their non-EE counterparts. In conclusion, EE has been shown to improve object recognition and increase agonistic behavior in adolescent/early adulthood mice. In the future we intend to extend this study on a longitudinal basis in order to assess in more depth the effect of EE and the consistency of the above-mentioned observations in NMRI mice. Copyright © 2013 Wiley Periodicals, Inc.

  17. The role of histamine receptors in the consolidation of object recognition memory.

    PubMed

    da Silveira, Clarice Krás Borges; Furini, Cristiane R G; Benetti, Fernando; Monteiro, Siomara da Cruz; Izquierdo, Ivan

    2013-07-01

    Findings have shown that histamine receptors in the hippocampus modulate the acquisition and extinction of fear-motivated learning. In order to determine the role of hippocampal histaminergic receptors in recognition memory, adult male Wistar rats with indwelling infusion cannulae stereotaxically placed in the CA1 region of the dorsal hippocampus were trained in an object recognition learning task involving exposure to two different stimulus objects in an enclosed environment. In the test session, one of the objects presented during training was replaced by a novel one. Recognition memory retention was assessed 24 h after training by comparing the time spent in exploration (sniffing and touching) of the known object with that of the novel one. When infused in the CA1 region immediately, 30, 120 or 360 min posttraining, the H1-receptor antagonist pyrilamine, the H2-receptor antagonist ranitidine, and the H3-receptor agonist imetit blocked long-term memory retention in a time-dependent manner (30-120 min) without affecting general exploratory behavior, anxiety state or hippocampal function. Our data indicate that the histaminergic system modulates consolidation of object recognition memory through H1, H2 and H3 receptors.

  18. Augmented reality three-dimensional object visualization and recognition with axially distributed sensing.

    PubMed

    Markman, Adam; Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-01-15

    An augmented reality (AR) smartglass display combines real-world scenes with digital information enabling the rapid growth of AR-based applications. We present an augmented reality-based approach for three-dimensional (3D) optical visualization and object recognition using axially distributed sensing (ADS). For object recognition, the 3D scene is reconstructed, and feature extraction is performed by calculating the histogram of oriented gradients (HOG) of a sliding window. A support vector machine (SVM) is then used for classification. Once an object has been identified, the 3D reconstructed scene with the detected object is optically displayed in the smartglasses allowing the user to see the object, remove partial occlusions of the object, and provide critical information about the object such as 3D coordinates, which are not possible with conventional AR devices. To the best of our knowledge, this is the first report on combining axially distributed sensing with 3D object visualization and recognition for applications to augmented reality. The proposed approach can have benefits for many applications, including medical, military, transportation, and manufacturing.
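
    A minimal sketch of the HOG-plus-linear-SVM recognition stage described above, using scikit-image's hog and scikit-learn's LinearSVC on synthetic patches; the 3D reconstruction from axially distributed sensing and the smartglass display are omitted, and the window size, HOG parameters, and training data are assumptions.

        import numpy as np
        from skimage.feature import hog
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)

        def make_patch(positive):
            # Hypothetical 64x64 grayscale window; the "object" is a bright disc on noise.
            img = rng.normal(0.4, 0.05, (64, 64))
            if positive:
                yy, xx = np.mgrid[:64, :64]
                img[(yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2] += 0.4
            return np.clip(img, 0.0, 1.0)

        def hog_features(img):
            return hog(img, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2), block_norm='L2-Hys')

        # Train a linear SVM on HOG descriptors of positive/negative windows.
        y = np.array([1] * 50 + [0] * 50)
        X = np.array([hog_features(make_patch(lbl == 1)) for lbl in y])
        clf = LinearSVC(C=1.0).fit(X, y)

        # Sliding-window classification over a larger synthetic scene.
        scene = rng.normal(0.4, 0.05, (128, 192))
        yy, xx = np.mgrid[:128, :192]
        scene[(yy - 70) ** 2 + (xx - 120) ** 2 < 20 ** 2] += 0.4   # hidden object
        scene = np.clip(scene, 0.0, 1.0)

        best = None
        for top in range(0, 128 - 64 + 1, 16):
            for left in range(0, 192 - 64 + 1, 16):
                window = scene[top:top + 64, left:left + 64]
                score = clf.decision_function([hog_features(window)])[0]
                if best is None or score > best[0]:
                    best = (score, top, left)
        print("best window score %.2f at (row=%d, col=%d)" % best)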

  19. Feature discovery in gray level imagery for one-class object recognition

    SciTech Connect

    Koch, M.W.; Moya, M.M.

    1993-12-31

    Feature extraction transforms an object's image representation to an alternate reduced representation. In one-class object recognition, we would like this alternate representation to give improved discrimination between the object and all possible non-objects and improved generalization between different object poses. Feature selection can be time-consuming and difficult to optimize, so we have investigated unsupervised neural networks for feature discovery. We first discuss an inherent limitation in competitive-type neural networks for discovering features in gray level images. We then show how Sanger's Generalized Hebbian Algorithm (GHA) removes this limitation and describe a novel GHA application for learning object features that discriminate the object from clutter. Using a specific example, we show how these features are better at distinguishing the target object from other nontarget objects with Carpenter's ART 2-A as the pattern classifier.
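
    The record names Sanger's Generalized Hebbian Algorithm (GHA) as the feature-discovery step. Below is a minimal NumPy sketch of the standard GHA update (Sanger, 1989), which drives the weight rows toward the leading principal components of the input patches; the synthetic patch data, learning rate, and number of features are assumptions, and the ART 2-A classification stage is not shown.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic gray-level "patches": 2000 correlated 64-dimensional vectors
        # (8x8 blocks flattened), built from 5 underlying structures plus noise.
        base = rng.normal(0, 1, (5, 64))
        patches = rng.normal(0, 1, (2000, 5)) @ base + rng.normal(0, 0.1, (2000, 64))
        patches -= patches.mean(axis=0)              # GHA assumes zero-mean inputs

        n_features, dim = 4, 64
        W = rng.normal(0, 0.01, (n_features, dim))   # learned feature vectors (rows)
        eta = 1e-3

        for epoch in range(20):
            for x in patches:
                y = W @ x
                # Sanger's rule: Hebbian term minus a lower-triangular decorrelation term.
                W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

        # The rows of W approach (up to sign) the leading eigenvectors of the patch covariance.
        cov = patches.T @ patches / len(patches)
        true_pcs = np.linalg.eigh(cov)[1][:, ::-1][:, :n_features].T
        Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
        print("absolute cosines with the true principal components:",
              np.abs((Wn * true_pcs).sum(axis=1)).round(2))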

  20. Hippocampal BDNF treatment facilitates consolidation of spatial memory in spontaneous place recognition in rats.

    PubMed

    Ozawa, Takaaki; Yamada, Kazuo; Ichitani, Yukio

    2014-04-15

    In order to investigate the role of brain-derived neurotrophic factor (BDNF) in the consolidation of spatial memory, we examined the relationship between the increase of hippocampal BDNF and the establishment of long-term spatial memory in spontaneous place recognition test in rats. The test consisted of a sample phase, delay interval, and a test phase, and preferred exploration of the object in a novel place compared with that in a familiar place was assessed in the test phase. In experiment 1, dorsal hippocampal administration of anisomycin, a protein synthesis inhibitor, before the sample phase (20 min) abolished the preference for the novel place object in the test phase conducted 24h later. This impairment was reversed by the dorsal hippocampal BDNF treatment immediately after the sample phase, although the BDNF treatment alone did not improve performance. In experiment 2, we used a shorter sample phase condition (5 min) in which control rats did not show any preference for the novel place object in the test phase after 24h delay, and found that BDNF treatment immediately after the sample phase caused rats' significant preference for it. Results suggest an important role of hippocampal BDNF as a product of protein synthesis that is required for the consolidation of spatial memory.

  1. Effects of exposure to heavy particles and aging on object recognition memory in rats

    NASA Astrophysics Data System (ADS)

    Rabin, Bernard; Joseph, James; Shukitt-Hale, Barbara; Carrihill-Knoll, Kirsty; Shannahan, Ryan; Hering, Kathleen

    Exposure to HZE particles produces changes in neurocognitive performance. These changes, including deficits in spatial learning and memory, object recognition memory and operant responding, are also observed in the aged organism. As such, it has been proposed that exposure to heavy particles produces "accelerated aging". Because aging is an ongoing process, it is possible that there would be an interaction between the effects of exposure and the effects of aging, such that doses of HZE particles that do not affect the performance of younger organisms will affect the performance of organisms as they age. The present experiments were designed to test the hypothesis that young rats that had been exposed to HZE particles would show a progressive deterioration in object recognition memory as a function of the age of testing. Rats were exposed to ¹²C, ²⁸Si or ⁴⁸Ti particles at the NASA Space Radiation Laboratory at Brookhaven National Laboratory. Following irradiation the rats were shipped to UMBC for behavioral testing. HZE particle-induced changes in object recognition memory were tested using a standard procedure: rats were placed in an open field and allowed to interact with two identical objects for up to 30 sec; twenty-four hrs later the rats were again placed in the open field, this time containing one familiar and one novel object. Non-irradiated control animals spent significantly more time with the novel object than with the familiar object. In contrast, the rats that had been exposed to heavy particles spent equal amounts of time with both the novel and familiar object. The lowest dose of HZE particles which produced a disruption of object recognition memory was determined three months and eleven months following exposure. The threshold dose needed to disrupt object recognition memory three months following irradiation varied as a function of the specific particle and energy. When tested eleven months following irradiation, doses of HZE particles that did

  2. Effects of selective neonatal hippocampal lesions on tests of object and spatial recognition memory in monkeys

    PubMed Central

    Heuer, Eric; Bachevalier, Jocelyne

    2011-01-01

    Earlier studies in monkeys have reported mild impairment in recognition memory following nonselective neonatal hippocampal lesions (Bachevalier, Beauregard, & Alvarado, 1999; Rehbein, Killiany, & Mahut, 2005). To assess whether the memory impairment could have resulted from damage to cortical areas adjacent to the hippocampus, we tested adult monkeys with neonatal focal hippocampal lesions and sham-operated controls in three recognition tasks: delayed nonmatching-to-sample, object memory span, and spatial memory span. Further, to rule out that normal performance on these tasks may relate to functional sparing following neonatal hippocampal lesions, we tested adult monkeys that had received the same focal hippocampal lesions in adulthood and their controls in the same three memory tasks. Both early and late onset focal hippocampal damage did not alter performance on any of the three tasks, suggesting that damage to cortical areas adjacent to the hippocampus was likely responsible for the recognition impairment reported by the earlier studies. In addition, given that animals with early and late onset hippocampal lesions showed object and spatial recognition impairment when tested in a visual paired comparison task (Zeamer, Meunier, & Bachevalier, Submitted; Zeamer, Heuer & Bachevalier, 2010), the data suggest that not all object and spatial recognition tasks are solved by hippocampal-dependent memory processes. The current data may not only help explain the neural substrate for the partial recognition memory impairment reported in cases of developmental amnesia (Adlam, Malloy, Mishkin, & Vargha-Khadem, 2009), but they are also clinically relevant given that the object and spatial memory tasks used in monkeys are often translated to investigate memory functions in several populations of human infants and children in which dysfunction of the hippocampus is suspected. PMID:21341885

  3. Recognition of 3D objects for autonomous mobile robot's navigation in automated shipbuilding

    NASA Astrophysics Data System (ADS)

    Lee, Hyunki; Cho, Hyungsuck

    2007-10-01

    Nowadays many parts of the shipbuilding process are automated, but the painting process is not, because of the difficulty of automated on-line painting quality measurement, the harsh painting environment, and the difficulty of robot navigation. However, painting automation is necessary because it can provide consistent painting film thickness. Furthermore, autonomous mobile robots are strongly required for flexible painting work. The main problem for autonomous mobile robot navigation is that there are many obstacles which are not expressed in the CAD data. To overcome this problem, obstacle detection and recognition are necessary in order to avoid obstacles and perform the painting work effectively. Many object recognition algorithms have been studied; in particular, 2D object recognition methods using intensity images have been widely investigated. However, in our case environmental illumination does not exist, so these methods cannot be used. 3D range data must therefore be used instead, but its drawbacks are high computational cost and long recognition times due to the huge database. In this paper, we propose a 3D object recognition algorithm based on PCA (Principal Component Analysis) and NN (Neural Network). The novelty of the algorithm is that the measured 3D range data are transformed into intensity information, and the PCA and NN algorithms are then applied to this transformed intensity information; this reduces the processing time and makes the data easy to handle, addressing the disadvantages of previous research on 3D object recognition. A set of experimental results is presented to verify the effectiveness of the proposed algorithm.
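
    A rough sketch of the pipeline outlined above (range data mapped to intensity, PCA for dimensionality reduction, a neural-network classifier), assembled from generic components: the depth-to-intensity normalisation, the synthetic 16x16 range images, and the use of scikit-learn's PCA and MLPClassifier are all assumptions rather than the authors' implementation.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)

        def range_to_intensity(depth):
            # Assumed mapping: normalise depth values into [0, 1] "intensity" values.
            return (depth - depth.min()) / (np.ptp(depth) + 1e-9)

        def synth_depth(kind):
            # Hypothetical 16x16 range images of two obstacle shapes (box vs. ramp).
            d = np.full((16, 16), 2.0) + rng.normal(0, 0.02, (16, 16))
            if kind == 0:
                d[4:12, 4:12] -= 0.8                      # box protruding from a wall
            else:
                d -= np.linspace(0, 0.8, 16)[None, :]     # ramp
            return d

        y = np.array([0, 1] * 100)
        X = np.array([range_to_intensity(synth_depth(k)).ravel() for k in y])

        pca = PCA(n_components=10).fit(X)                  # reduce 256-dim images to 10 dims
        Z = pca.transform(X)

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(Z, y)
        test = pca.transform([range_to_intensity(synth_depth(1)).ravel()])
        print("predicted class for a new ramp-like range image:", clf.predict(test)[0])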

  4. Physical exercise during pregnancy improves object recognition memory in adult offspring.

    PubMed

    Robinson, A M; Bucci, D J

    2014-01-03

    Exercising during pregnancy has been shown to improve spatial learning and short-term memory, as well as increase brain-derived neurotrophic factor mRNA levels and hippocampal cell survival in juvenile offspring. However, it remains unknown if these effects endure into adulthood. In addition, few studies have considered how maternal exercise can impact cognitive functions that do not rely on the hippocampus. To address these issues, the present study tested the effects of maternal exercise during pregnancy on object recognition memory, which relies on the perirhinal cortex (PER), in adult offspring. Pregnant rats were given access to a running wheel throughout gestation and the adult male offspring were subsequently tested in an object recognition memory task at three different time points, each spaced 2-weeks apart, beginning at 60 days of age. At each time point, offspring from exercising mothers were able to successfully discriminate between novel and familiar objects in that they spent more time exploring the novel object than the familiar object. The offspring of non-exercising mothers were not able to successfully discriminate between objects and spent an equal amount of time with both objects. A subset of rats was euthanized 1h after the final object recognition test to assess c-FOS expression in the PER. The offspring of exercising mothers had more c-FOS expression in the PER than the offspring of non-exercising mothers. By comparison, c-FOS levels in the adjacent auditory cortex did not differ between groups. These results indicate that maternal exercise during pregnancy can improve object recognition memory in adult male offspring and increase c-FOS expression in the PER; suggesting that exercise during the gestational period may enhance brain function of the offspring. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  5. Physical Exercise During Pregnancy Improves Object Recognition Memory in Adult Offspring

    PubMed Central

    Robinson, Andrea M.; Bucci, David J.

    2013-01-01

    Exercising during pregnancy has been shown to improve spatial learning and short-term memory, as well as increase BDNF mRNA levels and hippocampal cell survival in juvenile offspring. However, it remains unknown if these effects endure into adulthood. In addition, few studies have considered how maternal exercise can impact cognitive functions that do not rely on the hippocampus. To address these issues, the present study tested the effects of maternal exercise during pregnancy on object recognition memory, which relies on the perirhinal cortex (PER), in adult offspring. Pregnant rats were given access to a running wheel throughout gestation and the adult male offspring were subsequently tested in an object recognition memory task at three different time points, each spaced 2-weeks apart, beginning at 60 days of age. At each time point, offspring from exercising mothers were able to successfully discriminate between novel and familiar objects in that they spent more time exploring the novel object than the familiar object. The offspring of non-exercising mothers were not able to successfully discriminate between objects and spent an equal amount of time with both objects. A subset of rats was euthanized 1 hr after the final object recognition test to assess c-FOS expression in the PER. The offspring of exercising mothers had more c-FOS expression in the PER than the offspring of non-exercising mothers. By comparison, c-FOS levels in the adjacent auditory cortex did not differ between groups. These results indicate that maternal exercise during pregnancy can improve object recognition memory in adult male offspring and increase c-FOS expression in the PER; suggesting that exercise during the gestational period may enhance brain function of the offspring. PMID:24157927

  6. Cross-modal object recognition and dynamic weighting of sensory inputs in a fish

    PubMed Central

    Schumacher, Sarah; Burt de Perera, Theresa; Thenert, Johanna; von der Emde, Gerhard

    2016-01-01

    Most animals use multiple sensory modalities to obtain information about objects in their environment. There is a clear adaptive advantage to being able to recognize objects cross-modally and spontaneously (without prior training with the sense being tested) as this increases the flexibility of a multisensory system, allowing an animal to perceive its world more accurately and react to environmental changes more rapidly. So far, spontaneous cross-modal object recognition has only been shown in a few mammalian species, raising the question as to whether such a high-level function may be associated with complex mammalian brain structures, and therefore absent in animals lacking a cerebral cortex. Here we use an object-discrimination paradigm based on operant conditioning to show, for the first time to our knowledge, that a nonmammalian vertebrate, the weakly electric fish Gnathonemus petersii, is capable of performing spontaneous cross-modal object recognition and that the sensory inputs are weighted dynamically during this task. We found that fish trained to discriminate between two objects with either vision or the active electric sense, were subsequently able to accomplish the task using only the untrained sense. Furthermore we show that cross-modal object recognition is influenced by a dynamic weighting of the sensory inputs. The fish weight object-related sensory inputs according to their reliability, to minimize uncertainty and to enable an optimal integration of the senses. Our results show that spontaneous cross-modal object recognition and dynamic weighting of sensory inputs are present in a nonmammalian vertebrate. PMID:27313211

  7. Where vision meets memory: prefrontal-posterior networks for visual object constancy during categorization and recognition.

    PubMed

    Schendan, Haline E; Stern, Chantal E

    2008-07-01

    Objects seen from unusual relative to more canonical views require more time to categorize and recognize, and, according to object model verification theories, additionally recruit prefrontal processes for cognitive control that interact with parietal processes for mental rotation. To test this using functional magnetic resonance imaging, people categorized and recognized known objects from unusual and canonical views. Canonical views activated some components of a default network more on categorization than recognition. Activation to unusual views showed that both ventral and dorsal visual pathways, and prefrontal cortex, have key roles in visual object constancy. Unusual views activated object-sensitive and mental rotation (and not saccade) regions in ventrocaudal intraparietal, transverse occipital, and inferotemporal sulci, and ventral premotor cortex for verification processes of model testing on any task. A collateral-lingual sulci "place" area activated for mental rotation, working memory, and unusual views on correct recognition and categorization trials to accomplish detailed spatial matching. Ventrolateral prefrontal cortex and object-sensitive lateral occipital sulcus activated for mental rotation and unusual views on categorization more than recognition, supporting verification processes of model prediction. This visual knowledge framework integrates vision and memory theories to explain how distinct prefrontal-posterior networks enable meaningful interactions with objects in diverse situations.

  8. Mechanisms and Neural Basis of Object and Pattern Recognition: A Study with Chess Experts

    ERIC Educational Resources Information Center

    Bilalic, Merim; Langner, Robert; Erb, Michael; Grodd, Wolfgang

    2010-01-01

    Comparing experts with novices offers unique insights into the functioning of cognition, based on the maximization of individual differences. Here we used this expertise approach to disentangle the mechanisms and neural basis behind two processes that contribute to everyday expertise: object and pattern recognition. We compared chess experts and…

  9. Developmental Trajectories of Part-Based and Configural Object Recognition in Adolescence

    ERIC Educational Resources Information Center

    Juttner, Martin; Wakui, Elley; Petters, Dean; Kaur, Surinder; Davidoff, Jules

    2013-01-01

    Three experiments assessed the development of children's part and configural (part-relational) processing in object recognition during adolescence. In total, 312 school children aged 7-16 years and 80 adults were tested in 3-alternative forced choice (3-AFC) tasks. They judged the correct appearance of upright and inverted presented familiar…

  11. Application of genetic algorithm for automatic recognition of partially occluded objects

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz A.

    1994-07-01

    Automatic recognition of partially occluded objects that are sensed by imaging sensors is a challenging problem in the image understanding (IU), automatic target recognition (ATR), and computer vision fields. In this paper I address this problem by using a genetic algorithm (GA) as part of a model-based recognition scheme. The partially occluded object segments are rotated, translated, and scaled. Each transform parameter is encoded into a binary string and used in the genetic algorithm. The suggested transformation is then applied to the sensed segment, and the resulting object is matched against a library of stored targets. The fitness criterion is a distance function that measures the similarity between the segmented object and the stored target models. The GA, by performing mutation, reproduction, and crossover, suggests optimal sets of transform parameters. The empirical results of applying the approach to a set of real ladar data of military targets show that correct recognition for up to 50% target occlusion is possible.
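
    A toy version of the scheme the abstract describes: rotation, translation, and scale are binary-encoded into a chromosome, and the GA searches for the transform that best re-aligns an occluded point segment with a stored model (fitness = negative mean nearest-point distance). The bit widths, parameter ranges, operators, and the L-shaped point set are assumptions; the original work operated on real ladar imagery.

        import numpy as np

        rng = np.random.default_rng(3)

        # Stored model: an L-shaped point set.
        model = np.concatenate([
            np.stack([np.linspace(0.0, 1.0, 20), np.zeros(20)], axis=1),
            np.stack([np.zeros(20), np.linspace(0.0, 0.5, 20)], axis=1)])

        def transform(pts, rot_deg, tx, ty, scale):
            a = np.deg2rad(rot_deg)
            R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
            return scale * pts @ R.T + np.array([tx, ty])

        # Sensed segment: 60% of the model (occlusion), displaced by an unknown pose.
        # The GA should recover the transform that maps this segment back onto the model
        # (i.e. the inverse of the displacement applied here).
        sensed = transform(model[:24], rot_deg=35.0, tx=1.5, ty=-0.8, scale=1.2)

        BITS = 10                                               # bits per parameter
        RANGES = [(0.0, 360.0), (-3.0, 3.0), (-3.0, 3.0), (0.5, 2.0)]  # rot, tx, ty, scale

        def decode(chrom):
            vals = []
            for i, (lo, hi) in enumerate(RANGES):
                bits = chrom[i * BITS:(i + 1) * BITS]
                x = int("".join(map(str, bits)), 2) / (2 ** BITS - 1)
                vals.append(lo + x * (hi - lo))
            return vals

        def fitness(chrom):
            # Apply the candidate transform to the sensed segment and match it to the model.
            cand = transform(sensed, *decode(chrom))
            d = np.linalg.norm(cand[:, None, :] - model[None, :, :], axis=2).min(axis=1)
            return -d.mean()

        pop = rng.integers(0, 2, (60, 4 * BITS))
        for gen in range(80):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[::-1][:30]]         # truncation selection
            kids = []
            while len(kids) < len(pop):
                a, b = parents[rng.integers(0, 30, 2)]
                cut = rng.integers(1, 4 * BITS)                   # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                child[rng.random(4 * BITS) < 0.02] ^= 1           # bit-flip mutation
                kids.append(child)
            pop = np.array(kids)

        best = pop[np.argmax([fitness(ind) for ind in pop])]
        print("best alignment (rot, tx, ty, scale):", np.round(decode(best), 2),
              "fitness:", round(fitness(best), 4))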

  12. Developmental Changes in Visual Object Recognition between 18 and 24 Months of Age

    ERIC Educational Resources Information Center

    Pereira, Alfredo F.; Smith, Linda B.

    2009-01-01

    Two experiments examined developmental changes in children's visual recognition of common objects during the period of 18 to 24 months. Experiment 1 examined children's ability to recognize common category instances that presented three different kinds of information: (1) richly detailed and prototypical instances that presented both local and…

  13. Perirhinal Cortex Resolves Feature Ambiguity in Configural Object Recognition and Perceptual Oddity Tasks

    ERIC Educational Resources Information Center

    Bartko, Susan J.; Winters, Boyer D.; Cowell, Rosemary A.; Saksida, Lisa M.; Bussey, Timothy J.

    2007-01-01

    The perirhinal cortex (PRh) has a well-established role in object recognition memory. More recent studies suggest that PRh is also important for two-choice visual discrimination tasks. Specifically, it has been suggested that PRh contains conjunctive representations that help resolve feature ambiguity, which occurs when a task cannot easily be…

  14. Category Specificity in Normal Episodic Learning: Applications to Object Recognition and Category-Specific Agnosia

    ERIC Educational Resources Information Center

    Bukach, Cindy M.; Bub, Daniel N.; Masson, Michael E. J.; Lindsay, D. Stephen

    2004-01-01

    Studies of patients with category-specific agnosia (CSA) have given rise to multiple theories of object recognition, most of which assume the existence of a stable, abstract semantic memory system. We applied an episodic view of memory to questions raised by CSA in a series of studies examining normal observers' recall of newly learned attributes…

  18. Evaluation of Image Segmentation and Object Recognition Algorithms for Image Parsing

    DTIC Science & Technology

    2013-09-01

    results for precision, recall, and F-measure indicate that the best approach to use for image segmentation is Sobel edge detection and to use Canny...or Sobel for object recognition. The process for this report would not work for a warfighter or analyst. It has poor performance. Additionally...
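
    For reference, the Sobel gradient-magnitude edge map evaluated in reports like this one can be computed in a few lines; the placeholder image and threshold below are assumptions, and the report's imagery and Canny comparison are not reproduced.

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(0)

        # Placeholder image: a bright square on a noisy background.
        img = rng.normal(0.2, 0.03, (128, 128))
        img[40:90, 40:90] += 0.6

        gx = ndimage.sobel(img, axis=1)          # horizontal gradient
        gy = ndimage.sobel(img, axis=0)          # vertical gradient
        magnitude = np.hypot(gx, gy)

        edges = magnitude > 0.5                   # assumed threshold
        print("edge pixels:", int(edges.sum()))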

  19. Comparing object recognition from binary and bipolar edge images for visual prostheses.

    PubMed

    Jung, Jae-Hyun; Pu, Tian; Peli, Eli

    2016-11-01

    Visual prostheses require an effective representation method due to the limited display condition which has only 2 or 3 levels of grayscale in low resolution. Edges derived from abrupt luminance changes in images carry essential information for object recognition. Typical binary (black and white) edge images have been used to represent features to convey essential information. However, in scenes with a complex cluttered background, the recognition rate of the binary edge images by human observers is limited and additional information is required. The polarity of edges and cusps (black or white features on a gray background) carries important additional information; the polarity may provide shape from shading information missing in the binary edge image. This depth information may be restored by using bipolar edges. We compared object recognition rates from 16 binary edge images and bipolar edge images by 26 subjects to determine the possible impact of bipolar filtering in visual prostheses with 3 or more levels of grayscale. Recognition rates were higher with bipolar edge images and the improvement was significant in scenes with complex backgrounds. The results also suggest that erroneous shape from shading interpretation of bipolar edges resulting from pigment rather than boundaries of shape may confound the recognition.
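
    The binary-versus-bipolar contrast can be illustrated with a signed Laplacian-of-Gaussian response quantised to two versus three gray levels. This is a generic sketch rather than the filtering used in the study, and the placeholder scene and thresholds are assumptions.

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(0)

        # Placeholder scene: a bright disc on a cluttered background.
        img = rng.normal(0.5, 0.08, (128, 128))
        yy, xx = np.mgrid[:128, :128]
        img[(yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2] += 0.3

        log = ndimage.gaussian_laplace(img, sigma=2.0)     # signed edge response
        t = 2.0 * log.std()                                 # assumed threshold

        binary_edges = (np.abs(log) > t).astype(np.uint8)   # 2 levels: edge / no edge
        bipolar_edges = np.zeros_like(log, dtype=np.int8)   # 3 levels: -1, 0, +1
        bipolar_edges[log > t] = 1      # rendered white on a gray background
        bipolar_edges[log < -t] = -1    # rendered black on a gray background

        print("binary edge pixels:", int(binary_edges.sum()),
              "| bipolar +/- pixels:", int((bipolar_edges == 1).sum()),
              int((bipolar_edges == -1).sum()))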

  20. Visual object recognition and attention in Parkinson's disease patients with visual hallucinations.

    PubMed

    Meppelink, Anne Marthe; Koerts, Janneke; Borg, Maarten; Leenders, Klaus Leonard; van Laar, Teus

    2008-10-15

    Visual hallucinations (VH) are common in Parkinson's disease (PD) and are hypothesized to be due to impaired visual perception and attention deficits. We investigated whether PD patients with VH showed attention deficits, a more specific impairment of higher order visual perception, or both. Forty-two volunteers participated in this study, including 14 PD patients with VH, 14 PD patients without VH and 14 healthy controls (HC), matched for age, gender, education level and for level of executive function. We created movies with images of animals, people, and objects dynamically appearing out of random noise. Time until recognition of the image was recorded. Sustained attention was tested using the Test of Attentional Performance. PD patients with VH recognized all images but were significantly slower in image recognition than both PD patients without VH and HC. PD patients with VH showed decreased sustained attention compared to PD patients without VH, who again performed worse than HC. In conclusion, the recognition of objects is intact in PD patients with VH; however, these patients were significantly slower in image recognition than patients without VH and HC, which was not explained by executive dysfunction. Both image recognition speed and sustained attention decline in PD, in a more progressive way if VH start to occur. (c) 2008 Movement Disorder Society.

  1. Comparing object recognition from binary and bipolar edge images for visual prostheses

    NASA Astrophysics Data System (ADS)

    Jung, Jae-Hyun; Pu, Tian; Peli, Eli

    2016-11-01

    Visual prostheses require an effective representation method due to the limited display condition which has only 2 or 3 levels of grayscale in low resolution. Edges derived from abrupt luminance changes in images carry essential information for object recognition. Typical binary (black and white) edge images have been used to represent features to convey essential information. However, in scenes with a complex cluttered background, the recognition rate of the binary edge images by human observers is limited and additional information is required. The polarity of edges and cusps (black or white features on a gray background) carries important additional information; the polarity may provide shape from shading information missing in the binary edge image. This depth information may be restored by using bipolar edges. We compared object recognition rates from 16 binary edge images and bipolar edge images by 26 subjects to determine the possible impact of bipolar filtering in visual prostheses with 3 or more levels of grayscale. Recognition rates were higher with bipolar edge images and the improvement was significant in scenes with complex backgrounds. The results also suggest that erroneous shape from shading interpretation of bipolar edges resulting from pigment rather than boundaries of shape may confound the recognition.

  2. Blockade of glutamatergic transmission in perirhinal cortex impairs object recognition memory in macaques.

    PubMed

    Malkova, Ludise; Forcelli, Patrick A; Wellman, Laurie L; Dybdal, David; Dubach, Mark F; Gale, Karen

    2015-03-25

    The perirhinal cortex (PRc) is essential for visual recognition memory, as shown by electrophysiological recordings and lesion studies in a variety of species. However, relatively little is known about the functional contributions of perirhinal subregions. Here we used a systematic mapping approach to identify the critical subregions of PRc through transient, focal blockade of glutamate receptors by intracerebral infusion of kynurenic acid. Nine macaques were tested for visual recognition memory using the delayed nonmatch-to-sample task. We found that inactivation of medial PRc (consisting of Area 35 together with the medial portion of Area 36), but not lateral PRc (the lateral portion of Area 36), resulted in a significant delay-dependent impairment. Significant impairment was observed with 30 and 60 s delays but not with 10 s delays. The magnitude of impairment fell within the range previously reported after PRc lesions. Furthermore, we identified a restricted area located within the most anterior part of medial PRc as critical for this effect. Moreover, we found that focal blockade of either NMDA receptors by the receptor-specific antagonist AP-7 or AMPA receptors by the receptor-specific antagonist NBQX was sufficient to disrupt object recognition memory. The present study expands the knowledge of the role of PRc in recognition memory by identifying a subregion within this area that is critical for this function. Our results also indicate that, like in the rodent, both NMDA and AMPA-mediated transmission contributes to object recognition memory.

  3. Moment invariants applied to the recognition of objects using neural networks

    NASA Astrophysics Data System (ADS)

    Gonzaga, Adilson; Ferreira Costa, Jose A.

    1996-11-01

    Visual pattern recognition and visual object recognition are central aspects of high-level computer vision systems. This paper describes a method of recognizing patterns and objects in digital images with several types of objects in different positions. The moment invariants of such real-world, noise-containing images are processed by a neural network, which performs a pattern classification. Two learning methods are adopted for training the network: the conjugate gradient and the Levenberg-Marquardt algorithms, both in conjunction with simulated annealing, for different sets of error conditions and features. Real images are used for testing the net's correct class assignments and rejections. We present results and comments focusing on the system's capacity to generalize, even in the presence of noise, geometrical transformations, object shadows and other types of image degradation. One advantage of the artificial neural network employed is its low execution time, allowing the system to be integrated into an industrial assembly line for automated visual inspection.
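
    As background, the moment-invariant features feeding such a classifier can be computed directly from normalized central moments; the sketch below evaluates the first two Hu invariants, which are unchanged under translation, scaling, and rotation. The test shapes are illustrative and the neural-network stage is omitted.

        import numpy as np

        def hu_invariants(img):
            """First two Hu moment invariants of a 2-D gray-level (or binary) image."""
            y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
            m00 = img.sum()
            xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

            def mu(p, q):                       # central moment
                return (((x - xc) ** p) * ((y - yc) ** q) * img).sum()

            def eta(p, q):                      # normalized central moment
                return mu(p, q) / m00 ** (1 + (p + q) / 2.0)

            phi1 = eta(2, 0) + eta(0, 2)
            phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
            return np.array([phi1, phi2])

        # Illustration: an elongated rectangle and a rotated, shifted, rescaled copy give
        # nearly the same invariants, which is what makes them useful recognition features.
        a = np.zeros((100, 100)); a[40:60, 20:80] = 1.0     # 20x60 rectangle
        b = np.zeros((100, 100)); b[15:45, 55:65] = 1.0     # 30x10 rectangle (rotated, half scale)
        print(hu_invariants(a), hu_invariants(b))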

  4. A temporal context repetition effect in rats during a novel object recognition memory task.

    PubMed

    Manns, Joseph R; Galloway, Claire R; Sederberg, Per B

    2015-09-01

    Recent research in humans has used formal models of temporal context, broadly defined as a lingering representation of recent experience, to explain a wide array of recall and recognition memory phenomena. One difficulty in extending this work to studies of experimental animals has been the challenge of developing a task to test temporal context effects on performance in rodents. The current study presents results from a novel object recognition memory paradigm that was adapted from a task used in humans and demonstrates a temporal context repetition effect in rats. Specifically, the findings indicate that repeating the first two objects from a once-encountered sequence of three objects incidentally cues memory for the third object, even in its absence. These results reveal that temporal context influences item memory in rats similar to the manner in which it influences memory in humans and also highlight a new task for future studies of temporal context in experimental animals.

  5. Perirhinal cortex lesions impair tests of object recognition memory but spare novelty detection.

    PubMed

    Olarte-Sánchez, Cristian M; Amin, Eman; Warburton, E Clea; Aggleton, John P

    2015-12-01

    The present study examined why perirhinal cortex lesions in rats impair the spontaneous ability to select novel objects in preference to familiar objects, when both classes of object are presented simultaneously. The study began by repeating this standard finding, using a test of delayed object recognition memory. As expected, the perirhinal cortex lesions reduced the difference in exploration times for novel vs. familiar stimuli. In contrast, the same rats with perirhinal cortex lesions appeared to perform normally when the preferential exploration of novel vs. familiar objects was tested sequentially, i.e. when each trial consisted of only novel or only familiar objects. In addition, there was no indication that the perirhinal cortex lesions reduced total levels of object exploration for novel objects, as would be predicted if the lesions caused novel stimuli to appear familiar. Together, the results show that, in the absence of perirhinal cortex tissue, rats still receive signals of object novelty, although they may fail to link that information to the appropriate object. Consequently, these rats are impaired in discriminating the source of object novelty signals, leading to deficits on simultaneous choice tests of recognition.

  6. Progestogens’ effects and mechanisms for object recognition memory across the lifespan

    PubMed Central

    Walf, Alicia A.; Koonce, Carolyn J.; Frye, Cheryl A.

    2016-01-01

    This review explores the effects of female reproductive hormones, estrogens and progestogens, with a focus on progesterone and allopregnanolone, on object memory. Progesterone and its metabolites, in particular allopregnanolone, exert various effects on both cognitive and non-mnemonic functions in females. The well-known object recognition task is a valuable experimental paradigm that can be used to determine the effects and mechanisms of progestogens for mnemonic effects across the lifespan, which will be discussed herein. In this task there is little test decay when different objects are used as targets, and baseline valence for objects is controlled. This allows repeated testing, within-subjects designs, and longitudinal assessments, which aid understanding of changes in hormonal milieu. Objects are not aversive or food-based, which are hormone-sensitive factors. This review focuses on published data from our laboratory, and others, using the object recognition task in rodents to assess the role and mechanisms of progestogens throughout the lifespan. Improvements in object recognition performance of rodents are often associated with higher hormone levels in the hippocampus and prefrontal cortex during natural cycles, with hormone replacement following ovariectomy in young animals, or with aging. The capacity for reversal of age- and reproductive senescence-related decline in cognitive performance, and changes in neural plasticity that may be dissociated from peripheral effects with such decline, are discussed. The focus here will be on the effects of brain-derived factors, such as the neurosteroid allopregnanolone, and other hormones, for enhancing object recognition across the lifespan. PMID:26235328

  7. Real-time optical multiple object recognition and tracking system and method

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin (Inventor); Liu, Hua Kuang (Inventor)

    1987-01-01

    The invention relates to an apparatus and associated methods for the optical recognition and tracking of multiple objects in real time. Multiple point spatial filters are employed that pre-define the objects to be recognized at run-time. The system takes the basic technology of a Vander Lugt filter and adds a hololens. The technique replaces time-, space- and cost-intensive digital techniques. In place of multiple objects, the system can also recognize multiple orientations of a single object. This latter capability has potential for space applications where space and weight are at a premium.

  8. Real-time optical multiple object recognition and tracking system and method

    NASA Astrophysics Data System (ADS)

    Chao, Tien-Hsin; Liu, Hua Kuang

    1987-12-01

    The invention relates to an apparatus and associated methods for the optical recognition and tracking of multiple objects in real time. Multiple point spatial filters are employed that pre-define the objects to be recognized at run-time. The system takes the basic technology of a Vander Lugt filter and adds a hololens. The technique replaces time-, space- and cost-intensive digital techniques. In place of multiple objects, the system can also recognize multiple orientations of a single object. This latter capability has potential for space applications where space and weight are at a premium.
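
    The optical correlator described in the two records above performs, in analog optics, what a digital matched filter does: correlate the input scene with a stored reference via Fourier transforms. The NumPy sketch below is only a digital analogue to make that operation concrete; it is not the patented optical implementation, and the hololens-based multiplexing of several filters is not modelled.

        import numpy as np

        rng = np.random.default_rng(0)

        # Reference object: a small cross-shaped template.
        ref = np.zeros((16, 16))
        ref[7:9, 2:14] = 1.0
        ref[2:14, 7:9] = 1.0

        # Input scene: noise with the reference embedded at a known location.
        scene = rng.normal(0.0, 0.2, (128, 128))
        scene[50:66, 80:96] += ref

        # Matched (Vander Lugt-style) filtering: multiply the scene spectrum by the
        # conjugate of the reference spectrum, then transform back to the correlation plane.
        ref_padded = np.zeros_like(scene)
        ref_padded[:16, :16] = ref
        corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(ref_padded))).real

        peak = np.unravel_index(np.argmax(corr), corr.shape)
        print("correlation peak at", peak, "(object was placed at row 50, col 80)")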

  9. LASSBio-579, a prototype antipsychotic drug, and clozapine are effective in novel object recognition task, a recognition memory model.

    PubMed

    Antonio, Camila B; Betti, Andresa H; Herzfeldt, Vivian; Barreiro, Eliezer J; Fraga, Carlos A M; Rates, Stela M K

    2016-06-01

    Previous studies on the N-phenylpiperazine derivative LASSBio-579 have suggested that LASSBio-579 has an atypical antipsychotic profile. It binds to D2, D4 and 5-HT1A receptors and is effective in animal models of schizophrenia symptoms (prepulse inhibition disruption, apomorphine-induced climbing and amphetamine-induced stereotypy). In the current study, we evaluated the effect of LASSBio-579, clozapine (atypical antipsychotic) and haloperidol (typical antipsychotic) in the novel object recognition task, a recognition memory model with translational value. Haloperidol (0.01 mg/kg, orally) impaired the ability of the animals (CF1 mice) to recognize the novel object on short-term and long-term memory tasks, whereas LASSBio-579 (5 mg/kg, orally) and clozapine (1 mg/kg, orally) did not. In another set of experiments, animals previously treated with ketamine (10 mg/kg, intraperitoneally) or vehicle (saline 1 ml/100 g, intraperitoneally) received LASSBio-579, clozapine or haloperidol at different time-points: 1 h before training (encoding/consolidation); immediately after training (consolidation); or 1 h before long-term memory testing (retrieval). LASSBio-579 and clozapine protected against the long-term memory impairment induced by ketamine when administered at the stages of encoding, consolidation and retrieval of memory. These findings point to the potential of LASSBio-579 for treating cognitive symptoms of schizophrenia and other disorders.

  10. Learning invariant object recognition from temporal correlation in a hierarchical network.

    PubMed

    Lessmann, Markus; Würtz, Rolf P

    2014-06-01

    Invariant object recognition, i.e., the recognition of object categories independent of conditions such as viewing angle, scale, and illumination, is a task of great interest that humans still perform much better than artificial systems. In recent years, several basic principles have been derived from neurophysiological observations and careful consideration: (1) developing invariance to possible transformations of an object by learning the temporal sequences of visual features that occur during those alterations; (2) learning in a hierarchical structure, so that basic-level (visual) knowledge can be reused for different kinds of objects; and (3) using feedback to compare predicted input with the current input when choosing an interpretation of ambiguous signals. In this paper we propose a network that implements all of these concepts in a computationally efficient manner and gives very good results on standard object datasets. By dynamically switching off weakly active neurons and pruning weights, computation is sped up, making it possible to handle databases with several thousand images and a comparable number of categories. The parameters involved allow flexible adaptation to the information content of the training data and relatively easy tuning to different databases. A precondition for successful learning is that training images are presented in an order that ensures images of the same object under similar viewing conditions follow one another. Through an implementation with sparse data structures, the system has moderate memory demands and still yields very good recognition rates.
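
    Principle (1) above, learning invariance from temporal sequences, is commonly implemented with a trace rule, in which a slowly decaying trace of recent output activity ties together successive views of the same object. The sketch below is a generic, hypothetical single-layer illustration of that idea (not the authors' network); `views` is assumed to be an array of feature vectors for one object, in temporal order.

```python
import numpy as np

def trace_rule_layer(views, n_units=20, eta=0.05, decay=0.8, seed=0):
    """Trace-modulated Hebbian learning: frames that follow each other in time
    drive the same output units, so the units become tolerant to the
    transformations occurring within the sequence."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_units, views.shape[1]))
    trace = np.zeros(n_units)
    for x in views:                                   # temporally ordered views
        y = np.maximum(W @ x, 0.0)                    # feedforward activation
        trace = decay * trace + (1.0 - decay) * y     # slowly decaying trace
        W += eta * np.outer(trace, x)                 # Hebbian update with trace
        W /= np.linalg.norm(W, axis=1, keepdims=True) # keep weight norms bounded
    return W
```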

  11. 3D Object Recognition of a Robotic Navigation Aid for the Visually Impaired.

    PubMed

    Ye, Cang; Qian, Xiangfei

    2017-09-01

    This paper presents a 3D object recognition method and its implementation on a Robotic Navigation Aid (RNA) to allow real-time detection of indoor structural objects for the navigation of a blind person. The method segments a point cloud into numerous planar patches and extracts their Inter-Plane Relationships (IPRs). Based on the existing IPRs of the object models, the method defines 6 High Level Features (HLFs) and determines the HLFs for each patch. A Gaussian-Mixture-Model-based plane classifier is then devised to classify each planar patch into one belonging to a particular object model. Finally, a recursive plane clustering procedure is used to cluster the classified planes into the model objects. As the proposed method uses geometric context to detect an object, it is robust to the object's visual appearance change. As a result, it is ideal for detecting structural objects (e.g., stairways, doorways, etc.). In addition, it has high scalability and parallelism. The method is also capable of detecting some indoor non-structural objects. Experimental results demonstrate that the proposed method has a high success rate in object recognition.
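
    The Gaussian-mixture-based plane classifier described above can be sketched, under the assumption that each planar patch has already been reduced to a feature vector of its high-level features, by fitting one mixture per object model and assigning new patches by likelihood. The helper names and the use of scikit-learn are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_plane_models(features_by_model, n_components=2):
    """Fit one Gaussian mixture per object model (e.g. 'stairway', 'doorway')
    over the HLF vectors of its training patches."""
    return {name: GaussianMixture(n_components=n_components, random_state=0).fit(f)
            for name, f in features_by_model.items()}

def classify_patches(models, patch_features):
    """Assign each planar patch to the object model with the highest
    log-likelihood; a clustering step would then group patches into objects."""
    names = list(models)
    scores = np.column_stack([models[n].score_samples(patch_features) for n in names])
    return [names[i] for i in scores.argmax(axis=1)]
```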

  12. Cascade fuzzy ART: a new extensible database for model-based object recognition

    NASA Astrophysics Data System (ADS)

    Hung, Hai-Lung; Liao, Hong-Yuan M.; Lin, Shing-Jong; Lin, Wei-Chung; Fan, Kuo-Chin

    1996-02-01

    In this paper, we propose a cascade fuzzy ART (CFART) neural network which can be used as an extensible database in a model-based object recognition system. The proposed CFART network can accept both binary and continuous inputs; moreover, it preserves the prominent characteristics of a fuzzy ART network and extends fuzzy ART's capability toward a hierarchical class representation of input patterns. Learning in the proposed network is unsupervised and self-organizing, and comprises coupled top-down searching and bottom-up learning processes. In addition, a global search tree is built to speed up the learning and recognition processes.
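
    The fuzzy ART module at the heart of the cascade performs a category search with a choice function, a vigilance test, and resonance learning. A minimal sketch of one input presentation is given below; it follows the standard fuzzy ART formulation (complement-coded inputs, parameters rho, alpha, beta) rather than the paper's specific cascade, whose details are assumed here.

```python
import numpy as np

def fuzzy_art_present(I, weights, rho=0.75, alpha=0.001, beta=1.0):
    """Present one complement-coded input vector I to a fuzzy ART module.
    Returns the updated weight list and the index of the chosen category."""
    if not weights:                                   # no categories yet
        return [I.copy()], 0
    choice = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in weights]
    for j in np.argsort(choice)[::-1]:                # search best category first
        match = np.minimum(I, weights[j]).sum() / I.sum()
        if match >= rho:                              # vigilance test passed
            weights[j] = beta * np.minimum(I, weights[j]) + (1 - beta) * weights[j]
            return weights, j
    weights.append(I.copy())                          # no resonance: new category
    return weights, len(weights) - 1
```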

  13. Image quality analysis and improvement of Ladar reflective tomography for space object recognition

    NASA Astrophysics Data System (ADS)

    Wang, Jin-cheng; Zhou, Shi-wei; Shi, Liang; Hu, Yi-Hua; Wang, Yong

    2016-01-01

    Several problems in the application of ladar reflective tomography (LRT) to space object recognition are studied in this work. An analytic target model is adopted to investigate the image reconstruction properties under a limited relative angle range, which are useful for verifying the target shape from an incomplete image, analyzing the target's shadowing effect, and designing satellite payloads that resist recognition via the reflective tomography approach. We propose an iterative maximum-likelihood method based on Bayesian theory, which can effectively compress the pulse width and greatly improve the image resolution of an incoherent LRT system without loss of signal-to-noise ratio.
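
    The iterative maximum-likelihood reconstruction mentioned above is in the spirit of Richardson-Lucy / MLEM deconvolution, which sharpens a blurred range profile through multiplicative updates that raise the Poisson likelihood of the estimate. The 1-D sketch below is a generic illustration of that family of algorithms, not the authors' exact method; `measured` is a blurred profile and `psf` the system pulse shape.

```python
import numpy as np

def mlem_deconvolve(measured, psf, n_iter=50):
    """Richardson-Lucy / MLEM-style deconvolution of a 1-D profile: each
    iteration multiplies the estimate by the back-projected ratio of the
    data to the current blurred prediction."""
    psf = psf / psf.sum()
    psf_flipped = psf[::-1]
    estimate = np.full_like(measured, measured.mean(), dtype=float)
    for _ in range(n_iter):
        predicted = np.convolve(estimate, psf, mode="same")
        ratio = measured / np.maximum(predicted, 1e-12)   # avoid division by zero
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate
```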

  14. Vision: are models of object recognition catching up with the brain?

    PubMed

    Poggio, Tomaso; Ullman, Shimon

    2013-12-01

    Object recognition has been a central yet elusive goal of computational vision. For many years, computer performance seemed highly deficient and unable to emulate the basic capabilities of the human recognition system. Over the past decade or so, computer scientists and neuroscientists have developed algorithms and systems-and models of visual cortex-that have come much closer to human performance in visual identification and categorization. In this personal perspective, we discuss the ongoing struggle of visual models to catch up with the visual cortex, identify key reasons for the relatively rapid improvement of artificial systems and models, and identify open problems for computational vision in this domain.

  15. Optimized shape semantic graph representation for object understanding and recognition in point clouds

    NASA Astrophysics Data System (ADS)

    Ning, Xiaojuan; Wang, Yinghui; Meng, Weiliang; Zhang, Xiaopeng

    2016-10-01

    To understand and recognize the three-dimensional (3-D) objects represented as point cloud data, we use an optimized shape semantic graph (SSG) to describe 3-D objects. Based on the decomposed components of an object, the boundary surface of different components and the topology of components, the SSG gives a semantic description that is consistent with human vision perception. The similarity measurement of the SSG for different objects is effective for distinguishing the type of object and finding the most similar one. Experiments using a shape database show that the SSG is valuable for capturing the components of the objects and the corresponding relations between them. The SSG is not only suitable for an object without any loops but also appropriate for an object with loops to represent the shape and the topology. Moreover, a two-step progressive similarity measurement strategy is proposed to effectively improve the recognition rate in the shape database containing point-sample data.
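
    The two-step progressive similarity strategy can be read as a coarse screen on cheap global descriptors followed by a finer graph comparison restricted to the survivors. The sketch below is a hypothetical illustration of that control flow; `coarse_sim` and `fine_sim` stand in for the SSG similarity measures, which are not specified here.

```python
def progressive_match(query, database, coarse_sim, fine_sim, keep=10):
    """Two-step progressive matching: rank the whole database with a cheap
    coarse measure, then re-rank only the top `keep` candidates with the
    expensive fine (graph-based) measure and return the best match."""
    shortlist = sorted(database, key=lambda m: coarse_sim(query, m), reverse=True)[:keep]
    return max(shortlist, key=lambda m: fine_sim(query, m))
```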

  16. Mechanisms of Visual Object Recognition in Infancy: Five-Month-Olds Generalize beyond the Interpolation of Familiar Views

    ERIC Educational Resources Information Center

    Mash, Clay; Arterberry, Martha E.; Bornstein, Marc H.

    2007-01-01

    This work examined predictions of the interpolation of familiar views (IFV) account of object recognition performance in 5-month-olds. Infants were familiarized to an object either from a single viewpoint or from multiple viewpoints varying in rotation around a single axis. Object recognition was then tested in both conditions with the same object…

  17. Transformation-tolerant object recognition in rats revealed by visual priming.

    PubMed

    Tafazoli, Sina; Di Filippo, Alessandro; Zoccolan, Davide

    2012-01-04

    Successful use of rodents as models for studying object vision crucially depends on the ability of their visual system to construct representations of visual objects that tolerate (i.e., remain relatively unchanged with respect to) the tremendous changes in object appearance produced, for instance, by size and viewpoint variation. Whether this is the case is still controversial, despite some recent demonstration of transformation-tolerant object recognition in rats. In fact, it remains unknown to what extent such a tolerant recognition has a spontaneous, perceptual basis, or, alternatively, mainly reflects learning of arbitrary associative relations among trained object appearances. In this study, we addressed this question by training rats to categorize a continuum of morph objects resulting from blending two object prototypes. The resulting psychometric curve (reporting the proportion of responses to one prototype along the morph line) served as a reference when, in a second phase of the experiment, either prototype was briefly presented as a prime, immediately before a test morph object. The resulting shift of the psychometric curve showed that recognition became biased toward the identity of the prime. Critically, this bias was observed also when the primes were transformed along a variety of dimensions (i.e., size, position, viewpoint, and their combination) that the animals had never experienced before. These results indicate that rats spontaneously perceive different views/appearances of an object as similar (i.e., as instances of the same object) and argue for the existence of neuronal substrates underlying formation of transformation-tolerant object representations in rats.

  18. c-Fos expression correlates with performance on novel object and novel place recognition tests.

    PubMed

    Mendez, Marta; Arias, Natalia; Uceda, Sara; Arias, Jorge L

    2015-08-01

    In rodents, many studies have been carried out using novelty-preference paradigms. The results show that the perirhinal cortex and the hippocampus are involved in the recognition of a novel object, "what", and its new position, "where", respectively. We employed these two variants of a novelty-preference paradigm to assess whether the expression of the immediate-early gene c-fos in the dorsal hippocampus and perirhinal cortex correlates with the performance discrimination ratio (d2), on the respective versions of the novelty preference tests. A control group (CO) was added to explore c-fos activation not specific to recognition. The results showed different patterns of c-Fos protein expression in the hippocampus and perirhinal cortex. The Where Group presented more c-Fos positive nuclei than the What and CO groups in the CA1 and CA3 regions, whereas in the perirhinal cortex, the What Group showed more c-Fos positive nuclei than the Where and CO groups. The correlation results indicate that levels of c-Fos in the CA1 area and perirhinal cortex correlate with effective exploration, d2, on the respective versions of the novelty preference tests, novel place and novel object recognition. These data suggest that the hippocampal CA1 and perirhinal cortex are specifically related to the level of recognition of place and objects, respectively.
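
    The discrimination ratio d2 used here is conventionally computed as the difference in exploration of the novel versus the familiar stimulus divided by total exploration. A small helper (exploration times in seconds; an illustration, not the authors' analysis code):

```python
def discrimination_ratio_d2(novel_s, familiar_s):
    """d2 = (novel - familiar) / (novel + familiar); ranges from -1 to 1,
    with values above 0 indicating preference for the novel object or place."""
    total = novel_s + familiar_s
    if total == 0:
        raise ValueError("no exploration recorded")
    return (novel_s - familiar_s) / total

# Example: 20 s on the novel object and 10 s on the familiar one gives d2 ~= 0.33.
```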

  19. Grouping in object recognition: The role of a Gestalt law in letter identification

    PubMed Central

    Pelli, Denis G.; Majaj, Najib J.; Raizman, Noah; Christian, Christopher J.; Kim, Edward; Palomares, Melanie C.

    2009-01-01

    The Gestalt psychologists reported a set of laws describing how vision groups elements to recognize objects. The Gestalt laws “prescribe for us what we are to recognize ‘as one thing’” (Köhler, 1920). Were they right? Does object recognition involve grouping? Tests of the laws of grouping have been favourable, but mostly assessed only detection, not identification, of the compound object. The grouping of elements seen in the detection experiments with lattices and “snakes in the grass” is compelling, but falls far short of the vivid everyday experience of recognizing a familiar, meaningful, named thing, which mediates the ordinary identification of an object. Thus, after nearly a century, there is hardly any evidence that grouping plays a role in ordinary object recognition. To assess grouping in object recognition, we made letters out of grating patches and measured threshold contrast for identifying these letters in visual noise as a function of perturbation of grating orientation, phase, and offset. We define a new measure, “wiggle”, to characterize the degree to which these various perturbations violate the Gestalt law of good continuation. We find that efficiency for letter identification is inversely proportional to wiggle and is wholly determined by wiggle, independent of how the wiggle was produced. Thus the effects of three different kinds of shape perturbation on letter identifiability are predicted by a single measure of goodness of continuation. This shows that letter identification obeys the Gestalt law of good continuation and may be the first confirmation of the original Gestalt claim that object recognition involves grouping. PMID:19424881
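
    Efficiency in this literature is usually the squared ratio of ideal-observer to human threshold contrast, and the central claim above is that it falls off as the reciprocal of wiggle. A hypothetical check of that relation (the data arrays are placeholders, not the paper's measurements):

```python
import numpy as np

def efficiency(ideal_threshold, human_threshold):
    """Identification efficiency as the squared ratio of ideal-observer to
    human threshold contrast (standard ideal-observer convention)."""
    return (ideal_threshold / human_threshold) ** 2

def inverse_law_constant(wiggle, eff):
    """Least-squares constant k for the claimed relation eff ~= k / wiggle."""
    wiggle = np.asarray(wiggle, dtype=float)
    eff = np.asarray(eff, dtype=float)
    return np.sum(eff / wiggle) / np.sum(1.0 / wiggle**2)
```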

  20. Acute restraint stress and corticosterone transiently disrupts novelty preference in an object recognition task.

    PubMed

    Vargas-López, Viviana; Torres-Berrio, Angélica; González-Martínez, Lina; Múnera, Alejandro; Lamprea, Marisol R

    2015-09-15

    The object recognition task, a procedure based on rodents' natural tendency to explore novel objects, is frequently used for memory testing. However, in some instances novelty preference is replaced by familiarity preference, raising questions regarding the validity of novelty preference as a pure recognition memory index. Disruption of novel object preference induced by acute stress or corticosterone administration has frequently been interpreted as memory impairment; however, it is still not clear whether such an effect can actually be attributed to mnemonic disruption or to altered novelty seeking. To evaluate the effect of stress and corticosterone on the object recognition task, seventy-five adult male Wistar rats were trained in the task and subjected to either acute stress or corticosterone administration. Acute stress was induced by restraining movement for 1 or 4 h, ending 30 min before the sample trial. Corticosterone was injected intraperitoneally 10 min before the test trial, which was performed either 1 or 24 h after the sample trial. Four-hour, but not 1-h, stress induced familiar object preference during the test trial performed 1 h after the sample trial; however, acute stress had no effect on the test performed 24 h after the sample trial. Systemic administration of corticosterone before the test trial performed either 1 or 24 h after the sample trial also resulted in familiar object preference. Neither acute stress nor corticosterone induced changes in locomotor behaviour. Taken together, these results suggest that acute stress probably does not impair memory retrieval but instead induces an emotionally arousing state that motivates novelty avoidance.

  1. Multiple degree of freedom object recognition using optical relational graph decision nets

    NASA Technical Reports Server (NTRS)

    Casasent, David P.; Lee, Andrew J.

    1988-01-01

    Multiple-degree-of-freedom object recognition concerns objects with no stable rest position, for which all scale, rotation, and aspect distortions are possible. It is assumed that the objects are in a fairly benign background, so that feature extractors are usable. In-plane distortion invariance is provided by use of a polar-log coordinate transform feature space, and out-of-plane distortion invariance is provided by linear discriminant function design. Relational graph decision nets are considered for multiple-degree-of-freedom pattern recognition. The design of Fisher (1936) linear discriminant functions and synthetic discriminant functions for use at the nodes of binary and multidecision nets is discussed. Case studies are detailed for two-class and multiclass problems. Simulation results demonstrate the robustness of the processors to quantization of the filter coefficients and to noise.
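
    The polar-log (log-polar) coordinate transform mentioned above maps in-plane rotation and scale changes into translations, which a shift-invariant classifier can then absorb. A hypothetical NumPy/SciPy sketch of the resampling step (parameter names are illustrative):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar_transform(image, n_rho=64, n_theta=64):
    """Resample a 2-D image onto a log-polar grid centred on the image centre:
    rotation of the input becomes a shift along the theta axis, and scaling
    becomes a shift along the log-rho axis."""
    cy, cx = (np.asarray(image.shape, dtype=float) - 1.0) / 2.0
    max_r = np.hypot(cy, cx)
    rho = np.exp(np.linspace(0.0, np.log(max_r), n_rho))
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(rho, theta, indexing="ij")
    coords = np.array([cy + R * np.sin(T), cx + R * np.cos(T)])
    return map_coordinates(image, coords, order=1, mode="constant")
```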

  2. Expanded Dempster-Shafer reasoning technique for image feature integration and object recognition

    NASA Astrophysics Data System (ADS)

    Zhu, Quiming; Huang, Yinghua; Payne, Matt G.

    1992-12-01

    Integration of information from multiple sources has been one of the key steps to the success of general vision systems. It is also an essential problem in the development of color image understanding algorithms that make full use of multichannel color data for object recognition. This paper presents a feature integration system characterized by a hybrid combination of a statistics-based reasoning technique and a symbolic logic-based inference method. A competitive evidence enhancement scheme is used in the process to fuse information from multiple sources. The scheme expands the Dempster-Shafer rule of combination and improves the reliability of object recognition. When applied to integrating the object features extracted from the multiple spectra of color images, the system alleviates the drawbacks of a traditional Bayesian classification system.
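
    The Dempster-Shafer rule of combination that the paper expands fuses two basic probability assignments and renormalizes by the non-conflicting mass. A minimal sketch of the classical rule (the class labels in the usage comment are invented for illustration):

```python
def dempster_combine(m1, m2):
    """Classical Dempster rule of combination for two mass functions given as
    {frozenset_of_hypotheses: mass}. Mass assigned to empty intersections
    (conflict) is discarded and the remainder is renormalized."""
    combined, conflict = {}, 0.0
    for a, mass_a in m1.items():
        for b, mass_b in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mass_a * mass_b
            else:
                conflict += mass_a * mass_b
    if conflict >= 1.0:
        raise ValueError("total conflict: the two sources are incompatible")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Usage: m1 = {frozenset({'car'}): 0.6, frozenset({'car', 'truck'}): 0.4}
#        m2 = {frozenset({'car'}): 0.5, frozenset({'truck'}): 0.5}
#        fused = dempster_combine(m1, m2)
```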

  3. Visual Crowding: a fundamental limit on conscious perception and object recognition

    PubMed Central

    Whitney, David; Levi, Dennis M.

    2011-01-01

    Crowding, the inability to recognize objects in clutter, sets a fundamental limit on conscious visual perception and object recognition throughout most of the visual field. Despite how widespread and essential it is to object recognition, reading, and visually guided action, a solid operational definition of what crowding is has only recently become clear. The goal of this review is to provide a broad-based synthesis of the most recent findings in this area, to define what crowding is and is not, and to set the stage for future work that will extend crowding well beyond low-level vision. Here we define five diagnostic criteria for what counts as crowding, and further describe factors that both escape and break crowding. All of these lead to the conclusion that crowding occurs at multiple stages in the visual hierarchy. PMID:21420894

  4. False recognition of objects in visual scenes: findings from a combined direct and indirect memory test.

    PubMed

    Weinstein, Yana; Nash, Robert A

    2013-01-01

    We report an extension of the procedure devised by Weinstein and Shanks (Memory & Cognition 36:1415-1428, 2008) to study false recognition and priming of pictures. Participants viewed scenes with multiple embedded objects (seen items), then studied the names of these objects and the names of other objects (read items). Finally, participants completed a combined direct (recognition) and indirect (identification) memory test that included seen items, read items, and new items. In the direct test, participants recognized pictures of seen and read items more often than new pictures. In the indirect test, participants' speed at identifying those same pictures was improved for pictures that they had actually studied, and also for falsely recognized pictures whose names they had read. These data provide new evidence that a false-memory induction procedure can elicit memory-like representations that are difficult to distinguish from "true" memories of studied pictures.

  5. Semantic memory retrieval: cortical couplings in object recognition in the N400 window.

    PubMed

    Supp, Gernot G; Schlögl, Alois; Fiebach, Christian J; Gunter, Thomas C; Vigliocco, Gabriella; Pfurtscheller, Gert; Petsche, Hellmuth

    2005-02-01

    To characterize the regional changes in neuronal coupling and information transfer related to semantic aspects of object recognition in humans, we used partial directed coherence (PDC) analysis of the EEG. We examined the differences in processing recognizable and unrecognizable pictures as reflected by changes in cortical networks within the time window of a defined event-related potential (ERP) component, namely the N400. Fourteen participants performed an image recognition task while sequentially confronted with pictures of recognizable and unrecognizable objects. The time window of the N400, as an index of object semantics, was defined from the ERP. Differences in beta-band PDC between these conditions were represented topographically as patterns of electrical coupling, possibly indicating changing degrees of functional cooperation between brain areas. Successful memory retrieval of picture meaning appears to be supported by networks comprising left temporal and parietal regions and bilateral frontal brain areas.
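
    Partial directed coherence is computed from the coefficient matrices of a multivariate autoregressive (VAR) model fitted to the EEG; the influence of one channel on another at each frequency is the column-normalized magnitude of the Fourier-transformed coefficients. The sketch below implements that standard formula; the VAR fit itself is assumed to come from elsewhere.

```python
import numpy as np

def partial_directed_coherence(ar_coeffs, freqs, fs=1.0):
    """PDC from VAR coefficients. ar_coeffs has shape (p, n, n) for model
    order p and n channels; returns an array of shape (len(freqs), n, n)
    where entry [k, i, j] is the influence of channel j on channel i at freqs[k]."""
    p, n, _ = ar_coeffs.shape
    pdc = np.zeros((len(freqs), n, n))
    for k, f in enumerate(freqs):
        A_f = np.eye(n, dtype=complex)
        for r in range(p):
            A_f -= ar_coeffs[r] * np.exp(-2j * np.pi * f * (r + 1) / fs)
        column_norm = np.sqrt((np.abs(A_f) ** 2).sum(axis=0))  # one norm per source j
        pdc[k] = np.abs(A_f) / column_norm
    return pdc
```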

  6. 3-D Object Recognition Using Combined Overhead And Robot Eye-In-Hand Vision System

    NASA Astrophysics Data System (ADS)

    Luc, Ren C.; Lin, Min-Hsiung

    1987-10-01

    A new approach for recognizing 3-D objects using a combined overhead and eye-in-hand vision system is presented. A novel eye-in-hand vision system using a fiber-optic image array is described. The significance of this approach is the fast and accurate recognition of 3-D object information compared to traditional stereo image processing. For the recognition of 3-D objects, the overhead vision system will take a 2-D top-view image and the eye-in-hand vision system will take side-view images orthogonal to the top-view image plane. We have developed and demonstrated a unique method to integrate this 2-D information into a 3-D representation, based on a new approach called "3-D Volumetric Description from 2-D Orthogonal Projections". The Unimate PUMA 560 and TRAPIX 5500 real-time image processor have been used to test the success of the entire system.
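
    The "3-D Volumetric Description from 2-D Orthogonal Projections" idea can be illustrated by intersecting the back-projections of two orthogonal silhouettes on a voxel grid, which is essentially shape-from-silhouette restricted to a top and a side view. The sketch below is a hypothetical illustration, not the patented procedure; `top_mask` (x-y) and `side_mask` (x-z) are boolean silhouette images sharing the x axis.

```python
import numpy as np

def carve_from_orthogonal_views(top_mask, side_mask):
    """Coarse volumetric description from two orthogonal silhouettes: a voxel
    is kept only if it projects inside the object in both the top view (x-y)
    and the side view (x-z)."""
    assert top_mask.shape[0] == side_mask.shape[0], "views must share the x axis"
    # Broadcast the top view along z and the side view along y, then intersect.
    volume = top_mask[:, :, None] & side_mask[:, None, :]
    return volume   # boolean array of shape (nx, ny, nz)
```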

  8. Relating visual to verbal semantic knowledge: the evaluation of object rec