Science.gov

Sample records for object recognition system

  1. Cognitive object recognition system (CORS)

    NASA Astrophysics Data System (ADS)

    Raju, Chaitanya; Varadarajan, Karthik Mahesh; Krishnamurthi, Niyant; Xu, Shuli; Biederman, Irving; Kelley, Troy

    2010-04-01

    We have developed a framework, the Cognitive Object Recognition System (CORS), inspired by current neurocomputational models and psychophysical research, in which multiple recognition algorithms (shape-based geometric primitives, 'geons,' and non-geometric feature-based algorithms) are integrated to provide a comprehensive solution to object recognition and landmarking. Objects are defined as a combination of geons, corresponding to their simple parts, and the relations among the parts. Objects that are not easily decomposable into geons, such as bushes and trees, are instead recognized by CORS using "feature-based" algorithms. The interaction between these algorithms is a novel approach that combines the effectiveness of both and takes us closer to a generalized approach to object recognition. CORS allows recognition of objects through a larger range of poses using geometric primitives and performs well under heavy occlusion: about 35% of the object surface is sufficient. Furthermore, the geon composition of an object allows image understanding and reasoning even with novel objects. With reliable landmarking capability, the system improves vision-based robot navigation in GPS-denied environments. Feasibility of the CORS system was demonstrated with real stereo images captured from a Pioneer robot. The system can currently identify doors, door handles, staircases, trashcans and other relevant landmarks in indoor environments.

  2. Method and System for Object Recognition Search

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor); Duong, Vu A. (Inventor); Stubberud, Allen R. (Inventor)

    2012-01-01

    A method for object recognition using shape and color features of the object to be recognized. An adaptive architecture is used to recognize and adapt the shape and color features for moving objects to enable object recognition.

  3. A Neural Network Object Recognition System

    DTIC Science & Technology

    1990-07-01

    useful for exploring different neural network configurations. There are three main computation phases of a model based object recognition system...segmentation, feature extraction, and object classification. This report focuses on the object classification stage. For segmentation, a neural network based...are available with the current system. Neural network based feature extraction may be added at a later date. The classification stage consists of a

  4. A neuromorphic system for video object recognition

    PubMed Central

    Khosla, Deepak; Chen, Yang; Kim, Kyungnam

    2014-01-01

    Automated video object recognition is a topic of emerging importance in both defense and civilian applications. This work describes an accurate and low-power neuromorphic architecture and system for real-time automated video object recognition. Our system, Neuromorphic Visual Understanding of Scenes (NEOVUS), is inspired by computational neuroscience models of feed-forward object detection and classification pipelines for processing visual data. The NEOVUS architecture is inspired by the ventral (what) and dorsal (where) streams of the mammalian visual pathway and integrates retinal processing, object detection based on form and motion modeling, and object classification based on convolutional neural networks. The object recognition performance and energy use of the NEOVUS were evaluated by the Defense Advanced Research Projects Agency (DARPA) under the Neovision2 program using three urban area video datasets collected from a mix of stationary and moving platforms. These datasets are challenging and include a large number of objects of different types in cluttered scenes, with varying illumination and occlusion conditions. In a systematic evaluation of five different teams by DARPA on these datasets, the NEOVUS demonstrated the best performance, with high object recognition accuracy and the lowest energy consumption. Its energy use was three orders of magnitude lower than that of two independent state-of-the-art baseline computer vision systems. The dynamic power requirement for the complete system mapped to commercial off-the-shelf (COTS) hardware that includes a 5.6 Megapixel color camera processed by object detection and classification algorithms at 30 frames per second was measured at 21.7 Watts (W), for an effective energy consumption of 5.45 nanoJoules (nJ) per bit of incoming video. These unprecedented results show that the NEOVUS has the potential to revolutionize automated video object recognition toward enabling practical low-power and mobile video processing.
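
    The 5.45 nJ/bit figure can be roughly cross-checked against the other numbers quoted in this abstract. The sketch below assumes 24-bit color pixels (8 bits per channel), an assumption not stated in the abstract:

```python
# Rough check of the NEOVUS energy-per-bit figure, assuming 24-bit color pixels.
power_w = 21.7            # measured dynamic power, watts
pixels = 5.6e6            # 5.6 megapixel camera
fps = 30                  # frames per second
bits_per_pixel = 24       # assumed 8-bit RGB (not stated in the abstract)

bit_rate = pixels * bits_per_pixel * fps       # incoming video bits per second
energy_per_bit_nj = power_w / bit_rate * 1e9   # joules per bit -> nanojoules per bit
print(f"{energy_per_bit_nj:.2f} nJ/bit")       # ~5.4 nJ/bit, close to the reported 5.45
```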

  5. A neuromorphic system for video object recognition.

    PubMed

    Khosla, Deepak; Chen, Yang; Kim, Kyungnam

    2014-01-01

    Automated video object recognition is a topic of emerging importance in both defense and civilian applications. This work describes an accurate and low-power neuromorphic architecture and system for real-time automated video object recognition. Our system, Neuromorphic Visual Understanding of Scenes (NEOVUS), is inspired by computational neuroscience models of feed-forward object detection and classification pipelines for processing visual data. The NEOVUS architecture is inspired by the ventral (what) and dorsal (where) streams of the mammalian visual pathway and integrates retinal processing, object detection based on form and motion modeling, and object classification based on convolutional neural networks. The object recognition performance and energy use of the NEOVUS were evaluated by the Defense Advanced Research Projects Agency (DARPA) under the Neovision2 program using three urban area video datasets collected from a mix of stationary and moving platforms. These datasets are challenging and include a large number of objects of different types in cluttered scenes, with varying illumination and occlusion conditions. In a systematic evaluation of five different teams by DARPA on these datasets, the NEOVUS demonstrated the best performance, with high object recognition accuracy and the lowest energy consumption. Its energy use was three orders of magnitude lower than that of two independent state-of-the-art baseline computer vision systems. The dynamic power requirement for the complete system mapped to commercial off-the-shelf (COTS) hardware that includes a 5.6 Megapixel color camera processed by object detection and classification algorithms at 30 frames per second was measured at 21.7 Watts (W), for an effective energy consumption of 5.45 nanoJoules (nJ) per bit of incoming video. These unprecedented results show that the NEOVUS has the potential to revolutionize automated video object recognition toward enabling practical low-power and mobile video processing.

  6. Individual differences in involvement of the visual object recognition system during visual word recognition.

    PubMed

    Laszlo, Sarah; Sacchi, Elizabeth

    2015-01-01

    Individuals with dyslexia often evince reduced activation during reading in left hemisphere (LH) language regions. This can be observed along with increased activation in the right hemisphere (RH), especially in areas associated with object recognition - a pattern referred to as RH compensation. The mechanisms of RH compensation are relatively unclear. We hypothesize that RH compensation occurs when the RH object recognition system is called upon to supplement an underperforming LH visual word form recognition system. We tested this by collecting ERPs while participants with a range of reading abilities viewed words, objects, and word/object ambiguous items (e.g., "SMILE" shaped like a smile). Less experienced readers differentiate words, objects, and ambiguous items less strongly, especially over the RH. We suggest that this lack of differentiation may have negative consequences for dyslexic individuals demonstrating RH compensation.

  7. New neural-networks-based 3D object recognition system

    NASA Astrophysics Data System (ADS)

    Abolmaesumi, Purang; Jahed, M.

    1997-09-01

    Three-dimensional object recognition has always been one of the challenging fields in computer vision. Ullman and Basri (1991) proposed that this task can be done by using a database of 2-D views of the objects. The main problem in their proposed system is that corresponding points must be known to interpolate the views. In addition, their system requires a supervisor to decide to which class the presented view belongs. In this paper, we propose a new momentum-Fourier descriptor that is invariant to scale, translation, and rotation. This descriptor provides the input feature vectors to our proposed system. By using the Dystal network, we show that the objects can be classified with over 95% precision. We have used this system to classify objects such as cubes, cones, spheres, tori, and cylinders. Because of the nature of the Dystal network, the system reaches its stable point after a single presentation of a view. The system can also group similar views into a single class (e.g., for the cube, it generated 9 different classes from 50 different input views), which can be used to select an optimum database of training views. The system is also tolerant of noise and of deformed views.
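
    The momentum-Fourier descriptor itself is not detailed in this abstract. As a loose illustration of how Fourier-type contour descriptors obtain the scale, translation, and rotation invariance claimed above, here is a minimal classical Fourier-descriptor sketch (not the authors' method):

```python
import numpy as np

def fourier_descriptor(contour_xy, n_coeffs=16):
    """Classical Fourier contour descriptor, invariant to translation, scale,
    rotation, and starting point (illustrative only; not the paper's
    momentum-Fourier descriptor)."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # contour as a complex signal
    F = np.fft.fft(z)
    F[0] = 0.0                      # drop DC term -> translation invariance
    mags = np.abs(F)                # drop phase -> rotation / start-point invariance
    mags /= mags[1]                 # normalize by first harmonic -> scale invariance
    return mags[1:n_coeffs + 1]     # low-order harmonics as the feature vector

# Example: descriptors of a circle and a scaled, rotated, shifted copy agree.
t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
moved = 3.0 * circle @ np.array([[0.6, -0.8], [0.8, 0.6]]) + [5.0, -2.0]
print(np.allclose(fourier_descriptor(circle), fourier_descriptor(moved), atol=1e-6))
```

    Translation affects only the DC coefficient, scale is removed by normalizing against the first harmonic, and in-plane rotation changes only the coefficient phases, so the magnitude spectrum is unchanged.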

  8. Automatic object recognition

    NASA Technical Reports Server (NTRS)

    Ranganath, H. S.; Mcingvale, Pat; Sage, Heinz

    1988-01-01

    Geometric and intensity features are very useful in object recognition. An intensity feature is a measure of contrast between object pixels and background pixels, while geometric features provide shape and size information. A model-based approach is presented for computing geometric features. Knowledge about the objects and the imaging system is used to estimate the orientation of objects with respect to the line of sight.

  9. Visual object recognition for mobile tourist information systems

    NASA Astrophysics Data System (ADS)

    Paletta, Lucas; Fritz, Gerald; Seifert, Christin; Luley, Patrick; Almer, Alexander

    2005-03-01

    We describe a mobile vision system that is capable of automated object identification using images captured from a PDA or a camera phone. We present a solution for the enabling technology of outdoor vision-based object recognition that will extend state-of-the-art location and context aware services towards object-based awareness in urban environments. In the proposed application scenario, tourist pedestrians are equipped with GPS, W-LAN and a camera attached to a PDA or a camera phone. They are interested in whether their field of view contains tourist sights, which would then point to more detailed information. Multimedia data about the related history, architecture, or other cultural context of historic or artistic relevance might be explored by a mobile user who intends to learn within the urban environment. Learning from ambient cues is in this way achieved by pointing the device towards an urban sight, capturing an image, and consequently getting information about the object on site and within the focus of attention, i.e., the user's current field of view.

  10. Poka Yoke system based on image analysis and object recognition

    NASA Astrophysics Data System (ADS)

    Belu, N.; Ionescu, L. M.; Misztal, A.; Mazăre, A.

    2015-11-01

    Poka Yoke is a quality management method aimed at preventing faults from arising during production processes; it deals with “fail-safing” or “mistake-proofing”. The Poka Yoke concept was created and developed by Shigeo Shingo for the Toyota Production System, and it is used in many fields, especially in monitoring production processes. In many cases, identifying faults in a production process costs more than the disposal of the defective item. Usually, Poka Yoke solutions are based on multiple sensors that identify nonconformities, which means placing additional mechanical and electronic equipment on the production line. As a consequence, and because the method itself is invasive and affects the production process, the cost of diagnostics increases, and the machines on which a Poka Yoke system is implemented become bulkier and more sophisticated. In this paper we propose a Poka Yoke solution based on image analysis and fault identification. The solution consists of a module for image acquisition, mid-level processing, and an object recognition module using an associative memory (Hopfield network type). All modules are integrated into an embedded system with an AD (analog-to-digital) converter and a Zynq 7000 device (22 nm technology).
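
    The recognition module is described as an associative memory of the Hopfield type. A minimal Hopfield recall sketch, purely illustrative and not the authors' implementation, looks like this:

```python
import numpy as np

class HopfieldMemory:
    """Minimal binary (+/-1) Hopfield associative memory (illustrative sketch)."""
    def __init__(self, patterns):
        p = np.asarray(patterns, dtype=float)          # shape (n_patterns, n_units)
        self.W = p.T @ p / p.shape[1]                  # Hebbian outer-product rule
        np.fill_diagonal(self.W, 0.0)                  # no self-connections

    def recall(self, probe, steps=20):
        s = np.asarray(probe, dtype=float).copy()
        for _ in range(steps):                         # synchronous updates
            s = np.sign(self.W @ s)
            s[s == 0] = 1.0
        return s

# Store two reference patterns and recall one of them from a corrupted probe.
rng = np.random.default_rng(0)
patterns = np.sign(rng.standard_normal((2, 64)))
mem = HopfieldMemory(patterns)
probe = patterns[0].copy()
flip = rng.choice(64, size=8, replace=False)           # corrupt 8 of 64 units
probe[flip] *= -1
print(np.array_equal(mem.recall(probe), patterns[0]))  # usually True for small corruption
```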

  11. A primitive-based 3D object recognition system

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.

    1988-01-01

    An intermediate-level knowledge-based system for decomposing segmented data into three-dimensional primitives was developed to create an approximate three-dimensional description of the real world scene from a single two-dimensional perspective view. A knowledge-based approach was also developed for high-level primitive-based matching of three-dimensional objects. Both the intermediate-level decomposition and the high-level interpretation are based on the structural and relational matching; moreover, they are implemented in a frame-based environment.

  12. Vision-based object detection and recognition system for intelligent vehicles

    NASA Astrophysics Data System (ADS)

    Ran, Bin; Liu, Henry X.; Martono, Wilfung

    1999-01-01

    Recently, a proactive crash mitigation system has been proposed to enhance the crash avoidance and survivability of Intelligent Vehicles. An accurate object detection and recognition system is a prerequisite for a proactive crash mitigation system, as system component deployment algorithms rely on accurate hazard detection, recognition, and tracking information. In this paper, we present a vision-based approach to detect and recognize vehicles and traffic signs, obtain their information, and track multiple objects by using a sequence of color images taken from a moving vehicle. The entire system consists of two sub-systems: the vehicle detection and recognition sub-system and the traffic sign detection and recognition sub-system. Both sub-systems consist of four models: an object detection model, an object recognition model, an object information model, and an object tracking model. In order to detect potential objects on the road, several features of the objects are investigated, including the symmetrical shape and aspect ratio of a vehicle and the color and shape information of the signs. A two-layer neural network is trained to recognize different types of vehicles, and a parameterized traffic sign model is established in the process of recognizing a sign. Tracking is accomplished by combining the analysis of single image frames with the analysis of consecutive image frames. The single-image-frame analysis is performed once every ten full-size images. The information model obtains information related to the object, such as time to collision for the object vehicle and relative distance from the traffic signs. Experimental results demonstrate a robust and accurate system for real-time object detection and recognition over thousands of image frames.
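
    The information model is said to report time to collision for the object vehicle. One common monocular approximation, which may differ from the paper's formulation, estimates it from the apparent growth of the detected vehicle between frames:

```python
def time_to_collision(prev_width_px, curr_width_px, dt):
    """Estimate time-to-collision from the apparent growth of a detected
    vehicle's image width between two frames (a common monocular TTC
    approximation; the paper's exact formulation is not given in the abstract)."""
    expansion_rate = (curr_width_px - prev_width_px) / (curr_width_px * dt)
    if expansion_rate <= 0:
        return float("inf")          # not closing in on the object
    return 1.0 / expansion_rate

# Example: width grows from 40 to 44 px over 0.1 s -> TTC of roughly 1.1 s.
print(round(time_to_collision(40, 44, 0.1), 2))
```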

  13. Object oriented image analysis based on multi-agent recognition system

    NASA Astrophysics Data System (ADS)

    Tabib Mahmoudi, Fatemeh; Samadzadegan, Farhad; Reinartz, Peter

    2013-04-01

    In this paper, the capabilities of multi-agent systems are used to solve object recognition difficulties in complex urban areas based on the characteristics of WorldView-2 satellite imagery and a digital surface model (DSM). The proposed methodology has three main steps: pre-processing of the dataset, object-based image analysis, and multi-agent object recognition. Classified regions obtained from object-based image analysis are used as input to the proposed multi-agent system in order to modify and improve the results. In the first operational level of the proposed multi-agent system, various kinds of object recognition agents modify the initial classified regions based on their spectral, textural and 3D structural knowledge. Then, in the second operational level, 2D structural knowledge and contextual relations are used by agents for reasoning and modification. The capabilities of the proposed object recognition methodology are evaluated on WorldView-2 imagery over Rio de Janeiro (Brazil) collected in January 2010. According to the results of the object-based image analysis process, contextual relations and structural descriptors have high potential to resolve general difficulties of object recognition. Using the knowledge-based reasoning and cooperative capabilities of agents in the proposed multi-agent system, most of the remaining difficulties are reduced and the accuracy of the object-based image analysis results is improved by about three percent.

  14. Neuropeptide S interacts with the basolateral amygdala noradrenergic system in facilitating object recognition memory consolidation.

    PubMed

    Han, Ren-Wen; Xu, Hong-Jiao; Zhang, Rui-San; Wang, Pei; Chang, Min; Peng, Ya-Li; Deng, Ke-Yu; Wang, Rui

    2014-01-01

    The noradrenergic activity in the basolateral amygdala (BLA) was reported to be involved in the regulation of object recognition memory. As the BLA expresses a high density of receptors for Neuropeptide S (NPS), we investigated whether the BLA is involved in mediating NPS's effects on object recognition memory consolidation and whether such effects require noradrenergic activity. Intracerebroventricular infusion of NPS (1 nmol) post-training facilitated 24-h memory in a mouse novel object recognition task. The memory-enhancing effect of NPS could be blocked by the β-adrenoceptor antagonist propranolol. Furthermore, post-training intra-BLA infusions of NPS (0.5 nmol/side) improved 24-h memory for objects, which was impaired by co-administration of propranolol (0.5 μg/side). Taken together, these results indicate that NPS interacts with the BLA noradrenergic system in improving object recognition memory during consolidation.

  15. Toward a unified model of face and object recognition in the human visual system

    PubMed Central

    Wallis, Guy

    2013-01-01

    Our understanding of the mechanisms and neural substrates underlying visual recognition has made considerable progress over the past 30 years. During this period, accumulating evidence has led many scientists to conclude that objects and faces are recognised in fundamentally distinct ways, and in fundamentally distinct cortical areas. In the psychological literature, in particular, this dissociation has led to a palpable disconnect between theories of how we process and represent the two classes of object. This paper follows a trend in part of the recognition literature to try to reconcile what we know about these two forms of recognition by considering the effects of learning. Taking a widely accepted, self-organizing model of object recognition, this paper explains how such a system is affected by repeated exposure to specific stimulus classes. In so doing, it explains how many aspects of recognition generally regarded as unusual to faces (holistic processing, configural processing, sensitivity to inversion, the other-race effect, the prototype effect, etc.) are emergent properties of category-specific learning within such a system. Overall, the paper describes how a single model of recognition learning can and does produce the seemingly very different types of representation associated with faces and objects. PMID:23966963

  16. An Event-Based Neurobiological Recognition System with Orientation Detector for Objects in Multiple Orientations

    PubMed Central

    Wang, Hanyu; Xu, Jiangtao; Gao, Zhiyuan; Lu, Chengye; Yao, Suying; Ma, Jianguo

    2016-01-01

    In this paper, a new multiple-orientation, event-based neurobiological recognition system that integrates recognition and tracking functions is proposed for asynchronous address-event representation (AER) image sensors. The system has been enriched to recognize objects in multiple orientations using only training samples that move in a single orientation. The system extracts multi-scale and multi-orientation line features inspired by models of the primate visual cortex. An orientation detector based on a modified Gaussian blob tracking algorithm is introduced for object tracking and orientation detection. The orientation detector and the feature extraction block work simultaneously, without any increase in categorization time. An address lookup table (address LUT) is also presented to adjust the feature maps by address mapping and reordering before they are categorized by the trained spiking neural network. The recognition system is evaluated with the MNIST dataset, which has played an important role in the development of computer vision, and the accuracy is increased owing to the use of both ON and OFF events. AER data acquired by a dynamic vision sensor (DVS), such as moving digits, poker cards, and vehicles, are also tested on the system. The experimental results show that the proposed system can realize event-based multi-orientation recognition. The work presented in this paper makes a number of contributions to event-based vision processing for multi-orientation object recognition. It develops a new tracking-recognition architecture for the feedforward categorization system and an address-reordering approach for classifying multi-orientation objects using event-based data, and it provides a new way to recognize objects in multiple orientations using samples from only a single orientation. PMID:27867346
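
    The orientation detector above is based on a modified Gaussian blob tracker whose details are not given here. As a rough, hypothetical stand-in, the sketch below estimates a blob's orientation from a batch of AER event addresses via the principal axis of their spatial covariance:

```python
import numpy as np

def blob_orientation(event_xy):
    """Estimate a blob's orientation from a batch of AER event addresses by
    taking the principal axis of their spatial covariance (a simple stand-in
    for the paper's modified Gaussian blob tracker, not its actual update)."""
    xy = np.asarray(event_xy, dtype=float)
    cov = np.cov(xy, rowvar=False)                 # 2x2 spatial covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]         # principal axis of the blob
    return np.degrees(np.arctan2(major[1], major[0])) % 180.0

# Example: events spread along a line tilted by about 30 degrees.
rng = np.random.default_rng(1)
t = rng.uniform(-10, 10, 500)
theta = np.radians(30)
events = np.c_[t * np.cos(theta), t * np.sin(theta)] + rng.normal(0, 0.3, (500, 2))
print(round(blob_orientation(events)))             # prints approximately 30
```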

  17. An Event-Based Neurobiological Recognition System with Orientation Detector for Objects in Multiple Orientations.

    PubMed

    Wang, Hanyu; Xu, Jiangtao; Gao, Zhiyuan; Lu, Chengye; Yao, Suying; Ma, Jianguo

    2016-01-01

    In this paper, a new multiple-orientation, event-based neurobiological recognition system that integrates recognition and tracking functions is proposed for asynchronous address-event representation (AER) image sensors. The system has been enriched to recognize objects in multiple orientations using only training samples that move in a single orientation. The system extracts multi-scale and multi-orientation line features inspired by models of the primate visual cortex. An orientation detector based on a modified Gaussian blob tracking algorithm is introduced for object tracking and orientation detection. The orientation detector and the feature extraction block work simultaneously, without any increase in categorization time. An address lookup table (address LUT) is also presented to adjust the feature maps by address mapping and reordering before they are categorized by the trained spiking neural network. The recognition system is evaluated with the MNIST dataset, which has played an important role in the development of computer vision, and the accuracy is increased owing to the use of both ON and OFF events. AER data acquired by a dynamic vision sensor (DVS), such as moving digits, poker cards, and vehicles, are also tested on the system. The experimental results show that the proposed system can realize event-based multi-orientation recognition. The work presented in this paper makes a number of contributions to event-based vision processing for multi-orientation object recognition. It develops a new tracking-recognition architecture for the feedforward categorization system and an address-reordering approach for classifying multi-orientation objects using event-based data, and it provides a new way to recognize objects in multiple orientations using samples from only a single orientation.

  18. Performance of a neural-network-based 3-D object recognition system

    NASA Astrophysics Data System (ADS)

    Rak, Steven J.; Kolodzy, Paul J.

    1991-08-01

    Object recognition in laser radar sensor imagery is a challenging application of neural networks. The task involves recognition of objects at a variety of distances and aspects with significant levels of sensor noise. These variables are related to sensor parameters such as sensor signal strength and angular resolution, as well as object range and viewing aspect. The effect of these parameters on a fixed recognition system based on log-polar mapped features and an unsupervised neural network classifier is investigated. This work is an attempt to quantify the design parameters of a laser radar measurement system with respect to classifying and/or identifying objects by the shape of their silhouettes. Experiments with vehicle silhouettes rotated through a 90-deg view angle from broadside to head-on ('out-of-plane' rotation) have been used to quantify the performance of a log-polar-map/neural-network-based 3-D object recognition system. These experiments investigated several key issues such as category stability, category memory compression, image fidelity, and viewing aspect. Initial results indicate a compression from 720 possible categories (8 vehicles X 90 out-of-plane rotations) to a classifier memory with approximately 30 stable recognition categories. These results parallel the human experience of studying an object from several viewing angles yet recognizing it through a wide range of viewing angles. Results are presented illustrating category formation for an eight-vehicle dataset as a function of several sensor parameters. These include: (1) sensor noise, as a function of carrier-to-noise ratio; (2) pixels on the vehicle, related to angular resolution and target range; and (3) viewing aspect, as related to sensor-to-platform depression angle. This work contributes to the formation of a three-dimensional object recognition system.
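
    The log-polar mapped features mentioned above rely on the fact that a log-polar resampling turns scale changes and in-plane rotations into shifts of the feature map. A minimal sketch of such a mapping, with grid sizes chosen arbitrarily and no claim to match the authors' implementation:

```python
import numpy as np

def log_polar_map(image, center, n_rings=32, n_wedges=64):
    """Resample an image onto a log-polar grid around `center`. Scale changes
    and in-plane rotations of a silhouette become translations of this map
    (illustrative sketch; the paper's exact mapping is not specified)."""
    h, w = image.shape
    cy, cx = center
    r_max = np.hypot(max(cy, h - cy), max(cx, w - cx))
    rings = np.exp(np.linspace(0.0, np.log(r_max), n_rings))          # log-spaced radii
    wedges = np.linspace(0.0, 2.0 * np.pi, n_wedges, endpoint=False)  # angles
    rr, aa = np.meshgrid(rings, wedges, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, w - 1)
    return image[ys, xs]                                              # (n_rings, n_wedges)

# Example: map a 128x128 silhouette around its centroid.
img = np.zeros((128, 128))
img[40:90, 50:80] = 1.0
cy, cx = np.argwhere(img > 0).mean(axis=0)
print(log_polar_map(img, (cy, cx)).shape)   # (32, 64)
```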

  19. 3-D Object Recognition Using Combined Overhead And Robot Eye-In-Hand Vision System

    NASA Astrophysics Data System (ADS)

    Luo, Ren C.; Lin, Min-Hsiung

    1987-10-01

    A new approach for recognizing 3-D objects using a combined overhead and eye-in-hand vision system is presented. A novel eye-in-hand vision system using a fiber-optic image array is described. The significance of this approach is the fast and accurate recognition of 3-D object information compared to traditional stereo image processing. For the recognition of 3-D objects, the overhead vision system takes a 2-D top-view image and the eye-in-hand vision system takes side-view images orthogonal to the top-view image plane. We have developed and demonstrated a unique approach to integrate this 2-D information into a 3-D representation, based on a new approach called "3-D Volumetric Description from 2-D Orthogonal Projections". The Unimate PUMA 560 and a TRAPIX 5500 real-time image processor have been used to test the success of the entire system.
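
    As a rough illustration of how a top view and an orthogonal side view can be fused into a volumetric description, the sketch below intersects two binary silhouettes into a voxel volume. It is a generic silhouette-intersection sketch under assumed axis conventions, not the authors' "3-D Volumetric Description from 2-D Orthogonal Projections" algorithm:

```python
import numpy as np

def carve_volume(top_silhouette, side_silhouette):
    """Build a coarse volumetric description from two orthogonal binary
    silhouettes: a top view (x-y plane) and a side view (x-z plane). A voxel
    is kept only if it projects inside both silhouettes."""
    top = np.asarray(top_silhouette, dtype=bool)    # shape (nx, ny)
    side = np.asarray(side_silhouette, dtype=bool)  # shape (nx, nz)
    # Broadcast so that volume[x, y, z] = top[x, y] AND side[x, z]
    return top[:, :, None] & side[:, None, :]

# Example: a box seen from the top and from the side carves a rectangular solid.
top = np.zeros((10, 10), dtype=bool);  top[2:8, 3:7] = True
side = np.zeros((10, 10), dtype=bool); side[2:8, 1:5] = True
volume = carve_volume(top, side)
print(volume.shape, int(volume.sum()))  # (10, 10, 10) 96 -> 6 * 4 * 4 voxels
```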

  20. Real-time optical multiple object recognition and tracking system and method

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin (Inventor); Liu, Hua Kuang (Inventor)

    1987-01-01

    The invention relates to an apparatus and associated methods for the optical recognition and tracking of multiple objects in real time. Multiple point spatial filters are employed that pre-define the objects to be recognized at run-time. The system takes the basic technology of a Vander Lugt filter and adds a hololens. The technique replaces time-, space- and cost-intensive digital techniques. In place of multiple objects, the system can also recognize multiple orientations of a single object. This latter capability has potential for space applications where space and weight are at a premium.

  1. Real-time optical multiple object recognition and tracking system and method

    NASA Astrophysics Data System (ADS)

    Chao, Tien-Hsin; Liu, Hua Kuang

    1987-12-01

    The invention relates to an apparatus and associated methods for the optical recognition and tracking of multiple objects in real time. Multiple point spatial filters are employed that pre-define the objects to be recognized at run-time. The system takes the basic technology of a Vander Lugt filter and adds a hololens. The technique replaces time-, space- and cost-intensive digital techniques. In place of multiple objects, the system can also recognize multiple orientations of a single object. This latter capability has potential for space applications where space and weight are at a premium.

  2. A knowledge-based object recognition system for applications in the space station

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.

    1988-01-01

    A knowledge-based three-dimensional (3D) object recognition system is being developed. The system uses primitive-based hierarchical relational and structural matching for the recognition of 3D objects in the two-dimensional (2D) image for interpretation of the 3D scene. At present, the pre-processing, low-level preliminary segmentation, rule-based segmentation, and feature extraction are completed. The data structure of the primitive viewing knowledge-base (PVKB) is also completed. Algorithms and programs based on attribute-tree matching for decomposing the segmented data into valid primitives were developed. Frame-based structural and relational descriptions of some objects were created and stored in a knowledge-base. This knowledge-base of frame-based descriptions was developed on the MICROVAX-AI microcomputer in a LISP environment. Simulated 3D scenes of simple non-overlapping objects, as well as real camera images of 3D objects of low complexity, have been successfully interpreted.

  3. An Intelligent Systems Approach to Automated Object Recognition: A Preliminary Study

    USGS Publications Warehouse

    Maddox, Brian G.; Swadley, Casey L.

    2002-01-01

    Attempts at fully automated object recognition systems have met with varying levels of success over the years. However, none of the systems have achieved high enough accuracy rates to be run unattended. One of the reasons for this may be that they are designed from the computer's point of view and rely mainly on image-processing methods. A better solution to this problem may be to make use of modern advances in computational intelligence and distributed processing to try to mimic how the human brain is thought to recognize objects. As humans combine cognitive processes with detection techniques, such a system would combine traditional image-processing techniques with computer-based intelligence to determine the identity of various objects in a scene.

  4. Real-time optical multiple object recognition and tracking system and method

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin (Inventor); Liu, Hua-Kuang (Inventor)

    1990-01-01

    System for optically recognizing and tracking a plurality of objects within a field of vision. Laser (46) produces a coherent beam (48). Beam splitter (24) splits the beam into object (26) and reference (28) beams. Beam expanders (50) and collimators (52) transform the beams (26, 28) into coherent collimated light beams (26', 28'). A two-dimensional SLM (54), disposed in the object beam (26'), modulates the object beam with optical information as a function of signals from a first camera (16) which develops X and Y signals reflecting the contents of its field of vision. A hololens (38), positioned in the object beam (26') subsequent to the modulator (54), focuses the object beam at a plurality of focal points (42). A planar transparency-forming film (32), disposed with the focal points on an exposable surface, forms a multiple position interference filter (62) upon exposure of the surface and development processing of the film (32). A reflector (53) directing the reference beam (28') onto the film (32), exposes the surface, with images focused by the hololens (38), to form interference patterns on the surface. There is apparatus (16', 64) for sensing and indicating light passage through respective ones of the positions of the filter (62), whereby recognition of objects corresponding to respective ones of the positions of the filter (62) is effected. For tracking, apparatus (64) focuses light passing through the filter (62) onto a matrix of CCD's in a second camera (16') to form a two-dimensional display of the recognized objects.

  5. An optimal sensing strategy of a proximity sensor system for recognition and localization of polyhedral objects

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan; Hahn, Hern S.

    1990-01-01

    An algorithm is presented for the recognition and localization of three-dimensional polyhedral objects based on an optical proximity sensor system capable of measuring the depth and orientation of a local area of an object surface. Emphasis is given to the determination of an optimal sensor trajectory, or optimal probing, for efficient discrimination among all the possible interpretations. The determination of an optimal sensor trajectory for the next probing consists of the selection of optimal beam orientations based on the surface normal vector distribution of the multiple interpretation image (MII) and the selection of an optimal probing plane, obtained by projecting the MII onto the projection plane perpendicular to a selected beam orientation and deriving the optimal path on that projection plane. The selection of the optimal beam orientation and probing plane is based on a measure of the discrimination power of a cluster of surfaces of an MII. Simulation results are shown.

  6. Coordinate Transformations in Object Recognition

    ERIC Educational Resources Information Center

    Graf, Markus

    2006-01-01

    A basic problem of visual perception is how human beings recognize objects after spatial transformations. Three central classes of findings have to be accounted for: (a) Recognition performance varies systematically with orientation, size, and position; (b) recognition latencies are sequentially additive, suggesting analogue transformation…

  7. Object recognition memory in zebrafish.

    PubMed

    May, Zacnicte; Morrill, Adam; Holcombe, Adam; Johnston, Travis; Gallup, Joshua; Fouad, Karim; Schalomon, Melike; Hamilton, Trevor James

    2016-01-01

    The novel object recognition, or novel-object preference (NOP), test is employed to assess recognition memory in a variety of organisms. The subject is exposed to two identical objects; then, after a delay, it is placed back in the original environment containing one of the original objects and a novel object. If the subject spends more time exploring one object, this can be interpreted as memory retention. To date, this test has not been fully explored in zebrafish (Danio rerio). Zebrafish possess recognition memory for simple 2- and 3-dimensional geometrical shapes, yet it is unknown whether this translates to complex 3-dimensional objects. In this study we evaluated recognition memory in zebrafish using complex objects of different sizes. Contrary to rodents, zebrafish preferentially explored familiar over novel objects. The familiarity preference disappeared after delays of 5 min. Leopard danios, another strain of D. rerio, also preferred the familiar object after a 1-min delay. Object preference could be re-established in zebra danios by administration of nicotine tartrate salt (50 mg/L) prior to stimulus presentation, suggesting a memory-enhancing effect of nicotine. Additionally, exploration biases were present only when the objects were of intermediate size (2 × 5 cm). Our results demonstrate that zebra and leopard danios have recognition memory, and that low nicotine doses can improve this memory type in zebra danios. However, the exploration biases from which memory is inferred depend on object size. These findings suggest zebrafish ecology might influence object preference, as zebrafish neophobia could reflect natural anti-predatory behaviour.
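
    Memory in the NOP test is inferred from relative exploration times. A simple preference index of the kind commonly used for such data (the paper's exact metric may differ) is sketched below:

```python
def preference_index(time_object_a, time_object_b):
    """Proportion of total exploration time spent on object A. Values above
    0.5 indicate a preference for A; in the zebrafish work described above,
    familiarity preference corresponds to A being the familiar object.
    (A common way to quantify NOP exploration; not necessarily the paper's metric.)"""
    total = time_object_a + time_object_b
    return time_object_a / total if total > 0 else 0.5

# Example: 70 s on the familiar object vs 30 s on the novel one -> index of 0.7.
print(preference_index(70.0, 30.0))
```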

  8. Recurrent Processing during Object Recognition

    PubMed Central

    O’Reilly, Randall C.; Wyatte, Dean; Herd, Seth; Mingus, Brian; Jilk, David J.

    2013-01-01

    How does the brain learn to recognize objects visually, and perform this difficult feat robustly in the face of many sources of ambiguity and variability? We present a computational model based on the biology of the relevant visual pathways that learns to reliably recognize 100 different object categories in the face of naturally occurring variability in location, rotation, size, and lighting. The model exhibits robustness to highly ambiguous, partially occluded inputs. Both the unified, biologically plausible learning mechanism and the robustness to occlusion derive from the role that recurrent connectivity and recurrent processing mechanisms play in the model. Furthermore, this interaction of recurrent connectivity and learning predicts that high-level visual representations should be shaped by error signals from nearby, associated brain areas over the course of visual learning. Consistent with this prediction, we show how semantic knowledge about object categories changes the nature of their learned visual representations, as well as how this representational shift supports the mapping between perceptual and conceptual knowledge. Altogether, these findings support the potential importance of ongoing recurrent processing throughout the brain’s visual system and suggest ways in which object recognition can be understood in terms of interactions within and between processes over time. PMID:23554596

  9. Acoustic signature recognition technique for Human-Object Interactions (HOI) in persistent surveillance systems

    NASA Astrophysics Data System (ADS)

    Alkilani, Amjad; Shirkhodaie, Amir

    2013-05-01

    Handling, manipulation, and placement of objects in the environment, hereafter called Human-Object Interaction (HOI), generate sounds. Such sounds are readily identifiable by human hearing. However, in the presence of background environment noise, recognition of minute HOI sounds is challenging, though vital for the improvement of multi-modality sensor data fusion in Persistent Surveillance Systems (PSS). Identification of HOI sound signatures can be used as a precursor to detection of pertinent threats that other sensor modalities may otherwise miss. In this paper, we present a robust method for detection and classification of HOI events via clustering of features extracted from training HOI acoustic sound waves. In this approach, salient sound events are first identified and segmented from the background via a sound energy tracking method. After this segmentation, the frequency-spectral pattern of each sound event is modeled and its features are extracted to form a feature vector for training. To reduce the dimensionality of the training feature space, a Principal Component Analysis (PCA) technique is employed; to expedite classification of test feature vectors, kd-tree and Random Forest classifiers are trained on the training sound waves. Each classifier employs a different similarity-distance matching technique for classification. The performance of the classifiers is compared on a batch of training HOI acoustic signatures. Furthermore, to facilitate semantic annotation of acoustic sound events, a scheme based on Transducer Markup Language (TML) is proposed. The results demonstrate that the proposed approach is both reliable and effective, and can be extended to future PSS applications.
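
    The classification back-end described above (PCA for dimensionality reduction followed by kd-tree and Random Forest classifiers) can be sketched with standard tools on synthetic feature vectors; the acoustic feature extraction itself is not shown, and the parameters below are arbitrary:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Illustrative stand-in for the HOI pipeline: spectral feature vectors are
# reduced with PCA and classified with a kd-tree nearest-neighbour classifier
# and a Random Forest. Synthetic data only.
rng = np.random.default_rng(0)
n_classes, n_per_class, n_features = 4, 50, 128
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

knn = make_pipeline(PCA(n_components=16),
                    KNeighborsClassifier(n_neighbors=3, algorithm="kd_tree"))
rf = make_pipeline(PCA(n_components=16),
                   RandomForestClassifier(n_estimators=100, random_state=0))
for name, model in [("kd-tree kNN", knn), ("Random Forest", rf)]:
    model.fit(X, y)
    print(name, "training accuracy:", model.score(X, y))
```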

  10. Real-time unconstrained object recognition: a processing pipeline based on the mammalian visual system.

    PubMed

    Aguilar, Mario; Peot, Mark A; Zhou, Jiangying; Simons, Stephen; Liao, Yuwei; Metwalli, Nader; Anderson, Mark B

    2012-03-01

    The mammalian visual system is still the gold standard for recognition accuracy, flexibility, efficiency, and speed. Ongoing advances in our understanding of function and mechanisms in the visual system can now be leveraged to pursue the design of computer vision architectures that will revolutionize the state of the art in computer vision.

  11. A Model-Based System For Object Recognition In Aerial Scenes

    NASA Astrophysics Data System (ADS)

    Cullen, M. F.; Hord, R. M.; Miller, S. F.

    1987-03-01

    Preliminary results of a system that uses model descriptions of objects to predict and match features derived from aerial images are presented. The system is organized into several phases: 1) processing of image scenes to obtain image primitives, 2) goal-oriented sorting of primitives into classes of related features, 3) prediction of the location of object model features in the image, and 4) matching image features to the model-predicted features. The matching approach is centered upon a compatibility figure of merit between a set of image features and model features chosen to direct the search. The search process utilizes an iterative hypothesis generation and verification cycle. A "search matrix" is constructed from image features and model features according to a first approximation of compatibility based upon orientation. Currently, linear features are used as primitives. Input to the matching algorithm is in the form of line segments extracted from an image scene via edge operators and a Hough transform technique for grouping. Additional processing is utilized to derive closed boundaries and complete edge descriptions. Line segments are then sorted into specific classes such that, on a higher level, a priori knowledge about a particular scene can be used to control the priority of line segments in the search process. Additional knowledge about the object model under consideration is utilized to construct the search matrix with the classes of line segments most likely to contain the model description. It is shown that these techniques result in a reduction in the size of the object recognition search space and hence in the time to locate the object in the image. The current system is implemented on a Symbolics Lisp™ machine. While experimentation continues, we have rewritten and tested the search process and several image processing functions for parallel implementation on a Connection Machine™ computer. It is shown that several orders of magnitude faster
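
    Line-segment extraction via edge operators and a Hough transform is a standard step; a minimal Hough accumulator for straight lines, illustrative only and unrelated to the paper's Lisp implementation, is sketched below:

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180, top_k=2):
    """Minimal Hough transform for straight lines: vote each edge pixel into a
    (rho, theta) accumulator and return the strongest bins."""
    ys, xs = np.nonzero(edge_mask)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*edge_mask.shape)))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1            # one vote per (rho, theta) pair
    peaks = np.argsort(acc, axis=None)[::-1][:top_k]  # strongest accumulator bins
    rho_idx, theta_idx = np.unravel_index(peaks, acc.shape)
    return [(int(r) - diag, float(np.degrees(thetas[t])))
            for r, t in zip(rho_idx, theta_idx)]

# Example: a horizontal and a vertical edge produce peaks near theta = 90 and 0 deg.
img = np.zeros((64, 64), dtype=bool)
img[20, :] = True    # horizontal line y = 20
img[:, 40] = True    # vertical line x = 40
print(hough_lines(img))   # the two peaks (20, 90.0) and (40, 0.0), in either order
```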

  12. Visual object recognition and tracking

    NASA Technical Reports Server (NTRS)

    Chang, Chu-Yin (Inventor); English, James D. (Inventor); Tardella, Neil M. (Inventor)

    2010-01-01

    This invention describes a method for identifying and tracking an object from two-dimensional data pictorially representing said object by an object-tracking system through processing said two-dimensional data using at least one tracker-identifier belonging to the object-tracking system for providing an output signal containing: a) a type of the object, and/or b) a position or an orientation of the object in three-dimensions, and/or c) an articulation or a shape change of said object in said three dimensions.

  13. Systemic and intra-rhinal-cortical 17-β estradiol administration modulate object-recognition memory in ovariectomized female rats.

    PubMed

    Gervais, Nicole J; Jacob, Sofia; Brake, Wayne G; Mumby, Dave G

    2013-09-01

    Previous studies using the novel-object-preference (NOP) test suggest that estrogen (E) replacement in ovariectomized rodents can lead to enhanced novelty preference. The present study aimed to determine: 1) whether the effect of E on NOP performance is the result of enhanced preference for novelty, per se, or facilitated object-recognition memory, and 2) whether E affects NOP performance through actions it has within the perirhinal cortex/entorhinal cortex region (PRh/EC). Ovariectomized rats received either systemic chronic low 17-β estradiol (E2; ~20 pg/ml serum) replacement alone or in combination with systemic acute high administration of estradiol benzoate (EB; 10 μg), or in combination with intracranial infusions of E2 (244.8 pg/μl) or vehicle into the PRh/EC. For one of the intracranial experiments, E2 was infused either immediately before, immediately after, or 2 h following the familiarization (i.e., learning) phase of the NOP test. In light of recent evidence that raises questions about the internal validity of the NOP test as a method of indexing object-recognition memory, we also tested rats on a delayed nonmatch-to-sample (DNMS) task of object recognition following systemic and intra-PRh/EC infusions of E2. Both systemic acute and intra-PRh/EC infusions of E enhanced novelty preference, but only when administered either before or immediately following familiarization. In contrast, high E (both systemic acute and intra-PRh/EC) impaired performance on the DNMS task. The findings suggest that while E2 in the PRh/EC can enhance novelty preference, this effect is probably not due to an improvement in object-recognition abilities.

  14. A fast 3-D object recognition algorithm for the vision system of a special-purpose dexterous manipulator

    NASA Technical Reports Server (NTRS)

    Hung, Stephen H. Y.

    1989-01-01

    A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm will speed up the processing by eliminating the low level processing whenever possible. It may identify the object, reject a set of bad data in the early stage, or create a better environment for a more powerful algorithm to carry the work further.

  15. Probabilistic view clustering in object recognition

    NASA Astrophysics Data System (ADS)

    Camps, Octavia I.; Christoffel, Douglas W.; Pathak, Anjali

    1992-11-01

    To recognize objects and determine their poses in a scene, we need to find correspondences between the features extracted from the image and those of the object models. Models are commonly represented by a few characteristic views of the object, each describing a group of views with similar properties. Most feature-based matching schemes assume that all the features that are potentially visible in a view will appear with equal probability, and the resulting matching algorithms have to allow for 'errors' without really understanding what they mean. PREMIO is an object recognition system that uses CAD models of 3D objects and knowledge of surface reflectance properties, light sources, sensor characteristics, and feature detector algorithms to estimate the probability of the features being detectable and correctly matched. The purpose of this paper is to describe the predictions generated by PREMIO, how they are combined into a single probabilistic model, and illustrative examples showing its use in object recognition.

  16. Object recognition by artificial cortical maps.

    PubMed

    Plebe, Alessio; Domenella, Rosaria Grazia

    2007-09-01

    Object recognition is one of the most important functions of the human visual system, yet one of the least understood, despite the fact that vision is certainly the most studied function of the brain. We understand relatively well how several processes that support recognition capabilities take place in the cortical visual areas, such as orientation discrimination and color constancy. This paper proposes a model of the development of object recognition capability, based on two main theoretical principles. The first is that recognition does not imply any sort of geometrical reconstruction; it is instead fully driven by the two-dimensional view captured by the retina. The second assumption is that the processing functions involved in recognition are not genetically determined or hardwired in neural circuits, but are the result of interactions between epigenetic influences and basic neural plasticity mechanisms. The model is organized in modules roughly related to the main biological visual areas, and is implemented mainly using the LISSOM architecture, a recent neural self-organizing map model that simulates the effects of intercortical lateral connections. This paper shows how recognition capabilities, similar to those found in the brain's ventral visual areas, can develop spontaneously through exposure to natural images in an artificial cortical model.

  17. Statistical Model For Pseudo-Moving Objects Recognition In Video Surveillance Systems

    NASA Astrophysics Data System (ADS)

    Vishnyakov, B.; Egorov, A.; Sidyakin, S.; Malin, I.; Vizilter, Y.

    2014-08-01

    This paper considers a statistical approach to identifying pseudo-moving (false) objects in video surveillance systems by constructing systems of hypotheses with criteria based on statistical behavioral particularities. The obtained results are integrated in two ways: using Bayes' theorem or logistic regression. FAR-FRR curves are plotted for each system of hypotheses and also for the decision rule. The results of the proposed methods are obtained on test video databases.

  18. The uncrowded window of object recognition

    PubMed Central

    Pelli, Denis G; Tillman, Katharine A

    2009-01-01

    It is now emerging that vision is usually limited by object spacing rather than size. The visual system recognizes an object by detecting and then combining its features. ‘Crowding’ occurs when objects are too close together and features from several objects are combined into a jumbled percept. Here, we review the explosion of studies on crowding—in grating discrimination, letter and face recognition, visual search, selective attention, and reading—and find a universal principle, the Bouma law. The critical spacing required to prevent crowding is equal for all objects, although the effect is weaker between dissimilar objects. Furthermore, critical spacing at the cortex is independent of object position, and critical spacing at the visual field is proportional to object distance from fixation. The region where object spacing exceeds critical spacing is the ‘uncrowded window’. Observers cannot recognize objects outside of this window and its size limits the speed of reading and search. PMID:18828191
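
    The Bouma law summarized above has a simple quantitative form: the critical centre-to-centre spacing grows in proportion to eccentricity. The sketch below uses a proportionality constant of roughly 0.5, a commonly cited value assumed here rather than taken from this paper:

```python
def critical_spacing_deg(eccentricity_deg, bouma_constant=0.5):
    """Bouma's law as summarized above: the centre-to-centre spacing needed to
    escape crowding grows in proportion to eccentricity. The constant of
    roughly 0.5 is a commonly cited value, assumed here."""
    return bouma_constant * eccentricity_deg

# Example: at 10 degrees eccentricity, flankers closer than ~5 degrees crowd the target.
print(critical_spacing_deg(10.0))   # 5.0
```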

  19. Relations among Early Object Recognition Skills: Objects and Letters

    ERIC Educational Resources Information Center

    Augustine, Elaine; Jones, Susan S.; Smith, Linda B.; Longfield, Erica

    2015-01-01

    Human visual object recognition is multifaceted and comprised of several domains of expertise. Developmental relations between young children's letter recognition and their 3-dimensional object recognition abilities are implicated on several grounds but have received little research attention. Here, we ask how preschoolers' success in recognizing…

  20. REKRIATE: A Knowledge Representation System for Object Recognition and Scene Interpretation

    NASA Astrophysics Data System (ADS)

    Meystel, Alexander M.; Bhasin, Sanjay; Chen, X.

    1990-02-01

    What humans actually observe and how they comprehend this information is complex, owing to Gestalt processes and the interaction of context in predicting the course of thinking and reinforcing one idea while repressing another. How we extract knowledge from the scene, what we actually get from the scene, and what we bring from our own mechanisms of perception are areas separated by a thin, ill-defined line. The purpose of this paper is to present a system for Representing Knowledge and Recognizing and Interpreting Attention Trailed Entities, dubbed REKRIATE. It will be used as a tool for discovering the underlying principles involved in the knowledge representation required for conceptual learning. REKRIATE has some inherited knowledge and is given a vocabulary which is used to form rules for identification of the object. It has various modalities of sensing and the ability to measure the distance between objects in the image as well as the similarity between different images of presumably the same object. All sensations received from the matrix of different sensors are put into an adequate form. The proposed methodology is applicable not only to pictorial or visual world representation, but to any sensing modality. It is based upon two premises: a) the inseparability of all domains of world representation, including the linguistic domain as well as those formed by the various sensor modalities, and b) the representativity of the object at several levels of resolution simultaneously.

  1. Exploiting core knowledge for visual object recognition.

    PubMed

    Schurgin, Mark W; Flombaum, Jonathan I

    2017-03-01

    Humans recognize thousands of objects, and with relative tolerance to variable retinal inputs. The acquisition of this ability is not fully understood, and it remains an area in which artificial systems have yet to surpass people. We sought to investigate the memory process that supports object recognition. Specifically, we investigated the association of inputs that co-occur over short periods of time. We tested the hypothesis that human perception exploits expectations about object kinematics to limit the scope of association to inputs that are likely to have the same token as a source. In several experiments we exposed participants to images of objects, and we then tested recognition sensitivity. Using motion, we manipulated whether successive encounters with an image took place through kinematics that implied the same or a different token as the source of those encounters. Images were injected with noise, or shown at varying orientations, and we included 2 manipulations of motion kinematics. Across all experiments, memory performance was better for images that had been previously encountered with kinematics that implied a single token. A model-based analysis similarly showed greater memory strength when images were shown via kinematics that implied a single token. These results suggest that constraints from physics are built into the mechanisms that support memory about objects. Such constraints, often characterized as 'Core Knowledge', are known to support perception and cognition broadly, even in young infants. But they have never been considered as a mechanism for memory with respect to recognition.

  2. Infant visual attention and object recognition.

    PubMed

    Reynolds, Greg D

    2015-05-15

    This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy.

  3. Three-Dimensional Object Recognition and Registration for Robotic Grasping Systems Using a Modified Viewpoint Feature Histogram.

    PubMed

    Chen, Chin-Sheng; Chen, Po-Chun; Hsu, Chih-Ming

    2016-11-23

    This paper presents a novel 3D feature descriptor for object recognition and six-degree-of-freedom pose identification in mobile manipulation and grasping applications. First, a Microsoft Kinect sensor is used to capture 3D point cloud data. A viewpoint feature histogram (VFH) descriptor for the 3D point cloud data then encodes the geometry and viewpoint, so an object can be simultaneously recognized and registered in a stable pose, and the information is stored in a database. The VFH is robust to a large degree of surface noise and missing depth information, so it is reliable for stereo data. However, pose estimation fails when the object is placed symmetrically to the viewpoint. To overcome this problem, this study proposes a modified viewpoint feature histogram (MVFH) descriptor that consists of two parts: a surface shape component that comprises an extended fast point feature histogram, and an extended viewpoint direction component. The MVFH descriptor characterizes an object's pose and enhances the system's ability to identify objects with mirrored poses. Finally, once the object has been recognized, its pose roughly estimated by the MVFH descriptor, and the result registered against the database, the pose is refined using an iterative closest point (ICP) algorithm. The estimation results demonstrate that the MVFH feature descriptor allows more accurate pose estimation. The experiments also show that the proposed method can be applied in vision-guided robotic grasping systems.
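
    The final refinement step is an iterative closest point (ICP) alignment. A minimal point-to-point ICP sketch, illustrative only and independent of the VFH/MVFH recognition stage, is given below:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=30):
    """Minimal point-to-point ICP: repeatedly match each source point to its
    nearest target point and solve for the rigid transform by SVD (Kabsch).
    Illustrative sketch only, not the paper's implementation."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    tree = cKDTree(tgt)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)                      # nearest-neighbour correspondences
        matched = tgt[idx]
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)       # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                            # optimal rotation
        t = tgt_c - R @ src_c
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Example: recover a known small rotation/translation applied to a random cloud.
rng = np.random.default_rng(0)
cloud = rng.uniform(-1, 1, (200, 3))
angle = np.radians(5)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
target = cloud @ R_true.T + np.array([0.03, -0.02, 0.01])
R_est, t_est = icp(cloud, target)
print(np.allclose(R_est, R_true, atol=1e-2))   # should print True for this small misalignment
```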

  4. From brain synapses to systems for learning and memory: Object recognition, spatial navigation, timed conditioning, and movement control.

    PubMed

    Grossberg, Stephen

    2015-09-24

    This article provides an overview of neural models of synaptic learning and memory whose expression in adaptive behavior depends critically on the circuits and systems in which the synapses are embedded. It reviews Adaptive Resonance Theory, or ART, models that use excitatory matching and match-based learning to achieve fast category learning and whose learned memories are dynamically stabilized by top-down expectations, attentional focusing, and memory search. ART clarifies mechanistic relationships between consciousness, learning, expectation, attention, resonance, and synchrony. ART models are embedded in ARTSCAN architectures that unify processes of invariant object category learning, recognition, spatial and object attention, predictive remapping, and eye movement search, and that clarify how conscious object vision and recognition may fail during perceptual crowding and parietal neglect. The generality of learned categories depends upon a vigilance process that is regulated by acetylcholine via the nucleus basalis. Vigilance can get stuck at too high or too low values, thereby causing learning problems in autism and medial temporal amnesia. Similar synaptic learning laws support qualitatively different behaviors: Invariant object category learning in the inferotemporal cortex; learning of grid cells and place cells in the entorhinal and hippocampal cortices during spatial navigation; and learning of time cells in the entorhinal-hippocampal system during adaptively timed conditioning, including trace conditioning. Spatial and temporal processes through the medial and lateral entorhinal-hippocampal system seem to be carried out with homologous circuit designs. Variations of a shared laminar neocortical circuit design have modeled 3D vision, speech perception, and cognitive working memory and learning. A complementary kind of inhibitory matching and mismatch learning controls movement. This article is part of a Special Issue entitled SI: Brain and Memory.

  5. Three-Dimensional Object Recognition and Registration for Robotic Grasping Systems Using a Modified Viewpoint Feature Histogram

    PubMed Central

    Chen, Chin-Sheng; Chen, Po-Chun; Hsu, Chih-Ming

    2016-01-01

    This paper presents a novel 3D feature descriptor for object recognition and for identifying the pose of an object with six degrees of freedom in mobile manipulation and grasping applications. Firstly, a Microsoft Kinect sensor is used to capture 3D point cloud data. A viewpoint feature histogram (VFH) descriptor for the 3D point cloud data then encodes the geometry and viewpoint, so an object can be simultaneously recognized and registered in a stable pose, and the information is stored in a database. The VFH is robust to a large degree of surface noise and missing depth information, so it is reliable for stereo data. However, pose estimation fails when the object is placed symmetrically with respect to the viewpoint. To overcome this problem, this study proposes a modified viewpoint feature histogram (MVFH) descriptor that consists of two parts: a surface shape component that comprises an extended fast point feature histogram, and an extended viewpoint direction component. The MVFH descriptor characterizes an object’s pose and enhances the system’s ability to identify objects with mirrored poses. Finally, once the object has been recognized, its pose roughly estimated by the MVFH descriptor, and the result registered in the database, the pose is refined using the iterative closest point algorithm. The estimation results demonstrate that the MVFH feature descriptor allows more accurate pose estimation. The experiments also show that the proposed method can be applied in vision-guided robotic grasping systems. PMID:27886080

  6. Breaking Object Correspondence Across Saccadic Eye Movements Deteriorates Object Recognition.

    PubMed

    Poth, Christian H; Herwig, Arvid; Schneider, Werner X

    2015-01-01

    Visual perception is based on information processing during periods of eye fixations that are interrupted by fast saccadic eye movements. The ability to sample and relate information on task-relevant objects across fixations implies that correspondence between presaccadic and postsaccadic objects is established. Postsaccadic object information usually updates and overwrites information on the corresponding presaccadic object. The presaccadic object representation is then lost. In contrast, the presaccadic object is conserved when object correspondence is broken. This helps transsaccadic memory but it may impose attentional costs on object recognition. Therefore, we investigated how breaking object correspondence across the saccade affects postsaccadic object recognition. In Experiment 1, object correspondence was broken by a brief postsaccadic blank screen. Observers made a saccade to a peripheral object which was displaced during the saccade. This object reappeared either immediately after the saccade or after the blank screen. Within the postsaccadic object, a letter was briefly presented (terminated by a mask). Observers reported displacement direction and letter identity in different blocks. Breaking object correspondence by blanking improved displacement identification but deteriorated postsaccadic letter recognition. In Experiment 2, object correspondence was broken by changing the object's contrast-polarity. There were no object displacements and observers only reported letter identity. Again, breaking object correspondence deteriorated postsaccadic letter recognition. These findings identify transsaccadic object correspondence as a key determinant of object recognition across the saccade. This is in line with the recent hypothesis that breaking object correspondence results in separate representations of presaccadic and postsaccadic objects which then compete for limited attentional processing resources (Schneider, 2013). Postsaccadic object recognition is

  7. Recognition memory impairments caused by false recognition of novel objects.

    PubMed

    Yeung, Lok-Kin; Ryan, Jennifer D; Cowell, Rosemary A; Barense, Morgan D

    2013-11-01

    A fundamental assumption underlying most current theories of amnesia is that memory impairments arise because previously studied information either is lost rapidly or is made inaccessible (i.e., the old information appears to be new). Recent studies in rodents have challenged this view, suggesting instead that under conditions of high interference, recognition memory impairments following medial temporal lobe damage arise because novel information appears as though it has been previously seen. Here, we developed a new object recognition memory paradigm that distinguished whether object recognition memory impairments were driven by previously viewed objects being treated as if they were novel or by novel objects falsely recognized as though they were previously seen. In this indirect, eyetracking-based passive viewing task, older adults at risk for mild cognitive impairment showed false recognition to high-interference novel items (with a significant degree of feature overlap with previously studied items) but normal novelty responses to low-interference novel items (with a lower degree of feature overlap). The indirect nature of the task minimized the effects of response bias and other memory-based decision processes, suggesting that these factors cannot solely account for false recognition. These findings support the counterintuitive notion that recognition memory impairments in this memory-impaired population are not characterized by forgetting but rather are driven by the failure to differentiate perceptually similar objects, leading to the false recognition of novel objects as having been seen before.

  8. Object and event recognition for stroke rehabilitation

    NASA Astrophysics Data System (ADS)

    Ghali, Ahmed; Cunningham, Andrew S.; Pridmore, Tony P.

    2003-06-01

    Stroke is a major cause of disability and health care expenditure around the world. Existing stroke rehabilitation methods can be effective but are costly and need to be improved. Even modest improvements in the effectiveness of rehabilitation techniques could produce large benefits in terms of quality of life. The work reported here is part of an ongoing effort to integrate virtual reality and machine vision technologies to produce innovative stroke rehabilitation methods. We describe a combined object recognition and event detection system that provides real time feedback to stroke patients performing everyday kitchen tasks necessary for independent living, e.g. making a cup of coffee. The image plane position of each object, including the patient's hand, is monitored using histogram-based recognition methods. The relative positions of hand and objects are then reported to a task monitor that compares the patient's actions against a model of the target task. A prototype system has been constructed and is currently undergoing technical and clinical evaluation.
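
    The histogram-based monitoring described above can be illustrated with hue-histogram backprojection in OpenCV; the file names, the sample patch, and the single-channel hue model are placeholder assumptions, and this is only one plausible realization of such tracking.

        import cv2
        import numpy as np

        # Build a hue histogram model from a sample image of the target object
        # (e.g. a crop of the coffee cup); 'model.png' is a placeholder.
        model = cv2.imread("model.png")
        model_hsv = cv2.cvtColor(model, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([model_hsv], [0], None, [32], [0, 180])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

        # Locate the object in a new frame by backprojecting the histogram.
        frame = cv2.imread("frame.png")
        frame_hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([frame_hsv], [0], hist, [0, 180], 1)

        # The brightest region of the backprojection approximates the image-plane position.
        _, _, _, max_loc = cv2.minMaxLoc(cv2.GaussianBlur(backproj, (15, 15), 0))
        print("estimated object position:", max_loc)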

  9. Object Recognition Memory and the Rodent Hippocampus

    ERIC Educational Resources Information Center

    Broadbent, Nicola J.; Gaskin, Stephane; Squire, Larry R.; Clark, Robert E.

    2010-01-01

    In rodents, the novel object recognition task (NOR) has become a benchmark task for assessing recognition memory. Yet, despite its widespread use, a consensus has not developed about which brain structures are important for task performance. We assessed both the anterograde and retrograde effects of hippocampal lesions on performance in the NOR…

  10. BDNF controls object recognition memory reconsolidation.

    PubMed

    Radiske, Andressa; Rossato, Janine I; Gonzalez, Maria Carolina; Köhler, Cristiano A; Bevilaqua, Lia R; Cammarota, Martín

    2017-03-06

    Reconsolidation restabilizes memory after reactivation. Previously, we reported that the hippocampus is engaged in object recognition memory reconsolidation to allow incorporation of new information into the original engram. Here we show that BDNF is sufficient for this process, and that blockade of BDNF function in dorsal CA1 impairs updating of the reactivated recognition memory trace.

  11. The Role of Object Recognition in Young Infants' Object Segregation.

    ERIC Educational Resources Information Center

    Carey, Susan; Williams, Travis

    2001-01-01

    Discusses Needham's findings by asserting that they extend understanding of infant perception by showing that the memory representations infants draw upon have bound together information about shape, color, and pattern. Considers the distinction between two senses of "recognition" and asks in which sense object recognition contributes to object…

  12. Neural network system for 3-D object recognition and pose estimation from a single arbitrary 2-D view

    NASA Astrophysics Data System (ADS)

    Khotanzad, Alireza R.; Liou, James H.

    1992-09-01

    In this paper, a robust and fast system for recognition as well as pose estimation of a 3-D object from a single 2-D perspective of it taken from an arbitrary viewpoint is developed. The approach is invariant to location, orientation, and scale of the object in the perspective. The silhouette of the object in the 2-D perspective is first normalized with respect to location and scale. A set of rotation invariant features derived from complex and orthogonal pseudo-Zernike moments of the image is then extracted. The next stage includes a bank of multilayer feed-forward neural networks (NN) each of which classifies the extracted features. The training set for these nets consists of perspective views of each object taken from several different viewing angles. The NNs in the bank differ in the size of their hidden layer nodes as well as their initial conditions but receive the same input. The classification decisions of all the nets are combined through a majority voting scheme. It is shown that this collective decision making yields better results compared to a single NN operating alone. After the object is classified, two of its pose parameters, namely elevation and aspect angles, are estimated by another module of NNs in a two-stage process. The first stage identifies the likely region of the space that the object is being viewed from. In the second stage, an NN estimator for the identified region is used to compute the pose angles. Extensive experimental studies involving clean and noisy images of seven military ground vehicles are carried out. The performance is compared to two other traditional methods, namely a nearest neighbor rule and a binary decision tree classifier and it is shown that our approach has major advantages over them.
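
    A rough sketch of the classifier-bank-with-majority-voting idea, using scikit-learn MLPs that differ in hidden-layer size and initialization, is shown below; the synthetic feature vectors stand in for the pseudo-Zernike moment features used in the paper.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X_train, y_train = rng.normal(size=(200, 16)), rng.integers(0, 7, size=200)
        X_test = rng.normal(size=(20, 16))

        # Bank of feed-forward nets differing in hidden-layer size and initial weights.
        bank = [MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000, random_state=seed)
                for h, seed in [(10, 0), (20, 1), (30, 2), (40, 3), (50, 4)]]
        for net in bank:
            net.fit(X_train, y_train)

        # Majority vote over the individual classification decisions.
        votes = np.stack([net.predict(X_test) for net in bank])   # shape (n_nets, n_samples)
        majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
        print(majority)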

  13. Increasing the object recognition distance of compact open air on board vision system

    NASA Astrophysics Data System (ADS)

    Kirillov, Sergey; Kostkin, Ivan; Strotov, Valery; Dmitriev, Vladimir; Berdnikov, Vadim; Akopov, Eduard; Elyutin, Aleksey

    2016-10-01

    The aim of this work was to develop an algorithm that eliminates atmospheric distortion and improves image quality. The proposed algorithm is implemented entirely in software, without additional photographic hardware, and does not require preliminary calibration. It works equally effectively with images obtained at distances from 1 to 500 meters. An algorithm for improving open-air images, designed for Raspberry Pi model B on-board vision systems, is proposed. The results of experimental examination are given.

  14. Neural-Network Object-Recognition Program

    NASA Technical Reports Server (NTRS)

    Spirkovska, L.; Reid, M. B.

    1993-01-01

    HONTIOR computer program implements third-order neural network exhibiting invariance under translation, change of scale, and in-plane rotation. Invariance incorporated directly into architecture of network. Only one view of each object needed to train network for two-dimensional-translation-invariant recognition of object. Also used for three-dimensional-transformation-invariant recognition by training network on only set of out-of-plane rotated views. Written in C language.

  15. Object recognition approach based on feature fusion

    NASA Astrophysics Data System (ADS)

    Wang, Runsheng

    2001-09-01

    Multi-sensor information fusion plays an important role in object recognition and many other application fields. Fusion performance depends strongly on the fusion level selected and the approach used. Although there are generally three fusion levels, feature-level fusion is a promising but difficult one. Two schemes are developed in this paper for key issues of feature-level fusion. For feature selection, a method is developed that analyzes the mutual relationships among the available features and uses them to order the features. For object recognition, a multi-level recognition scheme is developed whose procedure can be controlled and updated by analyzing the decision result, in order to achieve a final reliable result. The new approach is applied to recognizing work-piece objects of twelve classes in optical images and open-country objects of four classes based on infrared image sequences and MMW radar. Experimental results are satisfactory.
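
    One concrete way to order candidate features by their relationship to the recognition task, in the spirit of the feature-selection scheme above, is to rank them by mutual information with the class label; the sketch below uses scikit-learn on synthetic data and is not the authors' exact method.

        import numpy as np
        from sklearn.feature_selection import mutual_info_classif

        rng = np.random.default_rng(1)
        y = rng.integers(0, 12, size=300)                      # twelve work-piece classes
        informative = y[:, None] + rng.normal(scale=0.5, size=(300, 3))
        noise = rng.normal(size=(300, 5))
        X = np.hstack([informative, noise])                    # 8 candidate features

        mi = mutual_info_classif(X, y, random_state=0)
        order = np.argsort(mi)[::-1]                           # most informative first
        print("feature ranking:", order)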

  16. Quantifying the Energy Efficiency of Object Recognition and Optical Flow

    DTIC Science & Technology

    2014-03-28

    Bruce D Lucas, Takeo Kanade, et al. An Iterative Image Registration Technique with an Application to Stereo Vision. In IJCAI, volume 81, pages 674–679...board unmanned aerial vehicle (UAV) vision processing. Specifically, we focus on object recognition, object tracking, and optical flow. Given that on...6] with >1M labeled images ) for training and evaluating object recognition systems. It turns out that large datasets are a lynchpin of high-accuracy

  17. The role of nitric oxide in the object recognition memory.

    PubMed

    Pitsikas, Nikolaos

    2015-05-15

    The novel object recognition task (NORT) assesses recognition memory in animals. It is a non-rewarded paradigm that is based on spontaneous exploratory behavior in rodents. This procedure is widely used for testing the effects of compounds on recognition memory. Recognition memory is a type of memory severely compromised in schizophrenic and Alzheimer's disease patients. Nitric oxide (NO) is thought to be an intra- and inter-cellular messenger in the central nervous system and its implication in learning and memory is well documented. Here I critically review the role of NO-related compounds in different aspects of recognition memory. Current analysis shows that both NO donors and NO synthase (NOS) inhibitors are involved in object recognition memory and suggests that NO might be a promising target for cognition impairments. However, the potential neurotoxicity of NO would add a note of caution in this context.
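
    Recognition in the NORT is commonly quantified with a discrimination index computed from the exploration times of the novel and familiar objects; a minimal sketch of that standard measure follows, with made-up exploration times.

        def discrimination_index(t_novel, t_familiar):
            """Standard NORT measure: ranges from -1 to 1, where 0 means no preference
            and positive values indicate more exploration of the novel object,
            i.e. recognition of the familiar one."""
            return (t_novel - t_familiar) / (t_novel + t_familiar)

        print(discrimination_index(t_novel=32.0, t_familiar=18.0))   # 0.28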

  18. Recognition of object domain by color distribution

    NASA Technical Reports Server (NTRS)

    Mugitani, Takako; Mifune, Mitsuru; Nagata, Shigeki

    1988-01-01

    For the image processing of an object in its natural image, it is necessary to extract in advance the object to be processed from its image. To accomplish this, the outer shape of an object is extracted through human instructions, which requires a great deal of time and patience. A method involving the setting of a model of color distribution on the surface of an object is described. This method automatically provides color recognition, a piece of knowledge that represents the properties of an object, from its natural image. A method for recognizing and extracting the object in the image according to the recognized color is also described.

  19. Integration trumps selection in object recognition.

    PubMed

    Saarela, Toni P; Landy, Michael S

    2015-03-30

    Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several "cues" (color, luminance, texture, etc.), and humans can integrate sensory cues to improve detection and recognition [1-3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue invariance by responding to a given shape independent of the visual cue defining it [5-8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10, 11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11, 12], imaging [13-16], and single-cell and neural population recordings [17, 18]. Besides single features, attention can select whole objects [19-21]. Objects are among the suggested "units" of attention because attention to a single feature of an object causes the selection of all of its features [19-21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection.
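
    Near-optimal cue integration is usually modeled as reliability-weighted averaging, in which each cue's estimate is weighted by its inverse variance and the combined estimate is less variable than any single cue; a small numeric sketch with made-up values follows.

        import numpy as np

        # Single-cue estimates of, say, an object boundary position, with their noise SDs.
        estimates = np.array([1.8, 2.3, 2.0])       # color, texture, luminance cues
        sigmas = np.array([0.4, 0.6, 0.5])

        weights = 1.0 / sigmas**2
        weights /= weights.sum()

        combined = np.sum(weights * estimates)
        combined_sigma = np.sqrt(1.0 / np.sum(1.0 / sigmas**2))

        print(combined, combined_sigma)             # combined SD is below the best single cue (0.4)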

  20. Integration trumps selection in object recognition

    PubMed Central

    Saarela, Toni P.; Landy, Michael S.

    2015-01-01

    Summary Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several “cues” (color, luminance, texture etc.), and humans can integrate sensory cues to improve detection and recognition [1–3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue-invariance by responding to a given shape independent of the visual cue defining it [5–8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10,11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11,12], imaging [13–16], and single-cell and neural population recordings [17,18]. Besides single features, attention can select whole objects [19–21]. Objects are among the suggested “units” of attention because attention to a single feature of an object causes the selection of all of its features [19–21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near-optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection. PMID:25802154

  1. Divergent short- and long-term effects of acute stress in object recognition memory are mediated by endogenous opioid system activation.

    PubMed

    Nava-Mesa, Mauricio O; Lamprea, Marisol R; Múnera, Alejandro

    2013-11-01

    Acute stress induces short-term object recognition memory impairment and elicits endogenous opioid system activation. The aim of this study was thus to evaluate whether opiate system activation mediates the acute stress-induced object recognition memory changes. Adult male Wistar rats were trained in an object recognition task designed to test both short- and long-term memory. Subjects were randomly assigned to receive an intraperitoneal injection of saline, 1 mg/kg naltrexone or 3 mg/kg naltrexone, four and a half hours before the sample trial. Five minutes after the injection, half the subjects were submitted to movement restraint during four hours while the other half remained in their home cages. Non-stressed subjects receiving saline (control) performed adequately during the short-term memory test, while stressed subjects receiving saline displayed impaired performance. Naltrexone prevented such deleterious effect, in spite of the fact that it had no intrinsic effect on short-term object recognition memory. Stressed subjects receiving saline and non-stressed subjects receiving naltrexone performed adequately during the long-term memory test; however, control subjects as well as stressed subjects receiving a high dose of naltrexone performed poorly. Control subjects' dissociated performance during both memory tests suggests that the short-term memory test induced a retroactive interference effect mediated through light opioid system activation; such effect was prevented either by low dose naltrexone administration or by strongly activating the opioid system through acute stress. Both short-term memory retrieval impairment and long-term memory improvement observed in stressed subjects may have been mediated through strong opioid system activation, since they were prevented by high dose naltrexone administration. Therefore, the activation of the opioid system plays a dual modulating role in object recognition memory.

  2. Neurocomputational bases of object and face recognition.

    PubMed Central

    Biederman, I; Kalocsai, P

    1997-01-01

    A number of behavioural phenomena distinguish the recognition of faces and objects, even when members of a set of objects are highly similar. Because faces have the same parts in approximately the same relations, individuation of faces typically requires specification of the metric variation in a holistic and integral representation of the facial surface. The direct mapping of a hypercolumn-like pattern of activation onto a representation layer that preserves relative spatial filter values in a two-dimensional (2D) coordinate space, as proposed by C. von der Malsburg and his associates, may account for many of the phenomena associated with face recognition. An additional refinement, in which each column of filters (termed a 'jet') is centred on a particular facial feature (or fiducial point), allows selectivity of the input into the holistic representation to avoid incorporation of occluding or nearby surfaces. The initial hypercolumn representation also characterizes the first stage of object perception, but the image variation for objects at a given location in a 2D coordinate space may be too great to yield sufficient predictability directly from the output of spatial kernels. Consequently, objects can be represented by a structural description specifying qualitative (typically, non-accidental) characterizations of an object's parts, the attributes of the parts, and the relations among the parts, largely based on orientation and depth discontinuities (as shown by Hummel & Biederman). A series of experiments on the name priming or physical matching of complementary images (in the Fourier domain) of objects and faces documents that whereas face recognition is strongly dependent on the original spatial filter values, evidence from object recognition indicates strong invariance to these values, even when distinguishing among objects that are as similar as faces. PMID:9304687
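
    The 'jet' construct, a column of spatial-filter responses centred on a fiducial point, can be sketched with OpenCV Gabor kernels as below; the image path, fiducial coordinates, and filter parameters are placeholder assumptions rather than the model's actual settings.

        import cv2
        import numpy as np

        def gabor_jet(gray, point, scales=(4, 8, 16), n_orients=8):
            """Vector of Gabor magnitude responses at one fiducial point,
            over several scales and orientations (a simplified 'jet')."""
            x, y = point
            jet = []
            for lam in scales:
                for k in range(n_orients):
                    theta = np.pi * k / n_orients
                    kern = cv2.getGaborKernel((31, 31), sigma=lam / 2.0, theta=theta,
                                              lambd=lam, gamma=0.5, psi=0)
                    resp = cv2.filter2D(gray.astype(np.float32), -1, kern)
                    jet.append(abs(resp[y, x]))
            return np.array(jet)

        gray = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)   # placeholder image
        jet = gabor_jet(gray, point=(64, 80))                 # e.g. a point near one eye
        print(jet.shape)                                      # 3 scales x 8 orientations = 24 values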

  3. Object recognition difficulty in visual apperceptive agnosia.

    PubMed

    Grossman, M; Galetta, S; D'Esposito, M

    1997-04-01

    Two patients with visual apperceptive agnosia were examined on tasks assessing the appreciation of visual material. Elementary visual functioning was relatively preserved, but they had profound difficulty recognizing and naming line drawings. More detailed evaluation revealed accurate recognition of regular geometric shapes and colors, but performance deteriorated when the shapes were made more complex visually, when multiple-choice arrays contained larger numbers of simple targets and foils, and when a mental manipulation such as a rotation was required. The recognition of letters and words was similarly compromised. Naming, recognition, and anomaly judgments of colored pictures and real objects were more accurate than similar decisions involving black-and-white line drawings. Visual imagery for shapes, letters, and objects appeared to be more accurate than visual perception of the same materials. We hypothesize that object recognition difficulty in visual apperceptive agnosia is due to two related factors: the impaired appreciation of the visual perceptual features that constitute objects, and a limitation in the cognitive resources that are available for processing demanding material within the visual modality.

  4. High speed optical object recognition processor with massive holographic memory

    NASA Technical Reports Server (NTRS)

    Chao, T.; Zhou, H.; Reyes, G.

    2002-01-01

    Real-time object recognition using a compact grayscale optical correlator will be introduced. A holographic memory module for storing a large bank of optimum correlation filters, to accommodate the large data throughput rate needed for many real-world applications, has also been developed. System architecture of the optical processor and the holographic memory will be presented. Application examples of this object recognition technology will also be demonstrated.
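
    The digital counterpart of matching an input scene against a bank of correlation filters is FFT-based cross-correlation; the numpy sketch below uses random stand-in templates rather than the optimum correlation filters stored in the holographic memory.

        import numpy as np

        def correlate_fft(scene, template):
            """Cross-correlate a template with a scene via the FFT and return the peak value."""
            H, W = scene.shape
            S = np.fft.fft2(scene)
            T = np.fft.fft2(template, s=(H, W))
            corr = np.fft.ifft2(S * np.conj(T)).real
            return corr.max()

        rng = np.random.default_rng(0)
        scene = rng.random((128, 128))
        bank = {"door": rng.random((16, 16)), "cup": rng.random((16, 16))}  # stand-in templates

        scores = {name: correlate_fft(scene, tpl) for name, tpl in bank.items()}
        print(max(scores, key=scores.get), scores)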

  5. Parallel and distributed computation for fault-tolerant object recognition

    NASA Technical Reports Server (NTRS)

    Wechsler, Harry

    1988-01-01

    The distributed associative memory (DAM) model is suggested for distributed and fault-tolerant computation as it relates to object recognition tasks. The fault-tolerance is with respect to geometrical distortions (scale and rotation), noisy inputs, occlusion/overlap, and memory faults. An experimental system was developed for fault-tolerant structure recognition which shows the feasibility of such an approach. The approach is further extended to the problem of multisensory data integration and applied successfully to the recognition of colored polyhedral objects.
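
    A distributed associative memory in this spirit can be sketched as a single weight matrix obtained with the pseudoinverse, which recalls stored associations even from corrupted keys; the patterns below are random stand-ins.

        import numpy as np

        rng = np.random.default_rng(0)
        keys = rng.choice([-1.0, 1.0], size=(5, 64))      # 5 stored input patterns
        values = np.eye(5)                                 # one-hot labels as recollections

        # Memory matrix M minimizes ||keys @ M - values||; recall is a single matrix product.
        M = np.linalg.pinv(keys) @ values

        noisy_key = keys[2] * rng.choice([1.0, -1.0], size=64, p=[0.9, 0.1])  # 10% bit flips
        recalled = noisy_key @ M
        print("recalled class:", recalled.argmax())        # usually 2 despite the noise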

  6. Shape and Color Features for Object Recognition Search

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Duong, Vu A.; Stubberud, Allen R.

    2012-01-01

    A bio-inspired shape feature of an object of interest emulates the integration of saccadic eye movement and the horizontal layer of the vertebrate retina for object recognition search, where a single object is processed at a time. An optimal computational model for shape-extraction-based principal component analysis (PCA) was also developed to reduce processing time and enable real-time adaptive system capability. A color feature of the object is employed, via color segmentation, to support the shape-based recognition in heterogeneous environments where a single technique, shape or color, may run into difficulties. To enable an effective system, an adaptive architecture and autonomous mechanism were developed to recognize and adapt to the shape and color features of the moving object. Object recognition based on bio-inspired shape and color features can be effective for recognizing a person of interest in a heterogeneous environment where a single technique has difficulty performing effective recognition. Moreover, this work also demonstrates the mechanism and architecture of an autonomous adaptive system, as a step toward a realistic system for practical use in the future.

  7. Object recognition with hierarchical discriminant saliency networks

    PubMed Central

    Han, Sunhyoung; Vasconcelos, Nuno

    2014-01-01

    The benefits of integrating attention and object recognition are investigated. While attention is frequently modeled as a pre-processor for recognition, we investigate the hypothesis that attention is an intrinsic component of recognition and vice-versa. This hypothesis is tested with a recognition model, the hierarchical discriminant saliency network (HDSN), whose layers are top-down saliency detectors, tuned for a visual class according to the principles of discriminant saliency. As a model of neural computation, the HDSN has two possible implementations. In a biologically plausible implementation, all layers comply with the standard neurophysiological model of visual cortex, with sub-layers of simple and complex units that implement a combination of filtering, divisive normalization, pooling, and non-linearities. In a convolutional neural network implementation, all layers are convolutional and implement a combination of filtering, rectification, and pooling. The rectification is performed with a parametric extension of the now popular rectified linear units (ReLUs), whose parameters can be tuned for the detection of target object classes. This enables a number of functional enhancements over neural network models that lack a connection to saliency, including optimal feature denoising mechanisms for recognition, modulation of saliency responses by the discriminant power of the underlying features, and the ability to detect both feature presence and absence. In either implementation, each layer has a precise statistical interpretation, and all parameters are tuned by statistical learning. Each saliency detection layer learns more discriminant saliency templates than its predecessors and higher layers have larger pooling fields. This enables the HDSN to simultaneously achieve high selectivity to target object classes and invariance. The performance of the network in saliency and object recognition tasks is compared to those of models from the biological and

  8. 3D object recognition based on local descriptors

    NASA Astrophysics Data System (ADS)

    Jakab, Marek; Benesova, Wanda; Racev, Marek

    2015-01-01

    In this paper, we propose an enhanced method of 3D object description and recognition based on local descriptors using an RGB image and depth information (D) acquired by a Kinect sensor. Our main contribution is focused on an extension of the SIFT feature vector by the 3D information derived from the depth map (SIFT-D). We also propose a novel local depth descriptor (DD) that includes a 3D description of the key point neighborhood. The 3D descriptor thus defined can then enter the decision-making process. Two different approaches have been proposed, tested and evaluated in this paper. The first approach deals with an object recognition system using the original SIFT descriptor in combination with our novel proposed 3D descriptor, where the proposed 3D descriptor is responsible for the pre-selection of the objects. The second approach demonstrates object recognition using an extension of the SIFT feature vector by the local depth description. In this paper, we present the results of two experiments for the evaluation of the proposed depth descriptors. The results show an improvement in accuracy of the recognition system that includes the 3D local description compared with the same system without the 3D local description. Our experimental object recognition system works in near real time.
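
    Appending local depth statistics to SIFT keypoint descriptors, roughly in the spirit of SIFT-D, might look like the sketch below; it assumes an OpenCV build that includes SIFT, the input files are placeholders, and the particular depth statistics are illustrative rather than the authors' descriptor.

        import cv2
        import numpy as np

        rgb = cv2.imread("rgb.png", cv2.IMREAD_GRAYSCALE)    # placeholder color image (read as gray)
        depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)  # aligned depth map

        sift = cv2.SIFT_create()
        keypoints, desc = sift.detectAndCompute(rgb, None)

        def depth_patch_stats(depth, kp, radius=8):
            """Simple local 3D cue: mean and spread of depth around the keypoint."""
            x, y = int(kp.pt[0]), int(kp.pt[1])
            patch = depth[max(0, y - radius):y + radius, max(0, x - radius):x + radius]
            return np.array([patch.mean(), patch.std()])

        # Append the depth statistics to each 128-D SIFT vector to form an extended descriptor.
        extended = np.hstack([desc,
                              np.array([depth_patch_stats(depth, kp) for kp in keypoints])])
        print(extended.shape)   # (n_keypoints, 130)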

  9. A new method of edge detection for object recognition

    USGS Publications Warehouse

    Maddox, Brian G.; Rhew, Benjamin

    2004-01-01

    Traditional edge detection systems function by returning every edge in an input image. This can result in a large amount of clutter and make certain vectorization algorithms less accurate. Accuracy problems can then have a large impact on automated object recognition systems that depend on edge information. A new method of directed edge detection can be used to limit the number of edges returned based on a particular feature. This results in a cleaner image that is easier for vectorization. Vectorized edges from this process could then feed an object recognition system where the edge data would also contain information as to what type of feature it bordered.
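
    One simple reading of directed edge detection, keeping only edges near a requested orientation, can be sketched with Sobel gradients as below; this is an illustration of the idea, not necessarily the authors' algorithm, and the input file and thresholds are placeholders.

        import cv2
        import numpy as np

        def directed_edges(gray, target_angle_deg, tol_deg=15, mag_thresh=50):
            """Keep only edge pixels whose gradient orientation lies within tol_deg
            of the requested direction and whose gradient magnitude is strong enough."""
            gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
            gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
            mag = np.hypot(gx, gy)
            angle = np.degrees(np.arctan2(gy, gx)) % 180.0      # orientation modulo 180
            diff = np.abs(angle - target_angle_deg % 180.0)
            diff = np.minimum(diff, 180.0 - diff)
            return ((mag > mag_thresh) & (diff < tol_deg)).astype(np.uint8) * 255

        gray = cv2.imread("map.png", cv2.IMREAD_GRAYSCALE)      # placeholder input
        vertical_edges = directed_edges(gray, target_angle_deg=0)  # horizontal gradient = vertical edge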

  10. Where you look can influence haptic object recognition.

    PubMed

    Lawson, Rebecca; Boylan, Amy; Edwards, Lauren

    2014-02-01

    We investigated whether the relative position of objects and the body would influence haptic recognition. People felt objects on the right or left side of their body midline, using their right hand. Their head was turned towards or away from the object, and they could not see their hands or the object. People were better at naming 2-D raised line drawings and 3-D small-scale models of objects and also real, everyday objects when they looked towards them. However, this head-towards benefit was reliable only when their right hand crossed their body midline to feel objects on their left side. Thus, haptic object recognition was influenced by people's head position, although vision of their hand and the object was blocked. This benefit of turning the head towards the object being explored suggests that proprioceptive and haptic inputs are remapped into an external coordinate system and that this remapping is harder when the body is in an unusual position (with the hand crossing the body midline and the head turned away from the hand). The results indicate that haptic processes align sensory inputs from the hand and head even though either hand-centered or object-centered coordinate systems should suffice for haptic object recognition.

  11. Generalized Sparselet Models for Real-Time Multiclass Object Recognition.

    PubMed

    Song, Hyun Oh; Girshick, Ross; Zickler, Stefan; Geyer, Christopher; Felzenszwalb, Pedro; Darrell, Trevor

    2015-05-01

    Real-time multiclass object recognition is a problem of great practical importance. In this paper, we describe a framework that simultaneously utilizes shared representation, reconstruction sparsity, and parallelism to enable real-time multiclass object detection with deformable part models at 5 Hz on a laptop computer with almost no decrease in task performance. Our framework is trained in the standard structured output prediction formulation and is generically applicable for speeding up object recognition systems where the computational bottleneck is in multiclass, multi-convolutional inference. We experimentally demonstrate the efficiency and task performance of our method on the PASCAL VOC, a subset of ImageNet, Caltech101, and Caltech256 datasets.
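
    The shared-representation idea can be illustrated by approximating a bank of part filters as sparse combinations of a small learned dictionary, so that only the dictionary elements need to be convolved with the image; the scikit-learn sketch below uses random stand-in filters and is not the authors' sparselet implementation.

        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        rng = np.random.default_rng(0)
        # Stand-in for a multiclass filter bank: 60 part filters, each flattened to 36 weights.
        filters = rng.normal(size=(60, 36))

        # Learn a small shared dictionary and a sparse reconstruction of every filter.
        dl = DictionaryLearning(n_components=16, transform_algorithm="omp",
                                transform_n_nonzero_coefs=4, random_state=0)
        codes = dl.fit_transform(filters)            # (60, 16) sparse activations
        dictionary = dl.components_                  # (16, 36) shared elements

        # Convolving the image with the 16 dictionary elements and combining the responses
        # with the sparse codes approximates all 60 filter responses at lower cost.
        reconstruction = codes @ dictionary
        print("relative reconstruction error:",
              np.linalg.norm(filters - reconstruction) / np.linalg.norm(filters))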

  12. A Proposed Biologically Inspired Model for Object Recognition

    NASA Astrophysics Data System (ADS)

    Al-Absi, Hamada R. H.; Abdullah, Azween B.

    Object recognition has attracted the attention of many researchers, as it is considered one of the most important problems in computer vision. Two main approaches have been utilized to develop object recognition solutions, i.e. machine and biological vision. Many algorithms have been developed in machine vision. Recently, biology has inspired computer scientists to map the features of the human and primate visual systems into computational models. Some of these models are based on the feed-forward mechanism of information processing in cortex; however, the performance of these models degrades as clutter in the scene increases. Another mechanism of information processing in cortex is feedback. This mechanism has also been mapped into computational models. However, the results were also not satisfactory. In this paper an object recognition model based on the integration of the feed-forward and feedback functions of the visual cortex is proposed.

  13. Automatic anatomy recognition via fuzzy object models

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.; Odhner, Dewey; Falcão, Alexandre X.; Ciesielski, Krzysztof C.; Miranda, Paulo A. V.; Matsumoto, Monica; Grevera, George J.; Saboury, Babak; Torigian, Drew A.

    2012-02-01

    To make Quantitative Radiology a reality in routine radiological practice, computerized automatic anatomy recognition (AAR) during radiological image reading becomes essential. As part of this larger goal, last year at this conference we presented a fuzzy strategy for building body-wide group-wise anatomic models. In the present paper, we describe the further advances made in fuzzy modeling and the algorithms and results achieved for AAR by using the fuzzy models. The proposed AAR approach consists of three distinct steps: (a) building fuzzy object models (FOMs) for each population group G; (b) using the FOMs to recognize the individual objects in any given patient image I under group G; and (c) delineating the recognized objects in I. This paper will focus mostly on (b). FOMs are built hierarchically, the smaller sub-objects forming the offspring of larger parent objects. The hierarchical pose relationships from the parent to offspring are codified in the FOMs. Several approaches are being explored currently, grouped under two strategies, both being hierarchical: (ra1) those using search strategies; (ra2) those strategizing a one-shot approach by which the model pose is directly estimated without searching. Based on 32 patient CT data sets each from the thorax and abdomen and 25 objects modeled, our analysis indicates that objects do not all scale uniformly with patient size. Even the simplest among the (ra2) strategies, recognizing the root object and then placing all other descendants as per the learned parent-to-offspring pose relationship, brings the models on average to within about 18 mm of the true locations.

  14. Optical Recognition And Tracking Of Objects

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Liu, Hua-Kuang

    1988-01-01

    Separate objects moving independently tracked simultaneously. System uses coherent optical techniques to obtain correlation between each object and reference image. Moving objects monitored by charge-coupled-device television camera, output fed to liquid-crystal television (LCTV) display. Acting as spatial light modulator, LCTV impresses images of moving objects on collimated laser beam. Beam spatially low-pass filtered to remove high-spatial-frequency television grid pattern.

  15. Rule-Based Orientation Recognition Of A Moving Object

    NASA Astrophysics Data System (ADS)

    Gove, Robert J.

    1989-03-01

    This paper presents a detailed description and a comparative analysis of the algorithms used to determine the position and orientation of an object in real-time. The exemplary object, a freely moving goldfish in an aquarium, provides "real-world" motion, with definable characteristics of motion (the fish never swims upside-down) and the complexities of a non-rigid body. For simplicity of implementation, and since a restricted and stationary viewing domain exists (fish-tank), we reduced the problem of obtaining 3D correspondence information to trivial alignment calculations by using two cameras orthogonally viewing the object. We applied symbolic processing techniques to recognize the 3-D orientation of a moving object of known identity in real-time. Assuming motion, each new frame (sensed by the two cameras) provides images of the object's profile which has most likely undergone translation, rotation, scaling and/or bending of the non-rigid object since the previous frame. We developed an expert system which uses heuristics of the object's motion behavior in the form of rules and information obtained via low-level image processing (like numerical inertial axis calculations) to dynamically estimate the object's orientation. An inference engine provides these estimates at frame rates of up to 10 per second (which is essentially real-time). The advantages of the rule-based approach to orientation recognition will be compared with other pattern recognition techniques. Our results of an investigation of statistical pattern recognition, neural networks, and procedural techniques for orientation recognition will be included. We implemented the algorithms in a rapid-prototyping environment, the TI Explorer, equipped with an Odyssey and custom imaging hardware. A brief overview of the workstation is included to clarify one motivation for our choice of algorithms. These algorithms exploit two facets of the prototype image processing and understanding workstation - both low
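
    The "numerical inertial axis calculations" mentioned above can be sketched with second-order image moments, which give the principal-axis orientation of a binary silhouette; the silhouette below is synthetic.

        import numpy as np

        def principal_axis_angle(mask):
            """Orientation (radians) of the principal inertial axis of a binary silhouette."""
            ys, xs = np.nonzero(mask)
            x0, y0 = xs.mean(), ys.mean()
            mu20 = ((xs - x0) ** 2).mean()
            mu02 = ((ys - y0) ** 2).mean()
            mu11 = ((xs - x0) * (ys - y0)).mean()
            return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

        # Synthetic elongated blob tilted roughly 30 degrees.
        mask = np.zeros((100, 100), dtype=bool)
        t = np.linspace(-30, 30, 400)
        xs = (50 + t * np.cos(np.radians(30))).astype(int)
        ys = (50 + t * np.sin(np.radians(30))).astype(int)
        mask[ys, xs] = True
        print(np.degrees(principal_axis_angle(mask)))   # close to 30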

  16. Reader error, object recognition, and visual search

    NASA Astrophysics Data System (ADS)

    Kundel, Harold L.

    2004-05-01

    Small abnormalities such as hairline fractures, lung nodules and breast tumors are missed by competent radiologists with sufficient frequency to make them a matter of concern to the medical community; not only because they lead to litigation but also because they delay patient care. It is very easy to attribute misses to incompetence or inattention. To do so may be placing an unjustified stigma on the radiologists involved and may allow other radiologists to continue a false optimism that it can never happen to them. This review presents some of the fundamentals of visual system function that are relevant to understanding the search for and the recognition of small targets embedded in complicated but meaningful backgrounds like chests and mammograms. It presents a model for visual search that postulates a pre-attentive global analysis of the retinal image followed by foveal checking fixations and eventually discovery scanning. The model will be used to differentiate errors of search, recognition and decision making. The implications for computer aided diagnosis and for functional workstation design are discussed.

  17. The Role of Perceptual Load in Object Recognition

    ERIC Educational Resources Information Center

    Lavie, Nilli; Lin, Zhicheng; Zokaei, Nahid; Thoma, Volker

    2009-01-01

    Predictions from perceptual load theory (Lavie, 1995, 2005) regarding object recognition across the same or different viewpoints were tested. Results showed that high perceptual load reduces distracter recognition levels despite always presenting distracter objects from the same view. They also showed that the levels of distracter recognition were…

  18. A novel multi-view object recognition in complex background

    NASA Astrophysics Data System (ADS)

    Chang, Yongxin; Yu, Huapeng; Xu, Zhiyong; Fu, Chengyu; Gao, Chunming

    2015-02-01

    Recognizing objects from arbitrary aspects is always a highly challenging problem in computer vision, and most existing algorithms mainly focus on a specific viewpoint. Hence, in this paper we present a novel recognition framework based on hierarchical representation, a part-based method, and learning, in order to recognize objects from different viewpoints. The learning evaluates the model's mistakes and feeds them back to the detector to avoid the same mistakes in the future. The principal idea is to extract intrinsic viewpoint-invariant features from unseen poses of the object, and then to take advantage of these shared appearance features to support recognition, combining them with the improved multiple-view model. Compared with other recognition models, the proposed approach can efficiently tackle the multi-view problem and improve the recognition versatility of our system. For a quantitative evaluation, the novel algorithm has been tested on several benchmark datasets such as Caltech 101 and PASCAL VOC 2010. The experimental results validate that our approach can recognize objects more precisely and that its performance outperforms other single-view recognition methods.

  19. Object Recognition using Feature- and Color-Based Methods

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Duong, Vu; Stubberud, Allen

    2008-01-01

    An improved adaptive method of processing image data in an artificial neural network has been developed to enable automated, real-time recognition of possibly moving objects under changing (including suddenly changing) conditions of illumination and perspective. The method involves a combination of two prior object-recognition methods, one based on adaptive detection of shape features and one based on adaptive color segmentation, to enable recognition in situations in which either prior method by itself may be inadequate. The chosen prior feature-based method is known as adaptive principal-component analysis (APCA); the chosen prior color-based method is known as adaptive color segmentation (ACOSE). These methods are made to interact with each other in a closed-loop system to obtain an optimal solution of the object-recognition problem in a dynamic environment. One of the results of the interaction is to increase, beyond what would otherwise be possible, the accuracy of the determination of a region of interest (containing an object that one seeks to recognize) within an image. Another result is to provide a minimized adaptive step that can be used to update the results obtained by the two component methods when changes of color and apparent shape occur. The net effect is to enable the neural network to update its recognition output and improve its recognition capability via an adaptive learning sequence. In principle, the improved method could readily be implemented in integrated circuitry to make a compact, low-power, real-time object-recognition system. It has been proposed to demonstrate the feasibility of such a system by integrating a 256-by-256 active-pixel sensor with APCA, ACOSE, and neural processing circuitry on a single chip. It has been estimated that such a system on a chip would have a volume no larger than a few cubic centimeters, could operate at a rate as high as 1,000 frames per second, and would consume on the order of milliwatts of power.
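
    A much-simplified sketch of how a color stage can narrow the region of interest that a PCA-based shape stage then encodes is shown below; it is not the APCA or ACOSE algorithm, and the hue range, file name, and random "learned" components are placeholder assumptions.

        import cv2
        import numpy as np

        # --- Color stage: segment candidate regions by hue range (stand-in for ACOSE). ---
        frame = cv2.imread("frame.png")                          # placeholder input
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (100, 60, 60), (130, 255, 255))  # e.g. "blue-ish" objects
        x, y, w, h = cv2.boundingRect(mask)
        roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        roi = cv2.resize(roi, (32, 32)).astype(np.float32).ravel()

        # --- Shape stage: project the ROI onto principal components (stand-in for APCA). ---
        # 'components' and 'mean_shape' would come from training on example object views.
        rng = np.random.default_rng(0)
        mean_shape = rng.random(32 * 32).astype(np.float32)
        components = rng.normal(size=(10, 32 * 32)).astype(np.float32)
        shape_features = components @ (roi - mean_shape)

        print("shape feature vector:", shape_features.shape)     # 10-D code used for recognition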

  20. Hippocampal histone acetylation regulates object recognition and the estradiol-induced enhancement of object recognition.

    PubMed

    Zhao, Zaorui; Fan, Lu; Fortress, Ashley M; Boulware, Marissa I; Frick, Karyn M

    2012-02-15

    Histone acetylation has recently been implicated in learning and memory processes, yet necessity of histone acetylation for such processes has not been demonstrated using pharmacological inhibitors of histone acetyltransferases (HATs). As such, the present study tested whether garcinol, a potent HAT inhibitor in vitro, could impair hippocampal memory consolidation and block the memory-enhancing effects of the modulatory hormone 17β-estradiol E2. We first showed that bilateral infusion of garcinol (0.1, 1, or 10 μg/side) into the dorsal hippocampus (DH) immediately after training impaired object recognition memory consolidation in ovariectomized female mice. A behaviorally effective dose of garcinol (10 μg/side) also significantly decreased DH HAT activity. We next examined whether DH infusion of a behaviorally subeffective dose of garcinol (1 ng/side) could block the effects of DH E2 infusion on object recognition and epigenetic processes. Immediately after training, ovariectomized female mice received bilateral DH infusions of vehicle, E2 (5 μg/side), garcinol (1 ng/side), or E2 plus garcinol. Forty-eight hours later, garcinol blocked the memory-enhancing effects of E2. Garcinol also reversed the E2-induced increase in DH histone H3 acetylation, HAT activity, and levels of the de novo methyltransferase DNMT3B, as well as the E2-induced decrease in levels of the memory repressor protein histone deacetylase 2. Collectively, these findings suggest that histone acetylation is critical for object recognition memory consolidation and the beneficial effects of E2 on object recognition. Importantly, this work demonstrates that the role of histone acetylation in memory processes can be studied using a HAT inhibitor.

  1. Automatic Recognition of Object Names in Literature

    NASA Astrophysics Data System (ADS)

    Bonnin, C.; Lesteven, S.; Derriere, S.; Oberto, A.

    2008-08-01

    SIMBAD is a database of astronomical objects that provides (among other things) their bibliographic references in a large number of journals. Currently, these references have to be entered manually by librarians who read each paper. To cope with the increasing number of papers, CDS develops a tool to assist the librarians in their work, taking advantage of the Dictionary of Nomenclature of Celestial Objects, which keeps track of object acronyms and of their origin. The program searches for object names directly in PDF documents by comparing the words with all the formats stored in the Dictionary of Nomenclature. It also searches for variable star names based on constellation names and for a large list of usual names such as Aldebaran or the Crab. Object names found in the documents often correspond to several astronomical objects. The system retrieves all possible matches, displays them with their object type given by SIMBAD, and lets the librarian make the final choice. The bibliographic reference can then be automatically added to the object identifiers in the database. Besides, the systematic usage of the Dictionary of Nomenclature, which is updated manually, made it possible to check it automatically and to detect errors and inconsistencies. Last but not least, the program collects some additional information such as the position of the object names in the document (in the title, subtitle, abstract, table, figure caption...) and their number of occurrences. In the future, this will make it possible to calculate the 'weight' of an object in a reference and to provide SIMBAD users with important new information, which will help them find the most relevant papers in the object reference list.
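
    Matching candidate identifiers against stored nomenclature formats is essentially pattern matching; the toy sketch below uses Python regular expressions with heavily simplified stand-ins for Dictionary of Nomenclature formats.

        import re

        # Simplified acronym formats: each maps a regular expression to an identifier family.
        formats = {
            r"\bNGC\s?\d{1,4}\b": "NGC",
            r"\bHD\s?\d{1,6}\b": "HD",
            r"\b[A-Z]{1,2}\s[A-Z][a-z]{2}\b": "variable star (e.g. RR Lyr)",
        }

        text = "We compare NGC 1068 and HD 163296 with the RR Lyr prototype."

        for pattern, family in formats.items():
            for match in re.finditer(pattern, text):
                print(f"{match.group(0)!r} -> candidate {family} identifier at position {match.start()}")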

  2. Planning Multiple Observations for Object Recognition

    DTIC Science & Technology

    1992-12-09

    choosing the branch with the highest weight at each level, and backtracking when necessary. The PREMIO system of Camps, et al. [5] predicts object...appearances under various conditions of lighting, viewpoint, sensor, and image processing operators. Unlike other systems, PREMIO also evaluates the utility...1988). [5] Camps, O. I., Shapiro, L. G., and Haralick, R. M. PREMIO: an overview. Proc. IEEE Workshop on Directions in Automated CAD-Based Vision, pp

  3. Object recognition memory: neurobiological mechanisms of encoding, consolidation and retrieval.

    PubMed

    Winters, Boyer D; Saksida, Lisa M; Bussey, Timothy J

    2008-07-01

    Tests of object recognition memory, or the judgment of the prior occurrence of an object, have made substantial contributions to our understanding of the nature and neurobiological underpinnings of mammalian memory. Only in recent years, however, have researchers begun to elucidate the specific brain areas and neural processes involved in object recognition memory. The present review considers some of this recent research, with an emphasis on studies addressing the neural bases of perirhinal cortex-dependent object recognition memory processes. We first briefly discuss operational definitions of object recognition and the common behavioural tests used to measure it in non-human primates and rodents. We then consider research from the non-human primate and rat literature examining the anatomical basis of object recognition memory in the delayed nonmatching-to-sample (DNMS) and spontaneous object recognition (SOR) tasks, respectively. The results of these studies overwhelmingly favor the view that perirhinal cortex (PRh) is a critical region for object recognition memory. We then discuss the involvement of PRh in the different stages--encoding, consolidation, and retrieval--of object recognition memory. Specifically, recent work in rats has indicated that neural activity in PRh contributes to object memory encoding, consolidation, and retrieval processes. Finally, we consider the pharmacological, cellular, and molecular factors that might play a part in PRh-mediated object recognition memory. Recent studies in rodents have begun to indicate the remarkable complexity of the neural substrates underlying this seemingly simple aspect of declarative memory.

  4. A Rule-Based Pattern Matching System for the Recognition of Three-Dimensional Line Drawn Objects: A Foundation for Video Tracking,

    DTIC Science & Technology

    1986-01-01

    case, the object could have been a cubical one (prob = 35) or a prism (prob = 15) or an L-shaped object (prob = 25) or a T-shaped object (prob = 25...recognition program was coded in Prolog. A database containing eight object descriptions was used for testing. The set of objects consisted of prisms, pyramids...practical use, it should be able to recognize objects in a scene rather than when they are presented individually. This needs the introduction of more

  5. The role of perceptual load in object recognition.

    PubMed

    Lavie, Nilli; Lin, Zhicheng; Zokaei, Nahid; Thoma, Volker

    2009-10-01

    Predictions from perceptual load theory (Lavie, 1995, 2005) regarding object recognition across the same or different viewpoints were tested. Results showed that high perceptual load reduces distracter recognition levels despite always presenting distracter objects from the same view. They also showed that the levels of distracter recognition were unaffected by a change in the distracter object view under conditions of low perceptual load. These results were found both with repetition priming measures of distracter recognition and with performance on a surprise recognition memory test. The results support load theory proposals that distracter recognition critically depends on the level of perceptual load. The implications for the role of attention in object recognition theories are discussed.

  6. Shape Recognition Of Complex Objects By Syntactical Primitives

    NASA Astrophysics Data System (ADS)

    Lenger, D.; Cipovic, H.

    1985-04-01

    The paper describes a pattern recognition method based on syntactic image analysis applicable in autonomous systems of robot vision for the purpose of pattern detection or classification. The discrimination of syntactic elements is realized by polygonal approximation of contours employing a very fast algorithm based upon coding, local pixel logic and methods of choice instead of numerical methods. Semantic information is derived from attributes calculated from the filtered shape vector. No a priori information on image objects is required, and the choice of starting point is determined by finding the significant directions on the shape vector. The radius of the recognition sphere is the minimum Euclidean distance, i.e. the maximum similarity between the unknown model and each individual grammar created in the learning phase. By keeping information on the derivations of individual syntactic elements, the alternative of parsing-based recognition is left open. The analysis is very flexible, and permits the recognition of highly distorted or even partially visible objects. The output from the syntactic analyzer is a measure of irregularity, and the method is thus applicable in any application where sample deformation is being examined.
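
    Deriving syntactic primitives from a polygonal approximation of a contour can be sketched with OpenCV as below; the input file and tolerance are placeholders, and the Douglas-Peucker routine used here differs from the paper's coding-based approximation algorithm.

        import cv2
        import numpy as np

        gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)       # placeholder silhouette image
        _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)

        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        contour = max(contours, key=cv2.contourArea)

        # Polygonal approximation: the tolerance controls how coarse the primitives are.
        epsilon = 0.01 * cv2.arcLength(contour, True)
        polygon = cv2.approxPolyDP(contour, epsilon, True)

        # Each polygon edge becomes a candidate syntactic element (length + direction).
        pts = polygon.reshape(-1, 2).astype(float)
        edges = np.roll(pts, -1, axis=0) - pts
        lengths = np.linalg.norm(edges, axis=1)
        directions = np.degrees(np.arctan2(edges[:, 1], edges[:, 0]))
        print(list(zip(lengths.round(1), directions.round(1))))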

  7. Chotosan, a kampo formula, ameliorates chronic cerebral hypoperfusion-induced deficits in object recognition behaviors and central cholinergic systems in mice.

    PubMed

    Zhao, Qi; Murakami, Yukihisa; Tohda, Michihisa; Obi, Ryosuke; Shimada, Yutaka; Matsumoto, Kinzo

    2007-04-01

    We previously demonstrated that the Kampo formula chotosan (CTS) ameliorated spatial cognitive impairment via central cholinergic systems in a chronic cerebral hypoperfusion (P2VO) mouse model. In this study, the object discrimination tasks were used to determine if the ameliorative effects of CTS on P2VO-induced cognitive deficits are a characteristic pharmacological profile of this formula, with the aim of clarifying the mechanisms by which CTS enhances central cholinergic function in P2VO mice. The cholinesterase inhibitor tacrine (THA) and Kampo formula saikokeishito (SKT) were used as controls. P2VO impaired object discrimination performance in the object recognition, location, and context tests. Daily administration of CTS (750 mg/kg, p.o.) and THA (2.5 mg/kg, i.p.) improved the object discrimination deficits, whereas SKT (750 mg/kg, p.o.) did not. In ex vivo assays, tacrine but not CTS or SKT inhibited cortical cholinesterase activity. P2VO reduced the mRNA expression of m(3) and m(5) muscarinic receptors and choline acetyltransferase but not that of other muscarinic receptor subtypes in the cerebral cortex. Daily administration of CTS and THA but not SKT reversed these expression changes. These results suggest that CTS and THA improve P2VO-induced cognitive impairment by normalizing the deficit of central cholinergic systems and that the beneficial effect on P2VO-induced cognitive deficits is a distinctive pharmacological characteristic of CTS.

  8. Sleep deprivation impairs spontaneous object-place but not novel-object recognition in rats.

    PubMed

    Ishikawa, Hiroko; Yamada, Kazuo; Pavlides, Constantine; Ichitani, Yukio

    2014-09-19

    Effects of sleep deprivation (SD) on one-trial recognition memory were investigated in rats using either a spontaneous novel-object or object-place recognition test. Rats were allowed to explore a field in which two identical objects were presented. After a delay period, they were placed again in the same field in which either: (1) one of the two objects was replaced by another object (novel-object recognition); or (2) one of the sample objects was moved to a different place (object-place recognition), and their exploration behavior to these objects was analyzed. Four hours SD immediately after the sample phase (early SD group) disrupted object-place recognition but not novel-object recognition, while SD 4-8h after the sample phase (delayed SD group) did not affect either paradigm. The results suggest that sleep selectively promotes the consolidation of hippocampal dependent memory, and that this effect is limited to within 4h after learning.

  9. Fast neuromimetic object recognition using FPGA outperforms GPU implementations.

    PubMed

    Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph

    2013-08-01

    Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial field-programmable gate array (FPGA), specifically the Xilinx Virtex 6 ML605 evaluation board with XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with a less than 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks.
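
    A minimal sketch of the S1/C1 front end that HMAX-style models share, written with NumPy/SciPy: oriented Gabor filtering followed by local max pooling. It is not the modified FPGA implementation the record describes; the filter size, orientations, and pooling grid are illustrative assumptions.

        import numpy as np
        from scipy.signal import convolve2d

        def gabor_kernel(size=11, wavelength=5.0, orientation=0.0, sigma=3.0, gamma=0.5):
            """Odd-sized Gabor kernel, the kind of filter an HMAX S1 layer uses."""
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(orientation) + y * np.sin(orientation)
            yr = -x * np.sin(orientation) + y * np.cos(orientation)
            g = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)
            return g - g.mean()

        def s1_c1(image, orientations=(0, np.pi/4, np.pi/2, 3*np.pi/4), pool=8):
            """S1: filter the image with oriented Gabors. C1: local max pooling
            over non-overlapping pool x pool neighbourhoods, giving tolerance
            to small shifts (the core HMAX idea)."""
            h, w = image.shape
            h, w = h - h % pool, w - w % pool          # crop so pooling tiles evenly
            c1_maps = []
            for theta in orientations:
                s1 = np.abs(convolve2d(image, gabor_kernel(orientation=theta), mode='same'))[:h, :w]
                c1 = s1.reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))
                c1_maps.append(c1)
            return np.stack(c1_maps)                   # (orientations, h/pool, w/pool)

        # Toy usage on a random 128 x 128 "image", the size quoted in the record.
        features = s1_c1(np.random.rand(128, 128))
        print(features.shape)   # (4, 16, 16)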

  10. Visual Object Recognition and Tracking of Tools

    NASA Technical Reports Server (NTRS)

    English, James; Chang, Chu-Yin; Tardella, Neil

    2011-01-01

    A method has been created to automatically build an algorithm off-line, using computer-aided design (CAD) models, and to apply this at runtime. The object type is discriminated, and the position and orientation are identified. This system can work with a single image and can provide improved performance using multiple images provided from videos. The spatial processing unit uses three stages: (1) segmentation; (2) initial type, pose, and geometry (ITPG) estimation; and (3) refined type, pose, and geometry (RTPG) calculation. The image segmentation module finds all the tools in an image and isolates them from the background. For this, the system uses edge-detection and thresholding to find the pixels that are part of a tool. After the pixels are identified, nearby pixels are grouped into blobs. These blobs represent the potential tools in the image and are the product of the segmentation algorithm. The second module uses matched filtering (or template matching). This approach is used for condensing synthetic images using an image subspace that captures key information. Three degrees of orientation, three degrees of position, and any number of degrees of freedom in geometry change are included. To do this, a template-matching framework is applied. This framework uses an off-line system for calculating template images, measurement images, and the measurements of the template images. These results are used online to match segmented tools against the templates. The final module is the RTPG processor. Its role is to find the exact states of the tools given initial conditions provided by the ITPG module. The requirement that the initial conditions exist allows this module to make use of a local search (whereas the ITPG module had global scope). To perform the local search, 3D model matching is used, where a synthetic image of the object is created and compared to the sensed data. The availability of low-cost PC graphics hardware allows rapid creation of synthetic images
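
    The ITPG stage is described as matched filtering / template matching against synthetic template images. The Python sketch below shows only the core scoring operation under simplifying assumptions (templates pre-rendered and cropped to the same size as the segmented blob; zero-mean normalized correlation as the score); the template names and sizes are hypothetical.

        import numpy as np

        def normalized_xcorr(patch, template):
            """Zero-mean normalized correlation between an image patch and a template."""
            p = patch - patch.mean()
            t = template - template.mean()
            denom = np.linalg.norm(p) * np.linalg.norm(t)
            return float((p * t).sum() / denom) if denom else 0.0

        def match_templates(segment, templates):
            """Score a segmented blob against a bank of synthetic template images
            (assumed pre-cropped to the same size) and return the best label."""
            scores = {label: normalized_xcorr(segment, tmpl) for label, tmpl in templates.items()}
            best = max(scores, key=scores.get)
            return best, scores[best]

        # Toy usage with two hypothetical 32 x 32 templates.
        rng = np.random.default_rng(0)
        templates = {"wrench": rng.random((32, 32)), "driver": rng.random((32, 32))}
        blob = templates["wrench"] + 0.1 * rng.random((32, 32))   # noisy view of a wrench
        print(match_templates(blob, templates))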

  11. Selective visual attention in object recognition and scene analysis

    NASA Astrophysics Data System (ADS)

    Gonzaga, Adilson; de Almeida Neves, Evelina M.; Frere, Annie F.

    1998-10-01

    An important feature of the human vision system is the ability of selective visual attention. The stimulus that reaches the primate retina is processed in two different cortical pathways; one is specialized for object vision (`What') and the other for spatial vision (`Where'). In this way, the visual system is able to recognize objects independently of where they appear in the visual field. There are two major theories to explain human visual attention. According to the Object-Based theory, there is a limit on the number of isolated objects that can be perceived simultaneously, and according to the Space-Based theory, there is a limit on the spatial areas from which information can be taken up. This paper deals with the Object-Based theory, which holds that processing of the visual world occurs in two stages. The scene is segmented into isolated objects by region growing techniques in the pre-attentive stage. Invariant features (moments) are extracted and used as input to an Artificial Neural Network giving the probable object location (`Where'). In the focal stage, particular objects are analyzed in detail through another neural network that performs the object recognition (`What'). The number of analyzed objects is determined by a top-down process that performs a consistent scene interpretation. Visual attention makes possible the development of more efficient and flexible interfaces between low-level sensory information and high-level processes.
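
    The record names invariant moments as the features passed to the localization network. As a sketch of what such features look like, the code below computes the first two Hu moment invariants from normalized central moments with NumPy; the image size and the choice of exactly two invariants are assumptions for illustration.

        import numpy as np

        def hu_first_two(image):
            """First two Hu moment invariants of a grayscale image: features that
            are invariant to translation and scale (and, for these two, rotation)."""
            img = np.asarray(image, dtype=float)
            y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
            m00 = img.sum()
            xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

            def eta(p, q):
                mu = ((x - xc) ** p * (y - yc) ** q * img).sum()   # central moment
                return mu / m00 ** (1 + (p + q) / 2)               # scale-normalized

            phi1 = eta(2, 0) + eta(0, 2)
            phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
            return phi1, phi2

        # Toy usage: the invariants barely change when the blob is shifted.
        img = np.zeros((64, 64)); img[20:35, 25:45] = 1.0
        shifted = np.roll(img, (10, -5), axis=(0, 1))
        print(hu_first_two(img), hu_first_two(shifted))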

  12. Object Recognition and Localization: The Role of Tactile Sensors

    PubMed Central

    Aggarwal, Achint; Kirchner, Frank

    2014-01-01

    Tactile sensors, because of their intrinsic insensitivity to lighting conditions and water turbidity, provide promising opportunities for augmenting the capabilities of vision sensors in applications involving object recognition and localization. This paper presents two approaches for haptic object recognition and localization for ground and underwater environments. The first approach, called Batch Ransac and Iterative Closest Point augmented Particle Filter (BRICPPF), is based on an innovative combination of particle filters, the Iterative-Closest-Point algorithm, and a feature-based Random Sampling and Consensus (RANSAC) algorithm for database matching. It can handle a large database of 3D-objects of complex shapes and performs a complete six-degree-of-freedom localization of static objects. The algorithms are validated by experimentation in ground and underwater environments using real hardware. To our knowledge this is the first instance of haptic object recognition and localization in underwater environments. The second approach is biologically inspired, and provides a close integration between exploration and recognition. An edge following exploration strategy is developed that receives feedback from the current state of recognition. A recognition-by-parts approach is developed that uses the BRICPPF for object sub-part recognition. Object exploration is either directed to explore a part until it is successfully recognized, or is directed towards new parts to endorse the current recognition belief. This approach is validated by simulation experiments. PMID:24553087
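
    BRICPPF combines RANSAC, ICP, and a particle filter; the sketch below isolates just one ICP refinement step (nearest-neighbour correspondences plus a Kabsch/SVD rigid alignment), since that is the most self-contained ingredient. Point-cloud sizes and the toy rotation are assumptions, and the RANSAC and particle-filter stages are omitted.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_step(source, target):
            """One Iterative-Closest-Point refinement: pair each source point with
            its nearest target point, then solve for the rigid transform (R, t)
            that best aligns the pairs via the SVD of the cross-covariance."""
            tree = cKDTree(target)
            _, idx = tree.query(source)                 # nearest-neighbour correspondences
            matched = target[idx]
            src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
            H = (source - src_c).T @ (matched - tgt_c)  # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                    # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = tgt_c - R @ src_c
            return source @ R.T + t, R, t

        # Toy usage: partially recover a small rotation of a random 3-D point cloud.
        rng = np.random.default_rng(1)
        cloud = rng.random((200, 3))
        theta = 0.1
        Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
        moved, _, _ = icp_step(cloud, cloud @ Rz.T)
        print(np.abs(moved - cloud @ Rz.T).mean())      # alignment error after one step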

  13. An ERP Study on Self-Relevant Object Recognition

    ERIC Educational Resources Information Center

    Miyakoshi, Makoto; Nomura, Michio; Ohira, Hideki

    2007-01-01

    We performed an event-related potential study to investigate the self-relevance effect in object recognition. Three stimulus categories were prepared: SELF (participant's own objects), FAMILIAR (disposable and public objects, defined as objects with less-self-relevant familiarity), and UNFAMILIAR (others' objects). The participants' task was to…

  14. Contrast- and illumination-invariant object recognition from active sensation.

    PubMed

    Rentschler, Ingo; Osman, Erol; Jüttner, Martin

    2009-01-01

    It has been suggested that the deleterious effect of contrast reversal on visual recognition is unique to faces, not objects. Here we show from priming, supervised category learning, and generalization that there is no such thing as general invariance of recognition of non-face objects against contrast reversal and, likewise, changes in direction of illumination. However, when recognition varies with rendering conditions, invariance may be restored and effects of continuous learning may be reduced by providing prior object knowledge from active sensation. Our findings suggest that the degree of contrast invariance achieved reflects functional characteristics of object representations learned in a task-dependent fashion.

  15. Infants' Recognition of Objects Using Canonical Color

    ERIC Educational Resources Information Center

    Kimura, Atsushi; Wada, Yuji; Yang, Jiale; Otsuka, Yumiko; Dan, Ippeita; Masuda, Tomohiro; Kanazawa, So; Yamaguchi, Masami K.

    2010-01-01

    We explored infants' ability to recognize the canonical colors of daily objects, including two color-specific objects (human face and fruit) and a non-color-specific object (flower), by using a preferential looking technique. A total of 58 infants between 5 and 8 months of age were tested with a stimulus composed of two color pictures of an object…

  16. Young Children's Self-Generated Object Views and Object Recognition

    ERIC Educational Resources Information Center

    James, Karin H.; Jones, Susan S.; Smith, Linda B.; Swain, Shelley N.

    2014-01-01

    Two important and related developments in children between 18 and 24 months of age are the rapid expansion of object name vocabularies and the emergence of an ability to recognize objects from sparse representations of their geometric shapes. In the same period, children also begin to show a preference for planar views (i.e., views of objects held…

  17. Mechanisms of object recognition: what we have learned from pigeons

    PubMed Central

    Soto, Fabian A.; Wasserman, Edward A.

    2014-01-01

    Behavioral studies of object recognition in pigeons have been conducted for 50 years, yielding a large body of data. Recent work has been directed toward synthesizing this evidence and understanding the visual, associative, and cognitive mechanisms that are involved. The outcome is that pigeons are likely to be the non-primate species for which the computational mechanisms of object recognition are best understood. Here, we review this research and suggest that a core set of mechanisms for object recognition might be present in all vertebrates, including pigeons and people, making pigeons an excellent candidate model to study the neural mechanisms of object recognition. Behavioral and computational evidence suggests that error-driven learning participates in object category learning by pigeons and people, and recent neuroscientific research suggests that the basal ganglia, which are homologous in these species, may implement error-driven learning of stimulus-response associations. Furthermore, learning of abstract category representations can be observed in pigeons and other vertebrates. Finally, there is evidence that feedforward visual processing, a central mechanism in models of object recognition in the primate ventral stream, plays a role in object recognition by pigeons. We also highlight differences between pigeons and people in object recognition abilities, and propose candidate adaptive specializations which may explain them, such as holistic face processing and rule-based category learning in primates. From a modern comparative perspective, such specializations are to be expected regardless of the model species under study. The fact that we have a good idea of which aspects of object recognition differ in people and pigeons should be seen as an advantage over other animal models. From this perspective, we suggest that there is much to learn about human object recognition from studying the “simple” brains of pigeons. PMID:25352784

  18. Mechanisms of object recognition: what we have learned from pigeons.

    PubMed

    Soto, Fabian A; Wasserman, Edward A

    2014-01-01

    Behavioral studies of object recognition in pigeons have been conducted for 50 years, yielding a large body of data. Recent work has been directed toward synthesizing this evidence and understanding the visual, associative, and cognitive mechanisms that are involved. The outcome is that pigeons are likely to be the non-primate species for which the computational mechanisms of object recognition are best understood. Here, we review this research and suggest that a core set of mechanisms for object recognition might be present in all vertebrates, including pigeons and people, making pigeons an excellent candidate model to study the neural mechanisms of object recognition. Behavioral and computational evidence suggests that error-driven learning participates in object category learning by pigeons and people, and recent neuroscientific research suggests that the basal ganglia, which are homologous in these species, may implement error-driven learning of stimulus-response associations. Furthermore, learning of abstract category representations can be observed in pigeons and other vertebrates. Finally, there is evidence that feedforward visual processing, a central mechanism in models of object recognition in the primate ventral stream, plays a role in object recognition by pigeons. We also highlight differences between pigeons and people in object recognition abilities, and propose candidate adaptive specializations which may explain them, such as holistic face processing and rule-based category learning in primates. From a modern comparative perspective, such specializations are to be expected regardless of the model species under study. The fact that we have a good idea of which aspects of object recognition differ in people and pigeons should be seen as an advantage over other animal models. From this perspective, we suggest that there is much to learn about human object recognition from studying the "simple" brains of pigeons.

  19. Eye movements during object recognition in visual agnosia.

    PubMed

    Charles Leek, E; Patterson, Candy; Paul, Matthew A; Rafal, Robert; Cristino, Filipe

    2012-07-01

    This paper reports the first ever detailed study about eye movement patterns during single object recognition in visual agnosia. Eye movements were recorded in a patient with an integrative agnosic deficit during two recognition tasks: common object naming and novel object recognition memory. The patient showed normal directional biases in saccades and fixation dwell times in both tasks and was as likely as controls to fixate within object bounding contour regardless of recognition accuracy. In contrast, following initial saccades of similar amplitude to controls, the patient showed a bias for short saccades. In object naming, but not in recognition memory, the similarity of the spatial distributions of patient and control fixations was modulated by recognition accuracy. The study provides new evidence about how eye movements can be used to elucidate the functional impairments underlying object recognition deficits. We argue that the results reflect a breakdown in normal functional processes involved in the integration of shape information across object structure during the visual perception of shape.

  20. Multiple-View Object Recognition in Smart Camera Networks

    NASA Astrophysics Data System (ADS)

    Yang, Allen Y.; Maji, Subhransu; Christoudias, C. Mario; Darrell, Trevor; Malik, Jitendra; Sastry, S. Shankar

    We study object recognition in low-power, low-bandwidth smart camera networks. The ability to perform robust object recognition is crucial for applications such as visual surveillance to track and identify objects of interest, and overcome visual nuisances such as occlusion and pose variations between multiple camera views. To accommodate limited bandwidth between the cameras and the base-station computer, the method utilizes the available computational power on the smart sensors to locally extract SIFT-type image features to represent individual camera views. We show that between a network of cameras, high-dimensional SIFT histograms exhibit a joint sparse pattern corresponding to a set of shared features in 3-D. Such joint sparse patterns can be explicitly exploited to encode the distributed signal via random projections. At the network station, multiple decoding schemes are studied to simultaneously recover the multiple-view object features based on a distributed compressive sensing theory. The system has been implemented on the Berkeley CITRIC smart camera platform. The efficacy of the algorithm is validated through extensive simulation and experiment.
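
    A minimal sketch of the encoding side described here: each camera projects its high-dimensional, jointly sparse feature histogram through a random matrix and transmits only the short measurement vector. The seed-sharing convention, dimensions, and sparsity level are assumptions for the example, and the compressive-sensing decoder at the base station is not implemented.

        import numpy as np

        def random_projection_encoder(input_dim, output_dim, seed=0):
            """A fixed Gaussian random projection matrix; each smart camera can
            regenerate the same matrix from a shared seed, so only the short
            projected vector has to be transmitted to the base station."""
            rng = np.random.default_rng(seed)
            return rng.normal(scale=1.0 / np.sqrt(output_dim), size=(output_dim, input_dim))

        # Toy usage: a sparse 1000-bin "SIFT histogram" compressed to 100 measurements.
        hist = np.zeros(1000)
        hist[np.random.default_rng(2).choice(1000, size=20, replace=False)] = 1.0   # ~20 active features
        Phi = random_projection_encoder(input_dim=1000, output_dim=100)
        measurement = Phi @ hist          # what a camera would send
        print(measurement.shape)          # (100,)
        # At the base station, a sparse-recovery solver (e.g., L1 minimization) would
        # jointly decode the multiple-view histograms; that step is omitted here.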

  1. 'Breaking' position-invariant object recognition.

    PubMed

    Cox, David D; Meier, Philip; Oertelt, Nadja; DiCarlo, James J

    2005-09-01

    While it is often assumed that objects can be recognized irrespective of where they fall on the retina, little is known about the mechanisms underlying this ability. By exposing human subjects to an altered world where some objects systematically changed identity during the transient blindness that accompanies eye movements, we induced predictable object confusions across retinal positions, effectively 'breaking' position invariance. Thus, position invariance is not a rigid property of vision but is constantly adapting to the statistics of the environment.

  2. Real object use facilitates object recognition in semantic agnosia.

    PubMed

    Morady, Kamelia; Humphreys, Glyn W

    2009-01-01

    In the present paper we show that, in patients with poor semantic representations, the naming of real objects can improve when naming takes place after patients have been asked to use the objects, compared with when they name the objects either from vision or from touch alone, or together. In addition, the patients were strongly affected by action when required to name objects that were used correctly or incorrectly by the examiner. The data suggest that actions can be cued directly from sensory-motor associations, and that patients can then name on the basis of the evoked action.

  3. Category selectivity in human visual cortex: Beyond visual object recognition.

    PubMed

    Peelen, Marius V; Downing, Paul E

    2017-04-02

    Human ventral temporal cortex shows a categorical organization, with regions responding selectively to faces, bodies, tools, scenes, words, and other categories. Why is this? Traditional accounts explain category selectivity as arising within a hierarchical system dedicated to visual object recognition. For example, it has been proposed that category selectivity reflects the clustering of category-associated visual feature representations, or that it reflects category-specific computational algorithms needed to achieve view invariance. This visual object recognition framework has gained renewed interest with the success of deep neural network models trained to "recognize" objects: these hierarchical feed-forward networks show similarities to human visual cortex, including categorical separability. We argue that the object recognition framework is unlikely to fully account for category selectivity in visual cortex. Instead, we consider category selectivity in the context of other functions such as navigation, social cognition, tool use, and reading. Category-selective regions are activated during such tasks even in the absence of visual input and even in individuals with no prior visual experience. Further, they are engaged in close connections with broader domain-specific networks. Considering the diverse functions of these networks, category-selective regions likely encode their preferred stimuli in highly idiosyncratic formats; representations that are useful for navigation, social cognition, or reading are unlikely to be meaningfully similar to each other and to varying degrees may not be entirely visual. The demand for specific types of representations to support category-associated tasks may best account for category selectivity in visual cortex. This broader view invites new experimental and computational approaches.

  4. Recognition memory for object form and object location: an event-related potential study.

    PubMed

    Mecklinger, A; Meinshausen, R M

    1998-09-01

    In this study, the processes associated with retrieving object forms and object locations from working memory were examined with the use of simultaneously recorded event-related potential (ERP) activity. Subjects memorized object forms and their spatial locations and made either object-based or location-based recognition judgments. In Experiment 1, recognition performance was higher for object locations than for object forms. Old responses evoked more positive-going ERP activity between 0.3 and 1.8 sec poststimulus than did new responses. The topographic distribution of these old/new effects in the P300 time interval was task specific, with object-based recognition judgments being associated with anteriorly focused effects and location-based judgments with posteriorly focused effects. Late old/new effects were dominant at right frontal recordings. Using an interference paradigm, it was shown in Experiment 2 that visual representations were used to rehearse both object forms and object locations in working memory. The results of Experiment 3 indicated that the observed differential topographic distributions of the old/new effects in the P300 time interval are unlikely to reflect differences between easy and difficult recognition judgments. More specific effects were obtained for a subgroup of subjects for which the processing characteristics during location-based judgments presumably were similar to those in Experiment 1. These data, together with those from Experiment 1, indicate that different brain areas are engaged in retrieving object forms and object locations from working memory. Further analyses support the view that retrieval of object forms relies on conceptual semantic representation, whereas retrieving object locations is based on structural representations of spatial information. The effects in the later time intervals may play a functional role in post-retrieval processing, such as recollecting information from the study episode or other processes

  5. Category-Specificity in Visual Object Recognition

    ERIC Educational Resources Information Center

    Gerlach, Christian

    2009-01-01

    Are all categories of objects recognized in the same manner visually? Evidence from neuropsychology suggests they are not: some brain damaged patients are more impaired in recognizing natural objects than artefacts whereas others show the opposite impairment. Category-effects have also been demonstrated in neurologically intact subjects, but the…

  6. 3D object recognition in TOF data sets

    NASA Astrophysics Data System (ADS)

    Hess, Holger; Albrecht, Martin; Grothof, Markus; Hussmann, Stephan; Oikonomidis, Nikolaos; Schwarte, Rudolf

    2003-08-01

    In recent years, 3D vision systems based on the Time-Of-Flight (TOF) principle have gained more importance than Stereo Vision (SV). TOF offers direct depth-data acquisition, whereas SV involves a great amount of computational power for a comparable 3D data set. Due to the enormous progress in TOF techniques, 3D cameras can nowadays be manufactured and used for many practical applications. Hence there is a great demand for new accurate algorithms for 3D object recognition and classification. This paper presents a new strategy and algorithm designed for fast and solid object classification. A challenging example - accurate classification of a (half-) sphere - demonstrates the performance of the developed algorithm. Finally, the transition from a general model of the system to specific applications such as Intelligent Airbag Control and Robot Assistance in Surgery is introduced. The paper concludes with the current research results in the above mentioned fields.

  7. A Taxonomy of 3D Occluded Objects Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Soleimanizadeh, Shiva; Mohamad, Dzulkifli; Saba, Tanzila; Al-ghamdi, Jarallah Saleh

    2016-03-01

    The overall performance of object recognition techniques under different conditions (e.g., occlusion, viewpoint, and illumination) has improved significantly in recent years. New applications and hardware have shifted towards digital photography and digital media, and the accompanying increase in Internet usage requires object recognition for certain applications, particularly for occluded objects. Occlusion, however, remains an unhandled issue that complicates the relations between feature points extracted from an image, and research continues on efficient techniques and easy-to-use algorithms that would help users source images and overcome the problems occlusion raises. The aim of this research is to review algorithms for recognizing occluded objects and to examine their pros and cons in solving the occlusion problem, focusing on the features extracted from an occluded object to distinguish it from other co-existing objects and on the newer techniques that can differentiate the occluded fragments and sections inside an image.

  8. The subjective experience of object recognition: comparing metacognition for object detection and object categorization.

    PubMed

    Meuwese, Julia D I; van Loon, Anouk M; Lamme, Victor A F; Fahrenfort, Johannes J

    2014-05-01

    Perceptual decisions seem to be made automatically and almost instantly. Constructing a unitary subjective conscious experience takes more time. For example, when trying to avoid a collision with a car on a foggy road you brake or steer away in a reflex, before realizing you were in a near accident. This subjective aspect of object recognition has been given little attention. We used metacognition (assessed with confidence ratings) to measure subjective experience during object detection and object categorization for degraded and masked objects, while objective performance was matched. Metacognition was equal for degraded and masked objects, but categorization led to higher metacognition than did detection. This effect turned out to be driven by a difference in metacognition for correct rejection trials, which seemed to be caused by an asymmetry of the distractor stimulus: It does not contain object-related information in the detection task, whereas it does contain such information in the categorization task. Strikingly, this asymmetry selectively impacted metacognitive ability when objective performance was matched. This finding reveals a fundamental difference in how humans reflect versus act on information: When matching the amount of information required to perform two tasks at some objective level of accuracy (acting), metacognitive ability (reflecting) is still better in tasks that rely on positive evidence (categorization) than in tasks that rely more strongly on an absence of evidence (detection).

  9. Orientation-invariant object recognition: evidence from repetition blindness.

    PubMed

    Harris, Irina M; Dux, Paul E

    2005-02-01

    The question of whether object recognition is orientation-invariant or orientation-dependent was investigated using a repetition blindness (RB) paradigm. In RB, the second occurrence of a repeated stimulus is less likely to be reported, compared to the occurrence of a different stimulus, if it occurs within a short time of the first presentation. This failure is usually interpreted as a difficulty in assigning two separate episodic tokens to the same visual type. Thus, RB can provide useful information about which representations are treated as the same by the visual system. Two experiments tested whether RB occurs for repeated objects that were either in identical orientations, or differed by 30, 60, 90, or 180 degrees. Significant RB was found for all orientation differences, consistent with the existence of orientation-invariant object representations. However, under some circumstances, RB was reduced or even eliminated when the repeated object was rotated by 180 degrees, suggesting easier individuation of the repeated objects in this case. A third experiment confirmed that the upside-down orientation is processed more easily than other rotated orientations. The results indicate that, although object identity can be determined independently of orientation, orientation plays an important role in establishing distinct episodic representations of a repeated object, thus enabling one to report them as separate events.

  10. Determinants of novel object and location recognition during development.

    PubMed

    Jablonski, S A; Schreiber, W B; Westbrook, S R; Brennan, L E; Stanton, M E

    2013-11-01

    In the novel object recognition (OR) paradigm, rats are placed in an arena where they encounter two sample objects during a familiarization phase. A few minutes later, they are returned to the same arena and are presented with a familiar object and a novel object. The object location recognition (OL) variant involves the same familiarization procedure but during testing one of the familiar objects is placed in a novel location. Normal adult rats are able to perform both the OR and OL tasks, as indicated by enhanced exploration of the novel vs. the familiar test item. Rats with hippocampal lesions perform the OR but not OL task indicating a role of spatial memory in OL. Recently, these tasks have been used to study the ontogeny of spatial memory but the literature has yielded conflicting results. The current experiments add to this literature by: (1) behaviorally characterizing these paradigms in postnatal day (PD) 21, 26 and 31-day-old rats; (2) examining the role of NMDA systems in OR vs. OL; and (3) investigating the effects of neonatal alcohol exposure on both tasks. Results indicate that normal-developing rats are able to perform OR and OL by PD21, with greater novelty exploration in the OR task at each age. Second, memory acquisition in the OL but not OR task requires NMDA receptor function in juvenile rats [corrected]. Lastly, neonatal alcohol exposure does not disrupt performance in either task. Implications for the ontogeny of incidental spatial learning and its disruption by developmental alcohol exposure are discussed.

  11. Learning Distance Functions for Exemplar-Based Object Recognition

    DTIC Science & Technology

    2007-01-01

    This thesis investigates an exemplar-based approach to object recognition that learns, on an image-by-image basis, the relative importance of patch...this thesis is a method for learning a set-to-set distance function specific to each training image and demonstrating the use of these functions for...Science University of California, Berkeley Professor Jitendra Malik, Chair This thesis investigates an exemplar-based approach to object recognition that

  12. Comparing object recognition from binary and bipolar edge features

    PubMed Central

    Jung, Jae-Hyun; Pu, Tian; Peli, Eli

    2017-01-01

    Edges derived from abrupt luminance changes in images carry essential information for object recognition. Typical binary edge images (black edges on white background or white edges on black background) have been used to represent features (edges and cusps) in scenes. However, the polarity of cusps and edges may contain important depth information (depth from shading) which is lost in the binary edge representation. This depth information may be restored, to some degree, using bipolar edges. We compared the recognition rates of 26 subjects for 16 images rendered as either binary or bipolar edges. Object recognition rates were higher with bipolar edges and the improvement was significant in scenes with complex backgrounds.
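
    To make the binary-versus-bipolar distinction concrete, the sketch below derives both representations from the sign of a Laplacian-of-Gaussian response using SciPy. This is an illustrative stand-in for however the study's edge images were actually produced; the sigma and threshold values are assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        def edge_maps(image, sigma=2.0, thresh=0.01):
            """Binary vs. bipolar edge representations from a Laplacian-of-Gaussian
            response: the binary map keeps only edge presence, the bipolar map also
            keeps the sign (light-to-dark vs. dark-to-light), i.e. the polarity the
            record argues carries depth-from-shading information."""
            log = gaussian_laplace(image.astype(float), sigma=sigma)
            strong = np.abs(log) > thresh
            binary = strong.astype(np.int8)            # 0 / 1
            bipolar = np.sign(log) * strong            # -1 / 0 / +1
            return binary, bipolar

        # Toy usage on a synthetic luminance ramp with a brighter square.
        img = np.tile(np.linspace(0, 1, 64), (64, 1))
        img[20:40, 20:40] += 0.5
        b, bp = edge_maps(img)
        print(np.unique(b), np.unique(bp))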

  13. Object Recognition and Random Image Structure Evolution

    ERIC Educational Resources Information Center

    Sadr, Jvid; Sinha, Pawan

    2004-01-01

    We present a technique called Random Image Structure Evolution (RISE) for use in experimental investigations of high-level visual perception. Potential applications of RISE include the quantitative measurement of perceptual hysteresis and priming, the study of the neural substrates of object perception, and the assessment and detection of subtle…

  14. Unposed Object Recognition using an Active Approach

    DTIC Science & Technology

    2013-02-01

    a transition between two different visual aspects V1 and V2. The human brain stores pose in a similar manner. Neurophysiological evidence sug...region that is not on the table, as estimated using the depth information provided by the Kinect. The size of the object was normalized in the same

  15. Changes in functional connectivity support conscious object recognition.

    PubMed

    Imamoglu, Fatma; Kahnt, Thorsten; Koch, Christof; Haynes, John-Dylan

    2012-12-01

    What are the brain mechanisms that mediate conscious object recognition? To investigate this question, it is essential to distinguish between brain processes that cause conscious recognition of a stimulus from other correlates of its sensory processing. Previous fMRI studies have identified large-scale brain activity ranging from striate to high-level sensory and prefrontal regions associated with conscious visual perception or recognition. However, the possible role of changes in connectivity during conscious perception between these regions has only rarely been studied. Here, we used fMRI and connectivity analyses, together with 120 custom-generated, two-tone, Mooney images to directly assess whether conscious recognition of an object is accompanied by a dynamical change in the functional coupling between extrastriate cortex and prefrontal areas. We compared recognizing an object versus not recognizing it in 19 naïve subjects using two different response modalities. We find that connectivity between the extrastriate cortex and the dorsolateral prefrontal cortex (DLPFC) increases when objects are consciously recognized. This interaction was independent of the response modality used to report conscious recognition. Furthermore, computing the difference in Granger causality between recognized and not recognized conditions reveals stronger feedforward connectivity than feedback connectivity when subjects recognized the objects. We suggest that frontal and visual brain regions are part of a functional network that supports conscious object recognition by changes in functional connectivity.

  16. Multiple Kernel Learning for Visual Object Recognition: A Review.

    PubMed

    Bucak, Serhat S; Rong Jin; Jain, Anil K

    2014-07-01

    Multiple kernel learning (MKL) is a principled approach for selecting and combining kernels for a given recognition task. A number of studies have shown that MKL is a useful tool for object recognition, where each image is represented by multiple sets of features and MKL is applied to combine different feature sets. We review the state-of-the-art for MKL, including different formulations and algorithms for solving the related optimization problems, with the focus on their applications to object recognition. One dilemma faced by practitioners interested in using MKL for object recognition is that different studies often provide conflicting results about the effectiveness and efficiency of MKL. To resolve this, we conduct extensive experiments on standard datasets to evaluate various approaches to MKL for object recognition. We argue that the seemingly contradictory conclusions offered by studies are due to different experimental setups. The conclusions of our study are: (i) given a sufficient number of training examples and feature/kernel types, MKL is more effective for object recognition than simple kernel combination (e.g., choosing the best performing kernel or average of kernels); and (ii) among the various approaches proposed for MKL, the sequential minimal optimization, semi-infinite programming, and level method based ones are computationally most efficient.
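
    The simplest baseline the review contrasts MKL against is a fixed combination of kernels. The scikit-learn sketch below builds two base kernels from two hypothetical feature types, averages them, and trains a precomputed-kernel SVM; the data, kernel choices, and 0.5/0.5 weights are assumptions, and no actual MKL weight learning is performed.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

        # Toy data: two feature "types" (e.g., color and shape descriptors) per image.
        rng = np.random.default_rng(0)
        X_color, X_shape = rng.random((60, 16)), rng.random((60, 32))
        y = rng.integers(0, 2, size=60)

        # One base kernel per feature type; the simplest baseline mentioned in the
        # review is a fixed convex combination (here an unweighted average).
        K_color = rbf_kernel(X_color, gamma=0.5)
        K_shape = polynomial_kernel(X_shape, degree=2)
        K_avg = 0.5 * K_color + 0.5 * K_shape

        clf = SVC(kernel="precomputed").fit(K_avg, y)
        print(clf.score(K_avg, y))   # training accuracy on the toy data
        # Proper MKL (SMO-, SILP-, or level-method-based) would learn the combination
        # weights jointly with the SVM instead of fixing them at 0.5/0.5.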

  17. Leveraging Cognitive Context for Object Recognition

    DTIC Science & Technology

    2014-06-01

    when looking in the kitchen, context may suggest related concepts such as apples or lemons. Any ambiguities that might arise from other similar...small and round), it might also look similar to other known small round objects (e.g., lemon). Therefore, while our classification decision might... [Recovered table rows, per-trial labels with number correct: apple A A A A A (5); raisins R R R R A (4); banana B B B B B (5); lemon L L L L L (5); coyote R C R C C (3); wire W W W W W (5).] Table 1. Results of

  18. Object similarity affects the perceptual strategy underlying invariant visual object recognition in rats.

    PubMed

    Rosselli, Federica B; Alemi, Alireza; Ansuini, Alessio; Zoccolan, Davide

    2015-01-01

    In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning.

  19. Object similarity affects the perceptual strategy underlying invariant visual object recognition in rats

    PubMed Central

    Rosselli, Federica B.; Alemi, Alireza; Ansuini, Alessio; Zoccolan, Davide

    2015-01-01

    In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning. PMID:25814936

  20. Induced gamma band responses predict recognition delays during object identification.

    PubMed

    Martinovic, Jasna; Gruber, Thomas; Müller, Matthias M

    2007-06-01

    Neural mechanisms of object recognition seem to rely on activity of distributed neural assemblies coordinated by synchronous firing in the gamma-band range (>20 Hz). In the present electroencephalogram (EEG) study, we investigated induced gamma band activity during the naming of line drawings of upright objects and objects rotated in the image plane. Such plane-rotation paradigms elicit view-dependent processing, leading to delays in recognition of disoriented objects. Our behavioral results showed reaction time delays for rotated, as opposed to upright, images. These delays were accompanied by delays in the peak latency of induced gamma band responses (GBRs), in the absence of any effects on other measures of EEG activity. The latency of the induced GBRs has thus, for the first time, been selectively modulated by an experimental manipulation that delayed recognition. This finding indicates that induced GBRs have a genuine role as neural markers of late representational processes during object recognition. In concordance with the view that object recognition is achieved through dynamic learning processes, we propose that induced gamma band activity could be one of the possible cortical markers of such dynamic object coding.

  1. Robust object recognition under partial occlusions using NMF.

    PubMed

    Soukup, Daniel; Bajla, Ivan

    2008-01-01

    In recent years, nonnegative matrix factorization (NMF) methods for reduced image data representation have attracted the attention of the computer vision community. These methods are considered a convenient part-based representation of image data for recognition tasks with occluded objects. A novel modification for NMF recognition tasks is proposed that utilizes the matrix sparseness control introduced by Hoyer. We have analyzed the influence of sparseness on recognition rates (RRs) for various dimensions of subspaces generated for two image databases, the ORL face database and the USPS handwritten digit database. We have studied the behavior of four types of distances between a projected unknown image object and feature vectors in NMF subspaces generated for training data. One of these metrics is also a novel proposal of this work. In the recognition phase, partial occlusions in the test images have been modeled by putting two randomly large, randomly positioned black rectangles into each test image.
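
    A rough sketch of the recognition pipeline the record describes, using scikit-learn's plain NMF: learn a part-based subspace from training images, project a (possibly occluded) test image, and classify by the nearest coefficient vector. Hoyer's explicit sparseness control and the four distance variants studied in the paper are not reproduced; the image dimensions, rectangle occlusion, and Euclidean metric are assumptions.

        import numpy as np
        from sklearn.decomposition import NMF

        # Toy "image database": 100 non-negative images of 28 x 28 pixels, one per row.
        rng = np.random.default_rng(0)
        train = rng.random((100, 28 * 28))
        labels = rng.integers(0, 10, size=100)

        # Learn a part-based subspace. (Plain scikit-learn NMF does not expose the
        # Hoyer sparseness constraint used in the record.)
        model = NMF(n_components=25, init="nndsvda", max_iter=500, random_state=0)
        H_train = model.fit_transform(train)              # coefficients of training images

        def classify(image, occlusion=None):
            """Project a (possibly occluded) test image into the NMF subspace and
            return the label of the nearest training coefficient vector."""
            img = image.copy()
            if occlusion is not None:                     # black rectangle, as in the record's tests
                r0, r1, c0, c1 = occlusion
                img.reshape(28, 28)[r0:r1, c0:c1] = 0.0
            h = model.transform(img.reshape(1, -1))
            return labels[np.argmin(np.linalg.norm(H_train - h, axis=1))]

        print(classify(train[3], occlusion=(5, 15, 5, 15)))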

  2. A hippocampal signature of perceptual learning in object recognition.

    PubMed

    Guggenmos, Matthias; Rothkirch, Marcus; Obermayer, Klaus; Haynes, John-Dylan; Sterzer, Philipp

    2015-04-01

    Perceptual learning is the improvement in perceptual performance through training or exposure. Here, we used fMRI before and after extensive behavioral training to investigate the effects of perceptual learning on the recognition of objects under challenging viewing conditions. Objects belonged either to trained or untrained categories. Trained categories were further subdivided into trained and untrained exemplars and were coupled with high or low monetary rewards during training. After a 3-day training, object recognition was markedly improved. Although there was a considerable transfer of learning to untrained exemplars within categories, an enhancing effect of reward reinforcement was specific to trained exemplars. fMRI showed that hippocampus responses to both trained and untrained exemplars of trained categories were enhanced by perceptual learning and correlated with the effect of reward reinforcement. Our results suggest a key role of hippocampus in object recognition after perceptual learning.

  3. The Neural Regions Sustaining Episodic Encoding and Recognition of Objects

    ERIC Educational Resources Information Center

    Hofer, Alex; Siedentopf, Christian M.; Ischebeck, Anja; Rettenbacher, Maria A.; Widschwendter, Christian G.; Verius, Michael; Golaszewski, Stefan M.; Koppelstaetter, Florian; Felber, Stephan; Wolfgang Fleischhacker, W.

    2007-01-01

    In this functional MRI experiment, encoding of objects was associated with activation in left ventrolateral prefrontal/insular and right dorsolateral prefrontal and fusiform regions as well as in the left putamen. By contrast, correct recognition of previously learned objects (R judgments) produced activation in left superior frontal, bilateral…

  4. Spontaneous Object Recognition Memory in Aged Rats: Complexity versus Similarity

    ERIC Educational Resources Information Center

    Gamiz, Fernando; Gallo, Milagros

    2012-01-01

    Previous work on the effect of aging on spontaneous object recognition (SOR) memory tasks in rats has yielded controversial results. Although the results at long-retention intervals are consistent, conflicting results have been reported at shorter delays. We have assessed the potential relevance of the type of object used in the performance of…

  5. Object Recognition with Severe Spatial Deficits in Williams Syndrome: Sparing and Breakdown

    ERIC Educational Resources Information Center

    Landau, Barbara; Hoffman, James E.; Kurz, Nicole

    2006-01-01

    Williams syndrome (WS) is a rare genetic disorder that results in severe visual-spatial cognitive deficits coupled with relative sparing in language, face recognition, and certain aspects of motion processing. Here, we look for evidence for sparing or impairment in another cognitive system--object recognition. Children with WS, normal mental-age…

  6. Picture object recognition in an American black bear (Ursus americanus).

    PubMed

    Johnson-Ulrich, Zoe; Vonk, Jennifer; Humbyrd, Mary; Crowley, Marilyn; Wojtkowski, Ela; Yates, Florence; Allard, Stephanie

    2016-11-01

    Many animals have been tested for conceptual discriminations using two-dimensional images as stimuli, and many of these species appear to transfer knowledge from 2D images to analogous real life objects. We tested an American black bear for picture-object recognition using a two alternative forced choice task. She was presented with four unique sets of objects and corresponding pictures. The bear showed generalization from both objects to pictures and pictures to objects; however, her transfer was superior when transferring from real objects to pictures, suggesting that bears can recognize visual features from real objects within photographic images during discriminations.

  7. Pattern recognition systems and procedures

    NASA Technical Reports Server (NTRS)

    Nelson, G. D.; Serreyn, D. V.

    1972-01-01

    The objectives of the pattern recognition tasks are to develop (1) a man-machine interactive data processing system; and (2) procedures to determine effective features as a function of time for crops and soils. The signal analysis and dissemination equipment, SADE, is being developed as a man-machine interactive data processing system. SADE will provide imagery and multi-channel analog tape inputs for digitation and a color display of the data. SADE is an essential tool to aid in the investigation to determine useful features as a function of time for crops and soils. Four related studies are: (1) reliability of the multivariate Gaussian assumption; (2) usefulness of transforming features with regard to the classifier probability of error; (3) advantage of selecting quantizer parameters to minimize the classifier probability of error; and (4) advantage of using contextual data. The study of transformation of variables (features), especially those experimental studies which can be completed with the SADE system, will be done.

  8. Improving human object recognition performance using video enhancement techniques

    NASA Astrophysics Data System (ADS)

    Whitman, Lucy S.; Lewis, Colin; Oakley, John P.

    2004-12-01

    Atmospheric scattering causes significant degradation in the quality of video images, particularly when imaging over long distances. The principal problem is the reduction in contrast due to scattered light. It is known that when the scattering particles are not too large compared with the imaging wavelength (i.e. Mie scattering) then high spatial resolution information may be contained within a low-contrast image. Unfortunately this information is not easily perceived by a human observer, particularly when using a standard video monitor. A secondary problem is the difficulty of achieving a sharp focus since automatic focus techniques tend to fail in such conditions. Recently several commercial colour video processing systems have become available. These systems use various techniques to improve image quality in low contrast conditions whilst retaining colour content. These systems produce improvements in subjective image quality in some situations, particularly in conditions of haze and light fog. There is also some evidence that video enhancement leads to improved ATR performance when used as a pre-processing stage. Psychological literature indicates that low contrast levels generally lead to a reduction in the performance of human observers in carrying out simple visual tasks. The aim of this paper is to present the results of an empirical study on object recognition in adverse viewing conditions. The chosen visual task was vehicle number plate recognition at long ranges (500 m and beyond). Two different commercial video enhancement systems are evaluated using the same protocol. The results show an increase in effective range with some differences between the different enhancement systems.

  9. Object recognition and pose estimation of planar objects from range data

    NASA Technical Reports Server (NTRS)

    Pendleton, Thomas W.; Chien, Chiun Hong; Littlefield, Mark L.; Magee, Michael

    1994-01-01

    The Extravehicular Activity Helper/Retriever (EVAHR) is a robotic device currently under development at the NASA Johnson Space Center that is designed to fetch objects or to assist in retrieving an astronaut who may have become inadvertently de-tethered. The EVAHR will be required to exhibit a high degree of intelligent autonomous operation and will base much of its reasoning upon information obtained from one or more three-dimensional sensors that it will carry and control. At the highest level of visual cognition and reasoning, the EVAHR will be required to detect objects, recognize them, and estimate their spatial orientation and location. The recognition phase and estimation of spatial pose will depend on the ability of the vision system to reliably extract geometric features of the objects such as whether the surface topologies observed are planar or curved and the spatial relationships between the component surfaces. In order to achieve these tasks, three-dimensional sensing of the operational environment and objects in the environment will therefore be essential. One of the sensors being considered to provide image data for object recognition and pose estimation is a phase-shift laser scanner. The characteristics of the data provided by this scanner have been studied and algorithms have been developed for segmenting range images into planar surfaces, extracting basic features such as surface area, and recognizing the object based on the characteristics of extracted features. Also, an approach has been developed for estimating the spatial orientation and location of the recognized object based on orientations of extracted planes and their intersection points. This paper presents some of the algorithms that have been developed for the purpose of recognizing and estimating the pose of objects as viewed by the laser scanner, and characterizes the desirability and utility of these algorithms within the context of the scanner itself, considering data quality and
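
    Plane extraction is the geometric core of the described pipeline. The sketch below fits a least-squares plane to a set of range points via SVD, returning the unit normal and centroid from which surface orientation (and, given several planes, their intersection points) can be derived; the synthetic plane and noise level are assumptions.

        import numpy as np

        def fit_plane(points):
            """Least-squares plane through a set of 3-D range points.

            Returns (unit normal, centroid); the normal is the right singular
            vector associated with the smallest singular value of the centered
            point set."""
            pts = np.asarray(points, dtype=float)
            centroid = pts.mean(axis=0)
            _, _, vt = np.linalg.svd(pts - centroid)
            normal = vt[-1]
            return normal / np.linalg.norm(normal), centroid

        # Toy usage: noisy samples of the plane z = 0.2x + 0.1y + 1.
        rng = np.random.default_rng(0)
        xy = rng.random((500, 2))
        z = 0.2 * xy[:, 0] + 0.1 * xy[:, 1] + 1 + 0.001 * rng.standard_normal(500)
        normal, c = fit_plane(np.column_stack([xy, z]))
        print(normal)   # close to the normalized (-0.2, -0.1, 1), up to sign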

  10. Object Recognition Method of Space Debris Tracking Image Sequence

    NASA Astrophysics Data System (ADS)

    Chen, Zhang; Yi-ding, Ping

    2016-07-01

    In order to strengthen the capability of space debris detection, automated optical observation is becoming more and more popular. Thus, fully unattended automatic object recognition urgently needs to be studied. Open-loop tracking, which guides the telescope using only historical orbital elements, is a simple and robust way to track space debris. Based on an analysis of the point distribution characteristics of an object's open-loop tracking image sequence in pixel space, this paper proposes using a cluster identification method for automatic space debris recognition, and compares three different algorithms.
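
    A minimal sketch of cluster identification in pixel space: under open-loop tracking, the debris detections pile up near a fixed pixel location from frame to frame, while stars drift and clutter scatters, so density-based clustering of detection centroids isolates the object. DBSCAN is used here only as a stand-in (the record compares three algorithms not named in the abstract), and all coordinates, thresholds, and counts are assumptions.

        import numpy as np
        from sklearn.cluster import DBSCAN

        # Toy detections from an open-loop tracking sequence: the tracked debris stays
        # near a fixed pixel position frame after frame, while background stars drift
        # through the field and false detections scatter.
        rng = np.random.default_rng(0)
        debris = rng.normal(loc=(512, 384), scale=1.5, size=(30, 2))                 # tight clump
        stars = np.column_stack([np.linspace(100, 900, 30), np.full(30, 200)])       # drifting trail
        noise = rng.uniform(0, 1024, size=(40, 2))                                   # clutter
        detections = np.vstack([debris, stars, noise])

        # Cluster detections in pixel space; the largest compact cluster is taken as
        # the tracked object.
        labels = DBSCAN(eps=5.0, min_samples=10).fit(detections).labels_
        best = max(set(labels) - {-1}, key=lambda k: (labels == k).sum())
        print("object centroid:", detections[labels == best].mean(axis=0))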

  11. Visual Exploration and Object Recognition by Lattice Deformation

    PubMed Central

    Melloni, Lucia; Mureşan, Raul C.

    2011-01-01

    Mechanisms of explicit object recognition are often difficult to investigate and require stimuli with controlled features whose expression can be manipulated in a precise quantitative fashion. Here, we developed a novel method (called “Dots”), for generating visual stimuli, which is based on the progressive deformation of a regular lattice of dots, driven by local contour information from images of objects. By applying progressively larger deformation to the lattice, the latter conveys progressively more information about the target object. Stimuli generated with the presented method enable a precise control of object-related information content while preserving low-level image statistics, globally, and affecting them only little, locally. We show that such stimuli are useful for investigating object recognition under a naturalistic setting – free visual exploration – enabling a clear dissociation between object detection and explicit recognition. Using the introduced stimuli, we show that top-down modulation induced by previous exposure to target objects can greatly influence perceptual decisions, lowering perceptual thresholds not only for object recognition but also for object detection (visual hysteresis). Visual hysteresis is target-specific, its expression and magnitude depending on the identity of individual objects. Relying on the particular features of dot stimuli and on eye-tracking measurements, we further demonstrate that top-down processes guide visual exploration, controlling how visual information is integrated by successive fixations. Prior knowledge about objects can guide saccades/fixations to sample locations that are supposed to be highly informative, even when the actual information is missing from those locations in the stimulus. The duration of individual fixations is modulated by the novelty and difficulty of the stimulus, likely reflecting cognitive demand. PMID:21818397

  12. Image-based object recognition in man, monkey and machine.

    PubMed

    Tarr, M J; Bülthoff, H H

    1998-07-01

    Theories of visual object recognition must solve the problem of recognizing 3D objects given that perceivers only receive 2D patterns of light on their retinae. Recent findings from human psychophysics, neurophysiology and machine vision provide converging evidence for 'image-based' models in which objects are represented as collections of viewpoint-specific local features. This approach is contrasted with 'structural-description' models in which objects are represented as configurations of 3D volumes or parts. We then review recent behavioral results that address the biological plausibility of both approaches, as well as some of their computational advantages and limitations. We conclude that, although the image-based approach holds great promise, it has potential pitfalls that may be best overcome by including structural information. Thus, the most viable model of object recognition may be one that incorporates the most appealing aspects of both image-based and structural-description theories.

  13. Three-dimensional object rotation-tolerant recognition for integral imaging using synthetic discriminant function

    NASA Astrophysics Data System (ADS)

    Hao, Jinbo; Wang, Xiaorui; Zhang, Jianqi; Xu, Yin

    2013-04-01

    This paper presents a novel approach to three-dimensional object rotation-tolerant recognition that combines the merits of Integral Imaging (II) and the Synthetic Discriminant Function (SDF). The SDF is designed for distortion-tolerant filtering and recognition, and we use it for three-dimensional (3-D) rotation-tolerant recognition with an II system. By exploiting the high correlation among the elemental images of II, the approach not only realizes 3-D rotation-tolerant recognition but also reduces computational complexity. Its correctness has been validated by experimental results.

  14. Three-dimensional object recognition using similar triangles and decision trees

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly

    1993-01-01

    A system, TRIDEC, that is capable of distinguishing between a set of objects despite changes in the objects' positions in the input field, their size, or their rotational orientation in 3D space is described. TRIDEC combines very simple yet effective features with the classification capabilities of inductive decision tree methods. The feature vector is a list of all similar triangles defined by connecting all combinations of three pixels in a coarse coded 127 x 127 pixel input field. The classification is accomplished by building a decision tree using the information provided from a limited number of translated, scaled, and rotated samples. Simulation results are presented which show that TRIDEC achieves 94 percent recognition accuracy in the 2D invariant object recognition domain and 98 percent recognition accuracy in the 3D invariant object recognition domain after training on only a small sample of transformed views of the objects.
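
    As a hedged illustration of the TRIDEC-style feature, the sketch below summarizes all triangles formed by triples of "on" pixels through a histogram over their two smallest interior angles (which are translation-, scale-, and rotation-invariant) and feeds that histogram to a decision tree. The binning scheme, classifier settings, and toy data are assumptions, not the original system.

    import numpy as np
    from itertools import combinations
    from sklearn.tree import DecisionTreeClassifier

    def triangle_histogram(points, bins=8):
        """Histogram of similarity classes of all triangles formed by 2-D points."""
        hist = np.zeros((bins, bins))
        for a, b, c in combinations(points, 3):
            sides = np.array([np.linalg.norm(b - c),
                              np.linalg.norm(a - c),
                              np.linalg.norm(a - b)])
            if np.any(sides == 0):
                continue
            # Interior angles via the law of cosines.
            cos = np.array([
                (sides[1]**2 + sides[2]**2 - sides[0]**2) / (2 * sides[1] * sides[2]),
                (sides[0]**2 + sides[2]**2 - sides[1]**2) / (2 * sides[0] * sides[2]),
                (sides[0]**2 + sides[1]**2 - sides[2]**2) / (2 * sides[0] * sides[1]),
            ])
            angles = np.sort(np.arccos(np.clip(cos, -1, 1)))[:2]  # two smallest angles
            i, j = (angles / (np.pi / 2) * bins).astype(int).clip(0, bins - 1)
            hist[i, j] += 1
        return hist.ravel() / max(hist.sum(), 1)

    # Toy training data: each sample is a small set of 2-D feature points per view.
    rng = np.random.default_rng(1)
    views = [rng.uniform(0, 127, size=(8, 2)) for _ in range(20)]
    X = np.array([triangle_histogram(v) for v in views])
    y = rng.integers(0, 2, size=20)                  # placeholder class labels
    clf = DecisionTreeClassifier().fit(X, y)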

  15. 3-D object recognition using 2-D views.

    PubMed

    Li, Wenjing; Bebis, George; Bourbakis, Nikolaos G

    2008-11-01

    We consider the problem of recognizing 3-D objects from 2-D images using geometric models and assuming different viewing angles and positions. Our goal is to recognize and localize instances of specific objects (i.e., model-based) in a scene. This is in contrast to category-based object recognition methods where the goal is to search for instances of objects that belong to a certain visual category (e.g., faces or cars). The key contribution of our work is improving 3-D object recognition by integrating Algebraic Functions of Views (AFoVs), a powerful framework for predicting the geometric appearance of an object due to viewpoint changes, with indexing and learning. During training, we compute the space of views that groups of object features can produce under the assumption of 3-D linear transformations, by combining a small number of reference views that contain the object features using AFoVs. Unrealistic views (e.g., due to the assumption of 3-D linear transformations) are eliminated by imposing a pair of rigidity constraints based on knowledge of the transformation between the reference views of the object. To represent the space of views that an object can produce compactly while allowing efficient hypothesis generation during recognition, we propose combining indexing with learning in two stages. In the first stage, we sample the space of views of an object sparsely and represent information about the samples using indexing. In the second stage, we build probabilistic models of shape appearance by sampling the space of views of the object densely and learning the manifold formed by the samples. Learning employs the Expectation-Maximization (EM) algorithm and takes place in a "universal," lower-dimensional, space computed through Random Projection (RP). During recognition, we extract groups of point features from the scene and we use indexing to retrieve the most feasible model groups that might have produced them (i.e., hypothesis generation). The likelihood

  16. Orientation-Invariant Object Recognition: Evidence from Repetition Blindness

    ERIC Educational Resources Information Center

    Harris, Irina M.; Dux, Paul E.

    2005-01-01

    The question of whether object recognition is orientation-invariant or orientation-dependent was investigated using a repetition blindness (RB) paradigm. In RB, the second occurrence of a repeated stimulus is less likely to be reported, compared to the occurrence of a different stimulus, if it occurs within a short time of the first presentation.…

  17. Computing with Connections in Visual Recognition of Origami Objects.

    ERIC Educational Resources Information Center

    Sabbah, Daniel

    1985-01-01

    Summarizes an initial foray in tackling artificial intelligence problems using a connectionist approach. The task chosen is visual recognition of Origami objects, and the questions answered are how to construct a connectionist network to represent and recognize projected Origami line drawings and the advantages such an approach would have. (30…

  18. High-speed optical object recognition processor with massive holographic memory

    NASA Astrophysics Data System (ADS)

    Chao, Tien-Hsin; Zhou, Hanying; Reyes, George F.

    2002-09-01

    Real-time object recognition using a compact grayscale optical correlator will be introduced. A holographic memory module for storing a large bank of optimum correlation filters, which accommodates the large data throughput rates needed for many real-world applications, has also been developed. The system architecture of the optical processor and the holographic memory will be presented. Application examples of this object recognition technology will also be demonstrated.

  19. Speckle-learning-based object recognition through scattering media.

    PubMed

    Ando, Takamasa; Horisaki, Ryoichi; Tanida, Jun

    2015-12-28

    We experimentally demonstrated object recognition through scattering media based on direct machine learning of a number of speckle intensity images. In the experiments, speckle intensity images of amplitude or phase objects on a spatial light modulator between scattering plates were captured by a camera. We used the support vector machine for binary classification of the captured speckle intensity images of face and non-face data. The experimental results showed that speckles are sufficient for machine learning.
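
    A minimal digital sketch of the speckle-learning idea: flatten the captured speckle intensity images and train a binary support vector machine (face vs. non-face). The random data and image size below are placeholders, not the authors' optical setup.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    speckles = rng.random((200, 64 * 64))        # stand-in for flattened speckle images
    labels = rng.integers(0, 2, size=200)        # 1 = face object, 0 = non-face

    X_train, X_test, y_train, y_test = train_test_split(
        speckles, labels, test_size=0.25, random_state=0)
    clf = SVC(kernel="linear").fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))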

  20. Learning Distance Functions for Exemplar-Based Object Recognition

    DTIC Science & Technology

    2007-08-08

    This thesis investigates an exemplar-based approach to object recognition that learns, on an image-by-image basis, the relative... contribution of this thesis is a method for learning a set-to-set distance function specific to each training image and demonstrating the use of these...

  1. Nicotine Administration Attenuates Methamphetamine-Induced Novel Object Recognition Deficits

    PubMed Central

    Vieira-Brock, Paula L.; McFadden, Lisa M.; Nielsen, Shannon M.; Smith, Misty D.; Hanson, Glen R.

    2015-01-01

    Background: Previous studies have demonstrated that methamphetamine abuse leads to memory deficits and these are associated with relapse. Furthermore, extensive evidence indicates that nicotine prevents and/or improves memory deficits in different models of cognitive dysfunction and these nicotinic effects might be mediated by hippocampal or cortical nicotinic acetylcholine receptors. The present study investigated whether nicotine attenuates methamphetamine-induced novel object recognition deficits in rats and explored potential underlying mechanisms. Methods: Adolescent or adult male Sprague-Dawley rats received either nicotine water (10–75 μg/mL) or tap water for several weeks. Methamphetamine (4×7.5mg/kg/injection) or saline was administered either before or after chronic nicotine exposure. Novel object recognition was evaluated 6 days after methamphetamine or saline. Serotonin transporter function and density and α4β2 nicotinic acetylcholine receptor density were assessed on the following day. Results: Chronic nicotine intake via drinking water beginning during either adolescence or adulthood attenuated the novel object recognition deficits caused by a high-dose methamphetamine administration. Similarly, nicotine attenuated methamphetamine-induced deficits in novel object recognition when administered after methamphetamine treatment. However, nicotine did not attenuate the serotonergic deficits caused by methamphetamine in adults. Conversely, nicotine attenuated methamphetamine-induced deficits in α4β2 nicotinic acetylcholine receptor density in the hippocampal CA1 region. Furthermore, nicotine increased α4β2 nicotinic acetylcholine receptor density in the hippocampal CA3, dentate gyrus and perirhinal cortex in both saline- and methamphetamine-treated rats. Conclusions: Overall, these findings suggest that nicotine-induced increases in α4β2 nicotinic acetylcholine receptors in the hippocampus and perirhinal cortex might be one mechanism by which

  2. Priming for novel object associations: Neural differences from object item priming and equivalent forms of recognition.

    PubMed

    Gomes, Carlos Alexandre; Figueiredo, Patrícia; Mayes, Andrew

    2016-04-01

    The neural substrates of associative and item priming and recognition were investigated in a functional magnetic resonance imaging study over two separate sessions. In the priming session, participants decided which object of a pair was bigger during both study and test phases. In the recognition session, participants saw different object pairs and performed the same size-judgement task followed by an associative recognition memory task. Associative priming was accompanied by reduced activity in the right middle occipital gyrus as well as in bilateral hippocampus. Object item priming was accompanied by reduced activity in extensive priming-related areas in the bilateral occipitotemporofrontal cortex, as well as in the perirhinal cortex, but not in the hippocampus. Associative recognition was characterized by activity increases in regions linked to recollection, such as the hippocampus, posterior cingulate cortex, anterior medial frontal gyrus and posterior parahippocampal cortex. Item object priming and recognition recruited broadly overlapping regions (e.g., bilateral middle occipital and prefrontal cortices, left fusiform gyrus), even though the BOLD response was in opposite directions. These regions along with the precuneus, where both item priming and recognition were accompanied by activation, have been found to respond to object familiarity. The minimal structural overlap between object associative priming and recollection-based associative recognition suggests that they depend on largely different stimulus-related information and that the different directions of the effects indicate distinct retrieval mechanisms. In contrast, item priming and familiarity-based recognition seemed mainly based on common memory information, although the extent of common processing between priming and familiarity remains unclear. Further implications of these findings are discussed.

  3. Invariant visual object recognition and shape processing in rats.

    PubMed

    Zoccolan, Davide

    2015-05-15

    Invariant visual object recognition is the ability to recognize visual objects despite the vastly different images that each object can project onto the retina during natural vision, depending on its position and size within the visual field, its orientation relative to the viewer, etc. Achieving invariant recognition represents such a formidable computational challenge that is often assumed to be a unique hallmark of primate vision. Historically, this has limited the invasive investigation of its neuronal underpinnings to monkey studies, in spite of the narrow range of experimental approaches that these animal models allow. Meanwhile, rodents have been largely neglected as models of object vision, because of the widespread belief that they are incapable of advanced visual processing. However, the powerful array of experimental tools that have been developed to dissect neuronal circuits in rodents has made these species very attractive to vision scientists too, promoting a new tide of studies that have started to systematically explore visual functions in rats and mice. Rats, in particular, have been the subjects of several behavioral studies, aimed at assessing how advanced object recognition and shape processing is in this species. Here, I review these recent investigations, as well as earlier studies of rat pattern vision, to provide an historical overview and a critical summary of the status of the knowledge about rat object vision. The picture emerging from this survey is very encouraging with regard to the possibility of using rats as complementary models to monkeys in the study of higher-level vision.

  4. Invariant visual object recognition and shape processing in rats

    PubMed Central

    Zoccolan, Davide

    2015-01-01

    Invariant visual object recognition is the ability to recognize visual objects despite the vastly different images that each object can project onto the retina during natural vision, depending on its position and size within the visual field, its orientation relative to the viewer, etc. Achieving invariant recognition represents such a formidable computational challenge that is often assumed to be a unique hallmark of primate vision. Historically, this has limited the invasive investigation of its neuronal underpinnings to monkey studies, in spite of the narrow range of experimental approaches that these animal models allow. Meanwhile, rodents have been largely neglected as models of object vision, because of the widespread belief that they are incapable of advanced visual processing. However, the powerful array of experimental tools that have been developed to dissect neuronal circuits in rodents has made these species very attractive to vision scientists too, promoting a new tide of studies that have started to systematically explore visual functions in rats and mice. Rats, in particular, have been the subjects of several behavioral studies, aimed at assessing how advanced object recognition and shape processing is in this species. Here, I review these recent investigations, as well as earlier studies of rat pattern vision, to provide an historical overview and a critical summary of the status of the knowledge about rat object vision. The picture emerging from this survey is very encouraging with regard to the possibility of using rats as complementary models to monkeys in the study of higher-level vision. PMID:25561421

  5. Multispectral image analysis for object recognition and classification

    NASA Astrophysics Data System (ADS)

    Viau, C. R.; Payeur, P.; Cretu, A.-M.

    2016-05-01

    Computer and machine vision applications are used in numerous fields to analyze static and dynamic imagery in order to assist or automate decision-making processes. Advancements in sensor technologies now make it possible to capture and visualize imagery at various wavelengths (or bands) of the electromagnetic spectrum. Multispectral imaging has countless applications in various fields including (but not limited to) security, defense, space, medical, manufacturing and archeology. The development of advanced algorithms to process and extract salient information from the imagery is a critical component of the overall system performance. The fundamental objective of this research project was to investigate the benefits of combining imagery from the visual and thermal bands of the electromagnetic spectrum to improve the recognition rates and accuracy of commonly found objects in an office setting. A multispectral dataset (visual and thermal) was captured and features from the visual and thermal images were extracted and used to train support vector machine (SVM) classifiers. The SVM's class prediction ability was evaluated separately on the visual, thermal and multispectral testing datasets.
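
    As a hedged sketch of the feature-level fusion described above, the code below concatenates simple descriptors computed from co-registered visual and thermal images and trains an SVM classifier. The intensity-histogram features and synthetic data are stand-ins, not the features used in the study.

    import numpy as np
    from sklearn.svm import SVC

    def histogram_features(image, bins=32):
        """A crude per-band descriptor: a normalized intensity histogram."""
        hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0), density=True)
        return hist

    rng = np.random.default_rng(2)
    visual = rng.random((100, 48, 48))            # stand-in visual-band images
    thermal = rng.random((100, 48, 48))           # stand-in thermal-band images
    labels = rng.integers(0, 4, size=100)         # e.g. four office-object classes

    # Feature-level fusion: concatenate visual and thermal descriptors per sample.
    X = np.array([np.concatenate([histogram_features(v), histogram_features(t)])
                  for v, t in zip(visual, thermal)])
    clf = SVC(kernel="rbf").fit(X, labels)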

  6. Biological object recognition in μ-radiography images

    NASA Astrophysics Data System (ADS)

    Prochazka, A.; Dammer, J.; Weyda, F.; Sopko, V.; Benes, J.; Zeman, J.; Jandejsek, I.

    2015-03-01

    This study presents the applicability of real-time microradiography to biological objects, namely the horse chestnut leafminer, Cameraria ohridella (Insecta: Lepidoptera, Gracillariidae), and the subsequent image processing, focusing on image segmentation and object recognition. Microradiography of insects such as the horse chestnut leafminer provides non-invasive imaging that leaves the organisms alive. The imaging requires a high-spatial-resolution (micrometer scale) radiographic system. Our radiographic system consists of a micro-focus X-ray tube and two types of detectors. The first is a charge-integrating detector (Hamamatsu flat panel); the second is a pixel semiconductor detector (Medipix2), which allows detection of single photons of ionizing radiation. Numerous horse chestnut leafminer pupae in the microradiography images were easily recognizable in automatic mode using image processing methods. We implemented an algorithm that counts the number of dead and live pupae in the images. The algorithm is based on two methods: 1) noise reduction using mathematical morphology filters, and 2) Canny edge detection. The accuracy of the algorithm is higher for the Medipix2 (average recall of 0.99 for live pupae and 0.83 for dead pupae) than for the flat panel (average recall of 0.99 for live pupae and 0.77 for dead pupae). We therefore conclude that the Medipix2 has lower noise and displays the contours (edges) of biological objects better. Our method allows automatic detection and counting of dead and live chestnut leafminer pupae, leading to faster monitoring of the population of one of the world's important insect pests.
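
    The two-step pipeline named in this record (morphological noise reduction followed by Canny edge detection) can be sketched with OpenCV as below, with contour counting standing in for the pupa count. The kernel size, thresholds, and area cutoff are assumptions, not the published parameters.

    import cv2
    import numpy as np

    def count_pupae(radiograph):
        """radiograph: 8-bit grayscale micro-radiography image."""
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        cleaned = cv2.morphologyEx(radiograph, cv2.MORPH_OPEN, kernel)  # suppress speckle noise
        edges = cv2.Canny(cleaned, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        # Keep only contours large enough to be pupae; the area threshold is a guess.
        return sum(1 for c in contours if cv2.contourArea(c) > 100)

    image = np.zeros((256, 256), dtype=np.uint8)
    cv2.circle(image, (80, 80), 20, 255, -1)        # synthetic "pupa"
    print(count_pupae(image))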

  7. A chicken model for studying the emergence of invariant object recognition.

    PubMed

    Wood, Samantha M W; Wood, Justin N

    2015-01-01

    "Invariant object recognition" refers to the ability to recognize objects across variation in their appearance on the retina. This ability is central to visual perception, yet its developmental origins are poorly understood. Traditionally, nonhuman primates, rats, and pigeons have been the most commonly used animal models for studying invariant object recognition. Although these animals have many advantages as model systems, they are not well suited for studying the emergence of invariant object recognition in the newborn brain. Here, we argue that newly hatched chicks (Gallus gallus) are an ideal model system for studying the emergence of invariant object recognition. Using an automated controlled-rearing approach, we show that chicks can build a viewpoint-invariant representation of the first object they see in their life. This invariant representation can be built from highly impoverished visual input (three images of an object separated by 15° azimuth rotations) and cannot be accounted for by low-level retina-like or V1-like neuronal representations. These results indicate that newborn neural circuits begin building invariant object representations at the onset of vision and argue for an increased focus on chicks as an animal model for studying invariant object recognition.

  8. Comparison of Object Recognition Behavior in Human and Monkey

    PubMed Central

    Rajalingham, Rishi; Schmidt, Kailyn

    2015-01-01

    Although the rhesus monkey is used widely as an animal model of human visual processing, it is not known whether invariant visual object recognition behavior is quantitatively comparable across monkeys and humans. To address this question, we systematically compared the core object recognition behavior of two monkeys with that of human subjects. To test true object recognition behavior (rather than image matching), we generated several thousand naturalistic synthetic images of 24 basic-level objects with high variation in viewing parameters and image background. Monkeys were trained to perform binary object recognition tasks on a match-to-sample paradigm. Data from 605 human subjects performing the same tasks on Mechanical Turk were aggregated to characterize “pooled human” object recognition behavior, as well as 33 separate Mechanical Turk subjects to characterize individual human subject behavior. Our results show that monkeys learn each new object in a few days, after which they not only match mean human performance but show a pattern of object confusion that is highly correlated with pooled human confusion patterns and is statistically indistinguishable from individual human subjects. Importantly, this shared human and monkey pattern of 3D object confusion is not shared with low-level visual representations (pixels, V1+; models of the retina and primary visual cortex) but is shared with a state-of-the-art computer vision feature representation. Together, these results are consistent with the hypothesis that rhesus monkeys and humans share a common neural shape representation that directly supports object perception. SIGNIFICANCE STATEMENT To date, several mammalian species have shown promise as animal models for studying the neural mechanisms underlying high-level visual processing in humans. In light of this diversity, making tight comparisons between nonhuman and human primates is particularly critical in determining the best use of nonhuman primates to

  9. Multispectral and hyperspectral imaging with AOTF for object recognition

    NASA Astrophysics Data System (ADS)

    Gupta, Neelam; Dahmani, Rachid

    1999-01-01

    Acousto-optic tunable-filter (AOTF) technology has been used in the design of a no-moving-parts, compact, lightweight, field-portable, automated, adaptive spectral imaging system when combined with a high-sensitivity imaging detector array. Such a system could detect spectral signatures of targets and/or background, which contain polarization information and can be digitally processed by a variety of algorithms. At the Army Research Laboratory, we have developed and used a number of AOTF imaging systems and are also developing such imagers at longer wavelengths. We have carried out hyperspectral and multispectral imaging using AOTF systems covering the spectral range from the visible to the mid-IR. One of the imagers uses a two-cascaded collinear-architecture AOTF cell in the visible-to-near-IR range with a digital Si charge-coupled device camera as the detector. The images obtained with this system showed no color blurring or image shift due to the angular deviation of different colors as a result of diffraction, and the digital images are stored and processed with great ease. The spatial resolution of the filter was evaluated by means of the lines of a target chart. We have also obtained and processed images from another, noncollinear visible-to-near-IR AOTF imager with a digital camera, and used hyperspectral image processing software to enhance object recognition in cluttered backgrounds. We are presently working on a mid-IR AOTF imaging system that uses a high-performance InSb focal plane array and image acquisition and processing software. We describe our hyperspectral imaging program and present results from our imaging experiments.

  10. Superior voice recognition in a patient with acquired prosopagnosia and object agnosia.

    PubMed

    Hoover, Adria E N; Démonet, Jean-François; Steeves, Jennifer K E

    2010-11-01

    Anecdotally, it has been reported that individuals with acquired prosopagnosia compensate for their inability to recognize faces by using other person identity cues such as hair, gait or the voice. Are they therefore superior at the use of non-face cues, specifically voices, to person identity? Here, we empirically measure person and object identity recognition in a patient with acquired prosopagnosia and object agnosia. We quantify person identity (face and voice) and object identity (car and horn) recognition for visual, auditory, and bimodal (visual and auditory) stimuli. The patient is unable to recognize faces or cars, consistent with his prosopagnosia and object agnosia, respectively. He is perfectly able to recognize people's voices and car horns and bimodal stimuli. These data show a reverse shift in the typical weighting of visual over auditory information for audiovisual stimuli in a compromised visual recognition system. Moreover, the patient shows selectively superior voice recognition compared to the controls revealing that two different stimulus domains, persons and objects, may not be equally affected by sensory adaptation effects. This also implies that person and object identity recognition are processed in separate pathways. These data demonstrate that an individual with acquired prosopagnosia and object agnosia can compensate for the visual impairment and become quite skilled at using spared aspects of sensory processing. In the case of acquired prosopagnosia it is advantageous to develop a superior use of voices for person identity recognition in everyday life.

  11. How can selection of biologically inspired features improve the performance of a robust object recognition model?

    PubMed

    Ghodrati, Masoud; Khaligh-Razavi, Seyed-Mahdi; Ebrahimpour, Reza; Rajaei, Karim; Pooyan, Mohammad

    2012-01-01

    Humans can effectively and swiftly recognize objects in complex natural scenes. This outstanding ability has motivated many computational object recognition models, most of which try to emulate the behavior of this remarkable system. The human visual system recognizes objects hierarchically, in several processing stages, along which a set of features of increasing complexity is extracted by different parts of the visual system. Elementary features like bars and edges are processed in earlier levels of the visual pathway, and the further one goes up this pathway, the more complex the extracted features become. An important question in the field of visual processing is which features of an object are selected and represented by the visual cortex. To address this issue, we extended a biologically motivated hierarchical model for different object recognition tasks. In this model, a set of object parts, named patches, is extracted in the intermediate stages. These object parts are used in the model's training procedure and play an important role in object recognition. The patches are selected indiscriminately from different positions of an image, which can lead to the extraction of non-discriminating patches and eventually reduce performance. In the proposed model we used an evolutionary algorithm to select a set of informative patches. Our results indicate that these patches are more informative than the usual random patches. We demonstrate the strength of the proposed model on a range of object recognition tasks, where it outperforms the original model. The experiments show that the selected features are generally particular parts of the target images. Our results suggest that selected features which are parts of the target objects provide an efficient set for robust object recognition.

  12. How Can Selection of Biologically Inspired Features Improve the Performance of a Robust Object Recognition Model?

    PubMed Central

    Ghodrati, Masoud; Khaligh-Razavi, Seyed-Mahdi; Ebrahimpour, Reza; Rajaei, Karim; Pooyan, Mohammad

    2012-01-01

    Humans can effectively and swiftly recognize objects in complex natural scenes. This outstanding ability has motivated many computational object recognition models, most of which try to emulate the behavior of this remarkable system. The human visual system recognizes objects hierarchically, in several processing stages, along which a set of features of increasing complexity is extracted by different parts of the visual system. Elementary features like bars and edges are processed in earlier levels of the visual pathway, and the further one goes up this pathway, the more complex the extracted features become. An important question in the field of visual processing is which features of an object are selected and represented by the visual cortex. To address this issue, we extended a biologically motivated hierarchical model for different object recognition tasks. In this model, a set of object parts, named patches, is extracted in the intermediate stages. These object parts are used in the model's training procedure and play an important role in object recognition. The patches are selected indiscriminately from different positions of an image, which can lead to the extraction of non-discriminating patches and eventually reduce performance. In the proposed model we used an evolutionary algorithm to select a set of informative patches. Our results indicate that these patches are more informative than the usual random patches. We demonstrate the strength of the proposed model on a range of object recognition tasks, where it outperforms the original model. The experiments show that the selected features are generally particular parts of the target images. Our results suggest that selected features which are parts of the target objects provide an efficient set for robust object recognition. PMID:22384229
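
    A simplified, illustrative take on evolutionary patch selection as described in the two records above: candidate patch features are scored by the cross-validated accuracy they give a linear classifier, and a tiny (1+1)-style evolutionary loop keeps mutations of the selection mask that improve that score. Everything here (data, classifier, loop size) is an assumption, not the published model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n_samples, n_patches = 120, 40
    X = rng.standard_normal((n_samples, n_patches))   # one response per candidate patch
    y = rng.integers(0, 2, size=n_samples)

    def fitness(mask):
        """Cross-validated accuracy using only the patches selected by `mask`."""
        if not mask.any():
            return 0.0
        clf = LogisticRegression(max_iter=200)
        return cross_val_score(clf, X[:, mask], y, cv=3).mean()

    mask = rng.random(n_patches) < 0.5                # initial random selection
    best = fitness(mask)
    for _ in range(50):
        child = mask ^ (rng.random(n_patches) < 0.05) # mutate: flip a few bits
        score = fitness(child)
        if score >= best:
            mask, best = child, score

    print("selected patches:", np.flatnonzero(mask), "cv accuracy:", best)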

  13. Distortion-invariant kernel correlation filters for general object recognition

    NASA Astrophysics Data System (ADS)

    Patnaik, Rohit

    General object recognition is a specific application of pattern recognition, in which an object in a background must be classified in the presence of several distortions such as aspect-view differences, scale differences, and depression-angle differences. Since the object can be present at different locations in the test input, a classification algorithm must be applied to all possible object locations in the test input. We emphasize one type of classifier, the distortion-invariant filter (DIF), for fast object recognition, since it can be applied to all possible object locations using a fast Fourier transform (FFT) correlation. We refer to distortion-invariant correlation filters simply as DIFs. DIFs all use a combination of training-set images that are representative of the expected distortions in the test set. In this dissertation, we consider a new approach that combines DIFs and the higher-order kernel technique; these form what we refer to as "kernel DIFs." Our objective is to develop higher-order classifiers that can be applied (efficiently and fast) to all possible locations of the object in the test input. All prior kernel DIFs ignored the issue of efficient filter shifts. We detail which kernel DIF formulations are computational realistic to use and why. We discuss the proper way to synthesize DIFs and kernel DIFs for the wide area search case (i.e., when a small filter must be applied to a much larger test input) and the preferable way to perform wide area search with these filters; this is new. We use computer-aided design (CAD) simulated infrared (IR) object imagery and real IR clutter imagery to obtain test results. Our test results on IR data show that a particular kernel DIF, the kernel SDF filter and its new "preprocessed" version, is promising, in terms of both test-set performance and on-line calculations, and is emphasized in this dissertation. We examine the recognition of object variants. We also quantify the effect of different constant
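
    A hedged sketch of a conventional (non-kernel) SDF-type filter, the linear building block this dissertation extends: the filter is a linear combination of training views constrained to give the same correlation-peak value for every view, and it is applied to a test scene via FFT-based cross-correlation. The training set below is synthetic; this is not the kernel formulation described above.

    import numpy as np

    def synthesize_sdf(train_images, peaks=None):
        """Equal-correlation-peak SDF filter from a list of same-size training images."""
        X = np.stack([im.ravel() for im in train_images], axis=1)  # columns = training views
        u = np.ones(X.shape[1]) if peaks is None else peaks
        a = np.linalg.solve(X.T @ X, u)                            # peak constraints X^T h = u
        return (X @ a).reshape(train_images[0].shape)

    def correlate(scene, filt):
        """Cross-correlation via FFTs; the filter is zero-padded to the scene size."""
        pad = np.zeros_like(scene)
        pad[:filt.shape[0], :filt.shape[1]] = filt
        return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(pad))))

    rng = np.random.default_rng(4)
    train = [rng.random((32, 32)) for _ in range(5)]   # stand-in distorted object views
    h = synthesize_sdf(train)
    scene = rng.random((128, 128))
    plane = correlate(scene, h)
    print("peak response at", np.unravel_index(plane.argmax(), plane.shape))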

  14. Top-down facilitation of visual object recognition: object-based and context-based contributions.

    PubMed

    Fenske, Mark J; Aminoff, Elissa; Gronau, Nurit; Bar, Moshe

    2006-01-01

    The neural mechanisms subserving visual recognition are traditionally described in terms of bottom-up analysis, whereby increasingly complex aspects of the visual input are processed along a hierarchical progression of cortical regions. However, the importance of top-down facilitation in successful recognition has been emphasized in recent models and research findings. Here we consider evidence for top-down facilitation of recognition that is triggered by early information about an object, as well as by contextual associations between an object and other objects with which it typically appears. The object-based mechanism is proposed to trigger top-down facilitation of visual recognition rapidly, using a partially analyzed version of the input image (i.e., a blurred image) that is projected from early visual areas directly to the prefrontal cortex (PFC). This coarse representation activates in the PFC information that is back-projected as "initial guesses" to the temporal cortex where it presensitizes the most likely interpretations of the input object. In addition to this object-based facilitation, a context-based mechanism is proposed to trigger top-down facilitation through contextual associations between objects in scenes. These contextual associations activate predictive information about which objects are likely to appear together, and can influence the "initial guesses" about an object's identity. We have shown that contextual associations are analyzed by a network that includes the parahippocampal cortex and the retrosplenial complex. The integrated proposal described here is that object- and context-based top-down influences operate together, promoting efficient recognition by framing early information about an object within the constraints provided by a lifetime of experience with contextual associations.

  15. Biologically Motivated Novel Localization Paradigm by High-Level Multiple Object Recognition in Panoramic Images

    PubMed Central

    Kim, Sungho; Shim, Min-Sheob

    2015-01-01

    This paper presents the novel paradigm of a global localization method motivated by human visual systems (HVSs). HVSs actively use the information of the object recognition results for self-position localization and for viewing direction. The proposed localization paradigm consisted of three parts: panoramic image acquisition, multiple object recognition, and grid-based localization. Multiple object recognition information from panoramic images is utilized in the localization part. High-level object information was useful not only for global localization, but also for robot-object interactions. The metric global localization (position, viewing direction) was conducted based on the bearing information of recognized objects from just one panoramic image. The feasibility of the novel localization paradigm was validated experimentally. PMID:26457323
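
    An illustrative sketch of grid-based localization from the bearings of recognized objects in a single panoramic image, in the spirit of the paradigm above: every candidate grid cell and heading is scored by how well its predicted bearings to known landmarks match the measured ones. The map, measurements, and grid resolutions are all assumptions.

    import numpy as np

    object_map = np.array([[2.0, 5.0], [8.0, 1.0], [6.0, 9.0]])   # known landmark positions (m)
    measured = np.radians([40.0, -75.0, 110.0])                   # measured bearings, robot frame

    def angle_diff(a, b):
        """Smallest signed difference between two angles (radians)."""
        return (a - b + np.pi) % (2 * np.pi) - np.pi

    best, best_pose = np.inf, None
    for x in np.arange(0.0, 10.0, 0.25):
        for y in np.arange(0.0, 10.0, 0.25):
            world = np.arctan2(object_map[:, 1] - y, object_map[:, 0] - x)
            for heading in np.radians(np.arange(0.0, 360.0, 5.0)):
                err = np.sum(angle_diff(world - heading, measured) ** 2)
                if err < best:
                    best, best_pose = err, (x, y, heading)

    print("estimated pose (x, y, heading):", best_pose)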

  16. Object Locating System

    NASA Technical Reports Server (NTRS)

    Arndt, G. Dickey (Inventor); Carl, James R. (Inventor)

    2000-01-01

    A portable system is provided that is operational for determining, with three-dimensional resolution, the position of a buried object or approximately positioned object that may move in space or air or gas. The system has a plurality of receivers for detecting the signal from a target antenna and measuring the phase thereof with respect to a reference signal. The relative permittivity and conductivity of the medium in which the object is located is used along with the measured phase signal to determine a distance between the object and each of the plurality of receivers. Knowing these distances, an iteration technique is provided for solving equations simultaneously to provide position coordinates. The system may also be used for tracking movement of an object within close range of the system by sampling and recording subsequent positions of the object. A dipole target antenna, when positioned adjacent to a buried object, may be energized using a separate transmitter which couples energy to the target antenna through the medium. The target antenna then preferably resonates at a different frequency, such as a second harmonic of the transmitter frequency.
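
    The position-solving step can be illustrated as below: given distances inferred from the phase measurements at several receivers, the target coordinates are found by an iterative nonlinear least-squares solve, consistent with the iteration technique the record describes. The receiver layout and distances are synthetic placeholders.

    import numpy as np
    from scipy.optimize import least_squares

    receivers = np.array([[0.0, 0.0, 0.0],
                          [1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0],
                          [0.0, 0.0, 1.0]])
    true_target = np.array([0.3, 0.4, 0.2])
    distances = np.linalg.norm(receivers - true_target, axis=1)   # stand-in measurements

    def residuals(p):
        # Difference between predicted and measured receiver-to-target distances.
        return np.linalg.norm(receivers - p, axis=1) - distances

    solution = least_squares(residuals, x0=np.zeros(3))
    print("estimated position:", solution.x)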

  17. The Neural Basis of Nonvisual Object Recognition Memory in the Rat

    PubMed Central

    Albasser, Mathieu M.; Olarte-Sánchez, Cristian M.; Amin, Eman; Horne, Murray R.; Newton, Michael J.; Warburton, E. Clea; Aggleton, John P.

    2012-01-01

    Research into the neural basis of recognition memory has traditionally focused on the remembrance of visual stimuli. The present study examined the neural basis of object recognition memory in the dark, with a view to determining the extent to which it shares common pathways with visual-based object recognition. Experiment 1 assessed the expression of the immediate-early gene c-fos in rats that discriminated novel from familiar objects in the dark (Group Novel). Comparisons made with a control group that explored only familiar objects (Group Familiar) showed that Group Novel had higher c-fos activity in the rostral perirhinal cortex and the lateral entorhinal cortex. Outside the temporal region, Group Novel showed relatively increased c-fos activity in the anterior medial thalamic nucleus and the anterior cingulate cortex. Both the hippocampal CA fields and the granular retrosplenial cortex showed borderline increases in c-fos activity with object novelty. The hippocampal findings prompted Experiment 2. Here, rats with hippocampal lesions were tested in the dark for object recognition memory at different retention delays. Across two replications, no evidence was found that hippocampal lesions impair nonvisual object recognition. The results indicate that in the dark, as in the light, interrelated parahippocampal sites are activated when rats explore novel stimuli. These findings reveal a network of linked c-fos activations that share superficial features with those associated with visual recognition but differ in the fine details; for example, in the locus of the perirhinal cortex activation. While there may also be a relative increase in c-fos activation in the extended-hippocampal system to object recognition in the dark, there was no evidence that this recognition memory problem required an intact hippocampus. PMID:23244291

  18. The neural basis of nonvisual object recognition memory in the rat.

    PubMed

    Albasser, Mathieu M; Olarte-Sánchez, Cristian M; Amin, Eman; Horne, Murray R; Newton, Michael J; Warburton, E Clea; Aggleton, John P

    2013-02-01

    Research into the neural basis of recognition memory has traditionally focused on the remembrance of visual stimuli. The present study examined the neural basis of object recognition memory in the dark, with a view to determining the extent to which it shares common pathways with visual-based object recognition. Experiment 1 assessed the expression of the immediate-early gene c-fos in rats that discriminated novel from familiar objects in the dark (Group Novel). Comparisons made with a control group that explored only familiar objects (Group Familiar) showed that Group Novel had higher c-fos activity in the rostral perirhinal cortex and the lateral entorhinal cortex. Outside the temporal region, Group Novel showed relatively increased c-fos activity in the anterior medial thalamic nucleus and the anterior cingulate cortex. Both the hippocampal CA fields and the granular retrosplenial cortex showed borderline increases in c-fos activity with object novelty. The hippocampal findings prompted Experiment 2. Here, rats with hippocampal lesions were tested in the dark for object recognition memory at different retention delays. Across two replications, no evidence was found that hippocampal lesions impair nonvisual object recognition. The results indicate that in the dark, as in the light, interrelated parahippocampal sites are activated when rats explore novel stimuli. These findings reveal a network of linked c-fos activations that share superficial features with those associated with visual recognition but differ in the fine details; for example, in the locus of the perirhinal cortex activation. While there may also be a relative increase in c-fos activation in the extended-hippocampal system to object recognition in the dark, there was no evidence that this recognition memory problem required an intact hippocampus.

  19. The relationship between protein synthesis and protein degradation in object recognition memory.

    PubMed

    Furini, Cristiane R G; Myskiw, Jociane de C; Schmidt, Bianca E; Zinn, Carolina G; Peixoto, Patricia B; Pereira, Luiza D; Izquierdo, Ivan

    2015-11-01

    For decades there has been a consensus that de novo protein synthesis is necessary for long-term memory. A second round of protein synthesis has been described for both extinction and reconsolidation following an unreinforced test session. Recently, it was shown that consolidation and reconsolidation depend not only on protein synthesis but also on protein degradation by the ubiquitin-proteasome system (UPS), a major mechanism responsible for protein turnover. However, the involvement of UPS on consolidation and reconsolidation of object recognition memory remains unknown. Here we investigate in the CA1 region of the dorsal hippocampus the involvement of UPS-mediated protein degradation in consolidation and reconsolidation of object recognition memory. Animals with infusion cannulae stereotaxically implanted in the CA1 region of the dorsal hippocampus, were exposed to an object recognition task. The UPS inhibitor β-Lactacystin did not affect the consolidation and the reconsolidation of object recognition memory at doses known to affect other forms of memory (inhibitory avoidance, spatial learning in a water maze) while the protein synthesis inhibitor anisomycin impaired the consolidation and the reconsolidation of the object recognition memory. However, β-Lactacystin was able to reverse the impairment caused by anisomycin on the reconsolidation process in the CA1 region of the hippocampus. Therefore, it is possible to postulate a direct link between protein degradation and protein synthesis during the reconsolidation of the object recognition memory.

  20. Atypical Time Course of Object Recognition in Autism Spectrum Disorder

    PubMed Central

    Caplette, Laurent; Wicker, Bruno; Gosselin, Frédéric

    2016-01-01

    In neurotypical observers, it is widely believed that the visual system samples the world in a coarse-to-fine fashion. Past studies on Autism Spectrum Disorder (ASD) have identified atypical responses to fine visual information but did not investigate the time course of the sampling of information at different levels of granularity (i.e. Spatial Frequencies, SF). Here, we examined this question during an object recognition task in ASD and neurotypical observers using a novel experimental paradigm. Our results confirm and characterize with unprecedented precision a coarse-to-fine sampling of SF information in neurotypical observers. In ASD observers, we discovered a different pattern of SF sampling across time: in the first 80 ms, high SFs lead ASD observers to a higher accuracy than neurotypical observers, and these SFs are sampled differently across time in the two subject groups. Our results might be related to the absence of a mandatory precedence of global information, and to top-down processing abnormalities in ASD. PMID:27752088

  1. Neuronal substrates characterizing two stages in visual object recognition.

    PubMed

    Taminato, Tomoya; Miura, Naoki; Sugiura, Motoaki; Kawashima, Ryuta

    2014-12-01

    Visual object recognition is classically believed to involve two stages: a perception stage in which perceptual information is integrated, and a memory stage in which perceptual information is matched with an object's representation. The transition from the perception to the memory stage can be slowed to allow for neuroanatomical segregation using a degraded visual stimuli (DVS) task in which images are first presented at low spatial resolution and then gradually sharpened. In this functional magnetic resonance imaging study, we characterized these two stages using a DVS task based on the classic model. To separate periods that are assumed to dominate the perception, memory, and post-recognition stages, subjects responded once when they could guess the identity of the object in the image and a second time when they were certain of the identity. Activation of the right medial occipitotemporal region and the posterior part of the rostral medial frontal cortex was found to be characteristic of the perception and memory stages, respectively. Although the known role of the former region in perceptual integration was consistent with the classic model, a likely role of the latter region in monitoring for confirmation of recognition suggests the advantage of recently proposed interactive models.

  2. Multisensory interactions between auditory and haptic object recognition.

    PubMed

    Kassuba, Tanja; Menz, Mareike M; Röder, Brigitte; Siebner, Hartwig R

    2013-05-01

    Object manipulation produces characteristic sounds and causes specific haptic sensations that facilitate the recognition of the manipulated object. To identify the neural correlates of audio-haptic binding of object features, healthy volunteers underwent functional magnetic resonance imaging while they matched a target object to a sample object within and across audition and touch. By introducing a delay between the presentation of sample and target stimuli, it was possible to dissociate haptic-to-auditory and auditory-to-haptic matching. We hypothesized that only semantically coherent auditory and haptic object features activate cortical regions that host unified conceptual object representations. The left fusiform gyrus (FG) and posterior superior temporal sulcus (pSTS) showed increased activation during crossmodal matching of semantically congruent but not incongruent object stimuli. In the FG, this effect was found for haptic-to-auditory and auditory-to-haptic matching, whereas the pSTS only displayed a crossmodal matching effect for congruent auditory targets. Auditory and somatosensory association cortices showed increased activity during crossmodal object matching which was, however, independent of semantic congruency. Together, the results show multisensory interactions at different hierarchical stages of auditory and haptic object processing. Object-specific crossmodal interactions culminate in the left FG, which may provide a higher order convergence zone for conceptual object knowledge.

  3. Early recurrent feedback facilitates visual object recognition under challenging conditions

    PubMed Central

    Wyatte, Dean; Jilk, David J.; O'Reilly, Randall C.

    2014-01-01

    Standard models of the visual object recognition pathway hold that a largely feedforward process from the retina through inferotemporal cortex leads to object identification. A subsequent feedback process originating in frontoparietal areas through reciprocal connections to striate cortex provides attentional support to salient or behaviorally-relevant features. Here, we review mounting evidence that feedback signals also originate within extrastriate regions and begin during the initial feedforward process. This feedback process is temporally dissociable from attention and provides important functions such as grouping, associational reinforcement, and filling-in of features. Local feedback signals operating concurrently with feedforward processing are important for object identification in noisy real-world situations, particularly when objects are partially occluded, unclear, or otherwise ambiguous. Altogether, the dissociation of early and late feedback processes presented here expands on current models of object identification, and suggests a dual role for descending feedback projections. PMID:25071647

  4. How does the brain solve visual object recognition?

    PubMed Central

    Zoccolan, Davide; Rust, Nicole C.

    2012-01-01

    Mounting evidence suggests that “core object recognition,” the ability to rapidly recognize objects despite substantial appearance variation, is solved in the brain via a cascade of reflexive, largely feedforward computations that culminate in a powerful neuronal representation in the inferior temporal cortex. However, the algorithm that produces this solution remains little-understood. Here we review evidence ranging from individual neurons, to neuronal populations, to behavior, to computational models. We propose that understanding this algorithm will require using neuronal and psychophysical data to sift through many computational models, each based on building blocks of small, canonical sub-networks with a common functional goal. PMID:22325196

  5. An audiovisual emotion recognition system

    NASA Astrophysics Data System (ADS)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

    Human emotions can be expressed through many bio-signals; speech and facial expression are two of them. Both carry emotional information that plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time use and is supported by several integrated modules: speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. Irrelevant features and high data dimensionality can hurt the performance of a classifier, and rough-set-based feature selection is a good method for dimension reduction. Accordingly, 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are obtained when the synchronized speech and video streams are fused. Experimental results demonstrate that the system performs well in real time and has a high recognition rate. Our results also suggest that multimodal fused recognition will become the trend in emotion recognition.

  6. Methylphenidate restores novel object recognition in DARPP-32 knockout mice.

    PubMed

    Heyser, Charles J; McNaughton, Caitlyn H; Vishnevetsky, Donna; Fienberg, Allen A

    2013-09-15

    Previously, we have shown that Dopamine- and cAMP-regulated phosphoprotein of 32kDa (DARPP-32) knockout mice required significantly more trials to reach criterion than wild-type mice in an operant reversal-learning task. The present study was conducted to examine adult male and female DARPP-32 knockout mice and wild-type controls in a novel object recognition test. Wild-type and knockout mice exhibited comparable behavior during the initial exploration trials. As expected, wild-type mice exhibited preferential exploration of the novel object during the substitution test, demonstrating recognition memory. In contrast, knockout mice did not show preferential exploration of the novel object, instead exhibiting an increase in exploration of all objects during the test trial. Given that the removal of DARPP-32 is an intracellular manipulation, it seemed possible to pharmacologically restore some cellular activity and behavior by stimulating dopamine receptors. Therefore, a second experiment was conducted examining the effect of methylphenidate. The results show that methylphenidate increased horizontal activity in both wild-type and knockout mice, though this increase was blunted in knockout mice. Pretreatment with methylphenidate significantly impaired novel object recognition in wild-type mice. In contrast, pretreatment with methylphenidate restored the behavior of DARPP-32 knockout mice to that observed in wild-type mice given saline. These results provide additional evidence for a functional role of DARPP-32 in the mediation of processes underlying learning and memory. These results also indicate that the behavioral deficits in DARPP-32 knockout mice may be restored by the administration of methylphenidate.

  7. A chicken model for studying the emergence of invariant object recognition

    PubMed Central

    Wood, Samantha M. W.; Wood, Justin N.

    2015-01-01

    “Invariant object recognition” refers to the ability to recognize objects across variation in their appearance on the retina. This ability is central to visual perception, yet its developmental origins are poorly understood. Traditionally, nonhuman primates, rats, and pigeons have been the most commonly used animal models for studying invariant object recognition. Although these animals have many advantages as model systems, they are not well suited for studying the emergence of invariant object recognition in the newborn brain. Here, we argue that newly hatched chicks (Gallus gallus) are an ideal model system for studying the emergence of invariant object recognition. Using an automated controlled-rearing approach, we show that chicks can build a viewpoint-invariant representation of the first object they see in their life. This invariant representation can be built from highly impoverished visual input (three images of an object separated by 15° azimuth rotations) and cannot be accounted for by low-level retina-like or V1-like neuronal representations. These results indicate that newborn neural circuits begin building invariant object representations at the onset of vision and argue for an increased focus on chicks as an animal model for studying invariant object recognition. PMID:25767436

  8. The role of the dorsal dentate gyrus in object and object-context recognition.

    PubMed

    Dees, Richard L; Kesner, Raymond P

    2013-11-01

    The aim of this study was to determine the role of the dorsal dentate gyrus (dDG) in object recognition memory using a black box and object-context recognition memory using a clear box with available cues that define a spatial context. Based on a 10 min retention interval between the study phase and the test phase, the results indicated that dDG lesioned rats are impaired when compared to controls in the object-context recognition test in the clear box. However, there were no reliable differences between the dDG lesioned rats and the control group for the object recognition test in the black box. Even though the dDG lesioned rats were more active in object exploration, the habituation gradients did not differ. These results suggest that the dentate gyrus lesioned rats are clearly impaired when there is an important contribution of context. Furthermore, based on a 24 h retention interval in the black box the dDG lesioned rats were impaired compared to controls.

  9. Visual appearance interacts with conceptual knowledge in object recognition

    PubMed Central

    Cheung, Olivia S.; Gauthier, Isabel

    2014-01-01

    Objects contain rich visual and conceptual information, but do these two types of information interact? Here, we examine whether visual and conceptual information interact when observers see novel objects for the first time. We then address how this interaction influences the acquisition of perceptual expertise. We used two types of novel objects (Greebles), designed to resemble either animals or tools, and two lists of words, which described non-visual attributes of people or man-made objects. Participants first judged if a word was more suitable for describing people or objects while ignoring a task-irrelevant image, and showed faster responses if the words and the unfamiliar objects were congruent in terms of animacy (e.g., animal-like objects with words that described human). Participants then learned to associate objects and words that were either congruent or not in animacy, before receiving expertise training to rapidly individuate the objects. Congruent pairing of visual and conceptual information facilitated observers' ability to become a perceptual expert, as revealed in a matching task that required visual identification at the basic or subordinate levels. Taken together, these findings show that visual and conceptual information interact at multiple levels in object recognition. PMID:25120509

  10. Touching and Hearing Unseen Objects: Multisensory Effects on Scene Recognition

    PubMed Central

    van Lier, Rob

    2016-01-01

    In three experiments, we investigated the influence of object-specific sounds on haptic scene recognition without vision. Blindfolded participants had to recognize, through touch, spatial scenes comprising six objects that were placed on a round platform. Critically, in half of the trials, object-specific sounds were played when objects were touched (bimodal condition), while sounds were turned off in the other half of the trials (unimodal condition). After first exploring the scene, two objects were swapped and the task was to report which of the objects had swapped positions. In Experiment 1, geometrical objects and simple sounds were used, while in Experiment 2, the objects comprised toy animals that were matched with semantically compatible animal sounds. In Experiment 3, we replicated Experiment 1, but now a tactile-auditory object identification task preceded the experiment in which the participants learned to identify the objects based on tactile and auditory input. For each experiment, the results revealed a significant performance increase only after the switch from bimodal to unimodal. Thus, it appears that the release from bimodal identification (audio-tactile to tactile-only) produces a benefit that is not achieved with the reversed order, in which sound is added after experience with haptic-only exploration. We conclude that task-related factors other than mere bimodal identification cause the facilitation when switching from bimodal to unimodal conditions. PMID:27698985

  11. An optical processor for object recognition and tracking

    NASA Technical Reports Server (NTRS)

    Sloan, J.; Udomkesmalee, S.

    1987-01-01

    The design and development of a miniaturized optical processor that performs real time image correlation are described. The optical correlator utilizes the Vander Lugt matched spatial filter technique. The correlation output, a focused beam of light, is imaged onto a CMOS photodetector array. In addition to performing target recognition, the device also tracks the target. The hardware, composed of optical and electro-optical components, occupies only 590 cu cm of volume. A complete correlator system would also include an input imaging lens. This optical processing system is compact, rugged, requires only 3.5 watts of operating power, and weighs less than 3 kg. It represents a major achievement in miniaturizing optical processors. When considered as a special-purpose processing unit, it is an attractive alternative to conventional digital image recognition processing. It is conceivable that the combined technology of both optical and digital processing could result in a very advanced robot vision system.
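
    The core operation of such a correlator, matched filtering, can be sketched digitally. The example below is my own illustration (not from the cited work): it cross-correlates a scene with a template in the frequency domain using NumPy, and the array names and sizes are arbitrary.

```python
import numpy as np

def matched_filter_correlation(scene: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Cross-correlate a scene with a template via the frequency domain,
    the operation a Vander Lugt correlator performs optically."""
    # Zero-pad the template to the scene size so the spectra align.
    padded = np.zeros_like(scene, dtype=float)
    padded[:template.shape[0], :template.shape[1]] = template
    # Matched filter: multiply the scene spectrum by the conjugate template spectrum.
    corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(padded)))
    return np.abs(corr)  # a bright peak marks the most likely target location

# Illustrative usage: locate the correlation peak.
scene = np.random.rand(128, 128)
template = scene[40:56, 60:76].copy()
peak = np.unravel_index(np.argmax(matched_filter_correlation(scene, template)), scene.shape)
print("correlation peak near", peak)
```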

  12. The role of color diagnosticity in object recognition and representation.

    PubMed

    Therriault, David J; Yaxley, Richard H; Zwaan, Rolf A

    2009-11-01

    The role of color diagnosticity in object recognition and representation was assessed in three experiments. In Experiment 1a, participants named pictured objects that were strongly associated with a particular color (e.g., pumpkin and orange). Stimuli were presented in a congruent color, incongruent color, or grayscale. Results indicated that congruent color facilitated naming time, incongruent color impeded naming time, and naming times for grayscale items were situated between the congruent and incongruent conditions. Experiment 1b replicated Experiment 1a using a verification task. Experiment 2 employed a picture rebus paradigm in which participants read sentences one word at a time that included pictures of color diagnostic objects (i.e., pictures were substituted for critical nouns). Results indicated that the "reading" times of these pictures mirrored the pattern found in Experiment 1. In Experiment 3, an attempt was made to override color diagnosticity using linguistic context (e.g., a pumpkin was described as painted green). Linguistic context did not override color diagnosticity. Collectively, the results demonstrate that color information is regularly utilized in object recognition and representation for highly color diagnostic items.

  13. Long-term visual object recognition memory in aged rats.

    PubMed

    Platano, Daniela; Fattoretti, Patrizia; Balietti, Marta; Bertoni-Freddari, Carlo; Aicardi, Giorgio

    2008-04-01

    Aging is associated with memory impairments, but the neural bases of this process need to be clarified. To this end, behavioral protocols for memory testing may be applied to aged animals to compare memory performances with functional and structural characteristics of specific brain regions. Visual object recognition memory can be investigated in the rat using a behavioral task based on its spontaneous preference for exploring novel rather than familiar objects. We found that a behavioral task able to elicit long-term visual object recognition memory in adult Long-Evans rats failed in aged (25-27 months old) Wistar rats. Since no tasks effective in aged rats are reported in the literature, we changed the experimental conditions to improve consolidation processes to assess whether this form of memory can still be maintained for long term at this age: the learning trials were performed in a smaller box, identical to the home cage, and the inter-trial delays were shortened. We observed a reduction in anxiety in this box (as indicated by the lower number of fecal boli produced during habituation), and we developed a learning protocol able to elicit a visual object recognition memory that was maintained after 24 h in these aged rats. When we applied the same protocol to adult rats, we obtained similar results. This experimental approach can be useful to study functional and structural changes associated with age-related memory impairments, and may help to identify new behavioral strategies and molecular targets that can be addressed to ameliorate memory performances during aging.

  14. Canonical Wnt signaling is necessary for object recognition memory consolidation.

    PubMed

    Fortress, Ashley M; Schram, Sarah L; Tuscher, Jennifer J; Frick, Karyn M

    2013-07-31

    Wnt signaling has emerged as a potent regulator of hippocampal synaptic function, although no evidence yet supports a critical role for Wnt signaling in hippocampal memory. Here, we sought to determine whether canonical β-catenin-dependent Wnt signaling is necessary for hippocampal memory consolidation. Immediately after training in a hippocampal-dependent object recognition task, mice received a dorsal hippocampal (DH) infusion of vehicle or the canonical Wnt antagonist Dickkopf-1 (Dkk-1; 50, 100, or 200 ng/hemisphere). Twenty-four hours later, mice receiving vehicle remembered the familiar object explored during training. However, mice receiving Dkk-1 exhibited no memory for the training object, indicating that object recognition memory consolidation is dependent on canonical Wnt signaling. To determine how Dkk-1 affects canonical Wnt signaling, mice were infused with vehicle or 50 ng/hemisphere Dkk-1 and protein levels of Wnt-related proteins (Dkk-1, GSK3β, β-catenin, TCF1, LEF1, Cyclin D1, c-myc, Wnt7a, Wnt1, and PSD95) were measured in the dorsal hippocampus 5 min or 4 h later. Dkk-1 produced a rapid increase in Dkk-1 protein levels and a decrease in phosphorylated GSK3β levels, followed by a decrease in β-catenin, TCF1, LEF1, Cyclin D1, c-myc, Wnt7a, and PSD95 protein levels 4 h later. These data suggest that alterations in Wnt/GSK3β/β-catenin signaling may underlie the memory impairments induced by Dkk-1. In a subsequent experiment, object training alone rapidly increased DH GSK3β phosphorylation and levels of β-catenin and Cyclin D1. These data suggest that canonical Wnt signaling is regulated by object learning and is necessary for hippocampal memory consolidation.

  15. Are Face and Object Recognition Independent? A Neurocomputational Modeling Exploration.

    PubMed

    Wang, Panqu; Gauthier, Isabel; Cottrell, Garrison

    2016-04-01

    Are face and object recognition abilities independent? Although it is commonly believed that they are, Gauthier et al. [Gauthier, I., McGugin, R. W., Richler, J. J., Herzmann, G., Speegle, M., & VanGulick, A. E. Experience moderates overlap between object and face recognition, suggesting a common ability. Journal of Vision, 14, 7, 2014] recently showed that these abilities become more correlated as experience with nonface categories increases. They argued that there is a single underlying visual ability, v, that is expressed in performance with both face and nonface categories as experience grows. Using the Cambridge Face Memory Test and the Vanderbilt Expertise Test, they showed that the shared variance between Cambridge Face Memory Test and Vanderbilt Expertise Test performance increases monotonically as experience increases. Here, we address why a shared resource across different visual domains does not lead to competition and to an inverse correlation in abilities? We explain this conundrum using our neurocomputational model of face and object processing ["The Model", TM, Cottrell, G. W., & Hsiao, J. H. Neurocomputational models of face processing. In A. J. Calder, G. Rhodes, M. Johnson, & J. Haxby (Eds.), The Oxford handbook of face perception. Oxford, UK: Oxford University Press, 2011]. We model the domain general ability v as the available computational resources (number of hidden units) in the mapping from input to label and experience as the frequency of individual exemplars in an object category appearing during network training. Our results show that, as in the behavioral data, the correlation between subordinate level face and object recognition accuracy increases as experience grows. We suggest that different domains do not compete for resources because the relevant features are shared between faces and objects. The essential power of experience is to generate a "spreading transform" for faces (separating them in representational space) that

  16. Detailed 3D representations for object recognition and modeling.

    PubMed

    Zia, M Zeeshan; Stark, Michael; Schiele, Bernt; Schindler, Konrad

    2013-11-01

    Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching.

  17. Recognition of similar objects using simulated prosthetic vision.

    PubMed

    Hu, Jie; Xia, Peng; Gu, Chaochen; Qi, Jin; Li, Sheng; Peng, Yinghong

    2014-02-01

    Due to the limitations of existing techniques, even the most advanced visual prostheses, using several hundred electrodes to transmit signals to the visual pathway, restrict sensory function and visual information. To identify the bottlenecks and guide prosthesis design, psychophysics simulations of a visual prosthesis in normally sighted individuals are desirable. In this study, psychophysical experiments of discriminating objects with similar profiles were used to test the effects of phosphene array parameters (spatial resolution, gray scale, distortion, and dropout rate) on visual information using simulated prosthetic vision. The results showed that the increase in spatial resolution and number of gray levels and the decrease in phosphene distortion and dropout rate improved recognition performance, and the accuracy is 78.5% under the optimum condition (resolution: 32 × 32, gray level: 8, distortion: k = 0, dropout: 0%). In combined parameter tests, significant facial recognition accuracy was achieved for all the images with k = 0.1 distortion and 10% dropout. Compared with other experiments, we find that different objects do not show specific sensitivity to the changes of parameters and visual information is not nearly enough even under the optimum condition. The results suggest that higher spatial resolution and more gray levels are required for visual prosthetic devices and further research on image processing strategies to improve prosthetic vision is necessary, especially when the wearers have to accomplish more than simple visual tasks.
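
    As a rough illustration of the phosphene-array parameters varied in this study, the sketch below downsamples an image to a phosphene grid, quantizes it to a small number of gray levels, and randomly drops phosphenes. It is an assumption-laden simplification: the function name and block-averaging scheme are mine, and the paper's distortion parameter k is not modeled.

```python
import numpy as np

def simulate_phosphenes(image, resolution=32, gray_levels=8, dropout_rate=0.0, seed=0):
    """Crude simulation of prosthetic vision: downsample to a phosphene grid,
    quantize to a few gray levels, and randomly drop phosphenes (distortion omitted)."""
    h, w = image.shape
    # Downsample by block averaging to a resolution x resolution grid.
    ys = np.arange(resolution + 1) * h // resolution
    xs = np.arange(resolution + 1) * w // resolution
    grid = np.array([[image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                      for j in range(resolution)] for i in range(resolution)])
    # Quantize to the requested number of gray levels.
    grid = np.round(grid / grid.max() * (gray_levels - 1)) / (gray_levels - 1)
    # Randomly switch off a fraction of phosphenes (dropout).
    rng = np.random.default_rng(seed)
    grid[rng.random(grid.shape) < dropout_rate] = 0.0
    return grid

# Example: the "optimum" condition from the abstract (32 x 32, 8 gray levels, no dropout).
img = np.random.rand(256, 256)
phos = simulate_phosphenes(img, resolution=32, gray_levels=8, dropout_rate=0.0)
```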

  18. Vision: are models of object recognition catching up with the brain?

    PubMed

    Poggio, Tomaso; Ullman, Shimon

    2013-12-01

    Object recognition has been a central yet elusive goal of computational vision. For many years, computer performance seemed highly deficient and unable to emulate the basic capabilities of the human recognition system. Over the past decade or so, computer scientists and neuroscientists have developed algorithms and systems-and models of visual cortex-that have come much closer to human performance in visual identification and categorization. In this personal perspective, we discuss the ongoing struggle of visual models to catch up with the visual cortex, identify key reasons for the relatively rapid improvement of artificial systems and models, and identify open problems for computational vision in this domain.

  19. Combat Systems Department Employee Recognition System

    DTIC Science & Technology

    1996-08-01

    [OCR excerpt, NSWCDD/MP-96/137, Section 3, Instructions] The recognition system should reflect the individual's view of positive reinforcement; include employees in discussions and ask for their opinions. The system provides positive reinforcement, and the easier it is to do, the more likely it is to get done. The remaining fragments cover the N-Department Employee Recognition System principles and an outline beginning with task force membership.

  20. Neural Substrates of View-Invariant Object Recognition Developed without Experiencing Rotations of the Objects

    PubMed Central

    Okamura, Jun-ya; Yamaguchi, Reona; Honda, Kazunari; Tanaka, Keiji

    2014-01-01

    One fails to recognize an unfamiliar object across changes in viewing angle when it must be discriminated from similar distractor objects. View-invariant recognition gradually develops as the viewer repeatedly sees the objects in rotation. It is assumed that different views of each object are associated with one another while their successive appearance is experienced in rotation. However, natural experience of objects also contains ample opportunities to discriminate among objects at each of the multiple viewing angles. Our previous behavioral experiments showed that after experiencing a new set of object stimuli during a task that required only discrimination at each of four viewing angles at 30° intervals, monkeys could recognize the objects across changes in viewing angle up to 60°. By recording activities of neurons from the inferotemporal cortex after various types of preparatory experience, we here found a possible neural substrate for the monkeys' performance. For object sets that the monkeys had experienced during the task that required only discrimination at each of four viewing angles, many inferotemporal neurons showed object selectivity covering multiple views. The degree of view generalization found for these object sets was similar to that found for stimulus sets with which the monkeys had been trained to conduct view-invariant recognition. These results suggest that the experience of discriminating new objects in each of several viewing angles develops the partially view-generalized object selectivity distributed over many neurons in the inferotemporal cortex, which in turn bases the monkeys' emergent capability to discriminate the objects across changes in viewing angle. PMID:25378169

  1. Neural substrates of view-invariant object recognition developed without experiencing rotations of the objects.

    PubMed

    Okamura, Jun-Ya; Yamaguchi, Reona; Honda, Kazunari; Wang, Gang; Tanaka, Keiji

    2014-11-05

    One fails to recognize an unfamiliar object across changes in viewing angle when it must be discriminated from similar distractor objects. View-invariant recognition gradually develops as the viewer repeatedly sees the objects in rotation. It is assumed that different views of each object are associated with one another while their successive appearance is experienced in rotation. However, natural experience of objects also contains ample opportunities to discriminate among objects at each of the multiple viewing angles. Our previous behavioral experiments showed that after experiencing a new set of object stimuli during a task that required only discrimination at each of four viewing angles at 30° intervals, monkeys could recognize the objects across changes in viewing angle up to 60°. By recording activities of neurons from the inferotemporal cortex after various types of preparatory experience, we here found a possible neural substrate for the monkeys' performance. For object sets that the monkeys had experienced during the task that required only discrimination at each of four viewing angles, many inferotemporal neurons showed object selectivity covering multiple views. The degree of view generalization found for these object sets was similar to that found for stimulus sets with which the monkeys had been trained to conduct view-invariant recognition. These results suggest that the experience of discriminating new objects in each of several viewing angles develops the partially view-generalized object selectivity distributed over many neurons in the inferotemporal cortex, which in turn bases the monkeys' emergent capability to discriminate the objects across changes in viewing angle.

  2. Moment invariants applied to the recognition of objects using neural networks

    NASA Astrophysics Data System (ADS)

    Gonzaga, Adilson; Ferreira Costa, Jose A.

    1996-11-01

    Visual pattern recognition and visual object recognition are central aspects of high level computer vision systems. This paper describes a method of recognizing patterns and objects in digital images with several types of objects in different positions. The moment invariants of such real-world, noise-containing images are processed by a neural network, which performs a pattern classification. Two learning methods are adopted for training the network: the conjugate gradient and the Levenberg-Marquardt algorithms, both in conjunction with simulated annealing, for different sets of error conditions and features. Real images are used for testing the net's correct class assignments and rejections. We present results and comments focusing on the system's capacity to generalize, even in the presence of noise, geometrical transformations, object shadows and other types of image degradation. One advantage of the artificial neural network employed is its low execution time, allowing the system to be integrated into an industrial assembly line for automated visual inspection.
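
    A minimal sketch of the kind of pipeline described, assuming OpenCV and scikit-learn as stand-ins for the paper's own implementation: Hu moment invariants (translation-, scale- and rotation-invariant) are computed from binary shape images and fed to a small neural network classifier. The helper names and network size are illustrative.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def hu_features(binary_image: np.ndarray) -> np.ndarray:
    """Seven Hu moment invariants, log-scaled into a usable numeric range."""
    hu = cv2.HuMoments(cv2.moments(binary_image.astype(np.uint8))).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def train_classifier(masks, labels):
    """Train a small MLP on moment-invariant features of binary object masks."""
    X = np.vstack([hu_features(m) for m in masks])
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X, labels)
    return clf
```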

  3. Category Specificity in Normal Episodic Learning: Applications to Object Recognition and Category-Specific Agnosia

    ERIC Educational Resources Information Center

    Bukach, Cindy M.; Bub, Daniel N.; Masson, Michael E. J.; Lindsay, D. Stephen

    2004-01-01

    Studies of patients with category-specific agnosia (CSA) have given rise to multiple theories of object recognition, most of which assume the existence of a stable, abstract semantic memory system. We applied an episodic view of memory to questions raised by CSA in a series of studies examining normal observers' recall of newly learned attributes…

  4. Modeling 4D Human-Object Interactions for Joint Event Segmentation, Recognition, and Object Localization.

    PubMed

    Wei, Ping; Zhao, Yibiao; Zheng, Nanning; Zhu, Song-Chun

    2016-06-01

    In this paper, we present a 4D human-object interaction (4DHOI) model for solving three vision tasks jointly: i) event segmentation from a video sequence, ii) event recognition and parsing, and iii) contextual object localization. The 4DHOI model represents the geometric, temporal, and semantic relations in daily events involving human-object interactions. In 3D space, the interactions of human poses and contextual objects are modeled by semantic co-occurrence and geometric compatibility. On the time axis, the interactions are represented as a sequence of atomic event transitions with coherent objects. The 4DHOI model is a hierarchical spatial-temporal graph representation which can be used for inferring scene functionality and object affordance. The graph structures and parameters are learned using an ordered expectation maximization algorithm which mines the spatial-temporal structures of events from RGB-D video samples. Given an input RGB-D video, the inference is performed by a dynamic programming beam search algorithm which simultaneously carries out event segmentation, recognition, and object localization. We collected and released a large multiview RGB-D event dataset which contains 3,815 video sequences and 383,036 RGB-D frames captured by three RGB-D cameras. The experimental results on three challenging datasets demonstrate the strength of the proposed method.
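
    The inference step described above can be illustrated with a generic beam search over segmentations of a frame sequence. The sketch below covers only segmentation and event labeling (object localization is omitted), and segment_score is a hypothetical user-supplied scoring function, not the paper's model.

```python
def beam_search_segmentation(n_frames, labels, segment_score, beam_width=5):
    """Segment and label a frame sequence with beam search.

    segment_score(start, end, label) is a hypothetical scoring function; higher is better.
    Returns the best (score, [(start, end, label), ...]) hypothesis covering all frames.
    """
    # beams[t] holds the best hypotheses whose segments exactly cover frames [0, t).
    beams = {0: [(0.0, [])]}
    for t in range(1, n_frames + 1):
        candidates = []
        for s in range(t):
            for score, segs in beams.get(s, []):
                for label in labels:
                    candidates.append((score + segment_score(s, t, label),
                                       segs + [(s, t, label)]))
        beams[t] = sorted(candidates, key=lambda c: -c[0])[:beam_width]
    return beams[n_frames][0]

# Toy usage: prefer segments that are about 4 frames long.
best = beam_search_segmentation(10, ["reach", "drink"],
                                lambda s, e, label: -abs((e - s) - 4))
print(best)
```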

  5. Short-term plasticity of visuo-haptic object recognition.

    PubMed

    Kassuba, Tanja; Klinge, Corinna; Hölig, Cordula; Röder, Brigitte; Siebner, Hartwig R

    2014-01-01

    Functional magnetic resonance imaging (fMRI) studies have provided ample evidence for the involvement of the lateral occipital cortex (LO), fusiform gyrus (FG), and intraparietal sulcus (IPS) in visuo-haptic object integration. Here we applied 30 min of sham (non-effective) or real offline 1 Hz repetitive transcranial magnetic stimulation (rTMS) to perturb neural processing in left LO immediately before subjects performed a visuo-haptic delayed-match-to-sample task during fMRI. In this task, subjects had to match sample (S1) and target (S2) objects presented sequentially within or across vision and/or haptics in both directions (visual-haptic or haptic-visual) and decide whether or not S1 and S2 were the same objects. Real rTMS transiently decreased activity at the site of stimulation and remote regions such as the right LO and bilateral FG during haptic S1 processing. Without affecting behavior, the same stimulation gave rise to relative increases in activation during S2 processing in the right LO, left FG, bilateral IPS, and other regions previously associated with object recognition. Critically, the modality of S2 determined which regions were recruited after rTMS. Relative to sham rTMS, real rTMS induced increased activations during crossmodal congruent matching in the left FG for haptic S2 and the temporal pole for visual S2. In addition, we found stronger activations for incongruent than congruent matching in the right anterior parahippocampus and middle frontal gyrus for crossmodal matching of haptic S2 and in the left FG and bilateral IPS for unimodal matching of visual S2, only after real but not sham rTMS. The results imply that a focal perturbation of the left LO triggers modality-specific interactions between the stimulated left LO and other key regions of object processing possibly to maintain unimpaired object recognition. This suggests that visual and haptic processing engage partially distinct brain networks during visuo-haptic object matching.

  6. Robust feature detection for 3D object recognition and matching

    NASA Astrophysics Data System (ADS)

    Pankanti, Sharath; Dorai, Chitra; Jain, Anil K.

    1993-06-01

    Salient surface features play a central role in tasks related to 3-D object recognition and matching. There is a large body of psychophysical evidence demonstrating the perceptual significance of surface features such as local minima of principal curvatures in the decomposition of objects into a hierarchy of parts. Many recognition strategies employed in machine vision also directly use features derived from surface properties for matching. Hence, it is important to develop techniques that detect surface features reliably. Our proposed scheme consists of (1) a preprocessing stage, (2) a feature detection stage, and (3) a feature integration stage. The preprocessing step selectively smoothes out noise in the depth data without degrading salient surface details and permits reliable local estimation of the surface features. The feature detection stage detects both edge-based and region-based features, of which many are derived from curvature estimates. The third stage is responsible for integrating the information provided by the individual feature detectors. This stage also completes the partial boundaries provided by the individual feature detectors, using proximity and continuity principles of Gestalt. All our algorithms use local support and, therefore, are inherently parallelizable. We demonstrate the efficacy and robustness of our approach by applying it to two diverse domains of applications: (1) segmentation of objects into volumetric primitives and (2) detection of salient contours on free-form surfaces. We have tested our algorithms on a number of real range images with varying degrees of noise and missing data due to self-occlusion. The preliminary results are very encouraging.

  7. Expanded Dempster-Shafer reasoning technique for image feature integration and object recognition

    NASA Astrophysics Data System (ADS)

    Zhu, Quiming; Huang, Yinghua; Payne, Matt G.

    1992-12-01

    Integration of information from multiple sources has been one of the key steps to the success of general vision systems. It is also an essential problem in the development of color image understanding algorithms that make full use of the multichannel color data for object recognition. This paper presents a feature integration system characterized by a hybrid combination of a statistic-based reasoning technique and a symbolic logic-based inference method. A competitive evidence enhancement scheme is used in the process to fuse information from multiple sources. The scheme expands the Dempster-Shafer function of combination and improves the reliability of object recognition. When applied to integrate the object features extracted from the multiple spectra of the color images, the system alleviates the drawbacks of traditional Bayesian classification systems.
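
    For reference, the underlying Dempster-Shafer combination of two basic mass assignments can be sketched as below; the competitive evidence-enhancement extension proposed in the paper is not reproduced. Keys are frozensets of hypotheses, and the example masses are invented.

```python
def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two basic mass assignments (keys are frozensets of hypotheses)."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb          # mass assigned to contradictory pairs
    if conflict >= 1.0:
        raise ValueError("total conflict; Dempster's rule is undefined")
    # Renormalize the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Example: two feature channels giving evidence over the frame {car, tree}.
m_color = {frozenset({"car"}): 0.6, frozenset({"car", "tree"}): 0.4}
m_shape = {frozenset({"car"}): 0.5, frozenset({"tree"}): 0.3, frozenset({"car", "tree"}): 0.2}
print(dempster_combine(m_color, m_shape))
```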

  8. Object recognition testing: methodological considerations on exploration and discrimination measures.

    PubMed

    Akkerman, Sven; Blokland, Arjan; Reneerkens, Olga; van Goethem, Nick P; Bollen, Eva; Gijselaers, Hieronymus J M; Lieben, Cindy K J; Steinbusch, Harry W M; Prickaerts, Jos

    2012-07-01

    The object recognition task (ORT) is a popular one-trial learning test for animals. In the current study, we investigated several methodological issues concerning the task. Data was pooled from 28 ORT studies, containing 731 male Wistar rats. We investigated the relationship between 3 common absolute- and relative discrimination measures, as well as their relation to exploratory activity. In this context, the effects of pre-experimental habituation, object familiarity, trial duration, retention interval and the amnesic drugs MK-801 and scopolamine were investigated. Our analyses showed that the ORT is very sensitive, capable of detecting subtle differences in memory (discrimination) and exploratory performance. As a consequence, it is susceptible to potential biases due to (injection) stress and side effects of drugs. Our data indicated that a minimum amount of exploration is required in the sample and test trial for stable significant discrimination performance. However, there was no relationship between the level of exploration in the sample trial and discrimination performance. In addition, the level of exploration in the test trial was positively related to the absolute discrimination measure, whereas this was not the case for relative discrimination measures, which correct for exploratory differences, making them more resistant to exploration biases. Animals appeared to remember object information over multiple test sessions. Therefore, when animals have encountered both objects in prior test sessions, the object preference observed in the test trial of 1h retention intervals is probably due to a relative difference in familiarity between the objects in the test trial, rather than true novelty per se. Taken together, our findings suggest to take into consideration pre-experimental exposure (familiarization) to objects, habituation to treatment procedures, and the use of relative discrimination measures when using the ORT.
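
    The absolute and relative discrimination measures contrasted in this study are commonly computed along the following lines; exact definitions vary between labs, so treat this as a generic sketch rather than the authors' exact formulas.

```python
def discrimination_indices(time_novel: float, time_familiar: float):
    """Common object-recognition discrimination measures (definitions vary across labs).

    d1: absolute difference in exploration time (s)
    d2: d1 normalized by total exploration, which corrects for overall activity
    """
    total = time_novel + time_familiar
    d1 = time_novel - time_familiar
    d2 = d1 / total if total > 0 else float("nan")
    return d1, d2

# Example: 14 s spent on the novel object, 9 s on the familiar one.
print(discrimination_indices(14.0, 9.0))  # -> (5.0, ~0.217)
```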

  9. Exploring local regularities for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Tian, Huaiwen; Qin, Shengfeng

    2016-11-01

    In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that when two local regularities, L-MSDA and L-MSDSM, are combined, they can produce better performance. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined usage of L-MSDA and L-MSDSM with the identified weightings has a potential to be applied in other optimization based 3D recognition methods to improve their efficacy and robustness.

  10. Combining feature- and correspondence-based methods for visual object recognition.

    PubMed

    Westphal, Günter; Würtz, Rolf P

    2009-07-01

    We present an object recognition system built on a combination of feature- and correspondence-based pattern recognizers. The feature-based part, called preselection network, is a single-layer feedforward network weighted with the amount of information contributed by each feature to the decision at hand. For processing arbitrary objects, we employ small, regular graphs whose nodes are attributed with Gabor amplitudes, termed parquet graphs. The preselection network can quickly rule out most irrelevant matches and leaves only the ambiguous cases, so-called model candidates, to be verified by a rudimentary version of elastic graph matching, a standard correspondence-based technique for face and object recognition. According to the model, graphs are constructed that describe the object in the input image well. We report the results of experiments on standard databases for object recognition. The method achieved high recognition rates on identity and pose. Unlike many other models, it can also cope with varying background, multiple objects, and partial occlusion.

  11. Laptop Computer - Based Facial Recognition System Assessment

    SciTech Connect

    R. A. Cain; G. B. Singleton

    2001-03-01

    The objective of this project was to assess the performance of the leading commercial-off-the-shelf (COTS) facial recognition software package when used as a laptop application. We performed the assessment to determine the system's usefulness for enrolling facial images in a database from remote locations and conducting real-time searches against a database of previously enrolled images. The assessment involved creating a database of 40 images and conducting 2 series of tests to determine the product's ability to recognize and match subject faces under varying conditions. This report describes the test results and includes a description of the factors affecting the results. After an extensive market survey, we selected Visionics' FaceIt® software package for evaluation and a review of the Facial Recognition Vendor Test 2000 (FRVT 2000). This test was co-sponsored by the US Department of Defense (DOD) Counterdrug Technology Development Program Office, the National Institute of Justice, and the Defense Advanced Research Projects Agency (DARPA). Administered in May-June 2000, the FRVT 2000 assessed the capabilities of facial recognition systems that were currently available for purchase on the US market. Our selection of this Visionics product does not indicate that it is the "best" facial recognition software package for all uses. It was the most appropriate package based on the specific applications and requirements for this specific application. In this assessment, the system configuration was evaluated for effectiveness in identifying individuals by searching for facial images captured from video displays against those stored in a facial image database. An additional criterion was that the system be capable of operating discretely. For this application, an operational facial recognition system would consist of one central computer hosting the master image database with multiple standalone systems configured with duplicates of the master operating in

  12. Anthropomorphic robot for recognition and drawing generalized object images

    NASA Astrophysics Data System (ADS)

    Ginzburg, Vera M.

    1998-10-01

    The process of recognition, for instance understanding text written in different fonts, consists in stripping away the individual attributes of the letters in the particular font. It is shown that such a process, in nature and in technology, can be achieved by narrowing the spatial frequency content of the object's image through defocusing. In defocused images, only certain areas remain, the so-called Informative Fragments (IFs), which together form the generalized (stylized) image of many identical objects. It is shown that the variety of IF shapes is restricted and can be represented by a 'geometrical alphabet'. The 'letters' of this alphabet can be created using two basic 'genetic' figures: a stripe and a round spot. It is known from physiology that special cells of the visual cortex respond to these particular figures. A prototype of such a 'genetic' alphabet has been made using Boolean algebra (Venn diagrams). The algorithm for drawing the letter ('genlet') shapes in this alphabet and generalized images of objects (for example, a 'sleeping cat') is given. A scheme of an anthropomorphic robot is shown together with the results of a model computer experiment of the robot's action, 'drawing' the generalized image.

  13. Organization of face and object recognition in modular neural network models.

    PubMed

    Dailey, M N.; Cottrell, G W.

    1999-10-01

    There is strong evidence that face processing in the brain is localized. The double dissociation between prosopagnosia, a face recognition deficit occurring after brain damage, and visual object agnosia, difficulty recognizing other kinds of complex objects, indicates that face and non-face object recognition may be served by partially independent neural mechanisms. In this paper, we use computational models to show how the face processing specialization apparently underlying prosopagnosia and visual object agnosia could be attributed to (1) a relatively simple competitive selection mechanism that, during development, devotes neural resources to the tasks they are best at performing, (2) the developing infant's need to perform subordinate classification (identification) of faces early on, and (3) the infant's low visual acuity at birth. Inspired by de Schonen, Mancini and Liegeois' arguments (1998) [de Schonen, S., Mancini, J., Liegeois, F. (1998). About functional cortical specialization: the development of face recognition. In: F. Simon & G. Butterworth, The development of sensory, motor, and cognitive capacities in early infancy (pp. 103-116). Hove, UK: Psychology Press] that factors like these could bias the visual system to develop a processing subsystem particularly useful for face recognition, and Jacobs and Kosslyn's experiments (1994) [Jacobs, R. A., & Kosslyn, S. M. (1994). Encoding shape and spatial relations-the role of receptive field size in coordination complementary representations. Cognitive Science, 18(3), 361-368] in the mixtures of experts (ME) modeling paradigm, we provide a preliminary computational demonstration of how this theory accounts for the double dissociation between face and object processing. We present two feed-forward computational models of visual processing. In both models, the selection mechanism is a gating network that mediates a competition between modules attempting to classify input stimuli. In Model I, when the modules
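
    The mixtures-of-experts idea referred to above, a gating network mediating competition between modules, can be sketched as a forward pass in NumPy; the layer sizes, linear experts and two-module setup are illustrative choices, not the models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class MixtureOfExperts:
    """Two linear 'expert' modules plus a linear gating network (forward pass only)."""
    def __init__(self, n_in, n_out, n_experts=2):
        self.experts = [rng.normal(scale=0.1, size=(n_in, n_out)) for _ in range(n_experts)]
        self.gate = rng.normal(scale=0.1, size=(n_in, n_experts))

    def forward(self, x):
        gate_probs = softmax(x @ self.gate)                    # competition between modules
        outputs = np.stack([x @ W for W in self.experts], 1)   # each module's classification
        return (gate_probs[..., None] * outputs).sum(axis=1), gate_probs

moe = MixtureOfExperts(n_in=64, n_out=10)
y, g = moe.forward(rng.normal(size=(5, 64)))
print(y.shape, g.shape)  # (5, 10) (5, 2)
```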

  14. When Action Observation Facilitates Visual Perception: Activation in Visuo-Motor Areas Contributes to Object Recognition.

    PubMed

    Sim, Eun-Jin; Helbig, Hannah B; Graf, Markus; Kiefer, Markus

    2015-09-01

    Recent evidence suggests an interaction between the ventral visual-perceptual and dorsal visuo-motor brain systems during the course of object recognition. However, the precise function of the dorsal stream for perception remains to be determined. The present study specified the functional contribution of the visuo-motor system to visual object recognition using functional magnetic resonance imaging and event-related potential (ERP) during action priming. Primes were movies showing hands performing an action with an object with the object being erased, followed by a manipulable target object, which either afforded a similar or a dissimilar action (congruent vs. incongruent condition). Participants had to recognize the target object within a picture-word matching task. Priming-related reductions of brain activity were found in frontal and parietal visuo-motor areas as well as in ventral regions including inferior and anterior temporal areas. Effective connectivity analyses suggested functional influences of parietal areas on anterior temporal areas. ERPs revealed priming-related source activity in visuo-motor regions at about 120 ms and later activity in the ventral stream at about 380 ms. Hence, rapidly initiated visuo-motor processes within the dorsal stream functionally contribute to visual object recognition in interaction with ventral stream processes dedicated to visual analysis and semantic integration.

  15. The role of histamine receptors in the consolidation of object recognition memory.

    PubMed

    da Silveira, Clarice Krás Borges; Furini, Cristiane R G; Benetti, Fernando; Monteiro, Siomara da Cruz; Izquierdo, Ivan

    2013-07-01

    Findings have shown that histamine receptors in the hippocampus modulate the acquisition and extinction of fear motivated learning. In order to determine the role of hippocampal histaminergic receptors on recognition memory, adult male Wistar rats with indwelling infusion cannulae stereotaxically placed in the CA1 region of dorsal hippocampus were trained in an object recognition learning task involving exposure to two different stimulus objects in an enclosed environment. In the test session, one of the objects presented during training was replaced by a novel one. Recognition memory retention was assessed 24 h after training by comparing the time spent in exploration (sniffing and touching) of the known object with that of the novel one. When infused in the CA1 region immediately, 30, 120 or 360 min posttraining, the H1-receptor antagonist, pyrilamine, the H2-receptor antagonist, ranitidine, and the H3-receptor agonist, imetit, blocked long-term memory retention in a time dependent manner (30-120 min) without affecting general exploratory behavior, anxiety state or hippocampal function. Our data indicate that histaminergic system modulates consolidation of object recognition memory through H1, H2 and H3 receptors.

  16. Infrared detection, recognition and identification of handheld objects

    NASA Astrophysics Data System (ADS)

    Adomeit, Uwe

    2012-10-01

    A main criterion for comparison and selection of thermal imagers for military applications is their nominal range performance. This nominal range performance is calculated for a defined task and standardized target and environmental conditions. The only standardization available to date is STANAG 4347. The target defined there is based on a main battle tank in front view. Because of modified military requirements, this target is no longer up-to-date. Today, different tasks are of interest, especially differentiation between friend and foe and identification of humans. There is no direct way to differentiate between friend and foe in asymmetric scenarios, but one clue can be that someone is carrying a weapon. This clue can be transformed into the observer tasks of detection (a person is carrying or is not carrying an object), recognition (the object is a long / medium / short range weapon or civil equipment) and identification (the object can be named, e.g. AK-47, M-4, G36, RPG7, axe, shovel, etc.). These tasks can be assessed experimentally, and from the results of such an assessment a standard target for handheld objects may be derived. For a first assessment, a human carrying 13 different handheld objects in front of his chest was recorded at four different ranges with an IR-dual-band camera. From the recorded data, a perception experiment was prepared. It was conducted with 17 observers in a 13-alternative forced choice, unlimited observation time arrangement. The results of the test, together with Minimum Temperature Difference Perceived measurements of the camera and the temperature difference and critical dimension derived from the recorded imagery, allowed defining a first standard target according to the above tasks. This standard target consists of 2.5 / 3.5 / 5 DRI line pairs on target, 0.24 m critical size and 1 K temperature difference. The values are preliminary and have to be refined in the future. Necessary are different aspect angles, different
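
    Line-pairs-on-target criteria of this kind relate the target's critical size, the range, and the sensor's resolvable spatial frequency. The sketch below shows that arithmetic under the usual small-angle assumption; only the 0.24 m critical size and the 2.5 / 3.5 / 5 cycle criteria come from the abstract, while the 300 m range is purely illustrative.

```python
def required_spatial_frequency(cycles_on_target, critical_size_m, range_m):
    """Sensor spatial frequency (cycles/mrad) needed to place the given number of
    resolvable cycles (line pairs) across the target's critical dimension."""
    target_subtense_mrad = 1000.0 * critical_size_m / range_m   # small-angle approximation
    return cycles_on_target / target_subtense_mrad

# Detection / recognition / identification criteria at an assumed range of 300 m.
for task, n in [("detection", 2.5), ("recognition", 3.5), ("identification", 5.0)]:
    print(task, round(required_spatial_frequency(n, 0.24, 300.0), 2), "cycles/mrad")
```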

  17. The influence of color information on the recognition of color diagnostic and noncolor diagnostic objects.

    PubMed

    Bramão, Inês; Inácio, Filomena; Faísca, Luís; Reis, Alexandra; Petersson, Karl Magnus

    2011-01-01

    In the present study, the authors explore in detail the level of visual object recognition at which perceptual color information improves the recognition of color diagnostic and noncolor diagnostic objects. To address this issue, 3 object recognition tasks with different cognitive demands were designed: (a) an object verification task; (b) a category verification task; and (c) a name verification task. The authors found that perceptual color information improved color diagnostic object recognition mainly in tasks for which access to the semantic knowledge about the object was necessary to perform the task; that is, in category and name verification. In contrast, the authors found that perceptual color information facilitates noncolor diagnostic object recognition when access to the object's structural description from long-term memory was necessary--that is, object verification. In summary, the present study shows that the role of perceptual color information in object recognition is dependent on color diagnosticity.

  18. Severe Cross-Modal Object Recognition Deficits in Rats Treated Sub-Chronically with NMDA Receptor Antagonists are Reversed by Systemic Nicotine: Implications for Abnormal Multisensory Integration in Schizophrenia

    PubMed Central

    Jacklin, Derek L; Goel, Amit; Clementino, Kyle J; Hall, Alexander W M; Talpos, John C; Winters, Boyer D

    2012-01-01

    Schizophrenia is a complex and debilitating disorder, characterized by positive, negative, and cognitive symptoms. Among the cognitive deficits observed in patients with schizophrenia, recent work has indicated abnormalities in multisensory integration, a process that is important for the formation of comprehensive environmental percepts and for the appropriate guidance of behavior. Very little is known about the neural bases of such multisensory integration deficits, partly because of the lack of viable behavioral tasks to assess this process in animal models. In this study, we used our recently developed rodent cross-modal object recognition (CMOR) task to investigate multisensory integration functions in rats treated sub-chronically with one of two N-methyl-D-aspartate receptor (NMDAR) antagonists, MK-801, or ketamine; such treatment is known to produce schizophrenia-like symptoms. Rats treated with the NMDAR antagonists were impaired on the standard spontaneous object recognition (SOR) task, unimodal (tactile or visual only) versions of SOR, and the CMOR task with intermediate to long retention delays between acquisition and testing phases, but they displayed a selective CMOR task deficit when mnemonic demand was minimized. This selective impairment in multisensory information processing was dose-dependently reversed by acute systemic administration of nicotine. These findings suggest that persistent NMDAR hypofunction may contribute to the multisensory integration deficits observed in patients with schizophrenia and highlight the valuable potential of the CMOR task to facilitate further systematic investigation of the neural bases of, and potential treatments for, this hitherto overlooked aspect of cognitive dysfunction in schizophrenia. PMID:22669170

  19. 3-D Object Recognition from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case
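
    One step of such a workflow, separating above-ground objects using the DSM and DEM and grouping them into candidate regions, can be sketched with NumPy and SciPy as below; the height and size thresholds are illustrative, not the authors' values.

```python
import numpy as np
from scipy import ndimage

def extract_object_regions(dsm, dem, min_height=2.0, min_cells=25):
    """Group above-ground cells of a gridded point cloud into candidate object regions."""
    height_above_ground = dsm - dem                 # normalized height layer
    mask = height_above_ground > min_height         # keep cells well above the terrain
    labels, n = ndimage.label(mask)                 # connected-component labeling
    regions = []
    for region_id in range(1, n + 1):
        cells = labels == region_id
        if cells.sum() >= min_cells:                # suppress tiny clutter regions
            regions.append({
                "id": region_id,
                "cells": int(cells.sum()),
                "max_height": float(height_above_ground[cells].max()),
            })
    return labels, regions
```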

  20. Image quality analysis and improvement of Ladar reflective tomography for space object recognition

    NASA Astrophysics Data System (ADS)

    Wang, Jin-cheng; Zhou, Shi-wei; Shi, Liang; Hu, Yi-Hua; Wang, Yong

    2016-01-01

    Some problems in the application of Ladar reflective tomography (LRT) for space object recognition are studied in this work. An analytic target model is adopted to investigate the image reconstruction properties with a limited relative angle range, which are useful for verifying the target shape from the incomplete image, analyzing the shadowing effect of the target, and designing satellite payloads against recognition via the reflective tomography approach. We propose an iterative maximum-likelihood method based on Bayesian theory, which can effectively compress the pulse width and greatly improve the image resolution of an incoherent LRT system without loss of signal-to-noise ratio.
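
    A generic maximum-likelihood EM (MLEM) update for a linear measurement model with Poisson noise is sketched below for orientation; it is not the paper's specific formulation for reflective tomography, and the system matrix A is assumed given.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Generic maximum-likelihood EM iterations for y ~ Poisson(A @ x), with x >= 0."""
    x = np.ones(A.shape[1])
    norm = A.T @ np.ones_like(y, dtype=float) + eps   # column sums (sensitivity term)
    for _ in range(n_iter):
        ratio = y / (A @ x + eps)                     # compare data with current forward model
        x *= (A.T @ ratio) / norm                     # multiplicative update keeps x nonnegative
    return x
```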

  1. Cascade fuzzy ART: a new extensible database for model-based object recognition

    NASA Astrophysics Data System (ADS)

    Hung, Hai-Lung; Liao, Hong-Yuan M.; Lin, Shing-Jong; Lin, Wei-Chung; Fan, Kuo-Chin

    1996-02-01

    In this paper, we propose a cascade fuzzy ART (CFART) neural network which can be used as an extensible database in a model-based object recognition system. The proposed CFART networks can accept both binary and continuous inputs. Besides, it preserves the prominent characteristics of a fuzzy ART network and extends the fuzzy ART's capability toward a hierarchical class representation of input patterns. The learning processes of the proposed network are unsupervised and self-organizing, which include coupled top-down searching and bottom-up learning processes. In addition, a global searching tree is built to speed up the learning and recognition processes.
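
    For context, the standard fuzzy ART operations that such a network builds on (complement coding, choice function, vigilance test, fast learning) can be sketched as follows; the cascaded, hierarchical extension proposed in the paper is not reproduced, and the parameter values are typical defaults rather than the authors'.

```python
import numpy as np

class FuzzyART:
    """Minimal fuzzy ART module (standard algorithm, not the cascaded variant)."""
    def __init__(self, alpha=0.001, beta=1.0, rho=0.75):
        self.alpha, self.beta, self.rho = alpha, beta, rho
        self.w = []                                    # one weight vector per category

    def _complement_code(self, x):
        x = np.asarray(x, dtype=float)                 # inputs assumed scaled to [0, 1]
        return np.concatenate([x, 1.0 - x])

    def train(self, x):
        I = self._complement_code(x)
        # Order categories by the choice function T_j = |I ^ w_j| / (alpha + |w_j|).
        order = sorted(range(len(self.w)),
                       key=lambda j: -np.minimum(I, self.w[j]).sum() /
                                      (self.alpha + self.w[j].sum()))
        for j in order:
            match = np.minimum(I, self.w[j]).sum() / I.sum()
            if match >= self.rho:                      # vigilance test passed: resonance
                self.w[j] = self.beta * np.minimum(I, self.w[j]) + (1 - self.beta) * self.w[j]
                return j
        self.w.append(I.copy())                        # no category matched: create a new one
        return len(self.w) - 1
```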

  2. Moving Object Control System

    NASA Technical Reports Server (NTRS)

    Arndt, G. Dickey (Inventor); Carl, James R. (Inventor)

    2001-01-01

    A method is provided for controlling two objects relatively moveable with respect to each other. A plurality of receivers are provided for detecting a distinctive microwave signal from each of the objects and measuring the phase thereof with respect to a reference signal. The measured phase signal is used to determine a distance between each of the objects and each of the plurality of receivers. Control signals produced in response to the relative distances are used to control the position of the two objects.
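
    The basic phase-to-distance relation behind such a scheme can be illustrated as below; the carrier frequency and phase value are invented, and a real system must still resolve the ambiguity modulo one wavelength by other means.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def distance_from_phase(phase_rad: float, frequency_hz: float) -> float:
    """Distance implied by a measured phase offset, ambiguous modulo one wavelength."""
    wavelength = C / frequency_hz
    return (phase_rad % (2 * math.pi)) / (2 * math.pi) * wavelength

# Example: a 0.9 rad phase offset on an assumed 5.8 GHz microwave signal.
print(round(distance_from_phase(0.9, 5.8e9), 4), "m (mod", round(C / 5.8e9, 4), "m)")
```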

  3. Learning invariant object recognition from temporal correlation in a hierarchical network.

    PubMed

    Lessmann, Markus; Würtz, Rolf P

    2014-06-01

    Invariant object recognition, which means the recognition of object categories independent of conditions like viewing angle, scale and illumination, is a task of great interest that humans can fulfill much better than artificial systems. In recent years, several basic principles were derived from neurophysiological observations and careful consideration: (1) Developing invariance to possible transformations of the object by learning temporal sequences of visual features that occur during the respective alterations. (2) Learning in a hierarchical structure, so basic level (visual) knowledge can be reused for different kinds of objects. (3) Using feedback to compare predicted input with the current one for choosing an interpretation in the case of ambiguous signals. In this paper we propose a network which implements all of these concepts in a computationally efficient manner and gives very good results on standard object datasets. By dynamically switching off weakly active neurons and pruning weights, computation is sped up, and thus handling of large databases with several thousands of images and a number of categories of a similar order becomes possible. The involved parameters allow flexible adaptation to the information content of training data and allow tuning to different databases relatively easily. A precondition for successful learning is that training images are presented in an order ensuring that images of the same object under similar viewing conditions follow each other. Through an implementation with sparse data structures the system has moderate memory demands and still yields very good recognition rates.

  4. Real-time concealed-object detection and recognition with passive millimeter wave imaging.

    PubMed

    Yeom, Seokwon; Lee, Dong-Su; Jang, Yushin; Lee, Mun-Kyo; Jung, Sang-Won

    2012-04-23

    Millimeter wave (MMW) imaging is finding rapid adoption in security applications such as concealed object detection under clothing. A passive MMW imaging system can operate as a stand-off type sensor that scans people both indoors and outdoors. However, the imaging system often suffers from the diffraction limit and the low signal level. Therefore, suitable intelligent image processing algorithms would be required for automatic detection and recognition of the concealed objects. This paper proposes real-time outdoor concealed-object detection and recognition with a radiometric imaging system. The concealed object region is extracted by multi-level segmentation. A novel approach is proposed to measure similarity between two binary images. Principal component analysis (PCA) regularizes the shape in terms of translation and rotation. A geometry-based feature vector is composed of shape descriptors, which achieve scale- and orientation-invariant and distortion-tolerant properties. The class is decided by the minimum Euclidean distance between normalized feature vectors. Experiments confirm that the proposed methods provide fast and reliable recognition of the concealed object carried by a moving human subject.
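
    A rough sketch of the stages named above, PCA-based orientation normalization, simple geometric shape descriptors, and minimum-Euclidean-distance classification, is given below; the specific descriptors, the function names, and the prototype dictionary are illustrative and not the paper's.

```python
import numpy as np

def pca_align(coords):
    """Rotate foreground pixel coordinates into their principal axes
    (normalizes translation and rotation)."""
    centered = coords - coords.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T

def shape_descriptor(mask: np.ndarray) -> np.ndarray:
    """Scale- and orientation-tolerant geometric features of a binary object mask."""
    ys, xs = np.nonzero(mask)
    aligned = pca_align(np.column_stack([ys, xs]).astype(float))
    extent = aligned.max(axis=0) - aligned.min(axis=0) + 1e-9
    elongation = extent[1] / extent[0]                 # minor/major axis ratio
    fill = mask.sum() / (extent[0] * extent[1])        # area relative to aligned bounding box
    return np.array([elongation, fill])

def classify(mask, prototypes: dict) -> str:
    """Nearest prototype by Euclidean distance between normalized feature vectors."""
    f = shape_descriptor(mask)
    return min(prototypes, key=lambda name: np.linalg.norm(f - prototypes[name]))
```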

  5. Cross-modal object recognition and dynamic weighting of sensory inputs in a fish

    PubMed Central

    Schumacher, Sarah; Burt de Perera, Theresa; Thenert, Johanna; von der Emde, Gerhard

    2016-01-01

    Most animals use multiple sensory modalities to obtain information about objects in their environment. There is a clear adaptive advantage to being able to recognize objects cross-modally and spontaneously (without prior training with the sense being tested) as this increases the flexibility of a multisensory system, allowing an animal to perceive its world more accurately and react to environmental changes more rapidly. So far, spontaneous cross-modal object recognition has only been shown in a few mammalian species, raising the question as to whether such a high-level function may be associated with complex mammalian brain structures, and therefore absent in animals lacking a cerebral cortex. Here we use an object-discrimination paradigm based on operant conditioning to show, for the first time to our knowledge, that a nonmammalian vertebrate, the weakly electric fish Gnathonemus petersii, is capable of performing spontaneous cross-modal object recognition and that the sensory inputs are weighted dynamically during this task. We found that fish trained to discriminate between two objects with either vision or the active electric sense, were subsequently able to accomplish the task using only the untrained sense. Furthermore we show that cross-modal object recognition is influenced by a dynamic weighting of the sensory inputs. The fish weight object-related sensory inputs according to their reliability, to minimize uncertainty and to enable an optimal integration of the senses. Our results show that spontaneous cross-modal object recognition and dynamic weighting of sensory inputs are present in a nonmammalian vertebrate. PMID:27313211

  6. Associative recognition and the hippocampus: differential effects of hippocampal lesions on object-place, object-context and object-place-context memory.

    PubMed

    Langston, Rosamund F; Wood, Emma R

    2010-10-01

    The hippocampus is thought to be required for the associative recognition of objects together with the spatial or temporal contexts in which they occur. However, recent data showing that rats with fornix lesions perform as well as controls in an object-place task, while being impaired on an object-place-context task (Eacott and Norman (2004) J Neurosci 24:1948-1953), suggest that not all forms of context-dependent associative recognition depend on the integrity of the hippocampus. To examine the role of the hippocampus in context-dependent recognition directly, the present study tested the effects of large, selective, bilateral hippocampus lesions in rats on performance of a series of spontaneous recognition memory tasks: object recognition, object-place recognition, object-context recognition and object-place-context recognition. Consistent with the effects of fornix lesions, animals with hippocampus lesions were impaired only on the object-place-context task. These data confirm that not all forms of context-dependent associative recognition are mediated by the hippocampus. Subsequent experiments suggested that the object-place task does not require an allocentric representation of space, which could account for the lack of impairment following hippocampus lesions. Importantly, as the object-place-context task has similar spatial requirements, the selective deficit in object-place-context recognition suggests that this task requires hippocampus-dependent neural processes distinct from those required for allocentric spatial memory, or for object memory, object-place memory or object-context memory. Two possibilities are that object, place, and context information converge only in the hippocampus, or that recognition of integrated object-place-context information requires a hippocampus-dependent mode of retrieval, such as recollection.

  7. The speed of object recognition from a haptic glance: event-related potential evidence.

    PubMed

    Gurtubay-Antolin, Ane; Rodriguez-Herreros, Borja; Rodríguez-Fornells, Antoni

    2015-05-01

    Recognition of an object usually involves a wide range of sensory inputs. Accumulating evidence shows that the first brain responses associated with the visual discrimination of objects emerge around 150 ms, but fewer studies have been devoted to measuring the first neural signature of haptic recognition. To investigate the speed of haptic processing, we recorded event-related potentials (ERPs) during a shape discrimination task without visual information. After a restricted exploratory procedure, participants (n = 27) were instructed to judge whether the touched object corresponded to an expected object whose name had been previously presented on a screen. We found that any incongruence between the presented word and the shape of the object evoked a frontocentral negativity starting at ∼175 ms. Using source analysis and L2 minimum-norm estimation, the neural sources of this differential activity were located in higher-level somatosensory areas and prefrontal regions involved in error monitoring and cognitive control. Our findings reveal that the somatosensory system is able to complete an amount of haptic processing substantial enough to trigger conflict-related responses in medial and prefrontal cortices in <200 ms. The present results show that our haptic system is a fast recognition device closely interlinked with error- and conflict-monitoring processes.

  8. Communicative Signals Promote Object Recognition Memory and Modulate the Right Posterior STS.

    PubMed

    Redcay, Elizabeth; Ludlum, Ruth S; Velnoskey, Kayla R; Kanwal, Simren

    2016-01-01

    Detection of communicative signals is thought to facilitate knowledge acquisition early in life, but less is known about the role these signals play in adult learning or about the brain systems supporting sensitivity to communicative intent. The current study examined how ostensive gaze cues and communicative actions affect adult recognition memory and modulate neural activity as measured by fMRI. For both the behavioral and fMRI experiments, participants viewed a series of videos of an actress acting on one of two objects in front of her. Communicative context in the videos was manipulated in a 2 × 2 design in which the actress either had direct gaze (Gaze) or wore a visor (NoGaze) and either pointed at (Point) or reached for (Reach) one of the objects (target) in front of her. Participants then completed a recognition memory task with old (target and nontarget) objects and novel objects. Recognition memory for target objects in the Gaze conditions was greater than NoGaze, but no effects of gesture type were seen. Similarly, the fMRI video-viewing task revealed a significant effect of Gaze within right posterior STS (pSTS), but no significant effects of Gesture. Furthermore, pSTS sensitivity to Gaze conditions was related to greater memory for objects viewed in Gaze, as compared with NoGaze, conditions. Taken together, these results demonstrate that the ostensive, communicative signal of direct gaze preceding an object-directed action enhances recognition memory for attended items and modulates the pSTS response to object-directed actions. Thus, establishment of a communicative context through ostensive signals remains an important component of learning and memory into adulthood, and the pSTS may play a role in facilitating this type of social learning.

  9. Crowding, grouping, and object recognition: A matter of appearance

    PubMed Central

    Herzog, Michael H.; Sayim, Bilge; Chicherov, Vitaly; Manassi, Mauro

    2015-01-01

    In crowding, the perception of a target strongly deteriorates when neighboring elements are presented. Crowding is usually assumed to have the following characteristics. (a) Crowding is determined only by nearby elements within a restricted region around the target (Bouma's law). (b) Increasing the number of flankers can only deteriorate performance. (c) Target-flanker interference is feature-specific. These characteristics are usually explained by pooling models, which are well in the spirit of classic models of object recognition. In this review, we summarize recent findings showing that crowding is not determined by the above characteristics, thus, challenging most models of crowding. We propose that the spatial configuration across the entire visual field determines crowding. Only when one understands how all elements of a visual scene group with each other, can one determine crowding strength. We put forward the hypothesis that appearance (i.e., how stimuli look) is a good predictor for crowding, because both crowding and appearance reflect the output of recurrent processing rather than interactions during the initial phase of visual processing. PMID:26024452

  10. Crowding, grouping, and object recognition: A matter of appearance.

    PubMed

    Herzog, Michael H; Sayim, Bilge; Chicherov, Vitaly; Manassi, Mauro

    2015-01-01

    In crowding, the perception of a target strongly deteriorates when neighboring elements are presented. Crowding is usually assumed to have the following characteristics. (a) Crowding is determined only by nearby elements within a restricted region around the target (Bouma's law). (b) Increasing the number of flankers can only deteriorate performance. (c) Target-flanker interference is feature-specific. These characteristics are usually explained by pooling models, which are well in the spirit of classic models of object recognition. In this review, we summarize recent findings showing that crowding is not determined by the above characteristics, thus, challenging most models of crowding. We propose that the spatial configuration across the entire visual field determines crowding. Only when one understands how all elements of a visual scene group with each other, can one determine crowding strength. We put forward the hypothesis that appearance (i.e., how stimuli look) is a good predictor for crowding, because both crowding and appearance reflect the output of recurrent processing rather than interactions during the initial phase of visual processing.

  11. Acute restraint stress and corticosterone transiently disrupts novelty preference in an object recognition task.

    PubMed

    Vargas-López, Viviana; Torres-Berrio, Angélica; González-Martínez, Lina; Múnera, Alejandro; Lamprea, Marisol R

    2015-09-15

    The object recognition task is a procedure based on rodents' natural tendency to explore novel objects which is frequently used for memory testing. However, in some instances novelty preference is replaced by familiarity preference, raising questions regarding the validity of novelty preference as a pure recognition memory index. Acute stress- and corticosterone administration-induced novel object preference disruption has been frequently interpreted as memory impairment; however, it is still not clear whether such effect can be actually attributed to either mnemonic disruption or altered novelty seeking. Seventy-five adult male Wistar rats were trained in an object recognition task and subjected to either acute stress or corticosterone administration to evaluate the effect of stress or corticosterone on an object recognition task. Acute stress was induced by restraining movement for 1 or 4h, ending 30 min before the sample trial. Corticosterone was injected intraperitoneally 10 min before the test trial which was performed either 1 or 24h after the sample trial. Four-hour, but not 1-h, stress induced familiar object preference during the test trial performed 1h after the sample trial; however, acute stress had no effects on the test when performed 24h after sample trial. Systemic administration of corticosterone before the test trial performed either 1 or 24h after the sample trial also resulted in familiar object preference. However, neither acute stress nor corticosterone induced changes in locomotor behaviour. Taken together, such results suggested that acute stress probably does not induce memory retrieval impairment but, instead, induces an emotional arousing state which motivates novelty avoidance.

  12. The Role of Fixation and Visual Attention in Object Recognition.

    DTIC Science & Technology

    1995-01-01

    The difficulty of computing depth from stereo, by matching features and using trigonometry to convert disparity into depth, lies in the matching process (the correspondence problem). ... The system must avoid obstacles and perform other tasks which require recognizing specific objects in the environment. An active-attentive vision system is more robust ...
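
    The disparity-to-depth conversion mentioned in this excerpt follows from triangulation on a rectified stereo pair, Z = f * B / d. Below is a minimal sketch with hypothetical camera parameters (not taken from the report):

    ```python
    import numpy as np

    def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
        """Depth from a rectified stereo pair via triangulation: Z = f * B / d.

        The hard part in practice is the correspondence problem, i.e., obtaining
        a reliable disparity for each matched feature in the first place.
        """
        d = np.asarray(disparity_px, dtype=float)
        with np.errstate(divide="ignore"):
            depth = focal_length_px * baseline_m / d
        return np.where(d > 0, depth, np.nan)  # zero/negative disparity -> undefined depth

    # Hypothetical values: 700-pixel focal length, 12 cm baseline.
    print(disparity_to_depth([35.0, 14.0, 0.0], focal_length_px=700.0, baseline_m=0.12))
    ```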

  13. 3D video analysis of the novel object recognition test in rats.

    PubMed

    Matsumoto, Jumpei; Uehara, Takashi; Urakawa, Susumu; Takamura, Yusaku; Sumiyoshi, Tomiki; Suzuki, Michio; Ono, Taketoshi; Nishijo, Hisao

    2014-10-01

    The novel object recognition (NOR) test has been widely used to test memory function. We developed a 3D computerized video analysis system that estimates nose contact with an object in Long Evans rats to analyze object exploration during NOR tests. The results indicate that the 3D system reproducibly and accurately scores the NOR test. Furthermore, the 3D system captures a 3D trajectory of the nose during object exploration, enabling detailed analyses of spatiotemporal patterns of object exploration. The 3D trajectory analysis revealed a specific pattern of object exploration in the sample phase of the NOR test: normal rats first explored the lower parts of objects and then gradually explored the upper parts. A systemic injection of MK-801 suppressed changes in these exploration patterns. The results, along with those of previous studies, suggest that the changes in the exploration patterns reflect neophobia to a novel object and/or changes from spatial learning to object learning. These results demonstrate that the 3D tracking system is useful not only for detailed scoring of animal behaviors but also for investigation of characteristic spatiotemporal patterns of object exploration. The system has the potential to facilitate future investigation of neural mechanisms underlying object exploration that result from dynamic and complex brain activity.
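
    A tracking system of this kind ultimately reduces the nose trajectory to per-object exploration times and a discrimination index. The sketch below illustrates only that final scoring step, not the 3D video tracking itself; the contact radius, frame rate, and array names are assumptions rather than the authors' parameters.

    ```python
    import numpy as np

    def exploration_time(nose_xyz, object_xyz, fps, contact_radius=0.03):
        """Time (s) the nose spends within `contact_radius` meters of an object."""
        dists = np.linalg.norm(np.asarray(nose_xyz) - np.asarray(object_xyz), axis=1)
        return np.count_nonzero(dists < contact_radius) / fps

    def discrimination_index(t_novel, t_familiar):
        """Standard NOR index: (novel - familiar) / total exploration time."""
        total = t_novel + t_familiar
        return (t_novel - t_familiar) / total if total > 0 else np.nan

    # Hypothetical 3D nose trajectory (N x 3, meters) sampled at 30 frames per second.
    rng = np.random.default_rng(0)
    nose = rng.random((3000, 3)) * 0.5
    t_nov = exploration_time(nose, object_xyz=[0.1, 0.1, 0.0], fps=30)
    t_fam = exploration_time(nose, object_xyz=[0.4, 0.4, 0.0], fps=30)
    print(discrimination_index(t_nov, t_fam))
    ```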

  14. Histogram of Gabor phase patterns (HGPP): a novel object representation approach for face recognition.

    PubMed

    Zhang, Baochang; Shan, Shiguang; Chen, Xilin; Gao, Wen

    2007-01-01

    A novel object descriptor, the histogram of Gabor phase patterns (HGPP), is proposed for robust face recognition. In HGPP, quadrant-bit codes are first extracted from faces based on the Gabor transformation. The global Gabor phase pattern (GGPP) and local Gabor phase pattern (LGPP) are then proposed to encode the phase variations. GGPP captures the variations derived from the orientation changes of the Gabor wavelet at a given scale (frequency), while LGPP encodes the local neighborhood variations by using a novel local XOR pattern (LXP) operator. Both are divided into nonoverlapping rectangular regions, from which spatial histograms are extracted and concatenated into an extended histogram feature to represent the original image. Finally, recognition is performed with a nearest-neighbor classifier using histogram intersection as the similarity measure. HGPP has two notable features: 1) it can describe general face images robustly without a training procedure; 2) it encodes Gabor phase information, while most previous face recognition methods exploit Gabor magnitude information. In addition, the Fisher separation criterion is used to improve the performance of HGPP by weighting the subregions of the image according to their discriminative power. The proposed methods are successfully applied to face recognition, and the experimental results on the large-scale FERET and CAS-PEAL databases show that the proposed algorithms significantly outperform other well-known systems in terms of recognition rate.
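
    As a rough illustration of the local XOR pattern (LXP) encoding, spatial histograms, and histogram-intersection matching described above, here is a simplified sketch. It assumes the quadrant-bit Gabor phase maps have already been computed (a random bit map stands in for one here); the region grid and helper names are illustrative choices, not the paper's settings.

    ```python
    import numpy as np

    def lxp_codes(bit_map):
        """Local XOR pattern: XOR each pixel's bit with its 8 neighbors -> 8-bit code."""
        h, w = bit_map.shape
        codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
        center = bit_map[1:-1, 1:-1]
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        for k, (dy, dx) in enumerate(offsets):
            neighbor = bit_map[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            codes |= ((center ^ neighbor) << k).astype(np.uint8)
        return codes

    def region_histograms(codes, grid=(4, 4)):
        """Concatenate 256-bin histograms from nonoverlapping rectangular regions."""
        hs, ws = codes.shape[0] // grid[0], codes.shape[1] // grid[1]
        hists = [np.bincount(codes[i*hs:(i+1)*hs, j*ws:(j+1)*ws].ravel(), minlength=256)
                 for i in range(grid[0]) for j in range(grid[1])]
        return np.concatenate(hists).astype(float)

    def histogram_intersection(h1, h2):
        """Similarity used for nearest-neighbor matching of the spatial histograms."""
        return np.minimum(h1, h2).sum()

    # Hypothetical 1-bit Gabor phase map (one scale/orientation) for a 64x64 face crop.
    phase_bits = (np.random.default_rng(0).random((64, 64)) > 0.5).astype(np.uint8)
    feat = region_histograms(lxp_codes(phase_bits))
    print(histogram_intersection(feat, feat))
    ```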

  15. A Scientific Workflow Platform for Generic and Scalable Object Recognition on Medical Images

    NASA Astrophysics Data System (ADS)

    Möller, Manuel; Tuot, Christopher; Sintek, Michael

    In the research project THESEUS MEDICO we aim at a system combining medical image information with semantic background knowledge from ontologies to give clinicians fully cross-modal access to biomedical image repositories. Joint efforts therefore have to be made in more than one dimension: object detection processes have to be specified in which abstraction proceeds from low-level image features, through landmark detection utilizing abstract domain knowledge, up to high-level object recognition. We propose a system based on a client-server extension of the scientific workflow platform Kepler that assists the collaboration of medical experts and computer scientists during development and parameter learning.

  16. Regulation of object recognition and object placement by ovarian sex steroid hormones.

    PubMed

    Tuscher, Jennifer J; Fortress, Ashley M; Kim, Jaekyoon; Frick, Karyn M

    2015-05-15

    The ovarian hormones 17β-estradiol (E2) and progesterone (P4) are potent modulators of hippocampal memory formation. Both hormones have been demonstrated to enhance hippocampal memory by regulating the cellular and molecular mechanisms thought to underlie memory formation. Behavioral neuroendocrinologists have increasingly used the object recognition and object placement (object location) tasks to investigate the role of E2 and P4 in regulating hippocampal memory formation in rodents. These one-trial learning tasks are ideal for studying acute effects of hormone treatments on different phases of memory because they can be administered during acquisition (pre-training), consolidation (post-training), or retrieval (pre-testing). This review synthesizes the rodent literature testing the effects of E2 and P4 on object recognition (OR) and object placement (OP), and the molecular mechanisms in the hippocampus supporting memory formation in these tasks. Some general trends emerge from the data. Among gonadally intact females, object memory tends to be best when E2 and P4 levels are elevated during the estrous cycle, pregnancy, and in middle age. In ovariectomized females, E2 given before or immediately after testing generally enhances OR and OP in young and middle-aged rats and mice, although effects are mixed in aged rodents. Effects of E2 treatment on OR and OP memory consolidation can be mediated by both classical estrogen receptors (ERα and ERβ), and depend on glutamate receptors (NMDA, mGluR1) and activation of numerous cell signaling cascades (e.g., ERK, PI3K/Akt, mTOR) and epigenetic processes (e.g., histone acetylation, DNA methylation). Acute P4 treatment given immediately after training also enhances OR and OP in young and middle-aged ovariectomized females by activating similar cell signaling pathways as E2 (e.g., ERK, mTOR). The few studies that have administered both hormones in combination suggest that treatment can enhance OR and OP, but that effects

  17. Privacy protection schemes for fingerprint recognition systems

    NASA Astrophysics Data System (ADS)

    Marasco, Emanuela; Cukic, Bojan

    2015-05-01

    The deployment of fingerprint recognition systems has always raised concerns related to personal privacy. A fingerprint is permanently associated with an individual and, generally, it cannot be reset if compromised in one application. Given that fingerprints are not a secret, potential misuses besides personal recognition represent privacy threats and may lead to public distrust. Privacy mechanisms control access to personal information and limit the likelihood of intrusions. In this paper, image- and feature-level schemes for privacy protection in fingerprint recognition systems are reviewed. Storing only key features of a biometric signature can reduce the likelihood of biometric data being used for unintended purposes. In biometric cryptosystems and biometric-based key release, the biometric component verifies the identity of the user, while the cryptographic key protects the communication channel. In transformation-based approaches, only a transformed version of the original biometric signature is stored. Different applications can use different transforms. Matching is performed in the transformed domain, which enables the preservation of low error rates. Since such templates do not reveal information about individuals, they are referred to as cancelable templates. A compromised template can be re-issued using a different transform. At the image level, de-identification schemes can remove identifiers disclosed for objectives unrelated to the original purpose, while permitting other authorized uses of personal information. Fingerprint images can be de-identified by, for example, mixing fingerprints or removing the gender signature. In both cases, degradation of matching performance is minimized.
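
    One widely used family of cancelable-template transforms (offered here only as an illustration, not necessarily one of the schemes reviewed) is a user- and application-specific random projection of the feature vector: matching happens in the projected domain, and a compromised template is re-issued by changing the projection seed. A minimal sketch with hypothetical parameters:

    ```python
    import numpy as np

    def cancelable_template(features, seed, out_dim=64):
        """Project a biometric feature vector with a seed-specific random matrix.

        Re-issuing a template after compromise amounts to choosing a new seed;
        different applications can use different seeds so templates are unlinkable.
        """
        rng = np.random.default_rng(seed)
        projection = rng.standard_normal((out_dim, len(features)))
        return projection @ np.asarray(features, dtype=float)

    def match(template_a, template_b, threshold=0.9):
        """Match in the transformed domain via cosine similarity."""
        cos = np.dot(template_a, template_b) / (
            np.linalg.norm(template_a) * np.linalg.norm(template_b))
        return cos >= threshold

    # Hypothetical 256-dimensional minutiae-derived feature vector.
    probe = np.random.default_rng(1).random(256)
    enrolled = cancelable_template(probe, seed=2024)               # stored template
    print(match(cancelable_template(probe, seed=2024), enrolled))  # True: same user, same seed
    print(match(cancelable_template(probe, seed=9999), enrolled))  # False: re-issued seed
    ```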

  18. The importance of visual features in generic vs. specialized object recognition: a computational study

    PubMed Central

    Ghodrati, Masoud; Rajaei, Karim; Ebrahimpour, Reza

    2014-01-01

    It is debated whether the representation of objects in inferior temporal (IT) cortex is distributed over the activities of many neurons or whether there are restricted islands of neurons responsive to a specific set of objects. There are lines of evidence demonstrating that the fusiform face area (FFA, in humans) processes information related to specialized object recognition (here, within-category object recognition such as face identification). Physiological studies have also discovered several patches in the monkey ventral temporal lobe that are responsible for facial processing. Neuronal recording from these patches shows that neurons are highly selective for face images, whereas for other objects we do not see such selectivity in IT. However, it is also well supported that objects are encoded through distributed patterns of neural activities that are distinctive for each object category. It seems that visual cortex utilizes different mechanisms for between-category object recognition (e.g., face vs. non-face objects) and within-category object recognition (e.g., two different faces). In this study, we address this question with computational simulations. We use two biologically inspired object recognition models and define two experiments which address these issues. The models have a hierarchical structure of several processing layers that simply simulate visual processing from V1 to aIT. We show, through computational modeling, that the difference between these two mechanisms of recognition can underlie the visual feature extraction mechanism. It is argued that in order to perform generic and specialized object recognition, visual cortex must separate the mechanisms involved in within-category from between-category object recognition. High recognition performance in within-category object recognition can be guaranteed when class-specific features with intermediate size and complexity are extracted. However, generic object recognition requires a distributed universal

  19. Similarity dependency of the change in ERP component N1 accompanying with the object recognition learning.

    PubMed

    Tokudome, Wataru; Wang, Gang

    2012-01-01

    Performance during object recognition across views is largely dependent on inter-object similarity. The present study was designed to investigate the similarity dependency of object recognition learning on the changes in ERP component N1. Human subjects were asked to train themselves to recognize novel objects with different inter-object similarity by performing object recognition tasks. During the tasks, images of an object had to be discriminated from the images of other objects irrespective of the viewpoint. When objects had a high inter-object similarity, the ERP component, N1 exhibited a significant increase in both the amplitude and the latency variation across objects during the object recognition learning process, and the N1 amplitude and latency variation across the views of the same objects decreased significantly. In contrast, no significant changes were found during the learning process when using objects with low inter-object similarity. The present findings demonstrate that the changes in the variation of N1 that accompany the object recognition learning process are dependent upon the inter-object similarity and imply that there is a difference in the neuronal representation for object recognition when using objects with high and low inter-object similarity.

  20. Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition

    PubMed Central

    Cadieu, Charles F.; Hong, Ha; Yamins, Daniel L. K.; Pinto, Nicolas; Ardila, Diego; Solomon, Ethan A.; Majaj, Najib J.; DiCarlo, James J.

    2014-01-01

    The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of “kernel analysis” that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds. PMID:25521294

  1. Deep neural networks rival the representation of primate IT cortex for core visual object recognition.

    PubMed

    Cadieu, Charles F; Hong, Ha; Yamins, Daniel L K; Pinto, Nicolas; Ardila, Diego; Solomon, Ethan A; Majaj, Najib J; DiCarlo, James J

    2014-12-01

    The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of "kernel analysis" that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds.
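
    The kernel analysis referred to above measures generalization accuracy as a function of representational complexity. The sketch below illustrates the simpler, related idea of comparing two representations by the cross-validated accuracy of regularized linear readouts as the regularization is swept; it is a loose illustration of the general approach, not the authors' metric, and the data are synthetic.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def readout_curve(features, labels, c_values=(0.01, 0.1, 1.0, 10.0)):
        """Cross-validated readout accuracy as regularization (complexity) is relaxed."""
        return [cross_val_score(LogisticRegression(C=c, max_iter=1000),
                                features, labels, cv=5).mean()
                for c in c_values]

    # Synthetic stand-ins for two candidate representations of the same 200 images.
    rng = np.random.default_rng(0)
    rep_a = rng.standard_normal((200, 128))                  # e.g., model features
    rep_b = rep_a + 0.5 * rng.standard_normal((200, 128))    # e.g., noisier neural features
    labels = rep_a[:, :4].argmax(axis=1)                     # 4 decodable object categories
    print(readout_curve(rep_a, labels))
    print(readout_curve(rep_b, labels))
    ```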

  2. Role of the dentate gyrus in mediating object-spatial configuration recognition.

    PubMed

    Kesner, Raymond P; Taylor, James O; Hoge, Jennifer; Andy, Ford

    2015-02-01

    In the present study the effects of dorsal dentate gyrus (dDG) lesions in rats were tested on recognition memory tasks based on the interaction between objects, features of objects, and spatial features. The results indicated that the rats with dDG lesions did not differ from controls in recognition for a change within object feature configuration and object recognition tasks. In contrast, there was a deficit for the dDG lesioned rats relative to controls in recognition for a change within object-spatial feature configuration, complex object-place feature configuration and spatial recognition tasks. It is suggested that the dDG subregion of the hippocampus supports object-place and complex object-place feature information via a conjunctive encoding process.

  3. Sensor-independent approach to recognition: the object-based approach

    NASA Astrophysics Data System (ADS)

    Morrow, Jim C.; Hossain, Sqama

    1994-03-01

    This paper introduces a fundamentally different approach to recognition -- the object-based approach -- which is inherently knowledge-based and sensor independent. The paper begins with a description of an object-based recognition system, contrasting it with the image-based approach. Next, the multilevel stage of the system, incorporating several sensor data sources, is described. From these sources, elements of the situation hypothesis are generated as directed by the recognition goal. Depending on the degree of correspondence between the sensor-fed elements and the object-model-fed elements, a hypothetical element is created. The hypothetical element is further employed to develop evidence for the sensor-fed element through the inclusion of secondary sensor outputs. The sensor-fed element is thus modeled in more detail, and further evidence is added to the hypothetical element. Several levels of reasoning and data integration are involved in this overall process; further, a self-adjusting correction mechanism is included through feedback from the hypothetical element to the sensors, thus defining secondary output connections to the sensor-fed element. Some preliminary work based on this approach has been carried out, and initial results show improvements over the conventional image-based approach.

  4. Research on recognition methods of aphid objects in complex backgrounds

    NASA Astrophysics Data System (ADS)

    Zhao, Hui-Yan; Zhang, Ji-Hong

    2009-07-01

    In order to improve recognition accuracy among kinds of aphids in complex backgrounds, a recognition method based on the Dual-Tree Complex Wavelet Transform (DT-CWT) and a Support Vector Machine (LIBSVM) is proposed. First, the image is pretreated; second, texture features of aphid images from three crops are extracted with the DT-CWT to obtain the parameters of the training model; finally, the training model is used to recognize aphids among the three kinds of crops. Compared with the Gabor wavelet transform and traditional texture-extraction methods based on the Gray-Level Co-Occurrence Matrix (GLCM), the experimental results show that the method is practical and feasible and provides a basis for recognizing aphids of the same kind across the three crops.
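
    A rough pipeline in the spirit of the method described, pairing texture features from a dual-tree complex wavelet transform with an SVM, is sketched below. It assumes the third-party dtcwt and scikit-learn Python packages are available, and the per-subband magnitude statistics and all parameter values are assumptions rather than the authors' settings.

    ```python
    import numpy as np
    import dtcwt                      # dual-tree complex wavelet transform (third-party package)
    from sklearn.svm import SVC

    def dtcwt_texture_features(gray_image, nlevels=4):
        """Mean and std of complex-subband magnitudes at each level/orientation."""
        pyramid = dtcwt.Transform2d().forward(gray_image.astype(float), nlevels=nlevels)
        feats = []
        for highpass in pyramid.highpasses:      # one array per level, 6 orientations each
            mags = np.abs(highpass)
            feats.extend(mags.mean(axis=(0, 1)))
            feats.extend(mags.std(axis=(0, 1)))
        return np.array(feats)

    # Hypothetical training data: grayscale aphid image patches and crop labels (0, 1, 2).
    rng = np.random.default_rng(1)
    patches = [rng.random((64, 64)) for _ in range(30)]
    labels = rng.integers(0, 3, size=30)
    X = np.stack([dtcwt_texture_features(p) for p in patches])
    clf = SVC(kernel="rbf").fit(X, labels)
    print(clf.predict(X[:5]))
    ```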

  5. Crossmodal enhancement in the LOC for visuohaptic object recognition over development.

    PubMed

    Jao, R Joanne; James, Thomas W; James, Karin Harman

    2015-10-01

    Research has provided strong evidence of multisensory convergence of visual and haptic information within the visual cortex. These studies implement crossmodal matching paradigms to examine how systems use information from different sensory modalities for object recognition. Developmentally, behavioral evidence of visuohaptic crossmodal processing has suggested that communication within sensory systems develops earlier than across systems; nonetheless, it is unknown how the neural mechanisms driving these behavioral effects develop. To address this gap in knowledge, BOLD functional Magnetic Resonance Imaging (fMRI) was measured during delayed match-to-sample tasks that examined intramodal (visual-to-visual, haptic-to-haptic) and crossmodal (visual-to-haptic, haptic-to-visual) novel object recognition in children aged 7-8.5 years and adults. Tasks were further divided into sample encoding and test matching phases to dissociate the relative contributions of each. Results of crossmodal and intramodal object recognition revealed the network of known visuohaptic multisensory substrates, including the lateral occipital complex (LOC) and the intraparietal sulcus (IPS). Critically, both adults and children showed crossmodal enhancement within the LOC, suggesting a sensitivity to changes in sensory modality during recognition. These groups showed similar regions of activation, although children generally exhibited more widespread activity during sample encoding and weaker BOLD signal change during test matching than adults. Results further provided evidence of a bilateral region in the occipitotemporal cortex that was haptic-preferring in both age groups. This region abutted the bimodal LOtv, and was consistent with a medial to lateral organization that transitioned from a visual to haptic bias within the LOC. These findings converge with existing evidence of visuohaptic processing in the LOC in adults, and extend our knowledge of crossmodal processing in adults and

  6. Crossmodal enhancement in the LOC for visuohaptic object recognition over development

    PubMed Central

    Jao, R. Joanne; James, Thomas W.; James, Karin Harman

    2015-01-01

    Research has provided strong evidence of multisensory convergence of visual and haptic information within the visual cortex. These studies implement crossmodal matching paradigms to examine how systems use information from different sensory modalities for object recognition. Developmentally, behavioral evidence of visuohaptic crossmodal processing has suggested that communication within sensory systems develops earlier than across systems; nonetheless, it is unknown how the neural mechanisms driving these behavioral effects develop. To address this gap in knowledge, BOLD functional Magnetic Resonance Imaging (fMRI) was measured during delayed match-to-sample tasks that examined intramodal (visual-to-visual, haptic-to-haptic) and crossmodal (visual-to-haptic, haptic-to-visual) novel object recognition in children aged 7 to 8.5 years and adults. Tasks were further divided into sample encoding and test matching phases to dissociate the relative contributions of each. Results of crossmodal and intramodal object recognition revealed the network of known visuohaptic multisensory substrates, including the lateral occipital complex (LOC) and the intraparietal sulcus (IPS). Critically, both adults and children showed crossmodal enhancement within the LOC, suggesting a sensitivity to changes in sensory modality during recognition. These groups showed similar regions of activation, although children generally exhibited more widespread activity during sample encoding and weaker BOLD signal change during test matching than adults. Results further provided evidence of a bilateral region in the occipitotemporal cortex that was haptic-preferring in both age groups. This region abutted the bimodal LOtv, and was consistent with a medial to lateral organization that transitioned from a visual to haptic bias within the LOC. These findings converge with existing evidence of visuohaptic processing in the LOC in adults, and extend our knowledge of crossmodal processing in adults and

  7. Experience moderates overlap between object and face recognition, suggesting a common ability.

    PubMed

    Gauthier, Isabel; McGugin, Rankin W; Richler, Jennifer J; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E

    2014-07-03

    Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. If a subject has a lot of experience with objects and is found to perform poorly, they also prove to have a low ability with faces. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience.

  8. Experience moderates overlap between object and face recognition, suggesting a common ability

    PubMed Central

    Gauthier, Isabel; McGugin, Rankin W.; Richler, Jennifer J.; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E.

    2014-01-01

    Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. If a subject has a lot of experience with objects and is found to perform poorly, they also prove to have a low ability with faces. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience. PMID:24993021
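
    Statistically, the reported moderation corresponds to an interaction term in a regression of face recognition on object recognition and experience. The following is a minimal sketch of that model form on synthetic scores; it is not the authors' analysis.

    ```python
    import numpy as np

    def moderation_fit(face, obj, exp):
        """OLS fit of face ~ obj + exp + obj*exp; the last coefficient is the moderation term."""
        obj_c, exp_c = obj - obj.mean(), exp - exp.mean()  # center before forming the interaction
        X = np.column_stack([np.ones_like(obj_c), obj_c, exp_c, obj_c * exp_c])
        beta, *_ = np.linalg.lstsq(X, face, rcond=None)
        return dict(zip(["intercept", "object", "experience", "object_x_experience"], beta))

    # Synthetic scores in which the object-face relationship strengthens with experience.
    rng = np.random.default_rng(7)
    obj = rng.standard_normal(256)
    exp = rng.standard_normal(256)
    face = 0.1 * obj + 0.4 * obj * exp + rng.standard_normal(256)
    print(moderation_fit(face, obj, exp))
    ```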

  9. Crossmodal object recognition in rats with and without multimodal object pre-exposure: no effect of hippocampal lesions.

    PubMed

    Reid, James M; Jacklin, Derek L; Winters, Boyer D

    2012-10-01

    The neural mechanisms and brain circuitry involved in the formation, storage, and utilization of multisensory object representations are poorly understood. We have recently introduced a crossmodal object recognition (CMOR) task that enables the study of such questions in rats. Our previous research has indicated that the perirhinal and posterior parietal cortices functionally interact to mediate spontaneous (tactile-to-visual) CMOR performance in rats; however, it remains to be seen whether other brain regions, particularly those receiving polymodal sensory inputs, contribute to this cognitive function. In the current study, we assessed the potential contribution of one such polymodal region, the hippocampus (HPC), to crossmodal object recognition memory. Rats with bilateral excitotoxic HPC lesions were tested in two versions of crossmodal object recognition: (1) the original CMOR task, which requires rats to compare between a stored tactile object representation and visually-presented objects to discriminate the novel and familiar stimuli; and (2) a novel 'multimodal pre-exposure' version of the CMOR task (PE/CMOR), in which simultaneous exploration of the tactile and visual sensory features of an object 24 h prior to the sample phase enhances CMOR performance across longer retention delays. Hippocampus-lesioned rats performed normally on both crossmodal object recognition tasks, but were impaired on a radial arm maze test of spatial memory, demonstrating the functional effectiveness of the lesions. These results strongly suggest that the HPC, despite its polymodal anatomical connections, is not critically involved in tactile-to-visual crossmodal object recognition memory.

  10. Higher-Order Neural Networks Applied to 2D and 3D Object Recognition

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Reid, Max B.

    1994-01-01

    A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.
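
    A third-order HONN of the kind described builds invariance in by letting the weight for each triple of active pixels depend only on the interior angles of the triangle they form, which are unchanged by translation, in-plane rotation, and scale. The sketch below is a simplified, illustrative version (binary images, coarse angle bins, brute-force enumeration of triples) that also makes the stated combinatorial memory drawback plain; every parameter choice is an assumption.

    ```python
    import itertools
    import numpy as np

    ANGLE_BINS = 18  # coarse-code the three interior angles to keep the weight table small

    def triangle_key(p1, p2, p3):
        """Sorted, binned interior angles of the triangle (invariant to shift/rotation/scale)."""
        pts = np.array([p1, p2, p3], dtype=float)
        angles = []
        for i in range(3):
            a, b, c = pts[i], pts[(i + 1) % 3], pts[(i + 2) % 3]
            v1, v2 = b - a, c - a
            cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
            angles.append(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
        return tuple(sorted(int(a * ANGLE_BINS / 180.0) for a in angles))

    def triple_keys(binary_image):
        """Enumerate all triples of 'on' pixels (O(N^3) -- the HONN memory/size drawback)."""
        on = list(zip(*np.nonzero(binary_image)))
        return [triangle_key(*t) for t in itertools.combinations(on, 3)]

    def train_perceptron(images, targets, epochs=10):
        """Perceptron rule over the invariant triple keys; one view per class suffices."""
        weights = {}
        for _ in range(epochs):
            for img, t in zip(images, targets):
                keys = triple_keys(img)
                out = 1 if sum(weights.get(k, 0.0) for k in keys) > 0 else -1
                if out != t:
                    for k in keys:
                        weights[k] = weights.get(k, 0.0) + t
        return weights

    def classify(weights, img):
        return 1 if sum(weights.get(k, 0.0) for k in triple_keys(img)) > 0 else -1

    # Hypothetical tiny binary silhouettes; brute force is only practical for small input fields.
    img_a = np.zeros((16, 16), dtype=int); img_a[[2, 5, 9, 12], [3, 11, 4, 13]] = 1
    img_b = np.zeros((16, 16), dtype=int); img_b[[1, 3, 8, 14], [1, 14, 8, 2]] = 1
    w = train_perceptron([img_a, img_b], targets=[1, -1])
    print(classify(w, np.roll(img_a, shift=2, axis=1)))  # translated view still classified +1
    ```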

  11. Practical automatic Arabic license plate recognition system

    NASA Astrophysics Data System (ADS)

    Mohammad, Khader; Agaian, Sos; Saleh, Hani

    2011-02-01

    Since the 1970s, the need for an automatic license plate recognition system, sometimes referred to as an Automatic License Plate Recognition (ALPR) system, has been increasing. A license plate recognition system is an automatic system that is able to recognize a license plate number extracted from image sensors. In particular, Automatic License Plate Recognition systems are being used in conjunction with various transportation systems in application areas such as law enforcement (e.g., speed limit enforcement) and commercial uses such as parking enforcement, automatic toll payment, private and public entrances, border control, and theft and vandalism control. Vehicle license plate recognition has been intensively studied in many countries. Due to the different types of license plates being used, the requirements of an automatic license plate recognition system differ for each country. Generally, an automatic license plate localization and recognition system is made up of three modules: license plate localization, character segmentation, and optical character recognition. This paper presents an Arabic license plate recognition system that is insensitive to character size, font, shape, and orientation, with an extremely high accuracy rate. The proposed system is based on a combination of enhancement, license plate localization, morphological processing, and feature vector extraction using the Haar transform. The system is fast because the classification of alphabetic and numeral characters exploits the organization of the license plate. Experimental results for license plates from two different Arab countries show an average of 99% successful license plate localization and recognition on a total of more than 20 different images captured from a complex outdoor environment. The run time is lower than that of conventional methods and many state-of-the-art methods.
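
    The stages listed above (enhancement, localization, morphological processing, feature extraction) are typical of plate-recognition pipelines in general. The OpenCV sketch below shows only a generic enhancement-plus-morphology localization step, not the authors' Haar-transform feature extraction or classifier, and every threshold and kernel size is an assumption.

    ```python
    import cv2
    import numpy as np

    def locate_plate_candidates(bgr_image, min_aspect=2.0, max_aspect=6.0):
        """Generic plate localization: contrast enhancement, edges, morphology, contours."""
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
        gray = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)  # enhancement
        edges = cv2.Sobel(gray, cv2.CV_8U, 1, 0, ksize=3)                       # vertical strokes
        _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
        closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)              # merge characters
        contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        candidates = []
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if h > 0 and min_aspect <= w / h <= max_aspect and w * h > 1000:
                candidates.append((x, y, w, h))  # plate-shaped regions for later OCR
        return candidates

    # Hypothetical usage on a captured frame.
    # frame = cv2.imread("car.jpg")
    # for x, y, w, h in locate_plate_candidates(frame):
    #     cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    ```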

  12. Performance Evaluation of Neuromorphic-Vision Object Recognition Algorithms

    DTIC Science & Technology

    2014-08-01

    Author affiliations (partial): Malibu Canyon Rd, Malibu, CA 90265, USA; Lior Elazary, Randolph C. Voorhies, Daniel F. Parks, Laurent Itti, University of Southern California.

  13. Dual-Hierarchy Graph Method for Object Indexing and Recognition

    DTIC Science & Technology

    2014-07-01

    Authors: Isaac Weiss, Fan ... (Contract FA8750-12-C-0117; Program Element 62305E). Abstract excerpt: ... hierarchies. We then adjust the position and orientation of the node by minimizing its total energy using a simple one-step Newton method. Next we adjust ...

  14. Insular Cortex Is Involved in Consolidation of Object Recognition Memory

    ERIC Educational Resources Information Center

    Bermudez-Rattoni, Federico; Okuda, Shoki; Roozendaal, Benno; McGaugh, James L.

    2005-01-01

    Extensive evidence indicates that the insular cortex (IC), also termed gustatory cortex, is critically involved in conditioned taste aversion and taste recognition memory. Although most studies of the involvement of the IC in memory have investigated taste, there is some evidence that the IC is involved in memory that is not based on taste. In…

  15. Humans and Deep Networks Largely Agree on Which Kinds of Variation Make Object Recognition Harder.

    PubMed

    Kheradpisheh, Saeed R; Ghodrati, Masoud; Ganjtabesh, Mohammad; Masquelier, Timothée

    2016-01-01

    View-invariant object recognition is a challenging problem that has attracted much attention among the psychology, neuroscience, and computer vision communities. Humans are notoriously good at it, even if some variations are presumably more difficult to handle than others (e.g., 3D rotations). Humans are thought to solve the problem through hierarchical processing along the ventral stream, which progressively extracts more and more invariant visual features. This feed-forward architecture has inspired a new generation of bio-inspired computer vision systems called deep convolutional neural networks (DCNN), which are currently the best models for object recognition in natural images. Here, for the first time, we systematically compared human feed-forward vision and DCNNs at view-invariant object recognition task using the same set of images and controlling the kinds of transformation (position, scale, rotation in plane, and rotation in depth) as well as their magnitude, which we call "variation level." We used four object categories: car, ship, motorcycle, and animal. In total, 89 human subjects participated in 10 experiments in which they had to discriminate between two or four categories after rapid presentation with backward masking. We also tested two recent DCNNs (proposed respectively by Hinton's group and Zisserman's group) on the same tasks. We found that humans and DCNNs largely agreed on the relative difficulties of each kind of variation: rotation in depth is by far the hardest transformation to handle, followed by scale, then rotation in plane, and finally position (much easier). This suggests that DCNNs would be reasonable models of human feed-forward vision. In addition, our results show that the variation levels in rotation in depth and scale strongly modulate both humans' and DCNNs' recognition performances. We thus argue that these variations should be controlled in the image datasets used in vision research.

  16. Humans and Deep Networks Largely Agree on Which Kinds of Variation Make Object Recognition Harder

    PubMed Central

    Kheradpisheh, Saeed R.; Ghodrati, Masoud; Ganjtabesh, Mohammad; Masquelier, Timothée

    2016-01-01

    View-invariant object recognition is a challenging problem that has attracted much attention among the psychology, neuroscience, and computer vision communities. Humans are notoriously good at it, even if some variations are presumably more difficult to handle than others (e.g., 3D rotations). Humans are thought to solve the problem through hierarchical processing along the ventral stream, which progressively extracts more and more invariant visual features. This feed-forward architecture has inspired a new generation of bio-inspired computer vision systems called deep convolutional neural networks (DCNN), which are currently the best models for object recognition in natural images. Here, for the first time, we systematically compared human feed-forward vision and DCNNs at view-invariant object recognition task using the same set of images and controlling the kinds of transformation (position, scale, rotation in plane, and rotation in depth) as well as their magnitude, which we call “variation level.” We used four object categories: car, ship, motorcycle, and animal. In total, 89 human subjects participated in 10 experiments in which they had to discriminate between two or four categories after rapid presentation with backward masking. We also tested two recent DCNNs (proposed respectively by Hinton's group and Zisserman's group) on the same tasks. We found that humans and DCNNs largely agreed on the relative difficulties of each kind of variation: rotation in depth is by far the hardest transformation to handle, followed by scale, then rotation in plane, and finally position (much easier). This suggests that DCNNs would be reasonable models of human feed-forward vision. In addition, our results show that the variation levels in rotation in depth and scale strongly modulate both humans' and DCNNs' recognition performances. We thus argue that these variations should be controlled in the image datasets used in vision research. PMID:27642281

  17. Multifeatural shape processing in rats engaged in invariant visual object recognition.

    PubMed

    Alemi-Neissi, Alireza; Rosselli, Federica Bianca; Zoccolan, Davide

    2013-04-03

    The ability to recognize objects despite substantial variation in their appearance (e.g., because of position or size changes) represents such a formidable computational feat that it is widely assumed to be unique to primates. Such an assumption has restricted the investigation of its neuronal underpinnings to primate studies, which allow only a limited range of experimental approaches. In recent years, the increasingly powerful array of optical and molecular tools that has become available in rodents has spurred a renewed interest for rodent models of visual functions. However, evidence of primate-like visual object processing in rodents is still very limited and controversial. Here we show that rats are capable of an advanced recognition strategy, which relies on extracting the most informative object features across the variety of viewing conditions the animals may face. Rat visual strategy was uncovered by applying an image masking method that revealed the features used by the animals to discriminate two objects across a range of sizes, positions, in-depth, and in-plane rotations. Noticeably, rat recognition relied on a combination of multiple features that were mostly preserved across the transformations the objects underwent, and largely overlapped with the features that a simulated ideal observer deemed optimal to accomplish the discrimination task. These results indicate that rats are able to process and efficiently use shape information, in a way that is largely tolerant to variation in object appearance. This suggests that their visual system may serve as a powerful model to study the neuronal substrates of object recognition.

  18. Post-Training Reversible Inactivation of the Hippocampus Enhances Novel Object Recognition Memory

    ERIC Educational Resources Information Center

    Oliveira, Ana M. M.; Hawk, Joshua D.; Abel, Ted; Havekes, Robbert

    2010-01-01

    Research on the role of the hippocampus in object recognition memory has produced conflicting results. Previous studies have used permanent hippocampal lesions to assess the requirement for the hippocampus in the object recognition task. However, permanent hippocampal lesions may impact performance through effects on processes besides memory…

  19. The Consolidation of Object and Context Recognition Memory Involve Different Regions of the Temporal Lobe

    ERIC Educational Resources Information Center

    Balderas, Israela; Rodriguez-Ortiz, Carlos J.; Salgado-Tonda, Paloma; Chavez-Hurtado, Julio; McGaugh, James L.; Bermudez-Rattoni, Federico

    2008-01-01

    These experiments investigated the involvement of several temporal lobe regions in consolidation of recognition memory. Anisomycin, a protein synthesis inhibitor, was infused into the hippocampus, perirhinal cortex, insular cortex, or basolateral amygdala of rats immediately after the sample phase of object or object-in-context recognition memory…

  20. The development of newborn object recognition in fast and slow visual worlds.

    PubMed

    Wood, Justin N; Wood, Samantha M W

    2016-04-27

    Object recognition is central to perception and cognition. Yet relatively little is known about the environmental factors that cause invariant object recognition to emerge in the newborn brain. Is this ability a hardwired property of vision? Or does the development of invariant object recognition require experience with a particular kind of visual environment? Here, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) require visual experience with slowly changing objects to develop invariant object recognition abilities. When newborn chicks were raised with a slowly rotating virtual object, the chicks built invariant object representations that generalized across novel viewpoints and rotation speeds. In contrast, when newborn chicks were raised with a virtual object that rotated more quickly, the chicks built viewpoint-specific object representations that failed to generalize to novel viewpoints and rotation speeds. Moreover, there was a direct relationship between the speed of the object and the amount of invariance in the chick's object representation. Thus, visual experience with slowly changing objects plays a critical role in the development of invariant object recognition. These results indicate that invariant object recognition is not a hardwired property of vision, but is learned rapidly when newborns encounter a slowly changing visual world.

  1. Haptic Object Recognition is View-Independent in Early Blind but not Sighted People.

    PubMed

    Occelli, Valeria; Lacey, Simon; Stephens, Careese; John, Thomas; Sathian, K

    2016-03-01

    Object recognition, whether visual or haptic, is impaired in sighted people when objects are rotated between learning and test, relative to an unrotated condition; that is, recognition is view-dependent. Loss of vision early in life results in greater reliance on haptic perception for object identification compared with the sighted. Therefore, we hypothesized that early blind people may be more adept at recognizing objects despite spatial transformations. To test this hypothesis, we compared early blind and sighted control participants on a haptic object recognition task. Participants studied pairs of unfamiliar three-dimensional objects and performed a two-alternative forced-choice identification task, with the learned objects presented both unrotated and rotated 180° about the y-axis. Rotation impaired the recognition accuracy of sighted, but not blind, participants. We propose that, consistent with our hypothesis, haptic view-independence in the early blind reflects their greater experience with haptic object perception.

  2. Haptic object recognition is view-independent in early blind but not sighted people

    PubMed Central

    Occelli, Valeria; Lacey, Simon; Stephens, Careese; John, Thomas; Sathian, K.

    2016-01-01

    Object recognition, whether visual or haptic, is impaired in sighted people when objects are rotated between learning and test, relative to an unrotated condition, i.e., recognition is view-dependent. Loss of vision early in life results in greater reliance on haptic perception for object identification compared to the sighted. Therefore, we hypothesized that early blind people may be more adept at recognizing objects despite spatial transformations. To test this hypothesis, we compared early blind and sighted control participants on a haptic object recognition task. Participants studied pairs of unfamiliar 3-D objects and performed a two-alternative forced-choice identification task, with the learned objects presented both unrotated and rotated 180° about the y-axis. Rotation impaired the recognition accuracy of sighted, but not blind, participants. We propose that, consistent with our hypothesis, haptic view-independence in the early blind reflects their greater experience with haptic object perception. PMID:26562881

  3. Toward the ultimate synthesis/recognition system.

    PubMed

    Furui, S

    1995-10-24

    This paper predicts speech synthesis, speech recognition, and speaker recognition technology for the year 2001, and it describes the most important research problems to be solved in order to arrive at these ultimate synthesis and recognition systems. The problems for speech synthesis include natural and intelligible voice production, prosody control based on meaning, capability of controlling synthesized voice quality and choosing individual speaking style, multilingual and multidialectal synthesis, choice of application-oriented speaking styles, capability of adding emotion, and synthesis from concepts. The problems for speech recognition include robust recognition against speech variations, adaptation/normalization to variations due to environmental conditions and speakers, automatic knowledge acquisition for acoustic and linguistic modeling, spontaneous speech recognition, naturalness and ease of human-machine interaction, and recognition of emotion. The problems for speaker recognition are similar to those for speech recognition. The research topics related to all these techniques include the use of articulatory and perceptual constraints and evaluation methods for measuring the quality of technology and systems.

  4. Face Recognition Is Affected by Similarity in Spatial Frequency Range to a Greater Degree Than Within-Category Object Recognition

    ERIC Educational Resources Information Center

    Collin, Charles A.; Liu, Chang Hong; Troje, Nikolaus F.; McMullen, Patricia A.; Chaudhuri, Avi

    2004-01-01

    Previous studies have suggested that face identification is more sensitive to variations in spatial frequency content than object recognition, but none have compared how sensitive the 2 processes are to variations in spatial frequency overlap (SFO). The authors tested face and object matching accuracy under varying SFO conditions. Their results…

  5. The role of color information on object recognition: a review and meta-analysis.

    PubMed

    Bramão, Inês; Reis, Alexandra; Petersson, Karl Magnus; Faísca, Luís

    2011-09-01

    In this study, we systematically review the scientific literature on the effect of color on object recognition. Thirty-five independent experiments, comprising 1535 participants, were included in a meta-analysis. We found a moderate effect of color on object recognition (d=0.28). Specific effects of moderator variables were analyzed and we found that color diagnosticity is the factor with the greatest moderator effect on the influence of color in object recognition; studies using color diagnostic objects showed a significant color effect (d=0.43), whereas a marginal color effect was found in studies that used non-color diagnostic objects (d=0.18). The present study did not permit the drawing of specific conclusions about the moderator effect of the object recognition task; while the meta-analytic review showed that color information improves object recognition mainly in studies using naming tasks (d=0.36), the literature review revealed a large body of evidence showing positive effects of color information on object recognition in studies using a large variety of visual recognition tasks. We also found that color is important for the ability to recognize artifacts and natural objects, to recognize objects presented as types (line-drawings) or as tokens (photographs), and to recognize objects that are presented without surface details, such as texture or shadow. Taken together, the results of the meta-analysis strongly support the contention that color plays a role in object recognition. This suggests that the role of color should be taken into account in models of visual object recognition.
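
    For readers less familiar with the effect-size metric quoted above, Cohen's d for a two-condition (color vs. no-color) comparison and the weighted mean effect typically reported in a meta-analysis take the following general form; this is a generic fixed-effect illustration, and the review's exact estimator and weighting scheme may differ:

      \[
      d = \frac{\bar{x}_{\text{color}} - \bar{x}_{\text{no color}}}{s_{\text{pooled}}},
      \qquad
      s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}},
      \qquad
      \bar{d} = \frac{\sum_i w_i\, d_i}{\sum_i w_i},\quad w_i = \frac{1}{\widehat{\operatorname{Var}}(d_i)}.
      \]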

  6. An algorithm for recognition and localization of rotated and scaled objects

    NASA Technical Reports Server (NTRS)

    Peli, T.

    1981-01-01

    An algorithm for recognition and localization of objects, which is invariant to displacement and rotation, is extended to the recognition and localization of differently scaled, rotated, and displaced objects. The proposed algorithm provides an optimum way to find if a match exists between two objects that are scaled, rotated, and displaced, while the number of computations is of the same order as for equally scaled objects.

  7. Differential effects of spaced vs. massed training in long-term object-identity and object-location recognition memory.

    PubMed

    Bello-Medina, Paola C; Sánchez-Carrasco, Livia; González-Ornelas, Nadia R; Jeffery, Kathryn J; Ramírez-Amaya, Víctor

    2013-08-01

    Here we tested whether the well-known superiority of spaced training over massed training is equally evident in both object identity and object location recognition memory. We trained animals with objects placed in a variable or in a fixed location to produce a location-independent object identity memory or a location-dependent object representation. The training consisted of 5 trials that occurred either on one day (Massed) or over the course of 5 consecutive days (Spaced). The memory test was done in independent groups of animals either 24h or 7 days after the last training trial. In each test the animals were exposed to either a novel object, when trained with the objects in variable locations, or to a familiar object in a novel location, when trained with objects in fixed locations. The difference in time spent exploring the changed versus the familiar objects was used as a measure of recognition memory. For the object-identity-trained animals, spaced training produced clear evidence of recognition memory after both 24h and 7 days, but massed-training animals showed it only after 24h. In contrast, for the object-location-trained animals, recognition memory was evident after both retention intervals and with both training procedures. When objects were placed in variable locations for the two types of training and the test was done with a brand-new location, only the spaced-training animals showed recognition at 24h, but surprisingly, after 7 days, animals trained using both procedures were able to recognize the change, suggesting a post-training consolidation process. We suggest that the two training procedures trigger different neural mechanisms that may differ in the two segregated streams that process object information and that may consolidate differently.
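
    The exploration-time measure described above is often summarized as a discrimination index; one common convention, assumed here purely for illustration and not necessarily the exact measure used in this study, is

      \[
      DI = \frac{t_{\text{changed}} - t_{\text{familiar}}}{t_{\text{changed}} + t_{\text{familiar}}},
      \]

    with DI > 0 indicating recognition of the change and DI near 0 indicating chance-level exploration.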

  8. α₄β₂ Nicotinic receptor stimulation of the GABAergic system within the orbitofrontal cortex ameliorates the severe crossmodal object recognition impairment in ketamine-treated rats: implications for cognitive dysfunction in schizophrenia.

    PubMed

    Cloke, Jacob M; Winters, Boyer D

    2015-03-01

    Schizophrenia is associated with atypical multisensory integration. Rats treated sub-chronically with NMDA receptor antagonists to model schizophrenia are severely impaired on a tactile-to-visual crossmodal object recognition (CMOR) task, and this deficit is reversed by systemic nicotine. The current study assessed the receptor specificity of the ameliorative effect of nicotine in the CMOR task, as well as the potential for nicotinic receptor (nAChR) interactions with GABA and glutamate. Male Long-Evans rats were treated sub-chronically for 10 days with ketamine or saline and then tested on the CMOR task after a 10-day washout. Systemic nicotine given before the sample phase of the CMOR task reversed the ketamine-induced impairment, but this effect was blocked by co-administration of the GABAA receptor antagonist bicuculline at a dosage that itself did not cause impairment. Pre-sample systemic co-administration of the NMDA receptor antagonist MK-801 did not block the remediating effect of nicotine in ketamine-treated rats. The selective α7 nAChR agonist GTS-21 and α4β2 nAChR agonist ABT-418 were also tested, with only the latter reversing the ketamine impairment dose-dependently; bicuculline also blocked this effect. Similarly, infusions of nicotine or ABT-418 into the orbitofrontal cortex (OFC) reversed the CMOR impairment in ketamine-treated rats, and systemic bicuculline blocked the effect of intra-OFC ABT-418. These results suggest that nicotine-induced agonism of α4β2 nAChRs within the OFC ameliorates CMOR deficits in ketamine-treated rats via stimulation of the GABAergic system. The findings of this research may have important implications for understanding the nature and potential treatment of cognitive impairment in schizophrenia.

  9. Object recognition through a multi-mode fiber

    NASA Astrophysics Data System (ADS)

    Takagi, Ryosuke; Horisaki, Ryoichi; Tanida, Jun

    2017-02-01

    We present a method of recognizing an object through a multi-mode fiber. A number of speckle patterns transmitted through a multi-mode fiber are provided to a classifier based on machine learning. We experimentally demonstrated binary classification of face and non-face targets based on the method. The measurement process of the experimental setup was random and nonlinear because a multi-mode fiber is a typical strongly scattering medium and no reference light was used in our setup. Comparisons between three supervised learning methods, support vector machine, adaptive boosting, and neural network, are also provided. All of these learning methods achieved high accuracy rates of about 90% for the classification. The approach presented here can realize a compact and smart optical sensor, which is practically useful for medical applications such as endoscopy. Our study also indicates that artificial intelligence, which has progressed rapidly, holds promise for reducing optical and computational costs in optical sensing systems.
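
    A minimal sketch of the classification stage described above, assuming speckle images have already been recorded and flattened into vectors; the data below are synthetic stand-ins and the SVM settings are illustrative, not those of the paper:

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Synthetic stand-in for recorded speckle patterns: 200 samples of 64x64
      # intensity images, labelled 1 (face target) or 0 (non-face target).
      rng = np.random.default_rng(0)
      X = rng.random((200, 64 * 64))
      y = rng.integers(0, 2, size=200)

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

      # Linear support vector machine on the raw (flattened) speckle intensities.
      clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
      clf.fit(X_train, y_train)
      print("test accuracy:", clf.score(X_test, y_test))

    The adaptive boosting and neural network comparisons mentioned in the abstract would slot in by swapping the final pipeline step for a different classifier.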

  10. Model-based object recognition in range imagery

    NASA Astrophysics Data System (ADS)

    Armbruster, Walter

    2009-09-01

    The paper formulates the mathematical foundations of object discrimination and object re-identification in range image sequences using Bayesian decision theory. Object discrimination determines the unique model corresponding to each scene object, while object re-identification finds the unique object in the scene corresponding to a given model. In the first case object identities are independent; in the second case at most one object exists having a given identity. Efficient analytical and numerical techniques for updating and maximizing the posterior distributions are introduced. Experimental results indicate to what extent a single range image of an object can be used for re-identifying this object in arbitrary scenes. Applications including the protection of commercial vessels against piracy are discussed.
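
    In generic Bayesian decision-theoretic terms (our notation; the paper's specific likelihood models for range imagery are not reproduced here), object discrimination assigns to a scene object with range measurement z the maximum a posteriori model:

      \[
      P(m \mid z) = \frac{p(z \mid m)\,P(m)}{\sum_{m'} p(z \mid m')\,P(m')},
      \qquad
      \hat{m} = \arg\max_{m}\, P(m \mid z).
      \]

    Re-identification inverts the question: the model m is fixed and the decision is which scene object k, if any, maximizes the posterior P(m | z_k), subject to the constraint that at most one object carries that identity.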

  11. Characteristics of eye movements in 3-D object learning: comparison between within-modal and cross-modal object recognition.

    PubMed

    Ueda, Yoshiyuki; Saiki, Jun

    2012-01-01

    Recent studies have indicated that the object representation acquired during visual learning depends on the encoding modality during the test phase. However, the nature of the differences between within-modal learning (eg visual learning-visual recognition) and cross-modal learning (eg visual learning-haptic recognition) remains unknown. To address this issue, we utilised eye movement data and investigated object learning strategies during the learning phase of a cross-modal object recognition experiment. Observers informed of the test modality studied an unfamiliar visually presented 3-D object. Quantitative analyses showed that recognition performance was consistent regardless of rotation in the cross-modal condition, but was reduced when objects were rotated in the within-modal condition. In addition, eye movements during learning significantly differed between within-modal and cross-modal learning. Fixations were more diffused for cross-modal learning than in within-modal learning. Moreover, over the course of the trial, fixation durations became longer in cross-modal learning than in within-modal learning. These results suggest that the object learning strategies employed during the learning phase differ according to the modality of the test phase, and that this difference leads to different recognition performances.

  12. Parts and Relations in Young Children's Shape-Based Object Recognition

    ERIC Educational Resources Information Center

    Augustine, Elaine; Smith, Linda B.; Jones, Susan S.

    2011-01-01

    The ability to recognize common objects from sparse information about geometric shape emerges during the same period in which children learn object names and object categories. Hummel and Biederman's (1992) theory of object recognition proposes that the geometric shapes of objects have two components--geometric volumes representing major object…

  13. RecceMan: an interactive recognition assistance for image-based reconnaissance: synergistic effects of human perception and computational methods for object recognition, identification, and infrastructure analysis

    NASA Astrophysics Data System (ADS)

    El Bekri, Nadia; Angele, Susanne; Ruckhäberle, Martin; Peinsipp-Byma, Elisabeth; Haelke, Bruno

    2015-10-01

    This paper introduces an interactive recognition assistance system for imaging reconnaissance. This system supports aerial image analysts on missions during two main tasks: Object recognition and infrastructure analysis. Object recognition concentrates on the classification of one single object. Infrastructure analysis deals with the description of the components of an infrastructure and the recognition of the infrastructure type (e.g. military airfield). Based on satellite or aerial images, aerial image analysts are able to extract single object features and thereby recognize different object types. It is one of the most challenging tasks in the imaging reconnaissance. Currently, there are no high potential ATR (automatic target recognition) applications available, as consequence the human observer cannot be replaced entirely. State-of-the-art ATR applications cannot assume in equal measure human perception and interpretation. Why is this still such a critical issue? First, cluttered and noisy images make it difficult to automatically extract, classify and identify object types. Second, due to the changed warfare and the rise of asymmetric threats it is nearly impossible to create an underlying data set containing all features, objects or infrastructure types. Many other reasons like environmental parameters or aspect angles compound the application of ATR supplementary. Due to the lack of suitable ATR procedures, the human factor is still important and so far irreplaceable. In order to use the potential benefits of the human perception and computational methods in a synergistic way, both are unified in an interactive assistance system. RecceMan® (Reconnaissance Manual) offers two different modes for aerial image analysts on missions: the object recognition mode and the infrastructure analysis mode. The aim of the object recognition mode is to recognize a certain object type based on the object features that originated from the image signatures. The

  14. Object recognition by use of polarimetric phase-shifting digital holography.

    PubMed

    Nomura, Takanori; Javidi, Bahram

    2007-08-01

    Pattern recognition by use of polarimetric phase-shifting digital holography is presented. Using holography, the amplitude distribution and phase difference distribution between two orthogonal polarizations of three-dimensional (3D) or two-dimensional phase objects are obtained. This information contains both complex amplitude and polarimetric characteristics of the object, and it can be used for improving the discrimination capability of object recognition. Experimental results are presented to demonstrate the idea. To the best of our knowledge, this is the first report on 3D polarimetric recognition of objects using digital holography.
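
    For reference, the standard four-step phase-shifting relations recover phase and amplitude from interferograms recorded at reference phases 0, π/2, π and 3π/2; applying them separately to each of the two orthogonal polarization channels, as assumed here, yields the amplitude and phase-difference distributions described above:

      \[
      \varphi(x,y) = \arctan\frac{I_{3\pi/2}(x,y) - I_{\pi/2}(x,y)}{I_{0}(x,y) - I_{\pi}(x,y)},
      \qquad
      A(x,y) \propto \sqrt{\big(I_{3\pi/2} - I_{\pi/2}\big)^2 + \big(I_{0} - I_{\pi}\big)^2}.
      \]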

  15. How Does Using Object Names Influence Visual Recognition Memory?

    ERIC Educational Resources Information Center

    Richler, Jennifer J.; Palmeri, Thomas J.; Gauthier, Isabel

    2013-01-01

    Two recent lines of research suggest that explicitly naming objects at study influences subsequent memory for those objects at test. Lupyan (2008) suggested that naming "impairs" memory by a representational shift of stored representations of named objects toward the prototype (labeling effect). MacLeod, Gopie, Hourihan, Neary, and Ozubko (2010)…

  16. Symbolic Play Connects to Language through Visual Object Recognition

    ERIC Educational Resources Information Center

    Smith, Linda B.; Jones, Susan S.

    2011-01-01

    Object substitutions in play (e.g. using a box as a car) are strongly linked to language learning and their absence is a diagnostic marker of language delay. Classic accounts posit a symbolic function that underlies both words and object substitutions. Here we show that object substitutions depend on developmental changes in visual object…

  17. Combining depth and gray images for fast 3D object recognition

    NASA Astrophysics Data System (ADS)

    Pan, Wang; Zhu, Feng; Hao, Yingming

    2016-10-01

    Reliable and stable visual perception systems are needed for humanoid robotic assistants to perform complex grasping and manipulation tasks. Recognition of the object and its precise 6D pose is required. This paper addresses the challenge of detecting and positioning a textureless known object by estimating its complete 6D pose in cluttered scenes. A 3D perception system is proposed that can robustly recognize CAD models in cluttered scenes for the purpose of grasping with a mobile manipulator. Our approach uses a powerful combination of two different camera technologies, Time-Of-Flight (TOF) and RGB, to segment the scene and extract objects: the depth image and gray image are combined to recognize instances of a 3D object in the world and estimate their 3D poses. The full pose estimation process is based on depth image segmentation and an efficient shape-based matching. First, the depth image is used to separate the supporting plane of objects from the cluttered background; cluttered backgrounds are thus circumvented and the search space is greatly reduced. A hierarchical model based on the geometry of an a priori CAD model of the object is generated in an offline stage. Then, using the hierarchical model, we perform shape-based matching in the 2D gray images. Finally, we validate the proposed method in a number of experiments. The results show that using depth and gray images together can meet the demands of a time-critical application and significantly reduce the error rate of object recognition.
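
    A rough sketch of the two stages described above: supporting-plane removal on the depth data followed by shape-based matching on the gray image. The RANSAC plane fit is written out with NumPy, and plain normalized cross-correlation stands in for the paper's hierarchical matching model; the data are synthetic placeholders.

      import numpy as np
      import cv2  # OpenCV, used here only for the matching stand-in

      def ransac_plane(points, n_iters=200, thresh=0.01):
          """Return a boolean inlier mask for the dominant plane in an Nx3 point cloud."""
          rng = np.random.default_rng(0)
          best = np.zeros(len(points), dtype=bool)
          for _ in range(n_iters):
              p = points[rng.choice(len(points), 3, replace=False)]
              n = np.cross(p[1] - p[0], p[2] - p[0])
              if np.linalg.norm(n) < 1e-9:
                  continue
              n = n / np.linalg.norm(n)
              inliers = np.abs((points - p[0]) @ n) < thresh
              if inliers.sum() > best.sum():
                  best = inliers
          return best

      # Synthetic cloud: a flat supporting plane plus a small raised "object".
      rng = np.random.default_rng(1)
      plane = np.c_[rng.uniform(0, 1, (500, 2)), rng.normal(0, 0.002, 500)]
      obj = np.c_[rng.uniform(0.4, 0.5, (50, 2)), rng.uniform(0.05, 0.10, 50)]
      cloud = np.vstack([plane, obj])
      on_plane = ransac_plane(cloud)
      print("points left after plane removal:", (~on_plane).sum())

      # Matching stand-in: normalized cross-correlation of a gray-image template
      # over the (now much smaller) search region.
      gray = (rng.random((120, 160)) * 255).astype(np.uint8)
      template = gray[40:70, 60:100].copy()
      scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
      print("best match location:", cv2.minMaxLoc(scores)[3])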

  18. Higher-order neural network software for distortion invariant object recognition

    NASA Technical Reports Server (NTRS)

    Reid, Max B.; Spirkovska, Lilly

    1991-01-01

    The state-of-the-art in pattern recognition for such applications as automatic target recognition and industrial robotic vision relies on digital image processing. We present a higher-order neural network model and software which performs the complete feature extraction-pattern classification paradigm required for automatic pattern recognition. Using a third-order neural network, we demonstrate complete, 100 percent accurate invariance to distortions of scale, position, and in-plane rotation. In a higher-order neural network, feature extraction is built into the network, and does not have to be learned. Only the relatively simple classification step must be learned. This is key to achieving very rapid training. The training set is much smaller than with standard neural network software because the higher-order network only has to be shown one view of each object to be learned, not every possible view. The software and graphical user interface run on any Sun workstation. Results of the use of the neural software in autonomous robotic vision systems are presented. Such a system could have extensive application in robotic manufacturing.
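
    The built-in invariance of a third-order network can be illustrated compactly: every triple of active pixels contributes a feature indexed by the interior angles of the triangle it forms, and those angles are unchanged by translation, scaling, and in-plane rotation. The histogram binning below is an illustrative simplification, not the original implementation:

      import numpy as np
      from itertools import combinations

      def triangle_angle_histogram(img, n_bins=18):
          """Histogram of sorted interior angles over all triples of 'on' pixels."""
          pts = np.argwhere(img > 0).astype(float)
          hist = np.zeros((n_bins, n_bins))
          for a, b, c in combinations(pts, 3):
              sides = np.array([np.linalg.norm(b - c), np.linalg.norm(a - c), np.linalg.norm(a - b)])
              if np.any(sides < 1e-9):
                  continue
              # Law of cosines gives the three interior angles (they sum to pi).
              cosines = np.clip([
                  (sides[1]**2 + sides[2]**2 - sides[0]**2) / (2 * sides[1] * sides[2]),
                  (sides[0]**2 + sides[2]**2 - sides[1]**2) / (2 * sides[0] * sides[2]),
                  (sides[0]**2 + sides[1]**2 - sides[2]**2) / (2 * sides[0] * sides[1]),
              ], -1.0, 1.0)
              angles = np.sort(np.arccos(cosines))[:2]   # two smallest angles fix the triangle's shape
              i, j = (angles / np.pi * n_bins).astype(int).clip(0, n_bins - 1)
              hist[i, j] += 1
          total = hist.sum()
          return (hist / total).ravel() if total else hist.ravel()

      # A 3-pixel "object" and a translated copy yield identical feature vectors,
      # so only a simple classifier needs to be trained on top of this representation.
      img_a = np.zeros((16, 16)); img_a[[2, 5, 9], [3, 8, 4]] = 1
      img_b = np.zeros((16, 16)); img_b[[6, 9, 13], [7, 12, 8]] = 1
      print(np.allclose(triangle_angle_histogram(img_a), triangle_angle_histogram(img_b)))  # True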

  19. On the delay-dependent involvement of the hippocampus in object recognition memory.

    PubMed

    Hammond, Rebecca S; Tull, Laura E; Stackman, Robert W

    2004-07-01

    The role of the hippocampus in object recognition memory processes is unclear in the current literature. Conflicting results have been found in lesion studies of both primates and rodents. Procedural differences between studies, such as retention interval, may explain these discrepancies. In the present study, acute lidocaine administration was used to temporarily inactivate the hippocampus prior to training in the spontaneous object recognition task. Male C57BL/6J mice were administered bilateral lidocaine (4%, 0.5 microl/side) or aCSF (0.5 microl/side) directly into the CA1 region of the dorsal hippocampus 5 min prior to sample object training, and object recognition memory was tested after a short (5 min) or long (24 h) retention interval. There was no effect of intra-hippocampal lidocaine on the time needed for mice to accumulate sample object exploration, suggesting that inactivation of the hippocampus did not affect sample session activity or the motivation to explore objects. Lidocaine-treated mice exhibited impaired object recognition memory, measured as reduced novel object preference, after a 24 h but not a 5 min retention interval. These data support a delay-dependent role for the hippocampus in object recognition memory, an effect consistent with the results of hippocampal lesion studies conducted in rats. However, these data are also consistent with the view that the hippocampus is involved in object recognition memory regardless of retention interval, and that object recognition processes of parahippocampal structures (e.g., perirhinal cortex) are sufficient to support object recognition memory over short retention intervals.

  20. Shift- and scale-invariant recognition of contour objects with logarithmic radial harmonic filters.

    PubMed

    Moya, A; Esteve-Taboada, J J; García, J; Ferreira, C

    2000-10-10

    The phase-only logarithmic radial harmonic (LRH) filter has been shown to be suitable for scale-invariant block object recognition. However, an important set of objects is the collection of contour functions that results from a digital edge extraction of the original block objects. These contour functions have a constant width that is independent of the scale of the original object. Therefore, since the energy of the contour objects decreases more slowly with the scale factor than does the energy of the block objects, the phase-only LRH filter has difficulties in the recognition tasks when these contour objects are used. We propose a modified LRH filter that permits the realization of a shift- and scale-invariant optical recognition of contour objects. The modified LRH filter is a complex filter that compensates the energy variation resulting from the scaling of contour objects. Optical results validate the theory and show the utility of the newly proposed method.

  1. Statistical and neural network classifiers in model-based 3-D object recognition

    NASA Astrophysics Data System (ADS)

    Newton, Scott C.; Nutter, Brian S.; Mitra, Sunanda

    1991-02-01

    For autonomous machines equipped with vision capabilities and operating in a controlled environment, 3-D model-based object identification methodologies will in general solve rigid body recognition problems. In an uncontrolled environment, however, several factors pose difficulties for correct identification. We have addressed the problem of 3-D object recognition using a number of methods, including neural network classifiers and a Bayesian-like classifier for matching image data with model projection-derived data [1, 2]. The neural network classifiers used began operation as simple feature vector classifiers. However, unmodelled signal behavior was learned with additional samples, yielding great improvement in classification rates. The model analysis drastically shortened training time of both classification systems. In an environment where signal behavior is not accurately modelled, two separate forms of learning give the systems the ability to update estimates of this behavior. Required, of course, are sufficient samples to learn this new information. Given sufficient information and a well-controlled environment, identification of 3-D objects from a limited number of classes is indeed possible.

  2. Human hand descriptions and gesture recognition for object manipulation.

    PubMed

    Cobos, Salvador; Ferre, Manuel; Sánchez-Urán, M Ángel; Ortego, Javier; Aracil, Rafael

    2010-06-01

    This work focuses on obtaining realistic human hand models that are suitable for manipulation tasks. A 24 degrees of freedom (DoF) kinematic model of the human hand is defined. The model reasonably satisfies realism requirements in simulation and movement. To achieve realism, intra- and inter-finger constraints are obtained. The design of the hand model with 24 DoF is based upon a morphological, physiological and anatomical study of the human hand. The model is used to develop a gesture recognition procedure that uses principal components analysis (PCA) and discriminant functions. Two simplified hand descriptions (nine and six DoF) have been developed in accordance with the constraints obtained previously. The accuracy of the simplified models is almost 5% for the nine DoF hand description and 10% for the six DoF hand description. Finally, some criteria are defined by which to select the hand description best suited to the features of the manipulation task.
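
    A minimal sketch of the gesture-classification stage as described (principal components analysis followed by a discriminant classifier), with synthetic 24-DoF joint-angle vectors standing in for recorded hand data; the component count and class labels are illustrative:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in: 300 samples of 24 joint angles, 5 gesture classes.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 24))
      y = rng.integers(0, 5, size=300)

      # Reduce the 24-DoF description to a few principal components, then
      # classify with a linear discriminant function.
      model = make_pipeline(PCA(n_components=6), LinearDiscriminantAnalysis())
      print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())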

  3. Representational dynamics of object recognition: Feedforward and feedback information flows.

    PubMed

    Goddard, Erin; Carlson, Thomas A; Dermody, Nadene; Woolgar, Alexandra

    2016-03-01

    Object perception involves a range of visual and cognitive processes, and is known to include both a feedforward flow of information from early visual cortical areas to higher cortical areas and feedback from areas such as prefrontal cortex. Previous studies have found that low and high spatial frequency information regarding object identity may be processed over different timescales. Here we used the high temporal resolution of magnetoencephalography (MEG) combined with multivariate pattern analysis to measure information specifically related to object identity in peri-frontal and peri-occipital areas. Using stimuli closely matched in their low-level visual content, we found that activity in peri-occipital cortex could be used to decode object identity from ~80ms post stimulus onset, and activity in peri-frontal cortex could also be used to decode object identity from a later time (~265ms post stimulus onset). Low spatial frequency information related to object identity was present in the MEG signal at an earlier time than high spatial frequency information for peri-occipital cortex, but not for peri-frontal cortex. We additionally used Granger causality analysis to compare feedforward and feedback influences on representational content, and found evidence of both an early feedforward flow and later feedback flow of information related to object identity. We discuss our findings in relation to existing theories of object processing and propose how the methods we use here could be used to address further questions of the neural substrates underlying object perception.
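
    A schematic of the time-resolved decoding approach referred to above: a classifier is trained and tested independently at every time point, and the resulting accuracy curve shows when object-identity information becomes available. The array shapes and classifier choice are placeholders, not those of the study:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      # Placeholder MEG data: trials x sensors x time points, one identity label per trial.
      rng = np.random.default_rng(0)
      n_trials, n_sensors, n_times = 120, 160, 100
      data = rng.normal(size=(n_trials, n_sensors, n_times))
      labels = rng.integers(0, 2, size=n_trials)

      # Decode object identity separately at each time point.
      accuracy = np.array([
          cross_val_score(LogisticRegression(max_iter=1000), data[:, :, t], labels, cv=5).mean()
          for t in range(n_times)
      ])
      print("peak decoding accuracy:", accuracy.max(), "at time index", int(accuracy.argmax()))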

  4. Support plane method applied to ground objects recognition using modelled SAR images

    NASA Astrophysics Data System (ADS)

    Zherdev, Denis A.; Fursov, Vladimir A.

    2015-09-01

    In this study, the object recognition problem was solved using the support plane method. Modelled SAR images were used as feature vectors in the recognition algorithm. The radar backscattering of objects observed in different poses is represented in the SAR images. For real-time simulation, we used a simple mixture model of Lambertian-specular reflectivity, extending a ray-tracing algorithm to simulate SAR images of 3D man-made models. The suggested support plane algorithm is very effective for object recognition using SAR images and RCS diagrams.

  5. Automatic TLI recognition system, general description

    SciTech Connect

    Lassahn, G.D.

    1997-02-01

    This report is a general description of an automatic target recognition system developed at the Idaho National Engineering Laboratory for the Department of Energy. A user's manual is a separate volume, Automatic TLI Recognition System, User's Guide, and a programmer's manual is Automatic TLI Recognition System, Programmer's Guide. This system was designed as an automatic target recognition system for fast screening of large amounts of multi-sensor image data, based on low-cost parallel processors. This system naturally incorporates image data fusion, and it gives uncertainty estimates. It is relatively low cost, compact, and transportable. The software is easily enhanced to expand the system's capabilities, and the hardware is easily expandable to increase the system's speed. In addition to its primary function as a trainable target recognition system, this is also a versatile, general-purpose tool for image manipulation and analysis, which can be either keyboard-driven or script-driven. This report includes descriptions of three variants of the computer hardware, a description of the mathematical basis of the training process, and a description with examples of the system capabilities.

  6. Stereo disparity facilitates view generalization during shape recognition for solid multipart objects.

    PubMed

    Cristino, Filipe; Davitt, Lina; Hayward, William G; Leek, E Charles

    2015-01-01

    Current theories of object recognition in human vision make different predictions about whether the recognition of complex, multipart objects should be influenced by shape information about surface depth orientation and curvature derived from stereo disparity. We examined this issue in five experiments using a recognition memory paradigm in which observers (N = 134) memorized and then discriminated sets of 3D novel objects at trained and untrained viewpoints under either mono or stereo viewing conditions. In order to explore the conditions under which stereo-defined shape information contributes to object recognition we systematically varied the difficulty of view generalization by increasing the angular disparity between trained and untrained views. In one series of experiments, objects were presented from either previously trained views or untrained views rotated (15°, 30°, or 60°) along the same plane. In separate experiments we examined whether view generalization effects interacted with the vertical or horizontal plane of object rotation across 40° viewpoint changes. The results showed robust viewpoint-dependent performance costs: Observers were more efficient in recognizing learned objects from trained than from untrained views, and recognition was worse for extrapolated than for interpolated untrained views. We also found that performance was enhanced by stereo viewing but only at larger angular disparities between trained and untrained views. These findings show that object recognition is not based solely on 2D image information but that it can be facilitated by shape information derived from stereo disparity.

  7. Artificial neural networks and model-based recognition of 3-D objects from 2-D images

    NASA Astrophysics Data System (ADS)

    Chao, Chih-Ho; Dhawan, Atam P.

    1992-09-01

    A computer vision system is developed for 3-D object recognition using artificial neural networks and a knowledge-based top-down feedback analysis system. This computer vision system can adequately analyze an incomplete edge map provided by a low-level processor for 3-D representation and recognition using key features. The key features are selected using a priority assignment and then used in an artificial neural network for matching with model key features. The result of such matching is utilized in generating the model-driven top-down feedback analysis. From the incomplete edge map we try to pick a candidate pattern utilizing the key feature priority assignment. The highest priority is given for the most connected node and associated features. The features are space invariant structures and sets of orientation for edge primitives. These features are now mapped into real numbers. A Hopfield network is then applied with two levels of matching to reduce the search time. The first match is to choose the class of possible model, the second match is then to find the model closest to the data patterns. This model is then rotated in 3-D to find the best match with the incomplete edge patterns and to provide the additional features in 3-D. In the case of multiple objects, a dynamically interconnected search strategy is designed to recognize objects using one pattern at a time. This strategy is also useful in recognizing occluded objects. The experimental results presented show the capability and effectiveness of this system.

  8. Mechanisms and neural basis of object and pattern recognition: a study with chess experts.

    PubMed

    Bilalić, Merim; Langner, Robert; Erb, Michael; Grodd, Wolfgang

    2010-11-01

    Comparing experts with novices offers unique insights into the functioning of cognition, based on the maximization of individual differences. Here we used this expertise approach to disentangle the mechanisms and neural basis behind two processes that contribute to everyday expertise: object and pattern recognition. We compared chess experts and novices performing chess-related and -unrelated (visual) search tasks. As expected, the superiority of experts was limited to the chess-specific task, as there were no differences in a control task that used the same chess stimuli but did not require chess-specific recognition. The analysis of eye movements showed that experts immediately and exclusively focused on the relevant aspects in the chess task, whereas novices also examined irrelevant aspects. With random chess positions, when pattern knowledge could not be used to guide perception, experts nevertheless maintained an advantage. Experts' superior domain-specific parafoveal vision, a consequence of their knowledge about individual domain-specific symbols, enabled improved object recognition. Functional magnetic resonance imaging corroborated this differentiation between object and pattern recognition and showed that chess-specific object recognition was accompanied by bilateral activation of the occipitotemporal junction, whereas chess-specific pattern recognition was related to bilateral activations in the middle part of the collateral sulci. Using the expertise approach together with carefully chosen controls and multiple dependent measures, we identified object and pattern recognition as two essential cognitive processes in expert visual cognition, which may also help to explain the mechanisms of everyday perception.

  9. The resilience of object predictions: Early recognition across viewpoints and exemplars

    PubMed Central

    Cheung, Olivia S.; Bar, Moshe

    2013-01-01

    Recognition of everyday objects can be facilitated by top-down predictions. We have proposed that these predictions are derived from rudimentary shape information, or gist, extracted rapidly from low spatial frequencies (LSFs) in the image (Bar, 2003). Because of the coarse nature of LSF representations, we hypothesize here that such predictions can accommodate changes in viewpoint as well as facilitate the recognition of visually similar objects. In a repetition-priming task, we indeed observed significant facilitation of target recognition that was primed by LSF objects across moderate viewpoint changes, as well as across visually similar exemplars. These results suggest that the LSF representations are specific enough to activate accurate predictions, yet flexible enough to overcome small changes in visual appearance. Such gist representations facilitate object recognition by accommodating changes in visual appearance due to viewing conditions and help to generalize from familiar to novel exemplars. PMID:24234168
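
    The low-spatial-frequency "gist" discussed above can be approximated by low-pass filtering an image; a small sketch with a Gaussian filter, where the cutoff is an arbitrary illustrative value rather than the one used in the priming experiments:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      # Placeholder grayscale image; in the experiments this would be an object photograph.
      rng = np.random.default_rng(0)
      image = rng.random((256, 256))

      # A large Gaussian sigma keeps only coarse, low-spatial-frequency shape information,
      # discarding the fine detail that distinguishes visually similar exemplars.
      lsf_image = gaussian_filter(image, sigma=8)
      hsf_residual = image - lsf_image   # complementary high-spatial-frequency content
      print(lsf_image.shape, round(float(hsf_residual.std()), 4))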

  10. Design and implementation of knowledge-based framework for ground objects recognition in remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Shaobin; Ding, Mingyue; Cai, Chao; Fu, Xiaowei; Sun, Yue; Chen, Duo

    2009-10-01

    The advance of image processing makes knowledge-based automatic image interpretation much more realistic than ever. In the domain of remote sensing image processing, the introduction of knowledge enhances the confidence of recognition of typical ground objects. There are mainly two approaches to employing knowledge: the first is to scatter knowledge through concrete programs, so that the relevant knowledge of ground objects is fixed by programming; the second is to store knowledge systematically in a knowledge base, offering a unified source of instruction for each object recognition procedure. In this paper, a knowledge-based framework for ground object recognition in remote sensing images is proposed. This framework adopts the second approach to using knowledge, with a hierarchical architecture. The recognition of a typical airport demonstrated the feasibility of the proposed framework.

  11. Selecting and implementing a voice recognition system.

    PubMed

    Wheeler, S; Cassimus, G C

    1999-01-01

    A single radiology department serves the three separate organizations that comprise Emory Healthcare in Atlanta--three separate hospitals, the Emory Clinic and the Emory University School of Medicine. In 1996, the chairman of Emory Healthcare issued a mandate to the radiology department to decrease its report turnaround time, provide better service and increase customer satisfaction. The area where the greatest effect could be made without involving the transcription area was the "exam complete to dictate" piece of the reporting process. A committee investigating voice recognition systems established essential criteria for potential vendors. First, the system had to be able to download patient scheduling and demographic information from the existing RIS to the new system. Second, the system had to be flexible and straightforward for doctors to learn. It must have a word processing package for easy report correction and editing, and a microphone that would rewind and correct dictation before recognition took place. To keep capital costs low for the pilot, the committee opted for server recognition rather than purchase the expensive workstations necessary for real-time recognition. A switch was made later to real-time recognition. PACS and voice recognition have proven to be highly complementary. Most importantly, the new system has had a tremendous impact on turnaround time in the "dictate to final" phase. Once in the 30-hour range, 65 percent of the reports are now turned around in less than 15 minutes, 80 percent in less than 30 minutes, and 90 percent in less than an hour.

  12. Fast and flexible 3D object recognition solutions for machine vision applications

    NASA Astrophysics Data System (ADS)

    Effenberger, Ira; Kühnle, Jens; Verl, Alexander

    2013-03-01

    In automation and handling engineering, supplying work pieces between different stages along the production process chain is of special interest. Often the parts are stored unordered in bins or lattice boxes and hence have to be separated and ordered for feeding purposes. An alternative to complex and spacious mechanical systems such as bowl feeders or conveyor belts, which are typically adapted to the parts' geometry, is using a robot to grip the work pieces out of a bin or from a belt. Such applications are in need of reliable and precise computer-aided object detection and localization systems. For a restricted range of parts, there exists a variety of 2D image processing algorithms that solve the recognition problem. However, these methods are often not well suited for the localization of randomly stored parts. In this paper we present a fast and flexible 3D object recognizer that localizes objects by identifying primitive features within the objects. Since technical work pieces typically consist to a substantial degree of geometric primitives such as planes, cylinders and cones, such features usually carry enough information in order to determine the position of the entire object. Our algorithms use 3D best-fitting combined with an intelligent data pre-processing step. The capability and performance of this approach is shown by applying the algorithms to real data sets of different industrial test parts in a prototypical bin picking demonstration system.
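
    One common building block for the primitive-based localization described above is a least-squares plane fit to a patch of 3D points; a minimal sketch via singular value decomposition, with outlier handling and cylinder or cone fitting omitted:

      import numpy as np

      def fit_plane(points):
          """Least-squares plane through an Nx3 patch: returns (centroid, unit normal)."""
          centroid = points.mean(axis=0)
          # The normal is the right singular vector with the smallest singular value.
          _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
          return centroid, vt[-1]

      # Synthetic noisy patch lying near the plane z = 0.
      rng = np.random.default_rng(0)
      patch = np.c_[rng.uniform(-1, 1, (200, 2)), rng.normal(0, 0.01, 200)]
      c, n = fit_plane(patch)
      print("recovered plane normal:", np.round(n, 3))   # close to (0, 0, +/-1)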

  13. It’s all connected: Pathways in visual object recognition and early noun learning

    PubMed Central

    Smith, Linda B.

    2013-01-01

    A developmental pathway may be defined as the route, or chain of events, through which a new structure or function forms. For many human behaviors, including object name learning and visual object recognition, these pathways are often complex, multi-causal and include unexpected dependencies. This paper presents three principles of development that suggest the value of a developmental psychology that explicitly seeks to trace these pathways and uses empirical evidence on developmental dependencies between motor development, action on objects, visual object recognition and object name learning in 12 to 24 month old infants to make the case. The paper concludes with a consideration of the theoretical implications of this approach. PMID:24320634

  14. Dissociations in the effect of delay on object recognition: evidence for an associative model of recognition memory.

    PubMed

    Tam, Shu K E; Robinson, Jasper; Jennings, Dómhnall J; Bonardi, Charlotte

    2014-01-01

    Rats were administered 3 versions of an object recognition task: In the spontaneous object recognition task (SOR) animals discriminated between a familiar object and a novel object; in the temporal order task they discriminated between 2 familiar objects, 1 of which had been presented more recently than the other; and, in the object-in-place task, they discriminated among 4 previously presented objects, 2 of which were presented in the same locations as in preexposure and 2 in different but familiar locations. In each task animals were tested at 2 delays (5 min and 2 hr) between the sample and test phases in the SOR and object-in-place task, and between the 2 sample phases in the temporal order task. Performance in the SOR was poorer with the longer delay, whereas in the temporal order task performance improved with delay. There was no effect of delay on object-in-place performance. In addition the performance of animals with neurotoxic lesions of the dorsal hippocampus was selectively impaired in the object-in-place task at the longer delay. These findings are interpreted within the framework of Wagner's (1981) model of memory.

  15. Implicit encoding of extrinsic object properties in stored representations mediating recognition: evidence from shadow-specific repetition priming.

    PubMed

    Leek, E Charles; Davitt, Lina I; Cristino, Filipe

    2015-03-01

    This study investigated whether, and under what conditions, stored shape representations mediating recognition encode extrinsic object properties that vary according to viewing conditions. This was examined in relation to cast shadow. Observers (N = 90) first memorised a subset of 3D multi-part novel objects from a limited range of viewpoints rendered with either no shadow, object internal shadow, or both object internal and external (ground) plane shadow. During a subsequent test phase previously memorised targets were discriminated from visually similar distractors across learned and novel views following brief presentation of a same-shape masked prime. The primes contained either matching or mismatching shadow rendering from the training condition. The results showed a recognition advantage for objects memorised with object internal shadow. In addition, objects encoded with internal shadow were primed more strongly by matching internal shadow primes, than by same shape primes with either no shadow or both object internal and external (ground) shadow. This pattern of priming effects generalises to previously unseen views of targets rendered with object internal shadow. The results suggest that the object recognition system contains a level of stored representation at which shape and the extrinsic object property of cast shadow are bound. We propose that this occurs when cast shadow cannot be discounted during perception on the basis of external cues to the scene lighting model.

  16. Kappa Opioid Receptor-Mediated Disruption of Novel Object Recognition: Relevance for Psychostimulant Treatment

    PubMed Central

    Paris, Jason J.; Reilley, Kate J.; McLaughlin, Jay P.

    2012-01-01

    Kappa opioid receptor (KOR) agonists are potentially valuable as therapeutics for the treatment of psychostimulant reward as they suppress dopamine signaling in reward circuitry to repress drug seeking behavior. However, KOR agonists are also associated with sedation and cognitive dysfunction. The extent to which learning and memory disruption or hypolocomotion underlie KOR agonists’ role in counteracting the rewarding effects of psychostimulants is of interest. C57BL/6J mice were pretreated with vehicle (saline, 0.9%), the KOR agonist (trans)-3,4-dichloro-N-methyl-N-[2-(1-pyrrolidinyl)-cyclohexyl] benzeneacetamide (U50,488), or the peripherally-restricted agonist D-Phe-D-Phe-D-Ile-D-Arg-NH2 (ffir-NH2), through central (i.c.v.) or peripheral (i.p.) routes of administration. Locomotor activity was assessed via activity monitoring chambers and rotorod. Cognitive performance was assessed in a novel object recognition task. Prolonged hypolocomotion was observed following administration of 1.0 and 10.0, but not 0.3 mg/kg U50,488. Central, but not peripheral, administration of ffir-NH2 (a KOR agonist that does not cross the blood-brain barrier) also reduced motor behavior. Systemic pretreatment with the low dose of U50,488 (0.3 mg/kg, i.p.) significantly impaired performance in the novel object recognition task. Likewise, ffir-NH2 significantly reduced novel object recognition after central (i.c.v.), but not peripheral (i.p.), administration. U50,488- and ffir-NH2-mediated deficits in novel object recognition were prevented by pretreatment with KOR antagonists. Cocaine-induced conditioned place preference was subsequently assessed and was reduced by pretreatment with U50,488 (0.3 mg/kg, i.p.). Together, these results suggest that the activation of centrally-located kappa opioid receptors may induce cognitive and mnemonic disruption independent of hypolocomotor effects which may contribute to the KOR-mediated suppression of psychostimulant reward. PMID:22900234

  17. Do Simultaneously Viewed Objects Influence Scene Recognition Individually or as Groups? Two Perceptual Studies

    PubMed Central

    Gagne, Christopher R.; MacEvoy, Sean P.

    2014-01-01

    The ability to quickly categorize visual scenes is critical to daily life, allowing us to identify our whereabouts and to navigate from one place to another. Rapid scene categorization relies heavily on the kinds of objects scenes contain; for instance, studies have shown that recognition is less accurate for scenes to which incongruent objects have been added, an effect usually interpreted as evidence of objects' general capacity to activate semantic networks for scene categories they are statistically associated with. Essentially all real-world scenes contain multiple objects, however, and it is unclear whether scene recognition draws on the scene associations of individual objects or of object groups. To test the hypothesis that scene recognition is steered, at least in part, by associations between object groups and scene categories, we asked observers to categorize briefly-viewed scenes appearing with object pairs that were semantically consistent or inconsistent with the scenes. In line with previous results, scenes were less accurately recognized when viewed with inconsistent versus consistent pairs. To understand whether this reflected individual or group-level object associations, we compared the impact of pairs composed of mutually related versus unrelated objects; i.e., pairs, which, as groups, had clear associations to particular scene categories versus those that did not. Although related and unrelated object pairs equally reduced scene recognition accuracy, unrelated pairs were consistently less capable of drawing erroneous scene judgments towards scene categories associated with their individual objects. This suggests that scene judgments were influenced by the scene associations of object groups, beyond the influence of individual objects. More generally, the fact that unrelated objects were as capable of degrading categorization accuracy as related objects, while less capable of generating specific alternative judgments, indicates that the process

  18. Practical vision based degraded text recognition system

    NASA Astrophysics Data System (ADS)

    Mohammad, Khader; Agaian, Sos; Saleh, Hani

    2011-02-01

    Rapid growth and progress in the medical, industrial, security and technology fields means more and more consideration for the use of camera-based optical character recognition (OCR). Applying OCR to scanned documents is quite mature, and there are many commercial and research products available on this topic. These products achieve acceptable recognition accuracy and reasonable processing times, especially with trained software and constrained text characteristics. Even though the application space for OCR is huge, it is quite challenging to design a single system that is capable of performing automatic OCR for text embedded in an image irrespective of the application. Challenges for OCR systems include images taken under natural real-world conditions, surface curvature, text orientation, font, size, lighting conditions, and noise. These and many other conditions make it extremely difficult to achieve reasonable character recognition. Performance for conventional OCR systems drops dramatically as the degradation level of the text image quality increases. In this paper, a new recognition method is proposed to recognize solid or dotted line degraded characters. The degraded text string is localized and segmented using a new algorithm. The new method was implemented and tested using a development framework system that is capable of performing OCR on camera-captured images. The framework allows parameter tuning of the image-processing algorithm based on a training set of camera-captured text images. Novel methods were used for enhancement, text localization and segmentation, which enables building a custom system capable of performing automatic OCR for different applications. The developed framework system includes new image enhancement, filtering, and segmentation techniques which enabled higher recognition accuracies, faster processing time, and lower energy consumption, compared with the best state of the art published

  19. Object Recognition and Attention to Object Components by Preschool Children and 4-Month-Old Infants.

    ERIC Educational Resources Information Center

    Haaf, Robert A.

    2003-01-01

    This study investigated attention to and recognition of components in compound stimuli among infants and preschoolers. Oddity tasks with preschoolers and familiarization/novelty-preference tasks with infants demonstrated successful discrimination among stimuli components on basis of edge property information. Matching tasks with preschoolers and…

  20. Automatic TLI recognition system beta prototype testing

    SciTech Connect

    Lassahn, G.D.

    1996-06-01

    This report describes the beta prototype automatic target recognition system ATR3, and some performance tests done with this system. This is a fully operational system, with a high computational speed. It is useful for finding any kind of target in digitized image data, and as a general purpose image analysis tool.

  1. Complementary Hemispheric Asymmetries in Object Naming and Recognition: A Voxel-Based Correlational Study

    ERIC Educational Resources Information Center

    Acres, K.; Taylor, K. I.; Moss, H. E.; Stamatakis, E. A.; Tyler, L. K.

    2009-01-01

    Cognitive neuroscientific research proposes complementary hemispheric asymmetries in naming and recognising visual objects, with a left temporal lobe advantage for object naming and a right temporal lobe advantage for object recognition. Specifically, it has been proposed that the left inferior temporal lobe plays a mediational role linking…

  2. Dissociating the Effects of Angular Disparity and Image Similarity in Mental Rotation and Object Recognition

    ERIC Educational Resources Information Center

    Cheung, Olivia S.; Hayward, William G.; Gauthier, Isabel

    2009-01-01

    Performance is often impaired linearly with increasing angular disparity between two objects in tasks that measure mental rotation or object recognition. But increased angular disparity is often accompanied by changes in the similarity between views of an object, confounding the impact of the two factors in these tasks. We examined separately the…

  3. Category-specific interference of object recognition with biological motion perception.

    PubMed

    Wittinghofer, Karin; de Lussanet, Marc H E; Lappe, Markus

    2010-11-24

    The rapid and detailed recognition of human action from point-light displays is a remarkable ability and very robust against masking by motion signals. However, recognition of biological motion is strongly impaired when the typical point lights are replaced by pictures of complex objects. In a reaction time task and a detection in noise task, we asked subjects to decide if the walking direction is forward or backward. We found that complex objects as local elements impaired performance. When we compared different object categories, we found that human shapes as local objects gave more impairment than any other tested object category. Inverting or scrambling the human shapes restored the performance of walking perception. These results demonstrate an interference between object perception and biological motion recognition caused by shared processing capacities.

  4. The relationship between change detection and recognition of centrally attended objects in motion pictures.

    PubMed

    Angelone, Bonnie L; Levin, Daniel T; Simons, Daniel J

    2003-01-01

    Observers typically detect changes to central objects more readily than changes to marginal objects, but they sometimes miss changes to central, attended objects as well. However, even if observers do not report such changes, they may be able to recognize the changed object. In three experiments we explored change detection and recognition memory for several types of changes to central objects in motion pictures. Observers who failed to detect a change still performed at above chance levels on a recognition task in almost all conditions. In addition, observers who detected the change were no more accurate in their recognition than those who did not detect the change. Despite large differences in the detectability of changes across conditions, those observers who missed the change did not vary in their ability to recognize the changing object.

  5. Remote object recognition by analysis of surface structure

    NASA Astrophysics Data System (ADS)

    Wurster, J.; Stark, H.; Olsen, E. T.; Kogler, K.

    1995-06-01

    We present a new algorithm for the discrimination of remote objects by their surface structure. Starting from a range-azimuth profile function, we formulate a range-azimuth matrix whose largest eigenvalues are used as discriminating features to separate object classes. A simpler, competing algorithm uses the number of sign changes in the range-azimuth profile function to discriminate among classes. Whereas both algorithms work well on noiseless data, an experiment involving real data shows that the eigenvalue method is far more robust with respect to noise than is the sign-change method. Two well-known methods based on surface structure, variance, and fractal dimension were also tested on real data. Neither method furnished the aspect invariance and the discriminability of the eigenvalue method.
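
    A small sketch of the eigenvalue-feature idea described above: arrange the range-azimuth profile into a matrix and keep its few largest eigenvalues (by magnitude) as a compact surface-structure descriptor. The plain reshape used to build the matrix is an assumption; the paper's construction is not reproduced here:

      import numpy as np

      def eigen_features(profile, n_rows=16, k=4):
          """Top-k eigenvalue magnitudes of a range-azimuth matrix built from a 1-D profile."""
          m = profile[: n_rows * n_rows].reshape(n_rows, n_rows)   # illustrative matrix construction
          eigvals = np.linalg.eigvals(m)
          return np.sort(np.abs(eigvals))[::-1][:k]

      # Placeholder range-azimuth profile for one object observation.
      rng = np.random.default_rng(0)
      profile = rng.normal(size=400)
      print(eigen_features(profile))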

  6. An optimal sensing strategy for recognition and localization of 3-D natural quadric objects

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan; Hahn, Hernsoo

    1991-01-01

    An optimal sensing strategy for an optical proximity sensor system engaged in the recognition and localization of 3-D natural quadric objects is presented. The optimal sensing strategy consists of the selection of an optimal beam orientation and the determination of an optimal probing plane that compose an optimal data collection operation known as an optimal probing. The decision of an optimal probing is based on the measure of discrimination power of a cluster of surfaces on a multiple interpretation image (MII), where the measure of discrimination power is defined in terms of a utility function computing the expected number of interpretations that can be pruned out by a probing. An object representation suitable for active sensing based on a surface description vector (SDV) distribution graph and hierarchical tables is presented. Experimental results are shown.

  7. Object-oriented recognition of high-resolution remote sensing image

    NASA Astrophysics Data System (ADS)

    Wang, Yongyan; Li, Haitao; Chen, Hong; Xu, Yuannan

    2016-01-01

    With the development of remote sensing imaging technology and the improvement in the resolution of multi-source satellite imagery in the visible, multi-spectral and hyperspectral bands, high-resolution remote sensing images have been widely used in various fields, for example the military field, surveying and mapping, geophysical prospecting, the environment and so forth. In remote sensing imagery, the segmentation of ground targets, feature extraction and automatic recognition are hotspots and difficulties in modern information technology research. This paper presents an object-oriented remote sensing image scene classification method. The method consists of the generation of typical vehicle object classes, nonparametric density estimation, mean-shift segmentation, multi-scale corner detection, and template-based local shape matching. A remote sensing vehicle image classification software system is designed and implemented to meet these requirements.

  8. 3D Object Recognition: Symmetry and Virtual Views

    DTIC Science & Technology

    1992-12-01

    [Only report front matter survives in this record: MIT Artificial Intelligence Laboratory, 545 Technology Square, Cambridge; A.I. Memo No. 1409 / Center for Biological and Computational Learning, C.B.C.L. Paper No. 76, December 1992. The research was conducted within the Center for Biological and Computational Learning, Department of Brain and Cognitive Sciences, and at the Artificial Intelligence Laboratory.]

  9. Observations on Cortical Mechanisms for Object Recognition and Learning

    DTIC Science & Technology

    1993-12-01

    [OCR fragment of the report text: discusses matching of familiar objects such as the Eiffel Tower (M. Potter, pers. comm.), the activities of units at the output of the network, dendritic circuitry (see Poggio and Torre, 1978), and learning that could occur unsupervised; surviving references include Poggio and Torre, "A theory of synaptic interactions," and Nishihara and Poggio on stereo vision.]

  10. Informative Feature Selection for Object Recognition via Sparse PCA

    DTIC Science & Technology

    2011-04-07

    [OCR fragment of the report text: images from the BMW database [17] are used for training; for each image pair in a structure-from-motion (SfM) pipeline, SURF features are deemed informative based on the consensus of corresponding matches; the first two sparse principal vectors suffice to select informative features that lie on the foreground objects in the BMW database, which consists of wide-baseline, multiple-view images of 20 landmark buildings on the Berkeley campus.]

  11. Zero-Copy Objects System

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott C.

    2011-01-01

    Zero-Copy Objects System software enables application data to be encapsulated in layers of communication protocol without being copied. Indirect referencing enables application source data, either in memory or in a file, to be encapsulated in place within an unlimited number of protocol headers and/or trailers. Zero-copy objects (ZCOs) are abstract data access representations designed to minimize I/O (input/output) in the encapsulation of application source data within one or more layers of communication protocol structure. They are constructed within the heap space of a Simple Data Recorder (SDR) data store to which all participating layers of the stack must have access. Each ZCO contains general information enabling access to the core source data object (an item of application data), together with (a) a linked list of zero or more specific extents that reference portions of this source data object, and (b) linked lists of protocol header and trailer capsules. The concatenation of the headers (in ascending stack sequence), the source data object extents, and the trailers (in descending stack sequence) constitute the transmitted data object constructed from the ZCO. This scheme enables a source data object to be encapsulated in a succession of protocol layers without ever having to be copied from a buffer at one layer of the protocol stack to an encapsulating buffer at a lower layer of the stack. For large source data objects, the savings in copy time and reduction in memory consumption may be considerable.
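
    The layering described above is easier to picture with a small data structure that stores header capsules, source-data extents, and trailer capsules by reference only; the Python sketch below is purely illustrative (memoryview stands in for references into the SDR heap, and all names are hypothetical), concatenating the pieces logically without copying the payload.

      from dataclasses import dataclass, field
      from typing import Iterator, List

      @dataclass
      class Extent:
          """Reference to a portion of the source data object (no copy is made)."""
          source: memoryview
          offset: int
          length: int

          def view(self) -> memoryview:
              return self.source[self.offset:self.offset + self.length]

      @dataclass
      class ZeroCopyObject:
          headers: List[bytes] = field(default_factory=list)   # outermost header first
          extents: List[Extent] = field(default_factory=list)  # pieces of the source data
          trailers: List[bytes] = field(default_factory=list)  # outermost trailer last

          def encapsulate(self, header: bytes = b"", trailer: bytes = b"") -> None:
              """Wrap the object in one more (lower) protocol layer without
              touching the payload: prepend the header, append the trailer."""
              if header:
                  self.headers.insert(0, header)
              if trailer:
                  self.trailers.append(trailer)

          def transmit_units(self) -> Iterator[memoryview]:
              """Yield the transmitted data object piecewise: headers, then the
              source-data extents, then trailers, with no intermediate copies."""
              for h in self.headers:
                  yield memoryview(h)
              for e in self.extents:
                  yield e.view()
              for t in self.trailers:
                  yield memoryview(t)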

  12. Single prolonged stress impairs social and object novelty recognition in rats.

    PubMed

    Eagle, Andrew L; Fitzpatrick, Chris J; Perrine, Shane A

    2013-11-01

    Posttraumatic stress disorder (PTSD) results from exposure to a traumatic event and manifests as re-experiencing, arousal, avoidance, and negative cognition/mood symptoms. Avoidant symptoms, as well as the newly defined negative cognitions/mood, are a serious complication leading to diminished interest in once important or positive activities, such as social interaction; however, the basis of these symptoms remains poorly understood. PTSD patients also exhibit impaired object and social recognition, which may underlie the avoidance and symptoms of negative cognition, such as social estrangement or diminished interest in activities. Previous studies have demonstrated that single prolonged stress (SPS) models PTSD phenotypes, including impairments in learning and memory. Therefore, it was hypothesized that SPS would impair social and object recognition memory. Male Sprague Dawley rats were exposed to SPS and then tested in the social choice test (SCT) or novel object recognition test (NOR). These tests measure recognition of novelty over familiarity, a natural preference of rodents. Results show that SPS impaired preference for both social and object novelty. In addition, SPS impairment in social recognition may be caused by impaired behavioral flexibility, or an inability to shift behavior during the SCT. These results demonstrate that traumatic stress can impair social and object recognition memory, which may underlie certain avoidant symptoms or negative cognition in PTSD and be related to impaired behavioral flexibility.

  13. Speech recognition system for an automotive vehicle

    SciTech Connect

    Noso, K.; Futami, T.

    1987-01-13

    A speech recognition system is described for an automotive vehicle for activating vehicle actuators in response to predetermined spoken instructions supplied to the system via a microphone, which comprises: (a) a manually controlled record switch for deriving a record signal when activated; (b) a manually controlled recognition switch for deriving a recognition signal when activated; (c) a speech recognizer comprising means for sequentially recording reference spoken instructions whenever a reference spoken instruction is supplied to the system through the microphone while the record switch is activated, a memory having a storage area for each spoken instruction, means for shifting access to the next storage area after each spoken instruction has been recorded in the storage area provided therefor, and means for activating vehicle actuators sequentially whenever a spoken instruction to be recognized is supplied to the system via the microphone while the recognition switch is activated and is similar to a recorded reference spoken instruction; and (d) means for deriving a skip instruction signal and coupling it to the speech recognizer to shift access from the currently accessed storage area, intended for the current reference spoken instruction, to the succeeding storage area for the succeeding reference spoken instruction, even when the current reference spoken instruction is not supplied to the system through the microphone.
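
    The claim language is easier to follow as a small state machine; the sketch below is purely illustrative (hypothetical names, no relation to the patent's actual circuitry): reference instructions are recorded into successive storage areas, a skip signal advances past an area without recording, and recognition utterances are matched against the stored references.

      from typing import Callable, List, Optional

      class SpeechInstructionStore:
          def __init__(self, n_slots: int, similar: Callable[[bytes, bytes], bool]):
              self.slots: List[Optional[bytes]] = [None] * n_slots
              self.cursor = 0                 # currently accessed storage area
              self.similar = similar          # similarity test between utterances

          def record(self, utterance: bytes) -> None:
              """Record switch active: store a reference instruction, then advance."""
              if self.cursor < len(self.slots):
                  self.slots[self.cursor] = utterance
                  self.cursor += 1

          def skip(self) -> None:
              """Skip instruction signal: advance to the next storage area
              without recording anything in the current one."""
              if self.cursor < len(self.slots):
                  self.cursor += 1

          def recognize(self, utterance: bytes) -> Optional[int]:
              """Recognition switch active: return the index of a matching
              reference instruction (the caller maps it to a vehicle actuator)."""
              for i, ref in enumerate(self.slots):
                  if ref is not None and self.similar(utterance, ref):
                      return i
              return None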

  14. Fast Object Recognition in Noisy Images Using Simulated Annealing.

    DTIC Science & Technology

    1994-12-01

    The correlation coefficient is used as a measure of the match between a hypothesized object and an image. Templates are generated on-line during the search by transforming model images. Simulated annealing reduces the search time by orders of magnitude with respect to an exhaustive search. The algorithm is applied to the problem of how landmarks, for example, traffic signs, can be recognized by an autonomous vehicle or a navigating robot. The algorithm works well in noisy, real-world images of complicated scenes for model images with high information content.
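
    A minimal sketch of the search strategy described above: hypothesize a pose, extract the corresponding image patch, score it with the normalized correlation coefficient, and accept or reject moves under a falling temperature. The transform model (integer translation only), move size, and cooling schedule are assumptions made for illustration.

      import numpy as np

      def ncc(patch: np.ndarray, template: np.ndarray) -> float:
          """Normalized correlation coefficient between two equally sized arrays."""
          a = patch - patch.mean()
          b = template - template.mean()
          denom = np.linalg.norm(a) * np.linalg.norm(b)
          return float((a * b).sum() / denom) if denom > 0 else -1.0

      def anneal_match(image: np.ndarray, template: np.ndarray,
                       n_iter: int = 5000, t0: float = 1.0, cooling: float = 0.999):
          """Search over integer translations by simulated annealing."""
          rng = np.random.default_rng(0)
          h, w = template.shape
          H, W = image.shape
          pos = np.array([rng.integers(0, H - h), rng.integers(0, W - w)])
          score = ncc(image[pos[0]:pos[0] + h, pos[1]:pos[1] + w], template)
          best_pos, best_score, t = pos.copy(), score, t0
          for _ in range(n_iter):
              cand = pos + rng.integers(-5, 6, size=2)          # small random move
              cand[0] = np.clip(cand[0], 0, H - h)
              cand[1] = np.clip(cand[1], 0, W - w)
              cand_score = ncc(image[cand[0]:cand[0] + h, cand[1]:cand[1] + w], template)
              # Metropolis acceptance: always take improvements, sometimes accept worse.
              if cand_score > score or rng.random() < np.exp((cand_score - score) / t):
                  pos, score = cand, cand_score
                  if score > best_score:
                      best_pos, best_score = pos.copy(), score
              t *= cooling                                      # cool the temperature
          return best_pos, best_score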

  15. An Approach to Object Recognition: Aligning Pictorial Descriptions.

    DTIC Science & Technology

    1986-12-01

    [OCR fragment of the report text: refers to recent reviews (Binford 1982; Pinker 1984) and to Figure 1, objects that can be recognized readily; surviving references include M.E. Stevens et al. (eds.), Optical Character Recognition, Washington: McGregor & Werner Inc.; Asada, H. & Brady, M. (1985), The curvature primal sketch, IEEE; a paper on face view and gaze direction, Proc. Roy. Soc. B, 223, 293-317; Pinker, S. (1984), Visual cognition: an introduction, Cognition, 18, 1-63; and Potmesil, M.]

  16. A correlation-based algorithm for recognition and tracking of partially occluded objects

    NASA Astrophysics Data System (ADS)

    Ruchay, Alexey; Kober, Vitaly

    2016-09-01

    In this work, a correlation-based algorithm consisting of a set of adaptive filters is proposed for the recognition of occluded objects in still and dynamic scenes in the presence of additive noise. The designed algorithm is adaptive to the input scene, which may contain different fragments of the target, false objects, and background to be rejected. The algorithm outputs high correlation peaks corresponding to pieces of the target in the scene. The proposed algorithm uses a bank of composite optimum filters. The performance of the proposed algorithm for recognizing partially occluded objects is compared with that of common algorithms in terms of objective metrics.
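
    A hedged sketch of the filter-bank idea: correlate the scene with several fragment filters in the frequency domain and keep the maximum response map, so that high peaks mark visible pieces of a partially occluded target. The filters here are plain zero-mean templates, which is only a stand-in for the optimum composite filters the paper designs.

      import numpy as np

      def correlate_fft(scene: np.ndarray, filt: np.ndarray) -> np.ndarray:
          """Cross-correlate a scene with a filter via the FFT (filter zero-padded)."""
          padded = np.zeros_like(scene, dtype=float)
          padded[:filt.shape[0], :filt.shape[1]] = filt - filt.mean()
          S = np.fft.fft2(scene - scene.mean())
          F = np.fft.fft2(padded)
          return np.real(np.fft.ifft2(S * np.conj(F)))

      def filter_bank_response(scene: np.ndarray, fragments: list) -> np.ndarray:
          """Maximum correlation response over a bank of fragment filters;
          high peaks indicate pieces of the (possibly occluded) target."""
          responses = [correlate_fft(scene, frag) for frag in fragments]
          return np.max(np.stack(responses), axis=0)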

  17. An Effective 3D Shape Descriptor for Object Recognition with RGB-D Sensors

    PubMed Central

    Liu, Zhong; Zhao, Changchen; Wu, Xingming; Chen, Weihai

    2017-01-01

    RGB-D sensors have been widely used in various areas of computer vision and graphics. A good descriptor can substantially improve recognition performance. This article further analyzes the recognition performance of shape features extracted from multi-modality source data using RGB-D sensors. A hybrid shape descriptor is proposed as a representation of objects for recognition. We first extracted five 2D shape features from contour-based images and five 3D shape features over point cloud data to capture the global and local shape characteristics of an object. The recognition performance was tested for category recognition and instance recognition. Experimental results show that the proposed shape descriptor outperforms several common global-to-global shape descriptors and is comparable to some partial-to-global shape descriptors that achieved the best accuracies in category and instance recognition. The contributions of the individual features and the computational complexity were also analyzed. The results indicate that the proposed shape features are strong cues for object recognition and can be combined with other features to boost accuracy. PMID:28245553

  18. Minefield Search and Object Recognition for Autonomous Underwater Vehicles

    DTIC Science & Technology

    1992-03-01

    [OCR fragment of the report text: discusses sonar range limitations and deep acoustic sound channel effects; Figure 7 gives a vertical comparison of AUV sonar coverage in the Straits of Bab el Mandeb; the available sensors provide a variety of classifiable sonar data, and successful examples of expert system classifications using NPS AUV sonar data are described in detail, with complete source code provided [Ref. 45]; a further section describes the NPS AUV Sonar Classification System and the program used to implement the concepts presented.]

  19. Ventral occipital lesions impair object recognition but not object-directed grasping: an fMRI study.

    PubMed

    James, Thomas W; Culham, Jody; Humphrey, G Keith; Milner, A David; Goodale, Melvyn A

    2003-11-01

    D.F., a patient with severe visual form agnosia, has been the subject of extensive research during the past decade. The fact that she could process visual input accurately for the purposes of guiding action despite being unable to perform visual discriminations on the same visual input inspired a novel interpretation of the functions of the two main cortical visual pathways or 'streams'. Within this theoretical context, the authors proposed that D.F. had suffered severe bilateral damage to her occipitotemporal visual system (the 'ventral stream'), while retaining the use of her occipitoparietal visual system (the 'dorsal stream'). The present paper reports a direct test of this idea, which was initially derived from purely behavioural data, before the advent of modern functional neuroimaging. We used functional MRI to examine activation in her ventral and dorsal streams during object recognition and object-directed grasping tasks. We found that D.F. showed no difference in activation when presented with line drawings of common objects compared with scrambled line drawings in the lateral occipital cortex (LO) of the ventral stream, an area that responded differentially to these stimuli in healthy individuals. Moreover, high-resolution anatomical MRI showed that her lesion corresponded bilaterally with the location of LO in healthy participants. The lack of activation with line drawings in D.F. mirrors her poor performance in identifying the objects depicted in the drawings. With coloured and greyscale pictures, stimuli that she can identify more often, D.F. did show some ventral-stream activation. These activations were, however, more widely distributed than those seen in control participants and did not include LO. In contrast to the absent or abnormal activation observed during these perceptual tasks, D.F. showed robust activation in the expected dorsal stream regions during object grasping, despite considerable atrophy in some regions of the parietal lobes. In

  20. Separate but interacting recognition memory systems for different senses: The role of the rat perirhinal cortex

    PubMed Central

    Albasser, Mathieu M.; Amin, Eman; Iordanova, Mihaela D.; Brown, Malcolm W.; Pearce, John M.; Aggleton, John P.

    2011-01-01

    Two different models (convergent and parallel) potentially describe how recognition memory, the ability to detect the re-occurrence of a stimulus, is organized across different senses. To contrast these two models, rats with or without perirhinal cortex lesions were compared across various conditions that controlled available information from specific sensory modalities. Intact rats not only showed visual, tactile, and olfactory recognition, but also overcame changes in the types of sensory information available between object sampling and subsequent object recognition, e.g., between sampling in the light and recognition in the dark, or vice versa. Perirhinal lesions severely impaired object recognition whenever visual cues were available, but spared olfactory recognition and tactile-based object recognition when tested in the dark. The perirhinal lesions also blocked the ability to recognize an object sampled in the light and then tested for recognition in the dark, or vice versa. The findings reveal parallel recognition systems for different senses reliant on distinct brain areas, e.g., perirhinal cortex for vision, but also show that: (1) recognition memory for multisensory stimuli involves competition between sensory systems and (2) perirhinal cortex lesions produce a bias to rely on vision, despite the presence of intact recognition memory systems serving other senses. PMID:21685150

  1. First results in the development of a mobile robot with trajectory planning and object recognition capabilities

    NASA Astrophysics Data System (ADS)

    Islamgozhayev, Talgat; Kalimoldayev, Maksat; Eleusinov, Arman; Mazhitov, Shokan; Mamyrbayev, Orken

    2016-11-01

    The use of mobile robots is becoming popular in many areas of service because they ensure safety and good performance while working in dangerous or unreachable locations. Areas of application of mobile robots range from educational research to the detection and disposal of bombs. Depending on the mission of the robot, they have different configurations and abilities: some have additional arms, cranes and other tools, while others use sensors and built-in image processing and object recognition systems to perform their missions. The robot described in this paper is a mobile robot with a turret mounted on top of it. Different approaches were tested in the search for the method best suited to the image processing and template matching goals. Based on the information from the image processing unit, the system executes appropriate actions for planning the motion and trajectory of the mobile robot.

  2. License Plate Recognition System for Indian Vehicles

    NASA Astrophysics Data System (ADS)

    Sanap, P. R.; Narote, S. P.

    2010-11-01

    We consider the task of recognition of Indian vehicle number plates (also called license plates or registration plates in other countries). A system for Indian number plate recognition must cope with wide variations in the appearance of the plates. Each state uses its own range of designs with font variations between the designs. Also, vehicle owners may place the plates inside glass covered frames or use plates made of nonstandard materials. These issues compound the complexity of automatic number plate recognition, making existing approaches inadequate. We have developed a system that incorporates a novel combination of image processing and artificial neural network technologies to successfully locate and read Indian vehicle number plates in digital images. Commercial application of the system is envisaged.

  3. Stochastic Process Underlying Emergent Recognition of Visual Objects Hidden in Degraded Images

    PubMed Central

    Murata, Tsutomu; Hamada, Takashi; Shimokawa, Tetsuya; Tanifuji, Manabu; Yanagida, Toshio

    2014-01-01

    When a degraded two-tone image such as a “Mooney” image is seen for the first time, it is unrecognizable in the initial seconds. The recognition of such an image is facilitated by giving prior information on the object, which is known as top-down facilitation and has been intensively studied. Even in the absence of any prior information, however, we experience sudden perception of the emergence of a salient object after continued observation of the image, whose processes remain poorly understood. This emergent recognition is characterized by a comparatively long reaction time ranging from seconds to tens of seconds. In this study, to explore this time-consuming process of emergent recognition, we investigated the properties of the reaction times for recognition of degraded images of various objects. The results show that the time-consuming component of the reaction times follows a specific exponential function related to levels of image degradation and subject's capability. Because generally an exponential time is required for multiple stochastic events to co-occur, we constructed a descriptive mathematical model inspired by the neurophysiological idea of combination coding of visual objects. Our model assumed that the coincidence of stochastic events complement the information loss of a degraded image leading to the recognition of its hidden object, which could successfully explain the experimental results. Furthermore, to see whether the present results are specific to the task of emergent recognition, we also conducted a comparison experiment with the task of perceptual decision making of degraded images, which is well known to be modeled by the stochastic diffusion process. The results indicate that the exponential dependence on the level of image degradation is specific to emergent recognition. The present study suggests that emergent recognition is caused by the underlying stochastic process which is based on the coincidence of multiple stochastic events

  4. Effects of selective neonatal hippocampal lesions on tests of object and spatial recognition memory in monkeys.

    PubMed

    Heuer, Eric; Bachevalier, Jocelyne

    2011-04-01

    Earlier studies in monkeys have reported mild impairment in recognition memory after nonselective neonatal hippocampal lesions. To assess whether the memory impairment could have resulted from damage to cortical areas adjacent to the hippocampus, we tested adult monkeys with neonatal focal hippocampal lesions and sham-operated controls in three recognition tasks: delayed nonmatching-to-sample, object memory span, and spatial memory span. Further, to rule out that normal performance on these tasks may relate to functional sparing following neonatal hippocampal lesions, we tested adult monkeys that had received the same focal hippocampal lesions in adulthood and their controls in the same three memory tasks. Both early and late onset focal hippocampal damage did not alter performance on any of the three tasks, suggesting that damage to cortical areas adjacent to the hippocampus was likely responsible for the recognition impairment reported by the earlier studies. In addition, given that animals with early and late onset hippocampal lesions showed object and spatial recognition impairment when tested in a visual paired comparison task, the data suggest that not all object and spatial recognition tasks are solved by hippocampal-dependent memory processes. The current data may not only help explain the neural substrate for the partial recognition memory impairment reported in cases of developmental amnesia, but they are also clinically relevant given that the object and spatial memory tasks used in monkeys are often translated to investigate memory functions in several populations of human infants and children in which dysfunction of the hippocampus is suspected.

  5. Central administration of angiotensin IV rapidly enhances novel object recognition among mice.

    PubMed

    Paris, Jason J; Eans, Shainnel O; Mizrachi, Elisa; Reilley, Kate J; Ganno, Michelle L; McLaughlin, Jay P

    2013-07-01

    Angiotensin IV (Val(1)-Tyr(2)-Ile(3)-His(4)-Pro(5)-Phe(6)) has demonstrated potential cognitive-enhancing effects. The present investigation assessed and characterized: (1) dose-dependency of angiotensin IV's cognitive enhancement in a C57BL/6J mouse model of novel object recognition, (2) the time-course for these effects, (3) the identity of residues in the hexapeptide important to these effects and (4) the necessity of actions at angiotensin IV receptors for procognitive activity. Assessment of C57BL/6J mice in a novel object recognition task demonstrated that prior administration of angiotensin IV (0.1, 1.0, or 10.0, but not 0.01 nmol, i.c.v.) significantly enhanced novel object recognition in a dose-dependent manner. These effects were time dependent, with improved novel object recognition observed when angiotensin IV (0.1 nmol, i.c.v.) was administered 10 or 20, but not 30 min prior to the onset of the novel object recognition testing. An alanine scan of the angiotensin IV peptide revealed that replacement of the Val(1), Ile(3), His(4), or Phe(6) residues with Ala attenuated peptide-induced improvements in novel object recognition, whereas Tyr(2) or Pro(5) replacement did not significantly affect performance. Administration of the angiotensin IV receptor antagonist, divalinal-Ang IV (20 nmol, i.c.v.), reduced (but did not abolish) novel object recognition; however, this antagonist completely blocked the procognitive effects of angiotensin IV (0.1 nmol, i.c.v.) in this task. Rotorod testing demonstrated no locomotor effects with any angiotensin IV or divalinal-Ang IV dose tested. These data demonstrate that angiotensin IV produces a rapid enhancement of associative learning and memory performance in a mouse model that was dependent on the angiotensin IV receptor.

  6. From neural-based object recognition toward microelectronic eyes

    NASA Technical Reports Server (NTRS)

    Sheu, Bing J.; Bang, Sa Hyun

    1994-01-01

    Engineering neural network systems are best known for their abilities to adapt to the changing characteristics of the surrounding environment by adjusting system parameter values during the learning process. Rapid advances in analog current-mode design techniques have made possible the implementation of major neural network functions in custom VLSI chips. An electrically programmable analog synapse cell with large dynamic range can be realized in a compact silicon area. New designs of the synapse cells, neurons, and analog processor are presented. A synapse cell based on Gilbert multiplier structure can perform the linear multiplication for back-propagation networks. A double differential-pair synapse cell can perform the Gaussian function for radial-basis network. The synapse cells can be biased in the strong inversion region for high-speed operation or biased in the subthreshold region for low-power operation. The voltage gain of the sigmoid-function neurons is externally adjustable which greatly facilitates the search of optimal solutions in certain networks. Various building blocks can be intelligently connected to form useful industrial applications. Efficient data communication is a key system-level design issue for large-scale networks. We also present analog neural processors based on perceptron architecture and Hopfield network for communication applications. Biologically inspired neural networks have played an important role towards the creation of powerful intelligent machines. Accuracy, limitations, and prospects of analog current-mode design of the biologically inspired vision processing chips and cellular neural network chips are key design issues.

  7. Space-object identification using spatial pattern recognition

    NASA Astrophysics Data System (ADS)

    Silversmith, Paul E.

    The traditional method of determining spacecraft attitude with a star tracker is by comparing the angle measurements between stars within a certain field-of-view (FOV) with that of angle measurements in a catalog. This technique is known as the angle method. A new approach, the planar triangle method (PTM), uses the properties of planar triangles to compare stars in a FOV with stars in a catalog. Specifically, the area and polar moment of planar triangle combinations are the comparison parameters used in the method. The PTM has been shown to provide a more consistent success rate than that of the traditional angle method. The work herein presents a technique of data association through the use of the planar triangle method. Instead of comparing the properties of stars with that of a catalog, a comparison is made between the properties of resident space objects (RSOs) and a catalog comprised of Fengyun 1C debris data and simulated data. It is shown that the planar triangle method is effective in RSO identification and is robust to the presence of measurement and sensor error.
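
    The two comparison parameters of the planar triangle method can be written compactly: the area of the triangle spanned by three unit line-of-sight vectors and its polar moment of inertia, using the standard triangular-lamina formula J = A(a² + b² + c²)/36. The sketch below computes only these two features; catalog construction and matching are omitted.

      import numpy as np

      def planar_triangle_features(b1, b2, b3):
          """Area and polar moment of the planar triangle formed by three
          unit vectors (e.g., star or RSO line-of-sight measurements)."""
          b1, b2, b3 = (np.asarray(v, dtype=float) for v in (b1, b2, b3))
          # Side lengths of the triangle with vertices at the vector tips.
          a = np.linalg.norm(b2 - b1)
          b = np.linalg.norm(b3 - b2)
          c = np.linalg.norm(b1 - b3)
          # Area from the cross product of two edge vectors.
          area = 0.5 * np.linalg.norm(np.cross(b2 - b1, b3 - b1))
          # Polar moment of inertia of a triangular lamina about its centroid.
          polar_moment = area * (a**2 + b**2 + c**2) / 36.0
          return area, polar_moment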

  8. A hierarchical multiple-view approach to three-dimensional object recognition.

    PubMed

    Lin, W C; Liao, F Y; Tsao, C K; Lingutla, T

    1991-01-01

    A hierarchical approach is proposed for solving the surface and vertex correspondence problems in multiple-view-based 3D object-recognition systems. The proposed scheme is a coarse-to-fine search process, and a Hopfield network is used at each stage. Compared with conventional object-matching schemes, the proposed technique provides a more general and compact formulation of the problem and a solution more suitable for parallel implementation. At the coarse search stage, the surface matching scores between the input image and each object model in the database are computed through a Hopfield network and are used to select the candidates for further consideration. At the fine search stage, the object models selected from the previous stage are fed into another Hopfield network for vertex matching. The object model that has the best surface and vertex correspondences with the input image is finally singled out as the best matched model. Experimental results are reported using both synthetic and real range images to corroborate the proposed theory.

  9. The neural bases of crossmodal object recognition in non-human primates and rodents: a review.

    PubMed

    Cloke, Jacob M; Jacklin, Derek L; Winters, Boyer D

    2015-05-15

    The ability to integrate information from different sensory modalities to form unique multisensory object representations is a highly adaptive cognitive function. Surprisingly, non-human animal studies of the neural substrates of this form of multisensory integration have been somewhat sparse until very recently, and this may be due in part to a relative paucity of viable testing methods. Here we review the historical development and use of various "crossmodal" cognition tasks for non-human primates and rodents, focusing on tests of "crossmodal object recognition", the ability to recognize an object across sensory modalities. Such procedures have great potential to elucidate the cognitive and neural bases of object representation as it pertains to perception and memory. Indeed, these studies have revealed roles in crossmodal cognition for various brain regions (e.g., prefrontal and temporal cortices) and neurochemical systems (e.g., acetylcholine). A recent increase in behavioral and physiological studies of crossmodal cognition in rodents augurs well for the future of this research area, which should provide essential information about the basic mechanisms of object representation in the brain, in addition to fostering a better understanding of the causes of, and potential treatments for, cognitive deficits in human diseases characterized by atypical multisensory integration.

  10. Expertise modulates the neural basis of context dependent recognition of objects and their relations.

    PubMed

    Bilalić, Merim; Turella, Luca; Campitelli, Guillermo; Erb, Michael; Grodd, Wolfgang

    2012-11-01

    Recognition of objects and their relations is necessary for orienting in real life. We examined cognitive processes related to recognition of objects, their relations, and the patterns they form by using the game of chess. Chess enables us to compare experts with novices and thus gain insight into the nature of the development of recognition skills. Eye movement recordings showed that experts were generally faster than novices on a task that required enumeration of relations between chess objects because their extensive knowledge enabled them to immediately focus on the objects of interest. The advantage was less pronounced on random positions where the location of chess objects, and thus typical relations between them, was randomized. Neuroimaging data related experts' superior performance to areas along the dorsal stream: bilateral posterior temporal areas and the left inferior parietal lobe were related to recognition of objects and their functions. The bilateral collateral sulci, together with bilateral retrosplenial cortex, were also more sensitive to normal than to random positions among experts, indicating their involvement in pattern recognition. The pattern of activations suggests experts engage the same regions as novices, but also that they employ novel additional regions. Expert processing, as the final stage of development, is qualitatively different from novice processing, which can be viewed as the starting stage. Since we are all experts in real life and deal with meaningful stimuli in typical contexts, our results underline the importance of expert-like cognitive processing for the generalization of laboratory results to everyday life.

  11. Changes in Visual Object Recognition Precede the Shape Bias in Early Noun Learning

    PubMed Central

    Yee, Meagan; Jones, Susan S.; Smith, Linda B.

    2012-01-01

    Two of the most formidable skills that characterize human beings are language and our prowess in visual object recognition. They may also be developmentally intertwined. Two experiments, a large sample cross-sectional study and a smaller sample 6-month longitudinal study of 18- to 24-month-olds, tested a hypothesized developmental link between changes in visual object representation and noun learning. Previous findings in visual object recognition indicate that children’s ability to recognize common basic level categories from sparse structural shape representations of object shape emerges between the ages of 18 and 24 months, is related to noun vocabulary size, and is lacking in children with language delay. Other research shows in artificial noun learning tasks that during this same developmental period, young children systematically generalize object names by shape, that this shape bias predicts future noun learning, and is lacking in children with language delay. The two experiments examine the developmental relation between visual object recognition and the shape bias for the first time. The results show that developmental changes in visual object recognition systematically precede the emergence of the shape bias. The results suggest a developmental pathway in which early changes in visual object recognition that are themselves linked to category learning enable the discovery of higher-order regularities in category structure and thus the shape bias in novel noun learning tasks. The proposed developmental pathway has implications for understanding the role of specific experience in the development of both visual object recognition and the shape bias in early noun learning. PMID:23227015

  12. Hippocampal NMDA receptors are involved in rats' spontaneous object recognition only under high memory load condition.

    PubMed

    Sugita, Manami; Yamada, Kazuo; Iguchi, Natsumi; Ichitani, Yukio

    2015-10-22

    The possible involvement of hippocampal N-methyl-D-aspartate (NMDA) receptors in spontaneous object recognition was investigated in rats under different memory load conditions. We first estimated rats' object memory span using 3-5 objects in "Different Objects Task (DOT)" in order to confirm the highest memory load condition in object recognition memory. Rats were allowed to explore a field in which 3 (3-DOT), 4 (4-DOT), or 5 (5-DOT) different objects were presented. After a delay period, they were placed again in the same field in which one of the sample objects was replaced by another object, and their object exploration behavior was analyzed. Rats could differentiate the novel object from the familiar ones in 3-DOT and 4-DOT but not in 5-DOT, suggesting that rats' object memory span was about 4. Then, we examined the effects of hippocampal AP5 infusion on performance in both 2-DOT (2 different objects were used) and 4-DOT. The drug treatment before the sample phase impaired performance only in 4-DOT. These results suggest that hippocampal NMDA receptors play a critical role in spontaneous object recognition only when the memory load is high.

  13. Sub-OBB based object recognition and localization algorithm using range images

    NASA Astrophysics Data System (ADS)

    Hoang, Dinh-Cuong; Chen, Liang-Chia; Nguyen, Thanh-Hung

    2017-02-01

    This paper presents a novel approach to recognizing and estimating the pose of 3D objects in cluttered range images. The key technical contribution of the developed approach is robust object recognition and localization under adverse conditions such as variation in environmental illumination and partial optical occlusion of the object. First, the acquired point clouds are segmented into individual object point clouds using the developed 3D object segmentation for randomly stacked objects. Second, an efficient shape-matching algorithm, called Sub-OBB based object recognition, uses the proposed oriented bounding box (OBB) regional area-based descriptor to reliably recognize the object. The 3D position and orientation of the object are then roughly estimated by aligning the OBB of the segmented object point cloud with the OBB of the matched point cloud in a database generated from a CAD model and a 3D virtual camera. To refine the pose of the object, the iterative closest point (ICP) algorithm is used to match the object model with the segmented point clouds. Feasibility tests on several scenarios verify that the developed approach is suitable for object pose recognition and localization.
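
    One common way to build the oriented bounding box that the Sub-OBB descriptor presupposes is principal component analysis of the segmented points; the sketch below is a generic OBB construction under that assumption, not the authors' exact descriptor. A Sub-OBB style regional descriptor could then, for example, record the fraction of points falling into a fixed grid of cells inside this box.

      import numpy as np

      def oriented_bounding_box(points: np.ndarray):
          """Compute an OBB for an (N, 3) point cloud via PCA.
          Returns (center, axes, extents): axes are the principal directions
          (as columns), extents the half-lengths along each axis."""
          centroid = points.mean(axis=0)
          centered = points - centroid
          cov = np.cov(centered.T)                 # 3x3 covariance of the points
          eigvals, axes = np.linalg.eigh(cov)      # eigenvalues in ascending order
          axes = axes[:, ::-1]                     # reorder so the major axis is first
          proj = centered @ axes                   # points in the box's own frame
          mins, maxs = proj.min(axis=0), proj.max(axis=0)
          extents = (maxs - mins) / 2.0
          center = centroid + axes @ ((mins + maxs) / 2.0)
          return center, axes, extents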

  14. Representations of Shape in Object Recognition and Long-Term Visual Memory

    DTIC Science & Technology

    1993-02-11

    [OCR fragment of the report text: Tarr and Pinker (1989) proposed the Multiple-Views-Plus-Transformation theory of object recognition, whose foundation is that objects are represented ...; Tarr and Pinker (1990) showed that certain shapes are immediately and consistently recognized independently of their orientation, whereas contrasts among parts located along an axis lead to the use of orientation-dependent recognition mechanisms.]

  15. A Survey on Automatic Speaker Recognition Systems

    NASA Astrophysics Data System (ADS)

    Saquib, Zia; Salam, Nirmala; Nair, Rekha P.; Pandey, Nipun; Joshi, Akanksha

    Human listeners are capable of identifying a speaker, over the telephone or from behind an entryway out of sight, by listening to the speaker's voice. Achieving this intrinsic, human-specific capability is a major challenge for voice biometrics. Like human listeners, voice biometrics uses the features of a person's voice to ascertain the speaker's identity. The best-known commercialized form of voice biometrics is the Speaker Recognition System (SRS). Speaker recognition is the computing task of validating a user's claimed identity using characteristics extracted from their voice. This literature survey gives a brief introduction to SRS and then discusses the general architecture of SRS, biometric standards relevant to voice/speech, typical applications of SRS, and current research in speaker recognition systems. We also survey various approaches to SRS.

  16. Crowded and Sparse Domains in Object Recognition: Consequences for Categorization and Naming

    ERIC Educational Resources Information Center

    Gale, Tim M.; Laws, Keith R.; Foley, Kerry

    2006-01-01

    Some models of object recognition propose that items from structurally crowded categories (e.g., living things) permit faster access to superordinate semantic information than structurally dissimilar categories (e.g., nonliving things), but slower access to individual object information when naming items. We present four experiments that utilize…

  17. Modeling guidance and recognition in categorical search: Bridging human and computer object detection

    PubMed Central

    Zelinsky, Gregory J.; Peng, Yifan; Berg, Alexander C.; Samaras, Dimitris

    2013-01-01

    Search is commonly described as a repeating cycle of guidance to target-like objects, followed by the recognition of these objects as targets or distractors. Are these indeed separate processes using different visual features? We addressed this question by comparing observer behavior to that of support vector machine (SVM) models trained on guidance and recognition tasks. Observers searched for a categorically defined teddy bear target in four-object arrays. Target-absent trials consisted of random category distractors rated in their visual similarity to teddy bears. Guidance, quantified as first-fixated objects during search, was strongest for targets, followed by target-similar, medium-similarity, and target-dissimilar distractors. False positive errors to first-fixated distractors also decreased with increasing dissimilarity to the target category. To model guidance, nine teddy bear detectors, using features ranging in biological plausibility, were trained on unblurred bears then tested on blurred versions of the same objects appearing in each search display. Guidance estimates were based on target probabilities obtained from these detectors. To model recognition, nine bear/nonbear classifiers, trained and tested on unblurred objects, were used to classify the object that would be fixated first (based on the detector estimates) as a teddy bear or a distractor. Patterns of categorical guidance and recognition accuracy were modeled almost perfectly by an HMAX model in combination with a color histogram feature. We conclude that guidance and recognition in the context of search are not separate processes mediated by different features, and that what the literature knows as guidance is really recognition performed on blurred objects viewed in the visual periphery. PMID:24105460
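
    The guidance-versus-recognition comparison amounts to scoring blurred (peripheral-like) and unblurred versions of the same objects with trained classifiers; the toy sketch below uses scikit-learn with a placeholder histogram feature, which only stands in for the HMAX-plus-colour-histogram features reported above.

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from sklearn.svm import SVC

      def features(img: np.ndarray, bins: int = 16) -> np.ndarray:
          """Placeholder feature: intensity histogram (stand-in for HMAX + colour)."""
          hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0), density=True)
          return hist

      def train_guidance_and_recognition(targets, distractors, blur_sigma: float = 3.0):
          """Train a 'recognition' SVM on unblurred objects, then score blurred
          versions of the same objects to emulate guidance in the periphery."""
          X = np.array([features(im) for im in targets + distractors])
          y = np.array([1] * len(targets) + [0] * len(distractors))
          recogniser = SVC(kernel="rbf", probability=True).fit(X, y)

          blurred = [gaussian_filter(im, blur_sigma) for im in targets + distractors]
          Xb = np.array([features(im) for im in blurred])
          guidance_scores = recogniser.predict_proba(Xb)[:, 1]   # target probability
          return recogniser, guidance_scores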

  18. Vision holds a greater share in visuo-haptic object recognition than touch.

    PubMed

    Kassuba, Tanja; Klinge, Corinna; Hölig, Cordula; Röder, Brigitte; Siebner, Hartwig R

    2013-01-15

    The integration of visual and haptic input can facilitate object recognition. Yet, vision might dominate visuo-haptic interactions as it is more effective than haptics in processing several object features in parallel and recognizing objects outside of reaching space. The maximum likelihood approach of multisensory integration would predict that haptics as the less efficient sense for object recognition gains more from integrating additional visual information than vice versa. To test for asymmetries between vision and touch in visuo-haptic interactions, we measured regional changes in brain activity using functional magnetic resonance imaging while healthy individuals performed a delayed-match-to-sample task. We manipulated identity matching of sample and target objects: We hypothesized that only coherent visual and haptic object features would activate unified object representations. The bilateral object-specific lateral occipital cortex, fusiform gyrus, and intraparietal sulcus showed increased activation to crossmodal compared to unimodal matching but only for congruent object pairs. Critically, the visuo-haptic interaction effects in these regions depended on the sensory modality which processed the target object, being more pronounced for haptic than visual targets. This preferential response of visuo-haptic regions indicates a modality-specific asymmetry in crossmodal matching of visual and haptic object features, suggesting a functional primacy of vision over touch in visuo-haptic object recognition.

  19. Intraperirhinal cortex administration of the synthetic cannabinoid, HU210, disrupts object recognition memory in rats.

    PubMed

    Sticht, Martin A; Jacklin, Derek L; Mechoulam, Raphael; Parker, Linda A; Winters, Boyer D

    2015-03-25

    Cannabinoids disrupt learning and memory in human and nonhuman participants. Object recognition memory, which is particularly susceptible to the impairing effects of cannabinoids, relies critically on the perirhinal cortex (PRh); however, to date, the effects of cannabinoids within PRh have not been assessed. In the present study, we evaluated the effects of localized administration of the synthetic cannabinoid, HU210 (0.01, 1.0 μg/hemisphere), into PRh on spontaneous object recognition in Long-Evans rats. Animals received intra-PRh infusions of HU210 before the sample phase, and object recognition memory was assessed at various delays in a subsequent retention test. We found that presample intra-PRh HU210 dose dependently (1.0 μg but not 0.01 μg) interfered with spontaneous object recognition performance, exerting an apparently more pronounced effect when memory demands were increased. These novel findings show that cannabinoid agonists in PRh disrupt object recognition memory.

  20. Contribution of the parafascicular nucleus in the spontaneous object recognition task.

    PubMed

    Castiblanco-Piñeros, Edwin; Quiroz-Padilla, Maria Fernanda; Cardenas-Palacio, Carlos Andres; Cardenas, Fernando P

    2011-09-01

    The parafascicular (PF) nucleus, a posterior component of the intralaminar nuclei of the thalamus, is considered to be an essential structure in the feedback systems of basal ganglia-thalamo-cortical circuits critically involved in cognitive processes. The specific role played by multimodal information encoded in PF neurons in learning and memory processes is still unclear. We conducted two experiments to investigate the role of the PF in the spontaneous object recognition (SOR) task. The behavioral effects of pretraining rats with bilateral lesions of PF with N-methyl-D-aspartate (NMDA) were compared to vehicle controls. In the first experiment, rats were tested on their ability to remember the association immediately after training trials and in the second experiment after a 24h delay. Our findings provide evidence that PF lesions critically affect both SOR tests and support its role in that non-spatial form of relational memory.

  1. Object discrimination through active electrolocation: Shape recognition and the influence of electrical noise.

    PubMed

    Schumacher, Sarah; Burt de Perera, Theresa; von der Emde, Gerhard

    2016-12-12

    The weakly electric fish Gnathonemus petersii can recognise objects using active electrolocation. Here, we tested two aspects of object recognition; first whether shape recognition might be influenced by movement of the fish, and second whether object discrimination is affected by the presence of electrical noise from conspecifics. (i) Unlike other object features, such as size or volume, no parameter within a single electrical image has been found that encodes object shape. We investigated whether shape recognition might be facilitated by movement-induced modulations (MIM) of the set of electrical images that are created as a fish swims past an object. Fish were trained to discriminate between pairs of objects that either created similar or dissimilar levels of MIM of the electrical images. As predicted, the fish were able to discriminate between objects up to a longer distance if there was a large difference in MIM between the objects than if there was a small difference. This supports an involvement of MIMs in shape recognition but the use of other cues cannot be excluded. (ii) Electrical noise might impair object recognition if the noise signals overlap with the EODs of an electrolocating fish. To avoid jamming, we predicted that fish might employ pulsing strategies to prevent overlaps. To investigate the influence of electrical noise on discrimination performance, two fish were tested either in the presence of a conspecific or of playback signals and the electric signals were recorded during the experiments. The fish were surprisingly immune to jamming by conspecifics: While the discrimination performance of one fish dropped to chance level when more than 22% of its EODs overlapped with the noise signals, the performance of the other fish was not impaired even when all its EODs overlapped. Neither of the fish changed their pulsing behaviour, suggesting that they did not use any kind of jamming avoidance strategy.

  2. Grouping in object recognition: the role of a Gestalt law in letter identification.

    PubMed

    Pelli, Denis G; Majaj, Najib J; Raizman, Noah; Christian, Christopher J; Kim, Edward; Palomares, Melanie C

    2009-02-01

    The Gestalt psychologists reported a set of laws describing how vision groups elements to recognize objects. The Gestalt laws "prescribe for us what we are to recognize 'as one thing'" (Kohler, 1920). Were they right? Does object recognition involve grouping? Tests of the laws of grouping have been favourable, but mostly assessed only detection, not identification, of the compound object. The grouping of elements seen in the detection experiments with lattices and "snakes in the grass" is compelling, but falls far short of the vivid everyday experience of recognizing a familiar, meaningful, named thing, which mediates the ordinary identification of an object. Thus, after nearly a century, there is hardly any evidence that grouping plays a role in ordinary object recognition. To assess grouping in object recognition, we made letters out of grating patches and measured threshold contrast for identifying these letters in visual noise as a function of perturbation of grating orientation, phase, and offset. We define a new measure, "wiggle", to characterize the degree to which these various perturbations violate the Gestalt law of good continuation. We find that efficiency for letter identification is inversely proportional to wiggle and is wholly determined by wiggle, independent of how the wiggle was produced. Thus the effects of three different kinds of shape perturbation on letter identifiability are predicted by a single measure of goodness of continuation. This shows that letter identification obeys the Gestalt law of good continuation and may be the first confirmation of the original Gestalt claim that object recognition involves grouping.

  3. Exploring tiny images: the roles of appearance and contextual information for machine and human object recognition.

    PubMed

    Parikh, Devi; Zitnick, C Lawrence; Chen, Tsuhan

    2012-10-01

    Typically, object recognition is performed based solely on the appearance of the object. However, relevant information also exists in the scene surrounding the object. In this paper, we explore the roles that appearance and contextual information play in object recognition. Through machine experiments and human studies, we show that the importance of contextual information varies with the quality of the appearance information, such as an image's resolution. Our machine experiments explicitly model context between object categories through the use of relative location and relative scale, in addition to co-occurrence. With the use of our context model, our algorithm achieves state-of-the-art performance on the MSRC and Corel data sets. We perform recognition tests for machines and human subjects on low and high resolution images, which vary significantly in the amount of appearance information present, using just the object appearance information, the combination of appearance and context, as well as just context without object appearance information (blind recognition). We also explore the impact of the different sources of context (co-occurrence, relative-location, and relative-scale). We find that the importance of different types of contextual information varies significantly across data sets such as MSRC and PASCAL.

  4. Contributions of low and high spatial frequency processing to impaired object recognition circuitry in schizophrenia.

    PubMed

    Calderone, Daniel J; Hoptman, Matthew J; Martínez, Antígona; Nair-Collins, Sangeeta; Mauro, Cristina J; Bar, Moshe; Javitt, Daniel C; Butler, Pamela D

    2013-08-01

    Patients with schizophrenia exhibit cognitive and sensory impairment, and object recognition deficits have been linked to sensory deficits. The "frame and fill" model of object recognition posits that low spatial frequency (LSF) information rapidly reaches the prefrontal cortex (PFC) and creates a general shape of an object that feeds back to the ventral temporal cortex to assist object recognition. Visual dysfunction findings in schizophrenia suggest a preferential loss of LSF information. This study used functional magnetic resonance imaging (fMRI) and resting state functional connectivity (RSFC) to investigate the contribution of visual deficits to impaired object "framing" circuitry in schizophrenia. Participants were shown object stimuli that were intact or contained only LSF or high spatial frequency (HSF) information. For controls, fMRI revealed preferential activation to LSF information in precuneus, superior temporal, and medial and dorsolateral PFC areas, whereas patients showed a preference for HSF information or no preference. RSFC revealed a lack of connectivity between early visual areas and PFC for patients. These results demonstrate impaired processing of LSF information during object recognition in schizophrenia, with patients instead displaying increased processing of HSF information. This is consistent with findings of a preference for local over global visual information in schizophrenia.

  5. Rapid Pattern Recognition of Three Dimensional Objects Using Parallel Processing Within a Hierarchy of Hexagonal Grids

    NASA Astrophysics Data System (ADS)

    Tang, Haojun

    1995-01-01

    This thesis describes the use of parallel processing within a hierarchy of hexagonal grids to achieve rapid recognition of patterns. A seven-pixel basic hexagonal neighborhood, a sixty-one-pixel superneighborhood and pyramids of a 2-to-4 area ratio are employed. The hexagonal network achieves improved accuracy over the square network for object boundaries. The hexagonal grid, with its lower directional sensitivity, is a better approximation of the human vision grid, is better suited to natural scenes than the square grid, and avoids the 4-neighbor/8-neighbor problem. Parallel processing in image analysis saves considerable time versus the traditional line-by-line method. Hexagonal parallel processing combines the optimum hexagonal geometry with the parallel structure. Our work surveys the behavior and internal properties needed to construct the image at different levels of the hexagonal pixel grid in a parallel computation scheme. A computer code has been developed to detect edges of digital images of real objects taken with a CCD camera within a hexagonal grid at any level. The algorithm uses the differences between the local gray level and those of its six neighbors, and is able to determine the boundary of a digital image in parallel. A series of algorithms and techniques has also been built up to manage edge linking, feature extraction, etc. The digital images obtained from the improved CRS digital image processing system are a good approximation to the images which would be obtained with a real physical hexagonal grid. We envision that our work done within this little-known area will have some important applications in real-time machine vision. A parallel two-layer hexagonal-array retina has been designed to do pattern recognition using simple operations such as differencing, ratioing, thresholding, etc., which may occur in the human retina and other biological vision systems.
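
    The edge operator described above, differences between a pixel and its six hexagonal neighbours, can be written compactly with axial coordinates; the storage layout, wrap-around borders, and threshold below are assumptions made for this sketch.

      import numpy as np

      # Offsets of the six neighbours of a hexagonal pixel in axial (q, r) coordinates.
      HEX_NEIGHBOURS = [(+1, 0), (-1, 0), (0, +1), (0, -1), (+1, -1), (-1, +1)]

      def hex_edge_strength(grid: np.ndarray) -> np.ndarray:
          """Edge strength at each hexagonal pixel: sum of absolute grey-level
          differences to its six neighbours (grid stored as a 2-D axial array;
          borders wrap). Every pixel is independent, i.e. fully parallelizable."""
          strength = np.zeros(grid.shape, dtype=float)
          for dq, dr in HEX_NEIGHBOURS:
              shifted = np.roll(np.roll(grid, dr, axis=0), dq, axis=1)
              strength += np.abs(grid.astype(float) - shifted)
          return strength

      def hex_edges(grid: np.ndarray, threshold: float) -> np.ndarray:
          """Binary edge map by thresholding the neighbourhood difference sum."""
          return hex_edge_strength(grid) > threshold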

  6. Crocins, the active constituents of Crocus sativus L., counteracted apomorphine-induced performance deficits in the novel object recognition task, but not novel object location task, in rats.

    PubMed

    Pitsikas, Nikolaos; Tarantilis, Petros A

    2017-02-17

    Schizophrenia is a chronic mental disease that affects nearly 1% of the population worldwide. Several lines of evidence suggest that the dopaminergic (DAergic) system might be compromised in schizophrenia. Specifically, the mixed dopamine (DA) D1/D2 receptor agonist apomorphine induces schizophrenia-like symptoms in rodents, including disruption of memory abilities. Crocins are among the active components of saffron (dried stigmas of Crocus sativus L. plant) and their implication in cognition is well documented. The present study investigated whether crocins counteract non-spatial and spatial recognition memory deficits induced by apomorphine in rats. For this purpose, the novel object recognition task (NORT) and the novel object location task (NOLT) were used. The effects of compounds on mobility in a locomotor activity chamber were also investigated in rats. Post-training peripheral administration of crocins (15 and 30mg/kg) counteracted apomorphine (1mg/kg)-induced performance deficits in the NORT. Conversely, crocins did not attenuate spatial recognition memory deficits produced by apomorphine in the NOLT. The present data show that crocins reversed non-spatial recognition memory impairments produced by dysfunction of the DAergic system and modulate different aspects of memory components (storage and/or retrieval). The effects of compounds on recognition memory cannot be attributed to changes in locomotor activity. Further, our findings illustrate a functional interaction between crocins and the DAergic system that may be of relevance for schizophrenia-like behavioral deficits. Therefore, the utilization of crocins as an adjunctive agent, for the treatment of cognitive deficits observed in schizophrenic patients should be further investigated.

  7. Orientation estimation of anatomical structures in medical images for object recognition

    NASA Astrophysics Data System (ADS)

    Bağci, Ulaş; Udupa, Jayaram K.; Chen, Xinjian

    2011-03-01

    Recognition of anatomical structures is an important step in model-based medical image segmentation. It provides pose estimates of objects, information about roughly "where" the objects are in the image, and a means of distinguishing them from other object-like entities. In [1], we presented a general method of model-based multi-object recognition to assist in segmentation (delineation) tasks. It exploits the pose relationship that can be encoded, via the concept of ball scale (b-scale), between the binary training objects and their associated grey images. The goal was to place the model, in a single shot, close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. Unlike position and scale parameters, we observe that orientation parameters require more attention when estimating the pose of the model, as even small differences in orientation parameters can lead to inappropriate recognition. Motivated by the non-Euclidean nature of the pose information, we propose in this paper the use of non-Euclidean metrics to estimate the orientation of anatomical structures for more accurate recognition and segmentation. We statistically analyze and evaluate the following metrics for orientation estimation: Euclidean, Log-Euclidean, Root-Euclidean, Procrustes Size-and-Shape, and mean Hermitian metrics. The results show that mean Hermitian and Cholesky decomposition metrics provide more accurate orientation estimates than other Euclidean and non-Euclidean metrics.
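
    The contrast between Euclidean and non-Euclidean comparison of orientations can be illustrated for a pair of rotation matrices; the sketch below computes the plain Frobenius (chordal) distance and a matrix-logarithm distance in the spirit of the Log-Euclidean comparison. It illustrates the metric idea only and is not the paper's formulation of the Hermitian or Procrustes metrics.

      import numpy as np
      from scipy.linalg import logm

      def chordal_distance(R1: np.ndarray, R2: np.ndarray) -> float:
          """Plain Euclidean (Frobenius / chordal) distance between orientations."""
          return float(np.linalg.norm(R1 - R2, ord="fro"))

      def log_distance(R1: np.ndarray, R2: np.ndarray) -> float:
          """Log-based distance: Frobenius norm of log(R1^T R2); for rotation
          matrices this is sqrt(2) times the rotation angle between them."""
          L = np.real(logm(R1.T @ R2))   # principal matrix logarithm (skew-symmetric)
          return float(np.linalg.norm(L, ord="fro"))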

  8. Spontaneous object recognition: a promising approach to the comparative study of memory

    PubMed Central

    Blaser, Rachel; Heyser, Charles

    2015-01-01

    Spontaneous recognition of a novel object is a popular measure of exploratory behavior, perception and recognition memory in rodent models. Because of its relative simplicity and speed of testing, the variety of stimuli that can be used, and its ecological validity across species, it is also an attractive task for comparative research. To date, variants of this test have been used with vertebrate and invertebrate species, but the methods have seldom been sufficiently standardized to allow cross-species comparison. Here, we review the methods necessary for the study of novel object recognition in mammalian and non-mammalian models, as well as the results of these experiments. Critical to the use of this test is an understanding of the organism’s initial response to a novel object, the modulation of exploration by context, and species differences in object perception and exploratory behaviors. We argue that with appropriate consideration of species differences in perception, object affordances, and natural exploratory behaviors, the spontaneous object recognition test can be a valid and versatile tool for translational research with non-mammalian models. PMID:26217207

  9. Retrieval and reconsolidation of object recognition memory are independent processes in the perirhinal cortex.

    PubMed

    Balderas, I; Rodriguez-Ortiz, C J; Bermudez-Rattoni, F

    2013-12-03

    Reconsolidation refers to the destabilization/re-stabilization process upon memory reactivation. However, the parameters needed to induce reconsolidation remain unclear. Here we evaluated the capacity of memory retrieval to induce reconsolidation of object recognition memory in rats. To assess whether retrieval is indispensable to trigger reconsolidation, we injected muscimol in the perirhinal cortex to block retrieval, and anisomycin (ani) to impede reconsolidation. We observed that ani impaired reconsolidation in the absence of retrieval. Therefore, stored memory underwent reconsolidation even though it was not recalled. These results indicate that retrieval and reconsolidation of object recognition memory are independent processes.

  10. Recognition of partially occluded threat objects using the annealed Hopfield network

    NASA Technical Reports Server (NTRS)

    Kim, Jung H.; Yoon, Sung H.; Park, Eui H.; Ntuen, Celestine A.

    1992-01-01

    Recognition of partially occluded objects has been an important issue to airport security because occlusion causes significant problems in identifying and locating objects during baggage inspection. The neural network approach is suitable for the problems in the sense that the inherent parallelism of neural networks pursues many hypotheses in parallel, resulting in high computation rates. Moreover, they provide a greater degree of robustness or fault tolerance than conventional computers. The annealed Hopfield network, which is derived from mean field annealing (MFA), has been developed to find global solutions of a nonlinear system. In this study, it has been proven that the system temperature of MFA is equivalent to the gain of the sigmoid function of a Hopfield network. In our early work, we developed the hybrid Hopfield network (HHN) for fast and reliable matching. However, HHN doesn't guarantee global solutions and yields false matching under heavily occluded conditions because HHN is dependent on initial states by its nature. In this paper, we present the annealed Hopfield network (AHN) for occluded object matching problems. In AHN, mean field theory is applied to the hybrid Hopfield network in order to improve the computational complexity of the annealed Hopfield network and provide reliable matching under heavily occluded conditions. AHN is slower than HHN. However, AHN provides near global solutions without initial restrictions and produces less false matching than HHN. In conclusion, a new algorithm based upon a neural network approach was developed to demonstrate the feasibility of the automated inspection of threat objects from x-ray images. The robustness of the algorithm is proved by identifying occluded target objects with large tolerance of their features.
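
    A minimal sketch of mean field annealing on a Hopfield-style quadratic energy is shown below; the random symmetric weights, the geometric temperature schedule, and all constants are illustrative stand-ins rather than the matching energy used by the AHN.

      import numpy as np

      rng = np.random.default_rng(0)

      n = 20
      W = rng.normal(size=(n, n))
      W = 0.5 * (W + W.T)                 # symmetric weights (needed for a well-defined energy)
      np.fill_diagonal(W, 0.0)
      b = rng.normal(size=n)

      v = np.full(n, 0.5)                 # unbiased initial mean-field state
      for T in np.geomspace(5.0, 0.05, 60):          # annealing schedule: T is the system temperature,
          for _ in range(10):                        # i.e. the inverse gain of the sigmoid
              v = 1.0 / (1.0 + np.exp(-(W @ v + b) / T))

      solution = (v > 0.5).astype(int)
      energy = -0.5 * solution @ W @ solution - b @ solution
      print("binary solution:", solution, "energy:", round(float(energy), 3))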

  11. On the three-quarter view advantage of familiar object recognition.

    PubMed

    Nonose, Kohei; Niimi, Ryosuke; Yokosawa, Kazuhiko

    2016-11-01

    A three-quarter view, i.e., an oblique view, of familiar objects often leads to a higher subjective goodness rating when compared with other orientations. What is the source of the high goodness for oblique views? First, we confirmed that object recognition performance was also best for oblique views of around 30°, even when the foreshortening disadvantage of front and side views was minimized (Experiments 1 and 2). In Experiment 3, we measured subjective ratings of view goodness and two possible determinants of view goodness: familiarity of view, and subjective impression of three-dimensionality. Three-dimensionality was measured as the subjective saliency of visual depth information. The oblique views were rated best, most familiar, and as having the greatest three-dimensionality on average; however, cluster analyses showed that the "best" orientation systematically varied among objects. We found three clusters of objects: front-preferred objects, oblique-preferred objects, and side-preferred objects. Interestingly, recognition performance and the three-dimensionality rating were higher for oblique views irrespective of the clusters. It appears that recognition efficiency is not the major source of the three-quarter view advantage. There are multiple determinants and variability among objects. This study suggests that the classical idea that a canonical view has a unique advantage in object perception requires further discussion.

  12. Object recognition in congruent and incongruent natural scenes: a life-span study.

    PubMed

    Rémy, F; Saint-Aubert, L; Bacon-Macé, N; Vayssière, N; Barbeau, E; Fabre-Thorpe, M

    2013-10-18

    Efficient processing of our complex visual environment is essential and many daily visual tasks rely on accurate and fast object recognition. It is therefore important to evaluate how object recognition performance evolves during the course of adulthood. Surprisingly, this ability has not yet been investigated in the aged population, although several neuroimaging studies have reported altered activity in high-level visual ventral regions when elderly subjects process natural stimuli. In the present study, color photographs of various objects embedded in contextual scenes were used to assess object categorization performance in 97 participants aged from 20 to 91. Objects were either animals or pieces of furniture, embedded in either congruent or incongruent contexts. In every age group, subjects showed reduced categorization performance, both in terms of accuracy and speed, when objects were seen in incongruent vs. congruent contexts. In subjects over 60 years old, object categorization was greatly slowed down when compared to young and middle-aged subjects. Moreover, subjects over 75 years old evidenced a significant decrease in categorization accuracy when objects were seen in incongruent contexts. This indicates that incongruence of the scene may be particularly disturbing in late adulthood, therefore impairing object recognition. Our results suggest that daily visual processing of complex natural environments may be less efficient with age, which might impact performance in everyday visual tasks.

  13. Ontogeny of object versus location recognition in the rat: acquisition and retention effects.

    PubMed

    Westbrook, Sara R; Brennan, Lauren E; Stanton, Mark E

    2014-11-01

    Novel object and location recognition tasks harness the rat's natural tendency to explore novelty (Berlyne, 1950) to study incidental learning. The present study examined the ontogenetic profile of these two tasks and retention of spatial learning between postnatal day (PD) 17 and 31. Experiment 1 showed that rats ages PD17, 21, and 26 recognize novel objects, but only PD21 and PD26 rats recognize a novel location of a familiar object. These results suggest that novel object recognition develops before PD17, while object location recognition emerges between PD17 and PD21. Experiment 2 studied the ontogenetic profile of object location memory retention in PD21, 26, and 31 rats. PD26 and PD31 rats retained the object location memory for both 10-min and 24-hr delays. PD21 rats failed to retain the object location memory for the 24-hr delay, suggesting differential development of short- versus long-term memory in the ontogeny of object location memory.

  14. Joint Segmentation and Recognition of Categorized Objects from Noisy Web Image Collection.

    PubMed

    Wang, Le; Hua, Gang; Xue, Jianru; Gao, Zhanning; Zheng, Nanning

    2014-07-14

    The segmentation of categorized objects addresses the problem of joint segmentation of a single category of object across a collection of images, where "categorized objects" refers to objects in the same category. Most existing methods for the segmentation of categorized objects make the assumption that all images in the given image collection contain the target object. In other words, the given image collection is assumed to be noise free. Therefore, they may not work well when there are noisy images which do not belong to the same category, such as image collections gathered by a text query from modern image search engines. To overcome this limitation, we propose a method for automatic segmentation and recognition of categorized objects from noisy Web image collections. This is achieved by co-training an automatic object segmentation algorithm that operates directly on a collection of images and an object category recognition algorithm that identifies which images contain the target object. The object segmentation algorithm is trained on a subset of images from the given image collection which are recognized to contain the target object with high confidence, while training the object category recognition model is guided by the intermediate segmentation results obtained from the object segmentation algorithm. This way, our co-training algorithm automatically identifies the set of true positives in the noisy Web image collection, and simultaneously extracts the target objects from all the identified images. Extensive experiments validated the efficacy of our proposed approach on four datasets: 1) the Weizmann horse dataset, 2) the MSRC object category dataset, 3) the iCoseg dataset, and 4) a new 30-category dataset including 15,634 Web images with both hand-annotated category labels and ground truth segmentation labels. It is shown that our method compares favorably with the state of the art and has the ability to deal with noisy image collections.
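
    The toy sketch below illustrates the co-training loop on synthetic "web images"; the thresholding segmenter, the two region features, and the logistic-regression recognizer are simplified stand-ins for the collection-level segmentation and category models described above.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)

      def make_image(has_object):
          # toy "web image": a bright square appears only when the target object is present
          img = rng.normal(0.2, 0.05, (32, 32))
          if has_object:
              img[8:24, 8:24] += 0.6
          return np.clip(img, 0.0, 1.0)

      truth = rng.random(80) < 0.7                # noisy collection: ~70% really contain the object
      images = [make_image(t) for t in truth]

      def segment(img):
          # stand-in for the collection-level object segmentation step
          return img > img.mean() + 0.2

      def features(img, mask):
          # crude region descriptors: area fraction and mean foreground intensity
          return [mask.mean(), img[mask].mean() if mask.any() else 0.0]

      feats = np.array([features(im, segment(im)) for im in images])

      # initial trusted set: images where segmentation found a sizable region
      confident = feats[:, 0] > 0.1

      clf = LogisticRegression()
      for _ in range(5):
          if confident.all() or not confident.any():
              break                                # degenerate pseudo-labels; stop the toy loop
          clf.fit(feats, confident)                # recognition model trained on current pseudo-labels
          confident = clf.predict_proba(feats)[:, 1] > 0.5   # confidences refresh the trusted set

      print("agreement with ground truth:", float((confident == truth).mean()))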

  15. A Neuromorphic Architecture for Object Recognition and Motion Anticipation Using Burst-STDP

    PubMed Central

    Balduzzi, David; Tononi, Giulio

    2012-01-01

    In this work we investigate the possibilities offered by a minimal framework of artificial spiking neurons to be deployed in silico. Here we introduce a hierarchical network architecture of spiking neurons which learns to recognize moving objects in a visual environment and determine the correct motor output for each object. These tasks are learned through both supervised and unsupervised spike-timing-dependent plasticity (STDP). STDP is responsible for the strengthening (or weakening) of synapses in relation to pre- and post-synaptic spike times and has been described as a Hebbian paradigm taking place both in vitro and in vivo. We utilize a variation of STDP learning, called burst-STDP, which is based on the notion that, since spikes are expensive in terms of energy consumption, strong bursting activity carries more information than single (sparse) spikes. Furthermore, this learning algorithm takes advantage of homeostatic renormalization, which has been hypothesized to promote memory consolidation during NREM sleep. Using this learning rule, we design a spiking neural network architecture capable of object recognition, motion detection, attention towards important objects, and motor control outputs. We demonstrate the abilities of our design in a simple environment with distractor objects, multiple objects moving concurrently, and in the presence of noise. Most importantly, we show how this neural network is capable of performing these tasks using a simple leaky-integrate-and-fire (LIF) neuron model with binary synapses, making it fully compatible with state-of-the-art digital neuromorphic hardware designs. As such, the building blocks and learning rules presented in this paper appear promising for scalable fully neuromorphic systems to be implemented in hardware chips. PMID:22615855
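
    A rough sketch of a leaky integrate-and-fire layer with binary synapses and a burst-driven plasticity update is given below; the burst detector, the potentiation/pruning steps, and every constant are hypothetical simplifications for illustration, not the authors' burst-STDP implementation.

      import numpy as np

      rng = np.random.default_rng(1)

      n_in, n_out = 50, 5
      dt, steps = 1.0, 200                 # ms
      tau, v_th, v_reset = 20.0, 1.0, 0.0

      w = (rng.random((n_out, n_in)) < 0.2).astype(float)   # binary synapses (1 = connected)
      v = np.zeros(n_out)
      recent_spikes = np.zeros(n_out)      # leaky spike count used to detect bursts

      for _ in range(steps):
          in_spikes = rng.random(n_in) < 0.05               # Poisson-like input spikes
          v += dt * (-v / tau) + 0.08 * (w @ in_spikes)     # leaky integration of weighted input
          fired = v >= v_th
          v[fired] = v_reset

          # burst detection: several output spikes within a short window
          recent_spikes = 0.8 * recent_spikes + fired
          bursting = recent_spikes > 2.0

          # burst-driven update (sketch): synapses from just-active inputs onto bursting
          # neurons are switched on; a tiny random fraction elsewhere is pruned, as a
          # crude stand-in for homeostatic renormalization.
          if bursting.any():
              w[np.ix_(bursting, in_spikes)] = 1.0
          w[rng.random(w.shape) < 0.001] = 0.0

      print("active synapses after learning:", int(w.sum()))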

  16. Securing iris recognition systems against masquerade attacks

    NASA Astrophysics Data System (ADS)

    Galbally, Javier; Gomez-Barrero, Marta; Ross, Arun; Fierrez, Julian; Ortega-Garcia, Javier

    2013-05-01

    A novel two-stage protection scheme for automatic iris recognition systems against masquerade attacks carried out with synthetically reconstructed iris images is presented. The method uses different characteristics of real iris images to differentiate them from the synthetic ones, thereby addressing important security flaws detected in state-of-the-art commercial systems. Experiments are carried out on the publicly available Biosecure Database and demonstrate the efficacy of the proposed security enhancing approach.

  17. Eyeblink Conditioning and Novel Object Recognition in the Rabbit: Behavioral Paradigms for Assaying Psychiatric Diseases

    PubMed Central

    Weiss, Craig; Disterhoft, John F.

    2015-01-01

    Analysis of data collected from behavioral paradigms has provided important information for understanding the etiology and progression of diseases that involve neural regions mediating abnormal behavior. The trace eyeblink conditioning (EBC) paradigm is particularly suited to examine cerebro-cerebellar interactions since the paradigm requires the cerebellum, forebrain, and awareness of the stimulus contingencies. Impairments in acquiring EBC have been noted in several neuropsychiatric conditions, including schizophrenia, Alzheimer’s disease (AD), progressive supranuclear palsy, and post-traumatic stress disorder. Although several species have been used to examine EBC, the rabbit is unique in its tolerance for restraint, which facilitates imaging, its relatively large skull that facilitates chronic neuronal recordings, a genetic sequence for amyloid that is identical to humans which makes it a valuable model to study AD, and in contrast to rodents, it has a striatum that is differentiated into a caudate and a putamen that facilitates analysis of diseases involving the striatum. This review focuses on EBC during schizophrenia and AD since impairments in cerebro-cerebellar connections have been hypothesized to lead to a cognitive dysmetria. We also relate EBC to conditioned avoidance responses that are more often examined for effects of antipsychotic medications, and we propose that an analysis of novel object recognition (NOR) may add to our understanding of how the underlying neural circuitry has changed during disease states. We propose that the EBC and NOR paradigms will help to determine which therapeutics are effective for treating the cognitive aspects of schizophrenia and AD, and that neuroimaging may reveal biomarkers of the diseases and help to evaluate potential therapeutics. The rabbit, thus, provides an important translational system for studying neural mechanisms mediating maladaptive behaviors that underlie some psychiatric diseases, especially

  18. A roadmap to global illumination in 3D scenes: solutions for GPU object recognition applications

    NASA Astrophysics Data System (ADS)

    Picos, Kenia; Díaz-Ramírez, Victor H.; Tapia, Juan J.

    2014-09-01

    The interaction of light with matter is remarkably complex. Adequate modeling of global illumination has been a widely studied topic since the beginning of computer graphics and is still an unsolved problem. The rendering equation for global illumination is based on the refraction and reflection of light interacting with matter within an environment. This physical process has a high computational complexity when implemented on a digital computer. The appearance of an object depends on light interactions with the surface of the material, such as emission, scattering, and absorption. Several image-synthesis methods have been used to realistically render the appearance of light incident on an object. Recent global illumination algorithms employ mathematical models and computational strategies that improve the efficiency of the simulation. This work presents a review of the state of the art of global illumination algorithms and focuses on the efficiency of the solution in a computational implementation on a graphics processing unit. A reliable system is developed to simulate realistic scenes in the context of real-time object recognition under different lighting conditions. Computer simulation results are presented and discussed in terms of discrimination capability and robustness to additive noise when considering several lighting model reflections and multiple light sources.
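
    For reference, the standard form of the rendering equation that such global illumination algorithms approximate (the classical Kajiya formulation, quoted here for context rather than taken from this paper) is

      \[
      L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
        + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i ,
      \]

    where L_o is the outgoing radiance at surface point x toward the viewer, L_e is the emitted radiance, f_r is the BRDF describing how the surface reflects incoming light, L_i is the incoming radiance, and n is the surface normal. GPU-based global illumination methods differ mainly in how they approximate the integral (for example by Monte Carlo sampling or precomputed light transport) in order to reach real-time rates.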

  19. The influence of surface color information and color knowledge information in object recognition.

    PubMed

    Bramão, Inês; Faísca, Luís; Petersson, Karl Magnus; Reis, Alexandra

    2010-01-01

    In order to clarify whether the influence of color knowledge information in object recognition depends on the presence of the appropriate surface color, we designed a name-object verification task. The relationship between color and shape information provided by the name and by the object photo was manipulated in order to assess color interference independently of shape interference. We tested three different versions for each object: typically colored, black and white, and nontypically colored. The response times on the nonmatching trials were used to measure the interference between the name and the photo. We predicted that the more similar the name and the photo are, the longer it would take to respond. Overall, the color similarity effect disappeared in the black-and-white and nontypical color conditions, suggesting that the influence of color knowledge on object recognition depends on the presence of the appropriate surface color information.

  20. AVNG system objectives and concept

    SciTech Connect

    Macarthur, Duncan W; Thron, Jonathan; Razinkov, Sergey; Livke, Alexander; Kondratov, Sergey

    2010-01-01

    Any verification measurement performed on potentially classified nuclear material must satisfy two constraints. First and foremost, no classified information can be released to the monitoring party. At the same time, the monitoring party must gain sufficient confidence from the measurement to believe that the material being measured is consistent with the host's declarations concerning that material. The attribute measurement technique addresses both concerns by measuring several attributes of the nuclear material and displaying unclassified results through green (indicating that the material does possess the specified attribute) and red (indicating that the material does not possess the specified attribute) lights. The AVNG that we describe is an attribute measurement system built by RFNC-VNIIEF in Sarov, Russia. The AVNG measures the three attributes of 'plutonium presence,' 'plutonium mass >2 kg,' and 'plutonium isotopic ratio ({sup 240}Pu to {sup 239}Pu) <0.1' and was demonstrated in Sarov for a joint US/Russian audience in June 2009. In this presentation, we will outline the goals and objectives of the AVNG measurement system. These goals are driven by the two, sometimes conflicting, requirements mentioned above. We will describe the conceptual design of the AVNG and show how this conceptual design grew out of these goals and objectives.

  1. Augmented reality three-dimensional object visualization and recognition with axially distributed sensing.

    PubMed

    Markman, Adam; Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-01-15

    An augmented reality (AR) smartglass display combines real-world scenes with digital information enabling the rapid growth of AR-based applications. We present an augmented reality-based approach for three-dimensional (3D) optical visualization and object recognition using axially distributed sensing (ADS). For object recognition, the 3D scene is reconstructed, and feature extraction is performed by calculating the histogram of oriented gradients (HOG) of a sliding window. A support vector machine (SVM) is then used for classification. Once an object has been identified, the 3D reconstructed scene with the detected object is optically displayed in the smartglasses allowing the user to see the object, remove partial occlusions of the object, and provide critical information about the object such as 3D coordinates, which are not possible with conventional AR devices. To the best of our knowledge, this is the first report on combining axially distributed sensing with 3D object visualization and recognition for applications to augmented reality. The proposed approach can have benefits for many applications, including medical, military, transportation, and manufacturing.
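
    A compact sketch of the HOG + SVM recognition stage is shown below; the random 64x64 patches, labels, and window parameters are placeholders for views reconstructed from the axially distributed sensing setup.

      import numpy as np
      from skimage.feature import hog
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(0)

      def hog_features(patch):
          return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2), block_norm="L2-Hys")

      # stand-in training patches and labels (1 = object, 0 = background)
      train_patches = rng.random((40, 64, 64))
      train_labels = rng.integers(0, 2, 40)
      clf = LinearSVC().fit([hog_features(p) for p in train_patches], train_labels)

      # sliding-window classification over a larger (here synthetic) reconstructed scene
      scene = rng.random((128, 128))
      step, win = 16, 64
      detections = [(r, c)
                    for r in range(0, scene.shape[0] - win + 1, step)
                    for c in range(0, scene.shape[1] - win + 1, step)
                    if clf.predict([hog_features(scene[r:r + win, c:c + win])])[0] == 1]
      print("candidate object windows:", detections)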

  2. A Genetic-Algorithm-Based Explicit Description of Object Contour and its Ability to Facilitate Recognition.

    PubMed

    Wei, Hui; Tang, Xue-Song

    2015-11-01

    Shape representation is an extremely important and longstanding problem in the field of pattern recognition. The closed contour, i.e., the shape contour, plays a crucial role in the comparison of shapes. Because the shape contour is the most stable, distinguishable, and invariant feature of an object, it is useful to incorporate it into the recognition process. This paper proposes a method based on genetic algorithms. The proposed method can be used to identify the most common contour fragments, which can then be used to represent the contours of a shape category. The common fragments make explicit the structural regularities contained in the contours. This paper shows that such an explicit representation of the shape contour contributes significantly to shape representation and object recognition.
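
    The sketch below shows one plausible way a binary-chromosome genetic algorithm could select a small set of common contour fragments; the fragment-to-shape match matrix and the fitness (category coverage minus a size penalty) are illustrative assumptions, not the paper's actual encoding.

      import numpy as np

      rng = np.random.default_rng(0)

      n_fragments, n_shapes = 30, 20
      match = rng.random((n_fragments, n_shapes)) < 0.3   # fragment i matches shape j (toy data)

      def fitness(chrom):
          if not chrom.any():
              return -1.0
          coverage = match[chrom].any(axis=0).mean()      # fraction of shapes explained
          return coverage - 0.02 * chrom.sum()            # prefer compact descriptions

      pop = rng.random((40, n_fragments)) < 0.2           # binary chromosomes: selected fragments
      for _ in range(100):
          scores = np.array([fitness(c) for c in pop])
          parents = pop[np.argsort(scores)[::-1][:20]]    # truncation selection
          children = []
          for _ in range(20):
              a, b = parents[rng.integers(0, 20, 2)]
              cut = rng.integers(1, n_fragments)
              child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
              child ^= rng.random(n_fragments) < 0.02     # bit-flip mutation
              children.append(child)
          pop = np.vstack([parents, children])

      best = pop[np.argmax([fitness(c) for c in pop])]
      print("selected fragment indices:", np.flatnonzero(best))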

  3. Optimization of object region and boundary extraction by energy minimization for activity recognition

    NASA Astrophysics Data System (ADS)

    Albalooshi, Fatema A.; Asari, Vijayan K.

    2013-05-01

    Automatic video segmentation for human activity recognition plays an important role in several computer vision applications. The Active Contour Model (ACM) has been used extensively for unsupervised adaptive segmentation and automatic object region and boundary extraction in video sequences. This paper presents an optimization of the Active Contour Model using a recurrent architecture for automatic object region and boundary extraction in human activity video sequences. Taking advantage of the collective computational ability and energy convergence capability of the recurrent architecture, the energy function of the Active Contour Model is optimized with lower computational time. The system starts by initializing the recurrent architecture state from the initial boundary points and ends with a final contour that represents the actual boundary points of the human body region. The initial contour of the Active Contour Model is computed using background subtraction based on a Gaussian Mixture Model (GMM), such that the background model is built dynamically and regularly updated to cope with challenges including illumination changes, camera oscillations, and changes in background geometry. The recurrent nature is useful for dealing with optimization problems because of its dynamic behavior, thus ensuring convergence of the system. The proposed boundary detection and region extraction can be used for real-time processing. The method results in an effective segmentation that is less sensitive to noise and complex environments. Experiments on different databases of human activity show that our method is effective and can be used for real-time video segmentation.
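
    A minimal sketch of the initialization stage is given below: a GMM background model (OpenCV's MOG2) is built over synthetic frames and the largest foreground contour is extracted to seed the Active Contour Model. The frames, parameters, and OpenCV 4.x return conventions are assumptions for illustration; the recurrent energy minimization itself is not reproduced.

      import numpy as np
      import cv2

      rng = np.random.default_rng(0)

      def frame(t):
          # synthetic frame: a bright blob (the "person") drifts over a noisy static background
          img = (40 + 10 * rng.random((120, 160))).astype(np.uint8)
          cv2.circle(img, (30 + 4 * t, 60), 15, 200, -1)
          return img

      # background model built and regularly updated with a Gaussian mixture (MOG2)
      mog2 = cv2.createBackgroundSubtractorMOG2(history=50, varThreshold=16,
                                                detectShadows=False)
      for t in range(25):
          mask = mog2.apply(frame(t))

      mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                     cv2.CHAIN_APPROX_NONE)      # OpenCV 4.x return values
      if contours:
          initial_snake = max(contours, key=cv2.contourArea).squeeze()   # (N, 2) boundary points
          print("initial contour for the ACM:", initial_snake.shape)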

  4. Intelligent recognitive systems in nanomedicine

    PubMed Central

    Culver, Heidi; Daily, Adam; Khademhosseini, Ali

    2014-01-01

    There is a bright future in the development and utilization of nanoscale systems based on intelligent materials that can respond to external input providing a beneficial function. Specific functional groups can be incorporated into polymers to make them responsive to environmental stimuli such as pH, temperature, or varying concentrations of biomolecules. The fusion of such “intelligent” biomaterials with nanotechnology has led to the development of powerful therapeutic and diagnostic platforms. For example, targeted release of proteins and chemotherapeutic drugs has been achieved using pH-responsive nanocarriers while biosensors with ultra-trace detection limits are being made using nanoscale, molecularly imprinted polymers. The efficacy of therapeutics and the sensitivity of diagnostic platforms will continue to progress as unique combinations of responsive polymers and nanomaterials emerge. PMID:24860724

  5. A neural network based speech recognition system

    NASA Astrophysics Data System (ADS)

    Carroll, Edward J.; Coleman, Norman P., Jr.; Reddy, G. N.

    1990-02-01

    An overview is presented of the development of a neural network based speech recognition system. The two primary tasks involved were the development of a time-invariant speech encoder and a pattern recognizer or detector. The speech encoder uses amplitude normalization and a Fast Fourier Transform to eliminate amplitude and frequency shifts of acoustic cues. The detector consists of a back-propagation network which accepts data from the encoder and identifies individual words. This use of neural networks offers two advantages over conventional algorithmic detectors: the detection time is no more than a few network time constants, and the recognition speed is independent of the number of words in the vocabulary. The completed system has functioned as expected, with high tolerance to input variation and with error rates comparable to a commercial system when used in a noisy environment.
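
    A small sketch of the encoder/detector split described above is given below, with synthetic tones standing in for recorded utterances and scikit-learn's MLP standing in for the original back-propagation network; the vocabulary, sampling rate, and network size are arbitrary placeholders.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)

      def encode(signal):
          # amplitude normalization followed by the FFT magnitude spectrum
          # (dropping phase removes sensitivity to time shifts of the acoustic cues)
          signal = signal / (np.max(np.abs(signal)) + 1e-9)
          return np.abs(np.fft.rfft(signal, n=256))

      def utterance(word):
          # hypothetical two-word vocabulary built from noisy tones
          t = np.arange(512) / 8000.0
          freq = 440.0 if word == 0 else 880.0
          return (np.sin(2 * np.pi * freq * t) * rng.uniform(0.3, 1.0)
                  + 0.1 * rng.normal(size=512))

      X = np.array([encode(utterance(w)) for w in [0] * 30 + [1] * 30])
      y = np.array([0] * 30 + [1] * 30)

      # back-propagation network acting as the word detector
      net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)
      print("recognized word:", net.predict([encode(utterance(1))])[0])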

  6. Automatic TLI recognition system, user's guide

    SciTech Connect

    Lassahn, G.D.

    1997-02-01

    This report describes how to use an automatic target recognition system (version 14). In separate volumes are a general description of the ATR system, Automatic TLI Recognition System, General Description, and a programmer's manual, Automatic TLI Recognition System, Programmer's Guide.

  7. Feature discovery in gray level imagery for one-class object recognition

    SciTech Connect

    Koch, M.W.; Moya, M.M.

    1993-12-31

    Feature extraction transforms an object's image representation into an alternate, reduced representation. In one-class object recognition, we would like this alternate representation to give improved discrimination between the object and all possible non-objects and improved generalization between different object poses. Feature selection can be time-consuming and difficult to optimize, so we have investigated unsupervised neural networks for feature discovery. We first discuss an inherent limitation of competitive-type neural networks for discovering features in gray-level images. We then show how Sanger's Generalized Hebbian Algorithm (GHA) removes this limitation and describe a novel GHA application for learning object features that discriminate the object from clutter. Using a specific example, we show how these features are better at distinguishing the target object from other non-target objects with Carpenter's ART 2-A as the pattern classifier.
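
    For concreteness, a minimal implementation of Sanger's Generalized Hebbian Algorithm is sketched below on random gray-level patches (stand-ins for patches drawn from target imagery); the rows of W converge toward the leading principal components, i.e. the discovered features.

      import numpy as np

      rng = np.random.default_rng(0)

      patches = rng.normal(size=(2000, 64))       # 8x8 gray-level patches, flattened
      patches -= patches.mean(axis=0)             # zero-mean inputs

      n_features, lr = 4, 1e-3
      W = rng.normal(scale=0.1, size=(n_features, 64))

      for x in patches:
          y = W @ x
          # Sanger's rule: dW = lr * (y x^T - LT[y y^T] W), with LT the lower-triangular part
          W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

      print("feature vector norms:", np.linalg.norm(W, axis=1).round(2))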

  8. HONTIOR - HIGHER-ORDER NEURAL NETWORK FOR TRANSFORMATION INVARIANT OBJECT RECOGNITION

    NASA Technical Reports Server (NTRS)

    Spirkovska, L.

    1994-01-01

    Neural networks have been applied in numerous fields, including transformation-invariant object recognition, wherein an object is recognized despite changes in the object's position in the input field, size, or rotation. One of the more successful neural network methods used in invariant object recognition is the higher-order neural network (HONN) method. With a HONN, known relationships are exploited and the desired invariances are built directly into the architecture of the network, eliminating the need for the network to learn invariance to transformations. This results in a significant reduction in the training time required, since the network needs to be trained on only one view of each object, not on numerous transformed views. Moreover, one hundred percent accuracy is guaranteed for images characterized by the built-in distortions, provided noise is not introduced through pixelation. The program HONTIOR implements a third-order neural network having invariance to translation, scale, and in-plane rotation built directly into the architecture. Thus, for 2-D transformation invariance, the network needs to be trained on only one view of each object. HONTIOR can also be used for 3-D transformation-invariant object recognition by training the network only on a set of out-of-plane rotated views. Historically, the major drawback of HONNs has been that the size of the input field was limited by the memory required for the large number of interconnections in a fully connected network. HONTIOR solves this problem by coarse coding the input images (coding an image as a set of overlapping but offset coarser images). Using this scheme, large input fields (4096 x 4096 pixels) can easily be represented using very little virtual memory (30 MB). The HONTIOR distribution consists of three main programs. The first program contains the training and testing routines for a third-order neural network. The second program contains the same training and testing procedures as the
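
    The coarse-coding idea mentioned above (an image represented as a set of overlapping but offset coarser images) can be sketched as follows; the block size and the binary OR pooling are illustrative choices, not HONTIOR's exact scheme.

      import numpy as np

      def coarse_code(image, factor=4):
          # represent a binary image as factor*factor overlapping, offset coarse fields,
          # each obtained by pooling factor x factor blocks starting at a different offset
          fields = []
          for dr in range(factor):
              for dc in range(factor):
                  shifted = image[dr:, dc:]
                  h = (shifted.shape[0] // factor) * factor
                  w = (shifted.shape[1] // factor) * factor
                  blocks = shifted[:h, :w].reshape(h // factor, factor, w // factor, factor)
                  fields.append(blocks.sum(axis=(1, 3)) > 0)
          return fields

      img = np.zeros((64, 64), dtype=bool)
      img[20:40, 10:30] = True                 # toy binary object
      fields = coarse_code(img)
      print("coarse fields:", len(fields), "first field shape:", fields[0].shape)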

  9. Recognition of 3D objects for autonomous mobile robot's navigation in automated shipbuilding

    NASA Astrophysics Data System (ADS)

    Lee, Hyunki; Cho, Hyungsuck

    2007-10-01

    Nowadays many parts of the shipbuilding process are automated, but the painting process is not, because of the difficulty of automated on-line painting quality measurement, the harsh painting environment, and the difficulty of robot navigation. However, painting automation is necessary because it can provide consistent painting film thickness. Furthermore, autonomous mobile robots are strongly required for flexible painting work. The main problem for autonomous mobile robot navigation is that there are many obstacles which are not represented in the CAD data. To overcome this problem, obstacle detection and recognition are necessary to avoid obstacles and carry out the painting work effectively. Until now many object recognition algorithms have been studied; in particular, 2D object recognition methods using intensity images have been widely studied. However, in our case environmental illumination does not exist, so these methods cannot be used. To overcome this, 3D range data must be used, but the problems with 3D range data are the high computational cost and the long recognition time due to the huge database. In this paper, we propose a 3D object recognition algorithm based on PCA (Principal Component Analysis) and NN (Neural Network). The novelty of the algorithm is that the measured 3D range data are transformed into intensity information, and the PCA and NN algorithms are then applied to the transformed intensity information to reduce the processing time and make the data easy to handle, which were disadvantages of previous research on 3D object recognition. A set of experimental results is shown to verify the effectiveness of the proposed algorithm.
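
    A hedged sketch of the range-to-intensity transformation followed by PCA and a small neural network classifier is shown below; the synthetic "range maps", the two obstacle classes, and the network size are placeholders for the paper's data and architecture.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)

      def range_to_intensity(depth):
          # normalize depth into an intensity-like image and flatten it
          d = depth - depth.min()
          return (d / (d.max() + 1e-9)).ravel()

      def fake_range_map(cls):
          base = rng.normal(5.0, 0.1, (32, 32))
          if cls == 1:
              base[8:24, 8:24] -= 1.0                       # box-like obstacle
          else:
              base -= np.linspace(0.0, 1.0, 32)[None, :]    # ramp-like obstacle
          return base

      X = np.array([range_to_intensity(fake_range_map(c)) for c in [0, 1] * 40])
      y = np.array([0, 1] * 40)

      model = make_pipeline(PCA(n_components=10),
                            MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                          random_state=0))
      model.fit(X, y)
      print("predicted class:", model.predict([range_to_intensity(fake_range_map(1))])[0])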

  10. Effects of selective neonatal hippocampal lesions on tests of object and spatial recognition memory in monkeys

    PubMed Central

    Heuer, Eric; Bachevalier, Jocelyne

    2011-01-01

    Earlier studies in monkeys have reported mild impairment in recognition memory following nonselective neonatal hippocampal lesions (Bachevalier, Beauregard, & Alvarado, 1999; Rehbein, Killiany, & Mahut, 2005). To assess whether the memory impairment could have resulted from damage to cortical areas adjacent to the hippocampus, we tested adult monkeys with neonatal focal hippocampal lesions and sham-operated controls in three recognition tasks: delayed nonmatching-to-sample, object memory span, and spatial memory span. Further, to rule out that normal performance on these tasks may relate to functional sparing following neonatal hippocampal lesions, we tested adult monkeys that had received the same focal hippocampal lesions in adulthood and their controls in the same three memory tasks. Both early and late onset focal hippocampal damage did not alter performance on any of the three tasks, suggesting that damage to cortical areas adjacent to the hippocampus was likely responsible for the recognition impairment reported by the earlier studies. In addition, given that animals with early and late onset hippocampal lesions showed object and spatial recognition impairment when tested in a visual paired comparison task (Zeamer, Meunier, & Bachevalier, Submitted; Zeamer, Heuer & Bachevalier, 2010), the data suggest that not all object and spatial recognition tasks are solved by hippocampal-dependent memory processes. The current data may not only help explain the neural substrate for the partial recognition memory impairment reported in cases of developmental amnesia (Adlam, Malloy, Mishkin, & Vargha-Khadem, 2009), but they are also clinically relevant given that the object and spatial memory tasks used in monkeys are often translated to investigate memory functions in several populations of human infants and children in which dysfunction of the hippocampus is suspected. PMID:21341885

  11. Effects of exposure to heavy particles and aging on object recognition memory in rats

    NASA Astrophysics Data System (ADS)

    Rabin, Bernard; Joseph, James; Shukitt-Hale, Barbara; Carrihill-Knoll, Kirsty; Shannahan, Ryan; Hering, Kathleen

    Exposure to HZE particles produces changes in neurocognitive performance. These changes, including deficits in spatial learning and memory, object recognition memory, and operant responding, are also observed in the aged organism. As such, it has been proposed that exposure to heavy particles produces "accelerated aging". Because aging is an ongoing process, it is possible that there would be an interaction between the effects of exposure and the effects of aging, such that doses of HZE particles that do not affect the performance of younger organisms will affect the performance of organisms as they age. The present experiments were designed to test the hypothesis that young rats that had been exposed to HZE particles would show a progressive deterioration in object recognition memory as a function of the age of testing. Rats were exposed to ¹²C, ²⁸Si, or ⁴⁸Ti particles at the NASA Space Radiation Laboratory at Brookhaven National Laboratory. Following irradiation, the rats were shipped to UMBC for behavioral testing. HZE particle-induced changes in object recognition memory were tested using a standard procedure: rats were placed in an open field and allowed to interact with two identical objects for up to 30 sec; twenty-four hours later the rats were again placed in the open field, this time containing one familiar and one novel object. Non-irradiated control animals spent significantly more time with the novel object than with the familiar object. In contrast, the rats that had been exposed to heavy particles spent equal amounts of time with the novel and familiar objects. The lowest dose of HZE particles which produced a disruption of object recognition memory was determined three months and eleven months following exposure. The threshold dose needed to disrupt object recognition memory three months following irradiation varied as a function of the specific particle and energy. When tested eleven months following irradiation, doses of HZE particles that did

  12. What is special about expertise? Visual expertise reveals the interactive nature of real-world object recognition.

    PubMed

    Harel, Assaf

    2016-03-01

    Ever since Diamond and Carey's (1986, J. Exp. Psychol.: Gen., vol. 115, pp. 107-117) seminal work, the main model for studying expertise in visual object recognition ("visual expertise") has been face perception. The underlying assumption was that since faces may be considered the ultimate domain of visual expertise, any face-processing signature might actually be a general characteristic of visual expertise. However, while humans are clearly experts in face recognition, visual expertise is not restricted to faces and can be observed in a variety of domains. This raises the question of whether face recognition is in fact the right model to study visual expertise, and if not, what are the common cognitive and neural characteristics of visual expertise. The current perspective article addresses this question by revisiting past and recent neuroimaging and behavioural works on visual expertise. The view of visual expertise that emerges from these works is that expertise is a unique phenomenon, with distinctive neural and cognitive characteristics. Specifically, visual expertise is a controlled, interactive process that develops from the reciprocal interactions between the visual system and multiple top-down factors, including semantic knowledge, top-down attentional control, and task relevance. These interactions enable the ability to flexibly access domain-specific information at multiple scales and levels guided by multiple recognition goals. Extensive visual experience with a given object category culminates in the recruitment of these multiple systems, and is reflected in widespread neural activity, extending well beyond visual cortex, to include higher-level cortical areas.

  13. Perirhinal Cortex Resolves Feature Ambiguity in Configural Object Recognition and Perceptual Oddity Tasks

    ERIC Educational Resources Information Center

    Bartko, Susan J.; Winters, Boyer D.; Cowell, Rosemary A.; Saksida, Lisa M.; Bussey, Timothy J.

    2007-01-01

    The perirhinal cortex (PRh) has a well-established role in object recognition memory. More recent studies suggest that PRh is also important for two-choice visual discrimination tasks. Specifically, it has been suggested that PRh contains conjunctive representations that help resolve feature ambiguity, which occurs when a task cannot easily be…

  14. Developmental Changes in Visual Object Recognition between 18 and 24 Months of Age

    ERIC Educational Resources Information Center

    Pereira, Alfredo F.; Smith, Linda B.

    2009-01-01

    Two experiments examined developmental changes in children's visual recognition of common objects during the period of 18 to 24 months. Experiment 1 examined children's ability to recognize common category instances that presented three different kinds of information: (1) richly detailed and prototypical instances that presented both local and…

  15. Developmental Trajectories of Part-Based and Configural Object Recognition in Adolescence

    ERIC Educational Resources Information Center

    Juttner, Martin; Wakui, Elley; Petters, Dean; Kaur, Surinder; Davidoff, Jules

    2013-01-01

    Three experiments assessed the development of children's part and configural (part-relational) processing in object recognition during adolescence. In total, 312 school children aged 7-16 years and 80 adults were tested in 3-alternative forced choice (3-AFC) tasks. They judged the correct appearance of upright and inverted presented familiar…

  16. Mechanisms and Neural Basis of Object and Pattern Recognition: A Study with Chess Experts

    ERIC Educational Resources Information Center

    Bilalic, Merim; Langner, Robert; Erb, Michael; Grodd, Wolfgang

    2010-01-01

    Comparing experts with novices offers unique insights into the functioning of cognition, based on the maximization of individual differences. Here we used this expertise approach to disentangle the mechanisms and neural basis behind two processes that contribute to everyday expertise: object and pattern recognition. We compared chess experts and…

  17. Evaluation of Image Segmentation and Object Recognition Algorithms for Image Parsing

    DTIC Science & Technology

    2013-09-01

    Results for precision, recall, and F-measure indicate that the best approach to use for image segmentation is Sobel edge detection, and Canny or Sobel edge detection for object recognition. The process described in this report would not work for a warfighter or analyst; it has poor performance.

  18. Cross domains Arabic named entity recognition system

    NASA Astrophysics Data System (ADS)

    Al-Ahmari, S. Saad; Abdullatif Al-Johar, B.

    2016-07-01

    Named Entity Recognition (NER) plays an important role in many Natural Language Processing (NLP) applications such as Information Extraction (IE), Question Answering (QA), Text Clustering, Text Summarization, and Word Sense Disambiguation. This paper presents the development and implementation of a domain-independent system to recognize three types of Arabic named entities. The system works on the basis of a set of domain-independent grammar rules along with an Arabic part-of-speech tagger, in addition to gazetteers and lists of trigger words. The experimental results show that the system performed as well as other systems, with better results in some cases on cross-domain corpora.
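
    The sketch below illustrates the rule-plus-gazetteer idea in miniature; the lexicons, trigger words, and capitalization heuristic are hypothetical English stand-ins, since the actual system operates on Arabic text with a part-of-speech tagger.

      # toy gazetteer and trigger-word lexicons (illustrative entries only)
      GAZETTEER = {"Riyadh": "LOCATION", "Jeddah": "LOCATION", "Aramco": "ORGANIZATION"}
      TRIGGERS = {"PERSON": {"Dr.", "Sheikh", "Eng."},
                  "ORGANIZATION": {"University", "Company", "Ministry"},
                  "LOCATION": {"city", "province"}}

      def tag(sentence):
          tokens = sentence.split()
          entities = []
          for i, tok in enumerate(tokens):
              word = tok.strip(".,")
              if word in GAZETTEER:                         # gazetteer lookup
                  entities.append((word, GAZETTEER[word]))
              for label, trig in TRIGGERS.items():          # trigger-word grammar rules
                  if tok in trig and i + 1 < len(tokens) and tokens[i + 1][:1].isupper():
                      entities.append((tokens[i + 1].strip(".,"), label))
          return entities

      print(tag("Dr. Ahmed visited Aramco near Jeddah"))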

  19. Comparing object recognition from binary and bipolar edge images for visual prostheses

    NASA Astrophysics Data System (ADS)

    Jung, Jae-Hyun; Pu, Tian; Peli, Eli

    2016-11-01

    Visual prostheses require an effective representation method due to the limited display conditions, which offer only 2 or 3 levels of grayscale at low resolution. Edges derived from abrupt luminance changes in images carry essential information for object recognition. Typical binary (black and white) edge images have been used to represent features and convey this essential information. However, in scenes with a complex cluttered background, the recognition rate of binary edge images by human observers is limited and additional information is required. The polarity of edges and cusps (black or white features on a gray background) carries important additional information; the polarity may provide shape-from-shading information that is missing in the binary edge image. This depth information may be restored by using bipolar edges. We compared object recognition rates for 16 binary edge images and bipolar edge images by 26 subjects to determine the possible impact of bipolar filtering in visual prostheses with 3 or more levels of grayscale. Recognition rates were higher with bipolar edge images, and the improvement was significant in scenes with complex backgrounds. The results also suggest that erroneous shape-from-shading interpretation of bipolar edges resulting from pigment rather than boundaries of shape may confound the recognition.
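
    A simple way to contrast the two representations (assuming a Laplacian-of-Gaussian edge operator, which the abstract does not specify) is sketched below: thresholding the response magnitude yields a 2-level binary edge map, while keeping the sign yields a 3-level bipolar edge map.

      import numpy as np
      from scipy.ndimage import gaussian_laplace

      rng = np.random.default_rng(0)

      # toy scene: a bright square on a cluttered background
      img = rng.normal(0.4, 0.05, (96, 96))
      img[30:66, 30:66] += 0.4

      log = gaussian_laplace(img, sigma=2.0)
      threshold = 0.3 * np.abs(log).max()

      binary_edges = np.abs(log) > threshold        # 2 levels: edge / no edge
      bipolar_edges = np.zeros_like(log, dtype=int) # 3 levels: -1, 0, +1 (keeps edge polarity)
      bipolar_edges[log > threshold] = 1
      bipolar_edges[log < -threshold] = -1

      print("edge pixels:", int(binary_edges.sum()),
            "positive:", int((bipolar_edges == 1).sum()),
            "negative:", int((bipolar_edges == -1).sum()))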

  20. Blockade of glutamatergic transmission in perirhinal cortex impairs object recognition memory in macaques.

    PubMed

    Malkova, Ludise; Forcelli, Patrick A; Wellman, Laurie L; Dybdal, David; Dubach, Mark F; Gale, Karen

    2015-03-25

    The perirhinal cortex (PRc) is essential for visual recognition memory, as shown by electrophysiological recordings and lesion studies in a variety of species. However, relatively little is known about the functional contributions of perirhinal subregions. Here we used a systematic mapping approach to identify the critical subregions of PRc through transient, focal blockade of glutamate receptors by intracerebral infusion of kynurenic acid. Nine macaques were tested for visual recognition memory using the delayed nonmatch-to-sample task. We found that inactivation of medial PRc (consisting of Area 35 together with the medial portion of Area 36), but not lateral PRc (the lateral portion of Area 36), resulted in a significant delay-dependent impairment. Significant impairment was observed with 30 and 60 s delays but not with 10 s delays. The magnitude of impairment fell within the range previously reported after PRc lesions. Furthermore, we identified a restricted area located within the most anterior part of medial PRc as critical for this effect. Moreover, we found that focal blockade of either NMDA receptors by the receptor-specific antagonist AP-7 or AMPA receptors by the receptor-specific antagonist NBQX was sufficient to disrupt object recognition memory. The present study expands the knowledge of the role of PRc in recognition memory by identifying a subregion within this area that is critical for this function. Our results also indicate that, like in the rodent, both NMDA and AMPA-mediated transmission contributes to object recognition memory.

  1. Signal evolution in prey recognition systems.

    PubMed

    Pie, Marcio R

    2005-01-31

    In this paper a graphical model first developed in the context of kin recognition is adapted to the study of signalling in predator-prey systems. Antipredation strategies are envisioned as points along a signal-to-noise (S/N) axis, with concealing (low S/N) and conspicuous (high S/N) strategies being placed at opposite sides of this axis. Optimal prey recognition systems should find a trade-off between acceptance errors (going after a background cue as if it were a prey) and rejection errors (not going after a prey as if it were background noise). The model also predicts the types of cues the predator should use in opposite sides of the S/N axis.
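
    As an illustration of the acceptance/rejection trade-off described above, framed in standard signal detection terms with Gaussian cue distributions (an assumption for exposition, not necessarily the paper's exact graphical model):

      \[
      P_{\text{acceptance error}} = 1 - \Phi(c), \qquad
      P_{\text{rejection error}} = \Phi(c - d'),
      \]

    where background cues are distributed as N(0, 1), prey cues as N(d', 1) with d' the effective signal-to-noise separation, c is the predator's acceptance threshold, and Phi is the standard normal cumulative distribution function. Concealing (low S/N) prey shrink d', so any choice of c forces the predator to trade one error type against the other.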

  2. Perirhinal cortex lesions impair tests of object recognition memory but spare novelty detection.

    PubMed

    Olarte-Sánchez, Cristian M; Amin, Eman; Warburton, E Clea; Aggleton, John P

    2015-12-01

    The present study examined why perirhinal cortex lesions in rats impair the spontaneous ability to select novel objects in preference to familiar objects, when both classes of object are presented simultaneously. The study began by repeating this standard finding, using a test of delayed object recognition memory. As expected, the perirhinal cortex lesions reduced the difference in exploration times for novel vs. familiar stimuli. In contrast, the same rats with perirhinal cortex lesions appeared to perform normally when the preferential exploration of novel vs. familiar objects was tested sequentially, i.e. when each trial consisted of only novel or only familiar objects. In addition, there was no indication that the perirhinal cortex lesions reduced total levels of object exploration for novel objects, as would be predicted if the lesions caused novel stimuli to appear familiar. Together, the results show that, in the absence of perirhinal cortex tissue, rats still receive signals of object novelty, although they may fail to link that information to the appropriate object. Consequently, these rats are impaired in discriminating the source of object novelty signals, leading to deficits on simultaneous choice tests of recognition.

  3. Progestogens’ effects and mechanisms for object recognition memory across the lifespan

    PubMed Central

    Walf, Alicia A.; Koonce, Carolyn J.; Frye, Cheryl A.

    2016-01-01

    This review explores the effects of female reproductive hormones, estrogens and progestogens, with a focus on progesterone and allopregnanolone, on object memory. Progesterone and its metabolites, in particular allopregnanolone, exert various effects on both cognitive and non-mnemonic functions in females. The well-known object recognition task is a valuable experimental paradigm that can be used to determine the effects and mechanisms of progestogens for mnemonic effects across the lifespan, which will be discussed herein. In this task there is little test-decay when different objects are used as targets and baseline valence for objects is controlled. This allows repeated testing, within-subjects designs, and longitudinal assessments, which aid understanding of changes in hormonal milieu. Objects are not aversive or food-based, which are hormone-sensitive factors. This review focuses on published data from our laboratory, and others, using the object recognition task in rodents to assess the role and mechanisms of progestogens throughout the lifespan. Improvements in object recognition performance of rodents are often associated with higher hormone levels in the hippocampus and prefrontal cortex during natural cycles, with hormone replacement following ovariectomy in young animals, or with aging. The capacity for reversal of age- and reproductive senescence-related decline in cognitive performance, and changes in neural plasticity that may be dissociated from peripheral effects with such decline, are discussed. The focus here will be on the effects of brain-derived factors, such as the neurosteroid, allopregnanolone, and other hormones, for enhancing object recognition across the lifespan. PMID:26235328

  4. LASSBio-579, a prototype antipsychotic drug, and clozapine are effective in novel object recognition task, a recognition memory model.

    PubMed

    Antonio, Camila B; Betti, Andresa H; Herzfeldt, Vivian; Barreiro, Eliezer J; Fraga, Carlos A M; Rates, Stela M K

    2016-06-01

    Previous studies on the N-phenylpiperazine derivative LASSBio-579 have suggested that LASSBio-579 has an atypical antipsychotic profile. It binds to D2, D4 and 5-HT1A receptors and is effective in animal models of schizophrenia symptoms (prepulse inhibition disruption, apomorphine-induced climbing and amphetamine-induced stereotypy). In the current study, we evaluated the effect of LASSBio-579, clozapine (atypical antipsychotic) and haloperidol (typical antipsychotic) in the novel object recognition task, a recognition memory model with translational value. Haloperidol (0.01 mg/kg, orally) impaired the ability of the animals (CF1 mice) to recognize the novel object on short-term and long-term memory tasks, whereas LASSBio-579 (5 mg/kg, orally) and clozapine (1 mg/kg, orally) did not. In another set of experiments, animals previously treated with ketamine (10 mg/kg, intraperitoneally) or vehicle (saline 1 ml/100 g, intraperitoneally) received LASSBio-579, clozapine or haloperidol at different time-points: 1 h before training (encoding/consolidation); immediately after training (consolidation); or 1 h before long-term memory testing (retrieval). LASSBio-579 and clozapine protected against the long-term memory impairment induced by ketamine when administered at the stages of encoding, consolidation and retrieval of memory. These findings point to the potential of LASSBio-579 for treating cognitive symptoms of schizophrenia and other disorders.

  5. Dance recognition system using lower body movement.

    PubMed

    Simpson, Travis T; Wiesner, Susan L; Bennett, Bradford C

    2014-02-01

    The current means of locating specific movements in film necessitate hours of viewing, making the task of conducting research into movement characteristics and patterns tedious and difficult. This is particularly problematic for the research and analysis of complex movement systems such as sports and dance. While some systems have been developed to manually annotate film, to date no automated way of identifying complex, full body movement exists. With pattern recognition technology and knowledge of joint locations, automatically describing filmed movement using computer software is possible. This study used various forms of lower body kinematic analysis to identify codified dance movements. We created an algorithm that compares an unknown move with a specified start and stop against known dance moves. Our recognition method consists of classification and template correlation using a database of model moves. This system was optimized to include nearly 90 dance and Tai Chi Chuan movements, producing accurate name identification in over 97% of trials. In addition, the program had the capability to provide a kinematic description of either matched or unmatched moves obtained from classification recognition.
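
    A toy version of the template-correlation step is sketched below; the two joint-angle trajectories, the three-move database, and the correlation score are hypothetical simplifications of the optimized system described above.

      import numpy as np

      rng = np.random.default_rng(0)

      def resample(traj, n=100):
          # time-normalize a (frames x joints) trajectory to a common length
          t_old = np.linspace(0.0, 1.0, len(traj))
          t_new = np.linspace(0.0, 1.0, n)
          return np.column_stack([np.interp(t_new, t_old, traj[:, j])
                                  for j in range(traj.shape[1])])

      def make_move(freq):
          # stand-in "model move": two lower-body joint angles over a variable duration
          t = np.linspace(0.0, 1.0, int(rng.integers(80, 140)))
          return np.column_stack([np.sin(2 * np.pi * freq * t),
                                  np.cos(2 * np.pi * freq * t)])

      database = {"move_A": make_move(1.0), "move_B": make_move(2.0), "move_C": make_move(0.5)}

      def recognize(unknown):
          query = resample(unknown).ravel()
          scores = {name: np.corrcoef(query, resample(model).ravel())[0, 1]
                    for name, model in database.items()}
          return max(scores, key=scores.get)

      performance = make_move(2.0)
      performance += 0.05 * rng.normal(size=performance.shape)   # noisy unseen performance
      print("best matching move:", recognize(performance))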

  6. Deep Networks Can Resemble Human Feed-forward Vision in Invariant Object Recognition.

    PubMed

    Kheradpisheh, Saeed Reza; Ghodrati, Masoud; Ganjtabesh, Mohammad; Masquelier, Timothée

    2016-09-07

    Deep convolutional neural networks (DCNNs) have attracted much attention recently, and have been shown to be able to recognize thousands of object categories in natural image databases. Their architecture is somewhat similar to that of the human visual system: both use restricted receptive fields, and a hierarchy of layers which progressively extract more and more abstracted features. Yet it is unknown whether DCNNs match human performance at the task of view-invariant object recognition, whether they make similar errors and use similar representations for this task, and whether the answers depend on the magnitude of the viewpoint variations. To investigate these issues, we benchmarked eight state-of-the-art DCNNs, the HMAX model, and a baseline shallow model and compared their results to those of humans with backward masking. Unlike in all previous DCNN studies, we carefully controlled the magnitude of the viewpoint variations to demonstrate that shallow nets can outperform deep nets and humans when variations are weak. When facing larger variations, however, more layers were needed to match human performance and error distributions, and to have representations that are consistent with human behavior. A very deep net with 18 layers even outperformed humans at the highest variation level, using the most human-like representations.

  7. Deep Networks Can Resemble Human Feed-forward Vision in Invariant Object Recognition

    PubMed Central

    Kheradpisheh, Saeed Reza; Ghodrati, Masoud; Ganjtabesh, Mohammad; Masquelier, Timothée

    2016-01-01

    Deep convolutional neural networks (DCNNs) have attracted much attention recently, and have been shown to be able to recognize thousands of object categories in natural image databases. Their architecture is somewhat similar to that of the human visual system: both use restricted receptive fields, and a hierarchy of layers which progressively extract more and more abstracted features. Yet it is unknown whether DCNNs match human performance at the task of view-invariant object recognition, whether they make similar errors and use similar representations for this task, and whether the answers depend on the magnitude of the viewpoint variations. To investigate these issues, we benchmarked eight state-of-the-art DCNNs, the HMAX model, and a baseline shallow model and compared their results to those of humans with backward masking. Unlike in all previous DCNN studies, we carefully controlled the magnitude of the viewpoint variations to demonstrate that shallow nets can outperform deep nets and humans when variations are weak. When facing larger variations, however, more layers were needed to match human performance and error distributions, and to have representations that are consistent with human behavior. A very deep net with 18 layers even outperformed humans at the highest variation level, using the most human-like representations. PMID:27601096

  8. Mechanisms of Visual Object Recognition in Infancy: Five-Month-Olds Generalize beyond the Interpolation of Familiar Views

    ERIC Educational Resources Information Center

    Mash, Clay; Arterberry, Martha E.; Bornstein, Marc H.

    2007-01-01

    This work examined predictions of the interpolation of familiar views (IFV) account of object recognition performance in 5-month-olds. Infants were familiarized to an object either from a single viewpoint or from multiple viewpoints varying in rotation around a single axis. Object recognition was then tested in both conditions with the same object…

  9. Image Processing Strategies Based on a Visual Saliency Model for Object Recognition Under Simulated Prosthetic Vision.

    PubMed

    Wang, Jing; Li, Heng; Fu, Weizhen; Chen, Yao; Li, Liming; Lyu, Qing; Han, Tingting; Chai, Xinyu

    2016-01-01

    Retinal prostheses have the potential to restore partial vision. Object recognition in scenes of daily life is one of the essential tasks for implant wearers. Still limited by the low-resolution visual percepts provided by retinal prostheses, it is important to investigate and apply image processing methods to convey more useful visual information to the wearers. We proposed two image processing strategies based on Itti's visual saliency map, region of interest (ROI) extraction, and image segmentation. Itti's saliency model generated a saliency map from the original image, in which salient regions were grouped into ROI by the fuzzy c-means clustering. Then Grabcut generated a proto-object from the ROI labeled image which was recombined with background and enhanced in two ways--8-4 separated pixelization (8-4 SP) and background edge extraction (BEE). Results showed that both 8-4 SP and BEE had significantly higher recognition accuracy in comparison with direct pixelization (DP). Each saliency-based image processing strategy was subject to the performance of image segmentation. Under good and perfect segmentation conditions, BEE and 8-4 SP obtained noticeably higher recognition accuracy than DP, and under bad segmentation condition, only BEE boosted the performance. The application of saliency-based image processing strategies was verified to be beneficial to object recognition in daily scenes under simulated prosthetic vision. They are hoped to help the development of the image processing module for future retinal prostheses, and thus provide more benefit for the patients.
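
    The ROI-to-proto-object step can be sketched as follows; a simple color-contrast map and a percentile threshold stand in for Itti's saliency model and the fuzzy c-means grouping, while OpenCV's GrabCut extracts the proto-object that would then be recombined with the pixelized background (8-4 SP) or background edges (BEE).

      import numpy as np
      import cv2

      rng = np.random.default_rng(0)

      # synthetic color scene with one salient object (a bright red disc)
      scene = rng.normal(90, 10, (120, 160, 3)).clip(0, 255).astype(np.uint8)
      cv2.circle(scene, (80, 60), 25, (40, 40, 220), -1)

      # stand-in saliency map: per-pixel color contrast against the scene mean
      saliency = np.abs(scene.astype(float) - scene.mean()).sum(axis=2)
      roi = saliency > np.percentile(saliency, 90)          # grouped salient region (ROI)

      # bounding rectangle of the ROI seeds GrabCut
      ys, xs = np.nonzero(roi)
      rect = (int(xs.min()), int(ys.min()),
              int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))
      mask = np.zeros(scene.shape[:2], np.uint8)
      bgd = np.zeros((1, 65), np.float64)
      fgd = np.zeros((1, 65), np.float64)
      cv2.grabCut(scene, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

      proto_object = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))
      print("proto-object pixels:", int(proto_object.sum()))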

  10. c-Fos expression correlates with performance on novel object and novel place recognition tests.

    PubMed

    Mendez, Marta; Arias, Natalia; Uceda, Sara; Arias, Jorge L

    2015-08-01

    In rodents, many studies have been carried out using novelty-preference paradigms. The results show that the perirhinal cortex and the hippocampus are involved in the recognition of a novel object, "what", and its new position, "where", respectively. We employed these two variants of a novelty-preference paradigm to assess whether the expression of the immediate-early gene c-fos in the dorsal hippocampus and perirhinal cortex correlates with the performance discrimination ratio (d2), on the respective versions of the novelty preference tests. A control group (CO) was added to explore c-fos activation not specific to recognition. The results showed different patterns of c-Fos protein expression in the hippocampus and perirhinal cortex. The Where Group presented more c-Fos positive nuclei than the What and CO groups in the CA1 and CA3 regions, whereas in the perirhinal cortex, the What Group showed more c-Fos positive nuclei than the Where and CO groups. The correlation results indicate that levels of c-Fos in the CA1 area and perirhinal cortex correlate with effective exploration, d2, on the respective versions of the novelty preference tests, novel place and novel object recognition. These data suggest that the hippocampal CA1 and perirhinal cortex are specifically related to the level of recognition of place and objects, respectively.

  11. Optimized shape semantic graph representation for object understanding and recognition in point clouds

    NASA Astrophysics Data System (ADS)

    Ning, Xiaojuan; Wang, Yinghui; Meng, Weiliang; Zhang, Xiaopeng

    2016-10-01

    To understand and recognize three-dimensional (3-D) objects represented as point cloud data, we use an optimized shape semantic graph (SSG) to describe 3-D objects. Based on an object's decomposed components, the boundary surfaces between components, and the topology of the components, the SSG gives a semantic description that is consistent with human visual perception. The similarity measurement of the SSG for different objects is effective for distinguishing the type of object and finding the most similar one. Experiments using a shape database show that the SSG is valuable for capturing the components of objects and the corresponding relations between them. The SSG represents both the shape and the topology of an object, whether or not the object contains loops. Moreover, a two-step progressive similarity measurement strategy is proposed to effectively improve the recognition rate on a shape database containing point-sampled data.
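
    As a simplified illustration of comparing component-level descriptions (not the authors' SSG construction or their two-step strategy), the sketch below builds small attributed graphs whose nodes are hypothetical object components and measures their similarity with a graph edit distance from networkx.

```python
# Sketch: attributed component graphs compared by graph edit distance.
import networkx as nx

def build_component_graph(components, adjacencies):
    """components: {name: attribute dict}; adjacencies: list of (name, name) pairs."""
    g = nx.Graph()
    for name, attrs in components.items():
        g.add_node(name, **attrs)
    g.add_edges_from(adjacencies)
    return g

mug = build_component_graph({"body": {"type": "cylinder"}, "handle": {"type": "loop"}},
                            [("body", "handle")])
cup = build_component_graph({"body": {"type": "cylinder"}}, [])

# Node substitution costs nothing when component types match, 1 otherwise.
dist = nx.graph_edit_distance(
    mug, cup,
    node_subst_cost=lambda a, b: 0 if a["type"] == b["type"] else 1)
print("graph edit distance:", dist)       # smaller distance = more similar shapes
```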

  12. Implementation of a Peltier-based cooling device for localized deep cortical deactivation during in vivo object recognition testing

    NASA Astrophysics Data System (ADS)

    Marra, Kyle; Graham, Brett; Carouso, Samantha; Cox, David

    2012-02-01

    While the application of local cortical cooling has recently become a focus of neurological research, extended localized deactivation deep within brain structures is still unexplored. Using a wirelessly controlled thermoelectric (Peltier) device and a water-based heat sink, we have achieved inactivating temperatures (<20 °C) at greater depths (>8 mm) than previously reported. After implanting the device into Long Evans rats' basolateral amygdala (BLA), an inhibitory brain center that controls anxiety and fear, we ran an open field test during which anxiety-driven behavioral tendencies were observed to decrease during cooling, thus confirming the device's effect on behavior. Our device will next be implanted in the rats' temporal association cortex (TeA), and recordings from our signal-tracing multichannel microelectrodes will measure and compare activated and deactivated neuronal activity so as to isolate and study the TeA signals responsible for object recognition. Having already achieved a top-performing computational face-recognition system, the lab will use these TeA activity data to generalize its computational face-recognition efforts toward general object recognition.

  13. Beyond perceptual expertise: revisiting the neural substrates of expert object recognition.

    PubMed

    Harel, Assaf; Kravitz, Dwight; Baker, Chris I

    2013-12-27

    Real-world expertise provides a valuable opportunity to understand how experience shapes human behavior and neural function. In the visual domain, the study of expert object recognition, such as in car enthusiasts or bird watchers, has produced a large, growing, and often-controversial literature. Here, we synthesize this literature, focusing primarily on results from functional brain imaging, and propose an interactive framework that incorporates the impact of high-level factors, such as attention and conceptual knowledge, in supporting expertise. This framework contrasts with the perceptual view of object expertise that has concentrated largely on stimulus-driven processing in visual cortex. One prominent version of this perceptual account has almost exclusively focused on the relation of expertise to face processing and, in terms of the neural substrates, has centered on face-selective cortical regions such as the Fusiform Face Area (FFA). We discuss the limitations of this face-centric approach as well as the more general perceptual view, and highlight that expert related activity is: (i) found throughout visual cortex, not just FFA, with a strong relationship between neural response and behavioral expertise even in the earliest stages of visual processing, (ii) found outside visual cortex in areas such as parietal and prefrontal cortices, and (iii) modulated by the attentional engagement of the observer suggesting that it is neither automatic nor driven solely by stimulus properties. These findings strongly support a framework in which object expertise emerges from extensive interactions within and between the visual system and other cognitive systems, resulting in widespread, distributed patterns of expertise-related activity across the entire cortex.

  14. Beyond perceptual expertise: revisiting the neural substrates of expert object recognition

    PubMed Central

    Harel, Assaf; Kravitz, Dwight; Baker, Chris I.

    2013-01-01

    Real-world expertise provides a valuable opportunity to understand how experience shapes human behavior and neural function. In the visual domain, the study of expert object recognition, such as in car enthusiasts or bird watchers, has produced a large, growing, and often-controversial literature. Here, we synthesize this literature, focusing primarily on results from functional brain imaging, and propose an interactive framework that incorporates the impact of high-level factors, such as attention and conceptual knowledge, in supporting expertise. This framework contrasts with the perceptual view of object expertise that has concentrated largely on stimulus-driven processing in visual cortex. One prominent version of this perceptual account has almost exclusively focused on the relation of expertise to face processing and, in terms of the neural substrates, has centered on face-selective cortical regions such as the Fusiform Face Area (FFA). We discuss the limitations of this face-centric approach as well as the more general perceptual view, and highlight that expert related activity is: (i) found throughout visual cortex, not just FFA, with a strong relationship between neural response and behavioral expertise even in the earliest stages of visual processing, (ii) found outside visual cortex in areas such as parietal and prefrontal cortices, and (iii) modulated by the attentional engagement of the observer suggesting that it is neither automatic nor driven solely by stimulus properties. These findings strongly support a framework in which object expertise emerges from extensive interactions within and between the visual system and other cognitive systems, resulting in widespread, distributed patterns of expertise-related activity across the entire cortex. PMID:24409134

  15. Relating visual to verbal semantic knowledge: the evaluation of object recognition in prosopagnosia.

    PubMed

    Barton, Jason J S; Hanif, Hashim; Ashraf, Sohi

    2009-12-01

    Assessment of face specificity in prosopagnosia is hampered by difficulty in gauging pre-morbid expertise for non-face object categories, for which humans vary widely in interest and experience. In this study, we examined the correlation between visual and verbal semantic knowledge for cars to determine if visual recognition accuracy could be predicted from verbal semantic scores. Thirty-three healthy subjects and six prosopagnosic patients first rated their own knowledge of cars. They were then given a test of verbal semantic knowledge that presented them with the names of car models, to which they were to match the manufacturer. Lastly, they were given a test of visual recognition, presenting them with images of cars for which they were to provide information at three levels of specificity: model, manufacturer and decade of make. In controls, while self-ratings were only moderately correlated with either visual recognition or verbal semantic knowledge, verbal semantic knowledge was highly correlated with visual recognition, particularly for more specific levels of information. Item concordance showed that less-expert subjects were more likely to provide the most specific information (model name) for the image when they could also match the manufacturer to its name. Prosopagnosic subjects showed reduced visual recognition of cars after adjusting for verbal semantic scores. We conclude that visual recognition is highly correlated with verbal semantic knowledge, that formal measures of verbal semantic knowledge are a more accurate gauge of expertise than self-ratings, and that verbal semantic knowledge can be used to adjust tests of visual recognition for pre-morbid expertise in prosopagnosia.

  16. Structural Target Analysis And Recognition System

    NASA Astrophysics Data System (ADS)

    Lee, Harry C.

    1984-06-01

    The structural target analysis and recognition system (STARS) is a pyramid- and syntax-based vision system that uniquely classifies targets using their viewable internal structure. Being a totally structural approach, STARS uses a resolution sequence to develop a hierarchical, pyramid-organized segmentation and a formal language to perform the recognition function. Global structure of the target is derived from the segment connectivity across resolution levels, while local structure is based on the local relationships of segments at a single level. The relationships of both the global and local structures form a resolution syntax tree (RST). Two targets are said to be structurally similar if they have similar RSTs. The matching process of the RSTs proceeds from the root to the leaves of the tree. The depth to which the match progresses before failure or completion determines the degree of match in a resolution sense. RSTs from various views of a target are grouped together to form a formal language. The underlying grammar is transformed into a stochastic grammar so as to accommodate segmentation and environmental variations. Recognition metrics are a function of the resolution structure and the posterior probability at each resolution level. Because of the inherent resolution sequence, STARS can accommodate both candidate and reference targets at various resolutions.
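
    To make the root-to-leaves matching idea concrete, here is a toy sketch (not the STARS implementation) that scores two resolution syntax trees by how deep a label match progresses before the structures diverge; the node labels and trees are invented for illustration.

```python
# Toy sketch: depth of match between two resolution syntax trees.
def match_depth(rst_a, rst_b, depth=0):
    """Each tree is (label, [children]); returns the depth reached before the
    structures diverge, a rough 'degree of match in a resolution sense'."""
    label_a, children_a = rst_a
    label_b, children_b = rst_b
    if label_a != label_b:
        return depth
    if not children_a or not children_b:
        return depth + 1
    # Greedily keep the best continuation at the next resolution level
    return max(match_depth(ca, cb, depth + 1)
               for ca in children_a for cb in children_b)

candidate = ("blob", [("body", [("turret", [])]), ("tracks", [])])
reference = ("blob", [("body", [("turret", [])]), ("wheels", [])])
print(match_depth(candidate, reference))   # 3: the match reaches the turret level
```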

  17. Euro Banknote Recognition System for Blind People

    PubMed Central

    Dunai Dunai, Larisa; Chillarón Pérez, Mónica; Peris-Fajarnés, Guillermo; Lengua Lengua, Ismael

    2017-01-01

    This paper presents the development of a portable system with the aim of allowing blind people to detect and recognize Euro banknotes. The developed device is based on a Raspberry Pi electronic instrument and a Raspberry Pi camera, Pi NoIR (No Infrared filter), fitted with additional infrared light, which is embedded into a pair of sunglasses that permit blind and visually impaired people to independently handle Euro banknotes, especially when receiving their cash back when shopping. The banknote detection is based on a modified Viola-Jones algorithm, while the banknote value recognition relies on the Speeded-Up Robust Features (SURF) technique. The accuracies of banknote detection and banknote value recognition are 84% and 97.5%, respectively. PMID:28117703

  18. Euro Banknote Recognition System for Blind People.

    PubMed

    Dunai Dunai, Larisa; Chillarón Pérez, Mónica; Peris-Fajarnés, Guillermo; Lengua Lengua, Ismael

    2017-01-20

    This paper presents the development of a portable system with the aim of allowing blind people to detect and recognize Euro banknotes. The developed device is based on a Raspberry Pi electronic instrument and a Raspberry Pi camera, Pi NoIR (No Infrared filter), fitted with additional infrared light, which is embedded into a pair of sunglasses that permit blind and visually impaired people to independently handle Euro banknotes, especially when receiving their cash back when shopping. The banknote detection is based on a modified Viola-Jones algorithm, while the banknote value recognition relies on the Speeded-Up Robust Features (SURF) technique. The accuracies of banknote detection and banknote value recognition are 84% and 97.5%, respectively.
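
    Below is a hypothetical sketch of the two-stage pipeline described above: a Viola-Jones style cascade for banknote detection followed by local-feature matching for value recognition. The cascade file and template image names are assumptions, and ORB stands in for SURF, which requires the non-free opencv-contrib xfeatures2d module.

```python
# Sketch: cascade detection + local-feature matching against denomination templates.
import cv2

frame = cv2.imread("camera_frame.jpg")                      # assumed camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Stage 1: banknote detection with a trained cascade (file name is hypothetical)
cascade = cv2.CascadeClassifier("banknote_cascade.xml")
notes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Stage 2: value recognition by matching local features against reference templates
orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
templates = {"10 EUR": cv2.imread("eur10.png", 0), "20 EUR": cv2.imread("eur20.png", 0)}
template_features = {v: orb.detectAndCompute(img, None) for v, img in templates.items()}

for (x, y, w, h) in notes:
    crop = gray[y:y + h, x:x + w]
    kp, des = orb.detectAndCompute(crop, None)
    if des is None:
        continue
    # Pick the denomination whose template yields the most feature matches
    best = max(template_features,
               key=lambda v: len(matcher.match(des, template_features[v][1])))
    print("detected", best, "at", (x, y, w, h))
```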

  19. False recognition of objects in visual scenes: findings from a combined direct and indirect memory test.

    PubMed

    Weinstein, Yana; Nash, Robert A

    2013-01-01

    We report an extension of the procedure devised by Weinstein and Shanks (Memory & Cognition 36:1415-1428, 2008) to study false recognition and priming of pictures. Participants viewed scenes with multiple embedded objects (seen items), then studied the names of these objects and the names of other objects (read items). Finally, participants completed a combined direct (recognition) and indirect (identification) memory test that included seen items, read items, and new items. In the direct test, participants recognized pictures of seen and read items more often than new pictures. In the indirect test, participants' speed at identifying those same pictures was improved for pictures that they had actually studied, and also for falsely recognized pictures whose names they had read. These data provide new evidence that a false-memory induction procedure can elicit memory-like representations that are difficult to distinguish from "true" memories of studied pictures.

  20. Multiple degree of freedom object recognition using optical relational graph decision nets

    NASA Technical Reports Server (NTRS)

    Casasent, David P.; Lee, Andrew J.

    1988-01-01

    Multiple-degree-of-freedom object recognition concerns objects with no stable rest position, for which all scale, rotation, and aspect distortions are possible. It is assumed that the objects are in a fairly benign background, so that feature extractors are usable. In-plane distortion invariance is provided by use of a polar-log coordinate transform feature space, and out-of-plane distortion invariance is provided by linear discriminant function design. Relational graph decision nets are considered for multiple-degree-of-freedom pattern recognition. The design of Fisher (1936) linear discriminant functions and synthetic discriminant functions for use at the nodes of binary and multidecision nets is discussed. Case studies are detailed for two-class and multiclass problems. Simulation results demonstrate the robustness of the processors to quantization of the filter coefficients and to noise.
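
    As a rough software analogue of the in-plane invariance idea, not the optical processor described above, the sketch below applies a log-polar transform (which converts in-plane rotation and scale changes into shifts), reduces each image to a radial profile, and trains a Fisher linear discriminant on those features; the file names and the profile feature are assumptions.

```python
# Sketch: log-polar radial profiles classified with a Fisher linear discriminant.
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def logpolar_profile(path, size=64):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    center = (img.shape[1] / 2, img.shape[0] / 2)
    lp = cv2.warpPolar(img, (size, size), center, max(img.shape) / 2,
                       cv2.WARP_POLAR_LOG)
    # Summing over the angular axis yields a rotation-tolerant radial profile
    return lp.sum(axis=0).astype(np.float32)

X = np.stack([logpolar_profile(p) for p in ["tank1.png", "tank2.png",
                                            "truck1.png", "truck2.png"]])
y = np.array([0, 0, 1, 1])                  # hypothetical two-class training set
clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict([logpolar_profile("query.png")]))
```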

  1. Hippocampal Anatomy Supports the Use of Context in Object Recognition: A Computational Model

    PubMed Central

    Bhattacharyya, Rajan; Fellous, Jean-Marc

    2013-01-01

    The human hippocampus receives distinct signals via the lateral entorhinal cortex, typically associated with object features, and the medial entorhinal cortex, associated with spatial or contextual information. The existence of these distinct types of information calls for some means by which they can be managed in an appropriate way, by integrating them or keeping them separate as required to improve recognition. We hypothesize that several anatomical features of the hippocampus, including differentiation in connectivity between the superior/inferior blades of DG and the distal/proximal regions of CA3 and CA1, work together to play this information managing role. We construct a set of neural network models with these features and compare their recognition performance when given noisy or partial versions of contexts and their associated objects. We found that the anterior and posterior regions of the hippocampus naturally require different ratios of object and context input for optimal performance, due to the greater number of objects versus contexts. Additionally, we found that having separate processing regions in DG significantly aided recognition in situations where object inputs were degraded. However, split processing in both DG and CA3 resulted in performance tradeoffs, though the actual hippocampus may have ways of mitigating such losses. PMID:23781237

  2. Dysgranular retrosplenial cortex lesions in rats disrupt cross-modal object recognition

    PubMed Central

    Hindley, Emma L.; Nelson, Andrew J.D.; Aggleton, John P.; Vann, Seralynne D.

    2014-01-01

    The retrosplenial cortex supports navigation, with one role thought to be the integration of different spatial cue types. This hypothesis was extended by examining the integration of nonspatial cues. Rats with lesions in either the dysgranular subregion of retrosplenial cortex (area 30) or lesions in both the granular and dysgranular subregions (areas 29 and 30) were tested on cross-modal object recognition (Experiment 1). In these tests, rats used different sensory modalities when exploring and subsequently recognizing the same test objects. The objects were first presented either in the dark, i.e., giving tactile and olfactory cues, or in the light behind a clear Perspex barrier, i.e., giving visual cues. Animals were then tested with either constant combinations of sample and test conditions (light to light, dark to dark), or changed “cross-modal” combinations (light to dark, dark to light). In Experiment 2, visual object recognition was tested without Perspex barriers, but using objects that could not be distinguished in the dark. The dysgranular retrosplenial cortex lesions selectively impaired cross-modal recognition when cue conditions switched from dark to light between initial sampling and subsequent object recognition, but no impairment was seen when the cue conditions remained constant, whether dark or light. The combined (areas 29 and 30) lesioned rats also failed the dark to light cross-modal problem but this impairment was less selective. The present findings suggest a role for the dysgranular retrosplenial cortex in mediating the integration of information across multiple cue types, a role that potentially applies to both spatial and nonspatial domains. PMID:24554671

  3. Some consonants sound curvy: effects of sound symbolism on object recognition.

    PubMed

    Aveyard, Mark E

    2012-01-01

    Two experiments explored the influence of consonant sound symbolism on object recognition. In Experiment 1, participants heard a word ostensibly from a foreign language (in reality, a pseudoword) followed by two objects on screen: a rectilinear object and a curvilinear object. The task involved judging which of the two objects was properly described by the unknown pseudoword. The results showed that congruent sound-symbolic pseudoword-object pairs produced higher task accuracy over three rounds of testing than did incongruent pairs, despite the fact that "hard" pseudowords (with three plosives) and "soft" pseudowords (with three nonplosives) were paired equally with rectilinear and curvilinear objects. Experiment 2 reduced awareness of the manipulation by including similar-shaped, target-related distractors. Sound symbolism effects still emerged, though the time course of these effects over three rounds differed from that in Experiment 1.

  4. Endomorphin-1 attenuates Aβ42 induced impairment of novel object and object location recognition tasks in mice.

    PubMed

    Zhang, Rui-san; Xu, Hong-jiao; Jiang, Jin-hong; Han, Ren-wen; Chang, Min; Peng, Ya-li; Wang, Yuan; Wang, Rui

    2015-12-10

    A growing body of evidence suggests that the agglomeration of amyloid-β (Aβ) may be a trigger for Alzheimer's disease (AD). Central infusion of Aβ42 can lead to memory impairment in mice. Inhibiting the aggregation of Aβ has been considered a therapeutic strategy for AD. Endomorphin-1 (EM-1), an endogenous agonist of μ-opioid receptors, has been shown to inhibit the aggregation of Aβ in vitro. In the present study, we investigated whether EM-1 could alleviate the memory-impairing effects of Aβ42 in mice using novel object recognition (NOR) and object location recognition (OLR) tasks. We showed that co-administration of EM-1 was able to ameliorate Aβ42-induced amnesia in the lateral ventricle and the hippocampus, and these effects could not be inhibited by naloxone, an antagonist of μ-opioid receptors. Infusion of EM-1 or naloxone separately into the lateral ventricle had no influence on memory in the tasks. These results suggested that EM-1 might be effective as a drug for AD preventative treatment by inhibiting Aβ aggregation directly as a molecular modifier.

  5. System and method for character recognition

    NASA Technical Reports Server (NTRS)

    Hong, J. P. (Inventor)

    1974-01-01

    A character recognition system is disclosed in which each character in a retina, defining a scanning raster, is scanned with random lines uniformly distributed over the retina. For each type of character to be recognized the system stores a probability density function (PDF) of the random line intersection lengths and/or a PDF of the random line number of intersections. As an unknown character is scanned, the random line intersection lengths and/or the random line number of intersections are accumulated and based on a comparison with the prestored PDFs a classification of the unknown character is performed.
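
    The following is a minimal sketch of the random-line idea, assuming binary character images stored as NumPy arrays, normalized histograms standing in for the stored probability density functions, and a simple L1 histogram distance in place of the patent's classifier.

```python
# Sketch: histogram of random-line intersection lengths as a character signature.
import numpy as np

rng = np.random.default_rng(0)

def random_line_histogram(char_img, n_lines=500, bins=20):
    """Accumulate a normalized histogram of run lengths of 'on' pixels along
    randomly oriented scan lines through the retina (the character image)."""
    h, w = char_img.shape
    lengths = []
    for _ in range(n_lines):
        # Two random points in the retina define one scan line
        p0, p1 = rng.uniform([0, 0], [h - 1, w - 1], size=(2, 2))
        n = int(np.hypot(*(p1 - p0))) + 1
        rows = np.linspace(p0[0], p1[0], n).astype(int)
        cols = np.linspace(p0[1], p1[1], n).astype(int)
        run = 0
        for on in char_img[rows, cols]:
            if on:
                run += 1
            elif run:
                lengths.append(run)   # one intersection length finished
                run = 0
        if run:
            lengths.append(run)
    hist, _ = np.histogram(lengths, bins=bins, range=(1, max(h, w)), density=True)
    return hist

def classify(unknown_hist, prestored):
    """prestored: {label: histogram}; return the label with the closest PDF."""
    return min(prestored, key=lambda k: np.abs(prestored[k] - unknown_hist).sum())
```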

  6. Neural mechanisms of infant learning: differences in frontal theta activity during object exploration modulate subsequent object recognition

    PubMed Central

    Begus, Katarina; Southgate, Victoria; Gliga, Teodora

    2015-01-01

    Investigating learning mechanisms in infancy relies largely on behavioural measures like visual attention, which often fail to predict whether stimuli would be encoded successfully. This study explored EEG activity in the theta frequency band, previously shown to predict successful learning in adults, to directly study infants' cognitive engagement, beyond visual attention. We tested 11-month-old infants (N = 23) and demonstrated that differences in frontal theta-band oscillations, recorded during infants' object exploration, predicted differential subsequent recognition of these objects in a preferential-looking test. Given that theta activity is modulated by motivation to learn in adults, these findings set the ground for future investigation into the drivers of infant learning. PMID:26018832

  7. Object location and object recognition memory impairments, motivation deficits and depression in a model of Gulf War illness

    PubMed Central

    Hattiangady, Bharathi; Mishra, Vikas; Kodali, Maheedhar; Shuai, Bing; Rao, Xiolan; Shetty, Ashok K.

    2014-01-01

    Memory and mood deficits are the enduring brain-related symptoms in Gulf War illness (GWI). Both animal model and epidemiological investigations have indicated that these impairments in a majority of GW veterans are linked to exposures to chemicals such as pyridostigmine bromide (PB, an antinerve gas drug), permethrin (PM, an insecticide) and DEET (a mosquito repellant) encountered during the Persian Gulf War-1. Our previous study in a rat model has shown that combined exposures to low doses of GWI-related (GWIR) chemicals PB, PM, and DEET with or without 5-min of restraint stress (a mild stress paradigm) causes hippocampus-dependent spatial memory dysfunction in a water maze test (WMT) and increased depressive-like behavior in a forced swim test (FST). In this study, using a larger cohort of rats exposed to GWIR-chemicals and stress, we investigated whether the memory deficiency identified earlier in a WMT is reproducible with an alternative and stress free hippocampus-dependent memory test such as the object location test (OLT). We also ascertained the possible co-existence of hippocampus-independent memory dysfunction using a novel object recognition test (NORT), and alterations in mood function with additional tests for motivation and depression. Our results provide new evidence that exposure to low doses of GWIR-chemicals and mild stress for 4 weeks causes deficits in hippocampus-dependent object location memory and perirhinal cortex-dependent novel object recognition memory. An open field test performed prior to other behavioral analyses revealed that memory impairments were not associated with increased anxiety or deficits in general motor ability. However, behavioral tests for mood function such as a voluntary physical exercise paradigm and a novelty suppressed feeding test (NSFT) demonstrated decreased motivation levels and depression. Thus, exposure to GWIR-chemicals and stress causes both hippocampus-dependent and hippocampus-independent memory

  8. Object location and object recognition memory impairments, motivation deficits and depression in a model of Gulf War illness.

    PubMed

    Hattiangady, Bharathi; Mishra, Vikas; Kodali, Maheedhar; Shuai, Bing; Rao, Xiolan; Shetty, Ashok K

    2014-01-01

    Memory and mood deficits are the enduring brain-related symptoms in Gulf War illness (GWI). Both animal model and epidemiological investigations have indicated that these impairments in a majority of GW veterans are linked to exposures to chemicals such as pyridostigmine bromide (PB, an antinerve gas drug), permethrin (PM, an insecticide) and DEET (a mosquito repellant) encountered during the Persian Gulf War-1. Our previous study in a rat model has shown that combined exposures to low doses of GWI-related (GWIR) chemicals PB, PM, and DEET with or without 5-min of restraint stress (a mild stress paradigm) causes hippocampus-dependent spatial memory dysfunction in a water maze test (WMT) and increased depressive-like behavior in a forced swim test (FST). In this study, using a larger cohort of rats exposed to GWIR-chemicals and stress, we investigated whether the memory deficiency identified earlier in a WMT is reproducible with an alternative and stress free hippocampus-dependent memory test such as the object location test (OLT). We also ascertained the possible co-existence of hippocampus-independent memory dysfunction using a novel object recognition test (NORT), and alterations in mood function with additional tests for motivation and depression. Our results provide new evidence that exposure to low doses of GWIR-chemicals and mild stress for 4 weeks causes deficits in hippocampus-dependent object location memory and perirhinal cortex-dependent novel object recognition memory. An open field test performed prior to other behavioral analyses revealed that memory impairments were not associated with increased anxiety or deficits in general motor ability. However, behavioral tests for mood function such as a voluntary physical exercise paradigm and a novelty suppressed feeding test (NSFT) demonstrated decreased motivation levels and depression. Thus, exposure to GWIR-chemicals and stress causes both hippocampus-dependent and hippocampus-independent memory

  9. Shape information mediating basic- and subordinate-level object recognition revealed by analyses of eye movements.

    PubMed

    Davitt, Lina I; Cristino, Filipe; Wong, Alan C-N; Leek, E Charles

    2014-04-01

    This study examines the kinds of shape features that mediate basic- and subordinate-level object recognition. Observers were trained to categorize sets of novel objects at either a basic (between-families) or subordinate (within-family) level of classification. We analyzed the spatial distributions of fixations and compared them to model distributions of different curvature polarity (regions of convex or concave bounding contour), as well as internal part boundaries. The results showed a robust preference for fixation at part boundaries and for concave over convex regions of bounding contour, during both basic- and subordinate-level classification. In contrast, mean saccade amplitudes were shorter during basic- than subordinate-level classification. These findings challenge models of recognition that do not posit any special functional status to part boundaries or curvature polarity. We argue that both basic- and subordinate-level classification are mediated by object representations. These representations make explicit internal part boundaries, and distinguish concave and convex regions of bounding contour. The classification task constrains how shape information in these representations is used, consistent with the hypothesis that both parts-based, and image-based, operations support object recognition in human vision.

  10. Selective attention affects conceptual object priming and recognition: a study with young and older adults.

    PubMed

    Ballesteros, Soledad; Mayas, Julia

    2014-01-01

    In the present study, we investigated the effects of selective attention at encoding on conceptual object priming (Experiment 1) and old-new recognition memory (Experiment 2) tasks in young and older adults. The procedures of both experiments included encoding and memory test phases separated by a short delay. At encoding, the picture outlines of two familiar objects, one in blue and the other in green, were presented to the left and to the right of fixation. In Experiment 1, participants were instructed to attend to the picture outline of a certain color and to classify the object as natural or artificial. After a short delay, participants performed a natural/artificial speeded conceptual classification task with repeated attended, repeated unattended, and new pictures. In Experiment 2, participants at encoding memorized the attended pictures and classified them as natural or artificial. After the encoding phase, they performed an old-new recognition memory task. Consistent with previous findings with perceptual priming tasks, we found that conceptual object priming, like explicit memory, required attention at encoding. Significant priming was obtained in both age groups, but only for those pictures that were attended at encoding. Although older adults were slower than young adults, both groups showed facilitation for attended pictures. In line with previous studies, young adults had better recognition memory than older adults.

  11. Selective attention affects conceptual object priming and recognition: a study with young and older adults

    PubMed Central

    Ballesteros, Soledad; Mayas, Julia

    2015-01-01

    In the present study, we investigated the effects of selective attention at encoding on conceptual object priming (Experiment 1) and old–new recognition memory (Experiment 2) tasks in young and older adults. The procedures of both experiments included encoding and memory test phases separated by a short delay. At encoding, the picture outlines of two familiar objects, one in blue and the other in green, were presented to the left and to the right of fixation. In Experiment 1, participants were instructed to attend to the picture outline of a certain color and to classify the object as natural or artificial. After a short delay, participants performed a natural/artificial speeded conceptual classification task with repeated attended, repeated unattended, and new pictures. In Experiment 2, participants at encoding memorized the attended pictures and classified them as natural or artificial. After the encoding phase, they performed an old–new recognition memory task. Consistent with previous findings with perceptual priming tasks, we found that conceptual object priming, like explicit memory, required attention at encoding. Significant priming was obtained in both age groups, but only for those pictures that were attended at encoding. Although older adults were slower than young adults, both groups showed facilitation for attended pictures. In line with previous studies, young adults had better recognition memory than older adults. PMID:25628588

  12. Impairment of novel object recognition in adulthood after neonatal exposure to diazinon.

    PubMed

    Win-Shwe, Tin-Tin; Nakajima, Daisuke; Ahmed, Sohel; Fujimaki, Hidekazu

    2013-04-01

    Diazinon is an organophosphate pesticide that is still heavily used in agriculture, home gardening, and indoor pest control in Japan. The present study investigated the effect of neonatal exposure to diazinon on hippocampus-dependent novel object recognition test performance and the expression of the N-methyl-D-aspartate (NMDA) receptor and its signal transduction pathway-related genes in the hippocampi of young adult and adult mice. Male offspring of C3H/HeN mice were subcutaneously treated with 0, 0.5, or 5 mg/kg of diazinon for 4 consecutive days beginning on postnatal day (PND) 8. Beginning on PND 46 or PND 81, a novel object recognition test was performed on 4 consecutive days. The hippocampi were collected on PND 50 or PND 85 after the completion of the novel object recognition test, and the expression levels of neurotrophins and the NMDA receptor and its signal transduction pathway-related genes were examined using real-time RT-PCR. Diazinon-injected mice exhibited a poor ability to discriminate between novel and familiar objects during both the PND 49 and the PND 84 tests. The NMDA receptor subunits NR1 and NR2B and the related protein kinase calcium/calmodulin-dependent protein kinase (CaMK)-IV and the transcription factor cyclic AMP responsive element binding protein (CREB)-1 mRNA levels were reduced in the PND 50 mice. However, no significant changes in the expressions of the NMDA subunits and their signal transduction molecules were observed in the hippocampi of the PND 85 mice. The expression level of nerve growth factor mRNA was significantly reduced in the PND 50 or 85 mice. These results indicate that neonatal diazinon exposure impaired the hippocampus-dependent novel object recognition ability, accompanied by a modulation in the expressions of the NMDA receptor and neurotrophin in young adult and adult mice.

  13. Artificial neural networks and model-based recognition of three-dimensional objects from two-dimensional images

    NASA Astrophysics Data System (ADS)

    Chao, Chih-Ho; Dhawan, Atam P.

    1994-01-01

    A computer vision system is developed for 3-D object recognition using artificial neural networks and a model-based top-down feedback analysis approach. This system can adequately address the problems caused by an incomplete edge map provided by a low-level processor for 3-D representation and recognition. The system uses key patterns that are selected using a priority assignment. The highest priority is given to the key pattern with the most connected node and associated features. The features are space invariant structures and sets of orientation for edge primitives. The labeled key features are provided as input to an artificial neural network for matching with model key patterns. A Hopfield-Tank network is applied to two levels of matching to increase the computational effectiveness. The first matching is to choose the class of the possible model and the second matching is to find the model closest to the candidate. The result of such matchings is utilized in generating the model-driven top-down feedback analysis. This model is then rotated in 3-D space to find the best match with the candidate and to provide the additional features in 3-D. In the case of multiple objects, a dynamic search strategy is adopted to recognize objects using one pattern at a time. This strategy is also useful in recognizing occluded objects. The experimental results are presented to show the capability and effectiveness of the system.

  14. Study on Information Fusion Based Check Recognition System

    NASA Astrophysics Data System (ADS)

    Wang, Dong

    Automatic check recognition techniques play an important role in financial systems, especially in risk management. This paper presents a novel check recognition system based on multi-cue information fusion theory. For a Chinese bank check, the amount can be independently determined from the legal amount, the courtesy amount, or the E13B code. The check recognition algorithm consists of four steps: preprocessing, check layout analysis, segmentation and recognition, and information fusion. For layout analysis, an adaptive template matching algorithm is presented to locate the target recognition regions on the check. A hidden Markov model is used to segment and recognize the legal amount. The courtesy amount and the E13B code are each recognized with artificial neural networks. Finally, Dempster-Shafer (D-S) evidence theory is introduced to fuse the above three recognition results for better recognition performance. Experimental results demonstrate that the system can robustly recognize checks and that the information fusion based algorithm improves the recognition rate by 5-10 percent.
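
    To make the fusion step concrete, here is a toy sketch of Dempster's rule of combination restricted to singleton hypotheses; the amount strings and mass values are invented for illustration and are not taken from the paper.

```python
# Sketch: Dempster's rule over singleton hypotheses from three recognizers.
def dempster_combine(m1, m2):
    """Combine two mass functions defined over the same set of singleton hypotheses."""
    combined, conflict = {}, 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            if h1 == h2:
                combined[h1] = combined.get(h1, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    if conflict >= 1.0:
        raise ValueError("total conflict: the masses cannot be combined")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

legal    = {"1200.00": 0.7, "1700.00": 0.3}   # legal-amount recognizer (HMM)
courtesy = {"1200.00": 0.6, "1290.00": 0.4}   # courtesy-amount recognizer (ANN)
e13b     = {"1200.00": 0.8, "1209.00": 0.2}   # E13B-code recognizer (ANN)

fused = dempster_combine(dempster_combine(legal, courtesy), e13b)
print(max(fused, key=fused.get), fused)       # the fused best hypothesis
```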

  15. ROCIT : a visual object recognition algorithm based on a rank-order coding scheme.

    SciTech Connect

    Gonzales, Antonio Ignacio; Reeves, Paul C.; Jones, John J.; Farkas, Benjamin D.

    2004-06-01

    This document describes ROCIT, a neural-inspired object recognition algorithm based on a rank-order coding scheme that uses a light-weight neuron model. ROCIT coarsely simulates a subset of the human ventral visual stream from the retina through the inferior temporal cortex. It was designed to provide an extensible baseline from which to improve the fidelity of the ventral stream model and explore the engineering potential of rank order coding with respect to object recognition. This report describes the baseline algorithm, the model's neural network architecture, the theoretical basis for the approach, and reviews the history of similar implementations. Illustrative results are used to clarify algorithm details. A formal benchmark to the 1998 FERET fafc test shows above average performance, which is encouraging. The report concludes with a brief review of potential algorithmic extensions for obtaining scale and rotational invariance.
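
    The sketch below illustrates the general rank-order coding idea rather than the ROCIT implementation: units "fire" in order of descending filter response, and a stimulus is scored by weighting earlier ranks more heavily, so only the firing order matters.

```python
# Sketch: rank-order codes and a rank-weighted similarity score.
import numpy as np

def rank_order_code(responses):
    """Return unit indices sorted by descending response (the firing order)."""
    return np.argsort(-responses)

def rank_order_similarity(order, stored_order, decay=0.9):
    """A unit firing at rank r contributes decay**r times the weight the
    stored code assigns to that unit, so early spikes dominate the score."""
    stored_weight = {unit: decay ** r for r, unit in enumerate(stored_order)}
    return sum((decay ** r) * stored_weight.get(unit, 0.0)
               for r, unit in enumerate(order))

rng = np.random.default_rng(1)
stored = rank_order_code(rng.random(16))   # code learned for one object
probe = rank_order_code(rng.random(16))    # code produced by a new stimulus
print("self match :", rank_order_similarity(stored, stored))
print("probe match:", rank_order_similarity(probe, stored))
```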

  16. Object recognition using neural networks and high-order perspective-invariant relational descriptions

    NASA Astrophysics Data System (ADS)

    Miller, Kenyon R.; Gilmore, John F.

    1992-02-01

    The task of 3-D object recognition can be viewed as consisting of four modules: extraction of structural descriptions, hypothesis generation, pose estimation, and hypothesis verification. The recognition time is determined by the efficiency of each of the four modules, particularly the hypothesis generation module, which determines how many pose estimates and verifications must be done to recognize the object. In this paper, a set of high-order perspective-invariant relations is defined which can be used with a neural network algorithm to obtain a high-quality set of model-image matches between a model and an image of a robot workstation. Using these matches, the number of hypotheses which must be generated to find a correct pose is greatly reduced.

  17. The C57BL/6J mice offspring originated from a parental generation exposed to tannery effluents shows object recognition deficits.

    PubMed

    Guimarães, Abraão Tiago Batista; Ferreira, Raíssa de Oliveira; Rabelo, Letícia Martins; E Silva, Bianca Costa; de Souza, Joyce Moreira; da Silva, Wellington Alves Mizael; de Menezes, Ivandilson Pessoa Pinto; Rodrigues, Aline Sueli de Lima; Vaz, Boniek Gontijo; de Oliveira Costa, Denys Ribeiro; Pereira, Igor; da Silva, Anderson Rodrigo; Malafaia, Guilherme

    2016-12-01

    The main aim of the present paper is to assess whether exposure of the parental generation to tannery effluents could cause object recognition deficits in their offspring. Male and female C57BL/6J mice were mated after being exposed to 7.5% or 15% tannery effluent, or to water (control group), for 60 days. The male mice were withdrawn from the boxes after 15 days, and the female mice remained exposed to the treatment during the gestation and lactation periods. The offspring were subjected to the object recognition test after weaning in order to assess possible cognitive losses. The novel object recognition indices measured in the testing session (performed 1 h after the training session) differed statistically among offspring from the different experimental groups. The novel object recognition index of the offspring from female mice exposed to tannery effluents (7.5% and 15% groups) was lower than that of the control group, demonstrating an object recognition deficit in these offspring. The present study is the first to report evidence that parental exposure (father and mother) to tannery effluent can cause an object recognition deficit in the offspring, which is related to problems in the central nervous system.

  18. Object orientation detection and character recognition using optimal feedforward network and Kohonen's feature map

    NASA Astrophysics Data System (ADS)

    Baykal, Nazife; Yalabik, Nese

    1992-09-01

    A neural network model, Kohonen's feature map, is used together with an optimal feedforward network for variable-font machine-printed character recognition with tolerance to rotation, positional shift, and size errors. Object orientation is determined using many rotated versions of individual symbols. Orientations are detected from printed text, but no knowledge of the context is used. The optimal Bayesian detector is derived, and it is shown that the optimal detector has the form of a feedforward network. This network, together with the learning vector quantization (LVQ) approach, implements an inspection system that determines the orientation of the fonts. After size normalization, rotation, and component finding as preprocessing steps, the text becomes the input to the feature map. The feature map is trained first in an unsupervised manner. The algorithm is then adapted for supervised learning using an improved LVQ technique. Rectangular and minimum spanning tree (MST) neighborhood topologies are evaluated. The results are encouraging: 87% of the characters of various fonts are correctly recognized even when the patterns are distorted in shape and subjected to shift, size, and rotation transformations. Experimental results and comparisons are described.
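
    For readers unfamiliar with the supervised stage, here is a small from-scratch LVQ1 sketch on invented two-dimensional toy data (it is not the paper's system): prototypes are nudged toward samples of their own class and away from samples of other classes.

```python
# Sketch: LVQ1 training and nearest-prototype classification on toy data.
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(prototypes - X[i], axis=1)
            w = np.argmin(d)                        # winning prototype
            sign = 1.0 if proto_labels[w] == y[i] else -1.0
            prototypes[w] += sign * lr * (X[i] - prototypes[w])
    return prototypes

def predict(X, prototypes, proto_labels):
    d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)                   # two toy "character classes"
protos = train_lvq1(X, y, X[[0, 50]].copy(), np.array([0, 1]))
print(predict(np.array([[0.1, 0.2], [2.9, 3.1]]), protos, np.array([0, 1])))
```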

  19. Automatic TLI recognition system. Part 1: System description

    SciTech Connect

    Partin, J.K.; Lassahn, G.D.; Davidson, J.R.

    1994-05-01

    This report describes an automatic target recognition system for fast screening of large amounts of multi-sensor image data, based on low-cost parallel processors. This system uses image data fusion and gives uncertainty estimates. It is relatively low cost, compact, and transportable. The software is easily enhanced to expand the system's capabilities, and the hardware is easily expandable to increase the system's speed. This volume gives a general description of the ATR system.

  20. Modeling optical pattern recognition algorithms for object tracking based on nonlinear equivalent models and subtraction of frames

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolskyy, Aleksandr I.; Lazarev, Alexander A.

    2015-12-01

    We propose and discuss optical pattern recognition algorithms for object tracking based on nonlinear equivalent models and frame subtraction. Experimental results for the suggested algorithms, obtained in Mathcad and LabVIEW, are shown. The application of equivalent functions and frame differencing gives good results for recognizing and tracking moving objects.
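
    The sketch below shows plain frame subtraction for moving-object detection with OpenCV, the building block these algorithms extend (it omits the nonlinear equivalent models); the video file name and thresholds are assumptions.

```python
# Sketch: frame differencing, thresholding, and bounding boxes around movers.
import cv2

cap = cv2.VideoCapture("scene.avi")
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)                      # subtraction of frames
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 200:                    # keep sizeable movers only
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    prev = gray
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:                    # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```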

  1. Face Memory and Object Recognition in Children with High-Functioning Autism or Asperger Syndrome and in Their Parents

    ERIC Educational Resources Information Center

    Kuusikko-Gauffin, Sanna; Jansson-Verkasalo, Eira; Carter, Alice; Pollock-Wurman, Rachel; Jussila, Katja; Mattila, Marja-Leena; Rahko, Jukka; Ebeling, Hanna; Pauls, David; Moilanen, Irma

    2011-01-01

    Children with Autism Spectrum Disorders (ASDs) have been reported to have impairments in face recognition and face memory, but intact object recognition and object memory. Potential abnormalities in these domains at the family level of high-functioning children with ASD remain understudied despite the ever-mounting evidence that ASDs are genetic and…

  2. Edge detection techniques for iris recognition system

    NASA Astrophysics Data System (ADS)

    Tania, U. T.; Motakabber, S. M. A.; Ibrahimy, M. I.

    2013-12-01

    Nowadays, security and authentication are major parts of daily life. The iris is one of the most reliable parts of the human body that can be used for identification and authentication. Toward an iris authentication algorithm for personal identification, this paper examines two edge detection techniques for an iris recognition system. Comparing the Sobel and Canny edge detection techniques, the experimental results show that Canny's technique is better able to detect points in a digital image where the gray level changes, even at a slow rate.
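
    For comparison, here is a minimal OpenCV version of the two edge detectors discussed above; the eye image file name and the thresholds are assumptions.

```python
# Sketch: Sobel gradient-magnitude edges versus Canny edges on an eye image.
import cv2
import numpy as np

eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
eye = cv2.GaussianBlur(eye, (5, 5), 0)

# Sobel: gradient magnitude followed by a fixed threshold
gx = cv2.Sobel(eye, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(eye, cv2.CV_64F, 0, 1, ksize=3)
sobel_edges = (np.hypot(gx, gy) > 60).astype(np.uint8) * 255

# Canny: non-maximum suppression and hysteresis thresholding built in
canny_edges = cv2.Canny(eye, 50, 150)

cv2.imwrite("sobel_edges.png", sobel_edges)
cv2.imwrite("canny_edges.png", canny_edges)
```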

  3. Exercise improves object recognition memory and induces BDNF expression and cell proliferation in cognitively enriched rats.

    PubMed

    Bechara, R G; Kelly, Á M

    2013-05-15

    Exercise and environmental enrichment are behavioural interventions that have been shown to improve learning and increase neurogenesis in rodents, possibly via neurotrophin-mediated mechanisms. However, many enrichment protocols incorporate exercise, which can itself be viewed as a source of cognitive stimulation in animals housed in standard laboratory conditions. In this experiment we investigate the effect of each intervention separately and in combination on object recognition memory, and analyse associated changes in the dentate gyrus: specifically, in BDNF expression and cell division. We show that both exercise and enrichment improve object recognition memory, but that BDNF mRNA expression and cell proliferation in the dentate gyrus of the hippocampus increase only in exercised rats. These results are in general agreement with recent studies suggesting that the exercise component is the major neurogenic and neurotrophic stimulus in environmental enrichment protocols. We add to the expanding literature several novel aspects including the finding that enrichment in the absence of exercise can improve object recognition memory, probably via mechanisms that are independent of BDNF upregulation and neurogenesis in the dentate gyrus.

  4. Intracellular Zn(2+) signaling in the dentate gyrus is required for object recognition memory.

    PubMed

    Takeda, Atsushi; Tamano, Haruna; Ogawa, Taisuke; Takada, Shunsuke; Nakamura, Masatoshi; Fujii, Hiroaki; Ando, Masaki

    2014-11-01

    The role of perforant pathway-dentate granule cell synapses in cognitive behavior was examined focusing on synaptic Zn(2+) signaling in the dentate gyrus. Object recognition memory was transiently impaired when extracellular Zn(2+) levels were decreased by injection of clioquinol and N,N,N',N'-tetrakis-(2-pyridylmethyl)ethylenediamine. To pursue the effect of the loss and/or blockade of Zn(2+) signaling in dentate granule cells, ZnAF-2DA (100 pmol, 0.1 mM/1 µl), an intracellular Zn(2+) chelator, was locally injected into the dentate molecular layer of rats. ZnAF-2DA injection, which was estimated to chelate intracellular Zn(2+) signaling only in the dentate gyrus, affected object recognition memory 1 h after training without affecting intracellular Ca(2+) signaling in the dentate molecular layer. In vivo dentate gyrus long-term potentiation (LTP) was affected under the local perfusion of the recording region (the dentate granule cell layer) with 0.1 mM ZnAF-2DA, but not with 1-10 mM CaEDTA, an extracellular Zn(2+) chelator, suggesting that the blockade of intracellular Zn(2+) signaling in dentate granule cells affects dentate gyrus LTP. The present study demonstrates that intracellular Zn(2+) signaling in the dentate gyrus is required for object recognition memory, probably via dentate gyrus LTP expression.

  5. NAAG peptidase inhibitors and deletion of NAAG peptidase gene enhance memory in novel object recognition test

    PubMed Central

    Janczura, Karolina J.; Olszewski, Rafal T.; Bzdega, Tomasz; Bacich, Dean J.; Heston, Warren D.; Neale, Joseph H.

    2012-01-01

    The peptide neurotransmitter N-acetylaspartylglutamate (NAAG) is inactivated by the extracellular enzyme glutamate carboxypeptidase II. Inhibitors of this enzyme reverse dizocilpine (MK-801)-induced impairment of short-term memory in the novel object recognition test. The objective of this study was to test the hypothesis that NAAG peptidase inhibition enhances the long-term (24 hr delay) memory of C57BL mice in this test. These mice and mice in which glutamate carboxypeptidase II had been knocked out were presented with two identical objects to explore for 10 minutes on day 1 and tested with one of these familiar objects and one novel object on day 2. Memory was assessed as the degree to which the mice recalled the familiar object and explored the novel object to a greater extent on day 2. Uninjected mice or mice injected with saline prior to the acquisition session on day 1 demonstrated a lack of memory of the acquisition experience by exploring the familiar and novel objects to the same extent on day 2. Mice treated with the glutamate carboxypeptidase II inhibitors ZJ43 or 2-PMPA prior to the acquisition trial explored the novel object for significantly more time than the familiar object on day 2. Consistent with these results, mice in which glutamate carboxypeptidase II had been knocked out distinguished the novel from the familiar object on day 2, while their heterozygous colony mates did not. Inhibition of glutamate carboxypeptidase II enhances recognition memory, a therapeutic action that might be useful in treatment of memory deficits related to age and neurological disorders. PMID:23200894

  6. Adaptation to Phosphene Parameters Based on Multi-Object Recognition Using Simulated Prosthetic Vision.

    PubMed

    Xia, Peng; Hu, Jie; Peng, Yinghong

    2015-12-01

    Retinal prostheses for the restoration of functional vision are under development, and visual prostheses targeting proximal stages of the visual pathway are also being explored. To investigate the experience with visual prostheses, psychophysical experiments using simulated prosthetic vision in normally sighted individuals are necessary. In this study, a helmet display with real-time images from a camera attached to the helmet provided the simulated vision, and experiments on recognizing and discriminating multiple objects were used to evaluate visual performance under different parameters (gray-scale levels, distortion, and dropout). The process of fitting and training with visual prostheses was simulated and estimated by adaptation to the parameters with time. The results showed that increasing the number of gray-scale levels and decreasing phosphene distortion and dropout rate improved recognition performance significantly, and the recognition accuracy was 61.8 ± 7.6% under the optimum condition (gray scale: 8, distortion: k = 0, dropout: 0%). The adaptation experiments indicated that recognition performance improved with time and that the effect of adaptation to distortion was greater than that to dropout, which implies a difference in the adaptation mechanisms for the two parameters.

  7. Distance, shape and more: recognition of object features during active electrolocation in a weakly electric fish.

    PubMed

    von der Emde, Gerhard; Fetz, Steffen

    2007-09-01

    In the absence of light, the weakly electric fish Gnathonemus petersii detects and distinguishes objects in the environment through active electrolocation. In order to test which features of an object the fish use under these conditions to discriminate between differently shaped objects, we trained eight individuals in a food-rewarded, two-alternative, forced-choice procedure. All fish learned to discriminate between two objects of different shapes and volumes. When new object combinations were offered in non-rewarded test trials, fish preferred those objects that resembled the one they had been trained to (S+) and avoided objects resembling the one that had not been rewarded (S-). For a decision, fish paid attention to the relative differences between the two objects they had to discriminate. For discrimination, fish used several object features, the most important ones being volume, material and shape. The importance of shape was demonstrated by reducing the objects to their 3-dimensional contours, which sufficed for the fish to distinguish differently shaped objects. Our results also showed that fish attended strongly to the feature 'volume', because all individuals tended to avoid the larger one of two objects. When confronted with metal versus plastic objects, all fish avoided metal and preferred plastic objects, irrespective of training. In addition to volume, material and shape, fish attended to additional parameters, such as corners or rounded edges. When confronted with two unknown objects, fish weighed up the positive and negative properties of these novel objects and based their decision on the outcome of this comparison. Our results suggest that fish are able to link and assemble local features of an electrolocation pattern to construct a representation of an object, suggesting that some form of a feature extraction mechanism enables them to solve a complex object recognition task.

  8. Estradiol enhances object recognition memory in Swiss female mice by activating hippocampal estrogen receptor α.

    PubMed

    Pereira, Luciana M; Bastos, Cristiane P; de Souza, Jéssica M; Ribeiro, Fabíola M; Pereira, Grace S

    2014-10-01

    In rodents, 17β-estradiol (E2) enhances hippocampal function and improves performance in several memory tasks. Regarding the object recognition paradigm, E2 commonly act as a cognitive enhancer. However, the types of estrogen receptor (ER) involved, as well as the underlying molecular mechanisms are still under investigation. In the present study, we asked whether E2 enhances object recognition memory by activating ERα and/or ERβ in the hippocampus of Swiss female mice. First, we showed that immediately post-training intraperitoneal (i.p.) injection of E2 (0.2 mg/kg) allowed object recognition memory to persist 48 h in ovariectomized (OVX) Swiss female mice. This result indicates that Swiss female mice are sensitive to the promnesic effects of E2 and is in accordance with other studies, which used C57/BL6 female mice. To verify if the activation of hippocampal ERα or ERβ would be sufficient to improve object memory, we used PPT and DPN, which are selective ERα and ERβ agonists, respectively. We found that PPT, but not DPN, improved object memory in Swiss female mice. However, DPN was able to improve memory in C57/BL6 female mice, which is in accordance with other studies. Next, we tested if the E2 effect on improving object memory depends on ER activation in the hippocampus. Thus, we tested if the infusion of intra-hippocampal TPBM and PHTPP, selective antagonists of ERα and ERβ, respectively, would block the memory enhancement effect of E2. Our results showed that TPBM, but not PHTPP, blunted the promnesic effect of E2, strongly suggesting that in Swiss female mice, the ERα and not the ERβ is the receptor involved in the promnesic effect of E2. It was already demonstrated that E2, as well as PPT and DPN, increase the phospho-ERK2 level in the dorsal hippocampus of C57/BL6 mice. Here we observed that PPT increased phospho-ERK1, while DPN decreased phospho-ERK2 in the dorsal hippocampus of Swiss female mice subjected to the object recognition sample phase

  9. Cortical Thickness in Fusiform Face Area Predicts Face and Object Recognition Performance

    PubMed Central

    McGugin, Rankin W.; Van Gulick, Ana E.; Gauthier, Isabel

    2016-01-01

    The fusiform face area (FFA) is defined by its selectivity for faces. Several studies have shown that the response of FFA to non-face objects can predict behavioral performance for these objects. However, one possible account is that experts pay more attention to objects in their domain of expertise, driving signals up. Here we show an effect of expertise with non-face objects in FFA that cannot be explained by differential attention to objects of expertise. We explore the relationship between cortical thickness of FFA and face and object recognition using the Cambridge Face Memory Test and Vanderbilt Expertise Test, respectively. We measured cortical thickness in functionally-defined regions in a group of men who evidenced functional expertise effects for cars in FFA. Performance with faces and objects together accounted for approximately 40% of the variance in cortical thickness of several FFA patches. While subjects with a thicker FFA cortex performed better with vehicles, those with a thinner FFA cortex performed better with faces and living objects. The results point to a domain-general role of FFA in object perception and reveal an interesting double dissociation that does not contrast faces and objects, but rather living and non-living objects. PMID:26439272

  10. Recognition of error symptoms in large systems

    NASA Technical Reports Server (NTRS)

    Iyer, Ravishankar K.; Sridhar, V.

    1987-01-01

    A methodology for automatically detecting symptoms of frequently occurring errors in large computer systems is developed. The proposed symptom recognition methodology and its validation are based on probabilistic techniques. The technique is shown to work on real failure data from two CYBER systems at the University of Illinois. The methodology allows for the resolution between independent and dependent causes and also quantifies a measure of the strength of the relationship among errors. A comparison with failure/repair information obtained from field maintenance engineers shows that in 85% of the cases, the error symptoms recognized by our approach correspond to real system problems. Further, the remaining 15%, although not directly supported by field data, were confirmed as valid problems. Some of these were shown to be persistent problems which otherwise would have been considered minor transients and hence ignored.

  11. Zernike moments and rotation invariant object recognition. A neural network oriented case study

    NASA Astrophysics Data System (ADS)

    Krekel, P. F.

    1992-12-01

    This report presents the results of a feasibility study investigating the characteristics of complex Zernike moments and their application in translation-, scale-, and rotation-invariant object recognition problems. The complex Zernike moments are used as characterizing features in a neural network based target recognition approach for the classification of objects in images recorded by sensors mounted on an airborne platform. The complex Zernike moments are a transformation of the image obtained by projecting the image onto an extended set of orthogonal polynomials. The emphasis of this study is on evaluating the performance of Zernike moments in relation to the application of neural networks. Therefore, three types of classifiers are evaluated: a multi-layer perceptron (MLP) neural network, a Bayes statistical classifier and a nearest-neighbor classifier. Experiments are based on a set of binary images simulating military vehicles extracted from the natural background. From these experiments the conclusion can be drawn that complex Zernike moments are efficient and effective object characterizing features that are robust under rotation of the object in the image and to a certain extent under varying affine projections of the object onto the image plane.
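
    As background for the moment features discussed above, the sketch below computes complex Zernike moments of a silhouette image with plain NumPy; the magnitudes |A_nm| are invariant to in-plane rotation. This is a minimal illustration, not the report's implementation: it assumes the silhouette has already been centred and scale-normalized so that it fits the unit disk, and it omits the constant pixel-area factor, which does not affect comparisons of the invariants.

      import numpy as np
      from math import factorial

      def zernike_moment(img, n, m):
          """Complex Zernike moment A_{n,m} of an image mapped onto the unit disk.
          |A_{n,m}| is invariant to rotation of the object about the image centre.
          Valid orders require (n - |m|) even and |m| <= n."""
          h, w = img.shape
          y, x = np.mgrid[:h, :w].astype(float)
          # map the pixel grid to the unit disk centred on the image centre (assumption)
          x = 2.0 * (x - (w - 1) / 2.0) / (w - 1)
          y = 2.0 * (y - (h - 1) / 2.0) / (h - 1)
          rho = np.hypot(x, y)
          theta = np.arctan2(y, x)
          inside = rho <= 1.0
          # radial polynomial R_{n,|m|}(rho)
          R = np.zeros_like(rho)
          for s in range((n - abs(m)) // 2 + 1):
              c = ((-1) ** s * factorial(n - s) /
                   (factorial(s) * factorial((n + abs(m)) // 2 - s)
                    * factorial((n - abs(m)) // 2 - s)))
              R += c * rho ** (n - 2 * s)
          V = R * np.exp(-1j * m * theta)        # conjugate Zernike basis function
          # discrete approximation of (n+1)/pi * integral over the unit disk
          # (constant area element omitted; it cancels when comparing features)
          return (n + 1) / np.pi * np.sum(img[inside] * V[inside])

      # rotation-invariant feature vector, e.g. all valid orders up to n = 4
      # (binary_img is a hypothetical centred silhouette array):
      # invariants = [abs(zernike_moment(binary_img, n, m))
      #               for n in range(5) for m in range(n + 1) if (n - m) % 2 == 0]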

  12. Robust laser speckle recognition system for authenticity identification.

    PubMed

    Yeh, Chia-Hung; Sung, Po-Yi; Kuo, Chih-Hung; Yeh, Ruey-Nan

    2012-10-22

    This paper proposes a laser speckle recognition system for authenticity verification. Because of the unique imperfection surfaces of objects, laser speckle provides identifiable features for authentication. A Gabor filter, SIFT (Scale-Invariant Feature Transform), and projection were used to extract the features of laser speckle images. To accelerate the matching process, the extracted Gabor features were organized into an indexing structure using the K-means algorithm. Plastic cards were used as the target objects in the proposed system and the hardware of the speckle capturing system was built. The experimental results showed that the retrieval performance of the proposed method is accurate when the database contains 516 laser speckle images. The proposed system is robust and feasible for authenticity verification.
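
    The pipeline described above (Gabor features organized into a K-means index for fast candidate retrieval, SIFT descriptors for verification) can be sketched with off-the-shelf tools. The snippet below is an illustrative reconstruction using OpenCV and scikit-learn; the filter parameters, cluster count, and ratio-test threshold are assumptions, not the authors' settings.

      import cv2
      import numpy as np
      from sklearn.cluster import KMeans

      def gabor_feature(gray, ksize=31):
          """Coarse Gabor energy feature: mean filter response over four orientations
          (illustrative kernel parameters)."""
          resp = []
          for theta in np.arange(0, np.pi, np.pi / 4):
              kern = cv2.getGaborKernel((ksize, ksize), 4.0, theta, 10.0, 0.5, 0)
              resp.append(cv2.filter2D(gray, cv2.CV_32F, kern).mean())
          return np.array(resp, dtype=np.float32)

      def sift_descriptors(gray):
          """SIFT keypoint descriptors of a speckle image."""
          _, desc = cv2.SIFT_create().detectAndCompute(gray, None)
          return desc

      def build_index(gabor_feats, n_clusters=8):
          """Cluster the enrolled images' Gabor features so a query is only
          matched against its nearest cluster."""
          km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
          return km, km.fit_predict(np.asarray(gabor_feats))

      def identify(query_gray, km, labels, enrolled_sift):
          """Return the enrolled image with the most SIFT matches inside the
          query's Gabor cluster (Lowe ratio test with an assumed threshold)."""
          matcher = cv2.BFMatcher(cv2.NORM_L2)
          cluster = km.predict(gabor_feature(query_gray).reshape(1, -1))[0]
          qdesc = sift_descriptors(query_gray)
          best_id, best_count = None, -1
          for idx in np.where(labels == cluster)[0]:
              pairs = matcher.knnMatch(qdesc, enrolled_sift[idx], k=2)
              good = sum(1 for p in pairs
                         if len(p) == 2 and p[0].distance < 0.75 * p[1].distance)
              if good > best_count:
                  best_id, best_count = idx, good
          return best_id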

  13. Automatic TLI recognition system, programmer's guide

    SciTech Connect

    Lassahn, G.D.

    1997-02-01

    This report describes the software of an automatic target recognition system (version 14), from a programmer's point of view. The intent is to provide information that will help people who wish to modify the software. In separate volumes are a general description of the ATR system, Automatic TLI Recognition System, General Description, and a user's manual, Automatic TLI Recognition System, User's Guide. 2 refs.

  14. The influence of scene context on object recognition is independent of attentional focus.

    PubMed

    Munneke, Jaap; Brentari, Valentina; Peelen, Marius V

    2013-01-01

    Humans can quickly and accurately recognize objects within briefly presented natural scenes. Previous work has provided evidence that scene context contributes to this process, demonstrating improved naming of objects that were presented in semantically consistent scenes (e.g., a sandcastle on a beach) relative to semantically inconsistent scenes (e.g., a sandcastle on a football field). The current study was aimed at investigating which processes underlie the scene consistency effect. Specifically, we tested: (1) whether the effect is due to increased visual feature and/or shape overlap for consistent relative to inconsistent scene-object pairs; and (2) whether the effect is mediated by attention to the background scene. Experiment 1 replicated the scene consistency effect of a previous report (Davenport and Potter, 2004). Using a new, carefully controlled stimulus set, Experiment 2 showed that the scene consistency effect could not be explained by low-level feature or shape overlap between scenes and target objects. Experiments 3a and 3b investigated whether focused attention modulates the scene consistency effect. By using a location cueing manipulation, participants were correctly informed about the location of the target object on a proportion of trials, allowing focused attention to be deployed toward the target object. Importantly, the effect of scene consistency on target object recognition was independent of spatial attention, and was observed both when attention was focused on the target object and when attention was focused on the background scene. These results indicate that a semantically consistent scene context benefits object recognition independently of the focus of attention. We suggest that the scene consistency effect is primarily driven by global scene properties, or "scene gist", that can be processed with minimal attentional resources.

  15. Toward Development of a Face Recognition System for Watchlist Surveillance.

    PubMed

    Kamgar-Parsi, Behrooz; Lawson, Wallace; Kamgar-Parsi, Behzad

    2011-10-01

    The interest in face recognition is moving toward real-world applications and uncontrolled sensing environments. An important application of interest is automated surveillance, where the objective is to recognize and track people who are on a watchlist. For this open world application, a large number of cameras that are increasingly being installed at many locations in shopping malls, metro systems, airports, etc., will be utilized. While a very large number of people will approach or pass by these surveillance cameras, only a small set of individuals must be recognized. That is, the system must reject every subject unless the subject happens to be on the watchlist. While humans routinely reject previously unseen faces as strangers, rejection of previously unseen faces has remained a difficult aspect of automated face recognition. In this paper, we propose an approach motivated by human perceptual ability of face recognition which can handle previously unseen faces. Our approach is based on identifying the decision region(s) in the face space which belong to the target person(s). This is done by generating two large sets of borderline images, projecting just inside and outside of the decision region. For each person on the watchlist, a dedicated classifier is trained. Results of extensive experiments support the effectiveness of our approach. In addition to extensive experiments using our algorithm and prerecorded images, we have conducted considerable live system experiments with people in realistic environments.

  16. Mobile User Objective System (MUOS)

    DTIC Science & Technology

    2013-12-01

    system capacity of the current UHF Follow-On (UFO) constellation. MUOS includes the satellite constellation, a ground control and network management...terminals able to support the MUOS CAI. Each MUOS satellite carries a legacy payload similar to that flown on UFO-11. These legacy payloads will...Antecedent Information: The antecedent system to MUOS was the Ultra High Frequency (UHF) Follow-on (UFO) satellite communications program. Comparisons of O

  17. Pattern recognition with composite correlation filters designed with multi-objective combinatorial optimization

    SciTech Connect

    Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres; Kober, Vitaly; Trujillo, Leonardo

    2014-10-23

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.
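
    The building block such a design procedure searches over is a composite filter synthesized from a chosen subset of training templates. For background only, the sketch below implements the classical equal-correlation-peak synthetic discriminant function (SDF) composite filter with NumPy; it is not the paper's multi-objective algorithm, but the kind of candidate filter such an optimization would evaluate against competing criteria.

      import numpy as np

      def sdf_filter(templates, peaks):
          """Equal-correlation-peak SDF composite filter.
          templates: list of 2-D arrays of the same shape; peaks: desired correlation
          values at the origin for each template (e.g. 1 for true-class views,
          0 for clutter templates)."""
          shape = templates[0].shape
          X = np.stack([t.ravel() for t in templates], axis=1).astype(float)  # d x N
          u = np.asarray(peaks, dtype=float)
          # h = X (X^T X)^{-1} u  guarantees  X^T h = u, i.e. the prescribed
          # correlation value at the origin for every training template
          h = X @ np.linalg.solve(X.T @ X, u)
          return h.reshape(shape)

      def correlate(scene, h):
          """Circular cross-correlation of a scene with the composite filter via FFT."""
          S = np.fft.fft2(scene)
          H = np.fft.fft2(h, s=scene.shape)
          return np.real(np.fft.ifft2(S * np.conj(H)))

      # A bank of such filters, each built from a different template subset, can then
      # be compared on discrimination and distortion-tolerance criteria -- the kind of
      # trade-off the multi-objective combinatorial search is designed to optimize.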

  18. Pattern recognition with composite correlation filters designed with multi-objective combinatorial optimization

    DOE PAGES

    Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres; ...

    2014-10-23

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.

  19. Biased figure-ground assignment affects conscious object recognition in spatial neglect.

    PubMed

    Eramudugolla, Ranmalee; Driver, Jon; Mattingley, Jason B

    2010-09-01

    Unilateral spatial neglect is a disorder of attention and spatial representation, in which early visual processes such as figure-ground segmentation have been assumed to be largely intact. There is evidence, however, that the spatial attention bias underlying neglect can bias the segmentation of a figural region from its background. Relatively few studies have explicitly examined the effect of spatial neglect on processing the figures that result from such scene segmentation. Here, we show that a neglect patient's bias in figure-ground segmentation directly influences his conscious recognition of these figures. By varying the relative salience of figural and background regions in static, two-dimensional displays, we show that competition between elements in such displays can modulate a neglect patient's ability to recognise parsed figures in a scene. The findings provide insight into the interaction between scene segmentation, explicit object recognition, and attention.

  20. The anterior temporal cortex is a primary semantic source of top-down influences on object recognition.

    PubMed

    Chiou, Rocco; Lambon Ralph, Matthew A

    2016-06-01

    Perception emerges from a dynamic interplay between feed-forward sensory input and feedback modulation along the cascade of neural processing. Prior knowledge, a major form of top-down modulatory signal, benefits perception by enabling efficacious inference and resolving ambiguity, particularly under circumstances of degraded visual input. Despite semantic information being a potentially critical source of this top-down influence, to date, the core neural substrate of semantic knowledge (the anterolateral temporal lobe - ATL) has not been considered as a key component of the feedback system. Here we provide direct evidence of its significance for visual cognition - the ATL underpins the semantic aspect of object recognition, amalgamating sensory-based (amount of accumulated sensory input) and semantic-based (representational proximity between exemplars and typicality of appearance) influences. Using transcranial theta-burst stimulation combined with a novel visual identification paradigm, we demonstrate that the left ATL contributes to discrimination between visual objects. Crucially, its contribution is especially vital under situations where semantic knowledge is most needed for supplementing deficiency of input (brief visual exposure), discerning analogously-coded exemplars (close representational distance), and resolving discordance (target appearance violating the statistical typicality of its category). Our findings characterise functional properties of the ATL in object recognition: this neural structure is summoned to augment the visual system when the latter is overtaxed by challenging conditions (insufficient input, overlapped neural coding, and conflict between incoming signal and expected configuration). This suggests a need to revisit current theories of object recognition, incorporating the ATL that interfaces high-level vision with semantic knowledge.

  1. Naringin and Rutin Alleviates Episodic Memory Deficits in Two Differentially Challenged Object Recognition Tasks

    PubMed Central

    Ramalingayya, Grandhi Venkata; Nampoothiri, Madhavan; Nayak, Pawan G.; Kishore, Anoop; Shenoy, Rekha R.; Mallikarjuna Rao, Chamallamudi; Nandakumar, Krishnadas

    2016-01-01

    Background: Cognitive decline or dementia is a debilitating problem of neurological disorders such as Alzheimer's and Parkinson's disease, including special conditions like chemobrain. Dietary flavonoids proved to be efficacious in delaying the incidence of neurodegenerative diseases. Two such flavonoids, naringin (NAR) and rutin (RUT), were reported to have neuroprotective potential with beneficial effects on spatial and emotional memories in particular. However, the efficacy of these flavonoids on episodic memory, which comprises an important form of autobiographical memory, is poorly understood. Objective: The objective of this study was to evaluate the ability of NAR and RUT to reverse time-delay-induced long-term and scopolamine-induced short-term episodic memory deficits in Wistar rats. Materials and Methods: We have evaluated both short-term and long-term episodic memory forms using the novel object recognition task. The open field paradigm was used to assess locomotor activity for any confounding influence on memory assessment. Donepezil was used as a positive control and was effective in both models at 1 mg/kg, i.p. Results: Animals treated with NAR and RUT at 50 and 100 mg/kg, p.o. spent significantly more time exploring the novel object compared to the familiar one, whereas control animals spent almost equal time with both objects in the choice trial. NAR and RUT dose-dependently increased recognition and discriminative indices in time-induced long-term as well as scopolamine-induced short-term episodic memory deficit models without interfering with the locomotor activity. Conclusion: We conclude that NAR and RUT averted both short- and long-term episodic memory deficits in Wistar rats, which may be potential interventions for neurodegenerative diseases as well as the chemobrain condition. SUMMARY Incidence of Alzheimer's disease is increasing globally and the current therapy is only symptomatic. Curative treatment is a major lacuna. NAR and RUT are natural flavonoids proven for their pleiotropic

  2. Optical music recognition system which learns

    NASA Astrophysics Data System (ADS)

    Fujinaga, Ichiro

    1993-01-01

    This paper describes an optical music recognition system composed of a database and three interdependent processes: a recognizer, an editor, and a learner. Given a scanned image of a musical score, the recognizer locates, separates, and classifies symbols into musically meaningful categories. This classification is based on the k-nearest neighbor method using a subset of the database that contains features of symbols classified in previous recognition sessions. Output of the recognizer is corrected by a musically trained human operator using a music notation editor. The editor provides both visual and high-quality audio feedback of the output. Editorial corrections made by the operator are passed to the learner which then adds the newly acquired data to the database. The learner's main task, however, involves selecting a subset of the database and reweighing the importance of the features to improve accuracy and speed for subsequent sessions. Good preliminary results have been obtained with everything from professionally engraved scores to hand-written manuscripts.
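
    The recognizer/editor/learner loop described above can be illustrated with a k-nearest-neighbour classifier whose example database grows from operator corrections. The sketch below uses scikit-learn and treats the learner's feature re-weighting as simple per-feature scale factors; it is a hedged simplification of the idea, not the original system.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      class LearningSymbolClassifier:
          """k-NN symbol classifier whose database grows from operator corrections
          (a simplified sketch of a recognizer/editor/learner loop; the feature
          re-weighting here is a plain per-feature scale, an assumption)."""

          def __init__(self, k=5):
              self.k = k
              self.features, self.labels = [], []
              self.weights = None          # learned feature re-weighting
              self.knn = None

          def fit(self):
              X = np.asarray(self.features, dtype=float)
              if self.weights is None:
                  self.weights = np.ones(X.shape[1])
              self.knn = KNeighborsClassifier(n_neighbors=min(self.k, len(X)))
              self.knn.fit(X * self.weights, self.labels)

          def classify(self, feat):
              """Classify one symbol feature vector with the current database."""
              x = np.asarray(feat, dtype=float) * self.weights
              return self.knn.predict(x[None, :])[0]

          def learn(self, feat, corrected_label):
              """Editor feedback: add the operator-corrected example and refit."""
              self.features.append(feat)
              self.labels.append(corrected_label)
              self.fit()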

  3. Effects of the putative cognitive-enhancing ampakine, CX717, on attention and object recognition memory.

    PubMed

    Zheng, Yiwen; Balabhadrapatruni, Sangeeta; Masumura, Chisako; Darlington, Cynthia L; Smith, Paul F

    2011-12-01

    Ampakines are a class of putative nootropic drug designed to positively modulate the AMPA receptor and have been investigated as a potential treatment for cognitive disorders such as Alzheimer's Disease. Nonetheless, some ampakines such as CX717 have been incompletely characterized in behavioural pharmacological studies. Therefore, in this study, we attempted to further characterize the effects of the ampakine, CX717 (20 mg/kg s.c), on the performance of rats in a 5 choice serial reaction time (5CSRTT) and object recognition memory task, using rats with cognitive deficits caused by bilateral vestibular deafferentation (BVD) as a model. In the 5CSRTT, when the stimulus duration was varied from 5 to 2 sec, the number of incorrect responses was significantly greater for the BVD group compared to sham controls, but significantly less for the CX717 groups, with no significant interaction. With changes in inter-trial interval (ITI), there was a significant effect of surgery/drug and a significant effect of ITI on premature responses, and the BVD group treated with CX717 showed significantly fewer premature responses than the other groups. In the object recognition memory task, CX717 significantly reduced total exploration time and the exploration towards the novel object in both sham and BVD animals. These results suggest that CX717 can reduce the number of incorrect responses in both sham and BVD rats and enhance inhibitory control specifically in BVD rats, in the 5CSRTT. On the other hand, CX717 produced a detrimental effect in the object recognition memory task.

  4. The Vanderbilt Expertise Test reveals domain-general and domain-specific sex effects in object recognition.

    PubMed

    McGugin, Rankin W; Richler, Jennifer J; Herzmann, Grit; Speegle, Magen; Gauthier, Isabel

    2012-09-15

    Individual differences in face recognition are often contrasted with differences in object recognition using a single object category. Likewise, individual differences in perceptual expertise for a given object domain have typically been measured relative to only a single category baseline. In Experiment 1, we present a new test of object recognition, the Vanderbilt Expertise Test (VET), which is comparable in methods to the Cambridge Face Memory Task (CFMT) but uses eight different object categories. Principal component analysis reveals that the underlying structure of the VET can be largely explained by two independent factors, which demonstrate good reliability and capture interesting sex differences inherent in the VET structure. In Experiment 2, we show how the VET can be used to separate domain-specific from domain-general contributions to a standard measure of perceptual expertise. While domain-specific contributions are found for car matching for both men and women and for plane matching in men, women in this sample appear to use more domain-general strategies to match planes. In Experiment 3, we use the VET to demonstrate that holistic processing of faces predicts face recognition independently of general object recognition ability, which has a sex-specific contribution to face recognition. Overall, the results suggest that the VET is a reliable and valid measure of object recognition abilities and can measure both domain-general skills and domain-specific expertise, which were both found to depend on the sex of observers.

  5. Human-inspired sound environment recognition system for assistive vehicles

    NASA Astrophysics Data System (ADS)

    González Vidal, Eduardo; Fredes Zarricueta, Ernesto; Auat Cheein, Fernando

    2015-02-01

    Objective. The human auditory system acquires environmental information under sound stimuli faster than visual or touch systems, which, in turn, allows for faster human responses to such stimuli. It also complements senses such as sight, where direct line-of-view is necessary to identify objects, in the environment recognition process. This work focuses on implementing human reaction to sound stimuli and environment recognition on assistive robotic devices, such as robotic wheelchairs or robotized cars. These vehicles need environment information to ensure safe navigation. Approach. In the field of environment recognition, range sensors (such as LiDAR and ultrasonic systems) and artificial vision devices are widely used; however, these sensors depend on environment constraints (such as lighting variability or color of objects), and sound can provide important information for the characterization of an environment. In this work, we propose a sound-based approach to enhance the environment recognition process, mainly for cases that compromise human integrity, according to the International Classification of Functioning (ICF). Our proposal is based on a neural network implementation that is able to classify up to 15 different environments, each selected according to the ICF considerations on environment factors in the community-based physical activities of people with disabilities. Main results. The accuracy rates in environment classification range from 84% to 93%. This classification is later used to constrain assistive vehicle navigation in order to protect the user during daily activities. This work also includes real-time outdoor experimentation (performed on an assistive vehicle) by seven volunteers with different disabilities (but without cognitive impairment and experienced in the use of wheelchairs), statistical validation, comparison with previously published work, and a discussion section where the pros and cons of our system are evaluated. Significance
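
    As an illustration of the classification stage described above, the sketch below trains a small feed-forward network on pre-computed audio feature vectors for 15 environment classes. The feature choice (e.g. MFCC statistics), network size, and the input arrays X and y are assumptions for illustration, not the authors' architecture or data.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      def train_environment_classifier(X, y):
          """Train a small feed-forward network to label sound clips with one of 15
          environment classes.  X: (n_clips, n_features) pre-computed audio features
          (e.g. MFCC statistics); y: integer labels 0..14.  Architecture is illustrative."""
          X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                    random_state=0, stratify=y)
          clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
          clf.fit(X_tr, y_tr)
          return clf, clf.score(X_te, y_te)   # classifier and held-out accuracy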

  6. Categorical and coordinate processing in object recognition depends on different spatial frequencies.

    PubMed

    Saneyoshi, Ayako; Michimata, Chikashi

    2015-02-01

    Previous studies have suggested that processing categorical spatial relations requires high spatial frequency (HSF) information, while coordinate spatial relations require low spatial frequency (LSF) information. The aim of the present study was to determine whether spatial frequency influences categorical and coordinate processing in object recognition. Participants performed two object-matching tasks for novel, non-nameable objects consisting of "geons" (c.f. Brain Cogn 71:181-186, 2009). For each original stimulus, categorical and coordinate transformations were applied to create comparison stimuli. These stimuli were high-pass/low-cut-filtered or low-pass/high-cut-filtered by a filter with a 2D Gaussian envelope. The categorical task consisted of the original and categorical-transformed objects. The coordinate task consisted of the original and coordinate-transformed objects. The non-filtered object image was presented on a CRT monitor, followed by a comparison object (non-filtered, high-pass-filtered, and low-pass-filtered stimuli). The results showed that the removal of HSF information from the object image produced longer reaction times (RTs) in the categorical task, while removal of LSF information produced longer RTs in the coordinate task. These results support spatial frequency processing theory, specifically Kosslyn's hypothesis and the double filtering frequency model.
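
    Stimuli of the kind described above can be produced by multiplying an image's Fourier spectrum by a 2-D Gaussian envelope (low-pass) or by its complement (high-pass). The NumPy sketch below is illustrative; the cutoff parameterization is an assumption rather than the study's exact filter settings.

      import numpy as np

      def gaussian_frequency_filter(img, sigma_cpi, highpass=False):
          """Low-pass (or complementary high-pass) filter an image with a 2-D Gaussian
          envelope in the Fourier domain.  sigma_cpi is the Gaussian spread in cycles
          per image (an illustrative parameterization)."""
          fy = np.fft.fftfreq(img.shape[0]) * img.shape[0]   # cycles per image, rows
          fx = np.fft.fftfreq(img.shape[1]) * img.shape[1]   # cycles per image, cols
          FX, FY = np.meshgrid(fx, fy)
          envelope = np.exp(-(FX ** 2 + FY ** 2) / (2.0 * sigma_cpi ** 2))  # low-pass
          if highpass:
              envelope = 1.0 - envelope                       # complementary high-pass
          return np.real(np.fft.ifft2(np.fft.fft2(img) * envelope))

      # e.g. (stimulus is a hypothetical grayscale array):
      # low_sf  = gaussian_frequency_filter(stimulus, sigma_cpi=8)
      # high_sf = gaussian_frequency_filter(stimulus, sigma_cpi=8, highpass=True)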

  7. Guppies Show Behavioural but Not Cognitive Sex Differences in a Novel Object Recognition Test

    PubMed Central

    Lucon-Xiccato, Tyrone; Dadda, Marco

    2016-01-01

    The novel object recognition (NOR) test is a widely-used paradigm to study learning and memory in rodents. NOR performance is typically measured as the preference to interact with a novel object over a familiar object based on spontaneous exploratory behaviour. In rats and mice, females usually have greater NOR ability than males. The NOR test is now available for a large number of species, including fish, but sex differences have not been properly tested outside of rodents. We compared male and female guppies (Poecilia reticulata) in a NOR test to study whether sex differences exist also for fish. We focused on sex differences in both performance and behaviour of guppies during the test. In our experiment, adult guppies expressed a preference for the novel object as most rodents and other species do. When we looked at sex differences, we found the two sexes showed a similar preference for the novel object over the familiar object, suggesting that male and female guppies have similar NOR performances. Analysis of behaviour revealed that males were more inclined to swim in the proximity of the two objects than females. Further, males explored the novel object at the beginning of the experiment while females did so afterwards. These two behavioural differences are possibly due to sex differences in exploration. Even though NOR performance is not different between male and female guppies, the behavioural sex differences we found could affect the results of the experiments and should be carefully considered when assessing fish memory with the NOR test. PMID:27305102

  8. Physical exercise during adolescence versus adulthood: differential effects on object recognition memory and brain-derived neurotrophic factor levels.

    PubMed

    Hopkins, M E; Nitecki, R; Bucci, D J

    2011-10-27

    It is well established that physical exercise can enhance hippocampal-dependent forms of learning and memory in laboratory animals, commensurate with increases in hippocampal neural plasticity (brain-derived neurotrophic factor [BDNF] mRNA/protein, neurogenesis, long-term potentiation [LTP]). However, very little is known about the effects of exercise on other, non-spatial forms of learning and memory. In addition, there has been little investigation of the duration of the effects of exercise on behavior or plasticity. Likewise, few studies have compared the effects of exercising during adulthood versus adolescence. This is particularly important since exercise may capitalize on the peak of neural plasticity observed during adolescence, resulting in a different pattern of behavioral and neurobiological effects. The present study addressed these gaps in the literature by comparing the effects of 4 weeks of voluntary exercise (wheel running) during adulthood or adolescence on novel object recognition and BDNF levels in the perirhinal cortex (PER) and hippocampus (HP). Exercising during adulthood improved object recognition memory when rats were tested immediately after 4 weeks of exercise, an effect that was accompanied by increased BDNF levels in PER and HP. When rats were tested again 2 weeks after exercise ended, the effects of exercise on recognition memory and BDNF levels were no longer present. Exercising during adolescence had a very different pattern of effects. First, both exercising and non-exercising rats could discriminate between novel and familiar objects immediately after the exercise regimen ended; furthermore there was no group difference in BDNF levels. Two or four weeks later, however, rats that had previously exercised as adolescents could still discriminate between novel and familiar objects, while non-exercising rats could not. Moreover, the formerly exercising rats exhibited higher levels of BDNF in PER compared to HP, while the reverse was

  9. [Non-conscious perception of emotional faces affects visual object recognition].

    PubMed

    Gerasimenko, N Iu; Slavutskaia, A V; Kalinin, S A; Mikhaĭlova, E S

    2013-01-01

    In 34 healthy subjects we analyzed accuracy and reaction time (RT) during the recognition of complex visual images: pictures of animals and non-living objects. The target stimuli were preceded by brief presentation of masking non-target ones, which represented drawings of emotional (angry, fearful, happy) or neutral faces. We found that, in contrast to accuracy, RT depended on the emotional expression of the preceding faces. RT was significantly shorter if the target objects were paired with the angry and fearful faces as compared with the happy and neutral ones. These effects depended on the category of the target stimulus and were more prominent for objects than for animals. Further, the emotional faces' effects were determined by emotional and communication personality traits (defined by Cattell's Questionnaire) and were more clearly defined in more sensitive, anxious and pessimistic introverts. The data are important for understanding how the non-conscious processing of emotional information shapes human visual behavior.

  10. Simple and efficient improvement of spin image for three-dimensional object recognition

    NASA Astrophysics Data System (ADS)

    Lu, Rongrong; Zhu, Feng; Hao, Yingming; Wu, Qingxiao

    2016-11-01

    This paper presents a highly distinctive and robust local three-dimensional (3-D) feature descriptor named longitude and latitude spin image (LLSI). The whole procedure has two modules: local reference frame (LRF) definition and LLSI feature description. We employ the same technique as Tombari to define the LRF. The LLSI feature descriptor is obtained by stitching the longitude and latitude (LL) image to the original spin image vertically, where the LL image is generated similarly to the spin image by mapping a two-tuple (θ,φ) into a discrete two-dimensional histogram. The performance of the proposed LLSI descriptor was rigorously tested on a number of popular and publicly available datasets. The results showed that our method is more robust with respect to noise and varying mesh resolution than existing techniques. Finally, we tested our LLSI-based algorithm for 3-D object recognition on two popular datasets. Our LLSI-based algorithm achieved recognition rates of 100%, 98.2%, and 96.2% on the Bologna dataset, the University of Western Australia (UWA) dataset with up to 84% occlusion, and the full UWA dataset, respectively. Moreover, our LLSI-based algorithm achieved a 100% recognition rate on the whole UWA dataset when generating the LLSI descriptor with the LRF proposed by Guo.
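
    The base component of the LLSI descriptor is the classic spin image: every neighbour of an oriented surface point maps to an (alpha, beta) pair that is accumulated into a 2-D histogram, and the LL extension accumulates a (theta, phi) tuple in the same way. The NumPy sketch below implements only the classic spin-image part; the bin count and support radius are illustrative assumptions, and p, n, and neighbors are hypothetical inputs.

      import numpy as np

      def spin_image(p, n, neighbors, bins=16, support=1.0):
          """Classic spin image at an oriented point p with unit normal n.
          Each neighbour x maps to alpha = radial distance from the normal axis and
          beta = signed height along the normal, accumulated into a 2-D histogram.
          (Sketch of the base descriptor the LLSI extends; parameters illustrative.)"""
          n = n / np.linalg.norm(n)
          d = neighbors - p                        # vectors from p to its neighbours
          beta = d @ n                             # height along the normal
          alpha = np.sqrt(np.maximum(np.einsum('ij,ij->i', d, d) - beta ** 2, 0.0))
          hist, _, _ = np.histogram2d(alpha, beta, bins=bins,
                                      range=[[0, support], [-support, support]])
          return hist / max(hist.sum(), 1.0)       # normalized descriptor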

  11. Nicotine enhances the reconsolidation of novel object recognition memory in rats.

    PubMed

    Tian, Shaowen; Pan, Si; You, Yong

    2015-02-01

    There is increasing evidence that nicotine is involved in learning and memory. However, there are only a few studies that have evaluated the relationship between nicotine and memory reconsolidation. In this study, we investigated the effects of nicotine on the reconsolidation of novel object recognition memory in rats. The behavioral procedure involved four phases: habituation (Days 1 and 2), sample (Day 3), reactivation (Day 4) and test (Day 6). Rats were injected with saline or nicotine (0.1, 0.2 and 0.4 mg/kg) immediately or 6 h after reactivation. The discrimination index was used to assess memory performance and was calculated as the difference in time spent exploring the novel and familiar objects. Results showed that nicotine administration immediately but not 6 h after reactivation significantly enhanced the memory performance of rats. Further results showed that the enhancing effect of nicotine on memory performance was dependent on memory reactivation, and was not attributable to changes in nonspecific responses (locomotor activity and anxiety level) 48 h after nicotine administration. The results suggest that post-reactivation nicotine administration enhances the reconsolidation of novel object recognition memory. Our present finding extends previous research on nicotinic effects on learning and memory.
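
    The discrimination index referred to here (and in the other recognition-memory records in this collection) is computed from the exploration times of the novel and familiar objects. A minimal sketch follows; note that both the raw-difference form stated in this abstract and a normalized ratio form appear in the literature, so the choice of form is made explicit.

      def discrimination_index(t_novel, t_familiar, normalized=True):
          """Discrimination index from exploration times (seconds).
          normalized=True: (novel - familiar) / (novel + familiar), bounded in [-1, 1].
          normalized=False: the raw difference used by some studies."""
          diff = t_novel - t_familiar
          total = t_novel + t_familiar
          return diff / total if (normalized and total > 0) else diff

      # e.g. a rat exploring the novel object for 20 s and the familiar one for 10 s:
      # discrimination_index(20, 10)         -> 0.333...  (clear novelty preference)
      # discrimination_index(20, 10, False)  -> 10        (raw difference in seconds)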

  12. Simulated Prosthetic Vision: The Benefits of Computer-Based Object Recognition and Localization.

    PubMed

    Macé, Marc J-M; Guivarch, Valérian; Denis, Grégoire; Jouffrais, Christophe

    2015-07-01

    Clinical trials with blind patients implanted with a visual neuroprosthesis showed that even the simplest tasks were difficult to perform with the limited vision restored with current implants. Simulated prosthetic vision (SPV) is a powerful tool to investigate the putative functions of the upcoming generations of visual neuroprostheses. Recent studies based on SPV showed that several generations of implants will be required before usable vision is restored. However, none of these studies relied on advanced image processing. High-level image processing could significantly reduce the amount of information required to perform visual tasks and help restore visuomotor behaviors, even with current low-resolution implants. In this study, we simulated a prosthetic vision device based on object localization in the scene. We evaluated the usability of this device for object recognition, localization, and reaching. We showed that a very low number of electrodes (e.g., nine) are sufficient to restore visually guided reaching movements with fair timing (10 s) and high accuracy. In addition, performance, both in terms of accuracy and speed, was comparable with 9 and 100 electrodes. Extraction of high level information (object recognition and localization) from video images could drastically enhance the usability of current visual neuroprosthesis. We suggest that this method-that is, localization of targets of interest in the scene-may restore various visuomotor behaviors. This method could prove functional on current low-resolution implants. The main limitation resides in the reliability of the vision algorithms, which are improving rapidly.

  13. A discrepancy measure for segmentation evaluation from the perspective of object recognition

    NASA Astrophysics Data System (ADS)

    Yang, Jian; He, Yuhong; Caspersen, John; Jones, Trevor

    2015-03-01

    Within the framework of geographic object-based image analysis (GEOBIA), segmentation evaluation is one of the most important components and thus plays a critical role in controlling the quality of the GEOBIA workflow. Among a variety of segmentation evaluation methods and criteria, discrepancy measurement is believed to be the most useful and is therefore one of the most commonly employed techniques in many applications. Existing measures have largely ignored the importance of object recognition in segmentation evaluation. In this study, a new discrepancy measure, the segmentation evaluation index (SEI), redefines segment correspondence using a two-sided 50% overlap instead of the commonly used one-sided 50% overlap. The effectiveness of SEI is further investigated using schematic segmentation cases and remote sensing images. Results demonstrate that the proposed SEI outperforms two existing discrepancy measures, Euclidean Distance 2 (ED2) and Euclidean Distance 3 (ED3), in terms of both object recognition accuracy and identification of detailed segmentation differences.
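
    The two-sided 50% overlap rule can be stated compactly: a segment corresponds to a reference object only if their intersection covers more than half of each polygon. The sketch below assumes Shapely geometries and illustrates the rule only; it is not the authors' implementation.

      from shapely.geometry import Polygon

      def corresponds(segment: Polygon, reference: Polygon) -> bool:
          """Two-sided 50% overlap rule: the intersection must cover more than half
          of BOTH the segment and the reference object (the one-sided variant only
          checks one of the two)."""
          inter = segment.intersection(reference).area
          return inter > 0.5 * segment.area and inter > 0.5 * reference.area

      # e.g. a segment shifted partly off its reference square fails the test:
      # ref = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
      # seg = Polygon([(6, 0), (16, 0), (16, 10), (6, 10)])
      # corresponds(seg, ref)  -> False (only 40% mutual overlap)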

  14. Dopamine D1 receptor stimulation modulates the formation and retrieval of novel object recognition memory: Role of the prelimbic cortex

    PubMed Central

    Pezze, Marie A.; Marshall, Hayley J.; Fone, Kevin C.F.; Cassaday, Helen J.

    2015-01-01

    Previous studies have shown that dopamine D1 receptor antagonists impair novel object recognition memory but the effects of dopamine D1 receptor stimulation remain to be determined. This study investigated the effects of the selective dopamine D1 receptor agonist SKF81297 on acquisition and retrieval in the novel object recognition task in male Wistar rats. SKF81297 (0.4 and 0.8 mg/kg s.c.) given 15 min before the sampling phase impaired novel object recognition evaluated 10 min or 24 h later. The same treatments also reduced novel object recognition memory tested 24 h after the sampling phase and when given 15 min before the choice session. These data indicate that D1 receptor stimulation modulates both the encoding and retrieval of object recognition memory. Microinfusion of SKF81297 (0.025 or 0.05 μg/side) into the prelimbic sub-region of the medial prefrontal cortex (mPFC) in this case 10 min before the sampling phase also impaired novel object recognition memory, suggesting that the mPFC is one important site mediating the effects of D1 receptor stimulation on visual recognition memory. PMID:26277743

  15. Dopamine D1 receptor stimulation modulates the formation and retrieval of novel object recognition memory: Role of the prelimbic cortex.

    PubMed

    Pezze, Marie A; Marshall, Hayley J; Fone, Kevin C F; Cassaday, Helen J

    2015-11-01

    Previous studies have shown that dopamine D1 receptor antagonists impair novel object recognition memory but the effects of dopamine D1 receptor stimulation remain to be determined. This study investigated the effects of the selective dopamine D1 receptor agonist SKF81297 on acquisition and retrieval in the novel object recognition task in male Wistar rats. SKF81297 (0.4 and 0.8 mg/kg s.c.) given 15 min before the sampling phase impaired novel object recognition evaluated 10 min or 24 h later. The same treatments also reduced novel object recognition memory tested 24 h after the sampling phase and when given 15 min before the choice session. These data indicate that D1 receptor stimulation modulates both the encoding and retrieval of object recognition memory. Microinfusion of SKF81297 (0.025 or 0.05 μg/side) into the prelimbic sub-region of the medial prefrontal cortex (mPFC) in this case 10 min before the sampling phase also impaired novel object recognition memory, suggesting that the mPFC is one important site mediating the effects of D1 receptor stimulation on visual recognition memory.

  16. On the Relation between Face and Object Recognition in Developmental Prosopagnosia: No Dissociation but a Systematic Association

    PubMed Central

    Klargaard, Solja K.; Starrfelt, Randi

    2016-01-01

    There is an ongoing debate about whether face recognition and object recognition constitute separate domains. Clarification of this issue can have important theoretical implications as face recognition is often used as a prime example of domain-specificity in mind and brain. An important source of input to this debate comes from studies of individuals with developmental prosopagnosia, suggesting that face recognition can be selectively impaired. We put the selectivity hypothesis to test by assessing the performance of 10 individuals with developmental prosopagnosia on demanding tests of visual object processing involving both regular and degraded drawings. None of the individuals exhibited a clear dissociation between face and object recognition, and as a group they were significantly more affected by degradation of objects than control participants. Importantly, we also find positive correlations between the severity of the face recognition impairment and the degree of impaired performance with degraded objects. This suggests that the face and object deficits are systematically related rather than coincidental. We conclude that at present, there is no strong evidence in the literature on developmental prosopagnosia supporting domain-specific accounts of face recognition. PMID:27792780

  17. The Role of Sensory-Motor Information in Object Recognition: Evidence from Category-Specific Visual Agnosia

    ERIC Educational Resources Information Center

    Wolk, D.A.; Coslett, H.B.; Glosser, G.

    2005-01-01

    The role of sensory-motor representations in object recognition was investigated in experiments involving AD, a patient with mild visual agnosia who was impaired in the recognition of visually presented living as compared to non-living entities. AD named visually presented items for which sensory-motor information was available significantly more…

  18. Developing a Credit Recognition System for Chinese Higher Education Institutions

    ERIC Educational Resources Information Center

    Li, Fuhui

    2015-01-01

    In recent years, a credit recognition system has been developing in Chinese higher education institutions. Much research has been done on this development, but it has been concentrated on system building, barriers/issues and international practices. The relationship between credit recognition system reforms and democratisation of higher education…

  19. Assessment of disease-related cognitive impairments using the novel object recognition (NOR) task in rodents.

    PubMed

    Grayson, Ben; Leger, Marianne; Piercy, Chloe; Adamson, Lisa; Harte, Michael; Neill, Joanna C

    2015-05-15

    The novel object recognition (NOR) test is a two-trial cognitive paradigm that assesses recognition memory. Recognition memory is disturbed in a range of human disorders and NOR is widely used in rodents for investigating deficits in a variety of animal models of human conditions where cognition is impaired. It possesses several advantages over more complex tasks that involve lengthy training procedures and/or food or water deprivation. It is quick to administer, non-rewarded, cost-effective, provides data quickly and, most importantly, is ethologically relevant as it relies on the animal's natural preference for novelty. A PubMed search revealed over 900 publications in rats and mice using this task over the past 3 years with 34 reviews in the past 10 years, demonstrating its increasing popularity with neuroscientists. Although it is widely used in many disparate areas of research, no articles have systematically examined this to date, which is the subject of our review. We reveal that NOR may be used to study recognition memory deficits that occur in Alzheimer's disease and schizophrenia, where research is extensive, and in Parkinson's disease and Autism Spectrum Disorders (ASD), where we observed markedly reduced numbers of publications. In addition, we review the use of NOR to study cognitive deficits induced by traumatic brain injury and cancer chemotherapy, not disorders per se, but situations in which cognitive deficits dramatically reduce the quality of life for those affected (see Fig. 1 for a summary). Our review reveals that, in all these animal models, the NOR test is extremely useful for identifying the cognitive deficits observed and their neural basis, and for testing the efficacy of novel therapeutic agents. Our conclusion is that NOR is of considerable value for cognitive researchers of all disciplines and we anticipate that its use will continue to increase due to its versatility and several other advantages, as detailed in this review.

  20. Voice recognition interface for a radiology information system

    NASA Astrophysics Data System (ADS)

    Hinson, William H.; Boehme, Johannes M.; Choplin, Robert H.; Santago, Peter, II

    1990-08-01

    We have implemented a voice recognition interface using a Dragon Systems VoiceScribe-1000 Speech Recognition system installed on an AT&T 6310 personal computer. The Dragon Systems DragonKey software allows the user to emulate keyboard functions using the speech recognition system and replaces the presently used bar code system. The software supports user voice training, grammar design and compilation, as well as speech recognition. We have successfully integrated this voice interface into the clinical report generation system for most standard mammography studies. We have found that the voice system provides a simple, user-friendly interface which is more widely accepted in a medical environment because of its similarities to traditional dictation. Although the system requires some initial time for voice training, it avoids potential delays in transcription and proofreading. This paper describes the design and implementation of this voice recognition interface in our department.

  1. Memory consolidation and expression of object recognition are susceptible to retroactive interference.

    PubMed

    Villar, María Eugenia; Martinez, María Cecilia; Lopes da Cunha, Pamela; Ballarini, Fabricio; Viola, Haydee

    2017-02-01

    With the aim of analyzing whether object recognition long-term memory (OR-LTM) formation is susceptible to retroactive interference (RI), we submitted rats to sequential sample sessions using the same arena but changing the identity of a pair of objects placed in it. Separate groups of animals were tested in the arena in order to evaluate the LTM for these objects. Our results suggest that OR-LTM formation was retroactively interfered with, within a critical time window, by the exploration of a new, but not familiar, object. This RI acted on the consolidation of the object explored in the first sample session because its OR-STM measured 3 h after training was not affected, whereas the OR-LTM measured at 24 h was impaired. This sample session also impaired the expression of OR memory when it took place before the test. Moreover, local inactivation of the dorsal hippocampus (Hp) or the medial prefrontal cortex (mPFC) prior to the exploration of the second pair of objects impaired their consolidation, restoring the LTM for the objects explored in the first session. These data suggest that both brain regions are involved in the processing of OR memory and also that, if those regions are engaged in another process before the first consolidation has finished, the LTM will be impaired by RI.

  2. Differential roles for Nr4a1 and Nr4a2 in object location vs. object recognition long-term memory.

    PubMed

    McNulty, Susan E; Barrett, Ruth M; Vogel-Ciernia, Annie; Malvaez, Melissa; Hernandez, Nicole; Davatolhagh, M Felicia; Matheos, Dina P; Schiffman, Aaron; Wood, Marcelo A

    2012-11-16

    Nr4a1 and Nr4a2 are transcription factors and immediate early genes belonging to the nuclear receptor Nr4a family. In this study, we examine their role in long-term memory formation for object location and object recognition. Using siRNA to block expression of either Nr4a1 or Nr4a2, we found that Nr4a2 is necessary for both long-term memory for object location and object recognition. In contrast, Nr4a1 appears to be necessary only for object location. Indeed, their roles in these different types of long-term memory may be dependent on their expression in the brain, as NR4A2 was found to be expressed in hippocampal neurons (associated with object location memory) as well as in the insular and perirhinal cortex (associated with object recognition memory), whereas NR4A1 showed minimal neuronal expression in these cortical areas. These results begin to elucidate how NR4A1 and NR4A2 differentially contribute to object location versus object recognition memory.

  3. A new behavioural apparatus to reduce animal numbers in multiple types of spontaneous object recognition paradigms in rats.

    PubMed

    Ameen-Ali, K E; Eacott, M J; Easton, A

    2012-10-15

    Standard object recognition procedures assess animals' memory through their spontaneous exploration of novel objects or novel configurations of objects with other aspects of their environment. Such tasks are widely used in memory research, but also in pharmaceutical companies screening new drug treatments. However, behaviour in these tasks may be driven by influences other than novelty such as stress from handling which can subsequently influence performance. This extra-experimental variance means that large numbers of animals are required to maintain power. In addition, accumulation of data is time consuming as animals typically perform only one trial per day. The present study aimed to explore how effectively recognition memory could be tested with a new continual trials apparatus which allows for multiple trials within a session and reduced handling stress through combining features of delayed nonmatching-to-sample and spontaneous object recognition tasks. In this apparatus Lister hooded rats displayed performance significantly above chance levels in object recognition tasks (Experiments 1 and 2) and in tasks of object-location (Experiment 3) and object-in-context memory (Experiment 4) with data from only five animals or fewer per experimental group. The findings indicated that the results were comparable to those of previous reports in the literature and maintained statistical power whilst using less than a third of the number of animals typically used in spontaneous recognition paradigms. Overall, the results highlight the potential benefit of the continual trials apparatus to reduce the number of animals used in recognition memory tasks.

  4. Different roles for M1 and M2 receptors within perirhinal cortex in object recognition and discrimination.

    PubMed

    Bartko, Susan J; Winters, Boyer D; Saksida, Lisa M; Bussey, Timothy J

    2014-04-01

    Recognition and discrimination of objects and individuals are critical cognitive faculties in both humans and non-human animals, and cholinergic transmission has been shown to be essential for both of these functions. In the present study we focused on the role of M1 and M2 muscarinic receptors in perirhinal cortex (PRh)-dependent object recognition and discrimination. The selective M1 antagonists pirenzepine and the snake toxin MT-7, and a selective M2 antagonist, AF-DX 116, were infused directly into PRh. Pre-sample infusions of both pirenzepine and AF-DX 116 significantly impaired object recognition memory in a delay-dependent manner. However, pirenzepine and MT-7, but not AF-DX 116, impaired oddity discrimination performance in a perceptual difficulty-dependent manner. The findings indicate distinct functions for M1 and M2 receptors in object recognition and discrimination.

  5. Remembering the object you fear: brain potentials during recognition of spiders in spider-fearful individuals.

    PubMed

    Michalowski, Jaroslaw M; Weymar, Mathias; Hamm, Alfons O

    2014-01-01

    In the present study we investigated long-term memory for unpleasant, neutral and spider pictures in 15 spider-fearful and 15 non-fearful control individuals using behavioral and electrophysiological measures. During the initial (incidental) encoding, pictures were passively viewed in three separate blocks and were subsequently rated for valence and arousal. A recognition memory task was performed one week later in which old and new unpleasant, neutral and spider pictures were presented. Replicating previous results, we found enhanced memory performance and higher confidence ratings for unpleasant when compared to neutral materials in both animal-fearful individuals and controls. When compared to controls, highly animal-fearful individuals also showed a tendency towards better memory accuracy and significantly higher confidence during recognition of spider pictures, suggesting that memory of objects prompting specific fear is also facilitated in fearful individuals. In line with this, spider-fearful but not control participants responded with larger ERP positivity for correctly recognized old spider pictures when compared to correctly rejected new ones, thus showing the same effects in the neural signature of emotional memory for feared objects that were already discovered for other emotional materials. The increased fear memory for phobic materials observed in the present study in spider-fearful individuals might result in an enhanced fear response and reinforce negative beliefs, aggravating anxiety symptomatology and hindering recovery.

  6. Conversion of short-term to long-term memory in the novel object recognition paradigm.

    PubMed

    Moore, Shannon J; Deshpande, Kaivalya; Stinnett, Gwen S; Seasholtz, Audrey F; Murphy, Geoffrey G

    2013-10-01

    It is well-known that stress can significantly impact learning; however, whether this effect facilitates or impairs the resultant memory depends on the characteristics of the stressor. Investigation of these dynamics can be confounded by the role of the stressor in motivating performance in a task. Positing a cohesive model of the effect of stress on learning and memory necessitates elucidating the consequences of stressful stimuli independently from task-specific functions. Therefore, the goal of this study was to examine the effect of manipulating a task-independent stressor (elevated light level) on short-term and long-term memory in the novel object recognition paradigm. Short-term memory was elicited in both low light and high light conditions, but long-term memory specifically required high light conditions during the acquisition phase (familiarization trial) and was independent of the light level during retrieval (test trial). Additionally, long-term memory appeared to be independent of stress-mediated glucocorticoid release, as both low and high light produced similar levels of plasma corticosterone, which further did not correlate with subsequent memory performance. Finally, both short-term and long-term memory showed no savings between repeated experiments suggesting that this novel object recognition paradigm may be useful for longitudinal studies, particularly when investigating treatments to stabilize or enhance weak memories in neurodegenerative diseases or during age-related cognitive decline.

  7. Image registration and object recognition using affine invariants and convex hulls.

    PubMed

    Yang, Z; Cohen, F S

    1999-01-01

    This paper is concerned with the problem of feature point registration and scene recognition from images under weak perspective transformations which are well approximated by affine transformations and under possible occlusion and/or appearance of new objects. It presents a set of local absolute affine invariants derived from the convex hull of scattered feature points (e.g., fiducial or marking points, corner points, inflection points, etc.) extracted from the image. The affine invariants are constructed from the areas of the triangles formed by connecting three vertices among a set of four consecutive vertices (quadruplets) of the convex hull, and hence do make direct use of the area invariance property associated with the affine transformation. Because they are locally constructed, they are very well suited to handle the occlusion and/or appearance of new objects. These invariants are used to establish the correspondences between the convex hull vertices of a test image with a reference image in order to undo the affine transformation between them. A point matching approach for recognition follows this. The time complexity for registering L feature points on the test image with N feature points of the reference image is of order O(N x L). The method has been tested on real indoor and outdoor images and performs well.
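
    The invariants described above rest on the fact that an affine map scales all areas by the same factor, so the ratio of the areas of two triangles drawn from the same quadruplet of consecutive hull vertices is unchanged by the transformation. The sketch below (SciPy for the hull; the particular triangle pairing within each quadruplet is an illustrative assumption) computes one such ratio per quadruplet.

      import numpy as np
      from scipy.spatial import ConvexHull

      def triangle_area(a, b, c):
          """Unsigned area of triangle abc (2-D points)."""
          return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

      def hull_affine_invariants(points):
          """For each quadruplet of consecutive convex-hull vertices (p0, p1, p2, p3),
          return the ratio of the areas of two triangles formed from them.  Area ratios
          are absolute affine invariants, and because they are built from consecutive
          vertices they stay local, which helps under occlusion."""
          pts = np.asarray(points, dtype=float)
          hull = pts[ConvexHull(pts).vertices]        # hull vertices in order
          k = len(hull)
          invariants = []
          for i in range(k):
              p0, p1, p2, p3 = (hull[(i + j) % k] for j in range(4))
              a1 = triangle_area(p0, p1, p2)
              a2 = triangle_area(p1, p2, p3)
              invariants.append(a1 / a2 if a2 > 1e-12 else 0.0)
          return np.array(invariants)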

  8. Compression of digital holograms for three-dimensional object reconstruction and recognition.

    PubMed

    Naughton, Thomas J; Frauel, Yann; Javidi, Bahram; Tajahuerce, Enrique

    2002-07-10

    We present the results of applying lossless and lossy data compression to a three-dimensional object reconstruction and recognition technique based on phase-shift digital holography. We find that the best lossless (Lempel-Ziv, Lempel-Ziv-Welch, Huffman, Burrows-Wheeler) compression rates can be expected when the digital hologram is stored in an intermediate coding of separate data streams for real and imaginary components. The lossy techniques are based on subsampling, quantization, and discrete Fourier transformation. For various degrees of speckle reduction, we quantify the number of Fourier coefficients that can be removed from the hologram domain, and the lowest level of quantization achievable, without incurring significant loss in correlation performance or significant error in the reconstructed object domain.
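
    A minimal sketch of the coding idea follows, using synthetic complex data and zlib as a stand-in for the lossless coders named above; the separate-stream packing and the uniform quantizer are illustrative assumptions, not the authors' exact pipeline (and real holograms compress far better than the random data used here).

```python
# Sketch: lossless coding of real/imaginary streams plus a lossy quantization step.
import numpy as np
import zlib

rng = np.random.default_rng(0)
holo = (rng.standard_normal((256, 256))
        + 1j * rng.standard_normal((256, 256))).astype(np.complex64)

# Interleaved coding: raw complex64 bytes compressed directly.
interleaved = zlib.compress(holo.tobytes(), 9)

# Separate-stream coding: all real samples first, then all imaginary samples.
separate = zlib.compress(holo.real.tobytes() + holo.imag.tobytes(), 9)

def quantize(x, bits):
    """Uniform quantization of one component to 2**bits levels (lossy step)."""
    lo, hi = x.min(), x.max()
    levels = 2 ** bits - 1
    q = np.round((x - lo) / (hi - lo) * levels)
    return (q / levels * (hi - lo) + lo).astype(np.float32)

holo_q = quantize(holo.real, 4) + 1j * quantize(holo.imag, 4)

print(len(interleaved), len(separate))   # compare lossless stream sizes
print(np.abs(holo - holo_q).max())       # worst-case quantization error
```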

  9. Comparison of passive ranging integral imaging and active imaging digital holography for three-dimensional object recognition.

    PubMed

    Frauel, Yann; Tajahuerce, Enrique; Matoba, Osamu; Castro, Albertina; Javidi, Bahram

    2004-01-10

    We present an overview of three-dimensional (3D) object recognition techniques that use active sensing by interferometric imaging (digital holography) and passive sensing by integral imaging. We describe how each technique can be used to retrieve the depth information of a 3D scene and how this information can then be used for 3D object recognition. We explore various algorithms for 3D recognition such as nonlinear correlation and target distortion tolerance. We also provide a comparison of the advantages and disadvantages of the two techniques.

  10. Modafinil improves methamphetamine-induced object recognition deficits and restores prefrontal cortex ERK signaling in mice

    PubMed Central

    Gonzalez, Betina; Raineri, Mariana; Cadet, Jean Lud; García-Rill, Edgar; Urbano, Francisco J.; Bisagno, Veronica

    2016-01-01

    Chronic use of methamphetamine (METH) leads to long-lasting cognitive dysfunction in humans and in animal models. Modafinil is a wake-promoting compound approved for the treatment of sleeping disorders. It is also prescribed off label to treat METH dependence. In the present study, we investigated whether modafinil could improve cognitive deficits induced by sub-chronic METH treatment in mice by measuring visual retention in a Novel Object Recognition (NOR) task. After sub-chronic METH treatment (1 mg/kg, once a day for 7 days), mice performed the NOR task, which consisted of habituation to the object recognition arena (5 min a day, 3 consecutive days), training session (2 equal objects, 10 min, day 4), and a retention session (1 novel object, 5 min, day 5). One hour before the training session, mice were given a single dose of modafinil (30 or 90 mg/kg). METH-treated mice showed impairments in visual memory retention, evidenced by equal preference of familiar and novel objects during the retention session. The lower dose of modafinil (30 mg/kg) had no effect on visual retention scores in METH-treated mice, while the higher dose (90 mg/kg) rescued visual memory retention to control values. We also measured extracellular signal-regulated kinase (ERK) phosphorylation in medial prefrontal cortex (mPFC), hippocampus, and nucleus accumbens (NAc) of METH- and vehicle-treated mice that received modafinil 1 h before exposure to novel objects in the training session, compared to mice placed in the arena without objects. Elevated ERK phosphorylation was found in the mPFC of vehicle-treated mice, but not in METH-treated mice, exposed to objects (p<0.05). The lower dose of modafinil had no effect on ERK phosphorylation in METH-treated mice, while 90 mg/kg modafinil treatment restored the ERK phosphorylation induced by novelty in METH-treated mice to values comparable to controls (p<0.05). We found neither a novelty nor treatment effect on ERK phosphorylation in hippocampus or

  11. Modafinil improves methamphetamine-induced object recognition deficits and restores prefrontal cortex ERK signaling in mice.

    PubMed

    González, Betina; Raineri, Mariana; Cadet, Jean Lud; García-Rill, Edgar; Urbano, Francisco J; Bisagno, Veronica

    2014-12-01

    Chronic use of methamphetamine (METH) leads to long-lasting cognitive dysfunction in humans and in animal models. Modafinil is a wake-promoting compound approved for the treatment of sleeping disorders. It is also prescribed off label to treat METH dependence. In the present study, we investigated whether modafinil could improve cognitive deficits induced by sub-chronic METH treatment in mice by measuring visual retention in a Novel Object Recognition (NOR) task. After sub-chronic METH treatment (1 mg/kg, once a day for 7 days), mice performed the NOR task, which consisted of habituation to the object recognition arena (5 min a day, 3 consecutive days), training session (2 equal objects, 10 min, day 4), and a retention session (1 novel object, 5 min, day 5). One hour before the training session, mice were given a single dose of modafinil (30 or 90 mg/kg). METH-treated mice showed impairments in visual memory retention, evidenced by equal preference of familiar and novel objects during the retention session. The lower dose of modafinil (30 mg/kg) had no effect on visual retention scores in METH-treated mice, while the higher dose (90 mg/kg) rescued visual memory retention to control values. We also measured extracellular signal-regulated kinase (ERK) phosphorylation in medial prefrontal cortex (mPFC), hippocampus, and nucleus accumbens (NAc) of METH- and vehicle-treated mice that received modafinil 1 h before exposure to novel objects in the training session, compared to mice placed in the arena without objects. Elevated ERK phosphorylation was found in the mPFC of vehicle-treated mice, but not in METH-treated mice, exposed to objects. The lower dose of modafinil had no effect on ERK phosphorylation in METH-treated mice, while 90 mg/kg modafinil treatment restored the ERK phosphorylation induced by novelty in METH-treated mice to values comparable to controls. We found neither a novelty nor treatment effect on ERK phosphorylation in hippocampus or NAc of vehicle

  12. Recognition of 3-D symmetric objects from range images in automated assembly tasks

    NASA Technical Reports Server (NTRS)

    Alvertos, Nicolas; Dcunha, Ivan

    1990-01-01

    A new technique is presented for the three-dimensional recognition of symmetric objects from range images. Beginning from the implicit representation of quadrics, a set of ten coefficients is determined for symmetric objects such as spheres, cones, cylinders, ellipsoids, and parallelepipeds. Instead of fitting these ten coefficients to smooth surface patches in the traditional way of determining curvatures, a new approach based on two-dimensional geometry is used. For each symmetric object, a unique set of two-dimensional curves is obtained from the various angles at which the object is intersected with a plane. Using the same ten coefficients obtained earlier and based on the discriminant method, each of these curves is classified as a parabola, circle, ellipse, or hyperbola. Each symmetric object is found to possess a unique set of these two-dimensional curves by which it can be differentiated from the others. It is shown that instead of using the three-dimensional discriminant, which requires evaluating the rank of its matrix, it is sufficient to use the two-dimensional discriminant, which requires only three arithmetic operations.
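
    A minimal sketch of the two-dimensional discriminant test mentioned above: for a conic A x^2 + B x y + C y^2 + D x + E y + F = 0, the sign of B^2 - 4AC (three arithmetic operations) separates the ellipse/circle, parabola, and hyperbola cases. The coefficient names follow the standard general conic form; the example inputs are illustrative.

```python
# Sketch: classify a planar cross-section curve by the conic discriminant.
def classify_conic(A, B, C):
    disc = B * B - 4.0 * A * C
    if disc < 0:
        return "circle" if (A == C and B == 0) else "ellipse"
    if disc == 0:
        return "parabola"
    return "hyperbola"

# Example: a sphere cut by any plane gives circles; a cone cut at increasing
# tilt gives circle -> ellipse -> parabola -> hyperbola.
print(classify_conic(1, 0, 1))   # circle
print(classify_conic(2, 1, 1))   # ellipse
print(classify_conic(1, 2, 1))   # parabola
print(classify_conic(1, 3, 1))   # hyperbola
```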

  13. Novel object recognition ability in female mice following exposure to nanoparticle-rich diesel exhaust

    SciTech Connect

    Win-Shwe, Tin-Tin; Fujimaki, Hidekazu; Fujitani, Yuji; Hirano, Seishiro

    2012-08-01

    Recently, our laboratory reported that exposure to nanoparticle-rich diesel exhaust (NRDE) for 3 months impaired hippocampus-dependent spatial learning ability and up-regulated the expressions of memory function-related genes in the hippocampus of female mice. However, whether NRDE affects the hippocampus-dependent non-spatial learning ability and the mechanism of NRDE-induced neurotoxicity was unknown. Female BALB/c mice were exposed to clean air, middle-dose NRDE (M-NRDE, 47 μg/m³), high-dose NRDE (H-NRDE, 129 μg/m³), or filtered H-NRDE (F-DE) for 3 months. We then investigated the effect of NRDE exposure on non-spatial learning ability and the expression of genes related to glutamate neurotransmission using a novel object recognition test and a real-time RT-PCR analysis, respectively. We also examined microglia marker Iba1 immunoreactivity in the hippocampus using immunohistochemical analyses. Mice exposed to H-NRDE or F-DE could not discriminate between familiar and novel objects. The control and M-NRDE-exposed groups showed a significantly increased discrimination index, compared to the H-NRDE-exposed group. Although no significant changes in the expression levels of the NMDA receptor subunits were observed, the expression of glutamate transporter EAAT4 was decreased and that of glutamic acid decarboxylase GAD65 was increased in the hippocampus of H-NRDE-exposed mice, compared with the expression levels in control mice. We also found that microglia activation was prominent in the hippocampal area of the H-NRDE-exposed mice, compared with the other groups. These results indicated that exposure to NRDE for 3 months impaired the novel object recognition ability. The present study suggests that genes related to glutamate metabolism may be involved in the NRDE-induced neurotoxicity observed in the present mouse model. -- Highlights: ► The effects of nanoparticle-induced neurotoxicity remain unclear. ► We investigated the effect of exposure to

  14. A Comparison of the Effects of Depth Rotation on Visual and Haptic Three-Dimensional Object Recognition

    ERIC Educational Resources Information Center

    Lawson, Rebecca

    2009-01-01

    A sequential matching task was used to compare how the difficulty of shape discrimination influences the achievement of object constancy for depth rotations across haptic and visual object recognition. Stimuli were nameable, 3-dimensional plastic models of familiar objects (e.g., bed, chair) and morphs midway between these endpoint shapes (e.g., a…

  15. Heterozygous Che-1 KO mice show deficiencies in object recognition memory persistence.

    PubMed

    Zalcman, Gisela; Corbi, Nicoletta; Di Certo, Maria Grazia; Mattei, Elisabetta; Federman, Noel; Romano, Arturo

    2016-10-06

    Transcriptional regulation is a key process in the formation of long-term memories. Che-1 is a protein involved in the regulation of gene transcription that has recently been shown to bind the transcription factor NF-κB, which is known to be involved in many memory-related molecular events. This evidence prompted us to investigate the putative role of Che-1 in memory processes. For this study we newly generated a line of Che-1(+/-) heterozygous mice. The Che-1 homozygous KO mouse is lethal during development, but the Che-1(+/-) heterozygous mouse is normal in its general anatomical and physiological characteristics. We analyzed the behavioral characteristics and memory performance of Che-1(+/-) mice in two NF-κB-dependent types of memory. We found that Che-1(+/-) mice show locomotor activity and thigmotactic behavior in an open field similar to wild type (WT) mice. Likewise, no differences were found in anxiety-like behavior between Che-1(+/-) and WT mice in an elevated plus maze, in fear response in contextual fear conditioning (CFC), or in object exploration in a novel object recognition (NOR) task. No differences were found between WT and Che-1(+/-) mice in CFC training performance or when tested 24 h or 7 days after training. Performance in the NOR task was likewise similar between groups, both in training and in 24-h testing. However, we found that object recognition memory persistence at 7 days was impaired in Che-1(+/-) heterozygous mice. This is the first evidence showing that Che-1 is involved in memory processes.

  16. Novel object recognition ability in female mice following exposure to nanoparticle-rich diesel exhaust.

    PubMed

    Win-Shwe, Tin-Tin; Fujimaki, Hidekazu; Fujitani, Yuji; Hirano, Seishiro

    2012-08-01

    Recently, our laboratory reported that exposure to nanoparticle-rich diesel exhaust (NRDE) for 3 months impaired hippocampus-dependent spatial learning ability and up-regulated the expressions of memory function-related genes in the hippocampus of female mice. However, whether NRDE affects the hippocampus-dependent non-spatial learning ability and the mechanism of NRDE-induced neurotoxicity was unknown. Female BALB/c mice were exposed to clean air, middle-dose NRDE (M-NRDE, 47 μg/m³), high-dose NRDE (H-NRDE, 129 μg/m³), or filtered H-NRDE (F-DE) for 3 months. We then investigated the effect of NRDE exposure on non-spatial learning ability and the expression of genes related to glutamate neurotransmission using a novel object recognition test and a real-time RT-PCR analysis, respectively. We also examined microglia marker Iba1 immunoreactivity in the hippocampus using immunohistochemical analyses. Mice exposed to H-NRDE or F-DE could not discriminate between familiar and novel objects. The control and M-NRDE-exposed groups showed a significantly increased discrimination index, compared to the H-NRDE-exposed group. Although no significant changes in the expression levels of the NMDA receptor subunits were observed, the expression of glutamate transporter EAAT4 was decreased and that of glutamic acid decarboxylase GAD65 was increased in the hippocampus of H-NRDE-exposed mice, compared with the expression levels in control mice. We also found that microglia activation was prominent in the hippocampal area of the H-NRDE-exposed mice, compared with the other groups. These results indicated that exposure to NRDE for 3 months impaired the novel object recognition ability. The present study suggests that genes related to glutamate metabolism may be involved in the NRDE-induced neurotoxicity observed in the present mouse model.
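
    The discrimination index referred to above is typically computed from object exploration times. A minimal sketch, assuming the widely used (novel − familiar) / (novel + familiar) form; the exact variant used in this study is not stated here, and the exploration times below are hypothetical.

```python
# Sketch: discrimination index for the novel object recognition (NOR) test.
def discrimination_index(t_novel, t_familiar):
    """Return a value in [-1, 1]; ~0 means no preference (impaired recognition),
    positive values indicate preference for the novel object."""
    total = t_novel + t_familiar
    return (t_novel - t_familiar) / total if total > 0 else 0.0

print(discrimination_index(22.5, 12.0))  # intact recognition (> 0)
print(discrimination_index(15.0, 15.2))  # near zero: no discrimination
```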

  17. System and method for disrupting suspect objects

    SciTech Connect

    Gladwell, T. Scott; Garretson, Justin R; Hobart, Clinton G; Monda, Mark J

    2013-07-09

    A system and method for disrupting at least one component of a suspect object is provided. The system includes a source for passing radiation through the suspect object, a screen for receiving the radiation passing through the suspect object and generating at least one image therefrom, a weapon having a discharge deployable therefrom, and a targeting unit. The targeting unit displays the image(s) of the suspect object and aims the weapon at a disruption point on the displayed image such that the weapon may be positioned to deploy the discharge at the disruption point whereby the suspect object is disabled.

  18. A Neural Network Based Speech Recognition System

    DTIC Science & Technology

    1990-02-01

    encoder and identifies individual words. This use of neural networks offers two advantages over conventional algorithmic detectors: the detection...environment. Keywords: Artificial intelligence; Neural networks; Back propagation; Speech recognition.

  19. Combining scale-space and similarity-based aspect graphs for fast 3D object recognition.

    PubMed

    Ulrich, Markus; Wiedemann, Christian; Steger, Carsten

    2012-10-01

    This paper describes an approach for recognizing instances of a 3D object in a single camera image and for determining their 3D poses. A hierarchical model is generated solely based on the geometry information of a 3D CAD model of the object. The approach does not rely on texture or reflectance information of the object's surface, making it useful for a wide range of industrial and robotic applications, e.g., bin-picking. A hierarchical view-based approach that addresses typical problems of previous methods is applied: It handles true perspective, is robust to noise, occlusions, and clutter to an extent that is sufficient for many practical applications, and is invariant to contrast changes. For the generation of this hierarchical model, a new model image generation technique by which scale-space effects can be taken into account is presented. The necessary object views are derived using a similarity-based aspect graph. The high robustness of an exhaustive search is combined with an efficient hierarchical search. The 3D pose is refined by using a least-squares adjustment that minimizes geometric distances in the image, yielding a position accuracy of up to 0.12 percent with respect to the object distance, and an orientation accuracy of up to 0.35 degree in our tests. The recognition time is largely independent of the complexity of the object, but depends mainly on the range of poses within which the object may appear in front of the camera. For efficiency reasons, the approach allows the restriction of the pose range depending on the application. Typical runtimes are in the range of a few hundred ms.
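
    The final least-squares pose refinement minimizes geometric distances between projected model features and matched image features. A minimal sketch of the same idea in a simplified 2-D setting follows (the paper refines a full 3-D pose; the similarity-transform parameterization and the example correspondences here are illustrative assumptions).

```python
# Sketch: least-squares refinement of a 2-D similarity transform from point matches.
import numpy as np

def refine_similarity(model_pts, image_pts):
    """Solve for [a, b, tx, ty] with x' = a*x - b*y + tx, y' = b*x + a*y + ty
    by minimizing squared point distances (a linear least-squares problem)."""
    n = len(model_pts)
    A = np.zeros((2 * n, 4))
    b = np.zeros(2 * n)
    for i, ((x, y), (u, v)) in enumerate(zip(model_pts, image_pts)):
        A[2 * i]     = [x, -y, 1.0, 0.0]; b[2 * i]     = u
        A[2 * i + 1] = [y,  x, 0.0, 1.0]; b[2 * i + 1] = v
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params  # a, b, tx, ty

model = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
image = np.array([[2, 3], [2.8, 3.6], [2.2, 4.4], [1.4, 3.8]])  # rotated, scaled, translated
print(refine_similarity(model, image))  # recovers a=0.8, b=0.6, tx=2, ty=3
```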

  20. Region-Based Object Recognition by Color Segmentation Using a Simplified PCNN.

    PubMed

    Chen, Yuli; Ma, Yide; Kim, Dong Hwan; Park, Sung-Kee

    2015-08-01

    In this paper, we propose a region-based object recognition (RBOR) method to identify objects in complex real-world scenes. First, the proposed method performs color image segmentation of the object model image and the test image using a simplified pulse-coupled neural network (SPCNN), and then conducts region-based matching between them; hence we name it RBOR with SPCNN (SPCNN-RBOR). The SPCNN parameter values are set automatically for each object model by our previously proposed method. In order to reduce the effects of varying light intensity and to exploit the high resolution of the SPCNN at low intensities for optimized color segmentation, a transformation integrating normalized red-green-blue (RGB) with opponent color spaces is introduced. A novel image segmentation strategy is suggested to group the pixels firing synchronously across all the transformed channels of an image. Based on the segmentation results, a series of adaptive thresholds, adjustable according to the specific object model, is employed to remove outlier region blobs, form potential clusters, and refine the clusters in test images. The proposed SPCNN-RBOR method overcomes the drawback of feature-based methods, which inevitably include background information in local invariant feature descriptors when keypoints lie near object boundaries. A large number of experiments show that the SPCNN-RBOR method is robust to diverse complex variations, even under partial occlusion and in highly cluttered environments. In addition, the SPCNN-RBOR method works well in identifying not only textured objects but also less-textured ones, significantly outperforming current feature-based methods.
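
    A minimal sketch of the color transformation idea described above: combining normalized RGB (illumination-robust chromaticity) with an opponent color representation. The exact channel weights used by the SPCNN-RBOR paper are not given here, so standard textbook definitions are assumed and the toy image is random.

```python
# Sketch: normalized RGB and opponent color channels as segmentation inputs.
import numpy as np

def normalized_rgb(img):
    """img: H x W x 3 float array in [0, 1]; return chromaticity channels r, g, b."""
    s = img.sum(axis=2, keepdims=True) + 1e-8
    return img / s

def opponent_channels(img):
    """Standard opponent channels: O1 (red-green), O2 (yellow-blue), O3 (intensity)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    o1 = (r - g) / np.sqrt(2.0)
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)
    o3 = (r + g + b) / np.sqrt(3.0)
    return np.stack([o1, o2, o3], axis=-1)

img = np.random.default_rng(1).random((4, 4, 3))   # toy image
channels = np.concatenate([normalized_rgb(img), opponent_channels(img)], axis=-1)
print(channels.shape)   # (4, 4, 6): six transformed channels fed to segmentation
```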

  1. Assessing rodent hippocampal involvement in the novel object recognition task. A review.

    PubMed

    Cohen, Sarah J; Stackman, Robert W

    2015-05-15

    The novel object recognition (NOR) task has emerged as a popular method for testing the neurobiology of nonspatial memory in rodents. This task exploits the natural tendency of rodents to explore novel items and depending on the amount of time that rodents spend exploring the presented objects, inferences about memory can be established. Despite its wide use, the underlying neural circuitry and mechanisms supporting NOR have not been clearly defined. In particular, considerable debate has focused on whether the hippocampus plays a significant role in the object memory that is encoded, consolidated and then retrieved during discrete stages of the NOR task. Here we analyzed the results of all published reports in which the role of the rodent hippocampus in object memory was inferred from performance in the task with restricted parameters. We note that the remarkable variability in NOR methods across studies complicates the ability to draw meaningful conclusions from the work. Focusing on 12 reports in which a minimum criterion of sample session object exploration was imposed, we find that temporary or permanent lesion of the hippocampus consistently disrupts object memory when a delay of 10 min or greater is imposed between the sample and test sessions. We discuss the significance of a delay-dependent role of the hippocampus in NOR within the framework of the medial temporal lobe. We assert that standardization of the NOR protocol is essential for obtaining reliable data that can then be compared across studies to build consensus as to the specific contribution of the rodent hippocampus to object memory.

  2. Effects of heavy particle irradiation and diet on object recognition memory in rats

    NASA Astrophysics Data System (ADS)

    Rabin, Bernard M.; Carrihill-Knoll, Kirsty; Hinchman, Marie; Shukitt-Hale, Barbara; Joseph, James A.; Foster, Brian C.

    2009-04-01

    On long-duration missions to other planets astronauts will be exposed to types and doses of radiation that are not experienced in low earth orbit. Previous research using a ground-based model for exposure to cosmic rays has shown that exposure to heavy particles, such as 56Fe, disrupts spatial learning and memory measured using the Morris water maze. Maintaining rats on diets containing antioxidant phytochemicals for 2 weeks prior to irradiation ameliorated this deficit. The present experiments were designed to determine: (1) the generality of the particle-induced disruption of memory by examining the effects of exposure to 56Fe particles on object recognition memory; and (2) whether maintaining rats on these antioxidant diets for 2 weeks prior to irradiation would also ameliorate any potential deficit. The results showed that exposure to low doses of 56Fe particles does disrupt recognition memory and that maintaining rats on antioxidant diets containing blueberry and strawberry extract for only 2 weeks was effective in ameliorating the disruptive effects of irradiation. The results are discussed in terms of the mechanisms by which exposure to these particles may produce effects on neurocognitive performance.

  3. Central noradrenergic depletion by DSP-4 prevents stress-induced memory impairments in the object recognition task.

    PubMed

    Scullion, G A; Kendall, D A; Sunter, D; Marsden, C A; Pardon, M-C

    2009-12-01

    Environmental stress produces adverse effects on memory in humans and rodents. Increased noradrenergic neurotransmission is a major component of the response to stress, and noradrenaline (NA) plays an important role in modulating processes involved in learning and memory. The present study investigated the effect of NA depletion on stress-induced changes in memory performance in the mouse. Central NA depletion was induced using the selective neurotoxin N-(2-chloroethyl)-N-ethyl-2-bromobenzylamine (DSP-4) and verified by high performance liquid chromatography (HPLC). A novel cage stress procedure, involving exposure to a new clean cage for 1 h per day, 4 days per week for 4 weeks, was used to produce stress-induced memory deficits measured using the object recognition task. DSP-4 (50 mg/kg) produced large and sustained reductions in NA levels in the frontal cortex and hippocampus measured 24 h, 1 week and 5 weeks after treatment. Four weeks of exposure to novel cage stress induced a memory deficit in the object recognition task which was prevented by DSP-4 pre-treatment (50 mg/kg, 1 week before the commencement of stress). These findings indicate that chronic environmental stress adversely affects recognition memory and that this effect is, in part, mediated by the noradrenergic stress response. The implication of these findings is that drugs targeting the noradrenergic system to reduce over-activity may be beneficial in the treatment of stress-related mental disorders such as post-traumatic stress disorder or anxiety in which memory is affected.

  4. Beta-glucan recognition by the innate immune system.

    PubMed

    Goodridge, Helen S; Wolf, Andrea J; Underhill, David M

    2009-07-01

    Beta-glucans are recognized by the innate immune system. This recognition plays important roles in host defense and presents specific opportunities for clinical modulation of the host immune response. Neutrophils, macrophages, and dendritic cells among others express several receptors capable of recognizing beta-glucan in its various forms. This review explores what is currently known about beta-glucan recognition and how this recognition stimulates immune responses. Special emphasis is placed on Dectin-1, as we know the most about how this key beta-glucan receptor translates recognition into intracellular signaling, stimulates cellular responses, and participates in orchestrating the adaptive immune response.

  5. Application of Voice Recognition Input to Decision Support Systems

    DTIC Science & Technology

    1988-12-01

    Keywords: Group Decision Support System (GDSS); Talkwriter; Human Computer Interface; Voice Input; Individual Decision Support System (IDSS); Voice Input/Output; Man Machine Voice ... Interface; Voice Processing; Natural Language Voice Input; Voice Recognition; Natural Language Accessed Voice Recognizer; Speech Entry; Voice Vocabulary

  6. Recognition of novel objects and their location in rats with selective cholinergic lesion of the medial septum.

    PubMed

    Cai, Li; Gibbs, Robert B; Johnson, David A

    2012-01-11

    The importance of cholinergic neurons projecting from the medial septum (MS) of the basal forebrain to the hippocampus in memory function has been controversial. The aim of this study was to determine whether loss of cholinergic neurons in the MS disrupts object and/or object location recognition in male Sprague-Dawley rats. Animals received intraseptal injections of either vehicle or the selective cholinergic immunotoxin 192 IgG-saporin (SAP). Fourteen days later, rats were tested for novel object recognition (NOR). Twenty-four hours later, these same rats were tested for object location recognition (OLR), i.e., recognition of a familiar object moved to a novel location. Intraseptal injections of SAP produced an 86% decrease in choline acetyltransferase (ChAT) activity in the hippocampus and a 31% decrease in ChAT activity in the frontal cortex. The SAP lesion had no significant effect on NOR, but produced a significant impairment in OLR in these same rats. The results support a role for septo-hippocampal cholinergic projections in memory for the location of objects, but not for novel object recognition.

  7. Pattern Recognition Methods and Features Selection for Speech Emotion Recognition System.

    PubMed

    Partila, Pavol; Voznak, Miroslav; Tovarek, Jaromir

    2015-01-01

    The impact of the classification method and feature selection on speech emotion recognition accuracy is discussed in this paper. Selecting the correct parameters in combination with the classifier is an important part of reducing the computational complexity of the system, a step that is necessary especially for systems to be deployed in real-time applications. The motivation for developing and improving speech emotion recognition systems is their wide applicability in today's automatic voice-controlled systems. The Berlin database of emotional recordings was used in this experiment. Classification accuracy of artificial neural networks, k-nearest neighbours, and Gaussian mixture models is measured considering the selection of prosodic, spectral, and voice quality features. The purpose was to find an optimal combination of methods and group of features for stress detection in human speech. The research contribution lies in the design of a speech emotion recognition system that is both accurate and efficient.
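
    A minimal sketch of such a classifier comparison follows, using synthetic stand-ins for the prosodic/spectral features (the Berlin emotional-speech database is not reproduced here) and two of the classifiers named above; the feature dimensionality and labels are illustrative assumptions.

```python
# Sketch: comparing k-NN and a small neural network on synthetic acoustic features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 12))             # 12 hypothetical acoustic features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # toy "stressed vs. neutral" labels

for name, clf in [
    ("k-NN", KNeighborsClassifier(n_neighbors=5)),
    ("ANN", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
]:
    pipe = make_pipeline(StandardScaler(), clf)      # scale features, then classify
    scores = cross_val_score(pipe, X, y, cv=5)       # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```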

  8. Pattern Recognition Methods and Features Selection for Speech Emotion Recognition System

    PubMed Central

    Partila, Pavol; Voznak, Miroslav; Tovarek, Jaromir

    2015-01-01

    The impact of the classification method and feature selection on speech emotion recognition accuracy is discussed in this paper. Selecting the correct parameters in combination with the classifier is an important part of reducing the computational complexity of the system, a step that is necessary especially for systems to be deployed in real-time applications. The motivation for developing and improving speech emotion recognition systems is their wide applicability in today's automatic voice-controlled systems. The Berlin database of emotional recordings was used in this experiment. Classification accuracy of artificial neural networks, k-nearest neighbours, and Gaussian mixture models is measured considering the selection of prosodic, spectral, and voice quality features. The purpose was to find an optimal combination of methods and group of features for stress detection in human speech. The research contribution lies in the design of a speech emotion recognition system that is both accurate and efficient. PMID:26346654

  9. An Evaluation of PC-Based Optical Character Recognition Systems.

    ERIC Educational Resources Information Center

    Schreier, E. M.; Uslan, M. M.

    1991-01-01

    The review examines six personal computer-based optical character recognition (OCR) systems designed for u